Judgment and Decision Making by Individuals and Teams: Issues, Models, and Applications
Kathleen L. Mosier and Ute M. Fischer
Reviews of Human Factors and Ergonomics, 2010, Vol. 6, p. 198
DOI: 10.1518/155723410X12849346788822
Available at: http://rev.sagepub.com/content/6/1/198
CHAPTER 6
Judgment and Decision Making by Individuals and Teams: Issues, Models, and Applications

Kathleen L. Mosier & Ute M. Fischer

Consistent with technological advances, the role of the operator in many human factors domains has evolved from one characterized primarily by sensory and motor skills to one characterized primarily by cognitive skills and decision making. Decision making is a primary component in problem solving, human-automation interaction, response to alarms and warnings, and error mitigation. In this chapter we discuss decision making in terms of both front-end judgment processes (e.g., attending to and evaluating the significance of cues and information, formulating a diagnosis, or assessing the situation) and back-end decision processes (e.g., retrieving a course of action, weighing one’s options, or mentally simulating a possible response). Two important metatheories—correspondence (empirical accuracy) and coherence (rationality and consistency)—provide ways to assess the goodness of each phase (e.g., Hammond, 1996, 2000; Mosier, 2009). We present several models of decision making, including Brunswik’s lens model, naturalistic decision making, and decision ladders, and discuss them in terms of their point of focus and their primary strategies and goals. Next, we turn the discussion to layers in the decision context: individual variables, team decision making, technology, and organizational influences. Last, we focus on applications and lessons learned: investigating, enhancing, designing, and training for decision making. Drawing heavily on sources such as the Human Factors journal, we present recent human factors research exploring these issues, models, and applications.
In January 1982, an Air Florida B-737 crashed in a raging snowstorm just after takeoff from Washington National Airport. The Captain made a series of decisions which resulted in a takeoff with a snow-covered aircraft, devoid of engine anti-ice protection, on a short, snow-covered runway. During taxi out the crew discussed the snow at length, but never as a decrement to aircraft performance. There was an absence of past experience in snow, the Captain had ten cold weather operations, and little if any formal airline training in winter operations…and the first officer had rarely encountered such conditions.... We do not know if the failure to use engine anti-ice was a conscious decision, or...an error caused by the crew’s habitual routine of seldom using anti-ice.... The Captain was...confident enough to stay with his takeoff decision disregarding six subtle statements of concern by the First Officer during the ill-fated launch. (Maher, 1989, pp. 441–442)

Consequently, the crew attempted the takeoff at roughly three-quarter engine power settings, could not sustain flight, and crashed into a roadway bridge across the Potomac seconds after attempting lift-off (Nagel, 1988, p. 286).
The Air Florida accident illustrates that researchers need to consider operators’ judgment
and decision processes in order to explain human behavior in task environments. The crash was not the result of a system malfunction or inaccurate displays but, rather, was a case of poor situation assessment and decision making. This vignette highlights the importance of understanding how people make sense of data and information and how they arrive at their decisions. The need to take decision making into account as a human factors issue was underscored by the formation of a Cognitive Engineering and Decision Making Technical Group in the Human Factors and Ergonomics Society and the introduction of the society’s Journal of Cognitive Engineering and Decision Making. Consistent with technological advances, the role of the operator in many human factors domains has evolved from one characterized primarily by sensory and motor skills to one characterized primarily by cognitive skills and decision making. Decision making is a primary component in problem solving, human-automation interaction, response to alarms and warnings, and error mitigation. As researchers and practitioners, we are responsible for the appropriateness of not only the physical ergonomic aspects of design but also the cognitive engineering aspects—that is, aspects that enable and facilitate required cognitive processing. It is critical that the design of systems and technological aids supports “good” decision making, in terms of both front-end judgment processes (e.g., attending to and evaluating the significance of cues and information, formulating a diagnosis, or assessing the situation) and back-end decision processes (e.g., retrieving a course of action, weighing one’s options, or mentally simulating a possible response; see Figure 6.1). Decision making in human factors cannot be examined in a vacuum. It must be seen in context: Contextual variables impact people’s judgment and decision processes, and their decisions, in turn, impact the context. The decision framework in Figure 6.2 illustrates the layers of context that surround individual decision makers in complex environments. At the center is the individual decision maker. Individuals bring their particular characteristics to the decision situation—memory, motivation, attention, knowledge and skills, experience, expertise, age—and these interact with the layers around them.
Figure 6.1. Components of decision making in human factors domains. Ovals signify cognitive processes; rectangles refer to process outcomes.
Figure 6.2. Human factors decision framework. (Nested contextual layers—organization, technology, team—surround the individual in the task environment, whose judgment and decision processes sit at the center.)
The task environment within which decision making typically occurs is vastly different from the lab environment in which decision making has traditionally been studied. It is complex (as the task itself often is), with ambiguous cues, many sources of information, and unclear, shifting, or conflicting goals. Time constraints are often present, as are distractors, noise, or hazards. Conditions in the environment may change rapidly or may slowly and continually evolve, and the nature of the change may impact the decision maker’s ability to detect and incorporate it into a situation assessment (Wiegmann, Goh, & O’Hare, 2002). Perhaps most important, no single “right” decision may be discernible, even when the outcome is of high consequence.

The next layer of context acknowledges that decisions often entail multiple players. Expertise is typically distributed across team members, and individual as well as team goals must be taken into account. Team decision making has taken on new importance as technology has enabled team members to participate from disparate locations, in different time zones, and via different communication media. Technology has also heightened the need for common assessments or mental models of situations and collaboration during judgment and decision making. Although teams may not be colocated, their judgment and decision processes require shared knowledge and cognition as well as efficient communication.

Most human factors environments are characterized by technological advances. Automation and technology profoundly impact the decision context by creating ecologies that are hybrid—a combination of naturalistic and electronic elements. Technology provides advantages such as enhanced monitoring and data-processing capabilities; more precise measurement, calculation, and navigation; and high reliability of system diagnoses. However, technological environments also enable flawed judgment and decision processes, such as overreliance and complacency, misuse or heuristic use of automation (automation bias, the use of automation as a heuristic replacement for vigilant situation assessment and resultant errors, discussed later), and blind compliance (Mosier & Skitka, 1996; Parasuraman & Riley, 1997; Sarter & Schroeder, 2001). Moreover, the presence of automated
decision aids complicates the issue of responsibility: Who (or what) is ultimately responsible for diagnoses and decision outcomes? The outer layer of the framework pertains to the organization, which exerts explicit and implicit influences on decision makers’ behavior. The organization explicitly or implicitly prescribes cultural norms and corporate goals, defines criteria for rewards, and provides training. It may also implicitly create frames for decision making that foster risky versus conservative decisions in the field, reinforce or discourage decisions to engage in noncompliant behaviors, and sabotage or encourage team decision making through its performance review or reward systems. In this chapter, we discuss the front-end (judgment) and back-end (decision) processes of decision making. Two important metatheories of truth—correspondence (empirical accuracy) and coherence (rationality and consistency)—provide ways to assess the goodness of each phase (e.g., Hammond, 1996, 2000; Mosier, 2009). We then present several models of decision making that have been used successfully by human factors practitioners and discuss them in terms of their point of focus and their primary strategies and goals. This discussion includes Brunswik’s lens model, a classic policy-capturing model, models of naturalistic decision making, and decision ladders. Next, we turn the discussion to the contextual layers in the decision context as shown in Figure 6.2, including individual variables, team decision making, technology, and organizational influences. Last, we focus on applications and lessons learned: studying, enhancing, designing, and training for decision making. Drawing heavily on sources such as the Human Factors journal, we present recent human factors research exploring these issues, models, and applications.
DEFINING DECISION MAKING: FRONT-END AND BACK-END PROCESSES

In much of the judgment and decision making (JDM) literature, the terms judgment and decision making are used almost interchangeably. Traditional experimental paradigms exploring judgment, for example, set up the situation by presenting individuals with a set of cues with specified probabilities of occurrence and/or consequences and asking them to make a choice between competing alternatives (e.g., Kahneman, Slovic, & Tversky, 1982). The researchers evaluate the choice against what should have been selected according to a Bayesian mathematical model (Bayes, 1763/1958) or an objective or subjective utility (i.e., value). Not only has this tradition blurred the two phases of decision making (the dependent variable is actually the decision rather than judgment), but it has also minimized the importance of active processes such as information seeking, situation assessment, and diagnosis in “real-world” settings and has reduced decision making to option selection. For this reason, we offer the terms front end and back end to clearly delineate the two phases of decision making in human factors domains. The distinction between front-end and back-end processes is not merely of theoretical significance; rather, human factors applications such as system design or training must be approached very differently, depending on the target phase.
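For reference, the Bayesian standard invoked in this tradition reduces to two textbook relations: Bayes' rule for revising the probability of a hypothesis H given evidence E, and the conjunction rule that bounds the probability of compound events. These are standard results of probability theory, stated here only as a reference point for the coherence criterion discussed later in the chapter.

```latex
% Bayes' rule: normative belief revision for hypothesis H given evidence E
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}

% Conjunction rule: a compound event is never more probable than either component
P(A \cap B) \leq \min\{P(A),\, P(B)\}
```

A judgment is coherent, in this sense, to the extent that the probabilities it implies respect such relations.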
The driving force for decision making in human factors domains is some cue or pattern of cues, some state of affairs that an operator perceives as unsatisfactory. As shown in Figure 6.1, front-end cognitive processes not only concern problem identification but also include information search, problem diagnosis, risk assessment, and the evaluation of time constraints. Front-end processes make up what is most often referred to as the judgment phase of decision making. Related terms (though bound to specific research traditions) are diagnosis, situation awareness and assessment, situation model, and pattern recognition. Front-end processes result in a judgment that may be a rather straightforward evaluation of the initial cue (as when members of a flight crew judge their fuel remaining as insufficient to reach their destination airport) or may involve a complex mental representation (as when a physician pulls together various test results to formulate a diagnosis). We are not suggesting that an individual’s judgment always has to be based on deliberation; for instance, as we will discuss later, pattern recognition is an intuitive, rather than an analytical process. However, we do posit that once operators make a judgment about a problem, their judgment will trigger decision processes. Decision processes form the back end of decision making, concern the response to the problem, and culminate in a final decision. Back-end processes may involve retrieving an appropriate course of action from memory, locating a prescribed response in the appropriate manual, adapting a known response to the specific demands of the current situation, mentally simulating a possible response, planning a sequence of actions, or evaluating alternatives. In the past two decades, a “paradigm shift” has occurred (Cannon-Bowers, Salas, & Pruitt, 1996) as many human factors researchers moved away from traditional JDM paradigms and toward models that focus on both the front-end and back-end processes underlying decision making in a complex and dynamic world. These models enable a comprehensive perspective of the decision context, what decision makers bring to the situation, and how these variables impact their cognitive processes. Rather than setting up the front end and constraining the back end with a limited set of choice options, as do traditional models, these new frameworks examine decision making in tasks that are not artificially constrained, concentrating not only on the decisions people make but also on the underlying processes leading to their decisions. This does not mean that traditional choice tasks are never used in human factors research, but when they are, the focus of exploration is typically on capturing the front-end judgment process and activities such as information search or cue use rather than on choice selection (e.g., Camasso & Jagannathan, 2001; Pierce, 1996). This shift is highly appropriate for human factors applications, as researchers and practitioners seek to answer questions, such as, How should we design decision aids to ensure accurate situation assessment and diagnosis? How do individual variables such as age, experience, or attention impact judgment and decision processes? How do we foster collaborative decision making in teams? What is the impact of technology on judgment and decision processes? How can human factors principles and knowledge facilitate the strategies and goals of decision making? 
A (more or less) implicit assumption in the framing of these questions is that facilitating better judgment and decision processes will result in better decisions. It is important that we, as human factors/ergonomics researchers and
practitioners, strive to answer these questions and test our responses within and for the environment in which we practice.
EVALUATING DECISION MAKING: CORRESPONDENCE AND COHERENCE

Two standards exist for evaluating the goodness of decision-making processes: correspondence and coherence. These terms have been variously used to describe perspectives of truth, strategies, and goals of decision making. Correspondence refers to the criterion of empirical accuracy, and coherence refers to the criteria of rationality and consistency (Hammond, 1996, 1999, 2000). Evaluating the outcome of decision processes (decision makers’ actual decisions) is relatively straightforward because decisions are overt and available. The ultimate test of a specific decision outcome is typically its correspondence, or empirical accuracy, with respect to predictions or goals (i.e., did the decision/action “work”?). Coherence of back-end processes can be evaluated by examining whether outcomes reflect consistency and rationality across decision situations. Evaluating correspondence and coherence of the front end—judgment, diagnosis, situation assessment—is trickier because front-end processes are often covert and not easily available. Moreover, decision makers’ problem understanding or situation models, and consequently their resulting judgments, entail subjective components—risk is a subjective assessment (Yates & Stone, 1992)—or task conditions may be consistent with different interpretations (Fischer, Orasanu, & Davison, 2003; Fischer, Orasanu, & Wich, 1995). Nonetheless, both correspondence and coherence are important in the front end, and different tactics have been employed to measure each criterion.
Correspondence

Correspondence, or empirical accuracy, is a function of how well an individual uses multiple and often fallible (i.e., probabilistic) indicators to make judgments and decisions in the world. A pilot, for example, strives for correspondence when checking cues to judge height and distance from a runway and deciding when to turn for final approach. A physician may use cues such as heart rate, skin tone, or pain points to diagnose patients’ conditions and choose among treatment options. These front-end and back-end processes are evaluated according to how well they represent, predict, or explain objective reality. In theories such as multiattribute utility analysis (Keeney & Raiffa, 1976) and Brunswik’s lens model (Brunswik, 1943, 1955; Hammond & Stewart, 2001), each cue’s weight should derive from its ecological validity—that is, its value when predicting the criterion. According to these linear-additive models, the most appropriate weighting scheme will result in the most correspondent judgments. Features of the environment and the quality of the cues utilized will impact the accuracy of both front-end and back-end processes. Cues that are highly ecologically valid, are concrete, and/or can be perceived clearly will facilitate accurate judgments and correct decisions. Cues that are ambiguous, or are murkier because they are not as concrete in nature or are obscured by factors in the environment, will hinder accurate judgments
and adversely impact decisions (Mosier, 2002). Sometimes, accurate decisions may entail creative processes when no known solution fits the perceived situation or diagnosis, as in the following Mann Gulch vignette: In 1949, in the early days of “smoke jumping”...15 young men parachuted into the head of Mann Gulch in the Montana wilderness to put out a small fire at the bottom of the gulch where it meets the Missouri River. After safe landings, the crew proceeded down the gulch according to plan. Suddenly, however, the plan became irrelevant…. Instead of the minor fire the crew expected, the fire exploded.... The crew was now faced with a roaring inferno that began racing up the gulch toward the surprised smoke jumpers. Their only chance of escape was to try to run up the steep slope of the gulch to the top and go over its side. But it quickly became clear that their chances of success—and of living—were vanishing; the fire was rapidly overtaking them. When the foreman saw this, he did something that, as far as anyone knew then...had never been done before. He lit a grass fire in front of him, ran into the center of it, and laid down in its ashes. It worked; the main fire went around him and he survived. (Hammond, 2000, pp. 78–79)
One back-end process individuals use to test the accuracy of their choices is mental simulation (e.g., Klein, 1993a, 1993b)—that is, imagining how a course of action will play out. The foreman in the Mann Gulch vignette, for example, may well have imagined the main fire going around the borders of his grass fire because the ground was already scorched. Although the foreman’s solution did not appear to be rational to his crew, it withstood the test of correspondence.

Although correspondence is most obvious as a criterion for the outcome of decision making—clearly, one wants accurate decisions—it is also a criterion for underlying front-end processes. In the past 20 years, the concept of situation awareness (SA) has taken on importance (see the Tenney & Pew, 2006, chapter in Volume 2 of Reviews of Human Factors and Ergonomics for a more extensive treatment of this topic) as a measure of accuracy in the judgment phase of decision making under uncertainty (Kirlik & Strauss, 2006). SA is a correspondence concept that measures the achievement of an accurate mapping between the objective, outside world and one’s inner representation of that world (Dekker & Lützhöft, 2004). Endsley (1988) coined a broadly applicable definition of SA as “the perception of elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future” (p. 97). SA results from bottom-up processes, in which recognitional perceptions of the situation trigger expectations and goals, and top-down processes, in which deliberation occurs as triggered goals direct information search (Endsley, 2000). SA is guided by the operator’s past knowledge and is the subjective interpretation rather than an objective rendition of a situation. The level of obtainable SA is always less than ideal because of dynamic, changing conditions; shifting goals; and limitations in knowledge and information (Pew, 1995). Definitions of what constitutes comprehensive or sufficient SA are not agreed upon in the literature. Also, consistent with the notion of decision making as a two-part process impacted by individual and contextual variables, not even perfect SA can necessarily guarantee the correct or most accurate decision in less than a perfectly predictable environment (e.g., Endsley, 1995a, 1995b; Hammond & Stewart, 2001; Kirlik & Strauss, 2006).
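The linear-additive weighting scheme described at the beginning of this subsection is often written compactly as a weighted sum of cue values; the notation below is one common way of expressing it (the symbols are ours, chosen for illustration rather than taken from any one of the cited models).

```latex
% A judge's estimate as a weighted combination of k probabilistic cues X_1 ... X_k
\hat{Y}_s = w_0 + \sum_{i=1}^{k} w_i X_i

% Correspondence (achievement) is the correlation between judgments and the criterion Y_e;
% per the text above, the more appropriate the weighting scheme, the more correspondent the judgments
r_a = \operatorname{corr}\!\left(\hat{Y}_s,\, Y_e\right)
```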
Coherence

The criterion of coherence, or rationality and consistency, is most often applied to front-end processes and is more difficult to measure than correspondence. Coherence has traditionally been evaluated against the standard of mathematical models and is expected to follow Bayesian (Bayes, 1763/1958) laws of probability. In the traditional JDM paradigms described earlier, the situation is typically set up so that one choice is more consistent with laws of probability than another is (e.g., an event that involves two conjoint circumstances is always less likely than an event with one conjoint circumstance), and coherence in judgment is inferred from the choice made.

In human factors research and practice, the standard of mathematical models is often inappropriate or insufficient for evaluating coherence. In technological environments, for example, deterministic information rather than probabilistic cues provides the basis for front-end processes, and ecological validity depends upon decision makers using all relevant information logically and consistently. When striving for coherence, a power plant operator might compare readings on an array of gauges to diagnose a system anomaly. A pilot seeks coherence when configuring the aircraft for landing, ensuring that all system parameters displayed in the cockpit are appropriately set. What is important for coherence in many human factors environments is completeness and consistency of the judgment, diagnostic, or situation assessment process, especially in cases in which empirical accuracy is not known or not easily accessible (Hammond, 1996, 2000, 2007; Mosier, 2002, 2009).

Noncoherence in the front end: Heuristics and biases. Interestingly, much of the research on coherence in front-end processes has focused on the difficulty humans have maintaining it. The “heuristics and biases” research is perhaps the best-known example of a coherence approach to judgment research (e.g., Kahneman et al., 1982; Tversky & Kahneman, 1971). Researchers in this tradition compare human judgment against normative or mathematical models. A recurrent finding is that people use various heuristics (shortcuts) to speed up the decision-making process. Accordingly, human judgment is declared to be, at best, an approximate process and, at worst, irrational and subject to systematic biases. The focus of this research is on the lack of coherence of the process, rather than on decision accuracy. The key issue is not whether heuristics may result in accurate decisions; rather, it is the notion that they exemplify the flawed nature of the human judgment process.

The use of heuristics (e.g., Kahneman et al., 1982; Tversky & Kahneman, 1971) is fostered by the tendency of humans to be “cognitive misers”—that is, to take the road of least cognitive effort (Fiske & Taylor, 1994). According to research on these shortcuts, decisions based on heuristics are not rational or coherent, because they are based on incomplete and/or incorrect information or on faulty representations of probabilities. The use of heuristics such as availability, the judgment of the likelihood of events dependent on the ease of their recall, or representativeness, the tendency to use the similarity of a past event to a present situation to judge the likelihood that the current event will have the same outcome, has been identified as a factor in aviation incidents and accidents (e.g., Nagel, 1988; Wickens & Flach, 1988).
In another domain, participants simulating repeated diagnoses of
the validity of pump failures within a waste-processing facility fell prey to cognitive anchoring effects, as they guessed causes of system failures and then relied on these “hunches” to decide whether they agreed with a diagnostic aid (Madhavan & Wiegmann, 2005). Adults of all ages have fallen prey to framing effects, the framing of decision options as either gains or losses, a finding that has implications in applied contexts such as product warning interactions and medical decision making (Mayhorn, Fisk, & Whittle, 2002). In aviation, researchers found that pilots for whom the diversion of a flight in adverse weather was framed in terms of a loss (e.g., of time, money, effort) were more likely to continue the flight than those for whom it was framed in terms of a gain (e.g., personal safety; O’Hare & Smitheram, 1995). Heuristics also may have played a part in the Air Florida crash described at the beginning of this chapter.

The crew did anchor on only the EPR [engine pressure ratio] reading while reading engine instruments on takeoff. (Maher, 1989, p. 441; emphasis added)

...after they had made the ill-fated decision to begin their takeoff roll, their ability to imagine that their engine power readings were improper because of accumulations of ice or snow may have been impaired by the overwhelming representativeness of their current takeoff conditions to successful takeoffs in the past: engines spooling up, power indications appearing to correspond appropriately, no exceptional warning signals, and so forth. (Nagel, 1988, pp. 289–290; emphasis added)
When do heuristics work? Not all researchers and designers see heuristics as irrational. In fact, heuristics that are calibrated and adapted to features of the ecology have been viewed as adaptive tools. Some researchers posit that the key to using heuristics successfully is to match heuristics with environmental structures in which they work (ecological rationality). Gigerenzer (2008) and Gigerenzer and Selten (2001), for example, demonstrated that fast and frugal heuristics, such as take the best cue with the highest validity, result in judgments that are just as accurate as a more coherent multiple-regression model (a minimal sketch of this heuristic appears at the end of this section). The position Gigerenzer and his colleagues take is that when heuristics are applied in situations “to which they have been adapted” (Wegwarth, Gaissmaier, & Gigerenzer, 2009, p. 1), they can be useful to professionals such as doctors. Others have argued that in complex choice situations with much information, less deliberative thinking may result in more accurate decision making than conscious and rational deliberation (Dijksterhuis, Bos, Nordgren, & van Baaren, 2006; Maule, 2009). One suggested approach to the design of decision support systems has been to align them with the heuristic-based information acquisition strategies and preferences of users (Wiggins & Bollwerk, 2006).

The effectiveness of heuristics may depend also on their grounding in experience and knowledge. Experienced decision makers may rely on heuristic principles such as pattern recognition because they already have structured mental representations of their domain and have a breadth of knowledge behind the utilization of these shortcuts. Because proficient decision makers have a much broader, more representative database at their disposal, the heuristics they utilize are likely to be less prone to some errors and biases. Klein (1989), for example, found that more proficient decision makers were less susceptible than novices to confirmation bias, the tendency to seek out evidence that confirms one’s initial assessment; rather, they monitored the situation constantly for disconfirming cues. In other work, active U.S. Navy officers handling simulated state-of-the-art ship defensive
systems were not influenced by outcome framing (Perrin, Barnett, Walrath, & Grossman, 2001). If the Air Florida crew in the previous examples had been more experienced in winter operations, their use of heuristics might have been more accurate and effective. Is coherence in judgment really necessary? Most if not all human factors environments are what Vicente (1990) has referred to as correspondence-driven work domains—that is, the actual objective state of the world imposes goal-relevant, dynamic (time-dependent) constraints within these domains. Because of this, judgment and decision processes will have accuracy as the ultimate goal. One could make the argument, then, that coherence in people’s judgment and decision processes is of minor importance when compared with accuracy. After all, achieving coherence may not be sufficient to ensure good or accurate outcomes (e.g., Sunstein, Kahneman, Schkade, & Ritov, 2001), whereas a judgment process that lacks coherence may in some cases produce a decision that is quite accurate (Gigerenzer, 2008; Hammond, 2007). The best road to empirical accuracy depends a great deal on the decision environment. In the natural world, correspondence is a matter of knowing which of the available cues to look for and rely on and recognizing patterns of these cues that signify particular situations or predict particular outcomes. Coherence is most important as a road to accuracy when the task environment includes electronic systems, data, and information. In the electronic world, operators are required to think logically and rationally about data and information in order to establish and maintain consistency among indicators. This involves knowing what data are relevant and integrating all relevant data, creating and identifying a rationally ‘good’ picture—all system parameters are in sync and consistent with the situation at hand—and making decisions that rationally follow from what is displayed. In these environments, a primary route to accuracy in the physical world is through coherence in judgment (diagnosis, situation assessment) and decision processes in the electronic world. Coherence errors in the electronic world may have real consequences in the physical world. So coherence is a primary goal in electronic environments because accurate decision making cannot be accomplished without it; the accuracy of judgment is ensured only as long as coherence is maintained. However, it should also be noted that decision makers in electronic environments must be able to trust the empirical accuracy of the data they are using to achieve correspondence. Technology acts as a highly accurate mediator between the naturalistic world and the decision maker, but technology’s correspondence with the natural world may be eroded by imperfect programming, incorrect modes, or contextual blind spots. Many diagnosis and decision errors in electronic environments stem from a failure to detect signs of noncoherence—something in the electronic “story” that is not consistent with the rest of the picture—that signify lack of correspondence (Mosier, 2009). Many task environments include both electronic and naturalistic elements; moreover, decision-making goals may vary as a function of the particular task at hand. Jacobson and Mosier (2004), for example, noted in an analysis of airline pilots’ incident reports that different decision strategies were mentioned as a function of the context within which a decision event occurred. 
For example, during traffic problems in good visibility conditions, pilots tended to focus on correspondence goals, reacting to patterns of probabilistic cues
for accurate traffic avoidance; however, for incidents involving equipment problems, pilots were more likely to strive for coherence, checking and cross-checking indicators to formulate a diagnosis of the situation.
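Returning to the fast and frugal heuristics discussed earlier in this section, the sketch below is a minimal illustration of a take-the-best comparison: cues are searched in order of validity, and the first cue that discriminates between the two options decides. The cue names, their ordering, and the airport example are hypothetical and are not drawn from Gigerenzer's studies.

```python
"""Minimal sketch of a take-the-best paired comparison.

Cues are assumed binary (1 = cue present/favorable) and are searched in order of
validity; the first discriminating cue decides. All cue names are hypothetical.
"""

def take_the_best(option_a, option_b, cues_by_validity):
    """Return 'A', 'B', or 'guess' for which option is judged higher on the criterion."""
    for cue in cues_by_validity:          # cues sorted from highest to lowest validity
        a, b = option_a.get(cue, 0), option_b.get(cue, 0)
        if a != b:                        # first discriminating cue stops the search
            return "A" if a > b else "B"
    return "guess"                        # no cue discriminates: choose at random

# Hypothetical example: which of two airports has more annual traffic?
cues = ["has_international_flights", "has_multiple_runways", "has_rail_link"]
airport_a = {"has_international_flights": 1, "has_multiple_runways": 1, "has_rail_link": 0}
airport_b = {"has_international_flights": 1, "has_multiple_runways": 0, "has_rail_link": 1}

print(take_the_best(airport_a, airport_b, cues))  # -> "A": decided by the second cue
```

The design point is the stopping rule: unlike a regression model, the heuristic ignores all cues after the first one that discriminates, which is exactly what makes it "frugal."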
MODELS OF DECISION MAKING

The broad goal of many decision models is to determine what front-end and back-end processes are associated with the best or most accurate decisions. One common methodology is policy capturing, in which a series of judgments of the same variable is made on the basis of sets of manipulated cues, and individuals’ judgment strategies can then be mathematically or statistically modeled. Camasso and Jagannathan (2001), for example, asked private pilots to make a series of choices between pairs of airports for their base of flying activities and then used policy capturing to determine what variables or cues were weighted most heavily in their choices. Pierce (1996) explored the strategies women used for evaluating treatments for breast cancer and heart ailments and identified problematic aspects in their judgment behaviors, such as discounting relevant information. Policy capturing is an important technique in organizational psychology investigations of selection decisions and performance appraisal (e.g., Rotundo & Sackett, 2002). It has also been used to study decisions in academic, military, and human-machine systems situations (e.g., Carretta, 2000; Karren & Barringer, 2002; Rotundo & Sackett, 2002). The assumption in policy capturing is that a linear-additive model that relates cues to criteria as determined by linear regression will provide the best assessment of judgment performance. Beta weights associated with cues indicate their importance, and the most successful judges give the highest weights to the best predictor cues (Kirlik & Bertel, in press). Although most policy-capturing studies rely on linear-additive models, Dorsey and Coovert (2003) studied experts in a simulated task to allocate rewards in the form of merit pay and found that “fuzzy system models generally perform as well as or better than both linear and nonlinear regression methods in terms of model fit” (p. 117).
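Because policy capturing is, at bottom, a regression exercise, a minimal sketch may help make the mechanics concrete: the judge's ratings of a set of manipulated cue profiles are regressed on the cue values, and the fitted weights stand in for the judge's policy. The scenario, cue names, and ratings below are invented for illustration; actual studies use carefully constructed cue sets and many more judgment trials.

```python
"""Minimal policy-capturing sketch: recover a judge's cue weights by regressing
their ratings on the manipulated cue values. Cue names and data are hypothetical."""
import numpy as np

# Each row is one scenario shown to the judge: [runway_length, visibility, crosswind]
cues = np.array([
    [1.0, 0.9, 0.2],
    [0.6, 0.4, 0.7],
    [0.8, 0.8, 0.5],
    [0.3, 0.2, 0.9],
    [0.9, 0.6, 0.1],
    [0.5, 0.7, 0.6],
])
ratings = np.array([8.5, 3.9, 6.8, 1.7, 7.6, 5.1])  # judge's "safety of landing" ratings

# Least-squares fit of a linear-additive judgment policy (intercept + one weight per cue)
X = np.column_stack([np.ones(len(cues)), cues])
coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)
intercept, weights = coefs[0], coefs[1:]

print(f"intercept = {intercept:.2f}")
for name, w in zip(["runway_length", "visibility", "crosswind"], weights):
    print(f"{name}: weight = {w:+.2f}")  # the fitted weights approximate the captured policy
```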
The Lens Model

Perhaps the best-known policy-capturing model is Brunswik’s (1943, 1956) lens model, which offers a classic description of the correspondence process. According to this model, people base their judgments of other individuals or objects on probabilistic cues or attributes in the environment (Hammond, 1996; Hammond & Stewart, 2001; see Figure 6.3). Examinations of individual judgments must take into account the role of the environment in which an individual is performing and the ecological validity of cues being used (i.e., the degree of relatedness between cues and criterion). Moreover, to understand the factors that impact judgment, it is necessary to examine systematically not only variations in context but also variations in decision makers.

Figure 6.3 illustrates an iteration of the lens model that has recently been employed in human factors research (e.g., Bisantz et al., 2000; Bisantz & Pritchett, 2003). In this situation, two (or more, in an n-system design) “cognitive systems” interact in a known task ecology and can be compared with respect to cue utilization and correspondence with the
Figure 6.3. Lens model. From Judgment Analysis: Theory, Methods, and Applications (p. 68), by R. W. Cooksey, 1996, San Diego, CA: Academic Press. Copyright 1996 by Academic Press. Reprinted with permission. (The diagram links the ecology, or criterion, Ye and two judges’ judgments YA and YB through a common set of cues X1 … Xk; ecological validities connect the cues to the criterion, cue utilization validities connect the cues to each judge, and achievement for each judge and agreement between judges A and B are the resulting correlations.)
environment (i.e., G, achievement, or accuracy of judgments). The cognitive systems can be different human judges, or comparisons can be made, for example, between human judges and automated systems. The array of cues can also be varied, as well as the ecological validity of cues. The coherence of the task ecology is measured by the relationships (multiple correlations) between judgments and cues on the one side, and cues and the criterion on the other side. Multiple regression techniques are applied to capture certain aspects of the judgment process, such as cue weights, and consistency and predictability of judgments; so the model can be used to examine accuracy of judgments and decisions as well as coherence or consistency of front-end and back-end processes across many situations. The method is sensitive to changes produced by cue variations or changes in the environment and also allows for cognitive feedback to judges on specific facets of their judgment policies. The lens model has been applied in a variety of contexts, including interpersonal perception, weather forecasting, and conflict resolution (see Cooksey, 1996, for a review).

Recently, some human factors researchers have relied on the Brunswikian framework and the lens model to perform judgment analyses in high-technology environments. Because technology is both a facet of the judgment system ecology and a mediator within it, the Brunswikian framework can be used in several overlapping configurations—to compare, for example, judgments of technology as decision maker with those of human decision makers in a task environment; to explore human judgment with technology as cue within an array of cues; or to capture judgment strategies with technology as environment or contrast them with judgments in nontechnological environments. Bisantz and Pritchett (2003) related technology as decision maker to human judgments.
They utilized the lens model to distinguish participant judgment strategies and outcomes in a simulated conflict detection task across a variety of conditions, and they compared them with those generated by several automated collision alert systems. In a variant of this work, Bass and Pritchett (2002) investigated the human-automated judgment system interaction, using an adapted lens model to characterize the interaction and resultant judgments in an air traffic conflict prediction task. Technology as cue explorations have included investigations of presentation mode or display design of automated decision aids as well as the impact of technological cues on judgment strategies. Research on automation overreliance and automation bias, for example, has demonstrated that high-tech sources of information are often weighted much more heavily than are other cues in the diagnosis process (e.g., Mosier, Skitka, Heers, & Burdick, 1998). In other work, Horrey, Wickens, Strauss, Kirlik, and Stewart (2006) examined the impact of automated aids on performance in a simulated battlefield threat assessment task and used lens model analyses to parse out the effects of automation on cue utilization. Individual as well as team performance in technology as environment has also been investigated using the lens model approach. Bisantz et al. (2000) employed the lens model to characterize dynamic command and control task performance in a naval combat information center environment, and extended the approach to create dynamic models of the task environment that were specific to participants in order to examine aspects of their judgment and decision making. Rothrock and Kirlik (2003, 2006) added to this research by constructing a user-policy model of the same task. Adelman, Cohen, Bresnick, Chinnis, and Laskey (1993) pioneered the use of the Brunswikian approach to model the judgment strategies used by Patriot missile defense teams, and Adelman, Miller, and Yeo (2006) applied lens model analyses to reveal the extent to which environmental and task limitations imposed by computer enhancements impact distributed team performance. The lens model has proven to be a valuable research tool to explore decision making in dynamic environments and may be seen as a complementary approach to naturalistic decision-making models described later in this section. Complexities of the environment and the way in which they are handled by human decision makers, as well as features of the cues they utilized, such as “goodness” or relevance, will impact the accuracy of their resultant judgments. Situational factors are critical in examinations of dynamic decision-making environments, as is taking into account the impact that changes in these factors can have on judgment processes and outcomes. There is also evidence that characteristics of the decision maker will affect judgments in dynamic environments. Domain experts, for example, exhibit tactics that are different from those of novices and are more likely than novices to understand relationships among cues, to evaluate the validity of cues accurately, and to utilize cues effectively in making their judgments (e.g., Zsambok & Klein, 1997). The lens model can complement qualitative methods such as cognitive task analysis (e.g., Hoffman & Militello, 2008; Schraagen, Chipman, & Shalin, 2000) to identify the cues experts are using. 
The population of scientists who are adopting the lens model approach in human factors investigations is relatively small, but new research attesting to its relevance continues to appear (e.g., Kirlik, 2006).
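For readers who want the quantitative backbone of these analyses, the components in Figure 6.3 are conventionally combined in the lens model equation (presented, e.g., in Cooksey, 1996); the notation below follows that common convention rather than any notation specific to the studies cited above.

```latex
% Lens model equation: achievement r_a decomposed into linear knowledge (G),
% judge consistency (R_s), environmental predictability (R_e), and residual,
% unmodeled knowledge (C)
r_a \;=\; G\, R_s\, R_e \;+\; C\,\sqrt{1 - R_s^{2}}\;\sqrt{1 - R_e^{2}}
```

Here R_s and R_e are the multiple correlations of the judge's and the environment's linear models with the cues, and G is the correlation between the predictions of those two models; the first term therefore captures how well a consistent, linearly modeled judge tracks a predictable environment.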
Decision Ladders

The decision ladder (Rasmussen, 1976; Vicente, 1999; Vicente & Pawlak, 1994) is a control task model that incorporates front- and back-end processes of decision making as legs of a ladder. Figure 6.4 shows a generic decision ladder that was annotated by Naikar, Moylan, and Pearce (2006). Boxes represent stages of information-processing activities, and ovals indicate the resultant knowledge. On the left side of the ladder are the front-end processes: the perception of a state of affairs that is unsatisfactory or threatening, information gathering, situation assessment, and diagnosis. At the top of the ladder are the back-end
Figure 6.4. The decision ladder as annotated by reviewing a range of papers by Rasmussen and Vicente. From "Analysing Activity in Complex Systems with Cognitive Work Analysis: Concepts, Guidelines and Case Study for Control Task Analysis" by N. Naikar, A. Moylan, & B. Pearce, 2006, Theoretical Issues in Ergonomics Science, 7, p. 379. Copyright 2006 by Taylor & Francis. Reprinted with permission. (The ladder runs from activation and detection of a need for action, through observation of information, identification of the system state, prediction of consequences, and evaluation of options against goals, to definition of the target state and task, formulation of a procedure, and execution; "shunt" and "leap" shortcuts connect information-processing activities and knowledge states directly.)
processes—prediction of consequences and evaluation of options—and the descending right leg represents the planning and execution of the action.

Decision ladders have often been used to describe and design complex systems, such as a military airborne early warning and control system (Naikar et al., 2006), a next-generation U.S. Navy surface combatant system (Bisantz et al., 2003), and DURESS II, a real-time, interactive thermal-hydraulic process control simulation (Vicente & Pawlak, 1994). The model has several unique facets that enable it to capture differences in operator decision processes. For example, it incorporates the ability to take shortcuts, or “leaps,” between knowledge points, as when an experienced operator skips from the point of system state diagnosis to knowledge of the corrective procedure. Bisantz et al. (2003), for instance, used different variants of the ladder model to illustrate that detection and classification methods varied as a function of expertise level, with more expert operators “leaping” directly from identification of a submarine to knowledge of associated tracking procedures. Shortcuts also include “shunts” from activities to knowledge states, as when the process of information collection guides the operator directly to knowledge of the appropriate procedure. Additionally, the model recognizes that decision activities can start on the right as well as the left side of the ladder—as when an expert begins with identification of the target state—and can flow in either a left-to-right or right-to-left sequence (Naikar et al., 2006).
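As an illustrative (and admittedly simplified) way to see how leaps and shunts alter the canonical sequence, the ladder can be encoded as a small graph. The node names below paraphrase Figure 6.4, and the shortcut edges are hypothetical examples rather than output of any cognitive work analysis tool.

```python
"""Illustrative encoding of the decision ladder as a graph with shortcut edges.
Node names paraphrase the annotated ladder in Figure 6.4; shortcuts are examples."""

# Canonical left-leg (front-end), top, and right-leg (back-end) sequence
LADDER = [
    "alert", "observe_information", "identify_system_state",
    "predict_consequences", "evaluate_options_against_goals",
    "define_target_state", "define_task", "formulate_procedure", "execute",
]

# Expert shortcuts: "leaps" connect two knowledge states; "shunts" connect an
# information-processing activity directly to a knowledge state.
SHORTCUTS = {
    "leap":  [("identify_system_state", "formulate_procedure")],  # diagnosis -> known remedy
    "shunt": [("observe_information", "define_task")],            # observation -> required task
}

def next_steps(node):
    """Return the canonical successor plus any shortcut destinations from this node."""
    steps = []
    if node in LADDER and LADDER.index(node) + 1 < len(LADDER):
        steps.append(LADDER[LADDER.index(node) + 1])
    for edges in SHORTCUTS.values():
        steps += [dst for src, dst in edges if src == node]
    return steps

print(next_steps("identify_system_state"))  # an expert may leap straight to a procedure
```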
Naturalistic Decision-Making Models

The notion that expertise may enable shortcuts in the decision process is an essential element of naturalistic decision making (NDM) models. In the last two decades a body of research has explored characteristics of judgment and decision making within dynamic and challenging environments such as firefighting, combat operations, medicine, and aviation (see, e.g., Bogner, 1994; Buch & Diehl, 1984; Cesna & Mosier, 2005; Crandall & Getchell-Reiter, 1993; Klein, 1993a, 1993b, 2008; Klein, Calderwood, & Clinton-Cirocco, 1986; Zsambok & Klein, 1997). This work, broadly classified under the framework of NDM, describes the process of making decisions in situations involving poorly structured problems, uncertain and dynamic environments, shifting or competing goals, action feedback loops, time pressure, and multiple players (Orasanu & Connolly, 1993). These situations often call for immediate action, with potentially dire consequences for bad choices.

Characteristics of NDM models include (a) process orientation, emphasizing the cognitive processes of proficient decision makers—what information they use and how they use it; (b) situation–action-matching decision rules—decision making by proficient decision makers is a matter of sequential option evaluation (if indeed more than one option is evaluated), and the decision is made in a pattern-matching process rather than a choice; (c) context-bound informal modeling of decision processes—models are applied and knowledge is domain specific; and (d) empirical-based prescription—prescriptions for decision making are derived from descriptive models of expert performance (Lipshitz, Klein, Orasanu, & Salas, 2001).

The recognition-primed decision making (RPD) framework (Klein, 1993a, 2008) and the recognition/metacognition (R/M) framework (Cohen, in press; Cohen, Freeman,
& Thompson, 1997; Cohen, Freeman, & Wolf, 1996) have highlighted the importance of front-end processes—accurate situation assessment in particular—in expert decision making. Options are generated and evaluated sequentially and may come from prescribed responses (e.g., procedures), from the retrieval of similar situations from memory, or from the invention of a novel course of action (Orasanu & Fischer, 1997; Serfaty, MacMillan, Entin, & Entin, 1997). They are tested sequentially via back-end processes such as mental simulation (Klein, 1993a, 1993b) or matching, in which the expert matches a plan of action against his or her model of the situation and visualizes its effectiveness (e.g., Serfaty et al., 1997).

The RPD model (Klein, 1989, 1993a, 1993b; see Figure 6.5) outlines perhaps the most often cited framework of NDM and is defined by three core aspects: the quality of the decision maker’s situation assessment, his or her level of experience/expertise, and the use of recognitional rather than analytical decision processes. Situation assessment is at the heart of the RPD model and is highly dependent on the individual’s level of experience/expertise. The degree to which operators can recognize that a situation is similar to one previously encountered comes into play. In familiar situations, operators expect patterns of certain, relevant cues, and the identification of these cues confirms that the situation is indeed similar to something they had experienced before. Experts can use past scenarios to generate a workable solution to the current problem situation. One caveat here is that experts must take care not to “recognize” a situation that is not within their scope of experience. The Air Florida pilots in the opening vignette thought they recognized the cockpit readings; tragically, they did not.

From the cockpit voice recorder it is possible to determine that they did indeed notice that the throttle settings that normally corresponded to takeoff power had, during the takeoff roll, led to engine power instrument readings well in excess of normal takeoff thrust. Rather than imagining the readings to be faulty, the pilot’s comments indicate clearly his judgment that the apparent increase in power must be due to what were for him unusually cold conditions. In other words, he apparently thought that the engines were more efficient in the cold! (Nagel, 1988, p. 290)
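Read as a control loop, the recognitional cycle in Figure 6.5 can be paraphrased in a few lines of code. The sketch below is only a schematic of that flow: `recognize`, `expectancies_hold`, and `simulate` are placeholder functions supplied by the caller, and nothing here should be taken as an implementation of Klein's model itself.

```python
"""Schematic paraphrase of the RPD cycle in Figure 6.5 as a simple control loop.
All callables and object methods are placeholders supplied by the caller."""

def rpd_cycle(situation, recognize, expectancies_hold, simulate, max_options=5):
    while True:
        match = recognize(situation)                 # plausible goals, cues, expectancies, actions
        if match is None:                            # unfamiliar situation: gather more information
            situation = situation.seek_more_information()
            continue
        if not expectancies_hold(match, situation):  # expectancies violated: reassess the situation
            situation = situation.reassess()
            continue
        for action in match.actions[:max_options]:   # options are evaluated one at a time
            verdict = simulate(action, situation)    # mental simulation of the candidate action
            if verdict == "works":
                return action                        # implement the first workable option
            if verdict == "works_with_changes":
                return action.modify()               # "yes, but": modify, then implement
        situation = situation.reassess()             # no option survives simulation: reassess
```

The design point the sketch tries to preserve is serial option evaluation: the decision maker does not compare a full set of alternatives but commits to the first option that survives mental simulation.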
Recent iterations of the recognitional model acknowledge that analytical processes may be involved when situations are not familiar or facets of the situation do not meet expectancies. Situations may call for analytic or constructive thought processes such as sensemaking, or coming to an understanding of the situation by connecting situational elements into a causal and/or temporal order, placing elements into frameworks, accounting for unexpected or inconsistent elements, and planning and replanning (Klein, 2008; Klein et al., 2003; Weick, 1995). Cohen’s R/M model in particular focuses on these types of situations, in which recognition does not work, and instead the decision maker initiates a process of constructing causal models or stories. These expand and change as experts collect and synthesize data. Both the R/M model’s STEP procedure (construct a story, test its plausibility, evaluate the goodness of the assessment through critical questioning, plan for the possibility of error; Cohen et al., 1996) and Lipshitz’ RAWFS heuristic (reducing uncertainty, assumption-based reasoning, weighing pros and cons of alternatives, forestalling to
Figure 6.5. Recognition-primed decision-making model from "A recognition-primed decision (RPD) model of rapid decision making," by G. A. Klein, 1993. In G. A. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision Making in Action: Models and Methods (pp. 138–147). Norwood, NJ: Ablex Publishing. (The flowchart runs from experiencing the situation in a changing context to asking whether the situation is familiar—seeking more information if it is not—through the four aspects of recognition (plausible goals, relevant cues, expectancies, actions 1 … n), to reassessing the situation if expectancies are violated, and then to mental simulation of an action: implement it if it will work, modify it if it will work with changes, or return to the option set if it will not.)
anticipate undesirable contingencies, suppressing uncertainty; Lipshitz et al., 2001; Lipshitz & Strauss, 1997) describe how decision makers cope with uncertainty. Metacognition, or the process of monitoring and controlling one’s cognitive processing, as well as the metarecognitional skills of critiquing and correcting, are used to verify the correctness of diagnosis or situation assessment. Experts have been shown to be more likely than novices to recheck and confirm or revise their initial assessments (e.g., Chi, Glaser, & Rees, 1982; Khoo & Mosier, 2008; Patel & Groen, 1991), and according to the R/M model this process continues until the situation model that has been created is satisfactory, or until the potential benefits of further processing are outweighed by its costs (Cohen et al., 1996, 1997). Metacognitional monitoring is also a facet of back-end processes—for instance, when
battle commanders continuously evaluate how their selected course of action may impact the situation and whether it may need to be modified (Serfaty et al., 1997). NDM models have been used to explore decision making in diverse operational settings, examining, for instance, how offshore installation managers, fire ground commanders, military platoon leaders, U.S. Army tactical battlefield leaders, design engineers, or commercial airline pilots make decisions (Cohen, 1993; Cohen, Adelman, Tolcott, Bresnick, & Marvin, 1993; Flin, Slaven, & Stewart, 1996; Klein, 1989; Klein et al., 1986; Mosier & Chidester, 1991).
Other Decision Models

The models discussed previously do not represent an exhaustive set, but space limitations preclude the discussion of every decision framework. We encourage the reader to examine models such as Norman’s (1986) seven stages of action, which have been used, for example, in identifying and categorizing action error points (e.g., Norman, 1986, 1990; Zhang, Patel, Johnson, & Shortliffe, 2004) as well as in usability analyses (e.g., T. S. Andre, Hartson, Belz, & McCreary, 2001; Cuomo & Bowen, 1992; Rizzo, Marchigiani, & Andreadis, 1997); image theory (Beach, 1993, 1998), which is a common approach to studying decision making in organizations; and Hammond’s cognitive continuum theory (e.g., Hammond, 1996, 2000; Hammond, Hamm, Grassia, & Pearson, 1997), which posits that judgments and decisions vary in the extent to which they are based on intuitive or analytical processes, or some combination of both. Individuals may move along a continuum from intuition to analysis, shifting strategies many times within the same problem. According to Hammond (Hammond et al., 1997), the most appropriate strategy depends on features of the individual, the task, and the context.
LAYERS OF THE DECISION CONTEXT: INDIVIDUAL VARIABLES
How do people actually decide in realistic settings? ...such decisions are sometimes so tough: time pressures, ill-structured problems, uncertain and incomplete information, shifting goals, action-feedback loops, high stakes, multiple players, and organizational context. The list is formidable, yet people carry on—and sometimes excel—under these circumstances. How do they do it? (Lipshitz, 1993, p. 103)
The contextual layers shown earlier, in Figure 6.2, have a profound effect on how front-end decision processes will be conducted as well as on available and appropriate back-end choices. Perhaps the most influential variable at the individual level is the decision maker’s level of expertise; research has also been done on the impact of aging on decision processes.
Expertise
Expertise impacts the decision process in several ways. First, experienced decision makers exhibit high competence within their domain and have accumulated a vast repertoire
of instances or cases they can draw on (e.g., Chi, Feltovich, & Glaser, 1981; Chi, Glaser, & Farr, 1988; Klein, 1993a, 1993b; Patel & Groen, 1991; Simon & Chase, 1973). Supporting the relationship between domain expertise and high-quality outcomes, Charness and Tuffiash (2008) noted that experts’ superior performance in domains such as sports, medicine, and aviation is directly dependent on their domain-specific knowledge and that this knowledge enables experts to anticipate future actions and prepare for them more efficiently.
Second, experts see and process information differently than novices do. It is important to note that in naturalistic environments experts are able to quickly identify the most ecologically valid cues—that is, the subset of information most critical to accurate situation assessment—and to match these cues against patterns in their experience base. For expert firefighters, for example, cues such as the sponginess of the floor predict the path of a fire. Experienced pilots match the shape, color, and size of clouds with patterns in memory to determine whether they should fly through them or around them. The ability to use pattern-matching processes in naturalistic environments enables quick diagnosis and decision making; experts are able to comprehend quickly what may be at the core of a problem and what actions should be taken (Cellier, Eyrolle, & Marine, 1997; Orasanu & Connolly, 1993). This is a critical factor in time-constrained situations. Experts are sensitive to constantly changing values of information and adapt their mental models accordingly (Waag & Bell, 1997). Often, decisions are made incrementally and iteratively as decision makers use feedback from the environment to adjust their actions and are able to change course in the middle of a situation when a decision is not working effectively (Connolly, 1988; Lipshitz, 1993). In the medical domain, physicians often monitor results of a treatment to refine their diagnoses (Orasanu & Connolly, 1993). Situation assessment is characterized by action/feedback loops, as experts use an iterative process to incorporate changes that result from incremental decisions.
Experts also employ strategies that enable them to cope with ambiguity, dynamic conditions, and time pressure. They are proactive, anticipating potential failures, risks, or conflicts and preparing for them. They are aware of time constraints and know how to “buy time.” For instance, pilots may request a holding pattern to gather additional decision-relevant information (Orasanu, 1990). To manage time wisely, experts anticipate developments, make contingency plans, prioritize tasks, and use low-workload periods to prepare for upcoming events (Fischer, Orasanu, & Montalvo, 1993; Orasanu, 1990; Xiao, Milgram, & Doyle, 1997a). Xiao, Milgram, and Doyle (1997b) found that expert anesthetists planned points for consideration that enabled them to activate knowledge and rules for problem regulation, action preparation, and arrangement of material assistance. In other work, expert physicians were found to activate a significantly larger proportion of knowledge-based control for regulating conflict—problems involving contradictions between surgical goals and patient status and properties—than did intermediates and residents (Morineau et al., 2009).
Mental models and schemata. The term mental model is widely used in explanations of how experts make decisions in complex operational environments.
Experts are said to have mental models of the domain they are operating in, their task, their current situation, and, if decision making takes place in a team context, of their team (Cannon-Bowers,
Salas, & Converse, 1993; Orasanu, 1994; Rouse, Cannon-Bowers, & Salas, 1992). These models are believed to direct decision makers’ attention to relevant cues and information, to guide their problem understanding and action selection, and to support coordinated action with others. Although the term mental model undeniably has explanatory appeal, its meaning remains rather elusive. What are mental models? How do they differ from traditional cognitive psychology concepts, such as schemata or scripts? Definitions offered by the naturalistic decision-making literature commonly refer to mental models as organized knowledge or internal representations that make explicit how, for example, elements of the operational environment or the task relate to one another and allow people to explain and predict environmental events or task progression (e.g., Cannon-Bowers et al., 1993; Orasanu & Salas, 1993). These attributes and functions have traditionally been associated with schemata and scripts in the cognitive science community. Schemata are defined as mental structures that represent aspects of people’s physical and social world, ranging from individuals and objects to events (Rumelhart, 1980). Similarly, Schank and Abelson (1977) argued that people’s understanding of the world around them and their actions are guided by script-like knowledge structures. However, three characteristics of mental models seem to go beyond schema- and scriptlike structures. First, some authors have remarked that mental models are “instantiated knowledge structures” (Salas, Stagl, & Burke, 2004, p. 57) that represent specific situations (Lipshitz & Ben Shaul, 1997). Schemata, in contrast, provide generic information that may concern a specific entity or may refer to a class; for instance, a schema may indicate features and behavior typical of an individual or describe events of a typical workday. The important aspect is that these representations reflect invariances. Relating schemata and mental models, Lipshitz and Ben Shaul (1997) suggested that schemata drive the construction of mental models insofar as generic knowledge is used to build situation models. This proposal echoes Rouse and Morris (1986), who conceived of a mental model as “a mechanism whereby humans generate descriptions of system purpose and form, explanations of system functioning and observed system states, and predictions of future system states” (p. 360). A second characteristic of mental models is that they enable humans to envision possible developments of their current situation. Klein (1999) described how the leader of an emergency rescue team integrates features of a car accident scene into a “running” model of the situation, simulating a novel way to get to the unconscious driver trapped in the car, and how he uses this simulation as a basis for his decision. Scripts as defined by Schank and Abelson (1995) seem too rigid to support this flexibility. Expectations based on a script reflect typical outcomes: Once a situation has been identified as an instance of a known script, expectations about what will happen next are simply read off the script. Experts, in contrast, are able to imagine what is possible, and this may not be the typical outcome. Third, the term mental model refers to a specific type of knowledge structure. Unlike schemata or scripts, mental models are characterized as being representational in nature. 
As Johnson-Laird (2004) noted, “A principle of the modern theory of mental models is that a model has the same structure as the situation that it represents. Like an architect’s model or a molecular biologist’s model, the parts of the model and their structure correspond to what is represented” (p. 181).
The assumption that mental models of experts are indeed models of the world has rarely been examined by NDM researchers, yet it seems implicit in claims of mental simulations. Some evidence is reported by Serfaty et al. (1997), who observed that action plans proposed by experienced battle commanders were more likely to account for the sequencing and timing of events than those drafted by less experienced colleagues. Experienced commanders were also more likely to use maps during their decision making, a finding that suggests that visualization may be an important component in complex decision making. This conjecture is consistent with research on the benefits of external visualization tools, such as diagrams or process maps, to people’s problem solving, reasoning, and decision-making performance (e.g., Bauer & Johnson-Laird, 1993; Fiori & Schooler, 2004; Hoffmann, 2005). Limitations of expertise. Because domain experts have more experiences to draw on and more accurate mental models than novices have, it seems logical to posit that they will be better than novices according to both coherence and correspondence criteria. Research investigating expert judgment processes, however, has not always supported this hypothesis, particularly with respect to coherent processes or immunity to decision biases. Contrary to Klein’s (1989) finding that expertise ameliorated confirmation bias, Cook and Smallman (2008) found that experienced naval trainee analysts and reservists did exhibit confirmation bias in their selection of evidence in a realistic intelligence analysis task. In other work, expert pilots have been shown to be as susceptible to automation bias as novices (Mosier et al., 1998; Mosier, Skitka, Dunbar, & McDonnell, 2001; Skitka, Mosier, & Burdick, 1999) and are likely to display noncoherent processes under time pressure (Mosier, Sethi, McCauley, Khoo, & Orasanu, 2007). In the medical domain, Eddy (1982) compared expert physician diagnoses against mathematical models (the traditional test of coherence), and found that they made significant errors. Kuipers, Moskowitz, and Kassirer (1988) found that physicians did not make decisions after having checked all the facts but, rather, made an initial decision at an abstract level and then tailored their processes to further specify that decision more precisely. One result of this strategy was that information inconsistent with initial diagnoses was not always weighted appropriately in the judgment. In short, although experts should be able to make more coherent judgments because they have more experience, more domain knowledge, and a better grasp of what is relevant and important, they do not always do so. Even experts may not identify all of the information necessary for situation diagnosis and may be stymied by a seemingly incoherent situation, as illustrated in the following vignette (see also Mosier & McCauley, 2006). On February 1, 2003, mission control engineers at the National Aeronautics and Space Administration (NASA) Johnson Space Center, Houston, struggled to formulate a coherent picture of events aboard the space shuttle Columbia (Langewiesche, 2003). At mission control, automatically transmitted data gave the engineers exact information on the status of systems on the shuttle, but the information did not gel or point to a common root for the indications they were seeing. The first piece of information that was inconsistent with a normal, routine approach was a low tire pressure indicator for the left main landing gear. 
Other disparate indicators included a “strange failure of temperature transducers in a hydraulic return line.... All four of them...located in the aft part of the left wing”
(p. 61). All of the telemetric indications were valid, but the combination of readings did not make sense. Phrases such as “Okay, is there anything common to them?” “All other indications for your hydraulic systems indications are good?... And the other temps are normal?” and “And there’s no commonality between all these tire pressure instrumentations and the hydraulic return instrumentations?” (Langewiesche, 2003, pp. 61, 62) illustrate the process of seeking coherence, of trying to construct a story that would encompass the information being transmitted to ground control. Even though they had precise readings of system states, the engineers at mission control were not able to understand what was going on as they failed to fit seemingly contradictory indications together for a diagnosis. Frustration with the incoherence of indications, as well as an unwillingness to accept the unknown disaster they portended, is reflected in phrases such as “What in the world?”. . . “Is it instrumentation . . . ? Gotta be”. . . “so I do believe it’s instrumentation” (quotations from Langewiesche, 2003, pp. 61, 62). Interestingly, at least one engineer in Houston had envisioned a scenario that could account for all of the data they were receiving. NASA program managers had been aware that a chunk of solid foam had broken off during launch and smashed into the left wing at high speed and had dismissed the event as “essentially unthreatening” (Langewiesche, 2003, p. 61). Engineer Jeff Kling, however, had discussed with his team possible ramifications of the foam hit and the telemetric indications they might expect if a hole had been created, allowing hot gases to enter the wing during reentry. His model was consistent with the data that were coming in and could have been utilized to achieve a coherent picture of what was happening. Unfortunately, the possibility of this diagnosis was precluded because Kling’s model was not even mentioned during the event—in part because “...it had become a matter of faith within NASA that foam strikes—which were a known problem—could not cause mortal damage to the shuttle” (quotations from Langewiesche, 2003, p. 73). This event dramatically illustrates not only the need for coherence in effective use of technological systems, but also the requirement that data within the electronic environment be integrated in the context of an accurate correspondent world.
Aging and Decision Making
Investigations of aging and attention as related to decision making have taken on importance in domains such as driving. Older drivers, for example, are more subject to change blindness, the failure to detect rapid changes that occur during a blink or flicker, and their failure to attend to changes may lead them to underestimate the risk involved in maneuvers and/or to engage in unsafe maneuvers (Caird, Edwards, Creaser, & Horrey, 2005). Other work on aging effects has indicated that older adults may interact with automated decision aids differently than do younger adults. Specifically, older adults were found to rely less on automation and to react more strongly (in terms of reliance) to changes in reliability. On the other hand, older adults were less responsive than younger adults to changing costs associated with reliance on the aid (Ezer, Fisk, & Rogers, 2008; Hitt, Mouloua, & Vincenzi, 1999; Rogers, Stronge, & Fisk, 2006; Sanchez, Fisk, & Rogers, 2004). Differences in cognitive processing between older and younger adults present some challenges for designers of automated systems and decision aids.
The good news is that older adults are as capable as younger adults of making “good” decisions in their domains of expertise. They may require more time to make a driving decision such as selecting a route, but given sufficient time, their decisions in the familiar domain of driving are qualitatively the same as those of younger drivers (Walker, Fain, Fisk, & McGuire, 1997). Similarly, research by Morrow et al. (2009) showed no age-related decline in the quality of pilots’ decision making. Experienced pilots, irrespective of age, spent more time reading problem-critical information in aviation scenarios, provided more elaborations to the problems, and were more likely to generate appropriate responses to the problem than novice pilots.
DECISION MAKING AS A TEAM EFFORT
All decisions, including those made by professionals, are made in a social context. People are never completely alone. The social perspective is especially clear when the focus is on how teams (as opposed to single individuals) make decisions. —Montgomery, Lipshitz, and Brehmer (2005, p. 5)
In many professional settings decisions are carried out by teams of experts rather than by individuals. For instance, medical decisions involved in patient diagnosis and care often require the input of different specialists and the collaboration of physician and nurse teams. Similarly, decisions during military or firefighting operations and other high-risk, dynamically changing and time-pressured tasks are the product of a team effort even though one individual may ultimately bear the responsibility for the decision. Decision making in these contexts is a complex process, as information from many sources has to be integrated into a coherent representation of the situation and errors may be fatal. However, merely putting several individuals to work ensures neither that they function as a team nor that their decision making will be sound. The Air Florida crew mentioned at the beginning of the chapter is a case in point. Here were two professionals making a disastrous decision, apparently as the result of an inaccurate situation assessment (National Transportation Safety Board [NTSB], 1982). As documented in the cockpit voice recording of the flight, the first officer expressed some unease at the ice buildup on the aircraft’s wings and said the engine readings “don’t seem right.” Yet the captain dismissed—“We get to go here in a minute”—or rejected the first officer’s concerns—“Yes, it (the engine reading) is (right), there’s 80.”
Faulty decision making by flight crews is not an uncommon factor in aircraft accidents. A review by the NTSB mentioned this as an issue in about two thirds of flight-crew-related major accidents (NTSB, 1994). Problems in the decision making of teams have also been reported in experimental and field studies. For example, Mosier et al. (2001) observed automation bias with both individual pilots and crews. Other research noted the unwillingness of team members to collaborate with others (Perry & Wears, in press) or to correct inaccurate diagnoses and decisions (Orasanu et al., 1998; Tschan et al., 2009), suggesting the presence of social phenomena, such as “silo” thinking, conformity, and confirmation bias. On the other hand, the proliferation of work teams in organizations
makes it ever more important to understand which factors facilitate, and which impede, the effectiveness of teams. By analyzing cases in which teams have failed and by comparing successful and unsuccessful teams, researchers have identified components of effective teamwork. Authors of this body of literature understand teamwork broadly, including both problem solving and decision making by teams. Although we acknowledge that the boundary between problem solving and decision making is fluid, especially in naturalistic task environments, we nonetheless limit our discussion to those behaviors and knowledge structures that research has identified as essential to team decision making. Moreover, we focus on research that examined professionals and/or involved tasks reflecting critical components of team decision making in complex operational environments.
What Makes a Team?
Teams are generally defined as social entities consisting of at least two individuals who work together on a common cause. Teamwork, moreover, takes place in an organizational context that enables and constrains team members’ activities. Team members are brought together for their individual competencies and are tasked by organizations with operationally relevant missions. In contrast to groups, team members have defined roles and responsibilities and are dependent on each other with respect to their task performance (Dyer, 1984; Kozlowski & Ilgen, 2006; Salas, Dickinson, Converse, & Tannenbaum, 1992). Accordingly, team decision making concerns decisions “undertaken by interdependent individuals to achieve a common goal” (Orasanu & Salas, 1993, p. 328). Team decision making, moreover, is a process in the course of which several individuals gather, interpret, communicate, and integrate information in support of a mutually accepted decision. Consistent with their distinct roles, team members frequently bring different perspectives or expertise to the decision process. Whereas some teams—such as those concerned with policy decisions—exist solely for the purpose of issuing courses of action or guidelines, other teams, such as firefighters, flight crews, or surgical teams, face decisions as part of their overarching task. Success in team decision making may be measured in terms of goal achievement—did the team’s decision lead to the desired outcome? In addition, considerations of team success also need to address a team’s judgment and decision processes. The team literature has identified several process variables—cognitions and behaviors—that are crucial to effective team decision making.
Shared Cognition in Teams
Team members need to have a common knowledge base to support their joint decision making (Orasanu, 1994; Rouse et al., 1992; Salas, Cooke, & Rosen, 2008; Salas, Sims, & Burke, 2005). Coordinated action by team members requires that they have critical domain knowledge in common, in particular knowledge concerning the operational environment, equipment, standard procedures, practices, and strategies. In addition, they must have a common understanding of their team’s mission: what their objectives are, their plans, their
individual roles and responsibilities, and how they as a team are to interact (Cannon-Bowers & Salas, 2001). Shared task and teamwork knowledge and, to a lesser degree, common domain knowledge were found to foster effective team processes (i.e., extent of planning, collaboration, and communication), which in turn increased task performance (Mathieu, Heffner, Goodwin, Salas, & Cannon-Bowers, 2000; Smith-Jentsch, Cannon-Bowers, Tannenbaum, & Salas, 2008). Shared task and teamwork knowledge may also increase team members’ sensitivity to the information needs of others, resulting in implicit coordination—that is, team members volunteer task-relevant information before it is requested (Entin & Serfaty, 1999; MacMillan, Entin, & Serfaty, 2004; Stout, Cannon-Bowers, Salas, & Milanovich, 1999). Rousseau, Aubé, and Savoie (2006) noted that mission-specific knowledge is not necessarily shared but needs to be explicitly established while team members prepare for their upcoming task. In some domains, such as design teams, mission specification, goal setting, planning, and role assignments may be part of a team’s decision making (Badke-Schaub, in press). In other domains, such as aviation or the military, task preparation is conducted in mission briefings. These briefings are often delivered by the team leader and follow a scripted sequence, but they make explicit the team’s objectives and game plan and have been found to support shared team member models (Marks, Zaccaro, & Mathieu, 2000). Through their briefings leaders also set the standard for subsequent team interactions (Ginnett, 1993).
For successful task performance, team members must also have a shared understanding of current task conditions and shared expectations of future events. As teams operate in dynamically changing environments, plans that were appropriate at some point in time may become inadequate. Thus, accurate situation assessment is critical. Team members must monitor their task environment for cues and information that are inconsistent with their original assumptions and may call for a change of action. Orasanu (1994) coined the term shared situation models to capture the dynamic nature of team members’ situational awareness. Shared situation models may develop spontaneously if team members attend to the same cues and interpret them in a similar fashion. However, research indicates that this assumption may be overly optimistic. For instance, air traffic controllers and pilots were found to frame flight situations differently (Barshi & Chute, 2001), to focus on different cues in traffic situations, and, as a result, to disagree in their risk assessments and action responses in identical flight scenarios (Davison & Orasanu, 2001). Moreover, even when experts have the same professional background, they may nonetheless provide different interpretations of the same data, as shown in research by Fischer and Orasanu (2001) and Fischer et al. (1995, 2003), which involved commercial flight crews. These findings highlight the important role team communication plays in fostering shared situation models, an issue we discuss in the section on team communication.
Since the concepts of shared knowledge and shared mental models were introduced into the teamwork literature, most researchers have equated their meaning with knowledge convergence (Cannon-Bowers et al., 1993; Orasanu, 1990, 1994; Orasanu & Salas, 1993).
Team processes were interpreted in terms of common knowledge structures (e.g., Entin & Serfaty,
1999; Orasanu, 1994), and cognitive measures were developed to tap similarity in team members’ knowledge (e.g., Smith-Jentsch, Campbell, Milanovich, & Reynolds, 2001). Recently, some scholars have advanced a more differentiated view on shared knowledge by emphasizing the importance of complementariness in team members’ knowledge as a means to optimize resources (Cooke, Salas, Cannon-Bowers, & Stout, 2000; Mohammed & Dumville, 2001; Wright & Endsley, 2008). For instance, Mohammed and Dumville (2001) argued that many team tasks, such as surgery, require the pooling of diverse expertise or, because of their complexity, the division of labor. Task performance thus depends critically on team members bringing in their share of unique knowledge. However, what an individual team member knows needs to be compatible with knowledge held by other team members (i.e., their knowledge structures need to cohere with one another). In addition, the knowledge distribution present in their team needs to support a coherent and accurate representation of their team’s task. This assumption was experimentally tested by Sperling and Pritchett (2005). In a simulator study involving military helicopter crews, they were able to show that apportioning task-relevant knowledge across crew members enhanced crews’ ability to cope with off-nominal (i.e., high-workload) flight conditions. Similarly, Cooke et al. (2003) found that high position-specific task knowledge coupled with low intrateam similarity was associated with superior task performance in a U.S. Navy helicopter mission simulation. However, in addition to unique knowledge, team members undoubtedly need to have some common knowledge base. Exactly what information this base needs to include is open to debate. Cannon-Bowers and Salas (2001) enumerated the knowledge aspects that have traditionally dominated the discussion of shared knowledge. Proposals by others (e.g., Mohammed & Dumville, 2001; Sperling & Pritchett, 2005) have emphasized interpositional knowledge (i.e., team members know what information others have) as well as certain domain knowledge (e.g., technical vocabulary, standard procedures). Moreover, discussions of common knowledge need to address the question, Which team roles need to share what information? (Mohammed & Dumville, 2001, p. 96). That is, whereas some knowledge needs to be held in common by all team members, other knowledge needs only to be shared by a subset of team members. A related question concerns the level of understanding required by shared knowledge (Cannon-Bowers & Salas, 2001). Team members may share knowledge, yet what they do know may differ in specificity. For instance, in teams with delineated roles, team members will have a detailed understanding of their own role; their knowledge of the responsibilities of others, in contrast, may be less specific. Both the converging and the complementary conceptualization of shared knowledge emphasize its representational nature: Knowledge refers to mental representations that team members have. A third view, though not denying the role of mental representations, conceives of shared knowledge as a process: Team members contribute information, their situation assessment, and their expertise, and in so doing they jointly create—“coconstruct” (Resnick, Salmon, Zeith, Wathen, & Holowchak, 1993)—coherent and accurate situation representations and action plans. The communication between team members plays a pivotal role in this process (Orasanu & Fischer, 1992; Salas et al., 2005, 2008). 
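The converging and complementary views of shared knowledge described above are often operationalized as overlap statistics computed over elicited knowledge. The following sketch is a minimal illustration, assuming each member’s knowledge has already been reduced to a set of concept labels; it is not the specific similarity measure used in the studies cited, which typically rely on richer elicitation methods such as relatedness ratings or card sorts.

```python
from itertools import combinations
from typing import Dict, Set, Tuple

def pairwise_overlap(knowledge: Dict[str, Set[str]]) -> Dict[Tuple[str, str], float]:
    """Jaccard overlap of elicited knowledge for every pair of team members."""
    scores = {}
    for a, b in combinations(sorted(knowledge), 2):
        union = knowledge[a] | knowledge[b]
        scores[(a, b)] = len(knowledge[a] & knowledge[b]) / len(union) if union else 0.0
    return scores

def unique_share(knowledge: Dict[str, Set[str]], member: str) -> float:
    """Proportion of one member's knowledge held by no other member (complementarity)."""
    others: Set[str] = set()
    for name, items in knowledge.items():
        if name != member:
            others |= items
    own = knowledge[member]
    return len(own - others) / len(own) if own else 0.0

# Hypothetical two-person flight crew, with knowledge reduced to concept labels.
crew = {
    "captain": {"weather", "fuel planning", "company procedures", "autoflight modes"},
    "first officer": {"weather", "autoflight modes", "performance charts"},
}
print(pairwise_overlap(crew))          # convergence: overlap between positions
print(unique_share(crew, "captain"))   # complementarity: knowledge only the captain holds
```

In this framing, high overlap on task and teamwork concepts corresponds roughly to knowledge convergence, whereas a high unique share of position-specific concepts corresponds roughly to the complementary knowledge distribution that Cooke et al. (2003) and Sperling and Pritchett (2005) found beneficial.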
Team Communication
Unquestionably, communication is essential to team decision making. Whether team members are inches apart or hundreds of miles distant, they need to share task-critical information and to ensure mutual understanding. They need to call attention to problems in the task environment or concerning their teamwork, to direct the actions of others, and to bring their expertise to their team’s decision making. Although the importance of team communication is self-evident, its successful execution is anything but guaranteed. Failures and breakdowns in team communication have been implicated in many accidents and incidents—for instance, in commercial aviation (Cushing, 1994; Fischer & Orasanu, 2000; Orasanu, Fischer, & Davison, 1997), offshore oil drilling operations (Flin, O’Connor, & Crichton, 2008), and health care (Leonard, Graham, & Bonacum, 2004; Reader, Flin, & Cuthbertson, 2008). Effective and efficient team communication is especially critical in task environments in which conditions are dynamically changing and decisions have to be made under considerable time pressure and in the face of ambiguity concerning the nature of the problem. To identify communication features that support team performance under these task conditions, researchers have applied a multitude of methodologies, ranging from observations of expert teams in their operational environments and analyses of team interactions during simulations to scenario-based laboratory studies and surveys. These research efforts have yielded several characteristics of effective team communication.
Explicit communication. A recurrent finding has been that successful team performance is associated with explicit communication. Explicitness is used here to refer to both the amount of task-related communication and the cognitive transparency of team members’ contributions. With respect to the former, research showed that well-performing teams shared more information about the problem they faced and talked more about their response to it than did poorly performing teams (Krifka, Martens, & Schwarz, 2004; Mazzocco et al., 2009; Sexton & Helmreich, 2000); in particular, high-performing teams were more likely to articulate new plans and changes in task allocation and expectations about future events (Gillan, 2003; Orasanu & Fischer, 1992). Members of successful teams were also more explicit about their reasoning and their intentions and tended to contextualize information (Johannesen, 2008; Tschan et al., 2009). Tschan and her collaborators (2009) found that diagnostic accuracy in physician teams was more likely when members not only stated task-relevant information but also made clear how the data were causally related and supported a particular diagnosis and treatment. Likewise, research by Fischer and Orasanu (2000) and Fischer, Rinehart, and Orasanu (2001) indicates that pilot error is best mitigated when the other crew member not only corrects the mistake but also mentions a reason for his or her intervention. It is important to note that pilots did not consider these complex utterances (i.e., correction plus reason) as more polite; rather they judged them as more informative and consequently as more effective in changing their behavior. In detailing their reasoning and providing a context for their assessments, team members lay out their situation model, ensure that others share their understanding, and thus reduce the likelihood of misunderstandings as fewer inferences are required by their teammates.
At the same time, they invite others
to verify the accuracy of their judgments and associated decisions and, on a more subtle level, create a team climate characterized by open communications.
Tight discourse structure. Efficiency in communication is enhanced when team members provide immediate feedback on each other’s contributions (e.g., observations are acknowledged, and questions are answered). Closed-loop communication (Kanki, Lozito, & Foushee, 1989) is believed to reduce team members’ cognitive load as their conversations adhere to normative and thus predictable patterns, and thematically related contributions follow one another in a coherent fashion. However, team members’ feedback may vary in the extent to which it supports mutual understanding (Clark & Schaefer, 1989). Simple acknowledgments (e.g., “uh huh” or “OK”) or generic answers (e.g., “yes” or “will do”) merely indicate that the previous utterance has been heard but do not reveal what has been understood. Verbatim read-backs of critical information, which are mandatory in the interactions between pilots and air traffic control, are one strategy to mitigate the effects of possible misunderstandings. Adherence to communication standards that, in turn, foster accurate situation models and shared understanding has been found to be characteristic of high-performing flight deck crews (Dietrich, 2004; Kanki et al., 1989), whereas violations have been associated with aviation incidents and accidents (Cardosi, Falzarano, & Han, 1998; Cushing, 1994; Orasanu et al., 1997). A second strategy to ensure mutual understanding was reported by Johannesen (2008), who observed neurosurgical teams, and Fischer, McDonnell, and Orasanu (2007), who examined team interactions in a synthetic task environment simulating Antarctic search and rescue missions. In both task environments members of successful teams elaborated on each other’s contributions, thus providing strong evidence of their understanding and simultaneously advancing their decision-making efforts.
Team-oriented error mitigation. If another team member makes a mistake in the execution of an action, or has an incorrect understanding of the problem situation or the team’s response, calling this to the team member’s attention may involve a direct challenge to his or her status, judgment, or skill. Effective error mitigation was found to be explicit about the nature of the problem and desired actions while emphasizing the team’s shared responsibility for task success. Fischer and Orasanu (2000) and Fischer et al. (2001) presented commercial pilots with aviation scenarios, each describing an error by the pilot flying the aircraft. Participants were asked to rate the effectiveness and directness of communication strategies that pilots (in a previous study) had used to prevent or mitigate the error. Captains and first officers both favored crew-oriented communications, rather than interventions based on status (i.e., captain commands, first officer hints). Top communication strategies were crew obligation statements (e.g., “We need to deviate right about now”), crew suggestions (e.g., “Let’s go around the weather”), preference statements (e.g., “I think it would be wise to turn left”), and strong hints (e.g., “That return at 25 miles looks mean”). Common to these strategies is that they address a problem without disrupting the team climate. Crew obligation statements seek compliance by appeal to a shared obligation; preference statements do so by referring to the solidarity between speaker and hearer.
Strong hints are similar to crew obligation statements insofar as they, too, seek compliance by
appeal to an external necessity. The hints included in the study were problem or goal statements that strongly implied what action should be taken. That is, once addressees acknowledge the problem, they are also committed to the appropriate action.
Open and inclusive communication. Certain interaction patterns have also been associated with successful team performance. Fischer et al. (2007) noted that open and inclusive interactions between team members helped their joint performance. Team members contributed equally to the team discussion and talked freely with each other. This interactive structure, however, may be beneficial only in a decentralized task environment such as the one in the Fischer et al. (2007) study. For instance, Kiekel, Cooke, Foltz, and Shope (2001) and Kiekel, Cooke, Foltz, Gorman, and Martin (2002) observed in the context of an unmanned aerial vehicle navigation task that communication in well-performing teams centered around one team member who, as the “hub” of the team, had an integrating function (i.e., the “hub” received information from team members and disseminated information to the team). The potential weakness of centralized teams is that information may flow only down the organizational ladder from the “hub” or team leader to subordinate members. In teams involving team members of different status, members of lower status may be reluctant to speak up and to contribute to their team’s discussion (Blatt, Christianson, Sutcliffe, & Rosenthal, 2006; Pronovost, Wu, Dorman, & Morlock, 2002; Pronovost, Wu, & Sexton, 2004), or their contributions may go unacknowledged or may be rejected (Linde, 1988; Orasanu & Fischer, 1992). The aviation industry’s answer to this problem has been to focus on crew resource management in its pilot training and to encourage consultative leadership in its captains and assertiveness in its first officers (Helmreich & Foushee, 1993). In the medical domain, Uhlig et al. (2001) introduced a rather different solution, with promising success: Morning rounds by cardiac care teams followed a communication protocol that emphasized open and respectful communication while minimizing role-based hierarchies.
THE IMPACT OF TECHNOLOGY ON DECISION MAKING
A balance is needed between the undisputed capabilities of the computer and the unique problem-solving and pattern recognition skills of the human operator. —Gilson, Deaton, and Mouloua (1996, p. 16)
Whether people are making decisions by themselves or in teams, the extent to which technological aids are available is likely to impact decision processes. The notion of concordance between task properties and decision making is a critical one for human factors researchers and practitioners, and technology changes the task properties in significant ways.
The Hybrid Ecology
Sophisticated technological domains such as aviation, medicine, nuclear power, air traffic
control, the military, or manufacturing and process control create decision environments that are very different from those of lower technology domains. Automated cockpits, for example, contain a complex array of instruments and displays in which information and data are layered and vary according to display mode and flight configuration. Aircraft flight management systems are designed not only to keep the aircraft on course but also to assume increasing control of “cognitive” flight tasks, such as calculating fuel-efficient routes, navigating, and detecting and diagnosing system malfunctions and abnormalities. Figure 6.6 shows advances in automation with each new generation of aircraft. Note that most of these advances are geared toward decision making—making information available, analyzing data, providing alerts, or improving or integrating system and navigation displays. The capability of electronic systems to reduce the ambiguity inherent in naturalistic cues (they process probabilistic cues from the outside environment and display them as highly reliable and accurate information) has transformed many decision domains into complex, hybrid ecologies. These ecologies are deterministic in that much of the uncertainty has been engineered out through technical reliability, but they are also probabilistic in that conditions of the physical and social world (including ill-structured problems, ambiguous cues, time pressure, and rapid changes) interact with and complement conditions in the electronic world (Mosier, 2008, 2009). In a hybrid ecology, cues originating in the external, physical environment must be integrated with information from internal, electronic deterministic systems. Table 6.1 illustrates the characteristics of each side of the hybrid ecology. Input from both sides must be incorporated into the decision process. On the naturalistic side of the hybrid ecology, information and cues are absorbed primarily via sensory and kinesthetic mechanisms: the look of the weather, the smell of smoke, the feel of the physical environment. Cues are ambiguous and probabilistic. Accuracy in the front end, or correspondence between situation assessment and objective reality, is central as this enables the selection of a high-quality option. In the naturalistic world, domain expertise affords shortcuts, as the expert is able to use his or her repertoire of previous situations for pattern-matching processes. On the electronic side of the hybrid ecology, information is absorbed primarily via cognitive processing: the parameters of the system; the status of the patient in terms of temperature, heart rate, and so forth; the steps to take to correct anomalies; the precise position of the vehicle. Data and information are specific and highly reliable. Coherence in the front end, or consistent and comprehensive integration of all relevant information for situation assessment, is central as this enables accurate diagnoses and decisions. In the electronic world, domain expertise affords only limited shortcuts, as characteristics inherent in automated systems such as layered data, multiple modes, and opaque functioning demand analytical processing from experts as well as novices.
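The two evaluative standards that run through this discussion, correspondence with the state of the world and internal coherence, can be illustrated with a toy numerical sketch. The functions and numbers below are invented for illustration; they are not measures drawn from the studies cited in this chapter.

```python
def correspondence_error(judgments, outcomes):
    """Correspondence criterion: empirical accuracy, here the mean absolute error
    between judged and actual values of some environmental quantity."""
    return sum(abs(j - o) for j, o in zip(judgments, outcomes)) / len(judgments)

def coherence_violations(p_a, p_b, p_a_and_b):
    """Coherence criterion: internal consistency with the probability axioms,
    here a check of the conjunction rule and the [0, 1] range."""
    problems = []
    if p_a_and_b > min(p_a, p_b):
        problems.append("conjunction violation: P(A and B) exceeds P(A) or P(B)")
    if not all(0.0 <= p <= 1.0 for p in (p_a, p_b, p_a_and_b)):
        problems.append("probability outside the [0, 1] range")
    return problems

# A judge can be coherent yet inaccurate, or accurate yet incoherent:
print(correspondence_error([0.7, 0.2, 0.9], [0.1, 0.8, 0.3]))  # large error: poor correspondence
print(coherence_violations(0.3, 0.4, 0.35))                    # conjunction violation: poor coherence
```

Separating the two checks mirrors the point made in the text and in Table 6.1: the naturalistic side of the hybrid ecology is judged mainly by correspondence with an ambiguous, probabilistic world, whereas the electronic side demands coherent integration of highly reliable data.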
Figure 6.6. Evolution of technology in civil air transport. See also Fadden (1990) and Billings (1996). FMA = flight mode annunciator. The figure traces information, control, and management automation across four generations of transport aircraft, from first-generation types such as the DeHavilland Comet, Boeing 707, and Douglas DC-8 and DC-9 through fourth-generation fly-by-wire aircraft such as the Airbus A-319/320/321, A-330, and A-340 and the Boeing 777 and 787, and it lists planned next-generation enhancements (e.g., satellite navigation, digital data link communication, synthetic vision systems, and automated collision and wind shear avoidance maneuvers).
Table 6.1. The Hybrid Ecology
Naturalistic World: sensory and kinesthetic; correspondence strategies and goals; ambiguity; probabilistic cues; expertise affords intuitive “shortcuts” and pattern-matching processes.
Electronic World: cognitive; coherence strategies and goals; reliability; deterministic data and information; analytical processes required at all levels of expertise.
Automated Decision Aids
As hybrid ecologies have become more complex and data intensive, automated aids and decision support tools have become commonplace and critical to performance. Overall, these aids have been beneficial to the decision process. The advantages they offer
in terms of increased efficiency and data storage and manipulation, for example, are self-evident—computers can assimilate more information and process it faster than humans. Some shortcomings of advanced technology, however, must also be taken into account. Researchers in medicine have pointed out mismatches between what practitioners need and what developers of decision support systems provide (e.g., Wears & Berg, 2005; see Morrow, North, & Wickens, 2006, in Volume 1 of Reviews of Human Factors and Ergonomics for a more detailed account of automated decision support in medicine). Many automated systems are opaque in their operations and do not enable operators to track their functioning. They cannot take into account context-specific information and, thus, may generate faulty recommendations (Gawande & Bates, 2000; Mosier, 2002). Several researchers have documented problems in the use of advanced automated systems, including mode misunderstandings and mode errors, failures to understand automation behavior, confusion or lack of awareness concerning what automated systems are doing and why, and difficulty tracing the functioning or reasoning processes of automated agents (e.g., Billings, 1996; Sarter & Woods, 1994). Automation surprises, or situations in which operators are surprised by control actions taken by automated systems (Sarter, Woods, & Billings, 1997), may be the result of noncoherent situation assessment processes—that is, when operators have incomplete or inconsistent information about the situation, or misinterpret or misassess data on system states and functioning (Woods & Sarter, 2000). Mode error, or confusion about the active system mode, is another type of front-end coherence error that has resulted in several incidents and accidents in aviation as well as in medicine (e.g., Sarter & Woods, 1994; Sheridan & Thompson, 1994; Woods & Sarter, 2000).
Automation as heuristic: Automation bias. One potential problem with automated aids in high-tech environments is that they provide decision makers with a new heuristic or shortcut for decision making and task performance, along with associated biases. Automated decision aids, for example, may disrupt patterns of coherent cue utilization. Most automated decision aids and warning systems are by design very “bright lights” and may engulf or overwhelm other diagnostic information. Research has demonstrated that decision makers may focus on salient cues and ignore other critical but less obvious information, especially under time pressure (Wickens & Flach, 1988). Electronic stimuli are readily available, are widely believed to be accurate, and are a highly salient source of information. The computational, system observational, and diagnostic capabilities of automated aids are advertised as being superior to those of their human
operators. Additionally, the implementation of automated aids may be accompanied by diminished access to other traditionally utilized cues. In combination, these factors contribute to the perception that automated aids are more important, more diagnostic, and more reliable than conventional information sources. The presence of one “most powerful cue” may lead to a confidently generated decision, particularly if time is short (Gigerenzer, Hoffrage, & Kleinbölting, 1991). Automated systems are, in fact, designed specifically to decrease human workload by performing many cognitive tasks, including information synthesis, system monitoring, diagnosis, planning, and prediction, in addition to, for instance, controlling the physical placement of the aircraft. However, the presence of automated cues also diminishes the likelihood that decision makers will either make the cognitive effort to seek other diagnostic information or process all available information in cognitively complex ways. In addition, automated cues increase the chances that decision makers will cut off situation assessment or decision processes prematurely when prompted to take a course of action by a computer or automated aid. Layton, Smith, and McCoy (1994), for example, examined pilot use of a graphical flight planning tool and found that computer generation of a suggestion or recommendation early in the course of problem evaluation significantly impacted decision processes and biased pilots toward the computer’s suggestion, even when the computer’s brittleness (e.g., in terms of an inadequate model of the “world”) resulted in a poor recommendation with potential adverse consequences. Moreover, continuous use of an automated system may erode expertise, permanently altering the decision maker’s judgment and diagnosis processes (Patel, Kaufman, & Arocha, 2002). Because automated aids and decision support tools, when used correctly, are extremely consistent and provide cues that are highly correlated with other information, using them heuristically will generally be effective. Diagnoses are made correctly, power plants function efficiently, and airplanes fly safely using automated aids. However, overreliance on or misuse of these tools will result in errors, as would indiscriminate use of any other decision heuristic, and may in some cases have the paradoxical effect of increasing errors rather than eliminating them (Parasuraman & Riley, 1997; Patel et al., 2002; Vicente, 2003). The use of automated aids as a heuristic replacement for vigilant information seeking and processing has been labeled automation bias (Mosier & Skitka, 1996). Two types of decision errors, omission and commission errors, are negative consequences of this heuristic. Automation omission errors result when decision makers do not take appropriate action because they are not informed of an imminent problem or situation by automated aids. For example, a China Airlines B747-SP, flying at 41,000 feet, lost power in its Number 4 engine. The autopilot, which was set for pitch guidance and altitude hold, attempted to correct for the loss by holding the left wing down and did not warn the crew of approaching loss of control of the airplane. Although the engine gauges displayed the loss of power, the crew did not realize that there was a problem with the engine and took no action. When the captain disengaged the autopilot, the airplane rolled to the right, yawed, and then entered a steep descent in clouds. 
Extensive damage occurred during descent and recovery (NTSB Report AAR-86-03, in Billings, 1996).
Automation commission errors—that is, errors made when decision makers inappropriately follow automated information or directives (e.g., when other information in the environment or the situation contradicts or is inconsistent with the automation)—are also potential by-products of automated systems (Morrow et al., 2006; Mosier & Skitka, 1996; Patel et al., 2002). Experimental evidence of automation-induced commission errors was provided by a full-mission flight simulation study (Mosier, Palmer, & Degani, 1992). During takeoff, crews received contradictory fire indications. An autosensing electronic checklist suggested that the crew shut down the Number 1 engine, which was supposedly on fire. Traditional engine parameters indicated that the Number 1 engine was recovering and that the Number 2 engine was actually more severely damaged. Seventy-five percent of the crews in the autosensing condition incorrectly shut down the Number 1 engine, whereas only 25% of crews with the traditional paper checklist did. Analysis of pilots’ communications indicated that crews in the automated condition tended to discuss less information before deciding whether to shut down the engine, suggesting that automated cues can curtail information search. Subsequent research on automation issues (see Sheridan & Parasuraman, 2006, and Pritchett, 2009, for chapters on human-automation interaction) has documented the fact that automated systems and decision aids alter the way operators make decisions—and that even domain experts are susceptible to potential negative effects such as automation complacency, the failure to monitor highly reliable automation, and automation bias (Mosier et al., 1998, 2001). To avoid errors that may result from using heuristics in decision making, human factors researchers propose debiasing or training interventions (e.g., Cook & Smallman, 2008; Mosier & Skitka, 1996; Mosier et al., 1998), mathematical decision aids (e.g., McDaniel & Rankin, 1991), or other decision support interfaces (e.g., Perrin et al., 2001).
Automation and teams. In automated decision environments such as the aircraft cockpit or the modern hospital arena, more than one person is typically responsible for the performance of tasks and monitoring systems. Charging several people with monitoring system events should increase the chances that their decision processes will be coherent and rational—that they will detect an anomaly even if it were not detected by an automated decision aid or will recognize an inconsistent or inappropriate automated recommendation. Many team procedures, in fact, are designed on the premise that team members will cross-check system indicators as well as each other’s actions. Are crews and teams better than individuals at avoiding automation-related decision errors? Not necessarily. As reported in Mosier et al. (2001) and Skitka, Mosier, Burdick, and Rosenblatt (2000), the presence of a second crew member was not sufficient to eradicate automation bias and associated errors. Error rates were not significantly different between one-person and two-person crews. Different forms of automation were found to have different effects on teamwork in a more recent study (Wright & Kaber, 2005). Automating front-end tasks—information acquisition and analysis—improved team coordination. Team members were more likely to anticipate each other’s information needs and provide information before it was requested; they also tended to rate their teamwork more
positively. Automation of the back end, decision selection, led to better team effectiveness, but only under low levels of task difficulty and at the cost of higher workload.
Responsibility for Decision Making
When workers’ actions are mediated by a powerful computer…the lines of responsibility are not so clear, and the workers may, in effect, abandon their responsibility for the task performed…, believing instead that it is in the “hands” of the computer. (Sheridan & Parasuraman, 2006, p. 123)
The proliferation of automated systems and decision aids poses challenges concerning the issue of decision responsibility. Historically, automated and expert systems have functioned as consultants during decision making. Through tradition or regulation, the user has maintained ultimate decision-making authority and borne responsibility for the outcomes of decisions. According to federal aviation regulations, for example, the pilot in command of an aircraft has direct authority and responsibility for the operation of the aircraft. In medicine, the physician is ultimately responsible for the accuracy of a particular diagnosis or treatment. However, as automated decision support systems become more capable of autonomous functioning, and are given more power in the guidance of human activity, the question of ultimate responsibility becomes more complex and muddled. Human operators may endow automated aids with personae of their own, capable of independent perception and willful action (Sarter & Woods, 1994), and may see reliance on them as the most efficient way to get decisions made, particularly in times of high workload. In fact, because one explicit objective of automating systems is to reduce human error, automating a decision-making function may communicate to the operator that the automation should have primary responsibility for decisions. Although responsibility for actions still rests with the user (Mockler, 1989), the legal status of the user of decision-aiding technology is not always clear (Will, 1991), especially in instances in which the user ignores or overrides the advice of the system. Lawsuits may result in the future if system aids are consulted and fail to perform correctly, give inaccurate or misleading indications, or are incorrectly used; conversely, operators may be liable for the nonuse of an available system or for going against a system’s decision recommendations (Miller, Schaffner, & Meisel, 1985; Zeide & Liebowitz, 1987). In fact, for “certain sensitive, delicate or hazardous tasks (such as aircraft requiring fast and accurate response beyond human capability), it may be reasonable to rely upon an expert system” (Gemignani, 1984, p. 1045).
The question of responsibility for decision making is certain to take on more importance and relevance in the future. As automated decision support systems become ubiquitous and more autonomous, deskilled experts may no longer have the degree of expertise to judge when the system is performing correctly or to carry out the necessary degree of supervision (Morgan, 1992; Morrow et al., 2006; Sheridan, 2002). “There is also a problem of over-expectation that obscures responsibility...there’s a kind of mystique—an objectification—that if the expert system says so, it must be right” (Winograd, 1989, p. 62). Five out of eight of the crews in the bird-strike simulation described earlier (Mosier et al., 1992) did follow the directive of the electronic systems. At least one of these captains, articulating the thought processes preceding his shutdown of the Number 1 engine despite contradictory indications on the engine parameter gauges, commented that the electronic checklist said they should shut it down.
Moreover, the issue of appropriate levels of automation to facilitate judgment and decision making continues to prompt research. Should decision aids process cues and present an action recommendation (command level), or should they present status information only? The former is faster but reduces operator SA and involvement in diagnosis; the latter enhances situation comprehension but increases time to decision and action (e.g., A. Andre & Wickens, 1992; Sarter & Schroeder, 2001). Several researchers (e.g., Billings, 1996; Sheridan & Parasuraman, 2006; Sheridan & Thompson, 1994) have proposed levels of decision assistance, ranging from no assistance; to help with diagnosis; to action suggestion; to action execution with human approval; to completely automated diagnosis, decision, and action implementation. Each successive level presents unique challenges to the human operator and makes the issue of responsibility for decisions less salient to the operator.
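To make the ladder of decision assistance concrete, the short sketch below orders aid behaviors from status-only support to full automation. It is offered only as an illustration of the distinction drawn above, under our own assumptions; the level names, the EngineStatus cues, and the aid messages are hypothetical and are not taken from the cited frameworks.

```python
from dataclasses import dataclass
from enum import IntEnum


class AssistanceLevel(IntEnum):
    """Illustrative ordering of decision-assistance levels, loosely following
    the range described in the text (no assistance through full automation)."""
    NONE = 0                    # operator diagnoses and acts unaided
    DIAGNOSIS_SUPPORT = 1       # aid integrates cues into a status picture
    ACTION_SUGGESTION = 2       # aid recommends a course of action
    EXECUTE_WITH_APPROVAL = 3   # aid acts once the operator consents
    FULL_AUTOMATION = 4         # aid diagnoses, decides, and acts on its own


@dataclass
class EngineStatus:
    """Hypothetical cue set for the example; not from the simulation studies cited."""
    engine: int
    fire_warning: bool
    parameters_recovering: bool


def aid_output(level: AssistanceLevel, status: EngineStatus) -> str:
    """Return what a hypothetical aid would present at a given level.
    Lower levels keep the operator in the diagnosis loop (front end);
    higher levels move toward prescribing or executing action (back end)."""
    if level == AssistanceLevel.NONE:
        return "Raw engine indications only."
    if level == AssistanceLevel.DIAGNOSIS_SUPPORT:
        return (f"Status: engine {status.engine} fire warning={status.fire_warning}, "
                f"parameters recovering={status.parameters_recovering}.")
    if level == AssistanceLevel.ACTION_SUGGESTION:
        if status.fire_warning and not status.parameters_recovering:
            return f"Recommendation: shut down engine {status.engine}."
        return "No action recommended."
    if level == AssistanceLevel.EXECUTE_WITH_APPROVAL:
        return f"Ready to shut down engine {status.engine}; confirm?"
    return f"Engine {status.engine} shut down automatically."


if __name__ == "__main__":
    cue = EngineStatus(engine=1, fire_warning=True, parameters_recovering=True)
    for level in AssistanceLevel:
        print(level.name, "->", aid_output(level, cue))
```

Printing the output at each level makes it easy to see where involvement in diagnosis, and with it the salience of responsibility, shifts from the operator to the aid.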
ORGANIZATIONAL INFLUENCES ON DECISION MAKING
The “fast buck” program initiated by Braniff International in 1968 required the airline to pay each passenger a dollar if a flight did not arrive at its destination within 15 minutes of schedule. This program may have contributed to the crash of a Braniff Electra Turboprop in May of that year. The flight had been delayed on departure and was pushing the 15-minute limit as it neared the destination airport. The crew attempted to penetrate a line of thunderstorms rather than navigate around them, and lost control of the aircraft in turbulence.
—Nance, 1984, cited in Hackman & Helmreich, 1987, pp. 311–312
The organization within which the decision maker or team resides exerts a degree of influence varying from subtle to explicit. Broadly, the organization sets the cultural norms in terms of decision priorities—safety, profit, quality, excellence, expedience, and so forth. These norms supposedly set the frame for decision making. An organization with a safety culture, for instance, should promote judgments and assessments that weight safety aspects heavily and decisions that are conservative and entail low risk. The climate that actually prevails in an organization, however, may or may not be consistent with stated norms and values. The climate motivates decision makers’ attitudes and behaviors and is a product of their perceptions of, beliefs about, and emotional reactions to cultural elements as well as to what is overlooked and what is rewarded (Helmreich & Merritt, 1998; Mearns & Flin, 2001).
This is at the heart of the professional pilot’s eternal conflict. Into one ear the airlines lecture, “Never break regulations. Never take a chance. Never ignore written procedures. Never compromise safety.” Yet in the other they whisper, “Don’t cost us time. Don’t waste our money. Get your passengers to their destination—don’t find reasons why you can’t.” (Wilkinson, 1994, in Dekker, 2006, p. 197)
[A]n airline’s commitment to safety can be easily determined by answering this simple question. Is there a gap between its policies and its practices? Colgan repeatedly states that safety is a top concern and yet, here’s the gap. Colgan claimed to have a policy prohibiting pilots from overnighting in crew lounges, yet it was a well known fact that commuting pilots did just that. Colgan claimed use of personal electronic devices was prohibited, and yet, the 24-year-old first officer on the flight not only felt free to send text messages, but when she did so, the captain failed to say anything to her about it. (Negroni, 2010)
Decision makers’ perceptions of consistency between espoused culture and actual climate depend on whether the cultural norms are supported by aspects such as organizational processes (operations, procedures, oversight); resource management (allocation of human, monetary, and equipment resources); supervision; training; assignment of work; and attention to existing rules, regulations, and standard operating procedures (see Shappell et al., 2007, for descriptions of organizational and supervision influences on behavior). Through its practices and priorities, the organization contributes latent circumstances that will influence the way individuals at all levels will form judgments and make choices. For example, if the organizational culture espouses safety but operators perceive that management is likely to compromise safety for profit, to reward expediency over safety, or even to punish safer but more costly decisions, operators are likely to assess situations in terms of their expense to the company and to select risky options (Mearns & Flin, 2001; Mearns, Flin, Gordon, & Fleming, 2001). An organization’s advertising campaign, slogan, or promotional programs can also have a direct impact on decision makers, as illustrated by the Braniff vignette at the beginning of this section.
Decision makers often deal with conflicting goals dictated by the organization, the situation, and their own judgment. In a society that has become increasingly litigious, some organizations prefer to “cut loose” any decision makers whose choices turn out badly rather than dealing with any latent organizational conditions that may have fostered the choices. Failure on the part of the organization to support and back up operator decisions can have a chillingly negative impact on judgment and decision processes as well as the decision makers themselves. Potential revelations about systemic vulnerabilities were deflected by pinning failure on one individual in the case of November Oscar 1.
November Oscar was an older Boeing 747 “Jumbojet.” It had suffered earlier trouble with its autopilot, but on this morning everything else conspired against the pilots too. There had been more headwind than forecast, the weather at the destination was very bad, demanding an approach for which the co-pilot was not qualified but granted a waiver, while he and the flight engineer were actually afflicted by gastrointestinal infection. Air traffic control turned the big aircraft onto a tight final approach, which never gave the old autopilot enough time to settle down on the right path. The aircraft narrowly missed a building near the airport, which was shrouded in thick fog. On the next approach it landed without incident. November Oscar’s captain was taken to court to stand trial on criminal charges [for] “endangering his passengers” (something pilots do every time they fly, one fellow pilot quipped). The case centered on the crew’s “bad” decisions. Why hadn’t they diverted to pick up more fuel? Why hadn’t they thrown away that approach earlier? Why hadn’t they gone to another arrival airport? These questions trivialized or hid the organizational and operational dilemmas that confront crews all the time. The focus on customer service and image; the waiving of qualifications for approaches, putting more work on qualified crewmembers; heavy traffic around the arrival airport and subsequent tight turns; trade-offs between diversions in other countries or continuing with enough but just enough fuel. And so forth. The vilified captain was demoted to co-pilot status and ordered to pay a fine. He later committed suicide. The airline, however, had saved its public image by focusing on a single individual who, the court showed, had behaved erratically and unreliably. (Wilkinson, 1994, in Dekker, 2006, p. 62)
One goal of human factors research and practice is to clarify the role of the organization and its influence on decision-making behavior. Even decisions to adopt a cheaper but less transparent system display, for example, will likely have ramifications in terms of operator errors in decision making (e.g., Dekker, 2006; O’Hare, 2003; Reason, 1997). Many accident investigators now embrace a systemic approach when determining causes of accidents involving decision errors and include latent organizational influences such as unsafe or poor supervision, inadequate training or resources, and disregard of rules or regulations (Dekker, 2006; Reason, 1990, 1997; Shappell et al., 2007).
APPLICATIONS AND LESSONS LEARNED
Choosing Methods for Decision-Making Research
The most important implication of the issues and models described in this chapter is that decision making in human factors must be studied in context. Laboratory studies, the traditional approach in early work, can give insights on specific microaspects of judgment and decision making, but they alone cannot provide viable predictions of how operators will perform in dynamic environments. Rather, as exemplified by the research cited in this chapter, a range of methodological tools for investigating decision making should be employed with the goal of converging results at varied levels of context fidelity. These include lab studies in which factors and variables replicate aspects of the environment, archival analyses of accidents and incidents, simulations of varying fidelity, synthetic environments, and field studies. At all levels, it is important to match the features of the tasks as closely as possible to the formal properties of the environment (Hammond, 1993). Studies of expert decision making, for example, should entail conditions and scenarios that are representative of situations professionals in a specific domain will encounter. This is not a new idea—the concept of representative design introduced by Brunswik (1955, 1956) specifies “a formal representation of all those conditions toward which a generalization is intended; representative design thus refers to the logical requirement of representing in the experiment, or study, the conditions toward which the results are intended to generalize” (Hammond, 1993, p. 208). It is, however, an idea that is particularly important in human factors research, as such work is not successful unless it can be effectively translated from research to practice.
In addition to representative situations, human factors research must include participants who are representative of the target population. The responses and behaviors of college sophomores, the staple of laboratory research studies, may not predict those of older adults using decision aids (Ezer et al., 2008) or those of professionals making decisions in their domain of expertise. In order to predict how variables such as decision aids, changes in context, new designs and displays, training programs, and so forth will impact decision processes, hypotheses must be tested within the target population. One of the most prevalent qualitative methods for examining decision making, particularly with respect to expert/novice differences, is cognitive task analysis (Hoffman, Crandall, & Shadbolt, 1998; Hoffman & Militello, 2008) or applied cognitive task analysis (Militello & Hutton, 1998). Methodologies include the critical decision method (e.g., Crandall & Getchell-Reiter, 1993; Klein, Calderwood, & MacGregor, 1989), interviews, and concept mapping. A comprehensive discussion of task analysis methods for cognitive work appears in Volume 3 of Reviews of Human Factors and Ergonomics (Bisantz & Roth, 2008).
Designing to Aid Decision Making
A body of literature exists in the area of engineering design to aid decision making, including several chapters in Reviews of Human Factors and Ergonomics volumes (Laughery & Wogalter, 2006; Rogers et al., 2006; Sheridan & Parasuraman, 2006; Smith, Bennett, & Stone, 2006). Technological innovations to enhance decision making must be based on correct assumptions about human judgment processes and about how people think (Maule, 2009), and they should provide support for both coherence and correspondence as well as for requirements at the front and back ends of decision making. The layers of context illustrated in Figure 6.1 must all be considered when studying or striving to improve decision making. At least two potential avenues for facilitating cognitive processes exist: through the careful design of information presentation, and through the facilitation of metacognition.
Presentation of Data and Information
Attention guidance. Implicit in the use of all relevant cues is an understanding of how the cues combine to create present awareness and future prediction of a situation. Attention distribution based on tactical considerations has been found to be systematically related to decision performance (Endsley & Smith, 1996). Moreover, attention to relevant cues is a predictor of both coherent diagnosis and accurate decisions. In the medical field, a large percentage of medication errors are linked to insufficient information (Bates et al., 2001). Consistent with this assessment is the finding that the most effective decision support systems guide attention via clinical reminders (Garg et al., 2005). Studies in such diverse fields as aviation, medicine, and auditing have demonstrated that experienced decision makers are likely to spend more time seeking out and attending to these cues than are those who are less experienced (e.g., Cesna & Mosier, 2005; Khoo & Mosier, 2008; Salterio, 1996; Schriver, Morrow, Wickens, & Talleur, 2008). Experts also tend to “chunk” cues and to create patterns of relevant or structurally related cues (Endsley & Smith, 1996; Simon & Chase, 1973). Because attention is the critical first step in the front end of decision making, researchers and designers share the goals of supporting diagnosis and decisions through attention guidance (e.g., Horrey et al., 2006) or the identification of and training for efficient attention patterns (e.g., Schriver et al., 2008) to avoid attentional tunneling (the allocation of attention to a particular channel of information at the cost of other sources of information; e.g., Wickens & Alexander, 2009), “cognitive lockup” (the tendency to stay focused on one task even if a second task has higher priority; e.g., Kerstholt, Passenier, Houttuin, & Schuffel, 1996), or attentional failures (e.g., Caird et al., 2005).
Transparency. Many researchers have pointed out the need for more transparency and less hidden complexity in decision-aiding systems (e.g., Sarter et al., 1997; Woods, 1996). This is especially important when the logic of humans does not match the logic of the machine, a growing danger as computers and automation become more complex (Sheridan & Parasuraman, 2006). Differences in users, such as age or expertise levels, must be taken into account, as novice users as well as older adults may not be as adept as experts at searching for information and selecting the information most relevant to task goals (Rogers & Fisk, 2001; Rogers et al., 2006). Beyond this, systems could also provide roadmaps for locating relevant information, tracking system recommendations, and pointing out sources of confirming or disconfirming information. Because individuals rely on technological systems for accurate decision making, they must be able to evaluate the correspondence of the data they are using. Through appropriate design, technological aids could, for example, assist individuals by highlighting relevant data and information, helping them determine whether all information is consistent with a particular diagnosis or judgment, and providing assistance in accounting for missing or contradictory information.
Level of intervention. Critical issues to address when designing decision aids are the target processes (front end or back end) and the level of intervention. Decision aids geared toward enhancing the front end of decision making will target judgment, diagnosis, and situation assessment, and those geared toward its back end will focus on processes such as option generation, evaluation, and planning. Parasuraman, Sheridan, and Wickens (2000) developed a taxonomy of levels, or stages, of automation. The first two stages consist of acquiring and filtering (Stage 1) and integrating and assessing (Stage 2) information. These represent the front-end decision processes. Stages 3 and 4, recommending a course of action and executing the action, represent the back end of decision processes. In a study of decision aids for dealing with in-flight icing, Sarter and Schroeder (2001) found that both status (Stage 1/2) and command (Stage 3/4) displays led to improved performance over the baseline condition when accurate information was presented. However, when the decision aid provided inaccurate information, performance deteriorated below the baseline condition—more so with the command display than with the status display. The authors posited that the interaction could be related to the fact that status and command displays support different stages of decision making. The status display supports the front end of the process—that is, diagnosis of a problem. The command display, in contrast, supports the back end of the process, the choice or action selection stage.
If the pilot is removed cognitively from the diagnosis phase, the risk is that he or she may enter a “purely reactive mode” and follow system recommendations blindly (p. 581). In a similar vein, participants performing an air traffic control–related task exhibited superior performance when automated aids supported information acquisition and action implementation rather than performing cognitive functions such as information integration and analysis (Kaber, Perry, Segall, McClernon, & Prinzel, 2006). The use of automation for presentation of information is critically important when fully reliable decision automation cannot be guaranteed (Rovira, McGarry, & Parasuraman, 2007).
Multiple modes. One prevalent design challenge for decision aid systems is avoiding overload in the visual channel, the mode most often used for information, warnings, and system displays. Interest in multimodal systems has increased, and these offer the potential for presenting data without data overload. Issues in the use of multimodal displays include modality selection; mapping of modalities to tasks and information; combining, synchronizing, and integrating modalities; and adapting information to a multimodal presentation (Sarter, 2006). This intriguing design concept could provide substantial benefits in complex decision-making domains, but further research is required to realize its potential.
Facilitation of appropriate cognition. The format of automated displays has a predictable effect on the type of cognitive processes that will be elicited. For example, in a study of expert highway engineers, Hammond et al. (1997) found that specific characteristics of a task (i.e., display format and hidden relationships among variables within the task) tended to induce corresponding cognitive responses. Judgments of highway aesthetics from film strips representing segments of the highways tended to be performed intuitively. Judgments of highway capacity via mathematical formulas were accomplished analytically. Quasirational cognition was induced by requiring judgments of highway safety from bar graphs. Designers of complex automated decision aids have emphasized presenting data in pictorial, “intuitive” formats whenever possible. However, making sense of data in an electronic world, searching for hidden information, comprehending digital data, maintaining coordination between system states, and planning and programming future states are coherence-based activities that require analytical processing. Experience may offer hints as to where to look for anomalies in technological ecologies, but it does not insulate domain experts from the need to navigate their way through layers of data when anomalies occur. Numbers, modes, indicators, and computer readouts are some of the types of data that must be checked, processed, and integrated. Shortcuts may short-circuit the situation assessment process. Electronic data can help decision makers assess situations and make decisions much more accurately than ever before; however, the potential for heightened accuracy comes with a cost in terms of process. A critical component of expertise in the hybrid environments characteristic of many human factors domains is knowing whether and when the situation is amenable to pattern recognition. Pattern matching, the cornerstone of NDM frameworks of expert decision making such as RPD (Klein, 1993a), must be used carefully and may not work when data are in digital format, are hidden below surface displays, or mean different things in different modes.
In the electronic side of the hybrid environment, a single small error can be fatal to the process, and one small detail can make a huge difference. Some current displays, however, may in fact be leading operators astray by fostering the assumption that all data can be managed in a nonanalytical fashion. This is an especially potent trap for highly experienced decision makers, who may be particularly vulnerable to errors if pattern recognition is attempted before they have sought out and incorporated all relevant information into the pattern. It is important to note, then, that data that cannot be used intuitively (e.g., data that require interpretation or take on different meanings in different modes) must be presented in a format that facilitates analysis.
Facilitating teamwork and shared mental models. The joint prevalence of teams and technological systems has created additional demands for decision aids. In particular, when computers or automated aids can function under many different modes, it is critical that all team members share the same correct knowledge of their current function. Mode changes by one team member must be recognized by all team members, and everyone must know when actions need to be or have been taken (Moray, 1992; Sheridan & Parasuraman, 2006). Decision aids, then, must facilitate team cognition by reminding all users of the current set mode and what they can expect to happen with this mode. Failure to do this can lead to inaccurate mental models and a breakdown of team coordination and decision making, as illustrated by this vignette:
In the hospital, patient monitoring equipment had been adjusted by a health-care worker to serve one need. When that monitor was used for another need, it was effectively inoperative to monitor what the worker thought it was monitoring. Because there was no alarm, either locally or at the central nurses’ station, the patient died. (Sheridan & Thompson, 1994, p. 154)
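One way to read the design requirement above is that mode state should be treated as shared, broadcast information rather than something each team member tracks privately. The sketch below is a minimal illustration of that idea under our own assumptions; the class name and the hospital-flavored modes are hypothetical and do not describe any system or study cited here.

```python
from typing import Callable, List


class SharedModeState:
    """Minimal sketch of a decision aid that broadcasts mode changes to every
    team member's display, so that no one is left with a stale mental model."""

    def __init__(self, initial_mode: str) -> None:
        self._mode = initial_mode
        self._listeners: List[Callable[[str, str], None]] = []

    def register_display(self, listener: Callable[[str, str], None]) -> None:
        """Each crew station or workstation registers a callback for mode changes."""
        self._listeners.append(listener)

    def set_mode(self, new_mode: str, changed_by: str) -> None:
        """Change the mode and announce it to all stations, including who changed it."""
        self._mode = new_mode
        for notify in self._listeners:
            notify(new_mode, changed_by)

    @property
    def mode(self) -> str:
        """Current mode, so any station can also query the shared state directly."""
        return self._mode


if __name__ == "__main__":
    aid = SharedModeState(initial_mode="cardiac monitoring")
    for station in ("bedside display", "central nurses' station"):
        aid.register_display(
            lambda mode, who, station=station:
                print(f"{station}: mode is now '{mode}' (set by {who})")
        )
    # One worker's adjustment becomes visible everywhere, not just locally.
    aid.set_mode("respiratory monitoring", changed_by="health-care worker A")
```

In a real aid the notification would be a purpose-designed display or alert rather than a print statement, but the design point is the same: a mode set by one person becomes visible to everyone who depends on it.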
Aiding Metacognition
In addition to presenting data and information appropriately, decision-aiding technology should be designed to enhance individual metacognition—that is, it should help people to be aware of how they are thinking and are making judgments and whether or not their process is appropriate, coherent, and accurate. Decision aids need to be adaptive not only in the when of aiding, but also in the how. One of the most important functions of decision aids, if not the most important, may be to help operators monitor their decision making by prompting processes (e.g., check system parameters, look for disconfirming evidence, consider the consequences of the action) as well as by providing information in an appropriate manner. Guerlain et al. (1999), for example, proposed an interactive critiquing system with which the human decision maker can submit a diagnosis to the automation to be checked. Also, in order to be truly effective, decision aids must help operators to navigate within and between decision contexts, especially when they are initiating a new task or changing contexts (e.g., when pilots transition from a visual to an instrument approach, or when physicians move from visual inspection of a patient to analysis of charts and X-rays), and to adapt their own cognition effectively to the demands of the situation.
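To show the flavor of critiquing-style support for metacognition, here is a small, purely illustrative sketch: a submitted diagnosis is checked against the cues that have (and have not) been examined, and the aid responds with prompts rather than a decision. The cue-to-diagnosis expectations and all names are invented for the example; this is not the Guerlain et al. (1999) system.

```python
from typing import Dict, List

# Hypothetical expectations: which observations each candidate diagnosis would
# predict. Invented for illustration only; not clinical or operational guidance.
EXPECTED_CUES: Dict[str, Dict[str, bool]] = {
    "engine 1 fire": {
        "engine 1 fire warning": True,
        "engine 1 parameters recovering": False,
    },
    "engine 2 damage": {
        "engine 2 vibration high": True,
        "engine 2 parameters normal": False,
    },
}


def critique(diagnosis: str, observations: Dict[str, bool]) -> List[str]:
    """Return metacognitive prompts for a submitted diagnosis: flag observed cues
    that disconfirm it and expected cues that have not yet been checked."""
    expected = EXPECTED_CUES.get(diagnosis)
    if expected is None:
        return [f"No expectations are defined for '{diagnosis}' in this sketch."]
    prompts: List[str] = []
    for cue, expected_value in expected.items():
        if cue not in observations:
            prompts.append(f"'{cue}' has not been checked; verify it before acting.")
        elif observations[cue] != expected_value:
            prompts.append(f"'{cue}' is inconsistent with '{diagnosis}'; reconsider the diagnosis.")
    return prompts or ["No disconfirming evidence found among the checked cues."]


if __name__ == "__main__":
    observed = {"engine 1 fire warning": True, "engine 1 parameters recovering": True}
    for prompt in critique("engine 1 fire", observed):
        print(prompt)
```

The intent of such prompts is to redirect attention to unchecked or disconfirming evidence while leaving the judgment itself with the human.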
Training
How can people be taught to be better decision makers? Training for decision making typically entails a focus on front-end processes. Some have taken a lens model approach to training—for example, providing feedback intended to calibrate the input of automated aids, or providing cognitive feedback that includes information about the relationship among cues, judgments, and the environment (Gattie & Bisantz, 2006; Seong, Bisantz, & Gattie, 2006). This approach is geared toward identifying and teaching appropriate weighting and prioritizing. The rationale underlying this tactic is that an understanding of appropriate weighting of cues in terms of their ecological validity will result in more accurate diagnoses and choices.
Decision-enhancing interventions may also take the form of training for process vigilance. This approach builds on the notion of metacognitive monitoring of decision processes. In aviation research, it has been found that the perception that one is directly accountable for one’s decision-making processes fosters more vigilant information seeking and processing, as well as more accurate judgments. Students who were made accountable for either overall performance or accuracy (which demanded checking several sources of information) displayed more vigilant information seeking as compared with nonaccountable students (Skitka, Mosier, & Burdick, 2000). Pilots who reported a higher internalized sense of accountability for their interactions with automation, regardless of assigned experimental condition, verified correct automation functioning more often and committed fewer errors than other pilots (Mosier et al., 1998). These results suggest that aiding the metacognitive monitoring of judgment processes may facilitate more thorough information search, more analytical cognition, and more coherent judgment strategies. Training for process vigilance in hybrid environments would need to include instruction and practice in analytical skills relevant to assessing automated system functions and displays as well as the development of highly accurate mental models of how the automation works. Coherence-based critical incidents presented via computer-based trainers or high-fidelity scenarios are potential vehicles for this training.
Some process interventions attempt to foster a systematic or vigilant decision process through the use of acronyms such as DESIDE (for detect, estimate, set safety objectives, identify, do, evaluate; Murray, 1997) and QPIDR (questioning, promoting ideas, decide, review; Prince & Salas, 1993) to delineate the desired steps. These have had limited success, as they do not foster the development of patterns of situations and do not offer the decision maker the flexibility to adapt his or her decision strategy to the situation and context at hand (O’Hare, 2003).
A third and perhaps more promising approach to decision training entails teaching decision makers to emulate and eventually learn to decide like experts. In this approach, domain-specific knowledge and skills critical to expert performance must be discerned through methods such as critical incident analysis or cognitive task analysis (e.g., Hoffman et al., 1998; Klein, 2003; Militello, Wong, Kirschenbaum, & Patterson, in press). Training designed to focus learners’ attention on experts’ knowledge and skills has shown some success in facilitating skill acquisition among nonexperts and in reducing errors by experts on representative tasks (Charness & Tuffiash, 2008).
One caveat to this type of training is that the training environment initially must be conducive to learning and reflection—introducing environmental conditions such as time pressure or high workload too soon can hinder development of context-based knowledge (Gonzalez, 2004, 2005).
Process training targets include metacognitive skills, such as the ability to choose the appropriate judgment strategy and to modify it in response to situation events; reasoning skills; domain-specific problem-solving skills; mental simulation skills, or the ability to mentally play a potential solution through to identify problems; risk assessment skills; and situation assessment skills (Cannon-Bowers & Bell, 1997). Knowledge targeted by training concerns hard-to-detect cues, patterns, mental models, and scripts. The development of expertise also requires a qualitative shift in processing and organization of knowledge (Tsang, 2003). Training expertise in naturalistic environments is tricky, as it is difficult to “unpack” the knowledge underlying intuitive decision-making skills. It may be facilitated by exposure to an array of “typical” situations from which the decision maker can gradually discern meaningful patterns of critical cues, organize knowledge around principles that may not be apparent at the surface, and anticipate future events. This type of training often takes the form of extensive practice using computer-based scenarios, simulators, multimedia presentation formats, guided practice and feedback, or cognitive apprenticeships and rehearsals in the field (Cannon-Bowers & Bell, 1997). The outcome of the Air Florida flight described at the beginning of this chapter might have been very different if the crew had been given more extensive practice in winter operations.
The literature reviewed suggests that technological decision aids will be most effective when focusing on enhancing front-end and back-end processes rather than merely prescribing decisions or actions. Training interventions as well should focus on the processes we have discussed in this chapter—honing decision makers’ abilities to identify ecologically valid cues, develop domain-specific knowledge structures, recognize typical patterns of cues, seek out and comprehend relevant information, retrieve an appropriate course of action from memory, mentally simulate a course of action and adapt to changing demands, and detail a plan. A critical aspect of these interventions is to train metacognitive monitoring of one’s judgment and decision processes to ensure that they are appropriate to the situation at hand.
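For readers who want the formal reference point behind the lens-model-based feedback mentioned at the start of this section, the standard lens model equation (as presented in judgment analysis texts such as Cooksey, 1996) decomposes judgmental achievement into knowledge, cognitive control, and environmental predictability. The notation below follows that standard formulation rather than any particular study cited here.

```latex
% Standard lens model equation:
%   r_a : achievement (correlation between judgments and the environmental criterion)
%   R_e : environmental predictability (multiple correlation of the criterion with the cues)
%   R_s : cognitive control (multiple correlation of the judgments with the cues)
%   G   : knowledge (correlation between the predictions of the two linear models)
%   C   : correlation between the residuals of the two models
\[
  r_a = G \, R_e \, R_s + C \sqrt{1 - R_e^{2}} \, \sqrt{1 - R_s^{2}}
\]
```

Cognitive feedback of the kind described above typically reports components such as the trainee's cue weights and consistency so that they can be compared with the ecological validities of the cues.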
FUTURE DIRECTIONS—FILLING THE GAPS IN OUR KNOWLEDGE
As our review indicates, efforts are well under way to examine how individuals make decisions, the processes involved in team decision making, and the impact of technology and organizational culture. Our review also reveals a number of challenges for future research.
Interactions Across the Layers
Most of the human factors decision research has limited investigations to a single layer of context. Future research must consider interactions among the layers. For example, with few exceptions (e.g., Morrow et al., 2006), very little has been done to examine the impact of automation and technology on team processes.
Team Decisions
Team decision-making research has focused almost exclusively on situation assessment—the front end of the decision process. The dominant question has been how team members come to a shared understanding of the problem they face. Consequently, research efforts have been devoted to identifying common knowledge structures and communication processes that foster shared problem models. Moreover, team communication has been conceived of as an exchange of task-critical information. This view emphasizes the flow of information, leading researchers to examine how much and what information team members share and whether shared information is understood. Little attention has been given to questions such as, How do team members pool their knowledge and observations and jointly create a coherent situation model? How do team members build consensus when there is disagreement? These kinds of constructive interactions, moreover, not only apply to the front end of the decision-making process but should also be investigated with respect to its back end. So far, discussions of how teams decide on a course of action are virtually absent in the literature. This may be a reflection of the domains that have been studied by decision researchers. Aviation is a highly proceduralized domain; once a problem is understood, the appropriate response is likely covered by some procedure. Similarly, the range of options available to firefighters, military planners, and physicians is likely to be limited. Nonetheless, the “best” option may be neither self-evident nor undisputed. Moreover, there certainly are domains, such as policy development or product design, that present teams with both ill-defined problems and ill-defined solutions requiring consensus building and negotiation throughout the decision-making process.
An additional area that warrants the attention of team decision researchers concerns the question of “shared knowledge.” Most research to date has worked from the assumption that similarity in team members’ knowledge structures enhances their joint performance. As a result, team performance has always been related to the extent to which team members’ knowledge converges rather than to the complementarity of their knowledge. However, the complex and dynamically changing conditions that are characteristic of real-world task environments may be best managed by division of labor. That is, the requisite knowledge for coherent and accurate decision making may be better apportioned across team members rather than duplicated within members. How knowledge is optimally distributed in teams and how to balance knowledge distribution with knowledge convergence are questions for future research. Most certainly these questions have to be addressed with respect to characteristics of the team task and the operational environment.
Operator State
One area that is beginning to assume importance in decision-making research is the psychological state (or affect) of the operator. A large body of research in cognitive and social psychology over the past 2 decades provides overwhelming evidence that people’s judgments and decisions are critically influenced by emotions they experience at the time of decision making. How people respond to a situation—whether they are more or less risk taking, whether they prefer punitive to lenient measures—has been shown to vary by their mood. Similarly, affect has been found to determine people’s cognitive strategies—that is, whether they are systematic in their decision making or rely on heuristic cues. Human factors researchers have long recognized the potential impact of operationally induced factors such as stress or time pressure on performance. What has not been given much research attention is the potential effect of an operator’s affective state—anger, fear, anxiety, happiness—on decision-making processes.
Affect may be of particular importance in expert decision making, and this notion is gaining interest among the community of NDM researchers. Experts may be well attuned to affect that arises in response to elements of the task context and that may have significance for their decisions. That is, for experts, affective reactions to situations may represent a knowledge-based informational cue and may be a salient component of their decision making. For example, Dominguez (2001) reported that physicians frequently referred to their comfort level while deciding on whether or not to continue with laparoscopic surgery. Observations like these suggest that professionals are sensitive to task-relevant affect and incorporate it into their decision process.
Decision Strategies for Hybrid Ecologies
Last, much more research is needed to define strategies and enhance decision making in hybrid ecologies. Strategies for decision making in a hybrid environment include both recognitional pattern matching and rational analysis. Sometimes, decision makers will be expected to employ both strategies—while landing an aircraft, for example, a pilot may first examine cockpit instruments with an analytical eye, set up the appropriate mode and flight parameters to capture the glide slope, and then check cues outside of the window to see if everything “looks right.” The physician may interpret the digital data on a set of monitors, and then examine the patient to see if the visual appearance corresponds to his or her interpretation. In other cases, decision makers may be better off staying in one mode or the other. The pilot may be forced to rely on analytical processing of cockpit information if visibility is obscured or may choose to disengage the automatic aircraft functions to “hand fly” an approach and landing if visibility is good. Because “good” decision making in hybrid ecologies must be both coherent (in the electronic world) and accurate (in the physical world), proficient decision making in these environments involves knowing which strategy is appropriate in a given situation. As yet, the optimal combinations of recognition and analysis, and of coherence and correspondence strategies, are not known. The proliferation of hybrid ecologies in decision domains (particularly those domains in which teams operate) offers perhaps the greatest future challenge in decision-making research.
ACKNOWLEDGMENTS
Our heartfelt thanks to three anonymous reviewers who helped us synthesize and clarify the issues, models, and applications included in this chapter—particularly the concepts of correspondence and coherence.
REFERENCES
Adelman, L. M. S., Cohen, M. S., Bresnick, T. A., Chinnis, J. O., & Laskey, K. B. (1993). Real-time expert system interfaces, cognitive processes, and task performance: An empirical assessment. Human Factors, 35, 243–261. Adelman, L. M. S., Miller, S. L., & Yeo, C. (2006). Using Brunswikian theory and the lens model equation to understand the effects of computer displays on distributed team performance. In A. Kirlik (Ed.), Adaptation in human-technology interaction: Methods, models and measures (pp. 43–54). New York: Oxford University Press. Andre, A., & Wickens, C. (1992). Compatibility and consistency in display-control systems: Implications for aircraft decision aid design. Human Factors, 34, 639–653. Andre, T. S., Hartson, H. R., Belz, S. M., & McCreary, F. A. (2001). The user action framework: A reliable foundation for usability engineering support tools. International Journal of Human-Computer Studies, 54, 107–136. Badke-Schaub, P. (in press). Decision making processes of leaders in product development. In K. Mosier & U. Fischer (Eds.), Informed by knowledge: Expert performance in complex situations. New York: Taylor & Francis. Barshi, I., & Chute, R. (2001, March). What do pilots and controllers know about each other’s jobs? Paper presented at the 11th International Symposium on Aviation Psychology, Columbus, OH. Bass, E. J., & Pritchett, A. R. (2002). Human-automated judgment learning: A methodology to investigate human interaction with automated judges. Proceedings of the Human Factors and Ergonomics Society 46th Annual Meeting (pp. 362–366). Santa Monica, CA: Human Factors and Ergonomics Society. Bates, D. W., Cohen, M., Leape, L. L., Overhage, J. M., Shabot, M. M., & Sheridan, T. (2001). Reducing the frequency of errors in medicine using information technology. Journal of the American Medical Informatics Association, 8, 299–308. Bauer, M. I., & Johnson-Laird, P. N. (1993). How diagrams can improve reasoning. Psychological Science, 4, 372–378. Bayes, T. (1958). Essay towards solving a problem in the doctrine of chances. Biometrika, 45, 293–315. (Reprinted from Philosophical Transactions of the Royal Society, 53, 370–418, 1763) Beach, L. R. (1993). Image theory: Personal and organizational decisions. In G. A. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision making in action: Models and methods (pp. 148–157). Norwood, NJ: Ablex. Beach, L. R. (1998). Image theory: Theoretical and empirical foundations. Mahwah, NJ: Erlbaum. Billings, C. E. (1996). Human-centered aviation automation: Principles and guidelines (NASA Tech. Memorandum #110381). Moffett Field, CA: NASA Ames Research Center. Bisantz, A. M., Kirlik, A., Gay, P., Phipps, D., Walker, N., & Fisk, A. D. (2000). Modeling and analysis of a dynamic judgment task using a lens model approach. IEEE Transactions on Systems, Man, and Cybernetics, 30, 605–616. Bisantz, A. M., & Pritchett, A. (2003). Measuring the fit between human judgments and automated alerting algorithms: A study of collision detection. Human Factors, 45, 266–280. Bisantz, A. M., & Roth, E. (2008). Analysis of cognitive work. In D. A. Boehm-Davis (Ed.), Reviews of human factors and ergonomics (Vol. 3, pp. 1–43). Santa Monica, CA: Human Factors and Ergonomics Society. Bisantz, A. M., Roth, E., Brickman, B., Gosbee, L. L., Hettinger, L., & McKinney, J. (2003). Integrating cognitive analyses in a large-scale system design process. International Journal of Human-Computer Studies, 58, 177–206.
Blatt, R., Christianson, M., Sutcliffe, K., & Rosenthal, M. (2006). A sense-making lens on reliability. Journal of Organizational Behavior, 27, 897–917. Bogner, M. S. (1994). Human error in medicine. Mahwah, NJ: Erlbaum. Brunswik, E. (1943). Organismic achievement and environmental probability. Psychological Review, 50, 255–272. Brunswik, E. (1955). Representative design and probabilistic theory in a functional psychology. Psychological Review, 62, 193–217. Brunswik, E. (1956). Perception and the representative design of psychological experiments. Berkeley: University of California Press. Buch, G., & Diehl, A. (1984). An investigation of the effectiveness of pilot judgment training. Human Factors, 26, 557–564. Caird, J. K., Edwards, C. J., Creaser, J. I., & Horrey, W. J. (2005). Older driver failures of attention at intersections: Using change blindness methods to assess turn decision accuracy. Human Factors, 47, 235–249. Camasso, M., & Jagannathan, R. (2001). Flying personal planes: Modeling the airport choices of general aviation pilots using stated preference methodology. Human Factors, 43, 392–404.
Cannon-Bowers, J. A., & Bell, H. H. (1997). Training decision makers for complex environments: Implications of the naturalistic decision making perspective. In C. E. Zsambok & G. Klein (Eds.), Naturalistic decision making (pp. 99–110). Mahwah, NJ: Erlbaum. Cannon-Bowers, J. A., & Salas, E. (2001). Reflections on shared cognition. Journal of Organizational Behavior, 22, 195–202. Cannon-Bowers, J. A., Salas, E., & Converse, S. (1993). Shared mental models in team decision making. In N. J. Castellan (Ed.), Individual and group decision making: Current issues (pp. 221–246). Mahwah, NJ: Erlbaum. Cannon-Bowers, J. A., Salas, E., & Pruitt, J. (1996). Establishing the boundaries of a paradigm for decisionmaking research. Human Factors, 38, 193–205. Cardosi, K., Falzarano, P., & Han, S. (1998). Pilot-controller communication errors: An analysis of Aviation Safety Reporting System (ASRS) reports. Cambridge, MA: Volpe National Transportation Systems Center, Research and Development Administration. (NTIS No. PB99-106668) Carretta, T. R. (2000). U.S. Air Force pilot selection and training methods. Aviation, Space, and Environmental Medicine, 71, 950–956. Cellier, J., Eyrolle, H., & Marine, C. (1997). Expertise in dynamic environments. Ergonomics, 40, 28–50. Cesna, M., & Mosier, K. L. (2005). Using a prediction paradigm to compare levels of expertise and decision making among critical care nurses. In B. Brehmer, R. Lipshitz, & H. Montgomery (Eds.), How do professionals make decisions? (pp. 107–118). Mahwah, NJ: Erlbaum. Charness, N., & Tuffiash, M. (2008). The role of expertise research and human factors in capturing, explaining, and producing superior performance. Human Factors, 50, 427–432. Chi, M., Feltovich, P., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5, 121–152. Chi, M., Glaser, R., & Farr, M. (1988). The nature of expertise. Mahwah, NJ: Erlbaum. Chi, M., Glaser, R., & Rees, E. (1982). Expertise in problem solving. In R. S. Steinberg (Ed.), Advances in the psychology of human intelligence (Vol. 1, pp. 7–75). Mahwah, NJ: Erlbaum. Clark, H. H., & Schaefer, E. F. (1989). Contributing to discourse. Cognitive Science, 13, 259–294. Cohen, M. S. (1993). Taking risk and taking advice: The role of experience in airline pilot diversion decisions. In R. S. Jensen & D. Neumeister (Eds.), Proceedings of the Seventh International Symposium on Aviation Psychology (pp. 244–247). Columbus, OH: Ohio State University, Department of Aviation. Cohen, M. S. (in press). Knowns, known unknowns, and unknown unknowns: Synergies between intuitive and deliberative approaches to time, uncertainty, and information. In K. L. Mosier & U. M. Fischer (Eds.), Informed by knowledge: Expert performance in complex situations. New York: Taylor & Francis. Cohen, M. S., Adelman, L., Tolcott, M. A., Bresnick, T. A., & Marvin, F. F. (1993). A cognitive framework for battlefield commanders’ situation assessment (Tech. Rep. 93-1). Arlington, VA: Cognitive Technologies. Cohen, M. S., Freeman, J. T., & Thompson, B. B. (1997). Training the naturalistic decision maker. In C. E. Zsambok & G. Klein (Eds.), Naturalistic decision making (pp. 257–268). Mahwah, NJ: Erlbaum. Cohen, M., Freeman, J. T., & Wolf, S. (1996). Meta-recognition in time-stressed decision making: Recognizing, critiquing, and correcting. Human Factors, 38, 206–219. Connolly, T. (1988). Hedge-clipping, tree-felling, and the management of ambiguity. In M. B. McCaskey, L. R. Pondy, & H. 
Thomas (Eds.), Managing the challenge of ambiguity and change (pp. 37–50). New York: Wiley. Cook, M. B., & Smallman, H. S. (2008). Human factors of the confirmation bias in intelligence analysis: Decision support from graphical evidence landscapes. Human Factors, 50, 745–754. Cooke, N. J., Kiekel, P. A., Salas, E., Stout, R., Bowers, C., & Cannon-Bowers, J. (2003). Measuring team knowledge: A window to the cognitive underpinnings of team performance. Group Dynamics: Theory, Research, and Practice, 7, 179–199. Cooke, N. J., Salas, E., Cannon-Bowers, J. A., & Stout, R. J. (2000). Measuring team knowledge. Human Factors, 42, 151–173. Cooksey, R. W. (1996). Judgment analysis: Theory, methods, and applications. San Diego, CA: Academic Press. Crandall, B., & Getchell-Reiter, K. (1993). Critical decision method: A technique for eliciting concrete assessment indicators from the intuition of NICU nurses. Advances in Nursing Science, 16(1), 42–51.
Cuomo, D. L., & Bowen, C. D. (1992). Stages of user activity model as a basis for user-system interface evaluations. In Proceedings of the Human Factors Society 36th Annual Meeting (pp. 1254–1258). Santa Monica, CA: Human Factors and Ergonomics Society. Cushing, S. (1994). Fatal words: Communication clashes and aircraft crashes. Chicago: University of Chicago Press. Davison, J,. & Orasanu, J. (2001). Pilots’ and controllers’ perceptions of traffic risk. Paper presented at the 11th International Symposium on Aviation Psychology, Columbus, OH. Dekker, S. (2006). The field guide to understanding human error. Aldershot, UK: Ashgate. Dekker, S., & Lützhöft, M. (2004). Correspondence, cognition and sensemaking: A radical empiricist view of situation awareness. In S. Banbury & S. Tremblay (Eds.), A cognitive approach to situation awareness: Theory and application (pp. 22–41). Aldershot, UK: Ashgate. Dietrich, R. (2004). Determinants of effective communication. In R. Dietrich & T. M. Childress (Eds.), Group interaction in high risk environments (pp. 185–205). Aldershot, UK: Ashgate. Dijksterhuis, A., Bos, M. W., Nordgren, L. F., & van Baaren, R. B. (2006). On making the right choice: The deliberation-without-attention effect. Science, 311, 1005–1007. Dominguez, C. O. (2001). Expertise and metacognition in laparoscopic surgery: A field study. Proceedings of the Human Factors and Ergonomic Society 45th Annual Meeting (pp. 1298–1303). Santa Monica, CA: Human Factors and Ergonomics Society. Dorsey, D., & Coovert, M. (2003). Mathematical modeling of decision making: A soft and fuzzy approach to capturing hard decisions. Human Factors, 45, 117–135. Dyer, J. L. (1984). Team research and team training: A state of the art review. In F. Muckler (Ed.), Human factors review (pp. 285–323). Santa Monica, CA: Human Factors and Ergonomics Society. Eddy, D. M. (1982). Probabilistic reasoning in clinical medicine: Problems and opportunities. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 249–267). New York: Cambridge University Press. Endsley, M. R. (1988). Design and evaluation for situation awareness enhancement. Proceedings of the Human Factors Society 32nd Annual Meeting (pp. 97–101). Santa Monica, CA: Human Factors and Ergonomics Society. Endsley, M. R. (1995a). A taxonomy of situation awareness errors. In R. Fuller, N. Johnston, & N. McDonald (Eds.), Human factors in aviation operations (pp. 287–292). Aldershot, UK: Ashgate. Endsley, M. R. (1995b). Toward a theory of situation awareness in dynamics systems. Human Factors, 37, 32–64. Endsley, M. R. (2000). Theoretical underpinnings of situation awareness: A critical review. In M. R. Endsley & D. J. Garland (Eds.), Situation awareness analysis and measurement (pp. 3–32). Mahwah, NJ: Erlbaum. Endsley, M. R., & Smith, R. (1996). Attention distribution and decision making in tactical air combat. Human Factors, 38, 232–249. Entin, E. E., & Serfaty, D. (1999). Adaptive team coordination. Human Factors, 41, 312–325. Ezer, N., Fisk, A. D., & Rogers, W. A. (2008). Age-related differences in reliance behavior attributable to costs within a human-decision aid system. Human Factors, 50, 853–865. Fadden, D. (1990). Aircraft automation changes. In Abstracts of AIAA-NASA-FAA-HFS Symposium, Challenges in Aviation Human Factors: The National Plan. Washington, DC: American Institute of Aeronautics and Astronautics. Fiori, S. M., & Schooler, J. W. (2004). 
Process mapping and shared cognition: Teamwork and the development of shared problem models. In E. Salas & S. M. Fiori (Eds.), Team cognition (pp. 133–152). Washington, DC: American Psychological Association. Fischer, U., McDonnell, L., & Orasanu, J. (2007). Linguistic correlates of team performance: Toward a tool for monitoring team functioning during space missions. Aviation, Space, and Environmental Medicine, 78, B86–95. Fischer, U., & Orasanu, J. (2000). Error-challenging strategies: Their role in preventing and correcting errors. Proceedings of the XIVth Triennial Congress of the International Ergonomics Association and 44th Annual Meeting of the Human Factors and Ergonomics Society (pp. 1.30–1.33). Santa Monica, CA: Human Factors and Ergonomics Society. Fischer, U., & Orasanu, J. (2001). Do you see what I see? Effects of crew position on interpretation of flight problems (NASA Tech. Memorandum 2002-209612). Moffett Field, CA: NASA Ames Research Center.
Fischer, U., Orasanu, J., & Davison, J. (2003). Why do pilots take risks? Some insights from a think-aloud study. In M. J. Cook (Ed.), Proceedings of the Human Factors of Decision Making in Complex Systems Meeting (pp. 44–46). Dundee, Scotland: University of Abertay.
Fischer, U., Orasanu, J., & Montalvo, M. (1993). Efficient decision strategies on the flight deck. In R. S. Jensen & D. Neumeister (Eds.), Proceedings of the 7th International Symposium on Aviation Psychology (pp. 238–243). Columbus: Ohio State University.
Fischer, U., Orasanu, J., & Wich, M. (1995). Expert pilots' perceptions of problem situations. Proceedings of the 8th International Symposium on Aviation Psychology (pp. 777–782). Columbus: Ohio State University.
Fischer, U., Rinehart, M., & Orasanu, J. (2001, March). Training flight crews in effective error challenging strategies. Paper presented at the 11th International Symposium on Aviation Psychology, Columbus, OH.
Fiske, S. T., & Taylor, S. E. (1994). Social cognition (2nd ed.). New York: McGraw-Hill.
Flin, R., O'Connor, P., & Crichton, M. (2008). Safety at the sharp end: A guide to non-technical skills. Aldershot, UK: Ashgate.
Flin, R., Slaven, G., & Stewart, K. (1996). Emergency decision making in the offshore oil and gas industry. Human Factors, 38, 262–277.
Garg, A. X., Adhikari, N. K. J., McDonald, H., Rosas-Arellano, M. P., Devereaux, P. J., Beyene, J., et al. (2005). Effects of computerized clinical decision support systems on practitioner performance and patient outcomes. Journal of the American Medical Association, 293, 1233–1238.
Gattie, G. J., & Bisantz, A. M. (2006). The effects of integrated cognitive feedback components and task conditions on training in a dental diagnosis task. International Journal of Industrial Ergonomics, 36, 485–497.
Gawande, A., & Bates, D. (2000). The use of information technology in improving medical performance: Part I. Information systems for medical transactions. Medscape General Medicine, 2, 1–6.
Gemignani, M. (1984). Laying down the law to robots. San Diego Law Review, 21, 1045.
Gigerenzer, G. (2008). Why heuristics work. Perspectives on Psychological Science, 20–29.
Gigerenzer, G., Hoffrage, U., & Kleinbölting, H. (1991). Probabilistic mental models: A Brunswikian theory of confidence. Psychological Review, 98, 506–528.
Gigerenzer, G., & Selten, R. (2001). Bounded rationality: The adaptive toolbox. Cambridge, MA: MIT Press.
Gillan, C. A. (2003). Analysis of multicrew decision making from a cognitive perspective. Proceedings of the 12th International Symposium on Aviation Psychology (pp. 427–432). Dayton, OH: Wright State University.
Gilson, R. D., Deaton, J. E., & Mouloua, M. (1996). Coping with complex alarms. Ergonomics in Design, 4(4), 12–18.
Ginnett, R. C. (1993). Crews as groups: Their formation and their leadership. In E. Wiener, B. Kanki, & R. Helmreich (Eds.), Cockpit resource management (pp. 71–98). San Diego, CA: Academic Press.
Gonzalez, C. (2004). Learning to make decisions in dynamic environments: Effects of time constraints and cognitive abilities. Human Factors, 46, 149–160.
Gonzalez, C. (2005). Task workload and cognitive abilities in dynamic decision making. Human Factors, 47, 92–101.
Guerlain, S. A., Smith, P., Obradovich, J., Rudmann, S., Strohm, P., Smith, J. W., et al. (1999). Interactive critiquing as a form of decision support: An empirical evaluation. Human Factors, 41, 72–89.
Hackman, J. R., & Helmreich, R. L. (1987). Assessing the behavior and performance of teams in organizations: The case of air transport crews. In D. R. Peterson & D. B. Fishman (Eds.), Assessment for decision. New Brunswick, NJ: Rutgers University Press.
Hammond, K. R. (1993). Naturalistic decision making from a Brunswikian viewpoint: Its past, present, future. In G. A. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision making in action: Models and methods (pp. 205–227). Norwood, NJ: Ablex.
Hammond, K. R. (1996). Human judgment and social policy. New York: Oxford University Press.
Hammond, K. R. (1999). Coherence and correspondence theories of judgment and decision making. In T. Connolly, H. R. Arkes, & K. R. Hammond (Eds.), Judgment and decision making: An interdisciplinary reader (pp. 53–65). New York: Cambridge University Press.
Hammond, K. R. (2000). Judgments under stress. New York: Oxford University Press.
Hammond, K. R. (2007). Beyond rationality: The search for wisdom in a troubled time. New York: Oxford University Press.
Hammond, K. R., Hamm, R. M., Grassia, J., & Pearson, T. (1997). Direct comparison of the efficacy of intuitive and analytical cognition in expert judgment. In W. M. Goldstein & R. M. Hogarth (Eds.), Research on judgment and decision making: Currents, connections, and controversies (pp. 144–180). Cambridge, UK: Cambridge University Press.
Hammond, K. R., & Stewart, T. R. (2001). The essential Brunswik. New York: Oxford University Press.
Helmreich, R. L., & Foushee, C. H. (1993). Why crew resource management? Empirical and theoretical bases of human factors training in aviation. In E. L. Wiener, B. G. Kanki, & R. L. Helmreich (Eds.), Cockpit resource management (pp. 3–45). San Diego, CA: Academic Press.
Helmreich, R. L., & Merritt, A. C. (1998). Culture at work in aviation and medicine: National, organizational and professional influences. Aldershot, UK: Ashgate.
Hitt, J. M., Mouloua, M., & Vincenzi, D. A. (1999). Effects of age and automation level on attitudes towards use of an automated system. In M. W. Scerbo & M. Mouloua (Eds.), Automation technology and human performance: Current research and trends (pp. 270–275). Mahwah, NJ: Erlbaum.
Hoffman, R. R., Crandall, B., & Shadbolt, N. (1998). Use of the critical decision method to elicit expert knowledge: A case study in the methodology of cognitive task analysis. Human Factors, 40, 254–276.
Hoffman, R. R., & Militello, L. G. (2008). Perspectives on cognitive task analysis: Historical origins and modern communities of practice. New York: Taylor & Francis.
Hoffmann, M. H. G. (2005). Logical argument mapping: A method for overcoming cognitive problems of conflict management. International Journal of Conflict Management, 16, 304–334.
Horrey, W. J., Wickens, C. D., Strauss, R., Kirlik, A., & Stewart, T. R. (2006). Supporting situation assessment through attention guidance and diagnostic aiding: The benefits and costs of display enhancement on judgment skill. In A. Kirlik (Ed.), Adaptive perspectives on human-technology interaction (pp. 55–70). New York: Oxford University Press.
Jacobson, C., & Mosier, K. L. (2004). Coherence and correspondence decision making in aviation: A study of pilot incident reports. International Journal of Applied Aviation Studies, 4, 123–134.
Johannesen, L. (2008). Maintaining common ground: An analysis of cooperative communication in the operating room. In C. P. Nemeth (Ed.), Improving healthcare team communication: Building on lessons from aviation and aerospace (pp. 179–203). Aldershot, UK: Ashgate.
Johnson-Laird, P. N. (2004). The history of mental models. In K. Manktelow & M. C. Chung (Eds.), Psychology of reasoning: Theoretical and historical perspectives (pp. 179–212). New York: Psychology Press.
Kaber, D. B., Perry, C. M., Segall, N., McClernon, C. K., & Prinzel, L. J., III. (2006). Situation awareness implications of adaptive automation for information processing in an air traffic control-related task. International Journal of Industrial Ergonomics, 36, 447–462.
Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. Cambridge, UK: Cambridge University Press.
Kanki, B. G., Lozito, S. C., & Foushee, H. C. (1989). Communication indices of crew coordination. Aviation, Space, and Environmental Medicine, 60, 56–60.
Karren, R. J., & Barringer, M. W. (2002). A review and analysis of the policy-capturing methodology in organizational research: Guidelines for research and practice. Organizational Research Methods, 5, 337–361.
Keeney, R. L., & Raiffa, H. (1976). Decisions with multiple objectives: Preferences and value trade-offs. New York: Wiley.
Kerstholt, J., Passenier, P., Houttuin, K., & Schuffel, H. (1996). The effect of a priori probability and complexity on decision making in a supervisory control task. Human Factors, 38, 65–78.
Khoo, Y.-L., & Mosier, K. L. (2008). The impact of time pressure and experience on information search and decision making processes. Journal of Cognitive Engineering and Decision Making, 2, 275–294.
Kiekel, P. A., Cooke, N. J., Foltz, P. W., Gorman, J., & Martin, M. (2002). Some promising results of communication-based automatic measures of team cognition. Proceedings of the Human Factors and Ergonomics Society 46th Annual Meeting (pp. 298–302). Santa Monica, CA: Human Factors and Ergonomics Society.
Kiekel, P. A., Cooke, N. J., Foltz, P. W., & Shopes, M. M. (2001). Automating measurement of team cognition through analysis of communication data. In M. J. Smith, G. Salvendy, D. Harris, & R. J. Koubek (Eds.), Usability evaluation and interface design: Cognitive engineering, intelligent agents and virtual reality (pp. 1382–1386). Mahwah, NJ: Erlbaum.
Kirlik, A. (2006). Adaptive perspectives on human-technology interaction. New York: Oxford University Press.
Kirlik, A., & Bertel, S. (in press). Decision making under pressure and constraints: Bounded rationality. In J. J. Cochran (Ed.), Wiley encyclopedia of operations research and management science. New York: Wiley.
Kirlik, A., & Strauss, R. (2006). Situation awareness as judgment: I. Statistical modeling and quantitative measurement. International Journal of Industrial Ergonomics, 36, 463–474.
Klein, G. A. (1989). Recognition-primed decisions. In W. B. Rouse (Ed.), Advances in man-machine system research (Vol. 5, pp. 47–92). Greenwich, CT: JAI Press.
Klein, G. A. (1993a). A recognition-primed decision (RPD) model of rapid decision making. In G. A. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision making in action: Models and methods (pp. 138–147). Norwood, NJ: Ablex.
Klein, G. A. (1993b). Naturalistic decision making: Implications for design (CSERIAC SOAR Rep. 93-1). Wright-Patterson Air Force Base, OH: CSERIAC.
Klein, G. A. (1999). Sources of power: How people make decisions. Cambridge, MA: MIT Press.
Klein, G. A. (2003). The power of intuition. New York: Doubleday.
Klein, G. A. (2008). Naturalistic decision making. Human Factors, 50, 456–460.
Klein, G. A., Calderwood, R., & Clinton-Cirocco, A. (1986). Rapid decision making on the fire ground. Proceedings of the Human Factors Society 30th Annual Meeting (pp. 576–580). Santa Monica, CA: Human Factors and Ergonomics Society.
Klein, G. A., Calderwood, R., & MacGregor, D. (1989). Critical decision method for eliciting knowledge. IEEE Transactions on Systems, Man, and Cybernetics, 19, 462–472.
Klein, G. A., Ross, K. G., Moon, B. M., Klein, D. E., Hoffman, R. R., & Hollnagel, E. (2003). Macrocognition. IEEE Intelligent Systems, 18, 81–85.
Kozlowski, S. W. J., & Ilgen, D. R. (2006). Enhancing the effectiveness of work groups and teams. Psychological Science in the Public Interest, 7, 77–124.
Krifka, M., Martens, S., & Schwarz, F. (2004). Linguistic factors. In R. Dietrich & T. M. Childress (Eds.), Group interaction in high risk environments (pp. 75–85). Aldershot, UK: Ashgate.
Kuipers, B., Moskowitz, A. J., & Kassirer, J. P. (1988). Critical decisions under uncertainty: Representation and structure. Cognitive Science, 12, 177–210.
Langewiesche, W. (2003, November). Columbia's last flight. Atlantic Monthly, 58–87.
Laughery, K. R., & Wogalter, M. S. (2006). Designing effective warnings. In R. C. Williges (Ed.), Reviews of human factors and ergonomics (Vol. 2, pp. 241–271). Santa Monica, CA: Human Factors and Ergonomics Society.
Layton, C., Smith, P. J., & McCoy, C. E. (1994). Design of a cooperative problem-solving system for en-route flight planning: An empirical evaluation. Human Factors, 36, 94–119.
Leonard, M., Graham, S., & Bonacum, D. (2004). The human factor: The critical importance of effective teamwork and communication in providing safe care. Quality and Safety in Healthcare, 13(Suppl. 1), 85–90.
Linde, C. (1988). The quantitative study of communicative success: Politeness and accidents in aviation discourse. Language in Society, 17, 375–399.
Lipshitz, R. (1993). Converging themes in the study of decision-making in realistic settings. In G. A. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision making in action: Models and methods (pp. 103–137). Norwood, NJ: Ablex.
Lipshitz, R., & Ben Shaul, O. (1997). Schemata and mental models in recognition-primed decision making. In C. Zsambok & G. Klein (Eds.), Naturalistic decision making (pp. 293–303). Mahwah, NJ: Erlbaum.
Lipshitz, R., Klein, G., Orasanu, J., & Salas, E. (2001). Focus article: Taking stock of naturalistic decision making. Journal of Behavioral Decision Making, 14, 331–352.
Lipshitz, R., & Strauss, O. (1997). Coping with uncertainty: A naturalistic decision making analysis. Organizational Behavior and Human Decision Processes, 69, 149–163.
MacMillan, J., Entin, E. E., & Serfaty, D. (2004). Communication overhead: The hidden cost of team cognition. In E. Salas & S. M. Fiore (Eds.), Team cognition (pp. 61–82). Washington, DC: American Psychological Association.
Madhavan, P., & Wiegmann, D. A. (2005). Cognitive anchoring on self-generated decisions reduces operator reliance on automated diagnostic aids. Human Factors, 47, 332–341.
Maher, J. W. (1989). Beyond CRM to decisional heuristics: An airline generated model to examine accidents and incidents caused by crew errors in deciding. In R. S. Jensen (Ed.), Proceedings of the 5th International Symposium on Aviation Psychology (pp. 439–444). Columbus: Ohio State University.
Marks, M. A., Zaccaro, S. J., & Mathieu, J. E. (2000). Performance implications of leader briefings and team-interaction training for team adaptation to novel environments. Journal of Applied Psychology, 85, 971–986.
Mathieu, J. E., Heffner, T. S., Goodwin, G. F., Salas, E., & Cannon-Bowers, J. A. (2000). The influence of shared mental models on team process and performance. Journal of Applied Psychology, 85, 273–283.
Maule, J. (2009). Can computers help overcome limitations in human decision making? In Proceedings of NDM9, the 9th International Conference on Naturalistic Decision Making (pp. 10–17). London: British Computer Society.
Mayhorn, C. B., Fisk, A. D., & Whittle, J. D. (2002). Decisions, decisions: Analysis of age, cohort, and time of testing on framing of risky decision options. Human Factors, 44, 515–521.
Mazzocco, K., Petitti, D. B., Fong, K. T., Bonacum, D., Brookey, J., Graham, S., et al. (2009). Surgical team behavior and patient outcomes. American Journal of Surgery, 197, 678–685.
McDaniel, W., & Rankin, W. (1991). Determining flight task proficiency of students: A mathematical decision aid. Human Factors, 33, 293–308.
Mearns, K. J., & Flin, R. (2001). Assessing the state of organizational safety—Culture or climate? In H. Ellis & N. Macrae (Eds.), Validation in psychology (pp. 5–20). London: Transaction.
Mearns, K., Flin, R., Gordon, R., & Fleming, M. (2001). Human and organizational factors in offshore safety. Work and Stress, 15, 144–160.
Militello, L. G., & Hutton, R. J. B. (1998). Applied cognitive task analysis (ACTA): A practitioner's toolkit for understanding cognitive task demands. Ergonomics, 41, 1618–1641.
Militello, L. G., Wong, W., Kirschenbaum, S., & Patterson, E. (in press). Systematizing discovery in cognitive task analysis. In K. L. Mosier & U. M. Fischer (Eds.), Informed by knowledge: Expert performance in complex situations. New York: Taylor & Francis.
Miller, R., Schaffner, K., & Meisel, A. (1985). Ethical and legal issues related to the use of computer programs in clinical medicine. Annals of Internal Medicine, 102, 529–536.
Mockler, R. J. (1989). Knowledge-based systems for management decisions. Englewood Cliffs, NJ: Prentice-Hall.
Mohammed, S., & Dumville, B. C. (2001). Team mental models in a team knowledge framework: Expanding theory and measurement across disciplinary boundaries. Journal of Organizational Behavior, 22, 89–106.
Montgomery, H., Lipshitz, R., & Brehmer, B. (2005). How professionals make decisions. Mahwah, NJ: Erlbaum.
Moray, N. (1992). Flexible interfaces can promote operator error. In J. Kragt (Ed.), Case studies in ergonomics (pp. 49–64). London: Taylor & Francis.
Morgan, T. (1992). Competence and responsibility in intelligent systems. Artificial Intelligence Review, 6, 217–226.
Morineau, T., Morandi, X., LeMoëllic, N., Diabira, S., Riffaud, L., Haegelen, C., et al. (2009). Decision making during preoperative surgical planning. Human Factors, 51, 67–77.
Morrow, D. G., North, R., & Wickens, C. D. (2006). Reducing and mitigating human error in medicine. In R. S. Nickerson (Ed.), Reviews of human factors and ergonomics (Vol. 1, pp. 254–296). Santa Monica, CA: Human Factors and Ergonomics Society.
Morrow, D. G., Soederberg Miller, L. M., Ridolfo, H. E., Magnor, C., Fischer, U. M., Kokayeff, N. K., et al. (2009). Expertise and age differences in pilot decision making. Aging, Neuropsychology, and Cognition, 16(1), 33–55.
Mosier, K. L. (2002). Automation and cognition: Maintaining coherence in the electronic cockpit. In E. Salas (Ed.), Advances in human performance and cognitive engineering research (Vol. 2, pp. 93–121). Amsterdam: Elsevier Science.
Mosier, K. L. (2008). Technology and "naturalistic" decision making: Myths and realities. In J. M. C. Schraagen, L. Militello, T. Ormerod, & R. Lipshitz (Eds.), Naturalistic decision making and macrocognition (pp. 41–56). Aldershot, UK: Ashgate.
Mosier, K. L. (2009). Searching for coherence in a correspondence world. Judgment and Decision Making, 4, 154–163.
Mosier, K. L., & Chidester, T. R. (1991). Situation assessment and situation awareness in a team setting. In Designing for everyone: Proceedings of the 11th Congress of the International Ergonomics Association (pp. 798–800). Paris: International Ergonomics Association.
Mosier, K. L., & McCauley, S. T. (2006). Achieving coherence: Meeting new cognitive demands in technological systems. In A. Kirlik (Ed.), Adaptive perspectives on human-technology interaction (pp. 163–174). New York: Oxford University Press.
Mosier, K. L., Palmer, E. A., & Degani, A. (1992). Electronic checklists: Implications for decision making. Proceedings of the Human Factors Society 36th Annual Meeting (pp. 7–12). Santa Monica, CA: Human Factors and Ergonomics Society.
Mosier, K. L., Sethi, N., McCauley, S., Khoo, L., & Orasanu, J. (2007). What you don't know can hurt you: Factors impacting diagnosis in the automated cockpit. Human Factors, 49, 300–310.
Mosier, K. L., & Skitka, L. (1996). Human decision makers and automated decision aids: Made for each other? In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 201–220). Mahwah, NJ: Erlbaum.
Mosier, K. L., Skitka, L. J., Dunbar, M., & McDonnell, L. (2001). Air crews and automation bias: The advantages of teamwork? International Journal of Aviation Psychology, 11, 1–14.
Mosier, K. L., Skitka, L. J., Heers, S., & Burdick, M. D. (1998). Automation bias: Decision making and performance in high-tech cockpits. International Journal of Aviation Psychology, 8, 47–63.
Murray, S. R. (1997). Deliberate decision making by aircraft pilots: A simple reminder to avoid decision making under panic. International Journal of Aviation Psychology, 7, 83–100.
Nagel, D. C. (1988). Human error in aviation operations. In E. L. Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 263–304). New York: Academic Press.
Naikar, N., Moylan, A., & Pearce, B. (2006). Analysing activity in complex systems with cognitive work analysis: Concepts, guidelines and case study for control task analysis. Theoretical Issues in Ergonomics Science, 7, 372–394.
Nance, J. J. (1984). Splash of colors: The self-destruction of Braniff International. New York: William Morrow.
National Transportation Safety Board. (1982). Aircraft accident report: Air Florida, Inc., Boeing 737-222, N62AF, collision with 14th Street Bridge, near Washington National Airport, Washington, D.C., January 13, 1982 (NTSB-AAR-82-8). Washington, DC: Author.
National Transportation Safety Board. (1994). Safety study: A review of flightcrew-involved, major accidents of U.S. air carriers, 1978 through 1990 (NTSB/SS-94/01). Washington, DC: National Technical Information Service.
Negroni, C. (2010, February 4). The flaw in blaming the pilots in Colgan Air crash. Huffington Post. Retrieved from http://www.huffingtonpost.com/christine-negroni/the-flaw-in-blaming-the-p_b_449462.html
Norman, D. A. (1986). Cognitive engineering. In D. A. Norman & S. W. Draper (Eds.), User centered system design (pp. 31–61). Mahwah, NJ: Erlbaum.
Norman, D. A. (1990). The design of everyday things. New York: Doubleday.
O'Hare, D. (2003). Aeronautical decision making: Metaphors, models, and methods. In P. S. Tsang & M. A. Vidulich (Eds.), Principles and practice of aviation psychology (pp. 201–238). Mahwah, NJ: Erlbaum.
O'Hare, D., & Smitheram, T. (1995). "Pressing on" into deteriorating conditions: An application of behavioral decision theory to pilot decision making. International Journal of Aviation Psychology, 5, 351–370.
Orasanu, J. (1990). Shared mental models and crew decision making (Tech. Rep. 46). Princeton, NJ: Princeton University, Cognitive Science Laboratory.
Orasanu, J. (1994). Shared problem models and flight crew performance. In N. Johnston, N. McDonald, & R. Fuller (Eds.), Aviation psychology in practice (pp. 255–285). Aldershot, UK: Ashgate.
Orasanu, J., & Connolly, T. (1993). The reinvention of decision making. In G. A. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision making in action: Models and methods (pp. 3–20). Norwood, NJ: Ablex.
Orasanu, J., & Fischer, U. (1992). Distributed cognition in the cockpit: Linguistic control of shared problem solving. Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society (pp. 189–194). Hillsdale, NJ: Erlbaum.
Orasanu, J., & Fischer, U. (1997). Finding decisions in natural environments: The view from the cockpit. In C. Zsambok & G. Klein (Eds.), Naturalistic decision making (pp. 343–357). Hillsdale, NJ: Erlbaum.
Orasanu, J., Fischer, U., & Davison, J. (1997). Cross-cultural barriers to effective communication in aviation. In C. S. Granrose & S. Oskamp (Eds.), Cross-cultural work groups (pp. 134–160). Thousand Oaks, CA: Sage.
Orasanu, J., Fischer, U., McDonnell, L. K., Davison, J., Haars, K. E., Villeda, E., et al. (1998). How do flight crews detect and prevent errors? Findings from a flight simulation study. Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting (pp. 191–195). Santa Monica, CA: Human Factors and Ergonomics Society.
Orasanu, J., & Salas, E. (1993). Team decision making in complex environments. In G. A. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision making in action: Models and methods (pp. 327–345). Norwood, NJ: Ablex.
Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39, 230–253.
Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics, 30, 286–297.
Patel, V. L., & Groen, G. J. (1991). The general and specific nature of medical expertise: A critical look. In K. A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 93–125). Cambridge, UK: Cambridge University Press.
Patel, V. L., Kaufman, D. R., & Arocha, J. F. (2002). Emerging paradigms of cognition in medical decision making. Journal of Biomedical Informatics, 35, 52–75.
Perrin, B. M., Barnett, B. J., Walrath, L., & Grossman, J. D. (2001). Information order and outcome framing: An assessment of judgment bias in a naturalistic decision-making context. Human Factors, 43, 227–238.
Perry, S. J., & Wears, R. L. (in press). Large scale coordination of work: Coping with complex chaos within healthcare. In K. Mosier & U. Fischer (Eds.), Informed by knowledge: Expert performance in complex situations. New York: Taylor & Francis.
Pew, R. W. (1995). The state of situation awareness measurement: Circa 1995. Proceedings of the International Conference on Experimental Analysis and Measurement of Situation Awareness (pp. 7–16). Daytona Beach, FL.
Pierce, P. (1996). When the patient chooses: Describing unaided decisions in health care. Human Factors, 38, 278–287.
Prince, C., & Salas, E. (1993). Training and research for teamwork in the military aircrew. In E. L. Wiener, B. G. Kanki, & R. L. Helmreich (Eds.), Cockpit resource management (pp. 337–366). San Diego, CA: Academic Press.
Pritchett, A. (2009). Aviation automation: General perspectives and specific guidance for the design of modes and alerts. In F. Durso (Ed.), Reviews of human factors and ergonomics (Vol. 5, pp. 82–113). Santa Monica, CA: Human Factors and Ergonomics Society.
Pronovost, P. J., Wu, A., Dorman, T., & Morlock, L. (2002). Building safety into ICU care. Journal of Critical Care, 17, 78–85.
Pronovost, P. J., Wu, A., & Sexton, J. B. (2004). Acute decompensation after removing a central line: Practical approaches to increasing safety in the intensive care unit. Annals of Internal Medicine, 140, 1025–1033.
Rasmussen, J. (1976). Outlines of a hybrid model of the process plant operator. In T. B. Sheridan & G. Johannsen (Eds.), Monitoring behavior and supervisory control (pp. 371–384). New York: Plenum Press.
Reader, T., Flin, R., & Cuthbertson, B. (2008). Factors affecting team communication in the intensive care unit (ICU). In C. P. Nemeth (Ed.), Improving healthcare team communication: Building on lessons from aviation and aerospace (pp. 117–133). Aldershot, UK: Ashgate.
Reason, J. T. (1990). Human error. Cambridge, UK: Cambridge University Press.
Reason, J. T. (1997). Managing the risks of organizational accidents. Aldershot, UK: Ashgate.
Resnick, L. B., Salmon, M., Zeith, C. M., Wathen, S. H., & Holowchak, M. (1993). Reasoning in conversation. Cognition and Instruction, 11, 347–364.
Rizzo, A., Marchigiani, E., & Andreadis, A. (1997). The AVANTI project: Prototyping and evaluation with a cognitive walkthrough based on the Norman's model of action. Designing Interactive Systems (DIS '97) Conference Proceedings (pp. 305–309). New York: ACM Press.
Rogers, W. A., & Fisk, A. D. (2001). Understanding the role of attention in cognitive aging research. In J. E. Birren & K. W. Schaie (Eds.), Handbook of the psychology of aging (5th ed., pp. 267–287). San Diego, CA: Academic Press.
Rogers, W. A., Stronge, A. J., & Fisk, A. D. (2006). Technology and aging. In R. S. Nickerson (Ed.), Reviews of human factors and ergonomics (Vol. 1, pp. 130–171). Santa Monica, CA: Human Factors and Ergonomics Society.
Rothrock, L., & Kirlik, A. (2003). Inferring rule-based strategies in dynamic judgment tasks: Toward a noncompensatory formulation of the lens model. IEEE Transactions on Systems, Man, and Cybernetics, Part A, 33, 58–72.
Rothrock, L., & Kirlik, A. (2006). Inferring fast and frugal heuristics from human judgment data. In A. Kirlik (Ed.), Adaptive perspectives on human-technology interaction (pp. 131–148). New York: Oxford University Press.
Rotundo, M., & Sackett, P. R. (2002). The relative importance of task, citizenship, and counterproductive performance to global ratings of job performance: A policy capturing approach. Journal of Applied Psychology, 87, 66–80.
Rouse, W. B., Cannon-Bowers, J. A., & Salas, E. (1992). The role of mental models in team performance in complex systems. IEEE Transactions on Systems, Man, and Cybernetics, 22, 1296–1308.
Rouse, W. B., & Morris, N. M. (1986). On looking into the black box: Prospects and limits in the search for mental models. Psychological Bulletin, 100, 349–363.
Rousseau, V., Aubé, C., & Savoie, A. (2006). Teamwork behaviors: A review and an integration of frameworks. Small Group Research, 37, 540–570.
Rovira, E., McGarry, K., & Parasuraman, R. (2007). Effects of imperfect automation on decision making in a simulated command and control task. Human Factors, 49, 76–85.
Rumelhart, D. E. (1980). Schemata: The building blocks of cognition. In R. J. Spiro, B. C. Bruce, & W. F. Brewer (Eds.), Theoretical issues in reading comprehension (pp. 33–58). Hillsdale, NJ: Erlbaum.
Salas, E., Cooke, N. J., & Rosen, M. A. (2008). On teams, teamwork, and team performance: Discoveries and developments. Human Factors, 50, 540–547.
Salas, E., Dickinson, T. L., Converse, S. A., & Tannenbaum, S. I. (1992). Toward an understanding of team performance and training. In R. W. Swezey & E. Salas (Eds.), Teams: Their training and performance (pp. 3–29). Norwood, NJ: Ablex.
Salas, E., Sims, D. E., & Burke, C. S. (2005). Is there a big five in teamwork? Small Group Research, 36, 555–599.
Salas, E., Stagl, K. C., & Burke, C. S. (2004). 25 years of team effectiveness in organizations: Research themes and emerging needs. In C. L. Cooper & I. T. Robertson (Eds.), International review of industrial and organizational psychology (Vol. 19, pp. 47–91). West Sussex, UK: Wiley.
Salterio, S. (1996). Decision support and information search in a complex environment: Evidence from archival data in auditing. Human Factors, 38, 495–505.
Sanchez, J., Fisk, A. D., & Rogers, W. A. (2004). Reliability and age-related effects on trust and reliance of a decision support aid. Proceedings of the Human Factors and Ergonomics Society 48th Annual Meeting (pp. 586–589). Santa Monica, CA: Human Factors and Ergonomics Society.
Sarter, N. B. (2006). Multimodal information presentation: Design guidance and research challenges. International Journal of Industrial Ergonomics, 36, 439–445.
Sarter, N. B., & Schroeder, B. (2001). Supporting decision making and action selection under time pressure and uncertainty: The case of in-flight icing. Human Factors, 43, 573–583.
Sarter, N. B., & Woods, D. D. (1994). Decomposing automation: Autonomy, authority, observability and perceived animacy. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 22–27). Mahwah, NJ: Erlbaum.
Sarter, N. B., Woods, D. D., & Billings, C. (1997). Automation surprises. In G. Salvendy (Ed.), Handbook of human factors/ergonomics (2nd ed., pp. 1926–1943). New York: Wiley.
Schank, R. C., & Abelson, R. P. (1977). Scripts, plans, goals, and understanding. Hillsdale, NJ: Erlbaum.
Schank, R. C., & Abelson, R. P. (1995). Knowledge and memory: The real story. In R. S. Wyer, Jr. (Ed.), Knowledge and memory: Advances in social cognition (Vol. 8, pp. 1–85). Hillsdale, NJ: Erlbaum.
Schraagen, J. M., Chipman, S., & Shalin, V. (2000). Cognitive task analysis. Mahwah, NJ: Erlbaum.
Schriver, A. T., Morrow, D. G., Wickens, C. D., & Talleur, D. A. (2008). Expertise differences in attentional strategies related to pilot decision making. Human Factors, 50, 864–868.
Seong, Y., Bisantz, A. M., & Gattie, G. J. (2006). Trust, automation, and feedback: An integrated approach. In A. Kirlik (Ed.), Adaptive perspectives on human-technology interaction (pp. 105–113). New York: Oxford University Press.
Serfaty, D., MacMillan, J., Entin, E. E., & Entin, E. B. (1997). The decision making expertise of battle commanders. In C. Zsambok & G. Klein (Eds.), Naturalistic decision making (pp. 233–246). Hillsdale, NJ: Erlbaum.
Sexton, J. B., & Helmreich, R. L. (2000). Analyzing cockpit communication: The links between language, performance, error and workload. Human Performance in Extreme Environments, 5, 63–68.
Shappell, S., Detwiler, C., Holcomb, K., Hackworth, C., Boquet, A., & Wiegmann, D. A. (2007). Human error and commercial aviation accidents: An analysis using the human factors analysis and classification system. Human Factors, 49, 227–242.
Sheridan, T. B. (2002). Humans and automation: System design and research issues. Santa Monica, CA: Human Factors and Ergonomics Society; New York: Wiley.
Sheridan, T. B., & Parasuraman, R. (2006). Human-automation interaction. In R. S. Nickerson (Ed.), Reviews of human factors and ergonomics (Vol. 1, pp. 89–129). Santa Monica, CA: Human Factors and Ergonomics Society.
Sheridan, T. B., & Thompson, J. M. (1994). People versus computers in medicine. In M. S. Bogner (Ed.), Human error in medicine (pp. 141–158). Mahwah, NJ: Erlbaum.
Simon, H. A., & Chase, W. G. (1973). Skill in chess. American Scientist, 61, 394–403.
Skitka, L. J., Mosier, K. L., & Burdick, M. (1999). Does automation bias decision making? International Journal of Human-Computer Studies, 50, 991–1006.
Skitka, L. J., Mosier, K. L., & Burdick, M. (2000). Accountability and automation bias. International Journal of Human-Computer Studies, 52, 701–717.
Skitka, L. J., Mosier, K. L., Burdick, M., & Rosenblatt, B. (2000). Automation bias and errors: Are crews better than individuals? International Journal of Aviation Psychology, 10(1), 83–95.
Smith, P. J., Bennett, K. B., & Stone, R. B. (2006). Representation aiding to support performance on problem-solving tasks. In R. C. Williges (Ed.), Reviews of human factors and ergonomics (Vol. 2, pp. 74–108). Santa Monica, CA: Human Factors and Ergonomics Society.
Smith-Jentsch, K. A., Campbell, G., Milanovich, D. M., & Reynolds, A. M. (2001). Measuring teamwork mental models to support training needs assessment, development, and evaluation: Two empirical studies. Journal of Organizational Behavior, 22, 179–194.
Smith-Jentsch, K. A., Cannon-Bowers, J. A., Tannenbaum, S. I., & Salas, E. (2008). Guided team self-correction: Impacts on team mental models, processes, and effectiveness. Small Group Research, 39, 303–327.
Sperling, B. K., & Pritchett, A. R. (2005). Information distribution to improve team performance in military helicopter operations: An experimental study. Proceedings of the 13th International Symposium on Aviation Psychology (pp. 706–711). Dayton, OH: Wright State University.
Stout, R. J., Cannon-Bowers, J. A., Salas, E., & Milanovich, D. M. (1999). Planning, shared mental models, and coordinated performance: An empirical link is established. Human Factors, 41, 61–71.
Sunstein, C. R., Kahneman, D., Schkade, D., & Ritov, I. (2001). Predictably incoherent judgments (John M. Olin Law & Economics Working Paper No. 131, 2nd series). Chicago: University of Chicago Law School.
Tenney, Y. J., & Pew, R. W. (2006). Situation awareness catches on: What? So what? Now what? In R. C. Williges (Ed.), Reviews of human factors and ergonomics (Vol. 2, pp. 1–34). Santa Monica, CA: Human Factors and Ergonomics Society.
Tsang, P. S. (2003). Assessing cognitive aging in piloting. In P. S. Tsang & M. A. Vidulich (Eds.), Principles and practice of aviation psychology (pp. 507–546). Mahwah, NJ: Erlbaum.
Tschan, F., Semmer, N. K., Gurtner, A., Bizzari, L., Spychiger, M., Breuer, M., et al. (2009). Explicit reasoning, confirmation bias, and illusory transactive memory: A simulation study of group medical decision making. Small Group Research, 40, 271–300.
Tversky, A., & Kahneman, D. (1971). Belief in the law of small numbers. Psychological Bulletin, 76, 105–110.
Uhlig, P. N., Haan, C. K., Nason, A. K., Niemann, P. L., Camelio, A., & Brown, J. (2001). Improving patient care by the application of theory and practice from the aviation safety community. Proceedings of the 11th International Symposium on Aviation Psychology (pp. 1–9). Columbus: Ohio State University.
Vicente, K. J. (1990). Coherence- and correspondence-driven work domains: Implications for systems design. Behavior and Information Technology, 9, 493–502.
Vicente, K. J. (1999). Cognitive work analysis: Toward safe, productive, and healthy computer-based work. Mahwah, NJ: Erlbaum.
Vicente, K. J. (2003). Less is (sometimes) more in cognitive engineering: The role of automation technology in improving patient safety. Quality Safety Health Care, 12, 291–294.
Vicente, K. J., & Pawlak, S. (1994). Cognitive work analysis for the DURESS II system (CEL 94-03). Toronto, Canada: University of Toronto, Cognitive Engineering Laboratory.
Waag, W., & Bell, H. (1997). Situation assessment and decision-making in skilled fighter pilots. In C. E. Zsambok & G. Klein (Eds.), Naturalistic decision making (pp. 247–254). Mahwah, NJ: Erlbaum.
Walker, N., Fain, W., Fisk, A., & McGuire, C. (1997). Aging and decision making: Driving-related problem solving. Human Factors, 39, 438–444.
Wears, R. L., & Berg, M. (2005). Computer technology and clinical work: Still waiting for Godot. Journal of the American Medical Association, 293, 1261–1263.
Wegwarth, O., Gaissmaier, W., & Gigerenzer, G. (2009). Smart strategies for doctors and doctors-in-training: Heuristics in medicine. Medical Education, 43, 721–728.
Weick, K. E. (1995). Sensemaking in organizations. Thousand Oaks, CA: Sage.
Wickens, C. D., & Alexander, A. L. (2009). Attentional tunneling and task management in synthetic vision displays. International Journal of Aviation Psychology, 19, 182–199.
Wickens, C. D., & Flach, J. M. (1988). Information processing. In E. L. Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 110–156). San Diego, CA: Academic Press.
Wiegmann, D. A., Goh, J., & O'Hare, D. (2002). The role of situation assessment and flight experience in pilots' decisions to continue visual flight rules flight into adverse weather. Human Factors, 44, 189–197.
Wiggins, M. W., & Bollwerk, S. (2006). Heuristic-based information acquisition and decision making among pilots. Human Factors, 48, 734–746.
Wilkinson, S. (1994). The November Oscar incident. Smithsonian Air and Space, 8(6), 80–87.
Will, R. P. (1991). True and false dependence on technology: Evaluation with an expert system. Computers in Human Behavior, 7, 171–183.
Winograd, T. (1989, Spring). In R. Davis (Ed.), Expert systems: How far can they go? Part One. AI Magazine, 61–67.
Woods, D. D. (1996). Automation: Apparent simplicity, real complexity. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 1–7). Norwood, NJ: Erlbaum.
Woods, D. D., & Sarter, N. B. (2000). Learning from automation surprises and "going sour" accidents. In N. B. Sarter & R. Amalberti (Eds.), Cognitive engineering in the aviation domain (pp. 327–353). Mahwah, NJ: Erlbaum.
Wright, M. C., & Endsley, M. R. (2008). Building shared situation awareness in healthcare settings. In C. P. Nemeth (Ed.), Improving healthcare team communication: Building on lessons from aviation and aerospace (pp. 97–114). Aldershot, UK: Ashgate.
Wright, M. C., & Kaber, D. B. (2005). Effects of automation of information-processing functions on teamwork. Human Factors, 47, 50–66.
Xiao, Y., Milgram, P., & Doyle, D. J. (1997a). Capturing and modeling planning expertise in anesthesiology. In C. Zsambok & G. Klein (Eds.), Naturalistic decision making (pp. 197–215). Mahwah, NJ: Erlbaum.
Xiao, Y., Milgram, P., & Doyle, D. J. (1997b). Planning behavior and its functional role in interactions with complex systems. IEEE Transactions on Systems, Man, and Cybernetics: Part A, Systems and Humans, 27, 313–324.
Yates, J. F., & Stone, E. R. (1992). The risk construct. In J. F. Yates (Ed.), Risk-taking behavior (pp. 1–25). Chichester, UK: Wiley.
Zeide, J. S., & Liebowitz, J. (1987). Using expert systems: The legal perspective. IEEE Expert, 2(1), 19–22.
Zhang, J., Patel, V. L., Johnson, T. R., & Shortliffe, E. H. (2004). A cognitive taxonomy of medical errors. Journal of Biomedical Informatics, 37, 193–204.
Zsambok, C. E., & Klein, G. (Eds.). (1997). Naturalistic decision making. Mahwah, NJ: Erlbaum.
ABOUT THE AUTHORS
Kathleen L. Mosier is a professor of psychology at San Francisco State University and 2009–2010 president of the Human Factors and Ergonomics Society. She received her PhD in industrial/organizational psychology from the University of California at Berkeley and spent 7 postdoctoral years at NASA Ames Research Center as a senior research scientist. She has worked in aviation human factors for 25 years, focusing on the impact of automated technologies on judgment and decision making in the air transport cockpit. Her early work was translated into a decision training module for airline pilots. She coined the term automation bias to refer to the use of automation as a heuristic replacement for vigilant information seeking and processing and to the errors associated with it. Her most recent work explores the impact of technology in the NextGen aviation environment. Mosier is an associate editor for the Journal of Cognitive Engineering and Decision Making, is on the editorial boards of Human Factors and the International Journal of Aviation Psychology, and is a past president of the Association for Aviation Psychology.
Ute M. Fischer is a research scientist in the School of Literature, Communication and Culture at the Georgia Institute of Technology. After receiving her PhD in cognitive psychology from Princeton University, she was a postdoctoral fellow and then a senior research scientist at NASA Ames Research Center. Her research over the past 19 years has examined how experts and teams of experts make decisions in complex, high-technology environments and how social and organizational factors affect individual and team cognition, particularly during commercial aircraft and space operations. Her work on decision making, conducted jointly with Judith Orasanu, has focused on the strategies that flight deck crews employ to cope with the dynamic nature, ambiguity, and complexity of decision problems and has led to a model of aviation decision making. Her research on team communication concerns linguistic indicators of team functioning and has identified communication strategies and interaction patterns that support team performance. Fischer has worked closely with safety representatives from airlines, and her research findings have led to training recommendations and ultimately to changes in pilot training at various domestic and international commercial air carriers. In her current work she is developing, in collaboration with Deepak Jagdish and Abbas Attarwala, a Tablet PC–based tool to facilitate real-time coding of individual or team behavior.