JUDGMENT AND DECISION-MAKING
T. D. Gilovich and D. W. Griffin
Making judgments and decisions are the most important things people do. People must assess adversaries and decide to fight, flee, or raise the white flag. They must size up the pool of potential mates and choose one to woo—or decide to stay out of the game altogether. And in the modern world, they must evaluate and choose careers, cell phone providers, plasma versus LCD televisions, even what religion to raise the kids. Decision making, defined as the choice of one path among many based on an evaluation of the possible outcomes, necessarily involves judgment—the evaluation process itself—but judgment also occurs in the absence of choice. Given the centrality of judgment and decision making to nearly everything that is important in life, it stands to reason that scholars from a number of different disciplines have been interested in understanding how people go about judging and deciding. This means that judgment and decision making is something of an orphan field, lacking a dedicated and exclusive academic home. Notable contributions to the field have been made by cognitive psychologists (Kahneman & Tversky, 1974, 1979; Kelley & Jacoby, 1998; Markman & Medin, 1995; Shafir, Simonson, & Tversky, 1993; Sloman, 1996; Slovic & Lichtenstein, 1968), social psychologists (Dawes, 1988; Miller & Taylor, 1995; Nisbett & Ross, 1980; Wilson & Gilbert, 2003), economists (Camerer, 1990; Loewenstein, 1987, 1996; Thaler, 1980), marketing scholars (Johnson, Hershey, Meszaros, & Kunreuther, 1993; Simonson, 1989), and academics in a host of other disciplines (Rachlinski, 1998; Redelmeier, Koehler, Liberman, & Tversky, 1995; Simon, 1957; Ubel, Spranca, DeKay, Hershey, & Asch, 1998). But to have many homes can lead to uncertain membership in any one family. And, in one sign of judgment and decision making's quasi-orphan status, questions have
been repeatedly raised about whether some of the core topics in judgment and decision making should be included in social psychology textbooks and taught in social psychology survey courses. Updated and put in terms more relevant to the present volume, why is there—or should there even be—a chapter on judgment and decision making in The Handbook of Social Psychology? We believe that the connections between the fields of social psychology and judgment and decision making (JDM) run deep, and that to exclude judgment and decision making from the study of social psychology would be to rob social psychology of much of what is important in everyday experience. One way to make the case that there is a natural connection between the two fields would be to list all of the topics or findings in JDM with direct and powerful implications for social life. It would be a long list. We have chosen to make the case differently. We aim to clarify the substantive overlap between the two fields by highlighting the tremendous influence they have had on one another, thus showing how the two fields are inextricably linked, to the benefit of both. More specifically, after a brief history of the development of JDM, we illustrate the deep connections between social psychology and JDM in three ways. We first discuss three important ideas from JDM that have shaped how social psychologists think about longstanding issues in their field. We then examine three ideas from social psychology that have had an enduring impact on the study of JDM. To further show the strong and natural links between the two fields, we then discuss three important ideas and areas of investigation that have arisen independently in each field, with mutually reinforcing effect. We end with some thoughts about the application of the ideas
developed in social psychology and JDM to solving some of the most pressing problems confronting the world today.
Prologue: Three Founding Stories
Before discussing the rich interconnections between JDM and social psychology, we first look back at the origins of the field of judgment and decision making. The founders of modern social psychology (Kurt Lewin, Solomon Asch, Fritz Heider, and Leon Festinger among them) frequently explored such questions as how people decide whether to join a group, go along with others, or change their previous opinions. Notably, Festinger wrote a book entitled "Conflict, Decision, and Dissonance" (1964), and Lewin authored an influential chapter entitled "Group Decision and Social Change" (1952). The study of judgment was also central to such classic social psychological topics as attitude change and prejudice toward outgroups. When Sherif and Hovland (1961) developed Social Judgment Theory, they drew upon perceptual psychology and the concepts of assimilation and contrast to explain when persuasive messages are likely to be accepted or rejected. Our opening conundrum thus arises again: If theories about decision making and judgment have always been intrinsic to social psychology, why do we need to chart a separate path of influence on the part of the "field" of JDM? And why was this influence on social psychology most pronounced during the 1970s? Surprisingly, given the long interest in decision making among scholars in a wide range of different disciplines, the field of JDM as we know it today is a relatively modern invention, with its birth in the 1950s. The modern field is defined by the linkage of the study of actual behavioral tendencies with the specification of formal mathematical
models of judgment and decision-making developed in more prescriptive fields such as statistics, economics, and the philosophy of logic. It is the tension between the careful analysis of how judgments and decisions ought to be made and the careful observation of how decisions are actually made that defines the modern field. Formal models of judgment and decision making are built up from a set of fundamental axioms that represent the most basic building blocks of logical analysis as applied to uncertainty, valuation, and choice among alternatives. It is astonishing to realize that these axiomatic models that were built up in mathematics, philosophy, and economics are in large part inventions of the 20th century and were not fully presented until the 1950s. This explains why JDM—as the confluence of formal models and psychological description—was not born until the 1950s and consequently only started to exert its full influence on social psychology in the 1970s and 1980s.
The First Behavioral Economist. The first tale of the founding of JDM focuses on Herbert Simon, a Nobel prize-winner in Economics who, paradoxically, was one of the fiercest critics of micro-economic theory. Simon was trained in the field of public administration, and was interested in modeling how bureaucracies actually worked (a goal more focused on "description" than on "prescription"). The phenomena that Simon and his colleagues observed could be described as "muddling through"—large organizations seemed to operate on simple rules of thumb in an environment in which no one person or department knew everything, but somehow everyone knew just enough to produce a satisfactory overall outcome when the individual contributions were added together. Such organizational behavior was strikingly at variance with the dominant
economic models of the time (and today, in fact), which posited that organizations and humans were rational (in the economic sense of making ideal, profit-maximizing decisions, not in the Freudian sense of being in touch with reality). The heart of Simon's critique was that full economic rationality was simply an unrealizable model for human judgment and decision making. The "rational man" at the center of his critique was a 20th century invention built upon advances in statistics and choice theory offered by von Neumann and Morgenstern (1944) and Savage (1954), and developed by Nash (1950) into modern game theory. In essence, the rational models required the decision maker to consider every possible action, the outcome of every possible action in every possible future state of the world and the probability of that state, and calculate the choice that would lead to the best outcome (and in the case of game theory, to correctly forecast how others would respond to each action). Simon (1955) noted that these theories were computationally unrealistic as either guides to or descriptions of actual human decision making because they required prodigious knowledge, an immense calculation ability that surpassed the capabilities of any computer at that time, and perfect prescience on the part of the decision maker regarding his or her own (future) preferences. Simon did not build his theory on specific psychological principles or processes: he explicitly noted that psychological theories of choice processes were not yet sufficiently developed to inform economics. Instead he used general psychological principles to outline some broad, realistic constraints on rational models as models of actual decision-making. These general psychological principles reflected the zeitgeist of cognitive psychology at the time, which focused on the limits of memory and attention.
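In schematic form, the requirement Simon objected to can be written as a standard expected-utility maximization (our notation, not Simon's):

$$a^{*} = \arg\max_{a \in A} \sum_{s \in S} p(s)\, u\bigl(o(a,s)\bigr),$$

where $A$ is the set of every available action, $S$ the set of every possible future state of the world, $p(s)$ the probability of state $s$, $o(a,s)$ the outcome of taking action $a$ in state $s$, and $u$ the decision maker's utility function. Every term must be enumerated and evaluated before any choice is made, which is precisely the computational burden Simon regarded as unrealistic.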
Simon's realistic constraints set the stage for the field of JDM as we know it today. Most generally, he asserted that people cannot—and do not want to—carry out the complex and time-consuming calculations necessary to determine the ideal choice out of all possible actions. Instead, they simplify the choice process by searching for a satisfactory outcome. This satisficing generally consists of three elements: a strategy that examines local or easy options before looking further afield, a stopping rule that specifies an aspiration level that must be met and hence how far afield the search should continue, and a simplified assessment of future value that provides a rather vague clue as to the actual value of the choice.
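A minimal sketch of these three elements in code (our illustration, not Simon's formalism; the options, ratings, and aspiration level are all hypothetical):

```python
def satisfice(options, aspiration, estimate_value):
    """Examine options in order (local/easy ones first) and stop at the
    first one whose rough value estimate meets the aspiration level."""
    best_so_far = None
    for option in options:              # search strategy: nearby options first
        value = estimate_value(option)  # simplified, rough assessment of value
        if value >= aspiration:         # stopping rule: good enough, stop here
            return option
        if best_so_far is None or value > estimate_value(best_so_far):
            best_so_far = option
    return best_so_far  # nothing met the aspiration; settle for the best seen

# Hypothetical usage: take the first apartment rated at least 7 out of 10.
apartments = ["downtown studio", "suburban flat", "shared loft"]
ratings = {"downtown studio": 6, "suburban flat": 8, "shared loft": 9}
print(satisfice(apartments, aspiration=7, estimate_value=ratings.get))
# -> "suburban flat": search stops before reaching the even-better loft
```

The contrast with the optimizing model above is the point: an optimizer would scan every option and pick the loft; the satisficer stops as soon as the aspiration level is met.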
There is another less well-known side to Simon's critique: he also emphasized that such simplified methods of choice can do surprisingly well relative to optimizing methods, and that "bounded rationality" could still be evolutionarily successful. Simon offered the field of economics (at least) two other familiar psychological insights that were to echo repeatedly in the development of JDM. First, the human mind (as well as the aggregate mind of the organization) can only hold on to two or three alternatives at one time. Second, attention is a precious and costly commodity, a fact that must be considered in any description of how judgment and choice processes actually operate. Thus, in the vocabulary later introduced by Kahneman and Tversky, Simon had both a negative agenda (explaining how ideal, rational models were unrealistic and descriptively invalid) and a positive agenda (providing guidelines as to how humans—and animals—might actually make highly sensible, if simplified, choices). Simon was thus the first acknowledged "behavioral economist" who strove to incorporate psychological realism into economic models and explanations.

The second founding tale of JDM focuses on the individual most responsible for creating an information flow in the opposite direction, importing economic and statistical models into psychology.

The First Behavioral Decision Theorist. In 1954, Ward Edwards published a review paper that introduced the new formal theories of decision making to the broader field of psychology (Edwards, 1954). He introduced the now-formalized Subjective Expected Utility Model (SEU) of decision making, with its distinctions between objective value (e.g., money) and subjective utility, and between objective probability (e.g., the proportion of 6's expected when rolling a fair die) and subjective probability (a personal belief about the likelihood that a 6 will turn up). Although the notion of utility had been part of economics at least since Bentham (1789), it received a sharp twist in the new formal models (Kahneman, Wakker, & Sarin, 1997). Now, instead of referring to the pleasure or pain a person received from an outcome—that is, experienced utility, as originally used by Bentham—it referred to the predicted utility associated with a given choice. In a follow-up 1961 review, Edwards coined the term Behavioral Decision Theory and reviewed the now burgeoning literature on empirical tests of the foundations of SEU and related models (Edwards, 1961). Edwards also introduced Bayesian statistical methods to psychologists in a review co-authored with a leading statistician (Edwards, Lindman, & Savage, 1963). In addition to providing influential reviews of formal decision models and relevant empirical evidence, Edwards and his colleagues also conducted programs of empirical research aimed at reconciling formal models and actual behavior. Notably, he
studied how gamblers used probabilities in Las Vegas and how computer programs might bridge the gap between actual and ideal decision-making. He chronicled a long list of failures of the formal models to match actual judgment and decision making, noting that people reacted differently to gains and losses, and seemed to be most responsive to comparative values rather than absolute values, consistent with Lewin's expectancy-value model of aspiration level (Lewin, Festinger, Dembo, & Sears, 1944). However, because of his deep interest in Decision Analysis—the new sub-discipline of using formal tools and models to provide useful guides to decision makers—Edwards was reluctant to discard what quickly became known as the "classical" formal models and preferred to use them as approximations to human judgment and decision making. Thus, although his dissertation began with the clear statement, "People in gambling situations do not make choices in such a way as to maximize their expected winnings or minimize their expected losses" (Edwards, 1953), Edwards continued to use the formal models throughout his career as the core of his explanatory frameworks and maintained an optimistic outlook as to the perfectibility of human judgment through decision aids. Famously, he and his colleagues concluded that human judgment could be characterized as "approximately Bayesian," implying that people largely followed the rules of subjective probability (or Bayesian probability) as defined in SEU theory.

The Third Path to the Psychology of Judgment and Decision Making. The Heuristics and Biases program of research instigated by Daniel Kahneman and Amos Tversky has come to define JDM in many social psychology textbooks and in the minds of many social psychologists, not least because of the Nobel Prize awarded to Kahneman in 2002 for their joint work. Although the program grew out of the zeitgeist created by
Simon, Edwards, and many others, it had a radically different agenda. The program began when Tversky, a mathematical psychologist who had worked with Edwards and others on formal measurement models, described the current state of the Behavioral Decision Theory paradigm circa 1968 to Kahneman, his colleague in the Psychology Department at Hebrew University. Kahneman found the idea of tinkering with formal models such as SEU to make them fit the accumulating empirical evidence to be an unpromising approach to understanding the psychological processes involved in judgment and choice. Instead, he argued, based on his own research on visual attention and processing, the principles of cognition underlying judgment should follow the principles of perception. Thus, instead of starting with formal models as the basis of descriptive accounts of judgment and decision making, Kahneman and Tversky started with principles of perception and psychophysics and extended them to the kind of processing necessary to evaluate probabilities and assess subjective values. This approach immediately suggested a guiding paradigm for research on judgment and decision-making: the study of visual illusions. The logic of studying perceptual illusions is that failures of a system are often more diagnostic of the rules the system follows than are its successes. Consider, for example, the moon illusion: the full moon looks enormous as it sits on the horizon, but more modestly moon-sized when high in the sky. There is little to learn from the constancy of the perceived size of the moon along the long arc of the overhead sky, but its illusory magnification when it sits on the horizon provides insight about the way that the visual system uses contextual detail to compute perceived distance and hence perceived size. The visual illusion paradigm, like the cognitive illusion approach patterned on it, does not imply that judgments of size are
typically wrong—in fact, it provides a map to those situations when intuitive perceptions are likely to be correct—but it highlights the processes by which perception or judgment is constructed from imperfect cues. Thus, the resulting guiding logic in the study of judgment was in practice the opposite of the approach championed by Simon, who had urged researchers to seek out and understand the environmental factors that maximized the success of simple processes. The cognitive illusion paradigm seeks out those environments or problem descriptions in which the judgment and choice processes people rely on lead to clear errors. The purpose was not to emphasize the predominance of bias over accuracy, but to find the clearest testing grounds for diagnosing the underlying simple processes or judgmental heuristics that people habitually employ. The heuristics that Kahneman and Tversky identified were also suggested by the principles of perceptual psychology, especially the organizing principles of Gestalt Psychology (e.g., Koffka, 1935). Gestalt Psychology emphasized how the perceptual system effortlessly and without awareness creates whole forms even when the information reaching the receptors is incomplete and indeterminate. According to the Heuristics and Biases approach—and as we shall see, according to the pertinent evidence—these underlying heuristics are not a simplified version of an ideal statistical analysis, but something completely different. This constituted a key point of differentiation between the H&B model and others before it: "In his evaluation of evidence, man is apparently not a conservative Bayesian: he is not Bayesian at all" (Kahneman & Tversky, 1972, p. 450). Unfortunately, or so it seems to us, this statement was taken by some to imply that the Heuristics and Biases (hu)man was not simply un-Bayesian, but rather stupid.
In a second phase of their collaborative research, Kahneman and Tversky took the perceptual framework they had used to study probability judgment and used it to illuminate decision-making, leading to their most complete and formal model, Prospect Theory (Kahneman & Tversky, 1979). Here, fundamental perceptual principles such as comparison levels and adaptation (Helson, 1964), diminishing sensitivity, and the privileged status of pain served as the primitives of a model that once again used specific biases and errors as tools of diagnosis. It is illuminating to compare the evolutionary implications of Simon's Bounded Rationality and the Heuristics and Biases approach. For Simon, the guiding evolutionary principle was computational realism (i.e., simplified approximation) that nonetheless was well adapted to the information environment. For Kahneman and Tversky, the guiding evolutionary principle was that existing processes in perceptual analysis were co-opted as tools for higher-level cognitive processing. Although these tools might work well in many environments, they also lead to signature biases that are endemic to human intuition. In many cases, the biases that to Kahneman and Tversky were signals of underlying heuristics were already well known. For example, Meehl and Rosen (1955) had warned clinicians of the danger of neglecting base rates in psychological diagnoses.
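Meehl and Rosen's point can be made concrete with a worked example (the numbers here are ours, chosen purely for illustration). Suppose a clinical sign appears in 80% of patients who have a rare condition and in 10% of those who do not, and the condition's base rate is 2%. Bayes' rule gives

$$P(\text{condition}\mid\text{sign}) = \frac{(.80)(.02)}{(.80)(.02) + (.10)(.98)} = \frac{.016}{.114} \approx .14,$$

so even after observing the seemingly diagnostic sign, the odds are better than 6 to 1 that the patient does not have the condition; ignore the base rate and one will overdiagnose dramatically.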
In other cases, the biases were identified by informal observation, whether of psychologists who seemed to neglect power and underestimate sample sizes, Israeli army officers who neglected regression effects in determining the value of rewards versus punishment, or army selection personnel who maintained their belief in the efficacy of interviews despite statistical evidence to the contrary.

Without these three founding stories, the field of JDM would look very different than it does today, if indeed it existed at all. Of course, there are many other influences that shaped JDM and, through it, social psychology. For example, the study of logical reasoning in cognitive psychology became an important strand of JDM as errors in reasoning became a focus of study.
As we shift the focus back to social psychology, we are now in a position to answer the "Why and When" questions. JDM had its impact on social psychology starting in the late 1960s and early 1970s for three main reasons. First, formal theories of judgment and decision making blossomed in the 1950s and provided a vocabulary for talking about and studying the processes of judgment and decision making. Second, the work of the three founding paradigms described above provided a new and stimulating way to think about rationality and error in human thought and behavior. Third, the methodology of the Heuristics and Biases framework—simple pencil-and-paper demonstrations of judgment and decision errors—was easily extended to the investigation of issues central to social psychology, especially the cognitive social psychology of the 1970s.
SOCIAL PSYCHOLOGY’S DEBT TO JUDGMENT AND DECISION MAKING
If the field of judgment and decision making had never existed, social psychology would be different in a number of ways from what it is now. Social psychologists would have to reach for different terms to characterize the phenomena they study if their field's accessible lexicon did not include such JDM terms as base rates, counterfactuals, heuristics, illusory correlation, "a proper Bayesian," and countless others. And the
acknowledged importance of such things as heuristics, counterfactuals, and illusory correlations also means that social psychologists must now confront journal editors and reviewers armed with additional alternative interpretations of experimental results. Social psychologists must now show that their findings are not "just" the reflection of a particular heuristic, a possible counterfactual comparison, or a quasi-Bayesian analysis. Even the need to rule out regression artifacts—an important caution in social psychology methods courses long before the field's connection to JDM—would likely not be as accessible, and hence raised as often, if the regression fallacy and the psychological mechanisms that give rise to it were not such prominent topics in JDM. To illustrate how much JDM has influenced what the field of social psychology looks like, how social psychologists think, and how they conduct their research—to highlight the debt that social psychology owes to JDM—we discuss three particularly important ideas developed in JDM that have found their way to social psychology. In particular, we discuss how the notion of confirmation bias, the concept of heuristics, and the application of normative theories largely originated in the field of JDM and came to influence theoretical development and empirical research in social psychology.
Confirmation Bias
Suppose a friend gives you a bunch of hostas from her garden for you to plant in yours. "I don't really know what I'm talking about," she adds, "but I suspect they need a lot of water. You might want to test that out." How would you conduct your test? If you're like most people, you would plant them, give them a lot of water, and see how
they fare. What you would not do is give a lot of water to some, very little to the others, and compare the results. In testing the proposition this way, your actions would reflect a common tendency in inductive reasoning typically referred to as a confirmation bias (Beyth-Marom & Fischhoff, 1983; Crocker, 1982; Klayman & Ha, 1987; Mynatt, Doherty, & Tweney, 1977; Nickerson, 1998; Oswald & Grosjean, 2004; Skov & Sherman, 1986). When evaluating a proposition (hostas need a lot of water; happy people live longer; Japanese-Americans are more self-critical than European-Americans), we more readily, reliably, and reflexively look for evidence that would support the proposition than evidence that would contradict it. The nomenclature is important, but potentially misleading. There are times when one wants a given proposition to be true and so one energetically, and not disinterestedly, sifts through the pertinent evidence in an effort to uncover information that confirms its validity (Dawson, Gilovich, & Regan, 2002; Ditto & Lopez, 1992; Ditto, Scepansky, Munro, Apanovich, & Lockhart, 1998; Gilovich, 1983, 1991; Hsee, 1995, 1996; Kruglanski & Webster, 1996; Kunda, 1990; Lord, Ross, & Lepper, 1979; Pyszczynski & Greenberg, 1987). But that is not the case in most investigations of the confirmation bias. The participants have no stake in the outcome and are not trying to confirm the proposition they are testing. But they nevertheless tend to look for and examine information that would fit the proposition being tested more than information that would contradict it. In part to head off any confusion between these two very different strategies, some have suggested the term positive test strategy for the disinterested form of this tendency (Klayman & Ha, 1987). Interest in the confirmation bias among JDM researchers was sparked by a pair of experimental paradigms pioneered by Peter Wason. In the first, participants were told
that the experimenter had in mind a rule specifying acceptable sets of three integers, and that one acceptable set was "2 4 6." The participants were to generate their own sets of three integers and the experimenter would indicate if each one satisfied the rule. Participants were allowed to record their sets and the experimenter's response, and they were to tell the experimenter what they thought the rule was once they were sure they had figured it out. Performance on the task was not impressive, with an apparent confirmation bias getting in participants' way (Wason, 1960). That is, participants tended to generate only positive instances of the rules they were entertaining, which, given the rule Wason had in mind ("any ascending sequence"), made it virtually impossible for them to be disabused of their hypotheses. For example, someone with the hypothesis "equally spaced ascending integers" would tend to offer sets such as "10 15 20" or "25 50 75." This would result in consistent feedback that the set fit the rule, increased confidence on the part of participants that they had figured it out, and then consternation when they learned that their assessment was incorrect. The tendency to construct positive tests of their hypotheses, in other words, made it difficult for them to discern that they weren't on the right track when a negative test would readily have done so (the experimenter would have said, for example, that "10 11 15" fit the rule, making it clear that "equally spaced ascending integers" was not on the mark).
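A small simulation makes the logic of positive versus negative tests concrete (our illustration; the hypothesis and the true rule come from Wason's example):

```python
def true_rule(triple):
    """Wason's actual rule: any ascending sequence."""
    a, b, c = triple
    return a < b < c

def hypothesis(triple):
    """A typical participant hypothesis: equally spaced ascending integers."""
    a, b, c = triple
    return a < b < c and (b - a) == (c - b)

# Positive tests: triples chosen to FIT the hypothesis. Both rules say yes,
# so the feedback can never reveal that the hypothesis is too narrow.
for t in [(10, 15, 20), (25, 50, 75)]:
    print(t, true_rule(t), hypothesis(t))   # True True -- uninformative

# A negative test: a triple the hypothesis rejects. The experimenter's "yes"
# (true_rule is True) immediately falsifies "equally spaced ascending".
t = (10, 11, 15)
print(t, true_rule(t), hypothesis(t))       # True False -- diagnostic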
The second paradigm, one that has inspired hundreds of replications, has come to be known as the Wason selection task. In the usual variant, participants are shown four cards (often just pictures of 4 cards on a piece of paper or computer screen), each said to have a number on one side and a letter on the other. The participant's task is to specify which cards need to be turned over to determine whether a given rule is valid—say, "All cards with a vowel on one side have an even number on the other." In this case, participants might be shown cards with "A," "B," "2," and "3" face up (Wason & Johnson-Laird, 1972). What Wason and many others have found is that a considerable majority of participants state that either the "A" card has to be turned over or the "A" card and "2" card (Evans, 2007; Evans, Newstead, & Byrne, 1993; Wason & Johnson-Laird, 1972). The latter response appears to reflect the use of a positive test strategy. The rule says that all vowels have an even number on the other side, and so one looks at the vowel and even-number cards to see if evidence to support the rule is obtained. Here too people don't tend to pursue a disconfirmatory strategy that would allow them to adequately assess the rule—they don't turn over the potentially decisive "3" card (which, if it had a vowel on the other side, would invalidate the rule).
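The normative analysis is simply the truth table of the material conditional: a rule of the form "if P then Q" is falsified only by a case of P together with not-Q,

$$P \rightarrow Q \text{ is false} \iff P \wedge \neg Q.$$

Only the "A" card (P: a vowel, which might conceal an odd number) and the "3" card ($\neg Q$: an odd number, which might conceal a vowel) can falsify the rule; whatever is on the reverse of the "B" and "2" cards, the rule survives.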
Content and Domain Influences on Hypothesis Testing. As one would expect, performance on the Wason selection task is improved if the rule to be evaluated is more engaging than "every card with a vowel on one side has an even number on the other" (Cheng & Holyoak, 1985; Cox & Griggs, 1982; Holyoak & Cheng, 1995; Johnson-Laird, Legrenzi, & Legrenzi, 1972; Wason & Shapiro, 1971; Yachanin & Tweney, 1982). But it is not a case of simply making the content of the task more like "the real world." Instead, there appear to be two broad classes of selection tasks, indicative and deontic, and thematic content tends to have a much bigger facilitatory effect on the latter (Evans, 2007; Manktelow & Over, 1991). Indicative rules refer simply to empirical regularities—e.g., all individuals with red hair have freckles. Deontic rules involve permissions or obligations—e.g., everyone riding in a car must wear a seatbelt. People are much more likely to select the equivalent of the "3" card for deontic rules rich in content. The best known and most widely cited example is the drinking age problem: "if a person is drinking beer in a bar, then he or she is 21 or older." Most people find this problem easy. If shown the four statements "drinking beer," "drinking soda," "22 years old," and "16 years old," they quickly and reliably select "drinking beer" and "16 years old"—here, in other words, they quite naturally look for disconfirmatory evidence (Griggs & Cox, 1982). Evolutionary psychologists have interpreted this result as evidence of an innate module dedicated to reasoning about social contracts and the detection of "cheaters" (Cosmides, 1989; Pinker, 1997; Tooby & Cosmides, 1992). This claim has been criticized on a number of grounds (Cheng & Holyoak, 1989; Evans & Over, 1996; Fodor, 2000; Sperber, Cara, & Girotto, 1995; Sperber & Girotto, 2002) and, without independent substantiating evidence, it rests on an invalid chain of logic. That is, the claim that enhanced performance on social-contract versions of the selection task is the result of an evolved, domain-specific module is essentially a claim that the following syllogism is valid:
Performance on most versions of the Wason selection task is poor.
Performance on social-contract versions of the task is quite good.
Therefore, the ability to reason through social-contract versions is handled by a domain-specific module evolved through natural selection.
Exactly why deontic versions of the selection task are typically much easier remains an issue of great interest and controversy. Many argue for an evolutionary account (Fiddick, Spampinato, & Grafman, 2005; Stone et al., 2002), whereas others maintain that it stems from differential cuing (Sperber & Girotto, 2002) or the subtly different goals that are evoked by deontic and indicative versions of the selection task (Evans & Over, 1996; Manktelow & Over, 1991; Oaksford & Chater, 1994).

Hypothesis Testing in Social Life. Whatever the cause of the pronounced content effects on performance on selection tasks, social psychologists were quick to see the relevance of the confirmation bias for everyday social life. In one often-cited study, Snyder and Swann (1978) asked half of their participants to interview a target individual and ascertain whether she was an extravert. The remaining participants were to ascertain whether the target was an introvert. Participants selected their interview questions from a list provided. Those charged with ascertaining whether the target was an extravert tended to ask questions that focused on sociability ("In what situations are you most talkative?") whereas those charged with ascertaining whether the target was an introvert tended to ask questions that focused on social withdrawal ("In what situations do you wish you could be more outgoing?"). In an important wrinkle that shows the powerful implications of relying on such an information-search strategy, Snyder and Swann tape-recorded the interview sessions, edited out the questions so that only the responses remained, and then played the responses to another, uninformed set of participants. These latter participants rated those who had been interviewed by someone testing whether they were extraverted as more outgoing than those who had been interviewed by someone testing whether they were introverted.
This work and much that followed showed how a positive test strategy can elicit behavior that confirms, erroneously, the very hypothesis being tested. Because people tend to be agreeable and hence somewhat acquiescent in most social interactions, they tend to respond in ways that accept the thrust of the questions they are asked (Zuckerman, Knee, Hodgins, & Miyake, 1995). Extending this idea further, social psychologists were particularly interested in exploring how these processes can cement and exacerbate erroneous stereotypes. Entertaining the possibility that members of a particular group might conform to a prevailing stereotype can lead people to behave toward them in ways that elicit stereotype-consistent behavior (Hebl, Foster, Mannix, & Dovidio, 2002; Word, Zanna, & Cooper, 1974). The idea of a confirmation bias has also been invoked to advance social psychologists' understanding of a topic of longstanding interest to the field, social comparison. Mussweiler (2003) argues that social comparison involves a two-step process. First, one makes a quick, holistic assessment of whether one is similar to or different from a comparison target. For example, one might make a snap implicit judgment that one is different from someone who is the opposite gender, a different race, or who is thought of as an extreme exemplar (Albert Einstein, Angelina Jolie, Lebron James). Second, one tends to look for similarities between the self and similar targets, but dissimilarities between the self and dissimilar targets—that is, to seek out confirmatory information—with predictable effects on self-assessment. In one notable study in support of this model, Mussweiler, Ruter, and Epstude (2004) had participants think about their own athletic ability while being subliminally primed with the names of individuals regarded as extremely high or low in athleticism
(Michael Jordan, Pope John Paul II), or moderately high or low in athleticism (racecar driver Niki Lauda, Bill Clinton). Participants were then asked to estimate how many pushups they could do and how fast they could run 100 meters. What Mussweiler and colleagues found was that participants primed with moderate exemplars looked for similarities between themselves and the targets, resulting in assimilation. Those primed with Niki Lauda thought they were stronger and faster than those primed with Bill Clinton. Those primed with extreme exemplars, in contrast, looked for dissimilarities between themselves and the targets, resulting in contrast. Those primed with Michael Jordan thought they were weaker and slower than those primed with the Pope. Further studies have shown that these effects are the product of the enhanced accessibility of target-consistent self-knowledge under conditions that foster similarity testing, and target-inconsistent self-knowledge under conditions that foster dissimilarity testing (Dijksterhuis et al., 1998; Mussweiler & Bodenhausen, 2002; Mussweiler et al., 2004).
Heuristics
If one were to identify the single most important event that served to tie together the fields of social psychology and JDM, it would surely be Tversky and Kahneman's short (7-page) paper in Science in 1974. In that paper, Tversky and Kahneman described three heuristics—anchoring, availability, and representativeness—that influence judgment in a stunningly wide range of areas and that provide a unifying explanation of otherwise isolated phenomena. Heuristics were described as "rules of thumb" that provide serviceable, but imperfect, answers to difficult judgment problems. They were contrasted with algorithms, which are more labor intensive but yield precise
and perfectly accurate answers. For example, one can determine the number of people in a lecture hall by exhaustively counting each person in attendance (the algorithmic solution), or one can simply estimate the average number of people in a representative row and multiply by the estimated number of rows (the heuristic solution). The heuristic in this example is a narrow one that applies only to the special purpose of estimating attendance in an auditorium. One of the reasons that Tversky and Kahneman's work had such impact is that they identified a number of general-purpose heuristics that apply to all sorts of judgments. From the moment these three heuristics were introduced, most readers tended to think of them as serving an effort-saving role, an interpretation that helped inspire and give shape to early "dual process" theories in social psychology (to be described below), such as the Elaboration Likelihood (Petty & Cacioppo, 1986) and Heuristic-Systematic (Chaiken, 1980) Models of persuasion. According to these models, a motivated person is likely to respond to a persuasive message by carefully processing all of the information it contains and its implications, and hence is likely to be influenced (or not) by the soundness of the arguments presented. A less motivated person is inclined to give the message minimal attention, and hence is likely to be influenced by such superficial features as the attractiveness of the communicator or the number (rather than the quality) of the arguments presented. Later refinements of dual process accounts of cognition postulate that the two processes or systems of thought operate in less of an "either-or" fashion (Epstein, 1991; Evans, 2007; Gilbert, 1999; Kahneman & Frederick, 2002; Sloman, 1996; Stanovich, 1999; Strack & Deutsch, 2004). These later models stipulate that the two systems operate
in parallel, with the relative impact of each determined by the extent to which the characteristics of the problem at hand activate the very different cognitive processes that constitute the two systems. Heuristics, from this revised perspective, are automatic computations made by the reflexive, associative system (often referred to as System 1) that either powerfully influence the more reflective, rule-based analyses of System 2 or that are simply taken as acceptable answers to the judgment problem at hand (Kahneman, 2003; Kahneman & Frederick, 2002). When reading a letter of recommendation, for example, one automatically assesses the similarity between the candidate being described and various prototypes of individuals who have previously occupied the position ("jack of all trades," "brilliant but sloppy," "a grinder"). The similarity to the prototype then either strongly influences a more reflective assessment of the applicant's likely success, or entirely substitutes for such an assessment ("her type never works out here"). Tversky and Kahneman initially identified the three heuristics of availability, representativeness, and anchoring-and-adjustment. We therefore focus our review of heuristics on these three, and then briefly discuss additional heuristics that have been proposed since.

Availability. Given that there are a lot of Jewish comedians, one can probably think of particular examples very readily. There is merit, then, in turning this around and concluding that if one has an easy time thinking of Jewish comedians, there probably are a lot of them. The logic is generally sound and it constitutes the essence of the availability heuristic, or the tendency to use the ease with which one can generate examples as a cue to category size or likelihood. But the "probably" in this inference is important. There can be other reasons why examples of a given category are easy or hard
to generate and so availability is not always a reliable guide to actual frequency or probability (Folkes, 1998; Kahneman & Tversky, 1973; MacLeod & Campbell, 1992; Oppenheimer, 2004; Rothbart, Fulero, Jensen, Howard, & Birrell, 1978; Tversky & Kahneman, 1983). Kahneman and Tversky (1973) first demonstrated this in a series of classic experiments. In one, participants were asked whether there are more words that begin with the letter "r" or that have "r" as the third letter. Because it's easier to generate words that start with "r" (red, rabid, ratatouille…) than words that have an "r" in the third position (…Huron, herald, unreasonable), most participants thought there were more of the former than the latter. In reality, there are three times as many words with an "r" in the third position. Ross and Sicoly (1979) explored the implications of the availability heuristic for everyday social life. They asked couples to specify their own percentage contribution to various tasks and outcomes that come with living together—keeping the house clean, maintaining the social calendar, starting arguments, etc. They predicted that each person's own contributions would be more salient than their partner's contributions and so both partners would overestimate their own role. And that is just what participants did. When the estimates made by each member of a couple were summed, they tended to exceed the logical maximum of 100%. This was true, notably, for negative actions (e.g., starting fights) as well as positive actions—evidence that it is the availability heuristic and not self-enhancing motivations that is responsible for this effect. Norbert Schwarz and his colleagues have shown how the availability heuristic can influence people's self-assessments and, in so doing, also settled an important conceptual
issue that lies at the core of the availability heuristic (Schwarz et al., 1991; Schwarz & Vaughn, 2002; see also Gabrielcik & Fazio, 1984). Recall that people are assumed to use the ease with which they can come up with instances of a given category when making judgments about the category. But note that if instances are easy to generate, one will probably come up with a lot of them. So how can we be sure that people are in fact influenced by the ease with which they generate instances (a meta-cognitive feature) rather than the number of instances they generate (a cognitive feature)? Typically, we can't. What Schwarz and colleagues did was to disentangle these two usually intertwined features. In one representative experiment, they asked half their participants to think of times they had been assertive and the other half to think of times they had been unassertive. Some of the participants in each group were asked to think of six examples and the others were asked to think of twelve examples. The required numbers of instances, six and twelve, were carefully chosen so that thinking of six examples would be easy but thinking of twelve would be a challenge. This manipulation separates ease of generation (process) from the number of examples generated (content). Those asked to think of twelve examples of their assertiveness (or unassertiveness) will think of more examples than those asked to think of six, but they will have a harder time doing so. What Schwarz and colleagues found was that those asked to think of six examples of their past assertiveness later rated themselves as more assertive than those asked to think of twelve examples. The same pattern held for those asked to think of past examples of unassertiveness. Thus, it is the ease with which people can recall examples, not the number of examples recalled, that dominates people's judgments. The effect was so strong, in fact, that those asked to
come up with twelve examples of their own unassertiveness (and who thus had lots of examples of their failure to be assertive at the top of their heads) rated themselves as more assertive than those asked to come up with twelve examples of assertiveness (and who thus had lots of examples of their past assertiveness at the top of their heads)! Schwarz and colleagues have shown that the meta-cognitive experience of the ease with which one can access pertinent evidence affects people's judgments of their own vulnerability to sexual assault and heart disease (Grayson & Schwarz, 1999; Rothman & Schwarz, 1998), the quality of their memory (Winkielman, Schwarz, & Belli, 1998), and the pleasantness of their childhood (Winkielman & Schwarz, 2001). Other investigators have shown that this instance-listing procedure influences people's estimates of their own past behavior (Aarts & Dijksterhuis, 1999), their assessments of how much their home towns and high-school friends have changed (Eibach, Libby, & Gilovich, 2003), and their attitudes toward proposed policy changes (Briñol, Petty, & Tormala, 2006). In a wry application of this paradigm, Craig Fox had students list either two or ten ways a course could be improved as part of the standard end-of-the-term course evaluation process (Fox, 2006). Students asked to list ten possible improvements apparently had difficulty doing so: they rated the course significantly more favorably (median = 5.5 on a 7-point scale) than students asked to list two (median = 5.0).

Availability's Close Cousin: Fluency. The mere act of imagining an outcome can make it seem more likely to occur. Imagining one candidate winning an election makes it seem more likely that that candidate will triumph (Carroll, 1978) and imagining what it would be like to have a disease makes it seem that one is more at risk of getting it
(Sherman, Cialdini, Schwartzman, & Reynolds, 1985). This effect was originally interpreted as the result of availability. Imagining the event made it more cognitively available and hence it was judged more likely. But is availability really the culprit? After all, the concept of, say, getting an ulcer is made highly available—perhaps maximally available—by the very question that elicits the judgment of likelihood, whether one had earlier imagined having an ulcer or not. How hard can it be for the thought of having an ulcer to "come to mind" when one has just been presented the words "what are the chances that you will someday have an ulcer?" And what exactly are the "relevant instances" that easily (or not) come to mind when one is asked to estimate the likelihood of having an ulcer? The point here is that the very question about likelihood puts the target event at the forefront of one's mind regardless of whether one had earlier imagined it. But thinking of the target event is nonetheless likely to have a different feel if one had, in fact, mentally tried it on earlier. It is likely to feel more "fluent." Fluency refers to the experience of ease or difficulty associated with information processing. A clear image is easy to process, and fluent. A phonemically irregular word is hard to process, and disfluent. People use the metacognitive experience of fluency as a cue when making all sorts of judgments (Jacoby & Dallas, 1981; Oppenheimer, 2008). People judge fluent names to be more famous (Jacoby, Woloshyn, & Kelley, 1989), fluent objects to be better category members (Whittlesea & Leboe, 2000), and adages that rhyme to be more valid than those that don't (McGlone & Tofighbakhsh, 2000). In addition to these direct effects on judgment, fluency appears to influence how people process relevant information. In many respects, the feeling of fluency/disfluency
has the same effects as being in a good/bad mood (see mood effects on judgment below). A feeling of disfluency while processing information appears to undermine people's confidence in what they are doing, leading to something of a "go slow, be careful" approach to judgment and decision making. Thus, people are more likely to choose a default option when choosing between consumer products whose descriptions are made disfluent (Novemsky, Dhar, Schwarz, & Simonson, 2007). The shift to more cautious information processing was shown even more directly in a study by Alter and colleagues, who gave participants the Cognitive Reflection Test (CRT; Frederick, 2005) in either a normal or degraded font. The CRT requires stifling an immediate gut reaction to arrive at the correct answer to each question. For example, when given the question, "A bat and ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the ball cost?," one must think beyond the immediate response of "10 cents" to arrive at the correct response of 5 cents. Alter and colleagues found that participants answered more questions correctly when the questions were presented in a degraded, and hence disfluent, font (Alter, Oppenheimer, Epley, & Eyre, 2007).
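The arithmetic behind the correct answer, for the record: let $b$ be the ball's cost in dollars. Then

$$b + (b + 1.00) = 1.10 \quad\Rightarrow\quad 2b = 0.10 \quad\Rightarrow\quad b = 0.05,$$

so the ball costs 5 cents; the intuitive answer of 10 cents would put the total at $0.10 + $1.10 = $1.20.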
Fluency also appears to influence the level of abstraction at which information is encoded. As described below in our discussion of Temporal Construal Theory (Trope & Liberman, 2003), the same event (taking a test) can be construed relatively concretely (answering questions) or abstractly (assessing aptitude), and it has been shown that physically distant objects tend to be construed more abstractly than close objects (Fujita et al., 2006). Given that blurry (disfluent) objects tend to appear to be farther away than distinct objects (Tversky & Kahneman, 1974), one might expect disfluent entities more generally to appear relatively far away. Indeed, Alter and Oppenheimer (2008) found that cities are judged to be farther away when their names are presented in a difficult-to-read font. To link this finding to construal level, they then examined archival records of an online Balderdash tournament, in which players make up bogus definitions for obscure words. They found that participants provided more abstract definitions of more phonemically complex, or disfluent, words.

Representativeness. Jenna Jones, nutrition program manager for Cornell Cooperative Extension (a component of Cornell's land-grant mission to disseminate information to the public), informs readers of her column that a tomato "has four chambers and is red" and that eating tomatoes is good for the heart; a walnut "looks like a little brain" and "we now know that walnuts help develop more than three dozen neurontransmitters (sic) for brain function"; and kidney beans assist with the healthy functioning of their organ namesake (Jones, 2008). Ms. Jones' advice to her readers appears to be heavily influenced by a second heuristic identified by Kahneman and Tversky, representativeness. Making judgments on the basis of representativeness reflects the mind's tendency to automatically assess the similarity between two entities under consideration and to use that assessment as input to a judgment about likelihood. Judgments about the likelihood of an object belonging to a category are powerfully influenced by how similar the object is to the category prototype (Kahneman & Tversky, 1972, 1973; Tversky & Kahneman, 1983). Judgments of the likelihood that an outcome stems from a particular cause are powerfully influenced by the similarity between putative cause and observed effect (Gilovich & Savitsky, 2002; Nisbett & Ross, 1980). Judgments about the likelihood of obtaining a given result are powerfully influenced by the similarity between the features
of the imagined result and those of the processes thought to be at work (Kahneman & Tversky, 1972, 1973; Tversky & Kahneman, 1971). The most compelling way to demonstrate that judgments are "powerfully" influenced by a hypothesized process is to show that they are excessively influenced. Much of the research on representativeness has therefore sought to show that the heuristic leads people to make judgments that violate clear normative standards. Judging whether a sample is likely to have come from a particular generating process by assessing the similarity between the two, for example, has been shown to give rise to a "law of small numbers," or a tendency to believe, contrary to probability theory, that even small samples should be representative of the populations from which they are drawn (which is true of large samples and is captured in the law of large numbers). The belief in a law of small numbers has been established by studies showing that people (including expert statisticians and psychologists) are excessively confident about the replicability of research findings (Tversky & Kahneman, 1971), have difficulty recognizing or generating random sequences (Falk & Konold, 1997; Gilovich, Vallone, & Tversky, 1985; Wagenaar, 1972), and are overly influenced by the relative proportion of successes and failures, and insufficiently influenced by sample size, in assessments of how confident they can be in a particular hypothesis (Griffin & Tversky, 1993).
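A quick computation shows why small samples should not be expected to mirror their parent populations (our illustration): the probability that a fair coin produces 70% or more heads is far from negligible in a small sample but vanishing in a large one.

```python
from math import comb

def prob_at_least(n, k):
    """P(X >= k) for X ~ Binomial(n, 0.5): the chance that a fair coin
    shows at least k heads in n flips."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

# Chance of 70%+ heads: common in 10 flips, essentially impossible in 1000.
print(prob_at_least(10, 7))     # ~0.17
print(prob_at_least(1000, 700)) # effectively zero (on the order of 1e-37)
```

A sample of 10 that "looks unrepresentative" is thus entirely unremarkable, which is exactly what intuition governed by the law of small numbers fails to appreciate.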
The work on representativeness that garnered the most attention and sparked the greatest controversy, however, involved experiments demonstrating that the allure of representativeness can prevent people from utilizing base rates or basic set-inclusion principles when making predictions. In one now-classic study (Kahneman & Tversky, 1973), participants were given the following description of an individual enrolled in graduate school:

Tom W. is of high intelligence, although lacking in true creativity. He has a need for order and clarity, and for neat and tidy systems in which every detail finds its appropriate place. His writing is rather dull and mechanical, occasionally enlivened by somewhat corny puns and by flashes of imagination of the sci-fi type. He has a strong drive for competence. He seems to have little feel and little sympathy for other people and does not enjoy interacting with others. Self-centered, he nonetheless has a deep moral sense.
One group of participants was asked to rank nine disciplines in terms of how closely Tom W. resembled the typical student in that field. A second group ranked them in terms of the likelihood that Tom was actually enrolled in each. A third group simply estimated the percentage of all graduate students in the U.S. who were enrolled in each discipline. There were two critical findings. First, the rankings of the likelihood that Tom W. actually studied each of the disciplines were virtually identical to the rankings of how similar he seemed to the typical student in each field. Participants' assessments of likelihood, in other words, were powerfully influenced by representativeness. Second, the rankings of likelihood did not correspond at all with what the participants knew about the popularity of the different disciplines. Information about the base rate, or the a priori likelihood of Tom being a student in each of the different fields, was simply ignored.
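Normatively, both pieces of information should enter the prediction. By Bayes' rule,

$$P(\text{field}\mid\text{description}) \propto P(\text{description}\mid\text{field})\,P(\text{field}),$$

so a description that fits a field only moderately well can still point to that field if the field enrolls many students, and an excellent fit to a tiny field can be outweighed by its rarity (the field comparison here is our illustration). Participants' likelihood rankings tracked only the first, similarity-based factor and left the second, the base rate, out entirely.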
Experiments like this sparked a long-running controversy about whether and when people are likely to ignore or under-utilize base rates (Cosmides & Tooby, 1996; Gavanski & Hui, 1992; Gigerenzer, 1991; Griffin & Buehler, 1999; Koehler, 1996). The controversy was productive, as it yielded such findings as that people are more likely to utilize base rate information if it is presented after the information about the individual (Krosnick, Li, & Lehman, 1990), if the base rate is physically instantiated in a sampling paradigm (Gigerenzer, Hell, & Blank, 1988, but see Poulton, 1994, p. 153), and if the base rate is causally related to the to-be-predicted event (Ajzen, 1977; Tversky & Kahneman, 1982). But in an important respect the controversy was misguided because the essential idea being put forward was that people's judgments are powerfully influenced by representativeness, not that people can't use, or typically don't use, base rates. Instead, the Tom W. studies and others like them were existence proofs. They showed that, however often or intelligently people might utilize base rates in their everyday and professional lives, the allure of representativeness is so strong that it can blind them to the relevance of information they would otherwise use quite readily. In fact, the more people typically utilize base rates, the stronger the demonstration: Showing that representativeness leads people to ignore information they are all too keen to ignore anyway is not impressive; showing that it leads them to ignore information they typically embrace is. Every bit as controversial as the work on representativeness and base rates were Tversky and Kahneman's (1983) demonstrations that the allure of representativeness could lead people to commit the "conjunction fallacy," and end up judging that the
33 conjunction of two events is more likely than either of the two events alone. For example, participants in one study were given the following description of an individual: Bill is 34 years old. He is intelligent but unimaginative, compulsive, and generally lifeless. In school, he was strong in mathematics but weak in social studies and humanities. They were then asked to rank the likelihood of eight possible life outcomes for Bill, including: (1) Bill is an accountant, (2) Bill plays jazz for a hobby, and (3) Bill is an accountant who plays jazz for a hobby. Ninety-two percent of the respondents assigned a higher rank to (3) than to (2), even though any state of the world that satisfies (3) automatically satisfies (2) and so (3) cannot be more likely than (2). (The results were the same for Bill’s better-known counterpart Linda, the feminist bank teller; we present the results for Bill to bring him out from Linda’s long shadow.) Because the conjunction fallacy violates one of the most basic rules of probability theory, Tversky and Kahneman (1983) anticipated controversy, and provided a wide-ranging discussion of alternative interpretations. They included additional controls for the possibility that respondents misunderstood the words “and” or “or”; they made sure that the same effects occurred with frequencies as well as probabilities and that the effect applied as much when reasoning about heart attacks as when reasoning about personality descriptions; and they made sure that the same effects obtained with political forecasters as with students from Stanford and the University of British Columbia. Nonetheless, the anticipated controversy ensued, centering on participants’ interpretation of the conjunction (e.g., Mellers, Hertwig, & Kahneman, 2001), the effects of frequency versus
34 probability response formats (Hertwig & Gigerenzer, 1999) and the limits of laboratory research. Anchoring. Suppose someone asks you how long it takes Venus to orbit the sun. You reply that you don’t know (few people do), but your interrogator then asks for an estimate. You might think to yourself that Venus is closer than the earth to the sun and so it probably takes fewer than the 365 days it takes the earth to make its orbit. You might then move down from that value of 365 days and estimate that a year on Venus consists of, say, 275 days. (The correct answer is 224.7 days.) To respond in this way is to use what Tversky and Kahneman referred to as the anchoring and adjustment heuristic (Tversky & Kahneman, 1974). One starts with a salient or convenient value and adjusts to an estimate that seems right. The most notable feature of such adjustments is that they tend to be insufficient. In most investigations of such “anchoring effects,” the investigators take care to ensure that the respondents know that the anchor value is entirely arbitrary and therefore carries no implication whatsoever about what the right value might be. In the initial demonstration, Tversky and Kahneman (1974) spun a “wheel of fortune” device and then asked participants whether the percentage of African countries in the UN was higher or lower than the number that came up. After participants indicated whether they thought it was higher or lower, they were asked to estimate the actual percentage of African countries in the UN. The transparently arbitrary anchor value significantly influenced participants’ responses: those who confronted larger numbers from the wheel of fortune gave significantly higher estimates than those who confronted smaller numbers. Anchoring effects using paradigms like this have been observed in people’s evaluation of gambles
35 (Carlson, 1990; Chapman & Johnson, 1999), estimates of risk and uncertainty (Plous, 1989; Wright & Anderson, 1989), perceptions of self-efficacy (Cervone & Peake, 1986), anticipations of future performance (Switzer & Sniezek, 1991), and answers to general knowledge questions (Jacowitz & Kahneman, 1995; Mussweiler & Strack, 1999, 2000; Strack & Mussweiler, 1997). As research on anchoring evolved, comparable effects were observed using all sorts of other paradigms, and it became clear that such effects are not always the result of insufficient adjustment. Indeed, probably the fairest reading of the anchoring literature is that there is not one anchoring effect produced by insufficient adjustment, but a family of anchoring effects produced by at least three distinct types of psychological processes (Epley, 2004). Epley and Gilovich (2001, 2004, 2005, 2006) have provided evidence that people do indeed adjust insufficiently from at least some anchor values, particularly those that people generate themselves (like the question about Venus above). They have found, for example, that people articulate a process of adjusting from self-generated anchors, and that manipulations that should influence adjustment, but not other potential causes of anchoring, have a significant effect on people’s judgments. In particular, people who are incidentally nodding their heads while answering, are cognitively busy, or lack incentives for accurate responding tend to be more influenced by self-generated anchor values than those who are incidentally shaking their heads, are not busy, or are given incentives for accuracy. Manipulations such as these, however, have generally been shown to have no effect on participants’ responses in the standard (experimenter-generated) anchoring paradigm pioneered by Tversky and Kahneman (Chapman & Johnson, 1999; Epley &
36 Gilovich, 2001, 2005; Tversky & Kahneman, 1974; Wilson, Houston, Etling, & Brekke, 1996). At first glance, this is a bit of a puzzle: why, if not because of insufficient adjustment, do people’s estimates tend to assimilate toward anchor values brought to their attention? This question has been addressed most extensively by Thomas Mussweiler and Fritz Strack (Mussweiler, 2002; Mussweiler & Strack, 1999, 2000; Strack & Mussweiler, 1997). They maintain that most anchoring effects are the result of the enhanced accessibility of anchor-consistent information. The attempt to answer the initial question put to them by the investigator—“Is the Nile longer or shorter than 5,000 [800] miles?”—leads the individual to first test whether the given value is correct—is the Nile 5,000 [or 800] miles long? Because people evaluate hypotheses by attempting to confirm them (Evans, 2007; Snyder & Swann, 1978; Skov & Sherman, 1986), such a search generates evidence disproportionately consistent with the anchor. Mussweiler and Strack (2000) provide support for their analysis by showing that information consistent with the anchor value presented to participants is indeed disproportionately accessible. For example, participants who were asked whether the price of an average German car is higher or lower than a high value were subsequently quick to recognize words associated with expensive cars (Mercedes, BMW); those asked whether the price of an average German car is higher or lower than a modest value were subsequently quick to recognize words associated with inexpensive cars (Volkswagen, Golf). Oppenheimer, LeBoeuf, and Brewer (2008) have recently shown that the semantic activation elicited by different anchors can be quite general. They asked one group of
37 participants whether the Mississippi River was longer or shorter than 4,800 miles, and another group whether it was longer or shorter than 15 miles. They then asked their participants to draw a line equal to the length of a standard toothpick. Those exposed to the high initial anchor drew longer toothpicks than those exposed to the low initial anchor. This suggests that exposure to the initial anchor activated the general concept of “long” or “short,” which influenced their representation (and production) of a standard toothpick. To test this idea, Oppenheimer and colleagues had participants in a follow-up experiment perform a word completion task after being exposed to high or low anchor values. Participants exposed to the high anchors were more likely to form words connoting bigness (BIG for B_G, LONG for _ONG) than those exposed to the low anchors. Recent research suggests that there is likely a third source of anchoring effects— pure numeric priming. That is, an anchor activates its own numeric value and those close to it, which are then highly accessible and influential when the person tries to fashion a response. In one notable experiment, participants were asked whether the runway at Hong Kong International Airport was longer or shorter than 7.3 kilometers or 7,300 meters and were then asked to estimate the cost of an unrelated project. Those asked the question in terms of meters gave higher estimates on the second, unrelated task than those asked the question in terms of kilometers—presumably because the latter primed smaller absolute numbers (Wong & Kwong, 2000). Although some have argued otherwise, this does not appear to be the result of the differential accessibility of semantic information consistent with the initial anchor because 7.3 kilometers and 7,300 meters represent the same value, just in different units. More recent research casts further doubt on the
38 possibility that the differential accessibility of anchor-consistent semantic information is responsible for such effects. Critcher and Gilovich (2008) asked participants what percentage of the sales of a P-97 (or P-17) cell phone would be in the European market. Participants estimated a higher percentage of European sales for the P-97 than the P-17. Note that the process that gives rise to anchor-consistent semantic accessibility is testing whether the anchor value might be the correct value. Here it seems far-fetched to maintain that participants asked themselves whether part of the model label (97 or 17) might be the European market share. Social psychologists have made great use of the idea of anchoring, citing it as a core component of such diverse phenomena as trait inference (Gilbert, 1989, 2002), self-enhancement (Kruger, 1999), self-efficacy (Cervone & Peake, 1986), perspective taking (Ames, 2004; Epley, Keysar, Van Boven, & Gilovich, 2004), language production and comprehension (Keysar & Barr, 2002), and the twin egocentric biases known as the “spotlight effect” (Gilovich, Medvec, & Savitsky, 2000) and the “illusion of transparency” (Gilovich, Savitsky, & Medvec, 1998; Holder & Hawkins, 2007; Savitsky & Gilovich, 2003; Miller & McFarland, 1987; Van Boven, Gilovich, & Medvec, 2003; Vorauer & Cameron, 2002; Vorauer & Ross, 1999). Some accounts of social psychological phenomena that are based on anchoring simply draw upon the idea that people’s judgments are assimilated to prominent anchor values. Others draw more specifically on the notion that people adjust from an initial mental representation, but do so insufficiently. Probably the most influential of these accounts is Gilbert’s correction model of causal attribution (Gilbert, 1989). According to Gilbert, we first characterize people in
39 terms of the behavior we see them perform. Someone acting in an angry fashion is initially categorized as an angry person. But we later note the prevailing situational constraints acting on the person in question and, if those constraints call for it, we adjust our initial, unmitigated impression. The initial phase of characterizing individuals in line with their behavior is thought to be automatic, unavoidable, and effortless. The later correction phase, however, is thought to be effortful and therefore insufficient whenever cognitive resources are scarce (Epley & Gilovich, 2006; Quattrone, Lawrence, Finkel, & Andrus, 1981). The result is that observers end up falling prey to the “correspondence bias” (Gilbert & Jones, 1986) or “fundamental attribution error” (Ross, 1977), believing that people’s personal dispositions are a more important cause of their behavior than is justified. Other Heuristics. Kahneman and Tversky did not offer their three heuristics of availability, representativeness, and anchoring-and-adjustment in the belief that they constituted the complete set of general-purpose heuristics. Indeed, other heuristics have been identified since the publication of their seminal work, although perhaps fewer than they might have expected. We have already seen that a fluency heuristic should be included. Kahneman and Tversky themselves added the notion of a simulation heuristic, by which the likelihood and emotional impact of an event is influenced by the ease with which alternatives to its occurrence can be imagined in a simulated mental scenario (Kahneman & Tversky, 1982). Such mental simulations can yield a sense of causal propensity and related feelings of surprise, both of which have also been argued to function much like heuristics (Kahneman & Miller, 1986; Kahneman & Varey, 1990). The heuristic that has
40 probably captured the most attention since Kahneman and Tversky’s “big three” is the affect heuristic, which involves utilizing one’s immediate good/bad affective reactions to stimuli as a cue to various judgments and decisions, such as valuation and, most important, approach and avoidance (Slovic, Finucane, Peters, & MacGregor, 2002). A host of decision or choice heuristics have also been proposed (see Frederick, 2002, for a discussion). Two notable programs of research that examine such heuristics are the Adaptive Decision Maker framework (Payne, Bettman, & Johnson, 1990) and the Fast and Frugal Heuristics framework (e.g., Gigerenzer, Todd, and the ABC Research Group, 1999). Both derive from Simon’s call for delineating the satisficing rules people use that are well-adapted to particular task or information environments. The Adaptive Decision Maker model describes the role of effort-accuracy tradeoffs in the selection of decision strategies and the use of more or less complex heuristics as a function of problem complexity and importance. The Fast and Frugal program identifies heuristics that maximize simplicity and accuracy within a given task structure. Both programs of research are highly compatible with the strong influence in social psychology of the cognitive miser analogy (e.g., Fiske & Taylor, 2007) and the related effort-accuracy models of social cognition (e.g., Petty & Cacioppo, 1986), but despite—or because of—the overlap, the concept of choice heuristics has thus far had little impact on social psychology.
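To convey the flavor of such choice heuristics, here is a minimal sketch of “take the best” (Gigerenzer & Goldstein, 1996), a well-known heuristic from the Fast and Frugal program: cues are consulted one at a time, in order of validity, and search stops at the first cue that discriminates between the options. The cities and cue values below are invented for illustration.

```python
# A minimal sketch of the "take the best" heuristic: to decide which of two
# cities is larger, check cues in descending order of validity and stop at
# the first cue on which the cities differ.
CUES = ["has_pro_team", "is_state_capital", "has_university"]  # best cue first

def take_the_best(city_a, city_b):
    """Return the city favored by the first discriminating cue (None if tie)."""
    for cue in CUES:
        a, b = city_a[cue], city_b[cue]
        if a != b:              # cue discriminates: stop searching immediately
            return city_a if a else city_b
    return None                 # no cue discriminates: guess

a = {"name": "A-ville", "has_pro_team": True,  "is_state_capital": False, "has_university": True}
b = {"name": "B-burg",  "has_pro_team": False, "is_state_capital": True,  "has_university": True}
print(take_the_best(a, b)["name"])  # -> "A-ville"; search stopped at the first cue
```

The heuristic is “frugal” in that it typically ignores most of the available information, yet in cue environments where validity orderings are reliable it can perform remarkably well.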
Normative Theories It is hard to imagine how someone could be interested in judgment and decision making without being interested in how to make the best judgments and decisions. As a
41 result, there has always been a normative bent to JDM research, with the existence and character of the most prominent normative theories shaping the kind of research JDM scholars conduct. The existence of agreed-upon normative theories leads naturally to investigations of how well actual judgments and decisions measure up to the ideal. Such theories also provide researchers with readily-specified null hypotheses, a useful attribute for a science that places such great emphasis on significance-testing. Indeed, the most prominent normative theories have often provided researchers with the most convenient null hypotheses—straw men, really, that are easy to shoot down and allow the investigators in question to stake a claim for the importance, and publishability, of a set of findings. For example, the null hypothesis that people are entirely selfish and have no concern at all about fairness, although important to investigate given the theoretical lay of the land at the time (Kahneman, Knetsch, & Thaler, 1986), was not one that had any real chance of standing. JDM research is intricately connected to two crucial normative theories—Subjective Expected Utility (SEU) theory for choice and Bayes’ theorem for probability judgment (note that these theories are intertwined, as Bayesian probability is used as the “expectation” component of SEU). According to SEU theory, a decision maker assigns a utility to each possible outcome for each course of action and weights each outcome by its perceived probability of occurrence. The decision maker then chooses the course of action with the highest expected utility. Few people find this idea objectionable or even noteworthy. More noteworthy, however, is the specification of a number of axioms and principles that one’s choices must follow in order to ensure that one maximizes overall utility. Among these are the axioms of transitivity (if x is preferred to y, and y is
42 preferred to z, then x should be preferred to z), independence (if x is preferred to y, then x or a given chance of z should be preferred to y or a given chance of z), and consistency (if x is preferred to y, then some probability mixture of receiving x or y should be preferred to y), and the principle of invariance (one’s preference between x and y should not depend on the descriptions of x and y or the method by which one’s preference is elicited). Few people find any of the axioms objectionable either, but JDM researchers have had no trouble demonstrating that although people endorse these axioms in principle, they often violate them in practice. For example, the principle of invariance is violated by people’s responses to one of the most famous decision problems in the JDM literature, Kahneman and Tversky’s “Asian disease” problem (1984). Faced with the outbreak of a disease that “is expected to kill 600 people,” nearly three-quarters of the respondents preferred a policy that would save 200 lives for sure over a policy with a one-third chance of saving 600 and a two-thirds chance of saving none. In other words, most respondents were risk averse in that they chose the guaranteed outcome over the risky gamble even though the “expected” number of lives saved was equal. However, most respondents were risk seeking when presented with a different version of the problem in which the very same policies were described in terms of the number of lives lost rather than lives saved. That is, only about 20% of the respondents preferred the policy that would result in 400 people dying over the policy with a one-third chance of nobody dying and a two-thirds chance of 600 dying. Participants’ responses are clearly not always invariant across different descriptions of the same problem.
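The equivalence of the two frames is easy to verify. The sketch below simply computes the expected number of survivors under each option:

```python
# The two frames of the "Asian disease" problem describe the same policies.
# A quick expected-value check (600 lives at stake) makes that explicit.
at_risk = 600

# "Lives saved" frame
save_sure   = 200                       # 200 saved for certain
save_gamble = (1/3) * 600 + (2/3) * 0   # one-third chance all 600 are saved

# "Lives lost" frame
die_sure   = 400                        # 400 die for certain
die_gamble = (1/3) * 0 + (2/3) * 600    # two-thirds chance all 600 die

print(save_sure, save_gamble)                    # -> 200 200.0
print(at_risk - die_sure, at_risk - die_gamble)  # -> 200 200.0 (survivors again)
# Every option has the same expected number of survivors, so invariance says
# preferences should not flip between frames -- yet they reliably do.
```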
43 Tversky and Kahneman (1986) have used this discrepancy between what people agree to in the abstract and what they do in concrete choice contexts to make the point that there can never be a single theory of decision making that is both normatively valid and descriptively adequate. The normative force of the axioms of rational choice is unassailable and hence cannot be dispensed with. And yet it is an empirical fact that people’s choices often fail to conform to the axioms. Subjective expected utility theory thus serves as an ideal to which people’s decisions can be compared. And this comparison guides a great deal of research in JDM. It quite naturally raises the question of how decisions might be improved, but it also frames the effort to understand what it is that people are doing when they are making choices if they are not doing what the axioms prescribe. Bayes’ theorem plays much the same role in the study of judgment. It provides a mathematical specification of how much one’s initial belief should change in response to new information. Consider hypothesis A: People’s judgments are predictably biased. Now consider event B: the cumulative research of Kahneman and Tversky. How much should we change our belief in the probability of A (predictably biased judgment) given B (the evidence reported by Kahneman and Tversky)? The probability form of Bayes’ theorem is P(A|B) = [P(A) × P(B|A)] / P(B). Thus, to determine the subjective probability of systematic bias given the evidence, we take our prior belief in systematic bias, P(A) (which can vary from person to person and hence puts the “subjective” in subjective probability), and adjust it by multiplying it by the extent to which the experimental evidence is consistent with systematic bias, P(B|A). The result is then divided by
44 the unconditional probability of finding the evidence whether or not people are systematically biased, P(B). Early research comparing people’s probability judgments with Bayes’ theorem indicated that people were too slow to revise their prior beliefs. They were “conservative Bayesians” (Edwards, 1968). A big part of the heuristics and biases program of research, however, was the demonstration that on many occasions people are overly influenced by new case-specific information (Tversky & Kahneman, 1971). A great deal of work in JDM is focused on understanding the psychological processes that yield judgmental conservatism versus judgmental rashness, and on specifying the features of the presented information and of the problem context that give rise to each (Griffin & Tversky, 1993). The field of social psychology was always at least somewhat concerned with normative issues, of course, because it is a field with considerable interest in applied issues and in improving the human condition—minimizing group conflict, extinguishing prejudice, overcoming excessive group influence, and so on. But the effect of normative thinking on research in social psychology was amplified by Nisbett and Ross’s (1980) treatise on human inference, a work that brought JDM front and center to social psychology. Nisbett and Ross delineated many of the most important inferential tasks facing the social perceiver—covariation detection, causal analysis, prediction, belief revision—and explicitly compared how the average person approaches those tasks with the procedures scientists use to approach them. The direct comparison between formal and lay approaches to social judgment necessarily invokes a consideration of normative principles. Sometimes, as in the case of prediction and belief revision, the relevant normative standard is the Bayesian analysis so commonly invoked in JDM (cf., Ajzen &
45 Fishbein, 1975). At other times, however, Nisbett and Ross took the general JDM focus on normative comparison and invoked, opportunistically, more circumscribed normative considerations that apply only to a particular inferential problem. For example, they considered what happens when two groups, on opposite sides of a policy debate, are exposed to a mixed body of evidence germane to the debate. It is hard to conceive of a normative analysis that wouldn’t dictate that the two sides’ opinions should converge to some degree. But in fact, because each side readily accepts the evidence that supports its position and finds fault with the evidence that supports the other side, the two groups’ opinions can actually diverge, not converge (Lord, Ross, & Lepper, 1979). They also considered what happens when the evidential basis of one’s opinion is completely invalidated. When the entire set of reasons one had for holding a particular belief is unambiguously shown to be in error, logic dictates that one should “start over” and have the same opinions as those who were never exposed to the misleading information that gave rise to the belief. But what has been found in numerous studies is that the initial, unfounded belief hangs tough and survives to some degree after its foundation is undercut (Anderson, 1995; Anderson & Lindsay, 1998; Anderson, Ross, & Lepper, 1980; Davies, 1997; Ross, Lepper, & Hubbard, 1975; Sherman & Kim, 2002; Walster, Berscheid, Abrahams, & Aronson, 1967). Nisbett and Ross’s explicit treatment of normative issues served to accentuate the consideration of normativity in several areas of social psychology that had always had something of a normative focus. Early treatments of attribution theory, for example, viewed the basic principles of attribution—covariation and discounting/augmentation—as principles a person should follow in assigning causes to effects. Although it was
46 generally thought that people typically did follow these principles, some attention was paid to those instances in which the attributions people made departed from the dictates of the theory (Kelley, 1973). The most notable departure is the tendency of people to make person-centered attributions when the behavior in question can be entirely explained by the dictates of the prevailing situation (Gilbert & Jones, 1986; Gilbert & Malone, 1995). There is little question that research on this correspondence bias (Gilbert & Jones, 1986) or fundamental attribution error (Ross, 1977) took something of a different shape as a result of the heightened concern with normativity that was inherited from JDM. This is reflected in how the phenomenon was cast, in how the research was conducted (Gilbert & Jones, 1986; Quattrone, 1982), and in the nature of the theoretical controversies that were sparked (Funder, 1987; Hilton, 1990). Another area of research in social psychology that was greatly affected by the enhanced emphasis on normativity that came from JDM is the study of stereotyping. The field had always been marked by the conviction that there was something “wrong” with stereotyping. Although it was granted that some stereotypes are valid (Volvo and Prius owners are more likely to listen to NPR), the research was motivated and shaped by the concern that many stereotypes are erroneous and do a great deal of harm, particularly, of course, to those belonging to the stereotyped group. It had long been recognized that stereotypes could sometimes arise, not from base human motives or inter-group rivalry, but from faulty information processing alone (Allport, 1954). But the rise of the cognitive revolution in psychology, and the emergence of the sub-field of social cognition in particular, greatly accentuated this recognition. And the notion that information processing might be “faulty” necessitates a consideration—a consideration
47 made easier because of the groundwork laid by JDM—of what constitutes sound thinking. It is one thing to say that the distinctiveness of negative behavior on the part of minority group members is troublesome; it is quite another to specify how it is inferentially out of line (Fiedler, 1991; Hamilton & Gifford, 1976; Risen, Gilovich, & Dunning, 2007). It is one thing to bemoan the fact that stereotypes often survive exposure to stereotype-inconsistent information; it is something else to identify the normative principle it violates.
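In the case of belief revision, at least, the violated principle can be stated exactly: it is the Bayesian standard introduced above. The sketch below, with invented numbers purely for illustration, shows what normative updating looks like, and why evidence that is later fully discredited should return belief to the prior:

```python
# A worked illustration (all numbers invented) of the Bayesian standard:
# P(A|B) = P(A) * P(B|A) / P(B), with P(B) expanded over A and not-A.
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    p_b = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
    return p_evidence_if_true * prior / p_b

prior = 0.50                                     # initial belief in a hypothesis
print(round(posterior(prior, 0.90, 0.30), 2))    # -> 0.75: diagnostic evidence raises belief

# Evidence shown to be bogus is nondiagnostic -- equally likely whether the
# hypothesis is true or false -- so belief should fall back to the prior.
print(round(posterior(prior, 0.50, 0.50), 2))    # -> 0.5: normatively, one "starts over"
# Belief perseverance is the finding that actual beliefs remain above the prior.
```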
JUDGMENT AND DECISION MAKING’S DEBT TO SOCIAL PSYCHOLOGY
Because humans are a social species, many of our most important judgments and decisions are those that take place in a social context and concern other people. It would be odd indeed, then, if JDM research did not deal extensively with the social embeddedness of judgments and decisions. And it would be odder still if JDM research were not heavily influenced by the one subdiscipline of psychology specifically devoted to the inherently social nature of human life. In fact, JDM researchers have had a longstanding interest in judgments about people and in social life, from the inherently social element of game theory (Axelrod, 1984; Nash, 1950), to the comparison of clinical versus statistical prediction (Meehl, 1954; Dawes & Corrigan, 1974), to applications of Brunswik’s lens model (Hammond, 1996). And this interest has served to develop and strengthen the ties between JDM and social psychology. Just as social psychology would look very different today if not for
48 the influence of JDM, the field of judgment and decision making would look very different if not for the influence of social psychology. To showcase that influence, we discuss three particularly important ideas from social psychology that have had—and are having—a substantial impact on JDM. In particular, we discuss how social psychological theorizing about channel factors, emotion, and norms and identity has influenced the field of JDM.
Channel factors
Kurt Lewin is generally credited with noticing that when people try to change someone else’s behavior, they typically try to increase the person’s motivation to behave differently. They try to increase the “push” toward the desired behavior. Lewin recognized that often people are already motivated to perform the desired behavior, but they just can’t get themselves to translate their good intentions into effective action. A more effective strategy for changing behavior, then, is to figure out what is preventing the desired behavior, and then to eliminate any sources of resistance. Rather than increase the push, it is often more effective to dampen the pushback. Related to this analysis is Lewin’s concept of “channel factors” (Lewin, 1952), or the notion that seemingly minor details of the situational context can powerfully facilitate or block desired behavior—an idea championed and elaborated by Ross and Nisbett (1991). Subtle and often invisible elements of the surrounding situation, in other words, serve to create a channel that leads people down one path rather than another. A study frequently used to illustrate the notion of channel factors is one that examined the effectiveness of different efforts to persuade Yale undergraduates to get their free tetanus
49 vaccines (Leventhal, Singer, & Jones, 1965). One attempt centered on frightening the students with all the ways one can get tetanus and what the late stages of lockjaw look like. The students were told that they could avoid such an awful fate by going to the health center on campus at any time and getting a free inoculation. This succeeded in impressing the students about the severity of the disease and the importance of getting inoculated, but almost none of them actually did so. In an alternative approach, other participants were likewise given the scary materials, but they were also handed a map of the Yale campus with the health center circled and they were asked to review their schedules and come up with a convenient time to visit the center and get their shots. These seemingly trivial details—pointing out the health center’s location and encouraging thoughts about the best time to visit—increased the rate at which students actually got their shots by a factor of nine. In other words, simply increasing students’ motivation to get inoculated had almost no effect; creating a channel that made the desired behavior easier had a considerable effect. A similar effect of channel factors can be found in people’s use of health services more generally. One of the most powerful predictors of whether people will use the services available to them, more powerful than attitudes about health and various demographic variables such as age, gender, and socioeconomic status, is the distance between an individual’s residence and the closest facility (Van Dort & Moos, 1976). When the University of Rochester moved its student health center from the campus to an off-campus site, usage of the facility declined 37% (Simon & Smith, 1973). Another illustrative study examined efforts to get US citizens to buy war bonds during World War II (Cartwright, 1949). The war bond campaign, with
50 memorable posters featuring vulnerable American children and scary Axis warmongers, was generally considered effective and raised a considerable amount of money for the war effort. Nevertheless, when the slogans depicted on the posters were altered slightly (changing “Buy War Bonds” to “Buy an Extra $100 Bond Today”) or the request was made more specific (“Buy Them When the Solicitor at Your Workplace Asks You to Sign Up”) to create a channel that would smooth the path to donation, sales of war bonds doubled. It doesn’t take much thought to recognize the relevance of the idea of channel factors to judgment and decision making. The structure of various problems, or the context surrounding them, channels or facilitates certain responses (creating what one might call “downhill” responses) and blocks or impedes others (what one might call “uphill” responses). This has been amply demonstrated in research documenting the powerful impact that defaults and status quo options have on people’s choices. One early and noteworthy investigation examined the auto insurance purchase decisions of consumers in New Jersey and Pennsylvania, two adjacent states that offered their residents essentially the same pair of options, but with opposite defaults (Johnson, Hershey, Meszaros, & Kunreuther, 1993). One option was a “full-priced” policy that included the right to sue for any auto-related injury; the other, cheaper option did not include the right to sue. Pennsylvania offered the full-priced policy as the default, allowing residents to opt for the less expensive policy if they chose to forgo the right to sue. In contrast, the less expensive policy was the default in New Jersey, but consumers could obtain the right to sue by paying more. In a striking demonstration of the power of defaults, only about 20% of New Jersey residents elected to have the full-priced policy, compared to 75% of Pennsylvania residents.
51 An even more striking demonstration of the influence of defaults comes from a study of organ donation rates in countries that have explicit consent (“opt-in”) or presumed consent (“opt-out”) policies (Johnson & Goldstein, 2003). As Figure 1 so clearly shows, opt-out policies that require people to take steps to avoid being organ donors “channel” donation and thus lead to rates of willingness to donate that are near ceiling. In contrast, opt-in policies that reverse the burden of action put up a formidable barrier to donation, leading to donation rates that fall well short of the need for healthy organs for transplant. Defaults appear to have such powerful effects on people’s choices for a number of reasons. For one thing—the deepest, clearest channel—the default option profits from laziness, mindlessness, and decision paralysis (Iyengar & Lepper, 2000; Langer, 1989; Schwartz, 2004). When one can’t be bothered to figure out the right choice, forgets to figure out the right choice, or can’t discern, despite considerable effort, what the right choice might be, the default becomes the choice. Second, the fact that a particular option was selected as the default is often taken as informative (McKenzie, Liersch, & Finkelstein, 2006). It is often interpreted as a hint of what the best option might be— however mindlessly the default may, in fact, have been selected. Finally, the default is also often experienced as the status quo, from which people have been shown to be reluctant to deviate (Baron & Ritov, 1994; Samuelson & Zeckhauser, 1988; Schweitzer, 1995). The existence of such a status quo bias makes it clear that there are other types of privileged options beyond defaults. In particular, although they are often conflated, the adherence to defaults, the status quo bias, and the omission bias constitute separate
52 influences on choice (Schweitzer, 1994). The omission bias refers to the tendency to judge harmful actions more harshly than harmful inactions, and the concomitant reluctance in many circumstances to go out on a limb by taking action (Gleicher, Kost, Baker, Strathman, Richman, & Sherman, 1990; Kahneman & Miller, 1986; Kahneman & Tversky, 1982; Spranca, Minsk, & Baron, 1991). All of these tendencies speak to the power of norms (which we discuss more extensively below), and the influence of each stems in part from the anticipated regret that would come with a counter-normative choice. To defy a default is to alter the way things are preset to be. To defy the status quo is to depart from tradition. And to defy the omission bias is to shoulder an extra burden of responsibility. All of this makes one vulnerable to self-criticism and the second-guessing of others. There are times, of course, when the default or status quo is to take action, and when it is, people tend to experience more regret for failing to act than for acting—that is, the omission bias is turned on its head (Connolly & Zeelenberg, 2002; Davis, Lehman, Wortman, Silver, & Thompson, 1995; Gilovich & Medvec, 1995; Seta, McElroy, & Seta, 2001; Zeelenberg, Van den Bos, Van Dijk, & Pieters, 2002). Although default and status quo effects are perfect illustrations of the power of channel factors, they attracted the interest of JDM researchers simply because they are such prominent elements of the landscape of decision making. Recently, however, JDM researchers have pursued research agendas inspired very directly by the theoretical notion of channel factors. This is particularly true of scholars associated with the field of behavioral economics. The aim of many behavioral economists is to figure out why many people don’t spend, save, invest, or borrow as wisely as they might, and to design
53 interventions that make it easier for people to act in accordance with their economic interests. Investigations of employee participation in company-sponsored retirement plans, for example, have found that the channel leading people to productive savings tends to be blocked by the provision of a large number of different investment options (Iyengar, Jiang, & Huberman, 2004) but is deepened when employers offer automatic (Madrian & Shea, 2001) or easy enrollment (Choi, Laibson, & Madrian, in press) options. Taking advantage of the fact that people find it easier to commit to taking unpleasant or costly actions in the future than in the here-and-now, Benartzi and Thaler (2004) found that savings rates increased substantially when employees could sign up for increased (automatic) deductions from future raises—their “save more tomorrow” plan. Marianne Bertrand, Sendhil Mullainathan, and Eldar Shafir (2004, 2006) have applied the idea of channel factors to better understand, and to improve, the financial decisions made by the poor. For instance, many poor households do not participate in the banking system, a decision that can cost them dearly as they are forced to rely on alternative financial institutions, such as check cashers and unregulated money lenders, that exact higher transaction costs. Many poor households also fail to take advantage of various welfare services for which they are eligible. Bertrand and colleagues have explored how a number of modest interventions—such as simplifying application forms and consolidating informational meetings and enrollment sessions into one encounter—can dramatically increase the use of banking and welfare services on the part of the poor. More broadly, channel factors play a prominent role in Thaler and Sunstein’s advocacy of “libertarian paternalism” (Thaler & Sunstein, 2003, 2008; see also Camerer,
54 Issacharoff, Loewenstein, O’Donoghue, & Rabin, 2003). The choice environments in which individuals must confront many important decisions are often unintentionally structured in ways that steer people away from acting in their best interests—e.g., arbitrary and counter-productive defaults, bewildering presentations of available options, salient displays of the most troublesome alternatives. Rather than mandating actions that are deemed to be in the best interests of most citizens—that is, paternalism—the choice environment can be structured in ways that foster better decisions. No one is compelled to make a particular choice; people are simply nudged in the right direction—that is, libertarian paternalism. And much of the nudging they recommend involves creating the right channels in the choice environment to encourage sound decision making and enhanced well-being.
Affective Influences on Judgment and Decisions
It is surely no accident that social psychology and JDM became heavily intertwined in the 1970s, the heyday of cognitive theorizing in psychology. The roots of JDM in mathematical decision theory, in reasoning, and in statistical prediction guaranteed a pronounced, if not exclusive, focus on cognitive mechanisms of judgment and choice. Such a focus found a ready match in the predominant theoretical perspectives and research interests of social psychologists at that time. But although social psychologists were obsessed with determining how virtually every social psychological phenomenon might be explained solely with reference to cognitive processes, the field did not completely abandon its historical interest in emotional
55 influences on behavior (see Bruner, 1992, Bruner & Klein, 1960, Bruner & Goodman, 1947, and Bruner & Postman, 1947, for earlier treatments of the effects of motivation on perception and judgment). And when the “cognitive revolution” ebbed, there was a rapid and pronounced return to the study of emotion (see Keltner & Lerner, this volume). Given the newly forged links between JDM and social psychology, social psychology’s renewed interest in emotion was certain to have a pronounced influence on JDM. Social psychologists had much earlier studied the impact of emotion on behavior, with a particular focus on the influence of various affective states on compliance. Feeling guilty, for example, has been shown to increase people’s willingness to comply with requests from strangers (Carlsmith & Gross, 1969; Cialdini, Darby, & Vincent, 1973; Darlington & Macker, 1966; Regan, 1971; D. Regan, Williams, & Sparling, 1972). And, on the other end of the emotional spectrum, being in a good mood has also been shown to increase compliance (Carlson, Charlin, & Miller, 1988; Isen & Levin, 1972; Isen, Clark, & Schwartz, 1976). Inspired in part by findings such as these, Johnson and Tversky (1983) examined whether incidental affective states influence people’s assessments of risk. They had participants first rate the journalistic quality of news stories depicting anxiety-provoking, depressing, or uplifting events, and then had them estimate the number of people who die each year as a result of such things as traffic accidents, leukemia, and homicide. Those who had read news stories that induced anxiety and depression provided significantly higher estimates than those who read either neutral or uplifting stories (see also Loewenstein, Weber, Hsee, & Welch, 2001; Slovic & Peters, 2006). Subsequent work established that hypnotically induced mood states have the same effect:
56 Those in a good mood estimated higher probabilities of occurrence for such things as world peace and a cure for cancer, and those in a bad mood estimated higher probabilities for such things as nuclear power accidents and automobile injuries (Wright & Bower, 1992). More dramatic manifestations of this effect can be found in studies of stock market returns. An examination of stock markets in 26 countries over a 15-year period found that the amount of sunshine on a given day is positively correlated with market performance. The investigators suggested that this effect was due to investors attributing their weather-induced good moods to positive economic circumstances rather than the true source, sunshine (Hirshleifer & Shumway, 2003; Kamstra, Kramer, & Levi, 2003). A more recent study found that stock market returns decline when a country’s soccer team is eliminated from a prominent tournament such as the World Cup, and that comparable dips follow losses in other locally popular sports, such as cricket, rugby, and basketball (Edmans, Garcia, & Norli, 2007). Three explanations have been offered for the influence of mood on judgment. According to the priming account, positive and negative moods activate positive and negative information, respectively, and the enhanced accessibility of one type over the other distorts how objects or propositions are evaluated (Bower, 1981; Isen, Shalker, Clark, & Karp, 1978; Mayer, Gaschke, Braverman, & Evans, 1992; Mayer, Gayle, Meehan, & Haarman, 1990). A considerable body of research has examined this explanation, but it remains controversial (Clore & Huntsinger, 2007). According to an alternative account, the affect-as-information hypothesis, mood and emotional states serve as cues about how one feels about a given stimulus (Clore,
57 1992; Clore & Huntsinger, 2007; Schwarz, 1990; Schwarz & Clore, 2003; Sechrist, Swim, & Mark, 2003). Positive stimuli typically induce positive affect and so people implicitly reason that if they feel good while confronting or contemplating a given stimulus, they must think it has positive features. And the same applies for negative moods and emotions. In the most widely-cited study in support of this idea, students at a Midwestern university were called on the telephone (by someone claiming to be from out of town) and asked a few questions about their life satisfaction (Schwarz & Clore, 1983). Some calls were made on sunny days and others on gloomy days and, as predicted, respondents reported that they were significantly happier and more satisfied with their lives overall if they were contacted on one of the sunny days. In a critical additional condition of the study, half the respondents were asked before the life-satisfaction questions, “By the way, how’s the weather down there?” Asking this question completely erased the impact of the weather on respondents’ ratings of happiness and satisfaction with life. This manipulation almost certainly did not influence the respondents’ moods, but it did influence what they understood their moods to signify about their lives. Without the question about the weather, respondents interpreted their feelings as indicative of their overall satisfaction with life; with the question, they did not. As research on the affect-as-information hypothesis progressed, it became clear that the informative value of moods was quite a bit more general than originally envisioned and had implications far beyond their influence on evaluative judgments (Schwarz, 1990). This led to the third account of the influence of mood and emotion on judgment, the processing style perspective. The idea here is that people implicitly understand that their moods reflect the state of their environment. A bad mood may
58 signify a troublesome situation, serving as something of a “trouble ahead, slow down” sign and leading to an information processing style that is careful, systematic, and deliberate. A happy mood, in contrast, may signify a benign situation and serve as something of a “smooth sailing” sign, leading one to process information more heuristically and reflexively. A great deal of research supports this view, with people in positive moods being shown to engage more than those in negative moods in top-down rather than bottom-up analysis, heuristic rather than systematic processing, and global/abstract thought rather than specific/concrete thought (Bless, Mackie, & Schwarz, 1992; Bless, Schwarz, & Wieland, 1996; Bodenhausen, Kramer, & Susser, 1994; Forgas, 1998; Fredrickson & Branigan, 2005; Gasper & Clore, 2002; Park & Banaji, 2000). Beyond Positive and Negative. Our emotional lives, of course, are more complex than simply feeling good or feeling bad. Many emotion researchers maintain that a key element of what distinguishes various emotional states is the pattern of cognitions—appraisals—associated with each (Lazarus, 1991; Smith & Ellsworth, 1985). The good-bad dimension may be the primary appraisal, but it is not the only one. Anger, for example, is not just negative; it is also associated with appraisals of harm to oneself or someone one cares about, certainty about the source of the harm, and the attribution of the harm to an agent. And just as the positivity or negativity of an incidental emotional state might spill over and influence an unrelated judgment, so too can these other appraisals. Most of the research that has examined this issue has focused on the appraisal of certainty. Emotions associated with certainty—anger, disgust, and happiness—are thought to be characterized by a sense of confidence about what is happening in the situation, how to
59 respond, and what will happen next, whereas emotions associated with uncertainty—fear, sadness, and hope—are characterized by less confidence in these assessments (Smith & Ellsworth, 1985). In one study of how emotions that differ in certainty appraisals affect judgment, Lerner and Keltner (2001) found that participants who had been asked to think about events from their lives that made them angry (high certainty) gave lower estimates of the likelihood of suffering various maladies than those who had been asked to think about events that made them afraid (low certainty). The feeling of certainty associated with anger made various hazards seem less likely (see also Leith & Baumeister, 1996; Taylor, Lerner, Sage, Lehman, & Seeman, 2004; Tiedens & Linton, 2001). In a notable extension of this finding, a national sample of Americans was contacted two months after the 9/11 terrorist attacks on the World Trade Center and the Pentagon. Some of the respondents were led to focus on fear-inducing elements of the attacks; others on anger-inducing elements. The angry participants gave lower estimates of the likelihood of both future terrorist attacks and other, unrelated risks (Lerner, Gonzalez, Small, & Fischhoff, 2003). Another examination of the impact of the certainty appraisals associated with different emotional states went beyond perceptions of risk. Tiedens and Linton (2001) found that participants led to feel angry or content (high certainty) relied more on the source of a persuasive communication when evaluating its merits than did participants led to feel worried or surprised (low certainty). The feeling of certainty associated with anger and contentment apparently encouraged participants to think that they had a handle on things and could rely on relatively superficial cues, such as who was pitching the
60 message. This conclusion was reinforced by the results of another study in which participants led to feel disgusted (high certainty) were more likely to use stereotypes in making predictions about another person than were participants led to feel anxious (low certainty). In an investigation of the impact of other appraisals associated with different emotions, Lerner, Small, & Loewenstein (2004) examined how the incidental emotional states of disgust and sadness might influence the endowment effect, or the tendency for owners of a good to value it more than those considering owning it (Kahneman, Knetsch, & Thaler, 1991; Thaler, 1980). Disgust is associated with the appraisal of being too close to an entity or idea and so the investigators predicted that it would lead to a desire to expel close objects and hence would undermine the endowment effect. It did. After having earlier seen a film that elicited feelings of disgust, those given a small gift and asked how much they would sell it for did not inflate their asking price—in marked contrast to sellers in a control condition. The investigators also predicted that sadness, associated with the implicit goal of changing one’s current circumstances, would lead to a reversal of the endowment effect. Those endowed with the gift would want to change by getting rid of what they have and those not given the gift would want to change by acquiring it. And they did. Sellers stated a significantly lower price for the gift than neutral choosers did. Non-inferential Effects of Emotion. All of the theoretical accounts of the impact of affective states on judgment discussed thus far posit an element of reasoning as a critical component of the reported effects. More specifically, by these accounts, emotions are seen as cues. According to the mood-as-information account, one’s positive
61 feelings are taken as a cue that the stimulus being confronted or the thought being entertained must be congenial. According to the processing-style perspective, one’s negative feelings are taken as a cue that not all is right in the world and one must process incoming information carefully. Such cognitive elements in theories of emotional influence are to be expected, as people’s cognitive and affective systems are tightly interconnected and emotions typically have much of their effects through their impact on cognitive processes. But emotions sometimes have simpler, more direct effects. We often think of emotion not as intertwined with reason but as at odds with it, hijacking reason and exerting an influence on our behavior that is too strong to be overcome by our “better judgment.” Is there evidence of more direct, and less inferential, emotional influence on judgment and decisions? One program of research that addresses the direct influence of emotion on judgment and behavior examines how people physiologically encode the affective consequences of different courses of action and how these “somatic markers” influence action (Bechara, Damasio, & Damasio, 2000; Bechara, Damasio, Tranel, & Damasio, 1997; Damasio, 1994; Damasio, Bechara, & Damasio, 2002; Hinson, Jameson, & Whitney, 2002; but see Dunn, Dalgleish, & Lawrence, 2006; Tomb, Hauser, Deldin, & Caramazza, 2002). The researchers investigated the impact of somatic markers by giving participants a stack of play money and having them play a card game in which the goal was to win as much money as possible by turning over cards that specified wins and losses of varying amounts. Players selected cards from one of four decks, without knowing (at first) the composition of the decks or how long the game would last. The
62 composition of the decks is depicted in Table 1. Note that the decks that often provide participants with a $50 payout have a positive expected value and should be pursued; the decks that provide a $100 payout have a negative expected value and should be avoided. Do participants learn to choose wisely from Decks C and D and to avoid Decks A and B? All of them did—initially. But participants with damage to their orbitofrontal cortex quickly resumed picking from the high-gain, high-loss decks. It thus appears that although all participants knew these decks were excessively risky, those with damage to the orbitofrontal cortex couldn’t stop themselves from going after the alluring $100 payout. Furthermore, patients without brain damage, but not those suffering from damage to the orbitofrontal cortex, soon developed a pattern of skin conductance that preceded their choices from the excessively risky decks. Thus, the orbitofrontal cortex appears to be a critical region for uniting factual knowledge and somatic markers, or emotional reactions. If this region is not intact, individuals will still recognize the long odds against a given course of action, but will not experience the emotional warning signal—the somatic marker—that steers them away from it. Note that it is the emotion itself, not an inference drawn from the emotion (even participants with orbitofrontal damage have drawn the right inferences), that directs behavior. There are times, however, when ignoring such emotional warning signals will result in better decisions. In these cases, the absence of somatic markers will improve decision making. Most people, for example, won’t accept 50-50 odds of winning $200 or losing $150 despite the positive expected value of such a bet because the fear of losing $150 looms too large. Basketball coaches who find their teams trailing by 2 points, but in possession of the ball, are reluctant to set up a game-winning 3-point shot even though the
63 chances of making it and winning the game (~33%) are greater than the chances of making a 2-point shot and then winning the game in overtime (a ~50% chance of making the shot × a ~50% chance of winning the overtime period = ~25%). In situations like these, patients with damage to critical brain regions have been found to make better decisions because they don’t have the normal level of risk aversion (Shiv, Loewenstein, Bechara, Damasio, & Damasio, 2005). Emotional Influences on Probability Assessment. Another possible direct effect of emotion on judgment involves the tendency for emotionally-laden events to distort the weights people assign to the events’ probability of occurrence (Rottenstreich & Hsee, 2001). According to standard models of decision making, people independently assess the valence of an event and its likelihood of occurrence, and then combine the two to decide on the best course of action. This can seem psychologically unrealistic, however, as the following thought experiment suggests. Suppose you were to roll a die and, if it lands on six, you pay a fine of $10. How likely does it seem that a six will come up? Now suppose you play Russian roulette with one bullet in a six-chambered revolver. How likely does it seem that it will end in disaster? Most people report that they recognize the odds of a bad outcome are the same in the two scenarios, but that the odds “feel” less favorable in the latter. The contamination of an event’s perceived likelihood by its valence appears to be partly responsible for people’s reluctance to “tempt fate.” People know that a negative outcome will be experienced as especially negative if they won’t be able to shake the thought that, accurate or not, they did something to bring it about. Getting rained on feels bad, but getting rained on after deciding not to carry an umbrella feels worse; the end of a
winning streak feels bad, but having it end after calling attention to the streak feels worse. And it appears to be the very negativity of the imagined negative outcomes that follow from actions that tempt fate that makes those outcomes especially accessible and makes them seem especially likely to occur (Risen & Gilovich, 2007, 2008). If the emotions elicited by anticipating different uncertain events—emotions like excitement and dread—influence the weight given to their subjective probability of occurrence, we might expect a “compression” of people’s reactions to varying probabilities of experiencing an emotionally rich outcome. Rottenstreich and Hsee (2001) found just such an effect, showing that people’s willingness to pay to avoid an electric shock, an affect-rich outcome, was relatively insensitive to the probability of receiving the shock. In contrast, their willingness to pay to avoid losing $20, a less affectively-rich outcome, was very sensitive to the probability of the loss. Thus, a very low probability of experiencing a dreaded outcome can seem too likely, and a very low probability of a delightful treat can be enough to maintain hope. Ditto and colleagues (2006) obtained similar results in an experiment in which participants were given the opportunity to play a game in which winning would result in their getting to eat chocolate chip cookies but losing would result in their having to work on a boring task for an extra half-hour. Half of the participants were simply told about the cookie reward; for the other half, the cookies had been freshly baked and placed in front of the participants as they decided whether to play the game. In line with the results of Rottenstreich and Hsee (2001), the participants’ willingness to play the game was very sensitive to their odds of winning when the cookies were described abstractly, but insensitive to the probability of winning when the aroma and sight of the cookies got their juices flowing.
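This “compression” can be made concrete with a probability-weighting function of the sort used in the prospect-theory literature. The sketch below is ours, not Rottenstreich and Hsee’s own model: it assumes a standard one-parameter weighting function in which a smaller value of the parameter gamma yields a flatter curve, and the particular gamma values are illustrative only.

```python
# A minimal sketch (assumed, not Rottenstreich & Hsee's model): a standard
# one-parameter probability-weighting function from the prospect-theory
# literature. A smaller gamma yields a flatter curve, i.e., decision weights
# that are less sensitive to probability -- the "compression" described above.

def weight(p: float, gamma: float) -> float:
    """Decision weight for probability p; gamma < 1 overweights small
    probabilities and underweights large ones."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    affect_poor = weight(p, gamma=0.7)  # e.g., losing $20 (illustrative)
    affect_rich = weight(p, gamma=0.3)  # e.g., electric shock (illustrative)
    print(f"p = {p:.2f}   w(p) affect-poor = {affect_poor:.2f}   "
          f"w(p) affect-rich = {affect_rich:.2f}")
```

With gamma = 0.3, the weights for probabilities of .01 and .50 differ by only a few hundredths, mirroring the finding that willingness to pay to avoid a shock barely moved as its probability changed.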
Visceral Influence on Behavior. Strong visceral feelings can not only distort our sense of the likelihood of an emotionally-laden event, or the psychological weight that we assign to its likelihood, they can also directly influence our actions, sometimes leading us to act in ways that are at variance with our better judgment (Ainslie, 1975). The average person has no difficulty appreciating this idea, as nearly everyone is familiar with such expressions as “I must have been crazy when I…,” “I just couldn’t control myself and…,” and “my emotions just got the better of me.” George Loewenstein (1996) provided a formal account of visceral influences on behavior, with the aim of making the study of such influences a central component of the science of decision making. In one notable experimental investigation inspired by this analysis, Ariely and Loewenstein (2005) examined the effects of sexual arousal on male participants’ self-reported willingness to engage in problematic sexual behavior. All of their participants were asked to answer a number of questions on a laptop computer loaned to them for the experiment. Control participants answered the questions in their normal, presumably not-intensely-aroused state. Experimental participants did so in the course of following instructions to begin masturbating while viewing erotica. Aroused participants reported that they would be less likely to use a condom during intercourse, that they would be more likely to lie to obtain sex and persist in the effort to do so more vigorously after a woman said no, and that they found practices such as S & M, anal sex, and sex with a wider age range of partners more appealing. Note that the aroused participants’ physiological state did not influence their perceptions of the dangers of some of these activities. They were no less likely than control participants, for example, to agree with the statement, “If you pull out before you ejaculate, a woman can still get pregnant.” But
their awareness of the dangers notwithstanding, the heat of the moment had a pronounced effect on their inclinations to engage in behavior that more dispassionate participants view as highly questionable (see also Loewenstein, Nagin, & Paternoster, 1997). As Loewenstein (1996) pointed out, theories that fail to take account of the impact of visceral states like sexual arousal, hunger, thirst, and tiredness will fail to capture some of the most common and most powerful determinants of the choices people make.
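The text does not reproduce Loewenstein’s formalization, but its core intuition can be sketched in a toy model: a visceral state temporarily inflates the momentary value of whatever would relieve the aroused drive, while leaving factual beliefs about consequences untouched. Everything in the sketch below, from the function names to the parameter values, is our own illustrative assumption.

```python
# Toy illustration only (not Loewenstein's 1996 formalization): a visceral
# factor multiplies the momentary value of options that would satisfy the
# aroused drive, while beliefs about consequences are held constant.

def momentary_value(base_value: float, arousal: float,
                    relieves_drive: bool) -> float:
    """Value of an option at the moment of choice; arousal amplifies only
    drive-relevant options."""
    return base_value * (1 + arousal) if relieves_drive else base_value

SAFE = 10.0     # prudent option, judged better in a cool state (arbitrary units)
TEMPTING = 6.0  # drive-relevant option, judged worse all things considered

for arousal in (0.0, 0.5, 1.0):  # cool, moderately hot, hot
    hot_value = momentary_value(TEMPTING, arousal, relieves_drive=True)
    choice = "tempting" if hot_value > SAFE else "safe"
    print(f"arousal = {arousal:.1f}: tempting option worth {hot_value:4.1f} "
          f"vs. safe option worth {SAFE:.1f} -> chooses the {choice} option")
```

In the cool state the safe option wins, but at high arousal the choice reverses even though nothing about the (constant) consequence values has been learned or forgotten, which is the pattern Ariely and Loewenstein observed.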
Norms and Identity
The study of norms has always been an important part of social psychology and doubtless always will be. There are at least two reasons for this, one of them being the undeniable fact that norms determine so much of human behavior. Consider what a snapshot of rush hour at Grand Central Station would look like in 1935 and, say, 50 years later. Two things would immediately stand out. The overwhelming majority of the people in the 1935 photo would be men, but the 1985 photo would be more gender-balanced. And nearly all of the men in 1935 would be wearing hats. The only explanation for these differences would be the change in norms governing fashion and work outside the home. (A less attention-grabbing norm in the same scene, one present in both 1935 and 1985, is the one that prevents everyone from bumping into one another—that of staying to the right as they walk.) The second reason that norms are so important for social psychologists to study is that they are often invisible. People follow them but often don’t know and can’t articulate what they are following. For many of the most basic and powerful norms, people only notice their existence on those rare occasions when someone violates them,
engaging in what Erving Goffman (1963) referred to as “negatively eventful actions.” Thus, norms are an important focus of social psychological research because they hide in plain sight. Norms influence human behavior in two ways. First, they impart meaning to the situations and stimuli a person encounters, and to the different courses of action a person must choose between. This is a big component of what social psychologists have long referred to as “informational social influence” (Deutsch & Gerard, 1955). Norms provide answers to the implicit questions, “What kind of situation is this?” and “What would this response mean?” A series of studies by Kay, Wheeler, Bargh, and Ross (2004) provides an informative illustration (see also Chen, Lee-Chai, & Bargh, 2001; Fitzsimons, Chartrand, & Fitzsimons, in press; Gilovich, 1981). They exposed some participants to a number of objects associated with business environments—a briefcase, a boardroom table, fountain pens—before having them participate in the Ultimatum Game. In this game, one of two participants proposes how a sum of money given by the experimenter should be split between them. The second participant can either accept the proposed split or reject it, in which case neither participant receives anything (Camerer & Thaler, 1995; Güth, 1995; Thaler, 1998). As Kay et al. anticipated, exposure to the objects associated with the business world activated the competitive norms associated with that environment, leading these participants to make less generous offers compared to those in a control condition. Norms also influence behavior through the identities they establish. They influence both how one views oneself and how one is viewed and treated by others (opening the door to the “normative social influence” long discussed by social
psychologists). To follow a norm is to align oneself with others and thereby signal respect for their take on the world. To go against the norm, in contrast, is to stand against others and can therefore be an implicit slap in the face, an action not to be taken lightly. To go against the norm is to be a renegade, for good (if that is the identity one seeks) or for ill (if one would prefer to fit in). Note that our review of research on norms in both JDM and social psychology will be especially selective, both because much of the relevant literature is covered in the chapter on social influence (see Hogg, this volume) and because we examine how meaning is assigned to situations and to actions in our discussion of construal below. Note also that it is not possible to completely separate the two ways that norms have their influence because the first, the meaning one attaches to a stimulus or one’s response to it, determines the second, the sorts of identities that fall out or can be claimed. Consider the impact that training in economics has on contributions to the public good. Academic economists are less likely than their peers in other academic disciplines to contribute to charity, and economics majors are more likely than students majoring in other disciplines to defect in the Prisoner’s Dilemma Game (Frank, Gilovich, & Regan, 1993). The disciplinary training one receives in economics doubtless influences the meaning one assigns to ambiguous situations such as the Prisoner’s Dilemma Game (“Is this about maximizing my own profit or about being cooperative?”). But in so doing it also influences the identities one ascribes to an individual who cooperates (a kind soul vs. a sap) or defects (a selfish opportunist vs. a savvy player). The two components of the power of norms—the assignment of meaning to situations and the assignment of identities to individuals—cannot be cleanly separated.
Nearly all social behavior can be viewed through the lens of people’s understanding, often implicit, of prevailing norms and the importance they attach to them (Bicchieri, 2006; Cialdini, 2007; Cialdini, Kallgren, & Reno, 1991; McKirnan, 1980; Miller, 2006). The growing influence of this insight from social psychology on JDM is apparent in a number of areas, including demonstrations of the influence of norms in commercial and economic contexts. Consider two illustrative examples. Goldstein, Cialdini, and Griskevicius (2008) examined the efforts of hotel managers to get their guests to reuse their towels as part of an effort to conserve water and energy. They found that when the small card placed in the bathroom urging guests to reuse their towels contained a statement that a majority of past guests had chosen to reuse their towels, a significantly higher percentage of the current guests complied. Interestingly, the rate of compliance was increased even further when the card stated that a majority of guests who “stayed in this room” reuse their towels. A parallel effect was observed in an examination of efforts to improve income tax compliance (Coleman, 1996). Taxpayers in Minnesota were sent one of several letters containing different types of information—e.g., what services their tax dollars provide, what would happen if they were found not to be in compliance, how they might get help in filling out their returns. One letter informed taxpayers of the norm of compliance: “…many Minnesotans believe other people routinely cheat on their taxes. This is not true, however. Audits by the Internal Revenue Service show that people who file tax returns report correctly and pay voluntarily 93 percent of the income taxes they owe. Most taxpayers file their returns accurately and on time.” Notably, only the letter informing citizens of the typical behavior of their peers significantly increased compliance.
Cultures As Carriers of Norms. Certainly the biggest determinant of the prevailing norms, and one’s understanding of those norms, is the culture in which one is embedded. A culture is a nexus of norms. It stands to reason, then, that a great deal of recent work on the impact of norms on judgment and decision-making would take a cross-cultural perspective. This has been particularly true of research on cooperation and resource allocation. This work takes its start from earlier evidence that people do not act in the completely self-interested fashion postulated by traditional economics. In the Ultimatum Game, for example, the traditional economic prediction is clear: If splitting, say, $10, the first person will propose keeping $9.99 and giving $.01 to the other. The second person will accept, on the principle that a penny is more than nothing. This virtually never happens. Offers of 50-50 are common, and those much below 50-50 are frequently rejected. There thus appears to be a powerful norm of fairness governing people’s behavior in the Ultimatum Game. Proposers’ awareness of the norm makes them disinclined to make an offer much different from 50-50, and they recognize that the other person is aware of the norm and hence unlikely to accept an imbalanced offer. Indeed, brain imaging studies have shown that imbalanced offers tend to activate brain regions such as the anterior insular cortex that are associated with disgust (Sanfey, 2002). But is this the case in all cultures? One study compared the behavior in the Ultimatum Game of the Machiguenga, a largely hunter-gatherer people in the Peruvian Amazon, with that of various control groups consisting of participants from Pittsburgh, Tokyo, Jerusalem, etc. (Henrich, 2000). Machiguengan proposers were much stingier than their counterparts in the other societies, offering the other person an average of only 26 percent of the total.
The Machiguengan respondents did not seem to expect more than this, as nearly all offers were accepted, even the stingiest (less than 5% of the offers were refused). More extensive examination (Henrich et al., 2001) has linked the extent to which participants adhere to norms of fairness to the degree of market integration in their culture. Markets require mutually beneficial exchange and thus encourage norms of fairness and trust. These norms define imbalanced offers as unfair, triggering emotions like anger and disgust that lead to rejection of offers that, in the short term at least, would advance participants’ material interests. Interestingly, Murnighan and Saxon (1998) found that American kindergarteners acted much like respondents in cultures with little market integration, being much more willing to accept a minimal contribution in the Ultimatum Game (a single M&M candy or one cent) than were older children. This implicates the role of learned norms in perceived fairness. The predominance of non-selfish behavior in experimental games is a major challenge for economic theory rooted in utility maximization. However, several recent models have attempted to generalize the notion of utility to include the pleasure of seeing others appropriately rewarded. Rabin (1993) included the psychological concept of attributions in his model of “fairness equilibria,” arguing that the utility of a given action in experimental games is determined not only by the behavior of the other but also by interpretations of the other’s intentions, leading to a positive utility for seeing the unkind punished and the kind rewarded. Camerer and Thaler (1995) went beyond the concept of utility and concluded that “the outcomes of ultimatum, dictatorship, and many other bargaining games have more to do with manners than altruism” (p. 216). In other words, even in economics, norms rule!
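The gap between the standard prediction and observed rejections can be made concrete with a small simulation. The responder utility below is a stylized sketch in the spirit of inequity-aversion models from this literature (e.g., Fehr and Schmidt’s), not Rabin’s fairness-equilibrium formulation; the parameter alpha, which indexes the strength of the fairness norm, is an assumption of ours.

```python
# Stylized sketch (not Rabin's model): a responder whose utility is material
# payoff minus a penalty for disadvantageous inequality, in the spirit of
# inequity-aversion models. The value of alpha is arbitrary.

PIE = 10.00  # total to be split, as in the $10 example above

def responder_utility(offer: float, alpha: float) -> float:
    """Material payoff minus an 'envy' penalty when the proposer keeps more."""
    proposer_keeps = PIE - offer
    return offer - alpha * max(proposer_keeps - offer, 0.0)

ALPHA = 0.8  # strength of the internalized fairness norm (assumed)
for offer in (0.01, 1.00, 2.50, 4.00, 5.00):
    utility = responder_utility(offer, ALPHA)
    decision = "accept" if utility > 0 else "reject"  # rejecting yields 0
    print(f"offer ${offer:4.2f}: responder utility {utility:6.2f} -> {decision}")
```

With alpha = 0.8, offers below about $3.08 are rejected even though rejection costs the responder money; setting alpha to zero recovers the textbook prediction that any positive offer is accepted. Cross-cultural differences of the kind Henrich documented can then be read as differences in alpha.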
CONCEPTS THAT HAVE ARISEN INDEPENDENTLY IN SOCIAL PSYCHOLOGY AND JDM
To understand social life, one must understand how people make judgments about the social stimuli they confront and the social dilemmas they face. And to understand judgment, one must understand that many of our most important judgments are made in a social context and are about social phenomena. It thus stands to reason that, as we have seen, social psychologists have drawn productively on ideas developed in the field of judgment and decision-making, and JDM researchers have drawn productively on ideas developed in social psychology. And given this substantial overlap in the concerns of both fields, it would be odd indeed if some important ideas and perspectives had not arisen independently in both social psychology and JDM. As it happens, there are a number of examples of very similar ideas being developed in the two fields, and we will focus on three of them. We first discuss how both fields have struggled to understand how people make some judgments and decisions relatively quickly and intuitively, and others with greater effort and deliberation. We then discuss the importance both fields have attached to the way people construct or construe the stimuli they encounter. We end with a discussion of the emphasis in both fields on what might be called the “givenness” of experience. Social psychologists have explored people’s predisposition toward naïve realism, or the tendency to treat one’s understandings of the world, not as subjective constructions, but as direct reflections of how the world really is. Research in JDM has uncovered a parallel tendency for people
to accept a given frame or construction of a problem without seeing, or trying to see, how it can be framed or constructed differently and what the implications of alternative frames and constructions might be.
Dual-process and Two-systems Theories
As Neisser (1963) noted in an early review of dual “modes” of cognition, “The psychology of thinking seems to breed dichotomies.” Consistent with this, social psychologists and JDM researchers have both recognized that people appear to approach various cognitive tasks in two very different ways (Chaiken & Trope, 1999; Evans, 2004; Sloman, 1996; Strack & Deutsch, 2004). One involves mental processes that are fast, associationist, and often automatic and uncontrolled. The other involves processes that are slower, rule-based, and more deliberate. Scholars in both disciplines have devoted a great deal of energy to specifying the nature of these two types of processes, or “systems” of thought, and to delineating when each is operative and how they interact when people make a judgment or choose a course of action. The two systems have been given many names and, following Stanovich (1999), we refer to them simply as “System 1” and “System 2” for ease of exposition. The most agreed-upon characteristics of the two systems are depicted in Table 2. Dual-process theories in social psychology. Dual-process or two-systems theories arose in social psychology out of the effort to understand how people attach meaning to the information they encounter. How does one know that a gesture is threatening, a person is kind, or a persuasive message is worth taking seriously? Doing so clearly takes some combination of “top down,” theory-driven, categorical processes on the one hand,
and “bottom up,” data-driven, piecemeal processes on the other. Some have argued that bottom-up processing is more effortful than top-down processing. In their review of the history of dual-process theories in social psychology, for example, Moskowitz, Skurnik, and Galinsky (1999) explicitly equate top-down processes with effortless, mindless processes, and bottom-up processes with effortful, mindful processes. It is an important assumption in nearly all dual-process models in social psychology that people are motivated to minimize the amount of effort they devote to information processing. People are likened to “cognitive misers” (Fiske & Taylor, 1991) who follow a “principle of least effort” in forming impressions of others and thus can be expected to rely heavily on relatively effortless top-down processes (Allport, 1954; Eagly & Chaiken, 1993; Tajfel, 1969). This distinction between top-down and bottom-up processes, with top-down processing viewed as less effortful, is present in Fiske and Neuberg’s (1990) continuum model of person perception. In this model, perceivers are thought to quickly (and often automatically) assign target individuals to a category, and then apply the attributes of the category to the individual in question. The target person is understood in terms of these attributes. If the attributes don’t fit other information about the target, if they violate the perceiver’s preferences, or if the perceiver is especially motivated to form an accurate impression, this initial category-based impression is effortfully refined or replaced by individuating information about the target person. It is assumed that “perceivers tend toward category-based processes as default processes” (Fiske, Lin, & Neuberg, 1999). With varying levels of commitment, many of the most prominent dual-process models in social psychology also imply something of an “either-or” operation of the two
sets of mental operations. Perceivers are thought to most often go with an immediate, category-based judgment, and to sometimes supplant that judgment with a more careful, individuated analysis. The continuum model (as the name suggests) specifies some feedback between the two processes, resulting in impressions that can be a blend of the two. Gilbert’s correction model also entails considerable interaction between the two systems (Gilbert, 1989, 2002). An initial, effortless impression of a person in line with his or her behavior is corrected by a more deliberate and effortful recognition of prevailing situational constraints. But an either-or nature of the two processes is emphasized in other influential dual-process accounts, such as Petty and Cacioppo’s (1986) elaboration likelihood model of persuasion (ELM). The ELM proposes two routes to persuasion—a central route in which persuasive arguments are thought about in depth and detail, and a peripheral route in which one attends (minimally) to more superficial features of the persuasive communication such as how many arguments were offered and who offered them (see Albarracin, this volume). Note that the theory’s very labels testify to the presumed either-or nature of the two processes: One can’t be in two places at one time, and hence one is either on one “route” or the other. This either-or assumption is also present in a close cousin of the ELM, Chaiken’s heuristic-systematic model of persuasion (Chaiken, 1980). The assumption is implicit in the statement that “…recipients will employ a systematic strategy when reliability concerns outweigh economic concerns, and a heuristic strategy when economic concerns predominate” (Chaiken, 1980, p. 754). Note that it is one or the other.
Another important building block in the development of dual-process theories in social psychology was the awareness that social perception involves both implicit and explicit processes, and draws on both implicit and explicit knowledge. This awareness was aided and abetted by Schneider and Shiffrin’s (1977) work on automatic and controlled information-processing, by Nisbett and Wilson’s (1977) influential paper on people’s lack of awareness of many of their most consequential thought processes (see also Wilson, 2002), and, from outside of psychology, by the availability of desktop computers. The latter made it easy for researchers to use subliminal priming and reaction time procedures to reveal the frequent rift between implicit attitudes and beliefs, on the one hand, and consciously-accessible convictions on the other (see Banaji, this volume). One particularly influential paper used such procedures to demonstrate that pejorative stereotypes can influence the inferences made by individuals who would (genuinely) deny making any negative assumptions about members of an outgroup (Devine, 1989). Devine subliminally presented participants with either neutral words or words stereotypically associated with African-Americans (welfare, jazz, busing) and then had both groups read a vignette about a person who acted in an ambiguously hostile manner. Participants who had just been primed with words associated with African-Americans, even those who showed no trace of bias on the Modern Racism Scale (McConahay, Hardee, & Batts, 1981), rated the person as more hostile and more negative overall. This and other conceptually (if not methodologically) similar studies led to all manner of theorizing about the nature of implicit and explicit attitudes and to considerable controversy over whether and how readily implicit attitudes can be
consciously controlled (Fazio & Olson, 2003; Gawronski & Bodenhausen, 2006). It also inspired a number of empirical studies that probe the potentially destructive real-world consequences of implicit stereotyping and prejudice on the part of individuals who would deny being in any way bigoted (see Dovidio & Gaertner, this volume). Most noteworthy are studies showing that participants who deny having any prejudice toward African-Americans (notably, in some experiments, African-American participants themselves) are more likely to decide that a target person is holding a handgun rather than an innocuous object if the target person is black than if the target is white—and to react accordingly (Correll, Park, Judd, & Wittenbrink, 2002; Judd, Blair, & Chapleau, 2004; Payne, 2001; Payne, Lambert, & Jacoby, 2002). Dual-process theories in judgment and decision-making. Dual-process theories in JDM arose in response to different questions than those that motivated social psychologists, and so it stands to reason that the accounts in the two fields differ somewhat in their details. JDM came to consider the question of whether there were two systems of thought through the simple observation that when making a variety of judgments and (especially) decisions, people often experience a conflict between a “gut feeling” and a more considered analysis (Denes-Raj & Epstein, 1994; Epstein, 1991; Hammond, 1996; Sloman, 1996; Tversky & Kahneman, 1983). JDM thus inherited the longstanding concern of philosophers about the nature of “intuition” and “reason.” To the extent that there is a consensus in JDM about the features of System 1 and System 2, it is a consensus that has evolved slowly. Several strands of research were particularly influential. One was the study of reasoning, which uncovered evidence of apparent conflict between two very different types of thinking on the part of participants
asked to tackle various deductive and inductive problems. (To our knowledge, the term “dual processes” first appeared in Wason & Evans, 1975.) In particular, “belief bias” in syllogistic reasoning (i.e., the tendency to accept syllogistic conclusions to the extent that they agree with prior beliefs) and “matching bias” in the Wason selection task (i.e., selecting cards that match those mentioned in the rule to be tested) came to be seen as results of an intuitive system yielding tentative responses based on relatively superficial features of the information presented—responses that are sometimes overridden by more reflective evaluations of an analytic system (see Evans, 2004, 2007, 2008, for reviews). Such a conclusion was strongly reinforced by evidence that performance on tasks that tap mainly analytic processes tends to correlate with overall cognitive ability, but performance on tasks that largely tap intuitive processes does not (Stanovich, 1999, 2008). This bolstered the idea that judgment is controlled by at least two types of cognitive processes with different constraints. Perhaps the most dramatic evidence of two mental systems that guide judgment and behavior is Epstein’s work on the “ratio bias” phenomenon (Denes-Raj & Epstein, 1994; Epstein, 1991). Epstein told participants that they could win a prize by blindly selecting a jellybean of a given color from one of two urns. One urn had 1 winning jellybean and 9 of another, losing color. The second urn had, say, 9 winning jellybeans and 91 of the losing color. The participants’ task was to select an urn from which to draw and then to try to pull out a winner. What Epstein found was that many participants chose to select from the larger urn that offered lower odds of winning because they couldn’t resist the thought that the larger urn had more winning beans. They did so despite the fact that the chances of winning with each of the urns were
explicitly provided for them. When the choice was between a 10% chance in the small urn and a 9% chance in the large urn, 61 percent of the participants chose the large urn. When it was a contest between 10% in the small urn and 5% in the large urn—odds only half as good in the latter—23 percent of the participants still chose the large urn! Epstein attributes this decidedly irrational result to an “experiential” system of reasoning that operates on concrete representations and hence finds the greater number of winning jellybeans in the large urn to be more promising. This experiential or intuitive impulse, however, usually conflicts with the rational realization that the actual odds are better in the small urn. Some participants explicitly stated that they knew they should pick the smaller urn, but they nonetheless were going with a gut feeling that they were more likely to pick a winner from the large one. This experience of being pulled in two different directions suggests that there are two things—two mental systems—doing the pulling. This was emphasized by Sloman (1996), who described a possible cognitive architecture consisting of two relatively independent systems to explain the diverse findings implicating dual processes in reasoning, choice, and judgment. Similarities, Differences, and Extensions. Although developed to address different questions, the two-systems or dual-process models in JDM closely resemble those in social psychology. Because the System 1 processes highlighted by JDM researchers are fast and relatively untaxing, they are consistent with the first wave of dual-process theories in social psychology that emphasized cognitive economy. But as we saw earlier, many of those models specified relatively effortless and effortful processes that operate in a largely one-or-the-other fashion (although some did allow for interaction and mutual influence; see Gilbert, 1999). In contrast, the two-systems accounts in JDM,
precisely because they were inspired by the conflict between rational and intuitive impulses, more pointedly emphasized the contemporaneous, side-by-side operation of the two systems. System 1 operates more quickly and so its output is often available sooner. But according to these accounts, one doesn’t “choose” to respond reflexively because of laziness. System 2 is typically engaged as well, unless the output of System 1 seems particularly compelling. Kahneman and Frederick (2002) highlighted these relations between System 1 and System 2 in their influential restatement of the heuristics and biases program of research. In their “attribute substitution” account, System 1 automatically computes an assessment with some connection to the task at hand—an emotional reaction, a sense of fluency, the similarity between examples or between an example and a category. Both the perceived relevance of the assessment and its immediacy often give rise to the sense that the task is done and that the assessment produced by System 1 is the answer being sought. For example, one cause of death is judged to be more common than another because it is easier to think of examples of the former (Slovic, Fischhoff, & Lichtenstein, 1982). One attribute (ease of retrieval) substitutes for another, desired attribute (likelihood). In many circumstances, however, and for a variety of different reasons, System 2 intervenes and deems the automatic assessment inadequate for the task at hand. A more deliberate, rule-based response is given. For example, one might consciously realize, especially if one has received training in statistics and threats to validity, that a given cause of death is highly available because it is frequently discussed in the media. Kahneman and Frederick’s attribute substitution model has captured a great deal of
attention because it offered a unified account of a diverse set of judgmental phenomena, such as the impact of heuristics, duration neglect, and the problems inherent in contingent valuation methods used to assess people’s willingness to pay (WTP) for such things as environmental remediation. The two-systems frontier. Although dual-process theories in social psychology had somewhat different origins and therefore differed in some details from those developed in JDM, the interaction between the two fields has been pronounced and their perspectives have converged over time. Current theorizing in both fields, furthermore, is being shaped by the same forces. One of these is the field of neuroscience. Brain imaging studies have, in fairly short order, uncovered evidence consistent with the broad outlines of a two-systems view of judgment and decision making. One notable study (Greene, Sommerville, Nystrom, Darley, & Cohen, 2001) found that people’s very different reactions to the “trolley” and “footbridge” dilemmas are matched by very different patterns of brain activation. In the trolley dilemma, participants are told that a trolley is heading down a track and will run over and kill an unseen group of five people in its path. The five deaths can be avoided, however, if the participant is willing to flip a switch that will divert the trolley to another track, leading to the death of a single individual. Most people endorse the idea of flipping the switch to reduce the carnage from five to one. And, if asked to think about this dilemma while in an fMRI machine, they reveal activation in regions associated with working memory—consistent with the idea that they are rationally deliberating about the proper response. In the footbridge dilemma, a trolley headed down a track is, again, about to kill five people, but the participant is asked to imagine that she or he is standing
next to a heavy-set gentleman on a footbridge spanning the track. The gentleman can be pushed off the footbridge, killing him but sending the train off the track and sparing the five individuals in its path. (It is explained that the participant is not heavy enough to derail the train—ruling out a self-sacrificial solution—and that the derailing of the train would not result in harm to anyone on board.) Most people say it’s not appropriate to push the person off the footbridge even though it involves the very 1-for-5 exchange that they endorse in the trolley dilemma. And thinking about this dilemma reveals enhanced activation of brain regions associated with emotional responding—consistent with the “gut feeling” that it would be wrong to sacrifice the gentleman, even if it would save five others. Building on findings such as this, Lieberman (2003; Lieberman, Gaunt, Gilbert, & Trope, 2002; Lieberman, Jarcho, & Satpute, 2004) presented neuroscientific evidence in support of an X system (for reflexive) and a C system (for reflective). The reflective system involves the prefrontal cortex, the anterior cingulate cortex, and the medial temporal lobe. These regions have been implicated in executive control and explicit learning, and thus are responsible for most of the heavy lifting of System 2. The reflexive system, in contrast, consists, in part, of the basal ganglia, the amygdala, and the lateral temporal cortex. These regions have been linked to associative learning, and hence the implicit processes of System 1. Another influence on current views of the two systems of thought is the recognition that System 1 is almost certainly not as unitary as System 2. System 2 may indeed be a single system involving deliberate, conscious, rule-based cognition. System 1, in contrast, may be more accurately described as a set of processes with some similar
properties. Stanovich (2004) refers to them as a “set of autonomous subsystems.” The processes assigned to System 1 include: (1) some sort of general, associative learning mechanism; (2) pragmatic inference processes that nonconsciously supply System 2 with information for conscious processing; (3) once-deliberate processes that have become automated as a result of frequent use; and (4) something like a set of modules devoted to perception and language-processing. A key question that will doubtless attract attention in the coming years is whether an increased understanding of these subsystems will undermine the whole two-systems framework or whether it will still be conceptually useful to link them together as the drivers of implicit thought. Another development that has attracted attention and might be considered something of a challenge to current conceptions of the two systems of thought is Dijksterhuis’s work on the capacity of what he calls nonconscious thought to make accurate judgments and beneficial decisions (Dijksterhuis, 2004; Dijksterhuis, Bos, Nordgren, & van Baaren, 2006). Dijksterhuis argues that refraining from consciously thinking about a decision and instead turning the task over to nonconscious thought will typically lead to higher quality decisions if the decision is complex (see Dijksterhuis, this volume). He bases this prediction on the contrast between the relatively unlimited capacity of the associative, parallel processes of nonconscious thought and the limited, serial processes of consciousness. He argues that if a great number of different cues need to be integrated to make the best decision, conscious thought will not be up to the task. Better decisions will be made if the decision maker is distracted and the decision is made largely nonconsciously. The experimental demonstrations of this “deliberation-without-attention” effect have been controversial (Acker, 2008; Evans, 2008). The point we wish
to make here is that if the effect were to prove reliable and robust (and if participants in his nonconscious thinking conditions really are not devoting any conscious thought to the task), it would constitute something of a challenge to current perspectives on the two systems of judgment. The results do not fit with the view that a set of quick assessments are made by System 1, which are then (sometimes) elaborated and modified by System 2. In this case, both systems do a lot of deliberating. Theoretical anomalies like that of Dijksterhuis and further neuroscientific experiments are especially important given how firmly dual-process theories have taken root in both social psychology and JDM. At least some reference is made to reflexive and deliberate thought in virtually all textbook and trade book treatments of human cognition and decision-making. And a number of scholars have confessed that they worry that the distinction may be too convenient, and that they themselves are now hard pressed to think about any higher-order cognitive output without reference to the two-systems dichotomy. It is precisely in situations like this that theoretical perspectives benefit from being tested and pushed to the limit.
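As a summary of the architecture described in this subsection, the fragment below schematizes the attribute-substitution account in code: System 1 volunteers a fast assessment, and System 2 either endorses it or, when engaged and dissatisfied, overrides it with a rule-based answer. This is our own schematic caricature, not a formal model from either literature, and every name in it is illustrative.

```python
# Schematic caricature only: the control flow of the attribute-substitution
# account sketched above. All names and values are illustrative assumptions.

def system1_assessment(task: dict) -> str:
    """Fast, automatic output: ease of retrieving examples stands in for
    the attribute actually requested (frequency of a cause of death)."""
    return task["ease_of_retrieval"]

def system2_monitor(task: dict, quick_answer: str) -> str:
    """Slower check that may notice the substituted attribute is a poor
    proxy (e.g., media coverage inflates availability) and override it."""
    if task["engaged"] and task["notices_media_bias"]:
        return task["statistical_estimate"]  # deliberate, rule-based answer
    return quick_answer                      # endorse System 1's output

task = {
    "ease_of_retrieval": "seems common",   # vivid deaths come easily to mind
    "statistical_estimate": "actually rare",
    "engaged": True,                       # System 2 is paying attention
    "notices_media_bias": True,            # e.g., training in statistics
}
print(system2_monitor(task, system1_assessment(task)))  # -> "actually rare"
```

Setting engaged or notices_media_bias to False reproduces the default case in which the System 1 output is simply endorsed, which is how the account explains the original availability findings.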
The primacy of representation: There’s no “there” in there
Both social psychology and JDM rely on insights about the malleable, constructed nature of the represented world to explain many of their central findings. This parallel development is not surprising given the influence of perceptual psychology, especially Gestalt psychology, on the fields’ founders, including Lewin, Asch, and Heider in social psychology and Brunswik, Hammond, and Kahneman in JDM. As described earlier, Gestalt psychology focuses on how the perceptual system constructs
meaningful patterns out of isolated cues. A few lines can suggest the face of an old witch; but looking again, the viewer sees a smiling girl. Thus, the act of construction can cause the same cues to give rise to very different experiences, experiences that seem equally real. A person’s perceptions, moreover, although constructed, feel immediate, raw, and impervious to deliberate reconstruction. Beyond this common foundation, there are two other reasons that fated social psychology and JDM to embrace construction and construal as primary mechanisms. First, each field faced a hegemonistic neighbor. For social psychology, the neighbor was behaviorism, which in the 1930s threatened to engulf all of American psychology in stimulus-response learning theory. According to the behaviorists, social behavior such as aggression, compliance, and prejudice could be explained, predicted, and controlled in terms of the objective contingencies, the punishments and rewards, that followed behavior. This explanatory rival pushed social psychology to be even more cognitivist in outlook and to emphasize the importance of the individual’s active search for meaning in social situations. In particular, it gave extra prominence to demonstrations that seemed to defy reinforcement or stimulus-response explanations—for example, that less money could lead to greater attitude change (Festinger & Carlsmith, 1959), that too much praise or unnecessary rewards could undermine intrinsic motivation (Lepper, Greene, & Nisbett, 1973), or that a person’s reaction to an event was a critical determinant of the degree of emotion it elicited (Schachter & Singer, 1962). For JDM, the threatening neighbor has always been economics, with its well-developed formal theories built on the twin foundations of hyper-rationality and self-interest, which also assigned a privileged role to (objective) financial incentives. Again,
this led to a concern with phenomena that were interestingly counter-economic—how more choice can lead to more misery rather than more pleasure (Iyengar, Jiang, & Huberman, 2004; Iyengar & Lepper, 2000; Schwartz, 2004), or how 60 seconds of ownership can lead to a doubling of the perceived value of a mug or pen (Kahneman, Knetsch, & Thaler, 1991). Both fields also confronted the empirical fact of widespread instability and unpredictability of behavior. Why was it so difficult for social psychologists to predict on the basis of measured attitudes how people would behave in the polling booth or when told to administer electric shock? Why did people change their consumer choices when an additional but less-preferred alternative was added to the choice set (Simonson, 1989)? Both fields needed to embrace such instability—not explain it away as measurement or response “errors”—and to account for changing and unstable behaviors and choices in a systematic way. Both fields emphasize the importance of the “three C’s” in understanding the malleability of behavior. That is, reactions to a given stimulus differ across time and situations because of: (1) the Construal of the object of judgment, (2) the Construction of the perceiver’s own attitudes, values, and preferences, and (3) the Context-dependent processes of evaluation. The choice between an apple and a piece of chocolate cake depends not only on the construal of the object (is the apple a symbol of self-deprivation or of the fresh delights of Mother Nature, is the cake represented as a tempting treat or as a member of an abstract category of desserts), but also on the guiding attitudes and values that come to mind (perhaps a prior donation to a charity has “licensed” indulgence as the guiding consideration) and the decision processes and
routines suggested by the surrounding context (perhaps adding a salad to the menu of choices would highlight the distinctive properties of the cake). Situational construal (Griffin & Ross, 1991; Ross & Nisbett, 1991) refers to the subjective representation of a stimulus or a person’s “definition of the situation” (Thomas & Znaniecki, 1918). The notion that an individual’s response to a situation can be predicted only from knowledge of the meaning assigned to it has long been a central tenet of social psychology (Fazio, 1990), a perspective powerfully shaped by Jerome Bruner’s (1957a, 1957b) account of how people “go beyond the information given” to determine the meaning of a social stimulus and by Solomon Asch’s (1940, 1952) discussion of “change of meaning” effects in persuasion and conformity. Following Kant (1965/1781), Bruner pointed out that all perceptual and social stimuli are inherently ambiguous (e.g., a second-hand account of a riot) and cannot be understood without the perceiver “filling in” details of context and content (e.g., the intentions and expressions of the rioters). He also proposed that in addition to chronic differences between people in their biases toward interpreting or categorizing behavior along certain dimensions—construal biases that might result from differing motivations or ideologies (Hastorf & Cantril, 1954)—there is also systematic variability within people in the way they interpret or categorize the same stimulus at different times. In particular, an individual may construe the same act, event, or object in different ways according to whatever category label is most cognitively accessible at the moment or what aspects of the situation happen to be most immediately salient or eye-catching. This provided a conceptual foundation for the remarkably rich body of research that has used priming manipulations to shape situational construal by altering the cognitive accessibility of different labels or schemas (Gilovich,
1981; Higgins & Bargh, 1987; Wyer & Srull, 1989), goals (Ferguson & Bargh, 2004), or identities (Hong, Morris, Chiu, & Benet-Martínez, 2000). The key mediating role of representation or construal has remained in the forefront of social psychology, and has been used to explain such biases as the actor-observer difference in attribution (Jones & Nisbett, 1972), the solo actor effect (Fiske & Taylor), the false consensus effect (Ross, Greene, & House, 1977), and the fundamental attribution error (Ross, 1977; Ross & Nisbett, 1991). Gilovich (1990) provided direct evidence that construal differences were associated with false consensus, the tendency to exaggerate the commonness of one’s own beliefs, preferences, and actions. In one study, he found that those who preferred 1960s music to 1980s music brought to mind different exemplars of each era than those who preferred 1980s music—and that the 1960s exemplars brought to mind by the ’60s-lovers were indeed rated more positively by independent judges. In a social psychological study with direct implications for JDM, Liberman, Samuels, and Ross (2004) found that merely renaming the Prisoner’s Dilemma game the “Community Game” led to twice as much cooperation as renaming it the “Wall Street Game.” Liberman and Trope’s (1998) temporal construal theory spans social psychology and JDM, but its explicit focus on the content of representation places it firmly in the social psychology tradition. According to the theory, and substantiated in many empirical tests, events or options in the near future are represented in terms of low-level concrete attributes embedded in a context; those in the distant future are represented in terms of high-level abstractions removed from any particular context (Liberman, Sagristano, & Trope, 2002; Liberman & Trope, 2008; Trope & Liberman, 2003; Wakslak,
Nussbaum, Liberman, & Trope, 2008). A request to write a chapter that is due in 6 months leads to a representation in terms of a rewarding scholarly accomplishment, whereas a chapter due next week is represented in terms of the painful specific actions needed to complete it within a well-specified context. More abstract, decontextualized (future) representations lead to more optimistic expectations because the low-level details that make them difficult are absent from the representation. Numerous demonstrations testify that confidence and optimism are higher for events in the far future, and also that such events are represented in a simpler, more summary fashion (Gilovich, Kerr, & Medvec, 1993; Nussbaum, Liberman, & Trope, 2006; Shepperd, Ouellette, & Fernandez, 1996; Taylor & Shepperd, 1998). Another central and well-documented aspect of temporal construal theory is that distant-future events are evaluated primarily in terms of their desirability, whereas near-future events are evaluated primarily in terms of their feasibility (Liberman & Trope, 1998). Note that the basic principles of temporal “distance” apply to social and physical distance as well (Trope & Liberman, 2003; Trope, Liberman, & Wakslak, 2007). These differences leave people open to manipulation because the definition of “near” is to some extent arbitrary and can be set by context and instructions (e.g., Broemer, Grabowski, Gebauer, Ermel, & Diehl, 2008). Representation and construal also play decisive roles in judgment biases and attempts to ameliorate them. Consider base rate neglect and the role of causal versus statistical representation, described earlier. When the base rate of accidents for one particular taxi company is described purely statistically (85% of cabs in the city are Blue), this information is ignored in favor of the testimony of a witness (Ajzen, 1977; Tversky & Kahneman, 1980). However, when the base rate is described in causal terms
that imply a propensity for some types of cabs to have accidents (85% of the accidents in the city involve Blue cabs), the information is readily used, presumably because the causal base rate or individual propensity is compatible with such System 1 assessments as associative strength and causal simulation. A similar explanation speaks to the apparent paradox between the overuse of stereotypes (which imply a propensity for the members of the category to act a particular way) and the underuse of statistical base rates. Many attempts at debiasing judgments focus on making set-based representations more readily available; in essence, using representations to engage System 2 reasoning rather than System 1 operations. For example, many authors (e.g., Cosmides & Tooby, 1996; Gigerenzer & Hoffrage, 1995; Kahneman & Tversky, 1983) have proposed that problems presented in a frequency format (e.g., how many accidents out of 100 will involve a Blue cab) should result in more statistical reasoning than problems presented in a probability format (e.g., what is the probability that an accident will involve a Blue cab). However, different authors call on very different theories to account for the facilitation of statistical reasoning by frequency formats. Kahneman and Tversky (1983) proposed that frequency formats trigger a representation that makes the set inclusion relations more transparent and hence increase the apparent relevance of set-based reasoning that underlies statistical logic. Others explain the superior performance of frequency formats by appealing to evolved special-purpose reasoning modules that are compatible with natural frequencies but not probabilities (e.g., Cosmides & Tooby, 1996; Gigerenzer & Hoffrage, 1995). Overall, the evidence for the efficacy of frequency formats alone in improving Bayesian performance is mixed (Griffin & Buehler, 1999; Kahneman & Tversky, 1983; Barbey & Sloman, 2007) and most consistent with the view
that frequency presentations and other format changes are effective when they engage a problem representation that highlights and makes concrete the set inclusion relations. The most effective way to do this appears to be to draw out the probability logic in graphic formats such as decision trees or Venn diagrams, rather than to rely on a frequency format per se (see Barbey & Sloman, 2007, for a review). Thinking about aggregate frequencies rather than individual outcomes may not lead to Bayesian reasoning, but it does change many important aspects of the problem representation. As charity fund-raisers well know, individual bears, tigers, or babies evoke a more emotionally charged representation than groups or aggregates—the “identifiable victim effect” (Jenni & Loewenstein, 1997; Small & Loewenstein, 2003; Small, Loewenstein, & Slovic, 2007). For example, Redelmeier and Tversky (1990) found that practicing physicians gave different treatment recommendations when they were presented with a problem faced by a specific young woman than when told about the same problem faced by a set of young women. Those presented with the individual case were almost twice as likely to recommend an additional blood test and were significantly more likely to suggest a follow-up appointment than those who were presented with the aggregate case. The prominent role of normative theories in JDM in defining what is interesting and worth investigating has led JDM researchers to focus more on value or preference construction than on stimulus construal. Subjectivity enters the SEU model in at least two ways. First, the objective stimulus (whether money, kisses, or electric shocks) is translated into subjective utility through a psychophysical function that is characterized by decreasing marginal utility—that is, 2 dollars or 2 kisses are generally less than twice
as rewarding as one (Bentham, 1789; Bernoulli, 1738). A second psychophysical element arises in decisions across time, such that future events, whether rewards or punishments, are discounted and worth less as they extend farther into the future (Frederick, Loewenstein, & O’Donoghue, 2003; Read, 2004). However, once these psychophysical translations have occurred, rational models assume an underlying true and enduring preference relation among alternatives. An individual’s choice among alternatives is assumed to be consistent, coherent, and determined only by relevant, available alternatives. If one is willing to walk farther to eat sushi than to eat pizza, then one should also pay more to eat sushi than pizza (invariance across measurement, or consistency). If one is willing to pay more for sushi than pizza, and more for pizza than falafel, then one should be willing to pay more for sushi than falafel (transitivity of choice, or coherence). One’s preference between pizza and falafel should not depend on whether sushi is also listed on the menu (independence of irrelevant alternatives). Violations of these principles not only have special significance in theoretical terms, but also cast doubt on the most fundamental tool of applied economics, the idea that individuals have a unique “reference price” or “stable preference” for each outcome (Knetsch & Tang, 2004; Loomes, 1999). “The idea of constructive preferences goes beyond a mere denial that observed preferences result from reference to a master list in memory….It appears that decision makers have a repertoire of methods for identifying their preferences and developing their beliefs” (Payne, Bettman, & Johnson, 1992, p. 89). In a pioneering demonstration, Tversky (1969) showed that people would make intransitive choices in even the simplest problems. Recall that transitivity is a key normative axiom and one that virtually
all people readily endorse in the abstract. Moreover, intransitivity is not merely a theoretical nicety; it cannot be rationalized because it can lead a consumer to become a “money pump”: because pizza is preferred to falafel, the consumer will pay a premium to give up falafel and receive pizza; then, because sushi is preferred to pizza, the consumer will pay a premium to give up pizza and receive sushi; however, an intransitive chooser could prefer falafel to sushi, and pay a premium to get the falafel—and so the cycle of preference would start again with the consumer paying a premium to receive pizza rather than the falafel currently held…and so on. Tversky’s demonstration required participants to choose which of a pair of university applicants should be admitted. He carefully set up the attributes of the various applicants so that adjacent pairs differed only slightly in intelligence (the most important dimension), but more substantially on other dimensions. Thus, Bob would be chosen over Ace even though Ace was somewhat more intelligent, because this slight difference was outweighed by Bob being notably more sociable and balanced than Ace. Cal would in turn be chosen over Bob, Deb would be chosen over Cal, and Ed over Deb for the same reason. However, even though Ed stood at the top of the pairwise choice chain, he would be rejected when paired with Ace, at the bottom of the chain. Ace would be chosen over Ed because the difference in the most important attribute, intelligence, was too big to ignore, and overwhelmed even the sizable differences in the less important attributes of balance and sociability. According to this account, people were using the stimulus array—the choice context—to decide what differences were worth attending to, and hence showed predictable incoherence in their pattern of choices. Tversky suggested that many people would show the same preference incoherence when choosing optional
equipment for a car: They would usually choose to add a single additional option because it added a negligible difference in price; but they would also tend to choose the basic model over the fully “loaded” car. The notion of a stable, underlying reference price or a true valuation was further undermined by the observation of systematic preference reversals between choice and valuation (Lichtenstein & Slovic, 1971, 1973; Tversky, Slovic, & Kahneman, 1990). People—including experienced gamblers in Las Vegas—regularly choose high-probability, low-value bets (P bets) over low-probability, high-value bets, but place a higher cash equivalent (buying price) on the low-probability, high-value bets ($ bets). Thus, a typical participant will prefer a “P” bet offering a 35/36 chance of winning $4 (and a 1/36 chance to lose $1) to an “$” bet offering an 11/36 chance of winning $16 (and a 25/36 chance of losing $1.50), but will offer a higher price to purchase the “$” bet than the “P” bet. (The two bets have nearly identical expected values, as the worked calculation at the end of this discussion shows, so the reversal cannot be explained by expected value alone.) So which bets do they “really” prefer? The answer, it seems, is “it depends on how they are asked.” This result was so shocking in its implications for the lability of preferences that a pair of leading experimental economists (Grether & Plott, 1979) conducted a series of follow-up studies, offering as one hypothesis that the nonrationality of the subjects was due to the original studies being conducted by psychologists! (Note that even when conducted by economists, that is, by Grether and Plott themselves, the preference reversals proved robust to economic manipulations such as the magnitude of incentives.) Slovic, Griffin, and Tversky (1990) explained preference reversals and other related examples of violations of procedural invariance in terms of the scale compatibility hypothesis—that is, that attributes that are more compatible with the output task (e.g., choice or pricing), in the sense of being easy to translate onto the output
Slovic, Griffin, and Tversky (1990) explained preference reversals and other related violations of procedural invariance in terms of the scale compatibility hypothesis—that is, attributes that are more compatible with the output task (e.g., choice or pricing), in the sense of being easy to translate onto the output scale, receive more weight than less compatible attributes (see also Fischer & Hawkins, 1993). In the example above, the dollar value of the bet is more compatible with pricing and so is weighted more heavily in pricing than in choice, leading to greater preference—as implied by higher prices—for the $ bet. Tversky, Sattath, and Slovic (1988) proposed the related prominence effect, whereby choice (a qualitative procedure) is most influenced by a qualitative comparison of which option is higher on the most important dimension, and is less affected than quantitative procedures—such as pricing, matching, or rating—by the options’ actual values on the full set of attributes. A family of related phenomena testifies to the fact that people construct their preferences at least partly from the stimulus array itself—that is, the context of choice. Consider a consumer who is indifferent between a moderately priced, average-quality pair of speakers and a high-priced, high-quality set. When a low-priced, low-quality pair of speakers is added to the comparison set, the consumer becomes considerably more favorable toward the moderately priced set. Why? Because the moderately priced set is now a compromise on both dimensions, and that provides a defensible and convincing argument to buy it. This compromise effect (Simonson, 1989) illustrates the importance of context in resolving conflict: The context provided by the other choice options can be said to “construct” an observed preference (Shafir, Simonson, & Tversky, 1993). A similar analysis can be made of the attraction effect (Simonson, 1989), originally known as the asymmetric dominance effect (Huber, Payne, & Puto, 1982). Here, a difficult choice between two or more options becomes easier when a dominated option is added to the choice set. Choosing between a laptop computer that is fast but heavy and one that is slower but light becomes easier when a moderately slow but extremely heavy laptop is added to the choice set. That is, adding an option that is inferior to one (and only one) of the original choices on all dimensions provides a knockout argument in favor of that choice—the fast, heavy laptop dominates the moderately slow, extremely heavy laptop—and thus a clear reason to choose one of the original laptops over the other. Even though the dominated model is irrelevant and should be disregarded in the choice process, its presence adds to the attraction of one of the options by providing a strong reason to choose it. Again, an element of the choice context serves to guide and even control preferences.
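The “knockout argument” the decoy supplies is simply a dominance check, which makes the logic easy to state precisely. A minimal sketch, with hypothetical laptop attributes of our own invention:

# Hypothetical laptops: higher speed is better, lower weight is better.
laptops = {
    "fast_heavy": {"speed": 9, "weight": 3.2},
    "slow_light": {"speed": 5, "weight": 1.1},
    "decoy":      {"speed": 8, "weight": 4.5},  # slower AND heavier than fast_heavy
}

def dominates(a, b):
    """True if a is at least as good as b on every dimension, better on one."""
    at_least = (laptops[a]["speed"] >= laptops[b]["speed"]
                and laptops[a]["weight"] <= laptops[b]["weight"])
    better = (laptops[a]["speed"] > laptops[b]["speed"]
              or laptops[a]["weight"] < laptops[b]["weight"])
    return at_least and better

for option in ("fast_heavy", "slow_light"):
    if dominates(option, "decoy"):
        print(f"{option} dominates the decoy")  # only fast_heavy does

Because the decoy is asymmetrically dominated—beaten on every dimension by one original option but not by the other—it singles out the fast, heavy laptop as the one with a ready justification.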
The notion of constructed preferences, or constructive choice processes, is often explained with reference to an old joke about three umpires discussing their philosophy of calling balls and strikes. Umpire 1: “I call them as I see them.” Umpire 2: “I call them as they are.” Umpire 3: “They ain’t nothing till I call them.” These three epistemological perspectives could be called “social construal,” “naïve realism,” and “social construction” (Griffin & Ross, 1991). Naïve realism refers to the tendency of social observers to treat their social (and physical) perceptions as veridical copies of the outside world. Both social psychology and JDM recognize that many phenomena result not simply because subjective representation is important, but because the act of representation is transparent to the actor—that is, social observers look through their lenses rather than at them (Bem, 1993; Goffman, 1974)—and hence social perception is experienced as “calling them as they are.”
The “Givenness” of Experience and the Transparency and Persistence of Representations
Both social psychologists and JDM researchers acknowledge that often it is not enough to explain certain phenomena by noting that the objective world is represented subjectively and that people act on their subjective representations. It is also necessary to understand that subjective representations are experienced as objective copies of the world. Because the represented or perceived world “in here” is experienced phenomenologically as veridically mirroring what is “out there,” human judgment is susceptible to a host of egocentric biases that promote misunderstanding and fuel conflict. For example, in a classic demonstration of the hindsight bias, Fischhoff and Beyth (1975) found that when individuals were provided with the outcome of a historical battle, their knowledge contaminated their judgments of the inevitability of that outcome, presumably because their knowledge altered their construal of the circumstances at the beginning of the conflict (Hawkins & Hastie, 1990; Sanna & Schwarz, 2006). This effect of outcome knowledge makes the world seem more predictable than it is, and hence makes individuals who suffer misfortune seem more culpable than they are (Kamin & Rachlinski, 1995). In their analysis of the curse of knowledge, Camerer, Loewenstein, and Weber (1989) generalized this finding to stock traders who have inside knowledge of a company’s fate. This inside knowledge can be a curse rather than a blessing because the insiders act as if everyone else perceives the world the same way they do—and they actually lose money because their knowledge contaminates their ability to forecast others’ behavior. The implication of the curse of knowledge for overconfidence and human misunderstanding was illustrated by Newton (1992) in her tapping study
(described in Griffin & Ross, 1991). When Stanford undergraduates were given a list of songs to communicate to listener subjects by tapping on a table, the “tappers” were relatively confident—providing estimates of at least a 50/50 chance—that they could communicate such tunes as “The Star-Spangled Banner.” But the listeners caught on less than 3% of the time. This pronounced mismatch between expectation and reality resulted from the tappers’ inability to undo the rich representation of the song they had in their heads while they tapped out their impoverished rhythms. Lee Ross and colleagues (e.g., Pronin, Gilovich, & Ross, 2004; Ross & Nisbett, 1991; Ross & Ward, 1996) have linked naïve realism to the maintenance of ideological enmity and the breakdown of negotiations. If people believe that their perceptions are deeply rooted in reality, it stands to reason that they would expect any reasonable person to see things the way they do. In Ichheiser’s classic words, “We thus imply, of course, that things are in fact as we see them, and that our ways are the normal ways” (1940, p. 39). A summary of the diverse applications of naïve realism to bias, misperception, and misunderstanding is presented in Figure 2 (from Pronin, Gilovich, & Ross, 2004). A second implication of naïve realism is that people hold but a single representation, without any awareness of—or adjustment for—alternative representations or for what is not represented (Griffin, Dunning, & Ross, 1990). Fischhoff, Slovic, and Lichtenstein (1978) described this as “what is out of sight is also out of mind” (p. 333). This simple principle underlies at least three major research areas in JDM: framing effects on choice, description effects on probability judgment, and “inside” biases in prediction. In each case, the richness of perceived experience crowds out any awareness
of other perspectives or of the constructed nature of one’s representation. Like a reversible optical illusion, only one perspective can be seen at a time. The variety of “framing effects” documented by JDM researchers involves instances in which choices are influenced by different descriptions of the same objective information, as illustrated by the “Asian disease problem” described earlier. In each case—whether it is gain-loss framing, temporal framing, narrow framing, attribute framing, or goal framing—the respondent is manipulated by the given frame because people do not typically transform the information given to them into some canonical or “neutral” representation (Benartzi & Thaler, 1995; Read, Loewenstein, & Rabin, 1999; Tversky & Kahneman, 1981). Framing is a key mechanism in Kahneman and Tversky’s (1979) Prospect Theory. Prospect Theory follows the basic logic of classical normative subjective expected utility (SEU) theory—that is, choices are made to optimize the combination of the subjective probability of gaining an outcome and its subjective value (or utility)—with four important alterations. First, an editing phase is introduced to capture the simplifying heuristics that individuals use to reduce the complexity of many of the choices they confront. Second, a reference point is introduced to capture the fact that new options are evaluated in terms of the gain or loss relative to some expectation, comparison, or aspiration level. This reflects the perceptual principle that sensation is particularly sensitive to changes or differences (and adapts to steady states), and that declining marginal utility applies to both gains and losses. Third, losses relative to the reference point have disproportionate hedonic impact compared to gains of the same magnitude (loss aversion), again reflecting a basic principle of sensation and perception—pain
dominates pleasure (Baumeister, Bratslavsky, Finkenauer, & Vohs, 2001; Rozin & Royzman, 2001). Fourth, probabilities are weighted nonlinearly, such that the changes from impossibility to possibility and from high probability to certainty are noted and weighted much more than changes in the intermediate ranges of the probability scale. Framing—and the role of the reference point in triggering risk seeking or risk aversion—can be seen in the following simple example (Tversky & Kahneman, 1986):

(A) Assume you are richer by $300 than you are today and you have to choose between:
a sure gain of $100 [72%]
OR
a 50% chance to gain $200 and a 50% chance to gain nothing [28%]

(B) Assume you are richer by $500 than you are today and you have to choose between:
a sure loss of $100 [36%]
OR
a 50% chance to lose nothing and a 50% chance to lose $200 [64%]
Note first that both pairs of choices are objectively identical (providing final outcomes of $400 for sure versus a 50% chance of $500 and a 50% chance of $300), but are framed to evoke thoughts of gaining or losing money relative to an imagined reference point. The critical point is that people accept the frame as given, and don’t bother to (or have no way to) create a common overall representation. After accepting the frame, most people choosing between relatively balanced gains prefer the sure thing over a risky chance at a bigger prize—this risk aversion in gains is due to the decline in marginal utility between $100 and $200 (that is, the perceived difference between $0 and $100 is greater than that between $100 and $200). A gain of $100 is experienced as more than half as valuable as a gain of $200, so why gamble to get $200? However, because
declining marginal utility also operates on losses, people choosing between relatively balanced losses prefer a risky chance to lose nothing or lose $200 over a sure loss of $100—and thus are risk seeking in losses. A loss of $100 is more than half as painful as a loss of $200, so why not gamble to (possibly) avoid a loss altogether? The impact of gain-loss framing around a reference point has also been found for many kinds of outcomes beyond money. For example, medical recommendations made by experienced physicians are influenced by whether the outcomes are framed in terms of survival rates versus mortality rates (McNeil, Pauker, Sox, & Tversky, 1982), and negotiations fail more often when outcomes are framed in terms of losses (Bazerman, 1983). Loss aversion, the psychophysical principle that losses have greater hedonic impact than comparable gains, interacts with framing to make some (objectively identical) frames more acceptable than others. For example, credit card companies require that any consumer charges for using credit cards be described in terms of regular prices (with the credit card charges built in) and cash discounts, rather than regular prices (without the credit card charges) and credit card surcharges. Using the card thus becomes a foregone gain rather than the source of a loss. Similarly, the tax break associated with having children can either be framed as a deduction associated with each of the first two children (that is, a reference point of zero children and gains associated with having children) or as an additional tax on those with fewer than two children (that is, a reference point of two children and losses associated with not having children) (Schelling, 1981). Clearly, the loss frame is painful and hence politically unacceptable.
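These risk attitudes follow directly from the shape of the value function. The sketch below is our own illustration: it uses the standard parametric form with the median parameter estimates from Tversky and Kahneman’s (1992) later work (alpha = 0.88, lambda = 2.25), and substitutes raw probabilities for Prospect Theory’s decision weights for simplicity. Neither simplification is part of the example above.

ALPHA, LAMBDA = 0.88, 2.25  # Tversky & Kahneman's (1992) median estimates

def v(x):
    """Subjective value of a change of x dollars from the reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

# Problem A (gain frame): sure $100 vs. a 50% chance of $200.
print(v(100), 0.5 * v(200))    # about 57.5 > 53.0: sure gain wins (risk aversion)

# Problem B (loss frame): sure -$100 vs. a 50% chance of -$200.
print(v(-100), 0.5 * v(-200))  # about -129.5 < -119.1: gamble wins (risk seeking)

Note that the reversal of risk attitude across frames is produced by the curvature of the value function alone; loss aversion (lambda) scales both loss options equally here, but it is what makes the loss frame so much more painful than the equivalent gain frame.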
As discussed earlier, the endowment effect refers to a manifestation of loss aversion whereby ownership—even randomly allocated ownership—immediately increases the selling price of an item, in part because exchanging the item for money means exchanging a loss for a gain (Kahneman, Knetsch, & Thaler, 1990; Thaler, 1980). Studies with mugs, pens, and many other items of exchange show that the selling prices specified by owners are generally about twice the buying prices of those who were not endowed with the good. Interestingly, as predicted by Prospect Theory, items that are held specifically for trading purposes do not show the endowment effect (List, 2003). Temporal framing (Loewenstein, 1988) also rests on loss aversion and framing, but in the context of time. Imagine that a desired gift, say a new television, is to be shipped to you in two weeks (or four weeks). How much money would have to be taken off the price to delay the shipping by two weeks (or added to the price to expedite it by two weeks)? From the reference point of two weeks, an additional two weeks’ delay is a loss, whereas from the reference point of four weeks, two weeks’ expediting is a gain, and so the discount one demands for a delay is much greater than the fee one is willing to pay for expediting. Narrow framing refers to the tendency to treat choices one at a time and resist aggregating them (Benartzi & Thaler, 1995; Kahneman & Lovallo, 1993). This can lead to myopic loss aversion, as illustrated in the well-known Samuelson paradox: Even economists will turn down a bet offering a 50-50 chance to win $200 or lose $100, but are happy to play a set of 10 such bets. Each bet individually is unattractive (when framed narrowly), but the set is attractive (because of its broader frame). People, whether gamblers, managers, or stock market investors, often respond to each risky choice in its own separate frame, rather than combining choices across time or portfolios (Thaler, 1999).
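The arithmetic behind the broad frame is easy to verify by enumeration. In the sketch below (our illustration of the aggregation argument, not Samuelson’s own analysis), a single bet loses money half the time, but the ten-bet bundle loses money only about 17% of the time while offering an expected gain of $500:

from itertools import product

# One bet: a 50-50 chance to win $200 or lose $100.
# Enumerate all 2**10 outcome sequences of ten independent bets.
totals = [sum(seq) for seq in product([200, -100], repeat=10)]

loss_prob = sum(t < 0 for t in totals) / len(totals)
print(f"chance the ten-bet total is a loss: {loss_prob:.1%}")  # about 17.2%
print(f"expected ten-bet total: ${sum(totals) / len(totals):.0f}")  # $500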
Because individual investors assess their stock holdings very frequently—and are averse to losses—they find holding stocks relatively painful. This leads to the “equity premium”—the discrepancy in payoffs between holding (safe) bonds and (risky) stocks. If individual investors were to broaden their framing and evaluate either a longer time period or a broader portfolio of stocks, the chance of experiencing a painful loss would be lowered and the equity premium would be smaller. Narrow framing also explains the behavior of New York cab drivers who quit earlier on their most profitable days (Camerer, Babcock, Loewenstein, & Thaler, 1997). The cab drivers lease their cabs for a daily fixed fee and set a daily income target. This leads them to work longer hours on slow-traffic days and shorter hours on high-traffic days. If, instead, they framed their income targets more broadly (per week or per month), they could allocate their time more efficiently by driving longer when business was most profitable (and less when it was unrewarding)—and provide better service to customers as well. The family of framing effects also includes phenomena that are not directly linked to losses or gains. In attribute framing, the same quantitative information is expressed using either the positive or the negative end of the scale as a reference point. For example, consumers evaluate beef described as 75% lean more positively than beef described as 25% fat (Levin & Gaeth, 1988), and students evaluate a condom described as having a 90% success rate more positively than one described as having a 10% failure rate (Linville, Fischer, & Fischhoff, 1993). Once again, the striking aspect of these results is that people fail to see through the frame they are given. Attribute framing is generally attributed to the salience of the positive or negative information highlighted by the valence of the frame. In goal framing (also known as message framing), the outcome of some preventive effort is described either in terms of the positive effects of engaging in the action or the negative
effects of failing to engage in the action (e.g., Detweiler, Bedell, Salovey, Pronin, & Rothman, 1999). Messages that emphasize the latter typically have greater impact. (Note, however, that many studies in this tradition do not provide identical information in the two “framing” conditions, and thus would not fit our technical definition of framing.) Some framing effects are self-inflicted, but are nonetheless effective. For example, people allocate money to certain “mental accounts” and treat their funds in different accounts very differently (Epley, Mak, & Idson, 2006; Thaler, 1980; Tversky & Kahneman, 1981). Money won in a casino is treated very differently than money inherited from a Puritan aunt. A lost $100 ticket to the opera prevents the purchase of another ticket, but a loss of $100 in cash on the way to the opera does not. Mental accounting violates the economic principle of the fungibility of wealth (that is, that all of one’s monetary assets should be interchangeable, regardless of how they were acquired). But as Thaler (1999) notes, mental accounts are just as binding for economists as for anyone else. Probability judgments are also remarkably frame dependent. In one classic example, Fischhoff, Slovic, and Lichtenstein (1978) asked auto mechanics to judge the probability that a particular engine malady was caused by various mechanical failures. A number of possible causes were specified, along with a residual category labeled “all other causes.” Remarkably, the probability allocated to the residual category did not increase when important causes were removed or “pruned” from the list. That is, the mechanics seemed to consider only the specific causes they were provided and divided the allotted 100% among them.
Support Theory (Tversky & Koehler, 1994; Rottenstreich & Tversky, 1997) was developed to explain the role of representation in intuitive judgments. According to Support Theory, subjective probability is attached not to events but to descriptions of events, a tenet that acknowledges that representations are crucially important in driving perceptions of likelihood. For example, the probability of an earthquake killing 1,000 people this year may seem greater than the probability of a natural disaster killing 1,000 people this year. And the probability of a homicide in Detroit on a given day may be judged higher than the probability of a homicide that day in Michigan as a whole. In both cases, the more specific event is a subset of the more inclusive event and so cannot be more likely—but the more specific and concrete representation provides a better search cue for bringing to mind evidence and examples, hence increasing the support for that hypothesis. Support Theory provides a formal symbolic account of the operation of judgment heuristics by proposing support as an intervening psychological construct between represented hypotheses and expressed probability. Subjective probability is constructed as a ratio of the support for the various competing hypotheses, and support itself is constructed from the balance of evidence for the hypotheses in question (Brenner, Koehler, & Rottenstreich, 2002). A key explanatory principle in Support Theory, akin to framing in Prospect Theory, is unpacking, the process of breaking a superordinate or inclusive description into the sum of its parts. For example, respondents judged the probability of death from natural causes to be 58%, but also judged the probability of death by heart disease to be 22%, by cancer 18%, and by all other natural causes 33%. Thus, by unpacking the category of death by natural causes into three subsets, the total probability assigned to the category rose from 58% to 73% (22 + 18 + 33). The unpacking effect works with counts of frequency as well as with probability, so it is not simply a case of an unfamiliar scale driving a contrived bias.
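Support Theory’s central equation is a simple ratio: the judged probability of hypothesis A rather than B is s(A) / (s(A) + s(B)), where s is the support evoked by each description. A minimal sketch of the unpacking arithmetic, with support values chosen by us, for illustration, so that the packed and unpacked judgments reproduce the percentages just reported:

def judged_probability(support_focal, support_alternative):
    """Support Theory's ratio rule for the probability of the focal hypothesis."""
    return support_focal / (support_focal + support_alternative)

# Packed: "death by natural causes" as a single description.
print(judged_probability(58, 42))            # 0.58

# Unpacked: heart disease + cancer + other natural causes each recruit
# their own evidence, so their summed support exceeds the support the
# packed description received.
print(judged_probability(22 + 18 + 33, 27))  # 0.73

The point of the formalism is that unpacking raises judged probability by raising support in the numerator, not by changing the event itself.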
Forecasting and Planning. Many of our most important judgments are forecasts and predictions. What will life be like after we are married? Will I be happier if I move to Sydney? How much time will a Handbook chapter take? Clearly, predictions about the future involve irreducible uncertainty of many kinds, including uncertainty about what will happen and about how we will respond when it does. However, the tendency for representations to be concrete, singular, and experienced as copies of the outside world also characterizes intuitive prediction and forecasting. Kahneman and Tversky (1979, 1982; see also Kahneman & Lovallo, 1993) contrasted the inside view of prediction with an outside view. The inside, singular, or case-based view focuses on the unique details of the problem at hand and involves a scenario representing the most (subjectively) important and available details. The outside, distributional, or class-based view focuses on the set of comparable instances and gives little weight to what is unique about the current problem. The planning fallacy illustrates the intuitive appeal of the inside approach and the biases that can result. The planning fallacy is the juxtaposition of a general belief that a class of tasks generally takes longer to complete than expected with a specific belief that the particular task at hand will be completed in a shorter time than usual. The cardinal example is the case of a group of academics predicting that their joint textbook would be completed within a couple of years, even though every single one of them had firsthand knowledge that similar projects had taken quite a bit longer, and that many were never
finished at all (Kahneman & Tversky, 1979). Because predictions are made from an inside perspective, the textbook writers came up with detailed plans for completion without considering (and adjusting for) the many unspecified and unspecifiable things that might go wrong. A consideration of the relevant distribution of textbook projects as a whole, however, might have—and should have—triggered the necessary adjustment. There is considerable empirical support for the idea that the tendency to adopt an inside perspective underlies the planning fallacy (Buehler, Griffin, & Ross, 2002), including: (1) thought protocols showing the preponderance of scenario planning and the scarcity of either distributional thinking or representations of uncertainty; (2) evidence that increasing the focus on details increases the magnitude of the optimistic bias; (3) evidence that making the distribution of past outcomes more obviously relevant to the present case reduces the optimistic bias; and (4) evidence that predicting from a third-person perspective reduces the exclusive focus on case-based planning.
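The corrective the outside view recommends can be stated almost mechanically: set aside the current plan and consult the distribution of outcomes for comparable past projects. A minimal sketch, with invented completion-time data of our own:

import statistics

# Hypothetical reference class: months that ten comparable past projects took.
past_durations = [14, 18, 22, 24, 26, 30, 33, 40, 48, 60]

inside_estimate = 12  # the scenario-based plan for the current project

print(f"inside view:  {inside_estimate} months")
print(f"outside view: median {statistics.median(past_durations):.0f} months, "
      f"mean {statistics.mean(past_durations):.0f} months")
# Every project in the reference class exceeded the inside estimate,
# the signature pattern of the planning fallacy.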
The inside approach to prediction also leads to systematic biases in forecasting one’s own future reactions. Studies of affective forecasting (Buehler & McFarland, 2001; Gilbert, Pinel, Wilson, Blumberg, & Wheatley, 2002; Gilbert & Wilson, 2000; Wilson & Gilbert, 2003) have documented a general bias to overpredict the magnitude and duration of emotional responses to a given future event, whether it be a family holiday, a major professional setback, a loss by one’s favorite sports team, or a win by a hated presidential candidate. The existence of this robust bias is particularly noteworthy given that we make such forecasts and experience the resulting outcomes day in and day out. Two explanations have been offered. One is focalism: People’s attempts to simulate their future emotional experience focus entirely on the target event itself and neglect the many situational details and distractions that will reduce the impact of the event when it actually unfolds (Wilson, Wheatley, Meyers, Gilbert, & Axsom, 2000; also called the focusing bias by Schkade & Kahneman, 1998). Focalism also gives rise to the overweighting of intentions in predicting one’s future actions. For example, stated intentions to give blood map almost perfectly onto behavioral predictions, but the observed relation between intentions and actual donation is much more moderate because of the intervening effects of other situational variables, such as competing priorities (Koehler & Poon, 2005; see also Kruger & Gilovich, 2004). A second reason people overpredict their emotional reactions to future events is their failure to anticipate the operation of adaptation and rationalization, or what Gilbert, Wilson, and colleagues call the psychological immune system (e.g., Gilbert, Blumberg, Pinel, Wilson, & Wheatley, 1998). As with the planning fallacy, people fail to learn sufficiently from the past, and thus do not adjust their construals of the future to account for the fact that the effects of even very good and very bad events are regularly and predictably less enduring than their mental simulations suggest. In research on both the planning fallacy and affective prediction, we see such complete overlap between social psychology and JDM that it raises the question of whether the two overlapping fields, and their mutual influence on current work, can be meaningfully distinguished at all.
CONCLUSION
We began this chapter with the question of why JDM can be considered a part of—or at least necessary for—social psychology, and why there should be a chapter such
as this one in the Handbook of Social Psychology. We confined our answer to historical and formal causes, focusing on the history of how the two fields have influenced one another and on the inherent overlap in their intellectual content and guiding concerns. We would now like to end by noting a teleological reason that the two fields are substantially conjoined: Aligning them serves a purpose. Drawing on the ideas that originated in the two fields, or those that originated in one and were further developed in the other, allows us, as we have seen, to better explain a wide range of diverse phenomena. Because of their distinct histories and theoretical tools, the two fields are complementary and able to reach different audiences. But note that the ideas advanced and investigated in the twin fields also hold considerable promise for helping people solve some of the most pressing problems human beings face, and hence for improving human lives. If we are to dampen global conflict, head off global warming, or remediate environmental degradation, we will have to thoroughly understand and wisely use channel factors, cleverly frame the issues and potential outcomes, manage emotional hot buttons, and mix appeals to reason with appeals to emotion and intuition. In short, solving the most important problems of today and tomorrow requires the combined wisdom of the twin fields of social psychology and JDM.
References

Aarts, H., & Dijksterhuis, A. (1999). How often did I do it? Experienced ease of retrieval and frequency estimates of past behavior. Acta Psychologica, 103, 77-89.
Acker, F. (2008). New findings on unconscious versus conscious thought in decision making: Additional empirical data and meta-analysis. Judgment and Decision Making, 3, 292-303.
Ainslie, G. (1975). Specious reward: A behavioral theory of impulsiveness and impulse control. Psychological Bulletin, 82, 463-469.
Ajzen, I. (1977). Intuitive theories of events and the effects of base-rate information on prediction. Journal of Personality and Social Psychology, 35, 303-314.
Ajzen, I., & Fishbein, M. (1975). A Bayesian analysis of the attribution process. Psychological Bulletin, 82, 267-277.
Allport, G. (1954). The nature of prejudice. Cambridge: Addison-Wesley.
Alter, A.L., & Oppenheimer, D.M. (2008). Effects of fluency on psychological distance and mental construal (or why New York is a large city, but New York is a civilized jungle). Psychological Science, 19, 161-167.
Alter, A.L., Oppenheimer, D.M., Epley, N., & Eyre, R.N. (2007). Overcoming intuition: Metacognitive difficulty activates analytic reasoning. Journal of Experimental Psychology: General, 136, 569-576.
Ames, D. R. (2004). Inside the mind-reader's toolkit: Projection and stereotyping in mental state inference. Journal of Personality and Social Psychology, 87, 340-353.
Anderson, C.A. (1995). Implicit personality theories and empirical data: Biased assimilation, belief perseverance and change, and covariation detection sensitivity. Social Cognition, 13, 25-48.
Anderson, C.A., & Lindsay, J.J. (1998). The development, perseverance, and change of naïve theories. Social Cognition, 16, 8-30.
Anderson, C.A., Ross, L., & Lepper, M.R. (1980). Perseverance of social theories: The role of explanation in the persistence of discredited information. Journal of Personality and Social Psychology, 39, 1037-1049.
Ariely, D., & Loewenstein, G. (2005). The heat of the moment: The effect of sexual arousal on sexual decision making. Journal of Behavioral Decision Making, 18, 1-12.
Asch, S. E. (1940). Studies in the principles of judgments and attitudes: II. Determination of judgments by group and by ego standards. Journal of Social Psychology, 12, 433-465.
Asch, S. E. (1952). Social psychology. New York: Prentice-Hall.
Axelrod, R.M. (1984). The evolution of cooperation. New York: Basic Books.
Barbey, A.K., & Sloman, S.A. (2007). Base-rate respect: From ecological rationality to dual processes. Behavioral and Brain Sciences, 30, 241-298.
Baron, J., & Ritov, I. (1994). Reference points and omission bias. Organizational Behavior and Human Decision Processes, 59, 475-498.
Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5, 323-370.
Bechara, A., Damasio, H., & Damasio, A.R. (2000). Emotion, decision making,
and the orbitofrontal cortex. Cerebral Cortex, 10, 295-307.
Bechara, A., Damasio, H., Tranel, D., & Damasio, A.R. (1997). Deciding advantageously before knowing the advantageous strategy. Science, 275, 1293-1295.
Benartzi, S., & Thaler, R.H. (1995). Myopic loss aversion and the equity premium puzzle. The Quarterly Journal of Economics, 110, 73-92.
Benartzi, S., & Thaler, R.H. (2004). Save more tomorrow: Using behavioral economics to increase employee saving. Journal of Political Economy, 112, 164-187.
Bentham, J. (1789/1948). An introduction to the principles of morals and legislation. Reprinted Oxford, UK: Blackwell, 1948.
Bernoulli, D. (1738/1954). Exposition of a new theory on the measurement of risk. Econometrica, 22, 23-36.
Bertrand, M., Mullainathan, S., & Shafir, E. (2004). A behavioral-economics view of poverty. American Economic Review Papers and Proceedings, 6, 419-423.
Bertrand, M., Mullainathan, S., & Shafir, E. (2006). Behavioral economics and marketing in aid of decision making among the poor. Journal of Public Policy and Marketing, 25, 8-23.
Beyth-Marom, R., & Fischhoff, B. (1983). Diagnosticity and pseudodiagnosticity. Journal of Personality and Social Psychology, 45, 1185-1195.
Bicchieri, C. (2006). The grammar of society: The nature and dynamics of social norms. New York: Cambridge University Press.
Bless, H., Mackie, D. M., & Schwarz, N. (1992). Mood effects on attitude judgments: Independent effects of mood before and after message elaboration. Journal of Personality and Social Psychology, 63, 585-595.
Bless, H., Schwarz, N., & Wieland, R. (1996). Mood and the impact of category membership and individuating information. European Journal of Social Psychology, 26, 935-959.
Bodenhausen, G.V., Kramer, G.P., & Susser, K. (1994). Happiness and stereotypic thinking in social judgment. Journal of Personality and Social Psychology, 66, 621-632.
Bower, G.H. (1981). Mood and memory. American Psychologist, 36, 129-148.
Brenner, L.A., Koehler, D.J., & Rottenstreich, Y. (2002). Remarks on support theory: Recent advances and future directions. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 489-509). New York: Cambridge University Press.
Briñol, P., Petty, R.E., & Tormala, Z.L. (2006). The malleable meaning of subjective ease. Psychological Science, 17, 200-206.
Broemer, P., Grabowski, A., Gebauer, J. E., Ermel, O., & Diehl, M. (2008). How temporal distance from past selves influences self-perception. European Journal of Social Psychology, 38, 697-714.
Bruner, J. (1957a). On perceptual readiness. Psychological Review, 64, 123-152.
Bruner, J. (1957b). Going beyond the information given. In H. Gruber et al. (Eds.), Contemporary approaches to cognition. Cambridge, MA: Harvard University Press.
Bruner, J. S. (1992). Another look at New Look I. American Psychologist, 47, 780-783.
Bruner, J. S., & Goodman, C. C. (1947). Value and need as organizing factors in perception. Journal of Abnormal and Social Psychology, 42, 33-44.
Bruner, J. S., & Klein, G. S. (1960). The functions of perceiving: New Look retrospect. In B. Kaplan & S. Wapner (Eds.), Perspectives in psychological theory (pp.
61-77). New York: International Universities Press.
Bruner, J. S., & Postman, L. (1947). Tension and tension-release as organizing factors in perception. Journal of Personality, 15, 300-308.
Buehler, R., Griffin, D.W., & Ross, M. (2002). Inside the planning fallacy: The causes and consequences of optimistic time predictions. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 250-270). New York: Cambridge University Press.
Buehler, R., & McFarland, C. (2001). Intensity bias in affective forecasting: The role of temporal focus. Personality and Social Psychology Bulletin, 27, 1480-1493.
Camerer, C. (1990). Do markets correct biases in probability judgment? Evidence from market experiments. In L. Green & J.H. Kagel (Eds.), Advances in behavioral economics (Vol. 2, pp. 125-172).
Camerer, C., Babcock, L., Loewenstein, G., & Thaler, R. (1997). Labor supply of New York City cabdrivers: One day at a time. The Quarterly Journal of Economics, 112, 407-441.
Camerer, C., Issacharoff, S., Loewenstein, G., O'Donoghue, T., & Rabin, M. (2003). Regulation for conservatives: Behavioral economics and the case for "asymmetric paternalism." University of Pennsylvania Law Review, 151, 1211-1254.
Camerer, C., Loewenstein, G., & Weber, M. (1989). The curse of knowledge in economic settings: An experimental analysis. Journal of Political Economy, 97, 1232-1254.
Camerer, C., & Thaler, R.H. (1995). Anomalies: Ultimatums, dictators and manners. Journal of Economic Perspectives, 9, 209-219.
Carlsmith, J. M., & Gross, A. E. (1969). Some effects of guilt on compliance. Journal of Personality and Social Psychology, 11, 232-239.
Carlson, B. W. (1990). Anchoring and adjustment in judgments under risk. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16, 665-676.
Carlson, M., Charlin, V., & Miller, N. (1988). Positive mood and helping behavior: A test of six hypotheses. Journal of Personality and Social Psychology, 55, 211-229.
Carroll, J.S. (1978). The effect of imagining an event on expectations for the event: An interpretation in terms of the availability heuristic. Journal of Experimental Social Psychology, 14, 88-96.
Cartwright, D. (1949). Some principles of mass persuasion: Selected findings of research on the sale of U.S. War Bonds. Human Relations, 2, 253-267.
Cervone, D., & Peake, P. (1986). Anchoring, efficacy, and action: The influence of judgmental heuristics on self-efficacy judgments and behavior. Journal of Personality and Social Psychology, 50, 492-501.
Chaiken, S. (1980). Heuristic versus systematic information processing and the use of source versus message cues in persuasion. Journal of Personality and Social Psychology, 39, 752-766.
Chaiken, S., & Trope, Y. (1999). Dual-process theories in social psychology. New York: Guilford Press.
Chapman, G.B., & Johnson, E.J. (1999). Anchoring, activation and the construction of value. Organizational Behavior and Human Decision Processes, 79, 115-153.
Chen, S., Lee-Chai, A.Y., & Bargh, J.A. (2001). Relationship orientation as a moderator of the effects of social power. Journal of Personality and Social Psychology, 80, 173-187.
Cheng, P. W., & Holyoak, K.J. (1985). Pragmatic reasoning schemas. Cognitive Psychology, 17, 391-416.
Choi, J., Laibson, D., & Madrian, B. (in press). Reducing the complexity costs of 401(k) participation through quick enrollment. In D.A. Wise (Ed.), Research findings in the economics of aging. Chicago: University of Chicago Press.
Cialdini, R. (2007). Descriptive social norms as underappreciated sources of social control. Psychometrika, 72, 263-268.
Cialdini, R. B., Darby, B. L., & Vincent, J. E. (1973). Transgression and altruism: A case for hedonism. Journal of Experimental Social Psychology, 9, 502-516.
Cialdini, R.B., Kallgren, C.A., & Reno, R.R. (1991). A focus theory of normative conduct: A theoretical refinement and reevaluation of the role of norms in human behavior. In M.P. Zanna (Ed.), Advances in experimental social psychology (Vol. 24, pp. 202-234). San Diego, CA: Academic Press.
Clore, G.L. (1992). Cognitive phenomenology: Feelings and the construction of judgment. In L.L. Martin & A. Tesser (Eds.), The construction of social judgments (pp. 133-163). Hillsdale, NJ: Lawrence Erlbaum Associates.
Clore, G.L., & Huntsinger, J.R. (2007). How emotions inform judgment and regulate thought. Trends in Cognitive Sciences, 11, 393-399.
Coleman, S. (1996). The Minnesota income tax compliance experiment state tax results. http://www.taxes.state.mn.us/legal_policy/research_reports/content/complnce.pdf
Connolly, T., & Zeelenberg, M. (2002). Regret and decision making. Current Directions in Psychological Science, 11, 212-216.
Correll, J., Park, B., Judd, C. M., & Wittenbrink, B. (2002). The police officer's dilemma: Using ethnicity to disambiguate potentially threatening individuals. Journal of Personality and Social Psychology, 83, 1314-1329.
Cosmides, L. (1989). The logic of social exchange: Has natural selection shaped how humans reason? Studies with the Wason selection task. Cognition, 31, 187-316.
Cox, J.R., & Griggs, R.A. (1982). The effects of experience on performance in Wason's selection task. Memory and Cognition, 10, 496-502.
Critcher, C.R., & Gilovich, T. (2008). Incidental environmental anchors. Journal of Behavioral Decision Making, 21, 241-251.
Crocker, J. (1982). Biased questions in judgment of covariation studies. Personality and Social Psychology Bulletin, 8, 214-220.
Damasio, A. R. (1994). Descartes' error: Emotion, reason, and the human brain. New York: Putnam.
Damasio, H., Bechara, A., & Damasio, A.R. (2002). Reply to 'Do somatic markers mediate decisions on the gambling task?' Nature Neuroscience, 5, 1102-1104.
Darlington, R. B., & Macker, C. E. (1966). Displacement of guilt-produced altruistic behavior. Journal of Personality and Social Psychology, 4, 442-443.
Davies, M.F. (1997). Belief persistence after evidential discrediting: The impact of generated versus provided explanations on the likelihood of discredited outcomes. Journal of Experimental Social Psychology, 33, 561-578.
Davis, C. G., Lehman, D. R., Wortman, C. B., Silver, R. C., & Thompson, S. C. (1995). The undoing of traumatic life events. Personality and Social Psychology Bulletin, 21, 109-124.
Dawes, R.M. (1988). Rational choice in an uncertain world. New York: Harcourt Brace Jovanovich.
Dawson, E., Gilovich, T., & Regan, D.T. (2002). Motivated reasoning and performance on the Wason selection task. Personality and Social Psychology Bulletin, 28, 1379-1387.
Denes-Raj, V., & Epstein, S. (1994). Conflict between intuitive and rational processing: When people behave against their better judgment. Journal of Personality and Social Psychology, 66, 819-829.
Detweiler, J. B., Bedell, B. T., Salovey, P., Pronin, E., & Rothman, A. J. (1999). Message framing and sunscreen use: Gain-framed messages motivate beach-goers. Health Psychology, 18, 189-196.
Deutsch, M., & Gerard, H.B. (1955). A study of normative and informational social influence upon individual judgment. Journal of Abnormal and Social Psychology, 51, 629-636.
Devine, P. G. (1989). Stereotypes and prejudice: Their automatic and controlled components. Journal of Personality and Social Psychology, 56, 5-18.
Dijksterhuis, A. (2004). Think different: The merits of unconscious thought in preference development and decision making. Journal of Personality and Social Psychology, 87, 586-598.
Dijksterhuis, A., Bos, M.W., Nordgren, L.F., & van Baaren, R.B. (2006). On making the right choice: The deliberation-without-attention effect. Science, 311, 1005-1007.
Dijksterhuis, A., Spears, R., Postmes, T., Stapel, D.A., Koomen, W., van Knippenberg, A., & Scheepers, D. (1998). Seeing one thing and doing another: Contrast effects in automatic behavior. Journal of Personality and Social Psychology, 75, 862-871.
Ditto, P. H., & Lopez, D.F. (1992). Motivated skepticism: Use of differential decision criteria for preferred and nonpreferred conclusions. Journal of Personality and Social Psychology, 63, 568-584.
Ditto, P.H., Pizarro, D.A., Epstein, E.B., Jacobson, J.A., & MacDonald, T.K. (2006). Visceral influences on risk-taking behavior. Journal of Behavioral Decision Making, 19, 99-113.
Ditto, P.H., Scepansky, J.A., Munro, G.D., Apanovich, A.M., & Lockhart, L.K. (1998). Motivated sensitivity to preference-inconsistent information. Journal of Personality and Social Psychology, 75, 53-69.
Dunn, B.D., Dalgleish, T., & Lawrence, A.D. (2006). The somatic marker hypothesis: A critical evaluation. Neuroscience and Biobehavioral Reviews, 30, 239-271.
Eagly, A. H., & Chaiken, S. (1993). The psychology of attitudes. Fort Worth, TX: Harcourt Brace & Company.
Edmans, A., Garcia, D., & Norli, O. (2007). Sports sentiment and stock returns. Journal of Finance, 62, 1967-1998.
Edwards, K., & Smith, E.E. (1996). A disconfirmation bias in the evaluation of arguments. Journal of Personality and Social Psychology, 71, 5-24.
Edwards, W. (1953). Probability-preferences in gambling. The American Journal of Psychology, 66, 349-364.
Edwards, W. (1954). The theory of decision making. Psychological Bulletin, 51, 380-417.
Edwards, W. (1961). Behavioral decision theory. Annual Review of Psychology, 12, 473-498.
Edwards, W. (1968). Conservatism in human information processing. In B. Kleinmuntz (Ed.), Formal representation of human judgment (pp. 17-52). New York: Wiley.
Edwards, W., Lindman, H., & Savage, L.J. (1963). Bayesian statistical inference for psychological research. Psychological Review, 70, 193-242.
Eibach, R.P., Libby, L.K., & Gilovich, T. (2003). When change in the self is mistaken for change in the world. Journal of Personality and Social Psychology, 84, 917-931.
Epley, N. (2004). A tale of tuned decks? Anchoring as accessibility and anchoring as adjustment. In D.J. Koehler & N. Harvey (Eds.), The Blackwell handbook of judgment and decision making (pp. 240-256). Oxford, UK: Blackwell Publishers.
Epley, N., & Gilovich, T. (2001). Putting adjustment back in the anchoring and adjustment heuristic: Divergent processing of self-generated and experimenter-provided anchors. Psychological Science, 12, 391-396.
Epley, N., & Gilovich, T. (2004). Are adjustments insufficient? Personality and Social Psychology Bulletin, 30, 447-460.
Epley, N., & Gilovich, T. (2005). When effortful thinking influences judgmental anchoring: Differential effects of forewarning and incentives on self-generated and externally provided anchors. Journal of Behavioral Decision Making, 18, 199-212.
Epley, N., & Gilovich, T. (2006). The anchoring and adjustment heuristic: Why adjustments are insufficient. Psychological Science, 17, 311-318.
Epley, N., Keysar, B., Van Boven, L., & Gilovich, T. (2004). Perspective taking as egocentric adjustment. Journal of Personality and Social Psychology, 87, 327-339.
Epley, N., Mak, D., & Idson, L. (2006). Bonus or rebate? The impact of income framing on spending and saving. Journal of Behavioral Decision Making, 19, 213-227.
Epstein, S. (1991). Cognitive-experiential self-theory: An integrative theory of personality. In R. Curtis (Ed.), The self with others: Convergences in psychoanalytic, social, and personality psychology (pp. 111-137). New York: Guilford Press.
Evans, J. St. B. T. (2004). History of the dual process theory in reasoning. In K.I. Manktelow & M.C. Chung (Eds.), Psychology of reasoning: Theoretical and historical perspectives (pp. 241-266). Hove, UK: Psychology Press.
Evans, J. St. B. T. (2007). Hypothetical thinking: Dual processes in reasoning and judgment. New York: Psychology Press.
Evans, J. St. B. T. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology, 59, 255-278.
Evans, J. St. B. T., Newstead, S.E., & Byrne, R.M.J. (1993). Human reasoning: The psychology of deduction. Hove, UK: Erlbaum.
Evans, J. St. B. T., & Over, D.E. (1996). Rationality and reasoning. Hove, UK: Psychology Press.
Falk, R., & Konold, C. (1997). Making sense of randomness: Implicit encoding as a basis for judgment. Psychological Review, 104, 301-318.
Fazio, R. H. (1990). Multiple processes by which attitudes guide behavior: The MODE model as an integrative framework. In Advances in experimental social psychology (Vol. 23, pp. 75-109). New York: Academic Press.
Fazio, R.H., & Olson, M.A. (2003). Implicit measures in social cognition research: Their meaning and use. Annual Review of Psychology, 54, 297-327.
Ferguson, M. J., & Bargh, J. A. (2004). Liking is for doing: The effects of goal pursuit on automatic evaluation. Journal of Personality and Social Psychology, 87, 557-572.
Festinger, L. (1964). Conflict, decision and dissonance. Palo Alto: Stanford University Press.
Festinger, L., & Carlsmith, J.M. (1959). Cognitive consequences of forced compliance. Journal of Abnormal and Social Psychology, 58, 203-210.
Fiddick, L., Spampinato, M.V., & Grafman, J. (2005). Social contracts and precautions activate different neurological systems: An fMRI investigation of deontic reasoning. Neuroimage, 28, 778-786.
Fiedler, K. (1991). The tricky nature of skewed frequency tables: An information loss account of distinctiveness-based illusory correlations. Journal of Personality and Social Psychology, 60, 24-36.
Fischer, G.W., & Hawkins, S.A. (1993). Scale compatibility, strategy compatibility and the prominence effect. Journal of Experimental Psychology: Human Perception and Performance, 19, 580-597.
Fischhoff, B., & Beyth, R. (1975). I knew it would happen: Remembered probabilities of once-future things. Organizational Behavior and Human Performance, 13, 1-16.
Fiske, S.T., Lin, M., & Neuberg, S. (1999). The continuum model: Ten years later. In S. Chaiken & Y. Trope (Eds.), Dual-process theories in social psychology (pp. 231-254). New York: Guilford Press.
Fiske, S.T., & Neuberg, S.L. (1990). A continuum model of impression formation, from category-based to individuation processes: Influence of information and motivation on attention and interpretation. In M.P. Zanna (Ed.), Advances in experimental social psychology (Vol. 23, pp. 1-74). New York: Academic Press.
Fiske, S. T., & Taylor, S. E. (1991). Social cognition (2nd ed.). New York: McGraw-Hill.
Fiske, S. T., & Taylor, S. E. (2007). Social cognition: From brains to culture. New York: McGraw-Hill.
Fitzsimons, G. M., Chartrand, T. L., & Fitzsimons, G. J. (in press). Automatic effects of brand exposure on motivated behavior: How Apple makes you "think different." Journal of Consumer Research.
Fodor, J. (2000). Why we are so good at catching cheaters. Cognition, 75, 29-32.
Folkes, V.S. (1988). The availability heuristic and perceived risk. Journal of Consumer Research, 15, 13-23.
Forgas, J. P. (1998b). On being happy and mistaken: Mood effects on the fundamental attribution error. Journal of Personality and Social Psychology, 75, 318-331.
Fox, C.R. (2006). The availability heuristic in the classroom: How soliciting more criticism can boost your course ratings. Judgment and Decision Making, 1, 86-90.
Frank, R.H., Gilovich, T., & Regan, D.T. (1993). Does studying economics inhibit cooperation? Journal of Economic Perspectives, 7, 159-171.
Fredrickson, B.L., & Branigan, C. (2005). Positive emotions broaden the scope of attention and thought-action repertoires. Cognition and Emotion, 19, 313-332.
Frederick, S. (2002). Automated choice heuristics. In T. Gilovich, D.W. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 548-558). Cambridge, UK: Cambridge University Press.
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19, 25-42.
Frederick, S., Loewenstein, G., & O'Donoghue, T. (2002). Time discounting and time preference: A critical review. Journal of Economic Literature, 40, 350-401.
Fujita, K., Henderson, M.D., Eng, J., Trope, Y., & Liberman, N. (2006). Spatial distance and mental construal of social events. Psychological Science, 17, 278-282.
Funder, D.C. (1987). Errors and mistakes: Evaluating the accuracy of social judgment. Psychological Bulletin, 101, 75-90.
Gabrielcik, A., & Fazio, R. H. (1984). Priming and frequency estimation: A strict test of the availability heuristic. Personality and Social Psychology Bulletin, 10, 85-89.
Gasper, K., & Clore, G.L. (2002). Attending to the big picture: Mood and global versus local processing of visual information. Psychological Science, 13, 34-40.
Gavanski, I., & Hui, C. (1992). Natural sample spaces and uncertain belief. Journal of Personality and Social Psychology, 63, 585-595.
Gawronski, B., & Bodenhausen, G.V. (2006). Associative and propositional processes in evaluation: An integrative review of implicit and explicit attitude change. Psychological Bulletin, 132, 692-731.
Gigerenzer, G. (1991). How to make cognitive illusions disappear: Beyond heuristics and biases. European Review of Social Psychology, 2, 83-115.
Gigerenzer, G., Hell, W., & Blank, H. (1988). Presentation and content: The use of base rates as a continuous variable. Journal of Experimental Psychology: Human Perception and Performance, 14, 513-525.
Gigerenzer, G., & Hoffrage, U. (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102, 684-704.
Gigerenzer, G., Todd, P. M., & the ABC Research Group. (1999). Simple heuristics that make us smart. New York: Oxford University Press.
Gilbert, D. T. (1989). Thinking lightly about others: Automatic components of the social inference process. In J. S. Uleman & J. A. Bargh (Eds.), Unintended thought. New York: Guilford Press.
Gilbert, D. T. (1999). What the mind's not. In S. Chaiken & Y. Trope (Eds.), Dual-process theories in social psychology (pp. 3-11). New York: Guilford.
Gilbert, D. T. (2002). Inferential correction. In T. Gilovich, D. W. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 167-184). New York: Cambridge University Press.
Gilbert, D. T., & Jones, E. E. (1986). Perceiver-induced constraint: Interpretations of self-generated reality. Journal of Personality and Social Psychology, 50, 269-280.
Gilbert, D. T., & Malone, P. S. (1995). The correspondence bias. Psychological Bulletin, 117, 21-38.
Gilbert, D. T., Pinel, E. C., Wilson, T. D., Blumberg, S. J., & Wheatley, T. P. (1998). Immune neglect: A source of durability bias in affective forecasting. Journal of Personality and Social Psychology, 75, 617-638.
Gilbert, D. T., Pinel, E. C., Wilson, T. D., Blumberg, S. J., & Wheatley, T. P. (2002). Durability bias in affective forecasting. In T. Gilovich, D. Griffin, & D. Kahneman
(Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 292-312). Cambridge: Cambridge University Press.
Gilbert, D.T., & Wilson, T. D. (2000). Miswanting: Some problems in the forecasting of future affective states. In J. Forgas (Ed.), Feeling and thinking: The role of affect in social cognition (pp. 178-197). New York: Cambridge University Press.
Gilovich, T. (1981). Seeing the past in the present: The effect of associations to familiar events on judgments and decisions. Journal of Personality and Social Psychology, 40, 797-808.
Gilovich, T. (1983). Biased evaluation and persistence in gambling. Journal of Personality and Social Psychology, 44, 1110-1126.
Gilovich, T. (1990). Differential construal and the false consensus effect. Journal of Personality and Social Psychology, 59, 623-634.
Gilovich, T. (1991). How we know what isn't so: The fallibility of human reason in everyday life. New York: Free Press.
Gilovich, T., Kerr, M., & Medvec, V.H. (1993). The effect of temporal perspective on subjective confidence. Journal of Personality and Social Psychology, 64, 552-560.
Gilovich, T., & Medvec, V.H. (1995). The experience of regret: What, when, and why. Psychological Review, 102, 379-395.
Gilovich, T., Medvec, V.H., & Savitsky, K. (2000). The spotlight effect in social judgment: An egocentric bias in estimates of the salience of one's own actions and appearance. Journal of Personality and Social Psychology, 78, 211-222.
Gilovich, T., & Savitsky, K. (2002). Like goes with like: The role of representativeness in erroneous and pseudo-scientific beliefs. In T. Gilovich, D. W. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 617-624). New York: Cambridge University Press.
Gilovich, T., Savitsky, K., & Medvec, V.H. (1998). The illusion of transparency: Biased assessments of others' ability to read our emotional states. Journal of Personality and Social Psychology, 75, 332-346.
Gilovich, T., Vallone, R., & Tversky, A. (1985). The hot hand in basketball: On the misperception of random sequences. Cognitive Psychology, 17, 295-314.
Gleicher, F., Kost, K. A., Baker, S. M., Strathman, A. J., Richman, S. A., & Sherman, S. J. (1990). The role of counterfactual thinking in judgments of affect. Personality and Social Psychology Bulletin, 16, 284-295.
Goffman, E. (1963). Behavior in public places. New York: The Free Press.
Goldstein, N.J., Cialdini, R.B., & Griskevicius, V. (2008). A room with a viewpoint: Using social norms to motivate environmental conservation in hotels. Journal of Consumer Research, 35, 472-482.
Grayson, C. E., & Schwarz, N. (1999). Beliefs influence information processing strategies: Declarative and experiential information on risk assessment. Social Cognition, 17, 1-18.
Greene, J.D., Sommerville, R.B., Nystrom, L.E., Darley, J.M., & Cohen, J.D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293, 2105-2108.
Grether, D. M., & Plott, C.R. (1979). Economic theory of choice and the preference reversal phenomenon. American Economic Review, 69, 623-638.
Griffin, D.W., & Buehler, R. (1999). Frequency, probability, and prediction: Easy solutions to cognitive illusions? Cognitive Psychology, 38, 48-78.
Griffin, D.W., Dunning, D., & Ross, L. (1990). The role of construal processes in overconfident predictions about the self and others. Journal of Personality and Social Psychology, 59, 1128-1139.
Griffin, D.W., & Ross, L. (1991). Subjective construal, social inference, and human misunderstanding. In M. Zanna (Ed.), Advances in experimental social psychology (Vol. 24, pp. 319-356). New York: Academic Press.
Griffin, D.W., & Tversky, A. (1993). The weighing of evidence and the determinants of confidence. Cognitive Psychology, 24, 411-435.
Griggs, R.A., & Cox, J.R. (1982). The elusive thematic-materials effect in Wason's selection task. British Journal of Psychology, 73, 407-420.
Güth, W. (1995). On ultimatum bargaining experiments—a personal review. Journal of Economic Behavior and Organization, 27, 329-344.
Hamilton, D.L., & Gifford, R.K. (1976). Illusory correlation in interpersonal perception: A cognitive basis of stereotypic judgments. Journal of Experimental Social Psychology, 12, 392-407.
Hammond, K.R. (1996). Human judgment and social policy: Irreducible uncertainty, inevitable error, unavoidable injustice. New York: Oxford University Press.
Hastorf, A., & Cantril, H. (1954). They saw a game: A case study. Journal of Abnormal and Social Psychology, 49, 129-134.
Hawkins, S.A., & Hastie, R. (1990). Hindsight: Biased judgments of past events after the outcomes are known. Psychological Bulletin, 107, 311-327.
Hebl, M. R., Foster, J. B., Mannix, L. M., & Dovidio, J. F. (2002). Formal and interpersonal discrimination: A field study of bias toward homosexual applicants. Personality and Social Psychology Bulletin, 28, 815-825.
Henrich, J. (2000). Does culture matter in economic behavior? Ultimatum game bargaining among the Machiguenga of the Peruvian Amazon. American Economic Review, 90, 973-979.
Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., & McElreath, R. (2001). In search of Homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review, 91, 73-78.
Simon, H.A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69, 99-118.
Hertwig, R., & Gigerenzer, G. (1999). The 'conjunction fallacy' revisited: How intelligent inferences look like reasoning errors. Journal of Behavioral Decision Making, 12, 275-306.
Higgins, E. T., & Bargh, J. (1987). Social cognition and social perception. Annual Review of Psychology, 38, 369-425.
Hilton, D.J. (1990). Conversational processes and causal explanation. Psychological Bulletin, 107, 65-81.
Hinson, J.M., Jameson, T.L., & Whitney, P. (2002). Somatic markers, working memory, and decision making. Cognitive, Affective, and Behavioral Neuroscience, 2, 341-353.
Hirshleifer, D., & Shumway, T. (2003). Good day sunshine: Stock returns and the weather. Journal of Finance, 58, 1009-1032.
Holder, M.D., & Hawkins, C. (2007). The illusion of transparency: Assessment of sex differences in showing and hiding disgust. Basic and Applied Social Psychology, 29, 235-243.
Holyoak, K.J., & Cheng, P.W. (1995). Pragmatic reasoning with a point of view. Thinking and Reasoning, 1, 289-313.
Hong, Y., Morris, M.W., Chiu, C., & Benet-Martinez, V. (2000). Multicultural minds: A dynamic constructivist approach to culture and cognition. American Psychologist, 55, 709-720.
Hsee, C.K. (1995). Elastic justification: How tempting but task-irrelevant factors influence decisions. Organizational Behavior and Human Decision Processes, 62, 330-337.
Hsee, C.K. (1996). Elastic justification: How unjustifiable factors influence judgments. Organizational Behavior and Human Decision Processes, 66, 122-129.
Huber, J., Payne, J.W., & Puto, C. (1982). Adding asymmetrically dominated alternatives: Violations of regularity and the similarity hypothesis. Journal of Consumer Research, 9, 90-98.
Isen, A.M., Clark, M., & Schwartz, M.F. (1976). Duration of the effect of good mood on helping: Footprints on the sands of time. Journal of Personality and Social Psychology, 34, 385-393.
Isen, A.M., & Levin, P.F. (1972). The effect of feeling good on helping: Cookies and kindness. Journal of Personality and Social Psychology, 21, 384-388.
Isen, A.M., Shalker, T.E., Clark, M., & Karp, L. (1978). Affect, accessibility of material in memory, and behavior: A cognitive loop? Journal of Personality and Social Psychology, 36, 1-12.
Iyengar, S., Jiang, W., & Huberman, G. (2004). How much choice is too much? Determinants of individual contributions in 401(k) retirement plans. In O. Mitchell & S. Utkus (Eds.), Pension design and structure: New lessons from behavioral finance (pp. 83-95). Oxford: Oxford University Press.
Iyengar, S., & Lepper, M.R. (2000). When choice is demotivating: Can one desire too much of a good thing? Journal of Personality and Social Psychology, 79, 995-1006.
Jacoby, L.L., & Dallas, M. (1981). On the relationship between autobiographical memory and perceptual learning. Journal of Experimental Psychology: General, 110, 306-340.
Jacoby, L.L., Woloshyn, V., & Kelley, C. (1989). Becoming famous without being recognized: Unconscious influences of memory produced by dividing attention. Journal of Experimental Psychology: General, 118, 115-125.
Jacowitz, K.E., & Kahneman, D. (1995). Measures of anchoring in estimation tasks. Personality and Social Psychology Bulletin, 21, 1161-1167.
Jenni, K.E., & Loewenstein, G.F. (1997). Explaining the "identifiable victim effect." Journal of Risk and Uncertainty, 14, 235-257.
Johnson, E.J., & Goldstein, D. (2003). Do defaults save lives? Science, 302, 1338-1339.
Johnson, E.J., Hershey, J., Meszaros, J., & Kunreuther, H. (1993). Framing, probability distortions, and insurance decisions. Journal of Risk and Uncertainty, 7, 35-51.
Johnson, E.J., & Tversky, A. (1983). Affect, generalization, and the perception of risk. Journal of Personality and Social Psychology, 45, 20-31.
Johnson-Laird, P.N., Legrenzi, P., & Legrenzi, M. (1972). Reasoning and a sense of reality. British Journal of Psychology, 63, 395-400.
Jones, E.E., & Nisbett, R.E. (1972). The actor and the observer: Divergent perceptions of the causes of behavior. In E.E. Jones, D.E. Kanouse, H.H. Kelley, R.E. Nisbett, S. Valins, & B. Weiner (Eds.), Attribution: Perceiving the causes of behavior (pp. 79-94). Morristown, NJ: General Learning Press.
Jones, J. (2008, February 11). Truly functional foods. Eating Right. PressRepublican.com. http://www.pressrepublican.com/0808_health/local_story_042224534.html
Judd, C.M., Blair, I.V., & Chapleau, K.M. (2004). Automatic stereotypes vs. automatic prejudice: Sorting out the possibilities in the Payne (2001) weapon paradigm. Journal of Experimental Social Psychology, 40, 75-81.
Kahneman, D. (2003). A perspective on judgment and choice: Mapping bounded rationality. American Psychologist, 58, 697-720.
Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In T. Gilovich, D.W. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 49-81). Cambridge, UK: Cambridge University Press.
Kahneman, D., Knetsch, J.L., & Thaler, R.H. (1986). Fairness as a constraint on profit seeking: Entitlements in the market. American Economic Review, 76, 728-741.
Kahneman, D., Knetsch, J.L., & Thaler, R.H. (1991). Anomalies: The endowment effect, loss aversion, and status quo bias. Journal of Economic Perspectives, 5, 193-206.
Kahneman, D., & Lovallo, D. (1993). Timid choices and bold forecasts: A cognitive perspective on risk taking. Management Science, 39, 17-31.
Kahneman, D., & Miller, D.T. (1986). Norm theory: Comparing reality to its alternatives. Psychological Review, 93, 136-153.
Kahneman, D., & Tversky, A. (1972). Subjective probability: A judgment of representativeness. Cognitive Psychology, 3, 430-454.
Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237-251.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263-291.
Kahneman, D., & Tversky, A. (1982a). The psychology of preferences. Scientific American, 246, 160-173.
Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39, 341-350.
Kahneman, D., & Tversky, A. (1996). On the reality of cognitive illusions. Psychological Review, 103, 582-591.
Kahneman, D., & Varey, C.A. (1990). Propensities and counterfactuals: The loser that almost won. Journal of Personality and Social Psychology, 59, 1101-1110.
Kamin, K.A., & Rachlinski, J.J. (1995). Ex post ≠ ex ante: Determining liability in hindsight. Law and Human Behavior, 19, 89-104.
Kamstra, M.J., Kramer, L.A., & Levi, M.D. (2003). Winter blues: A SAD stock market cycle. American Economic Review, 93, 324-343.
Kant, I. (1965). The critique of pure reason (N.K. Smith, Trans.). New York: St. Martin's. (Original work published 1781)
Kay, A.C., Wheeler, S.C., Bargh, J.A., & Ross, L. (2004). Material priming: The influence of mundane physical objects on situational construal and competitive behavioral choice. Organizational Behavior and Human Decision Processes, 95, 83-96.
Kelley, C.M., & Jacoby, L.L. (1998). Subjective reports and process dissociation: Fluency, knowing, and feeling. Acta Psychologica, 98, 127-140.
Kelley, H.H. (1973). The processes of causal attribution. American Psychologist, 28, 107-128.
Keysar, B., & Barr, D.J. (2002). Self-anchoring in conversation: Why language users don't do what they "should." In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 150-166). Cambridge, England: Cambridge University Press.
Klayman, J., & Ha, Y. (1987). Confirmation, disconfirmation, and information in hypothesis testing. Psychological Review, 94, 211-228.
Knetsch, J.L., & Tang, F. (2006). The context, or reference, dependence of economic values: Further evidence and some predictable patterns. In M. Altman (Ed.), Handbook of contemporary behavioral economics (pp. 423-440). Armonk, NY: M.E. Sharpe.
Koehler, J.J. (1996). The base-rate fallacy reconsidered: Descriptive, normative, and methodological challenges. Behavioral and Brain Sciences, 19, 1-53.
Koffka, K. (1935). Principles of Gestalt psychology. New York: Harcourt, Brace.
Krosnick, J.A., Li, F., & Lehman, D.R. (1990). Conversational conventions, order of information acquisition, and the effect of base rates and individuating information on social judgments. Journal of Personality and Social Psychology, 59, 1140-1152.
Kruger, J. (1999). Lake Wobegon be gone! The "below-average effect" and the egocentric nature of comparative ability judgments. Journal of Personality and Social Psychology, 77, 221-232.
Kruger, J., & Gilovich, T. (2004). Actions, intentions, and self-assessment: The road to self-enhancement is paved with good intentions. Personality and Social Psychology Bulletin, 30, 328-339.
Kruglanski, A.W., & Webster, D.M. (1996). Motivated closing of the mind: "Seizing" and "freezing." Psychological Review, 103, 263-283.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480-498.
Lagnado, D.A., & Sloman, S.A. (2007). Inside and outside probability judgment. In D.J. Koehler & N. Harvey (Eds.), Blackwell handbook of judgment and decision making (pp. 155-176). Oxford, UK: Blackwell.
Langer, E.J. (1989). Mindfulness. New York: Perseus Books.
Lazarus, R.S. (1991). Emotion and adaptation. New York: Oxford University Press.
Leith, K.P., & Baumeister, R.F. (1996). Why do bad moods increase self-defeating behavior? Emotion, risk taking, and self-regulation. Journal of Personality and Social Psychology, 71, 1250-1267.
Lepper, M.R., Greene, D., & Nisbett, R.E. (1973). Undermining children's intrinsic interest with extrinsic rewards: A test of the "overjustification" hypothesis. Journal of Personality and Social Psychology, 28, 129-137.
Lerner, J.S., Gonzalez, R.M., Small, D.A., & Fischhoff, B. (2003). Effects of fear and anger on perceived risks of terrorism: A national field experiment. Psychological Science, 14, 144-150.
Lerner, J.S., & Keltner, D. (2001). Fear, anger, and risk. Journal of Personality and Social Psychology, 81, 146-159.
Lerner, J.S., Small, D.A., & Loewenstein, G. (2004). Heart strings and purse strings: Carryover effects of emotions on economic decisions. Psychological Science, 15, 337-341.
Leventhal, H., Singer, R.P., & Jones, S.H. (1965). The effects of fear and specificity of recommendation upon attitudes and behavior. Journal of Personality and Social Psychology, 2, 20-29.
Levin, I.P., & Gaeth, G.J. (1988). How consumers are affected by the framing of attribute information before and after consuming the product. Journal of Consumer Research, 15, 374-378.
Lewin, K. (1952). Group decision and social change. In G.E. Swanson, T.M. Newcomb, & E.L. Hartley (Eds.), Readings in social psychology. New York: Henry Holt.
Lewin, K., Dembo, T., Festinger, L., & Sears, P. (1944). Level of aspiration. In J.M. Hunt (Ed.), Personality and the behavior disorders (pp. 333-378). New York: Ronald Press.
Liberman, N., Sagristano, M.D., & Trope, Y. (2002). The effect of temporal distance on level of mental construal. Journal of Experimental Social Psychology, 38, 523-534.
Liberman, N., & Trope, Y. (1998). The role of feasibility and desirability considerations in near and distant future decisions: A test of temporal construal theory. Journal of Personality and Social Psychology, 75, 5-18.
Liberman, N., & Trope, Y. (2008). The psychology of transcending the here and now. Science, 322, 1201-1205.
Lichtenstein, S., & Slovic, P. (1971). Reversals of preference between bids and choices in gambling decisions. Journal of Experimental Psychology, 89, 46-55.
Lichtenstein, S., & Slovic, P. (1973). Response-induced reversals of preference in gambling: An extended replication in Las Vegas. Journal of Experimental Psychology, 101, 16-20.
Lieberman, M.D. (2003). Reflective and reflexive judgment processes: A social cognitive neuroscience approach. In J. Forgas, K.R. Williams, & W. von Hippel (Eds.), Social judgments: Implicit and explicit processes (pp. 44-67). New York: Cambridge University Press.
Lieberman, M.D., Gaunt, R., Gilbert, D.T., & Trope, Y. (2002). Reflexion and reflection: A social cognitive neuroscience approach to attributional inference. In M. Zanna (Ed.), Advances in experimental social psychology (Vol. 34, pp. 199-249). New York: Elsevier.
Lieberman, M.D., Jarcho, J.M., & Satpute, A.B. (2004). Evidence-based and intuition-based self-knowledge: An fMRI study. Journal of Personality and Social Psychology, 87, 421-435.
Linville, P.W., Fischer, G.W., & Fischhoff, B. (1993). AIDS risk perceptions and decision biases. In J.B. Pryor & G.D. Reeder (Eds.), The social psychology of HIV infection (pp. 5-38). Hillsdale, NJ: Lawrence Erlbaum.
Loewenstein, G. (1987). Anticipation and the valuation of delayed consumption. The Economic Journal, 97, 666-684.
Loewenstein, G. (1996). Out of control: Visceral influences on behavior. Organizational Behavior and Human Decision Processes, 65, 272-292.
Loewenstein, G. (2001). The creative destruction of decision research. Journal of Consumer Research, 28, 499-505.
Loewenstein, G., Nagin, D., & Paternoster, R. (1997). The effect of sexual arousal on predictions of sexual forcefulness. Journal of Research in Crime and Delinquency, 34, 443-473.
Loewenstein, G., Weber, E.U., Hsee, C.K., & Welch, N. (2001). Risk as feelings. Psychological Bulletin, 127, 267-286.
Loomes, G. (1999). Some lessons from past experiments and some challenges for the future. The Economic Journal, 109, 35-45.
Lord, C.G., Ross, L., & Lepper, M.R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37, 2098-2109.
MacLeod, C., & Campbell, L. (1992). Memory accessibility and probability judgments: An experimental evaluation of the availability heuristic. Journal of Personality and Social Psychology, 63, 890-902.
Madrian, B.C., & Shea, D.F. (2001). The power of suggestion: Inertia in 401(k) participation and savings behavior. Quarterly Journal of Economics, 116, 1149-1187.
Manktelow, K.I., & Over, D.E. (1991). Social roles and utilities in reasoning with deontic conditionals. Cognition, 39, 85-105.
Markman, A.B., & Medin, D.L. (1995). Similarity and alignment in choice. Organizational Behavior and Human Decision Processes, 63, 117-130.
Mayer, J.D., Gaschke, Y.N., Braverman, D.L., & Evans, T.W. (1992). Mood-congruent judgment is a general effect. Journal of Personality and Social Psychology, 63, 119-132.
Mayer, J.D., Gayle, M., Meehan, M.E., & Haarman, A.K. (1990). Toward better specification of the mood-congruency effect in recall. Journal of Experimental Social Psychology, 26, 465-480.
McConahay, J.B., Hardee, B.B., & Batts, V. (1981). Has racism declined in America? It depends upon who is asking and what is asked. Journal of Conflict Resolution, 25, 563-579.
McGlone, M.S., & Tofighbakhsh, J. (2000). Birds of a feather flock conjointly (?): Rhyme as reason in aphorisms. Psychological Science, 11, 424-428.
McKenzie, C.R.M., Liersch, M.J., & Finkelstein, S.R. (2006). Recommendations implicit in policy defaults. Psychological Science, 17, 414-420.
McKirnan, D.J. (1980). The identification of deviance: A conceptualization and initial test of a model of social norms. European Journal of Social Psychology, 10, 79-93.
McNeil, B.J., Pauker, S.G., Sox, H.C., & Tversky, A. (1982). On the elicitation of preferences for alternative therapies. New England Journal of Medicine, 306, 1259-1262.
Meehl, P.E. (1954). Clinical versus statistical prediction. Minneapolis: University of Minnesota Press.
Meehl, P.E., & Rosen, A. (1955). Antecedent probability and the efficiency of psychometric signs, patterns, or cutting scores. Psychological Bulletin, 52, 194-216.
Mellers, B., Hertwig, R., & Kahneman, D. (2001). Do frequency representations eliminate conjunction effects? An exercise in adversarial collaboration. Psychological Science, 12, 269-275.
Miller, D.T. (2006). Social psychology: An invitation. Belmont, CA: Thomson Wadsworth.
Miller, D.T., & McFarland, C. (1987). Pluralistic ignorance: When similarity is interpreted as dissimilarity. Journal of Personality and Social Psychology, 53, 298-305.
Miller, D.T., & Taylor, B.R. (1995). Counterfactual thought, regret, and superstition: How to avoid kicking yourself. In N.J. Roese & J.M. Olson (Eds.), What might have been: The social psychology of counterfactual thinking (pp. 305-332). Mahwah, NJ: Lawrence Erlbaum.
Moskowitz, G.B., Skurnik, I., & Galinsky, A.D. (1999). The history of dual-process notions in social psychology. In S. Chaiken & Y. Trope (Eds.), Dual-process theories in social psychology (pp. 12-36). New York: Guilford.
Mussweiler, T. (2002). The malleability of anchoring effects. Experimental Psychology, 49, 67-72.
Mussweiler, T. (2003). Comparison processes in social judgment: Mechanisms and consequences. Psychological Review, 110, 472-489.
Mussweiler, T., & Bodenhausen, G. (2002). I know you are, but what am I? Self-evaluative consequences of judging in-group and out-group members. Journal of Personality and Social Psychology, 82, 19-32.
Mussweiler, T., Rüter, K., & Epstude, K. (2004). The ups and downs of social comparison: Mechanisms of assimilation and contrast. Journal of Personality and Social Psychology, 87, 832-844.
Mussweiler, T., & Strack, F. (1999). Hypothesis-consistent testing and semantic priming in the anchoring paradigm: A selective accessibility model. Journal of Experimental Social Psychology, 35, 136-164.
Mussweiler, T., & Strack, F. (2000). The use of category and exemplar knowledge in the solution of anchoring tasks. Journal of Personality and Social Psychology, 78, 1038-1052.
Mynatt, C.R., Doherty, M.E., & Tweney, R.D. (1977). Confirmation bias in a simulated research environment: An experimental study of scientific inference. Quarterly Journal of Experimental Psychology, 29, 85-95.
Nash, J. (1950). Equilibrium points in n-person games. Proceedings of the National Academy of Sciences, 36, 48-49.
Nickerson, R.S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2, 175-220.
Nisbett, R., & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice-Hall.
Nisbett, R.E., & Wilson, T.D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, 231-259.
Novemsky, N., Dhar, R., Schwarz, N., & Simonson, I. (2007). Preference fluency in choice. Journal of Marketing Research, 44, 347-356.
Nussbaum, S., Liberman, N., & Trope, Y. (2006). Predicting the near and distant future. Journal of Experimental Psychology: General, 135, 152-161.
Oaksford, M., & Chater, N. (1994). A rational analysis of the selection task as optimal data selection. Psychological Review, 101, 608-631.
Oppenheimer, D.M. (2004). Spontaneous discounting of availability in frequency judgment tasks. Psychological Science, 15, 100-105.
Oppenheimer, D.M. (2008). The secret life of fluency. Trends in Cognitive Sciences, 12, 237-241.
Oppenheimer, D.M., LeBoeuf, R.A., & Brewer, N.T. (2008). Anchors aweigh: A demonstration of cross-modality anchoring. Cognition, 106, 13-26.
Oswald, M.E., & Grosjean, S. (2004). Confirmation bias. In R. Pohl (Ed.), Cognitive illusions (pp. 79-98). Hove, UK: Psychology Press.
Park, J., & Banaji, M.R. (2000). Mood and heuristics: The influence of happy and sad states on sensitivity and bias in stereotyping. Journal of Personality and Social Psychology, 78, 1005-1023.
Payne, B.K. (2001). Prejudice and perception: The role of automatic and controlled processes in misperceiving a weapon. Journal of Personality and Social Psychology, 81, 181-192.
Payne, B.K., Lambert, A.J., & Jacoby, L.L. (2002). Best laid plans: Effects of goals on accessibility bias and cognitive control in race-based misperceptions of weapons. Journal of Experimental Social Psychology, 38, 384-396.
Payne, J.W., Bettman, J.R., & Johnson, E.J. (1990). The adaptive decision maker: Effort and accuracy in choice. In R.M. Hogarth (Ed.), Insights in decision making: A tribute to Hillel J. Einhorn (pp. 129-153). Chicago, IL: University of Chicago Press.
Petty, R.E., & Cacioppo, J.T. (1986). The elaboration likelihood model of persuasion. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 19, pp. 123-205). New York: Academic Press.
Pinker, S. (1997). How the mind works. New York: W.W. Norton.
Plous, S. (1989). Thinking the unthinkable: The effect of anchoring on likelihood estimates of nuclear war. Journal of Applied Social Psychology, 19, 67-91.
Poulton, E.C. (1994). Behavioral decision theory: A new approach. Cambridge: Cambridge University Press.
Pronin, E., Gilovich, T., & Ross, L. (2004). Objectivity in the eye of the beholder: Divergent perceptions of bias in self versus others. Psychological Review, 111, 781-799.
Pyszczynski, T., & Greenberg, J. (1987). Toward an integration of cognitive and motivational perspectives on social inference: A biased hypothesis-testing model. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 20, pp. 297-340). New York: Academic Press.
Quattrone, G.A. (1982). Overattribution and unit formation: When behavior engulfs the person. Journal of Personality and Social Psychology, 42, 593-607.
Quattrone, G.A., Lawrence, C.P., Finkel, S.E., & Andrus, D.C. (1981). Explorations in anchoring: The effects of prior range, anchor extremity, and suggestive hints. Unpublished manuscript, Stanford University, Stanford, CA.
Rabin, M. (1993). Incorporating fairness into game theory and economics. American Economic Review, 83, 1281-1302.
Rachlinski, J.J. (1998). A positive psychological theory of judging in hindsight. University of Chicago Law Review, 65, 571-625.
Read, D. (2004). Intertemporal choice. In D.J. Koehler & N. Harvey (Eds.), Blackwell handbook of judgment and decision making (pp. 424-443). Oxford, UK: Blackwell.
Read, D., Loewenstein, G., & Rabin, M. (1999). Choice bracketing. Journal of Risk and Uncertainty, 19, 171-197.
Redelmeier, D.A., Koehler, D.J., Liberman, V., & Tversky, A. (1995). Probability judgment in medicine: Discounting unspecified alternatives. Medical Decision Making, 15, 227-230.
Redelmeier, D.A., & Tversky, A. (1990). Discrepancy between medical decisions for individual patients and for groups. New England Journal of Medicine, 322, 1162-1164.
Regan, D.T., Williams, M., & Sparling, S. (1972). Voluntary expiation of guilt: A field experiment. Journal of Personality and Social Psychology, 24, 42-45.
Regan, J.W. (1971). Guilt, perceived injustice, and altruistic behavior. Journal of Personality and Social Psychology, 18, 124-132.
Risen, J.L., & Gilovich, T. (2007). Another look at why people are reluctant to exchange lottery tickets. Journal of Personality and Social Psychology, 93, 12-22.
Risen, J.L., & Gilovich, T. (2008). Why people are reluctant to tempt fate. Journal of Personality and Social Psychology, 95, 293-307.
Risen, J.L., Gilovich, T., & Dunning, D. (2007). One-shot illusory correlations and stereotype formation. Personality and Social Psychology Bulletin, 33, 1492-1502.
Ross, L. (1977). The intuitive psychologist and his shortcomings. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 10, pp. 173-220). New York: Academic Press.
Ross, L., Greene, D., & House, P. (1977). The "false consensus effect": An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology, 13, 279-301.
Ross, L., Lepper, M.R., & Hubbard, M. (1975). Perseverance in self-perception and social perception: Biased attributional processes in the debriefing paradigm. Journal of Personality and Social Psychology, 32, 880-892.
Ross, L., & Nisbett, R.E. (1991). The person and the situation: Perspectives of social psychology. New York: McGraw-Hill.
Ross, L., & Ward, A. (1996). Naive realism: Implications for social conflict and misunderstanding. In T. Brown, E. Reed, & E. Turiel (Eds.), Values and knowledge (pp. 103-135). Hillsdale, NJ: Lawrence Erlbaum.
Ross, M., & Sicoly, F. (1979). Egocentric biases in availability and attribution. Journal of Personality and Social Psychology, 37, 322-336.
Rothbart, M., Fulero, S., Jensen, C., Howard, J., & Birrell, P. (1978). From individual to group impressions: Availability heuristics in stereotype formation. Journal of Experimental Social Psychology, 14, 237-255.
Rothman, A.J., & Schwarz, N. (1998). Constructing perceptions of vulnerability: Personal relevance and the use of experiential information in health judgments. Personality and Social Psychology Bulletin, 24, 1053-1064.
Rottenstreich, Y., & Hsee, C.K. (2001). Money, kisses, and electric shocks: On the affective psychology of risk. Psychological Science, 12, 185-190.
Rottenstreich, Y., & Tversky, A. (1997). Unpacking, repacking, and anchoring: Advances in support theory. Psychological Review, 104, 406-415.
Rozin, P., & Royzman, E.B. (2001). Negativity bias, negativity dominance, and contagion. Personality and Social Psychology Review, 5, 296-321.
Samuelson, W., & Zeckhauser, R. (1988). Status quo bias in decision making. Journal of Risk and Uncertainty, 1, 7-59.
Sanfey, A.G., Rilling, J.K., Aronson, J.A., Nystrom, L.E., & Cohen, J.D. (2003). The neural basis of economic decision-making in the ultimatum game. Science, 300, 1755-1758.
Sanna, L.J., & Schwarz, N. (2006). Metacognitive experiences and human judgment: The case of hindsight bias and its debiasing. Current Directions in Psychological Science, 15, 172-176.
Savage, L.J. (1954). The foundations of statistics. New York: Wiley.
Savitsky, K., & Gilovich, T. (2003). The illusion of transparency and the alleviation of speech anxiety. Journal of Experimental Social Psychology, 39, 618-625.
Schachter, S., & Singer, J.E. (1962). Cognitive, social, and physiological determinants of emotional state. Psychological Review, 69, 379-399.
Schelling, T.C. (1981). Economic reasoning and the ethics of policy. Public Interest, 63, 37-61.
Schkade, D., & Kahneman, D. (1998). Does living in California make people happy? A focusing illusion in judgments of life satisfaction. Psychological Science, 9, 340-346.
Schneider, W., & Shiffrin, R.M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84, 1-66.
Schwartz, B. (2004). The paradox of choice. New York: HarperCollins.
Schwarz, N. (1990). Feelings as information: Informational and motivational functions of affective states. In E.T. Higgins & R.M. Sorrentino (Eds.), Handbook of motivation and cognition (Vol. 2, pp. 527-561). New York: Guilford Press.
Schwarz, N., Bless, H., Strack, F., Klumpp, G., Rittenauer-Schatka, H., & Simons, A. (1991). Ease of retrieval as information: Another look at the availability heuristic. Journal of Personality and Social Psychology, 61, 195-202.
Schwarz, N., & Clore, G.L. (1983). Mood, misattribution, and judgments of well-being: Informative and directive functions of affective states. Journal of Personality and Social Psychology, 45, 513-523.
Schwarz, N., & Clore, G.L. (2003). Mood as information: 20 years later. Psychological Inquiry, 14, 296-303.
Schwarz, N., & Vaughn, L.A. (2002). The availability heuristic revisited: Ease of recall and content of recall as distinct sources of information. In T. Gilovich, D.W. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 103-119). New York: Cambridge University Press.
Schweitzer, M. (1994). Disentangling status quo and omission effects: An experimental analysis. Organizational Behavior and Human Decision Processes, 58, 457-476.
Schweitzer, M. (1995). Multiple reference points, framing, and the status quo bias in health care financing decisions. Organizational Behavior and Human Decision Processes, 63, 69-72.
Sechrist, G.B., Swim, J.K., & Mark, M.M. (2003). Mood as information in making attributions to discrimination. Personality and Social Psychology Bulletin, 29, 524-531.
Seta, J.J., McElroy, T., & Seta, C.E. (2001). To do or not to do: Desirability and consistency mediate judgments of regret. Journal of Personality and Social Psychology, 80, 861-870.
Shafir, E., Simonson, I., & Tversky, A. (1993). Reason-based choice. Cognition, 49, 11-36.
Shepperd, J.A., Ouellette, J.A., & Fernandez, J.K. (1996). Abandoning unrealistic optimism: Performance estimates and the temporal proximity of self-relevant feedback. Journal of Personality and Social Psychology, 70, 844-855.
Sherif, M., & Hovland, C.I. (1961). Social judgment: Assimilation and contrast effects in communication and attitude change. New Haven, CT: Yale University Press.
Sherman, D.K., & Kim, H.S. (2002). Affective perseverance: The resistance of affect to cognitive invalidation. Personality and Social Psychology Bulletin, 28, 224-237.
Sherman, S.J., Cialdini, R.B., Schwartzman, D.F., & Reynolds, K.D. (1985). Imagining can heighten or lower the perceived likelihood of contracting a disease: The mediating effect of ease of imagery. Personality and Social Psychology Bulletin, 11, 118-127.
Shiv, B., Loewenstein, G., Bechara, A., Damasio, H., & Damasio, A.R. (2005). Investment behavior and the negative side of emotion. Psychological Science, 16, 435-439.
Simon, H.A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69, 99-118.
Simon, H.A. (1957). Models of man: Social and rational. New York: Wiley.
Simon, J.L., & Smith, D.B. (1973). Change in location of a student health service: A quasi-experimental evaluation of the effects of distance on utilization. Medical Care, 11, 59-67.
Simonson, I. (1989). Choice based on reasons: The case of attraction and compromise effects. Journal of Consumer Research, 16, 158-174.
Skov, R.B., & Sherman, S.J. (1986). Information-gathering processes: Diagnosticity, hypothesis-confirmatory strategies, and perceived hypothesis confirmation. Journal of Experimental Social Psychology, 22, 93-121.
Sloman, S.A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119, 3-22.
Slovic, P., Finucane, M., Peters, E., & MacGregor, D.G. (2002). The affect heuristic. In T. Gilovich, D.W. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 397-420). Cambridge, UK: Cambridge University Press.
Slovic, P., Fischhoff, B., & Lichtenstein, S. (1982). Facts versus fears: Understanding perceived risk. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 463-489). New York: Cambridge University Press.
Slovic, P., & Lichtenstein, S. (1968). Relative importance of probabilities and payoffs in risk-taking. Journal of Experimental Psychology Monograph, 78, 1-18.
Slovic, P., & Peters, E. (2006). Risk perception and affect. Current Directions in Psychological Science, 15, 323-325.
Small, D., & Loewenstein, G. (2003). Helping a victim or helping the victim: Altruism and identifiability. Journal of Risk and Uncertainty, 26, 5-16.
Small, D.A., Loewenstein, G., & Slovic, P. (2007). Sympathy and callousness: The impact of deliberative thought on donations to identifiable and statistical victims. Organizational Behavior and Human Decision Processes, 102, 143-153.
Smith, C.A., & Ellsworth, P.C. (1985). Patterns of cognitive appraisal in emotion. Journal of Personality and Social Psychology, 48, 813-838.
Snyder, M., & Swann, W.B. (1978). Hypothesis-testing processes in social interaction. Journal of Personality and Social Psychology, 36, 1202-1212.
Sperber, D., Cara, F., & Girotto, V. (1995). Relevance theory explains the selection task. Cognition, 57, 31-95.
Sperber, D., & Girotto, V. (2002). Use or misuse of the selection task? Cognition, 85, 277-290.
Spranca, M., Minsk, E., & Baron, J. (1991). Omission and commission in judgment and choice. Journal of Experimental Social Psychology, 27, 76-105.
Srull, T.K., & Wyer, R.S. (1979). The role of category accessibility in the interpretation of information about persons: Some determinants and implications. Journal of Personality and Social Psychology, 37, 1660-1672.
Stanovich, K.E. (1999). Who is rational? Studies of individual differences in reasoning. Mahwah, NJ: Erlbaum.
Stanovich, K.E. (2004). The robot's rebellion: Finding meaning in the age of Darwin. Chicago: University of Chicago Press.
Stanovich, K.E., & West, R.F. (2008). On the relative independence of thinking biases and cognitive ability. Journal of Personality and Social Psychology, 94, 672-695.
Stone, V.E., Cosmides, L., Tooby, J., Kroll, N., & Knight, R.T. (2002). Selective impairment of reasoning about social exchange in a patient with bilateral limbic system damage. Proceedings of the National Academy of Sciences, 99, 11531-11536.
Strack, F., & Mussweiler, T. (1997). Explaining the enigmatic anchoring effect: Mechanisms of selective accessibility. Journal of Personality and Social Psychology, 73, 437-446.
Switzer, F., & Sniezek, J.A. (1991). Judgment processes in motivation: Anchoring and adjustment effects on judgment and behavior. Organizational Behavior and Human Decision Processes, 49, 208-229.
Tajfel, H. (1969). Cognitive aspects of prejudice. Journal of Social Issues, 25, 79-97.
Taylor, K.M., & Shepperd, J.A. (1998). Bracing for the worst: Severity, testing, and feedback timing as moderators of the optimistic bias. Personality and Social Psychology Bulletin, 24, 915-926.
Taylor, S.E., Lerner, J.S., Sage, R.M., Lehman, B.J., & Seeman, T.E. (2004). Early environment, emotions, responses to stress, and health. Journal of Personality, 72, 1365-1393.
Thaler, R.H. (1980). Toward a positive theory of consumer choice. Journal of Economic Behavior and Organization, 1, 39-60.
Thaler, R.H. (1988). Anomalies: The ultimatum game. Journal of Economic Perspectives, 2, 195-206.
Thaler, R.H. (1999). Mental accounting matters. Journal of Behavioral Decision Making, 12, 183-206.
Thaler, R.H., & Sunstein, C.R. (2003). Libertarian paternalism. American Economic Review, 93, 175-179.
Thaler, R.H., & Sunstein, C.R. (2008). Nudge: Improving decisions about health, wealth, and happiness. New Haven, CT: Yale University Press.
Thomas, W.I., & Znaniecki, F. (1918). The Polish peasant in Europe and America. Chicago, IL: University of Chicago Press.
Tiedens, L.Z., & Linton, S. (2001). Judgment under emotional certainty and uncertainty: The effects of specific emotions on information processing. Journal of Personality and Social Psychology, 81, 973-988.
Tomb, I., Hauser, M., Deldin, P., & Caramazza, A. (2002). Do somatic markers mediate decisions on the gambling task? Nature Neuroscience, 5, 1102-1103.
Tooby, J., & Cosmides, L. (1992). The psychological foundations of culture. In J.H. Barkow, L. Cosmides, & J. Tooby (Eds.), The adapted mind: Evolutionary psychology and the generation of culture (pp. 19-136). Oxford, England: Oxford University Press.
Trope, Y., & Liberman, N. (2003). Temporal construal. Psychological Review, 110, 403-421.
Tversky, A., & Kahneman, D. (1971). Belief in the law of small numbers. Psychological Bulletin, 76, 105-110.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211, 453-458.
Tversky, A., & Kahneman, D. (1982). Evidential impact of base rates. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 153-160). New York: Cambridge University Press.
Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90, 293-315.
Tversky, A., & Kahneman, D. (1986). Rational choice and the framing of decisions. Journal of Business, 59, S251-S278.
Tversky, A., & Koehler, D.J. (1994). Support theory: A nonextensional representation of subjective probability. Psychological Review, 101, 547-567.
Tversky, A., Sattath, S., & Slovic, P. (1988). Contingent weighting in judgment and choice. Psychological Review, 95, 371-384.
Tversky, A., Slovic, P., & Kahneman, D. (1990). The causes of preference reversal. American Economic Review, 80, 204-217.
Ubel, P.A., Spranca, M., DeKay, M., Hershey, J., & Asch, D.A. (1998). Public preferences for prevention versus cure: What if an ounce of prevention is worth only an ounce of cure? Medical Decision Making, 18, 141-148.
Van Boven, L., Medvec, V.H., & Gilovich, T. (2003). The illusion of transparency in negotiations. Negotiation Journal, 19, 117-131.
Van Dort, B.E., & Moos, R.H. (1976). Distance and the utilization of a student health center. Journal of the American College Health Association, 24, 159-162.
Von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton, NJ: Princeton University Press.
Vorauer, J.D., & Cameron, J.J. (2002). So close, and yet so far: Does collectivism foster transparency overestimation? Journal of Personality and Social Psychology, 83, 1344-1352.
Vorauer, J.D., & Ross, M. (1999). Self-awareness and transparency overestimation: Failing to suppress one's self. Journal of Experimental Social Psychology, 35, 415-440.
Wagenaar, W.A. (1972). Generation of random sequences by human subjects: A critical survey of the literature. Psychological Bulletin, 77, 65-72.
Wakslak, C.J., Nussbaum, S., Liberman, N., & Trope, Y. (2008). Representations of the self in the near and distant future. Journal of Personality and Social Psychology, 95, 757-773.
Walster, E., Berscheid, E., Abrahams, D., & Aronson, V. (1967). Effectiveness of debriefing following deception experiments. Journal of Personality and Social Psychology, 6, 371-380.
Wason, P.C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12, 129-140.
Wason, P.C., & Evans, J.St.B.T. (1975). Dual processes in reasoning? Cognition, 3, 141-154.
Wason, P.C., & Johnson-Laird, P.N. (1972). Psychology of reasoning: Structure and content. London: Batsford.
Wason, P.C., & Shapiro, D. (1971). Natural and contrived experience in a reasoning problem. Quarterly Journal of Experimental Psychology, 23, 63-71.
Whittlesea, B.W., & Leboe, J.P. (2000). The heuristic basis of remembering and classification: Fluency, generation, and resemblance. Journal of Experimental Psychology: General, 129, 84-106.
Wilson, T.D. (2002). Strangers to ourselves: Discovering the adaptive unconscious. Cambridge, MA: Belknap Press of Harvard University Press.
Wilson, T.D., & Gilbert, D.T. (2003). Affective forecasting. In M. Zanna (Ed.), Advances in experimental social psychology (Vol. 35, pp. 345-411). New York: Elsevier.
Wilson, T.D., Houston, C., Etling, K.M., & Brekke, N. (1996). A new look at anchoring effects: Basic anchoring and its antecedents. Journal of Experimental Psychology: General, 125, 387-402.
Wilson, T.D., Wheatley, T., Meyers, J., Gilbert, D.T., & Axsom, D. (2000). Focalism: A source of durability bias in affective forecasting. Journal of Personality and Social Psychology, 78, 821-836.
Winkielman, P., Schwarz, N., & Belli, R.F. (1998). The role of ease of retrieval and attribution in memory judgments: Judging your memory as worse despite recalling more events. Psychological Science, 9, 124-126.
Winkielman, P., & Schwarz, N. (2001). How pleasant was your childhood? Beliefs about memory shape inferences from experienced difficulty of recall. Psychological Science, 12, 176-179.
Word, C.O., Zanna, M.P., & Cooper, J. (1974). The nonverbal mediation of self-fulfilling prophecies in interracial interaction. Journal of Experimental Social Psychology, 10, 109-120.
Wright, W.F., & Anderson, U. (1989). Effects of situation familiarity and financial incentives on use of the anchoring and adjustment heuristic for probability assessment. Organizational Behavior and Human Decision Processes, 44, 68-82.
Wright, W.F., & Bower, G.H. (1992). Mood effects on subjective probability assessment. Organizational Behavior and Human Decision Processes, 52, 276-291.
Wyer, R.S., & Srull, T.K. (1989). Memory and cognition in its social context. Hillsdale, NJ: Lawrence Erlbaum.
Yachanin, S.A., & Tweney, R.D. (1982). The effect of thematic content on cognitive strategies in the four-card selection task. Bulletin of the Psychonomic Society, 19, 87-90.
Zeelenberg, M., van den Bos, K., van Dijk, E., & Pieters, R. (2002). The inaction effect in the psychology of regret. Journal of Personality and Social Psychology, 82, 314-327.
Zuckerman, M., Knee, C.R., Hodgins, H.S., & Miyake, K. (1995). Hypothesis confirmation: The joint effect of positive test strategy and acquiescence response set. Journal of Personality and Social Psychology, 68, 52-60.
Table 1: Composition of the card decks used by Bechara, Damasio, & Damasio (2000).

                 Deck A      Deck B     Deck C     Deck D
Reward           $100        $100       $50        $50
Punishment       $150-350    $1,250     $25-75     $250
P(Punishment)    .5          .1         .5         .1
Expected Value   -$25        -$25       +$25       +$25
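The expected values in the bottom row can be checked directly from the reward, punishment, and probability rows; a minimal worked calculation per card drawn, assuming the midpoint of each punishment range ($250 for Deck A, $50 for Deck C):

\[
\mathrm{EV} = \text{reward} - P(\text{punishment}) \times \text{punishment}
\]
\[
\text{Deck A: } \$100 - .5 \times \$250 = -\$25 \qquad \text{Deck B: } \$100 - .1 \times \$1{,}250 = -\$25
\]
\[
\text{Deck C: } \$50 - .5 \times \$50 = +\$25 \qquad \text{Deck D: } \$50 - .1 \times \$250 = +\$25
\]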
Table 2: Attributes generally attributed to the two systems of human thought.

System 1                  System 2
Fast                      Slow
Automatic                 Deliberate
Associative               Rule-based
Relatively effortless     Effortful
Parallel operations       Serial operations
Concrete                  Can be abstract
Non-conscious             Conscious
Figure 1: Organ donation rates in countries with explicit consent (opt-in) and presumed consent (opt-out) donation policies. Adapted from Johnson & Goldstein (2003).
Figure 2: A guiding model of naïve realism and construal in social psychology.