
Explanation as a Cognitive Process

Zachary Horne1,*,†, Melis Muradoglu2,*,†, and Andrei Cimpian2,*,@

In press, Trends in Cognitive Sciences.

1 School of Social and Behavioral Sciences, Arizona State University, Phoenix, AZ, USA
2 Department of Psychology, New York University, New York, NY, USA

* Correspondence: [email protected] (Z. Horne), [email protected] (M. Muradoglu), [email protected] (A. Cimpian)

@ Twitter: @AndreiCimpian

† These authors contributed equally.

Acknowledgments

The authors are grateful to Steve Flusberg, Micah Goldwater, Sunny Khemlani, Ryan Lei, Patricia Mirabile, Greg Murphy, Jussi Valtonen, Nadya Vasilyeva, Deena Weisberg, Jeff Zemla, and the Editor (Dr. Lindsey Drayton) for helpful comments on previous drafts of this manuscript.

Abstract

Understanding how people explain is a core task for cognitive science. In this Opinion article, we argue that research on explanation would benefit from more engagement with how the cognitive systems involved in generating explanations (e.g., attention, long-term memory) shape the outputs of this process. Although it is clear that these systems do shape explanation, surprisingly little research has investigated how they might do so. We outline the proposed mechanistic approach to explanation and illustrate it with an example—the recent research that suggests explanations exhibit a bias toward inherent information. Taking advantage of what we know about the operating parameters of the human mind is likely to yield new insights into how people come up with explanations.

Keywords: explanation; attention; long-term memory; working memory; metacognition; inherence bias

Explaining Explanation via Its Interactions with Other Processes

Why is the subway late? Why are there so few women in tech? Why is the universe expanding? From the mundane to the cosmic, why-questions such as these are a common feature of our psychological lives, prompted both by our species' natural curiosity (e.g., [1-6]) and by the demands of everyday life (e.g., [7, 8]). Although they are common, why-questions are not exactly simple to answer. Under most circumstances, figuring out why something is the case (that is, generating an explanation) is akin to solving a complex, ill-defined problem. How do people manage to come up with answers for why-questions?

Most of the recent research on this topic has proceeded along two lines. First, inspired by Bayesian approaches to modeling cognition, researchers have described the logic of the inferences people must perform when producing explanations and making causal attributions (e.g., [9, 10]). The extent to which people approximate the performance of a rational Bayesian reasoner provides insight into the high-level characteristics of the system that they are using to generate explanations. Second, inspired by the rich tradition of work on explanation in philosophy (for a review, see [11]), researchers have investigated what people consider a satisfactory answer to a why-question. The extent to which people favor answers with certain "virtues" (e.g., simplicity, ability to unify a range of observations; [12-17]) provides insight into the target of explanatory reasoning—the psychological "essence" of an explanation [18]. (Because our focus is specifically on the process by which people generate explanations, we will not discuss the downstream effects of this generation process on other judgments (e.g., [19, 20]), except to say that the source and scope of these effects may also be informed by the approach we propose here.)

Without a doubt, these lines of work have made substantial contributions to the cognitive science of explanation. However, our focus in this Opinion is different. We start from the premise that the question of how people generate explanations can be fruitfully investigated by looking to the specific cognitive processes that are engaged by why-questions. The past 60 years of research on attention, long-term memory storage and retrieval, working memory, judgment and decision-making, reasoning under uncertainty, and metacognition have revealed many general principles that are likely relevant to the process by which people generate explanations. Explanations depend on many of the cognitive processes investigated in this vast literature: When generating an explanation, people focus on specific aspects of the observation being explained (i.e., attention is involved); they retrieve content stored in long-term memory; they manipulate this information in working memory; and they metacognitively assess the quality of potential explanations. This much seems obvious. Surprisingly, although it is often acknowledged that the search for explanations depends on these cognitive processes, so far there has been relatively little research on what this might mean: How exactly do these processes shape how explanations are generated?

The first part of our Opinion highlights some of the specific ways in which well-established facts about how our minds attend to and manipulate information (broadly speaking) might also shape the specific process of searching for explanations. This section will be largely speculative and forward-looking, but we will also mention relevant prior research where available. To illustrate the potential payoffs of this approach, in the second part of this Opinion we articulate and defend a specific claim about explanation—namely, that aspects of the cognitive processes involved in generating explanations bias their content toward inherent or intrinsic features. Before proceeding, however, we provide a working definition of the term "explanation."

What Is an Explanation?

If explanations are answers to why-questions, what sorts of answers are they? To provide a more specific definition, we need to first understand the structure of a why-question. According to a prominent view [21], every why-question consists of three elements: (i) the observation to be explained, also known as the explanandum (see Glossary) (e.g., that the subway is late), (ii) a contrast class of similar but non-observed states (e.g., the subway being on time), and (iii) a request for information that can differentiate the occurrence of the explanandum from the non-occurrence of its contrast class (e.g., a mechanical issue with the train). An explanation is a statement that satisfies the request for information in (iii). Although this is not the only way to conceptualize the structure of why-questions (e.g., [11]), this view is particularly suitable for a process-oriented account of explanation because it makes apparent the close link between explanation and other aspects of cognition—for example, the likely role of selective attention and long-term memory retrieval in constructing an appropriate contrast class, or the likely involvement of working memory in selecting information that differentiates the explanandum from its contrast class.

What Cognitive Processes Guide the Search for Explanations, and How?

In this section, we focus on four cognitive systems that are fundamental to reasoning and outline their role in generating explanations: (i) attention, (ii) long-term memory, (iii) working memory, and (iv) metacognition. In focusing specifically on these four systems, we follow in the footsteps of the rich body of work on reasoning under uncertainty (e.g., [22-26]).

This literature suggests that people often anchor their judgments and decisions on information that captures their attention or otherwise comes to mind readily and is easy to think about, failing to notice that they have overlooked other relevant considerations in the process (see Box 1). Our review highlights some of the ways in which this may be true of explanations as well. That is, we describe how these systems constrain what information is attended to when considering a why-question, influence the ease with which information is retrieved and manipulated when formulating an answer, and determine how potential answers are then evaluated. To be sure, other cognitive processes and structures are also relevant to our ability to generate explanations—for instance, the ability to generalize (e.g., [27]) and draw analogies (e.g., [28]) from previous experiences, or the rich intuitive theories that inform our explanations (e.g., [29-31]). Ultimately, a satisfying process-oriented account of explanation will have to be comprehensive in scope, incorporating these other processes as well. However, we believe that our current focus on attention, long-term memory, working memory, and metacognition is justified by the progress that's been made in understanding how people reason under uncertainty by attending to these four classes of phenomena.

Constraints Imposed by Attentional Processes. Reasoning is guided by attention (e.g., [32]). What information people attend to when reasoning is in part a function of bottom-up, stimulus-driven processes, which tend to focus attention on information that is salient in the context. Because contextually salient information is not always relevant to a correct solution or optimal decision, its tendency to capture reasoners' attention leads to what are known as focusing effects (e.g., [33]). These effects—whereby people consistently overweigh information that happens to be salient in the moment and underweigh considerations that aren't evoked by the context—are ubiquitous in the literature on reasoning and decision making (e.g., [33-38]).

For instance, when considering a certain course of action (e.g., buying an item), people seldom consider alternative actions they could be pursuing (i.e., opportunity costs) and instead tend to focus almost exclusively on the features of the focal option (e.g., what they like and dislike about the particular item they are considering buying; [39]). To take another example, attentional focusing effects may also be part of the reason why people discount future (relative to immediate) rewards as steeply as they do (e.g., [40]): The immediate reward captures participants' attention, and as a result they fail to attend sufficiently to the delayed reward. In line with this possibility, simply asking participants to think about the delayed reward first considerably reduces the extent to which they discount it, speaking to the role of attention in this phenomenon [41].

Focusing effects—and attentional processes more generally—are also likely to affect how people generate explanations. For instance, when considering a why-question, reasoners' attention may be captured by the entities in the why-question itself (i.e., the explanandum). This attentional focus suggests a path along which to search for an answer, and as a result reasoners may not attend to other considerations that would also be relevant to an adequate answer. The work we will review in the second part of this Opinion provides some evidence for this claim, as does the literature on "person perception"—that is, explanations for others' behavior. People routinely overestimate the extent to which the features of a person (e.g., their abilities, their personality) explain their behavior and underestimate the role of history and circumstances (e.g., [42, 43]). Consistent with the claim that this explanatory bias is attentional in nature, the bias reverses when the circumstances surrounding a behavior are made salient experimentally; that is, participants overweigh circumstances instead [44]. The presence of attentional focusing effects is also suggested by the evidence that people are unduly influenced by the order in which they encounter or retrieve information while generating an explanation.

The closer a piece of information is presented to the time when participants must make a judgment, the more likely it is that this information will drive their judgment, arguably because their attention is focused on it ([45, 46]; but see [47]). Attentional processes have been implicated in how people evaluate explanations as well. For example, attention is sometimes captured by irrelevant details in an explanation (e.g., irrelevant neuroscientific facts in a psychological explanation), which leads people to evaluate this explanation more positively or negatively than they should (e.g., [48, 49]; see also [50]). Although focusing effects are likely not the only way that attention shapes explanation, they are a good example of how theories of explanation could be enriched by considering the specific ways in which the cognitive processes invoked during explaining guide this act.

Constraints Imposed by Long-term Memory Retrieval Processes. Attention tells us where to look for answers, but what these answers are depends crucially on memory. The content retrieved from long-term memory supplies the raw material out of which people construct decisions, preferences, and judgments (including explanations; e.g., [6, 51]). However, the information stored in memory—no matter how useful or relevant—is unevenly retrievable for purposes of reasoning, illustrating an important constraint imposed by memory processes. In fact, information that is accessible—that is, easily activated and thus used during reasoning—is only a small subset of the information that is in principle available in memory (for a review, see [52]). Moreover, the memory content that is accessible seems to differ in systematic ways from the less-accessible content. To illustrate, memory content that matches a retrieval cue at a superficial level is more likely to be activated than content that matches the cue at a deeper, more abstract level (e.g., [53-56]).

When participants are asked to solve a probability problem, for instance, the problems that come to mind most easily are previous problems that match the surface features of the current problem (e.g., it involved rolling dice) rather than problems that instantiate the same abstract mathematical principles (e.g., conditional probability; [55]). Memory content that matches a retrieval cue in valence is also particularly easy to activate (e.g., one negatively valenced memory facilitates recall of another; [57]), as is memory content that has been retrieved in the recent past (e.g., [58, 59]). Another dimension that is relevant to the accessibility of a memory is whether it consists of inherent information about the retrieval cue or extrinsic information about it (see Box 2). For example, when participants were asked to report what came to mind about a target concept (e.g., coin), they first retrieved inherent features of it (e.g., made of metal) and only later generated extrinsic information about its relations to other entities (e.g., found in pockets) [60].

Systematic differences in the accessibility of memory content are also likely to influence how we answer why-questions. Consistent with this possibility, the few studies that have investigated the role of accessibility gradients in the search for explanations suggest that they indeed guide this process (e.g., [61-66]). For instance, information that has been retrieved in the past for the purpose of explaining becomes more accessible for use in future explanations (e.g., [64, 66]), which is sometimes helpful (e.g., when prior retrieval reflects the base rates of certain causes; [66]) but can also interfere with the ability to generate adequate explanations (e.g., when the accessible information is irrelevant to the current explanandum; [64]). These findings and the process-oriented computational models they inspired (e.g., [61, 62]) are promising, but their influence on the broader literature on explanation has been more limited than would be expected given memory's central role in explanation.

Constraints Imposed by Working Memory Processes. Long-term memory supplies the raw material for reasoning and decision making, but it is in working memory that judgments and decisions are formulated. Working memory is limited in capacity (for a review, see [67]), with the limiting factor being the number of elements whose relations can be processed at once (e.g., [68-70]). Although the precise limit of working memory is a matter of some debate, many researchers place the maximum number of elements or variables whose relations can be processed simultaneously somewhere around 3 or 4; the more variables one has to represent to formulate a judgment, the more effortful that judgment is (e.g., [68, 71]). Importantly, the same 3- or 4-element limit has been identified in a wide range of inductive and deductive reasoning tasks (for reviews, see [68, 69]).

This basic constraint on human cognition has clear implications for the process of answering why-questions. To even begin to consider the answer to such a question, one needs to represent the explanandum in working memory, which will take up some of the already-limited capacity. In addition, as spelled out in an earlier section, answering a why-question requires that one represent not just the explanandum itself but also an inferred contrast class of events or states that did not occur; this may also be costly in terms of working memory. The constraints that working memory imposes on explanation become even more salient when juxtaposed against the fact that, well, the world is complex. And while explanations can and should leave out factors that are only distantly related to the explanandum (e.g., the Big Bang; see [72] for more on so-called idealized explanations), it is still the case that adequate explanations often need to account for the interaction of several variables. Even though people are sometimes able to free up working memory strategically (e.g., by chunking; [68]), on balance the search for explanations will typically be under-resourced, as others have pointed out as well (e.g., [4, 73]). In turn, this means that the explanations people generate may often have to be relationally simpler than the simplest adequate explanations.

The implications of the vast literature on working memory for the process of generating explanations are underexplored. Some indirect evidence can be found in studies examining how working memory constrains the ability to evaluate the probability of a hypothesis (e.g., [74-77]). Participants with lower working memory capacities (or who are temporarily under a cognitive load) are generally able to retrieve fewer alternative hypotheses from long-term memory with which to compare the hypothesis of interest; in turn, this leads to less accurate probability judgments. Similar constraints are likely to operate when people are constructing an explanation anew (vs. evaluating an existing one), with working memory imposing a sharp limit on the amount of information that can be brought up from long-term memory and manipulated to arrive at a satisfactory explanation. In other words, taking seriously the fact that explanations are constructed in working memory—with all that it entails—is likely to lead to new insights about this process, as we will also illustrate in the second part of this Opinion.

Constraints Imposed by Metacognitive Processes. Once generated, a judgment is typically "tagged" with a certain level of confidence in its adequacy; low confidence sometimes prompts reasoners to revisit a judgment and revise it. The act of evaluating (and, more generally, thinking about) one's own thinking is known as metacognition and has been the subject of sustained research in cognitive psychology since the 1970s. Generally, this work has suggested that metacognitive evaluations have limited accuracy (for reviews, see [78, 79]). A key reason for this conclusion is that many of the cues used during metacognitive evaluation instill subjective confidence in the judgment or decision being considered but don't track its actual validity. For example, people are more confident in a judgment if they feel that the information used to formulate it came to mind readily, or if they feel an overall sense of ease of processing (i.e., fluency) when evaluating it (e.g., [80-84]).

For instance, participants in one study [83] were more confident in the judgments that they generated more quickly (i.e., that were more fluent), even though fluency was unrelated to validity.

How does this evidence bear on explanation? The process of generating explanations arguably includes an evaluation step; people must do a rough plausibility check before committing to an explanation. To the extent that (i) this evaluation step is influenced by factors such as accessibility or fluency, and that (ii) the accessibility of information in long-term memory or the fluency with which it is processed in working memory don't track accuracy per se (but see [62]), reasoners may overestimate the accuracy of their explanations—particularly in cases where these explanations come to mind readily, without effort. That is, people may (mis)interpret their ease in coming up with some answer to a why-question (e.g., why is the subway late?) as evidence that this answer is correct. Evidence consistent with this claim is provided by the work on so-called illusions of explanatory depth (e.g., [85-87]). When asked how well they know how devices such as zippers or flush toilets work, people's initial responses overestimate their understanding by a considerable margin—that is, they seem to experience a metacognitive illusion that their explanations for these devices are better than they really are.

In summary, we believe that theories of the process by which people generate explanations would be enriched by incorporating some of the general principles that characterize metacognitive judgments. A tighter integration between these literatures is likely to provide explanation researchers with a better understanding of the circumstances under which people are overconfident in the quality of their explanations, as well as with new insights into how people decide to adopt (or reject) an explanation they are in the process of generating.

A Case Study: The Inherence Bias in Explanation

The goal of the previous section was to argue that the study of explanation would benefit from deeper engagement with the cognitive processes triggered by why-questions. To be clear, this engagement can take many forms; the content of the previous section by no means exhausts the possible points of contact between the work on explanation and the various cognitive processes and structures that are likely to affect explanatory reasoning. Our goal in the present section is to provide one detailed example of the sort of integration we are advocating. We lay out how several of the constraints identified in the prior section give rise to systematic bias in the content of the explanations generated under ordinary circumstances—specifically, a bias toward inherent information [88-90].

Motivating the Inherence Bias Account. Why would explanation show an inherence bias? In this section, we provide an account of why one might in principle expect to see such a bias, given the constraints that reasoners operate within. The empirical evidence that this bias is in fact present—and a product of these constraints—will be detailed in the next section.

In motivating why one might expect an inherence bias, we will loosely follow the order of the steps involved in generating an explanation, starting with where reasoners' attention is directed when they are first considering a why-question: If explanation is susceptible to attentional focusing effects (e.g., [33]), then the elements of the explanandum may capture reasoners' attention as they set out to answer a why-question (e.g., why is the subway late?). These elements (e.g., the subway train) will then serve as the primary retrieval cues in the next step, often preempting a broader search of long-term memory. During the course of retrieval, certain types of information are likely to come to mind particularly readily.

Although there are multiple gradients of accessibility in long-term memory, an important one favors inherent information (e.g., subways have mechanical parts) over extrinsic information (e.g., subways stop at signals) [60] (see also Box 2). As a result of this accessibility difference, the content retrieved from memory will on average tilt toward inherent or intrinsic properties.

This tilt may then be exacerbated by the limitations of working memory, where explanations are formulated. Why might this be? We note that extrinsic information pertains to relations between the elements of the explanandum and other entities (e.g., the signals), whereas inherent information does not. As a result, extrinsic information may introduce additional relational complexity into a system that is already taxed by the demands of representing the explanandum and its contrast class. Thus, even if a reasoner were to consider an extrinsic factor as a candidate answer (despite its lower likelihood of being retrieved), the reasoner may run up against the processing bottleneck of working memory. To the extent that consideration of extrinsic information in answers to why-questions sometimes exceeds what can be easily represented in working memory, the cognitive system may default to using inherent information for this purpose.

The accessibility of inherent properties and their relational simplicity—which translates into greater fluency (e.g., [69])—is likely to matter at the evaluation stage as well. People use the accessibility and fluency of an answer as a cue to its accuracy, regardless of actual accuracy (e.g., [83, 84]). Thus, explanations that use inherent information will seldom raise red flags when they are being evaluated, even though the process by which they were constructed took nontrivial shortcuts at one or more points along the way. In this way, several basic features of the cognitive system seem to push explanations toward inherence.

We should note that the constraints or shortcuts discussed above (e.g., focusing effects, accessibility of inherent information) may not be individually sufficient to cause an inherence bias in explanation.

For example, many extrinsic explanations aren't complex enough to cause trouble in working memory (e.g., "because of signal failure"); however, such explanations may still be less likely to occur because of the other constraints we described. The argument is instead that these constraints jointly lead reasoners to use inherent explanations more often than they would have otherwise. Also—and this may be obvious—even jointly, these constraints do not guarantee that an inherent explanation will be produced. People are perfectly capable of generating extrinsic explanations, and do generate such explanations.

It is also important to clarify that we use the term "bias" largely in a descriptive sense—as a way of referring to the fact that certain aspects of the process of generating explanations seem to favor one type of information (inherent) over another (extrinsic). It is less clear whether the inherence bias is a bias in the normative sense of the term as well—that is, whether it constitutes a departure from the norms of rational judgment. Insofar as the processes we are outlining here lead people to overlook extrinsic information that would have otherwise been relevant to (some of) their explanations, it may be justified to view the inherence bias as a bias in the normative sense as well. However, assessing the inherence bias as a normative claim is a complex task that is separate from assessing the inherence bias as a descriptive claim about how explanations are generated. Our focus here is on the latter.

The Evidence for an Inherence Bias. We now go on to present, in some detail, the evidence that has accumulated in support of the claim of an inherence bias in explanation (but see the Outstanding Questions as well). To begin with, the hypothesized inherence bias is indirectly supported by the evidence that people overuse traits (an inherent factor) and overlook the situation (an extrinsic factor) when explaining others' behavior (e.g., [42], but see [91]). This correspondence bias [92] exhibits several signatures of the constraints described above.

Not only does it reverse when participants' attention is focused on the situation [44], as previously described, but it also increases when participants are under a cognitive load, which exacerbates the working memory bottleneck (e.g., [93, 94]). Although social psychologists have typically thought of this phenomenon as being unique to social perception (but see [95]), our argument here raises the intriguing possibility that it is simply an instantiation of a broader inherence bias in explanation—a bias that applies equally to why-questions about people and subway trains.

Indeed, not only are inherent explanations common across a range of domains (e.g., [60, 96-102]), but they are even more common when the relevant constraints on attention, memory, etc., are heightened—either chronically or situationally. Three types of evidence support this last claim.

First, research on individual differences suggests that inherent explanations are more commonly endorsed by those who, either by choice or by necessity, tend to take shortcuts in their reasoning. For example, people who score higher on a measure of inherence bias also tend to prefer low-effort reasoning [103], as indicated by (i) lower need for cognition (that is, a dislike of effortful cognitive activity; [104]), (ii) higher need for closure (that is, a preference for simple, unambiguous judgments; [105]), or (iii) lower scores on the Cognitive Reflection Test [106, 107], which measures the tendency to engage in reflective, analytic reasoning (for similar evidence, see [100, 101, 108, 109]). Higher levels of inherence bias are also present in individuals with lower cognitive resources (e.g., fluid intelligence), who may more often need to—rather than just prefer to—take shortcuts while reasoning [101, 103].

A second approach to these issues is developmental. The constraints that operate on the process of generating explanations (e.g., limited attention and working memory capacity) are particularly pronounced early in childhood; the developmental course of the relevant cognitive resources is protracted (e.g., [110-114]).

This suggests that inherent explanations should be more common the earlier in development one looks. This prediction has received support from studies that investigated children's inherent explanations across the range from 4 to 7 years of age ([101, 115], but see [100]), as well as studies that compared children and adults [108].

Finally, the bias toward inherent explanations is particularly prominent when the constraints that affect explanation are experimentally magnified. In one study [60], participants were asked to generate explanations for a range of observations (e.g., why are wallets often made of leather?) in one of two conditions: either under time pressure (which imposes a cognitive load) or not. Inherent properties (e.g., "leather is durable and can withstand the regular bending") were frequent in participants' explanations regardless of load condition, arguably because these properties come to mind easily and are fluent. However, extrinsic information (e.g., "tradition; cow hide was plentiful and easy to process") was significantly less common in the explanations generated under time pressure, exacerbating the inherence bias. An examination of response latencies in the no-load condition provided converging evidence for this point. Inherent information was present in participants' explanations regardless of whether they answered quickly or took their time, whereas extrinsic information became more common with greater latencies. This is as expected if retrieving and using extrinsic information incurs a greater cognitive cost.

In summary, the evidence just reviewed suggests that explanation exhibits an inherence bias and that this bias is due to the cognitive processes involved in constructing an explanation.

Looking to the Future

Our goal in this Opinion article is to highlight the value of a mechanistic, information-processing framework for studying explanation; spelling out the full details of this framework is beyond what we can accomplish here. However, we do recommend an overall approach to future research within this framework.

Specifically, we see the benefits of a systematic analysis of each step in the search for an explanation in terms of its information-processing characteristics and its inputs from other cognitive systems. For instance, what are a person's goals when generating an explanation [116], and how do these goals shape how they search for explanations? The explanations they generate might look quite different if they are, say, trying to teach someone less knowledgeable [117] vs. win an argument [118]. Relatedly, how does the communicative common ground [119] that a person shares with their interlocutor influence their interpretation of a why-question (e.g., the contrast class they infer) and the explanation generated?

It would also be important to know whether explananda differ systematically in the types of contrast classes they bring to mind, even when holding constant the communicative context. Consider, for example, the two why-questions below:

(i) Why did she kill him?

(ii) Why did she kiss him?

We might expect why-questions about immoral acts, such as (i), to be typically interpreted by reference to a contrast class that omits those acts (e.g., "Why did she kill him, as opposed to not killing him?"; [120]). In contrast, morally-neutral explananda, such as (ii), may be compatible with a broader range of contrast classes (e.g., "Why did she kiss him, as opposed to kissing someone else?").

Further, once a contrast class is identified, we might ask how people compare it to the explanandum. Does this comparison involve a process of structural alignment (e.g., [121-123])? If so, how does the output of this alignment process shape the course of the subsequent search for an explanation (e.g., memory retrieval)? With respect to memory, we would want to know, for instance, what accessibility gradients guide retrieval for the purpose of explaining (beyond inherent/extrinsic).

And how much working memory capacity do the explanandum and its contrast class actually take up (on average)? The answer to this question would tell us how much capacity is left over to construct an explanation, and thus how relationally complex a spontaneous explanation can be in the absence of working-memory-saving strategies (e.g., chunking, recoding). Regarding these strategies, which of them do reasoners use when generating explanations, and how does their frequency of use change with development and with the context (e.g., classroom vs. informal)? As these open questions hopefully reveal, there is much to be learned about explanation via a systematic exploration of its interactions with basic cognitive systems.

Concluding Remarks

Explanations are the primary means by which people construct an understanding of the world. Explaining explanation is thus a critical task for cognitive science. Our main point in this Opinion is simple: The ability to generate explanations depends on a range of other cognitive processes whose own operating characteristics will impinge on how and what explanations are generated. Research on explanation should not operate in isolation from the rest of cognitive psychology. Although understanding the general computational characteristics of the system that generates explanations and the virtues that people expect to find in an explanation (which are two major foci of current work on this topic) is undoubtedly valuable, a greater emphasis on understanding the implications of prior work on attention, memory, metacognition, and other such cognitive processes will substantially advance our knowledge of how people explain.

Box 1. The "bounded rationality" perspective on human cognition

Our review of relevant literature is guided by the "bounded rationality" perspective introduced by Herbert Simon in the 1950s (e.g., [124]). This perspective, along with its intellectual successors (e.g., [25, 125]), is one of the pillars of cognitive scientists' current understanding of human reasoning. The key insight of this perspective is as follows: Because most everyday judgments are generated by fallible cognitive processes in a limited amount of time, they are not fully rational or optimal, even for questions that seem simple on the surface (e.g., why the subway is late). By necessity, people take heuristic shortcuts—they have to look for satisfactory answers rather than optimal ones. This is typically not a conscious strategy. People don't deliberately cut corners when reasoning but rather do so unwittingly, overusing accessible information and overlooking other relevant (but less accessible) information (e.g., [22-26]).

Although this general perspective on human reasoning and decision-making has generated an enormous amount of research over the past four decades and is now taken for granted by many cognitive scientists, some of the more detailed accounts inspired by it are not without critics. For example, there is currently no consensus about whether there is a single (type of) cognitive process that supplies heuristic content for our judgments or whether this content can originate from a variety of sources depending on the problem under consideration, as well as the specific circumstances and the reasoner (e.g., [126, 127]). Some researchers have also questioned whether people's reliance on heuristic shortcuts warrants the conclusion that they are less than rational (e.g., [128, 129]). One alternative to this claim is that people make rational use of the limited cognitive resources at their disposal—that is, perhaps people are "resource-rational" (e.g., [129, 130]).

Importantly, however, none of these debates pertains to the fundamental point that the complexity of many everyday cognitive tasks leads resource-limited reasoners to rely on information that is easy to access and manipulate. This point, we believe, is uncontroversial. Thus, our review does not address the debates regarding the specific architecture of heuristic thought or the ways in which it is or is not rational. We focus instead on well-established findings about basic cognitive processes and constraints that may shape the process of finding answers to why-questions by making certain types of information particularly accessible and easy to use.

Box 2. The distinction between inherent and extrinsic properties

Inherent or intrinsic properties concern an entity as it is, in isolation: "A thing has its intrinsic properties in virtue of the way that thing itself, and nothing else, is" ([131], p. 197; see also [132-134]). In contrast, extrinsic properties necessarily involve other entities in some way. Two intuitive tests can help the reader differentiate between the inherent and the extrinsic properties of an entity: A property of an entity is inherent (i) if changing it would change the entity as well or, alternatively, (ii) if it is necessarily present in a perfect duplicate of the entity. For example, a subway train's engine is an inherent property of it because changing the engine would cause a change to the train itself and because the engine would be present in a perfect duplicate of the train. In contrast, the fact that the train is stopped at a signal is an extrinsic fact about it because changing this fact doesn't change the train itself; similarly, duplicating the train would not necessarily preserve its being stopped at a signal. (To clarify, even though extrinsic properties are relational, the inherent/extrinsic distinction is not reducible to a nonrelational/relational distinction. Inherent properties can also be relational: e.g., the fact that a person is left-handed or that an object is symmetrical.)

The metaphysical validity of the distinction between inherent and extrinsic properties is still a matter of some philosophical debate (e.g., is an object's shape an inherent property of it, or is it a function of the space around the object and thus extrinsic? [134]). Nevertheless, this distinction—even if rough around the edges—can still be useful for psychological theorizing. Our cognitive systems represent inherent and extrinsic properties in ways that make the former (inherent) more accessible than the latter (extrinsic) and also easier to manipulate in working memory. In turn, this difference has implications for what is ultimately incorporated into an explanation; we will detail this argument in the second part of the present Opinion.

It is also noteworthy that people have no trouble applying the inherent/extrinsic distinction reliably, even with minimal instruction. In one study [60], after naive participants were supplied with a brief definition of inherence and several practice trials, over 80% of these participants agreed on whether each of a large set of properties was inherent. This level of agreement, especially given the minimal training, speaks to the psychological reality of this distinction.

Glossary

Accessible: this term refers to information that is easily activated from long-term memory

Available: this term refers to information that is stored in long-term memory, regardless of its accessibility

Contrast class: a class of states that are similar to the explanandum but did not occur

Correspondence bias: the tendency to explain behavior (e.g., Kayla is fidgeting) by reference to the corresponding traits (e.g., Kayla is an anxious person) rather than circumstances (e.g., Kayla is about to take an exam)

Explanandum: the observation to be explained (that is, the content of a why-question)

Extrinsic: an extrinsic property of an object pertains to that object's interactions with the world

Fluency: the subjective experience of ease in processing a bit of information

Focusing effects: effects whereby people overweigh information that happens to be salient in the moment and underweigh considerations that aren't evoked by the context

Inherent: an inherent property of an object pertains to how that object—and nothing else—is

Structural alignment: the process of matching structurally similar relations between elements across analogous cases (e.g., just like planets orbit the sun in the solar system, electrons orbit the nucleus of the atom)

Outstanding Questions

How do we reconcile the evidence of bias in explanation with the evidence suggesting that causal reasoning is generally rational or even optimal? Could part of the discrepancy be methodological? In many tests of rational models of causal induction, the number of plausible candidate causes for a given effect is constrained to a “focal set.” The additional complexity involved in open-ended explanatory contexts may force reasoners to take shortcuts.

How does long-term memory retrieval proceed in response to a why-question? Is this process wholly unconstrained, or is it guided by a sense of what might be “relevant” to an explanation? If the latter, what informs a reasoner’s general sense of relevance to an explanation in the early stages of the explanation-generation process, before an explanation is actually generated?

What is the scope of the inherence bias in explanation? One possibility is that this bias applies mostly to why-questions that reasoners would have to engage in effortful processing to answer. This would exclude several types of why-questions from its scope (e.g., ones to which an immediate answer is suggested by domain-specific, “core” causal frameworks).

Does the inherence bias vary in strength across cultures? For example, some cultures seem to emphasize the role of context (an extrinsic factor), which might reduce the attentional focusing effects we discussed and/or make extrinsic facts more accessible in long-term memory. Would these potential differences vary across development, perhaps being present later—but not earlier—in development?


Students’ learning is often filtered through their own intuitive ideas. Current theories of many physical and social phenomena involve extrinsic components (e.g., things fall because of the Earth’s gravity, things burn in the presence of oxygen). If students’ initial, intuitive understanding of these phenomena is inherent, would this make it more difficult for them to learn the correct theories?

Highlights



In the present Opinion, we argue that research on the cognitive inputs to explanation should be much more prominent in the cognitive science of explanation than it currently is. How do the characteristics of attention, memory, metacognition, etc., shape the process by which people generate explanations?



We sketch several lines along which this research could proceed, and then we provide an example of work inspired by this approach. Specifically, we detail recent research suggesting that, due to several information-processing constraints, explanations exhibit a bias toward inherent or intrinsic features.

References

1. Stahl, A. E., & Feigenson, L. (2015). Observing the unexpected enhances infants' learning and exploration. Science, 348(6230), 91-94.
2. Frazier, B. N., Gelman, S. A., & Wellman, H. M. (2009). Preschoolers' search for explanatory information within adult–child conversation. Child Development, 80(6), 1592-1611.
3. Gopnik, A. (1998). Explanation as orgasm. Minds and Machines, 8(1), 101-118.
4. Keil, F. C. (2006). Explanation and understanding. Annual Review of Psychology, 57, 227-254.
5. Kidd, C., & Hayden, B. Y. (2015). The psychology and neuroscience of curiosity. Neuron, 88(3), 449-460.
6. Lombrozo, T. (2012). Explanation and abductive inference. In K. Holyoak & R. Morrison (Eds.), Oxford handbook of thinking and reasoning (pp. 260–276). New York: Oxford University Press.
7. Anderson, C. A., Krull, D. S., & Weiner, B. (1996). Explanations: Processes and consequences. In E. T. Higgins & A. W. Kruglanski (Eds.), Social psychology: Handbook of basic principles (pp. 271-296). New York: Guilford Press.
8. Weiner, B. (1985). An attributional theory of achievement motivation and emotion. Psychological Review, 92(4), 548-573.
9. Griffiths, T. L., & Tenenbaum, J. B. (2009). Theory-based causal induction. Psychological Review, 116(4), 661-716.
10. Griffiths, T. L., Kemp, C., & Tenenbaum, J. B. (2008). Bayesian models of cognition. In R. Sun (Ed.), Cambridge handbook of computational psychology (pp. 59–100). New York, NY: Cambridge University Press.
11. Salmon, W. C. (2006). Four decades of scientific explanation. Pittsburgh, PA: University of Pittsburgh Press.
12. Johnston, A. M., Johnson, S. G., Koven, M. L., & Keil, F. C. (2017). Little Bayesians or little Einsteins? Probability and explanatory virtue in children's inferences. Developmental Science, 20(6), 1-14.
13. Johnston, A. M., Sheskin, M., Johnson, S. G., & Keil, F. C. (2017). Preferences for explanation generality develop early in biology but not physics. Child Development. doi:10.1111/cdev.12804
14. Kitcher, P. (1981). Explanatory unification. Philosophy of Science, 48(4), 507-531.
15. Lipton, P. (2001). Is explanation a guide to inference? A reply to Wesley C. Salmon. In G. Hon & S. S. Rakover (Eds.), Explanation: Theoretical approaches and applications (pp. 93-120). Dordrecht, The Netherlands: Kluwer.
16. Lombrozo, T. (2016). Explanatory preferences shape learning and inference. Trends in Cognitive Sciences, 20(10), 748-759.
17. Pacer, M., & Lombrozo, T. (2017). Ockham's razor cuts to the root: Simplicity in causal explanation. Journal of Experimental Psychology: General, 146(12), 1761-1780.
18. Wilson, R. A., & Keil, F. (1998). The shadows and shallows of explanation. Minds and Machines, 8(1), 137-159.
19. Chi, M. T., De Leeuw, N., Chiu, M. H., & LaVancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18(3), 439-477.

20. Walker, C. M., Lombrozo, T., Legare, C. H., & Gopnik, A. (2014). Explaining prompts children to privilege inductively rich properties. Cognition, 133(2), 343-357.
21. Van Fraassen, B. C. (1980). The scientific image. Oxford, England: Clarendon.
22. Frederick, S. (2002). Automated choice heuristics. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 548–558). New York: Cambridge University Press.
23. Gilbert, D. T. (2002). Inferential correction. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 167–184). New York: Cambridge University Press.
24. Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 49–81). New York: Cambridge University Press.
25. Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
26. Shah, A. K., & Oppenheimer, D. M. (2008). Heuristics made easy: An effort-reduction framework. Psychological Bulletin, 134(2), 207-222.
27. Markman, E. M. (1989). Categorization and naming in children: Problems of induction. Cambridge, MA: MIT Press.
28. Gentner, D., & Markman, A. B. (1997). Structure mapping in analogy and similarity. American Psychologist, 52(1), 45-56.
29. Gelman, S. A. (2003). The essential child: Origins of essentialism in everyday thought. Oxford, UK: Oxford University Press.
30. Horne, Z., & Khemlani, S. (2018). Conceptual constraints on generating explanations. Proceedings of the 40th Annual Cognitive Science Society.
31. Spelke, E. S., & Kinzler, K. D. (2007). Core knowledge. Developmental Science, 10(1), 89-96.
32. Chun, M. M., Golomb, J. D., & Turk-Browne, N. B. (2011). A taxonomy of external and internal attention. Annual Review of Psychology, 62, 73–101.
33. Legrenzi, P., Girotto, V., & Johnson-Laird, P. N. (1993). Focussing in reasoning and decision making. Cognition, 49(1-2), 37-66.
34. Del Missier, F., Ferrante, D., & Costantini, E. (2007). Focusing effects in predecisional information acquisition. Acta Psychologica, 125(2), 155-174.
35. Johnson, E. J., Häubl, G., & Keinan, A. (2007). Aspects of endowment: A query theory of value construction. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(3), 461-474.
36. Love, R. E., & Kessler, C. M. (1995). Focusing in Wason's selection task: Content and instruction effects. Thinking & Reasoning, 1(2), 153-182.
37. Murphy, G. L., Chen, S. Y., & Ross, B. H. (2012). Reasoning with uncertain categories. Thinking & Reasoning, 18, 81-117.
38. Tor, A., & Bazerman, M. H. (2003). Focusing failures in competitive environments: Explaining decision errors in the Monty Hall game, the acquiring a company problem, and multiparty ultimatums. Journal of Behavioral Decision Making, 16(5), 353-374.
39. Spiller, S. A. (2011). Opportunity cost consideration. Journal of Consumer Research, 38(4), 595-610.
40. Frederick, S., Loewenstein, G., & O'Donoghue, T. (2002). Time discounting and time preference: A critical review. Journal of Economic Literature, 40(2), 351-401.

41. Weber, E. U., Johnson, E. J., Milch, K. F., Chang, H., Brodscholl, J. C., & Goldstein, D. G. (2007). Asymmetric discounting in intertemporal choice: A query-theory account. Psychological Science, 18(6), 516-523.
42. Gawronski, B. (2004). Theory-based bias correction in dispositional inference: The fundamental attribution error is dead, long live the correspondence bias. European Review of Social Psychology, 15, 183-217.
43. Gilbert, D. T., & Malone, P. S. (1995). The correspondence bias. Psychological Bulletin, 117(1), 21-38.
44. Trope, Y., & Gaunt, R. (2000). Processing alternative explanations of behavior: Correction or integration? Journal of Personality and Social Psychology, 79(3), 344-354.
45. Lange, N. D., Thomas, R. P., & Davelaar, E. J. (2012). Temporal dynamics of hypothesis generation: The influences of data serial order, data consistency, and elicitation timing. Frontiers in Psychology, 3(215), 1–16.
46. Sprenger, A., & Dougherty, M. R. (2012). Generating and evaluating options for decision making: The impact of sequentially presented evidence. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38(3), 550–575.
47. Lange, N. D., Thomas, R. P., Buttaccio, D. R., Illingworth, D. A., & Davelaar, E. J. (2013). Working memory dynamics bias the generation of beliefs: The influence of data presentation rate on hypothesis generation. Psychonomic Bulletin & Review, 20(1), 171-176.
48. Weisberg, D. S., Keil, F. C., Goodstein, J., Rawson, E., & Gray, J. R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20(3), 470-477.
49. Baker, D. A., Ware, J. M., Schweitzer, N. J., & Risko, E. F. (2017). Making sense of research on the neuroimage bias. Public Understanding of Science, 26(2), 251-258.
50. Harp, S. F., & Mayer, R. E. (1998). How seductive details do their damage: A theory of cognitive interest in science learning. Journal of Educational Psychology, 90(3), 414-434.
51. Cimpian, A., & Keil, F. (2017). Preface for the special issue on The Process of Explanation. Psychonomic Bulletin & Review, 24(5), 1361-1363.
52. Higgins, E. T. (1996). Knowledge activation: Accessibility, applicability, and salience. In E. T. Higgins & A. W. Kruglanski (Eds.), Social psychology: Handbook of basic principles (pp. 133–168). New York: Guilford Press.
53. Gentner, D., Rattermann, M. J., & Forbus, K. D. (1993). The roles of similarity in transfer: Separating retrievability from inferential soundness. Cognitive Psychology, 25(4), 524-575.
54. Novick, L. R. (1988). Analogical transfer, problem similarity, and expertise. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14(3), 510-520.
55. Ross, B. H. (1984). Remindings and their effects in learning a cognitive skill. Cognitive Psychology, 16(3), 371-416.
56. Ross, B. H. (1987). This is like that: The use of earlier problems and the separation of similarity effects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13(4), 629-639.
57. Gomes, C. F., Brainerd, C. J., & Stein, L. M. (2013). Effects of emotional valence and arousal on recollective and nonrecollective recall. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39(3), 663-677.
58. Darley, C. F., & Murdock, B. B. (1971). Effects of prior free recall testing on final recall and recognition. Journal of Experimental Psychology, 91(1), 66-73.

59. Roediger III, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249-255.
60. Hussak, L. J., & Cimpian, A. (2018). Memory accessibility shapes explanation: Testing key claims of the inherence heuristic account. Memory & Cognition, 46(1), 68-88.
61. Hiatt, L. M., Khemlani, S. S., & Trafton, J. G. (2012). An explanatory reasoning framework for embodied agents. Biologically Inspired Cognitive Architectures, 1, 23-31.
62. Thomas, R. P., Dougherty, M. R., Sprenger, A. M., & Harbison, J. (2008). Diagnostic hypothesis generation and human judgment. Psychological Review, 115(1), 155-185.
63. Thomas, R., Dougherty, M. R., & Buttaccio, D. R. (2014). Memory constraints on hypothesis generation and decision making. Current Directions in Psychological Science, 23(4), 264-270.
64. Dougherty, M. R., & Sprenger, A. (2006). The influence of improper sets of information on judgment: How irrelevant information can bias judged probability. Journal of Experimental Psychology: General, 135(2), 262-281.
65. Mehlhorn, K., Taatgen, N. A., Lebiere, C., & Krems, J. F. (2011). Memory activation and the availability of explanations in sequential diagnostic reasoning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(6), 1391-1411.
66. Weber, E. U., Böckenholt, U., Hilton, D. J., & Wallace, B. (1993). Determinants of diagnostic hypothesis generation: Effects of information, base rates, and experience. Journal of Experimental Psychology: Learning, Memory, and Cognition, 19(5), 1151-1164.
67. Cowan, N. (2005). Working memory capacity. Hove, England: Psychology Press.
68. Halford, G. S., Cowan, N., & Andrews, G. (2007). Separating cognitive capacity from knowledge: A new hypothesis. Trends in Cognitive Sciences, 11(6), 236-242.
69. Halford, G. S., Wilson, W. H., & Phillips, S. (1998). Processing capacity defined by relational complexity: Implications for comparative, developmental, and cognitive psychology. Behavioral and Brain Sciences, 21(6), 803-831.
70. Wilhelm, O., Hildebrandt, A. H., & Oberauer, K. (2013). What is working memory capacity, and how can we measure it? Frontiers in Psychology, 4, 433.
71. Halford, G. S., Baker, R., McCredden, J. E., & Bain, J. D. (2005). How many variables can humans process? Psychological Science, 16(1), 70-76.
72. Strevens, M. (2008). Depth: An account of scientific explanation. Cambridge, MA: Harvard University Press.
73. Korman, J., & Khemlani, S. (2018). How people detect incomplete explanations. In C. Kalish, M. Rau, T. Rogers, & J. Zhu (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society.
74. Dougherty, M. R. P., & Hunter, J. E. (2003). Probability judgment and subadditivity: The role of working memory capacity and constraining retrieval. Memory & Cognition, 31, 968-982.
75. Dougherty, M. R. P., & Hunter, J. E. (2003). Hypothesis generation, probability judgment, and individual differences in working memory capacity. Acta Psychologica, 31, 263-282.
76. Sprenger, A., & Dougherty, M. R. (2006). Differences between probability and frequency judgments: The role of individual differences in working memory capacity. Organizational Behavior and Human Decision Processes, 99(2), 202-211.
77. Sprenger, A., Dougherty, M., Atkins, S., Franco-Watkins, A., Thomas, R., Lange, N., & Abbs, B. (2011). Implications of cognitive load for hypothesis generation and probability judgment. Frontiers in Psychology, 2, 129.

78. Alter, A. L., & Oppenheimer, D. M. (2009). Uniting the tribes of fluency to form a metacognitive nation. Personality and Social Psychology Review, 13(3), 219-235.
79. Koriat, A. (2007). Metacognition and consciousness. In P. D. Zelazo, M. Moscovitch, & E. Thompson (Eds.), The Cambridge handbook of consciousness (pp. 289-326). New York: Cambridge University Press.
80. Begg, I., Duft, S., Lalonde, P., Melnick, R., & Sanvito, J. (1989). Memory predictions are based on ease of processing. Journal of Memory and Language, 28(5), 610-632.
81. Koriat, A. (1993). How do we know that we know? The accessibility model of the feeling of knowing. Psychological Review, 100(4), 609-639.
82. Morewedge, C. K., Giblin, C. E., & Norton, M. I. (2014). The (perceived) meaning of spontaneous thoughts. Journal of Experimental Psychology: General, 143(4), 1742-1754.
83. Thompson, V. A., Turner, J. A. P., & Pennycook, G. (2011). Intuition, reason, and metacognition. Cognitive Psychology, 63(3), 107-140.
84. Thompson, V. A., Turner, J. A. P., Pennycook, G., Ball, L. J., Brack, H., Ophir, Y., & Ackerman, R. (2013). The role of answer fluency and perceptual fluency as metacognitive cues for initiating analytic thinking. Cognition, 128(2), 237-251.
85. Rozenblit, L., & Keil, F. (2002). The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science, 26(5), 521-562.
86. Alter, A. L., Oppenheimer, D. M., & Zemla, J. C. (2010). Missing the trees for the forest: A construal level account of the illusion of explanatory depth. Journal of Personality and Social Psychology, 99(3), 436-451.
87. Sloman, S., & Fernbach, P. (2018). The knowledge illusion: Why we never think alone. New York: Penguin.
88. Cimpian, A., & Salomon, E. (2014). The inherence heuristic: An intuitive means of making sense of the world, and a potential precursor to psychological essentialism. Behavioral and Brain Sciences, 37(5), 461-480.
89. Cimpian, A., & Salomon, E. (2014). Refining and expanding the proposal of an inherence heuristic in human understanding. Behavioral and Brain Sciences, 37(5), 506-527.
90. Cimpian, A. (2015). The inherence heuristic: Generating everyday explanations. In R. Scott & S. Kosslyn (Eds.), Emerging Trends in the Social and Behavioral Sciences (pp. 1-15). Hoboken, NJ: John Wiley and Sons.
91. Malle, B. F. (2006). The actor-observer asymmetry in attribution: A (surprising) meta-analysis. Psychological Bulletin, 132(6), 895-919.
92. Jones, E. E. (1979). The rocky road from acts to dispositions. American Psychologist, 34(2), 107-117.
93. D'Agostino, P. R., & Fincher-Kiefer, R. (1992). Need for cognition and the correspondence bias. Social Cognition, 10(2), 151-163.
94. Gilbert, D. T., Pelham, B. W., & Krull, D. S. (1988). On cognitive busyness: When person perceivers meet persons perceived. Journal of Personality and Social Psychology, 54(5), 733-740.
95. Nisbett, R. E., & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice-Hall.
96. Cimpian, A., & Cadena, C. (2010). Why are dunkels sticky? Preschoolers infer functionality and intentional creation for artifact properties learned from generic language. Cognition, 117(1), 62-68.

97. Cimpian, A., & Markman, E. M. (2009). Information learned from generic language becomes central to children's biological concepts: Evidence from their open-ended explanations. Cognition, 113(1), 14-25.
98. Cimpian, A., & Markman, E. M. (2011). The generic/nongeneric distinction influences how children interpret new information about social others. Child Development, 82(2), 471-492.
99. Horne, Z., & Khemlani, S. (2018). Conceptual constraints on generating explanations. In C. Kalish, M. Rau, T. Rogers, & J. Zhu (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society.
100. Hussak, L. J., & Cimpian, A. (2015). An early-emerging explanatory heuristic promotes support for the status quo. Journal of Personality and Social Psychology, 109(5), 739-752.
101. Sutherland, S. L., & Cimpian, A. (2015). An explanatory heuristic gives rise to the belief that words are well suited for their referents. Cognition, 143, 228-240.
102. Sutherland, S. L., & Cimpian, A. (2019). Developmental evidence for a link between the inherence bias in explanation and psychological essentialism. Journal of Experimental Child Psychology, 177, 265-281.
103. Salomon, E., & Cimpian, A. (2014). The inherence heuristic as a source of essentialist thought. Personality and Social Psychology Bulletin, 40(10), 1297-1315.
104. Cacioppo, J. T., Petty, R. E., & Kao, C. F. (1984). The efficient assessment of need for cognition. Journal of Personality Assessment, 48, 306-307.
105. Kruglanski, A. W., Webster, D. M., & Klem, A. (1993). Motivated resistance and openness to persuasion in the presence or absence of prior information. Journal of Personality and Social Psychology, 65, 861-876.
106. Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19, 25-42.
107. Pennycook, G., Cheyne, J. A., Koehler, D. J., & Fugelsang, J. A. (2016). Is the cognitive reflection test a measure of both reflection and intuition? Behavior Research Methods, 48(1), 341-348.
108. Cimpian, A., & Steinberg, O. D. (2014). The inherence heuristic across development: Systematic differences between children's and adults' explanations for everyday facts. Cognitive Psychology, 75, 130-154.
109. Hussak, L. J., & Cimpian, A. (2018). Investigating the origins of political views: Biases in explanation predict conservative attitudes in children and adults. Developmental Science, 21, e12567.
110. Akshoomoff, N., Beaumont, J. L., Bauer, P. J., Dikmen, S. S., Gershon, R. C., Mungas, D., et al. (2013). VIII. NIH Toolbox Cognition Battery (CB): Composite scores of crystallized, fluid, and overall cognition. Monographs of the Society for Research in Child Development, 78, 119-132.
111. Carlozzi, N. E., Tulsky, D. S., Kail, R. V., & Beaumont, J. L. (2013). VI. NIH Toolbox Cognition Battery (CB): Measuring processing speed. Monographs of the Society for Research in Child Development, 78, 88-102.
112. Kail, R. (1991). Developmental change in speed of processing during childhood and adolescence. Psychological Bulletin, 109(3), 490-501.
113. Williams, B. R., Ponesse, J. S., Schachar, R. J., Logan, G. D., & Tannock, R. (1999). Development of inhibitory control across the life span. Developmental Psychology, 35(1), 205-213.

114. Zelazo, P. D., Anderson, J. E., Richler, J., Wallner-Allen, K., Beaumont, J. L., & Weintraub, S. (2013). II. NIH Toolbox Cognition Battery (CB): Measuring executive function and attention. Monographs of the Society for Research in Child Development, 78, 16-33.
115. Tworek, C. M., & Cimpian, A. (2016). Why do people tend to infer "ought" from "is"? The role of biases in explanation. Psychological Science, 27(8), 1109-1122.
116. Vasilyeva, N., Wilkenfeld, D., & Lombrozo, T. (2017). Contextual utility affects the perceived quality of explanations. Psychonomic Bulletin & Review, 24(5), 1436-1450.
117. Gelman, S. A., Ware, E. A., Manczak, E. M., & Graham, S. A. (2013). Children's sensitivity to the knowledge expressed in pedagogical and nonpedagogical contexts. Developmental Psychology, 49(3), 491-504.
118. Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 57-74.
119. Clark, H. H. (1996). Using language. New York: Cambridge University Press.
120. Phillips, J., Luguri, J. B., & Knobe, J. (2015). Unifying morality's influence on non-moral judgments: The relevance of alternative possibilities. Cognition, 145, 30-42.
121. Hoyos, C., & Gentner, D. (2017). Generating explanations via analogical comparison. Psychonomic Bulletin & Review, 24(5), 1364-1374.
122. Markman, A. B., & Gentner, D. (1993). Structural alignment during similarity comparisons. Cognitive Psychology, 25(4), 431-467.
123. Thibodeau, P. H., Crow, L., & Flusberg, S. J. (2017). The metaphor police: A case study of the role of metaphor in explanation. Psychonomic Bulletin & Review, 24(5), 1375-1386.
124. Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99-118.
125. Gilovich, T., Griffin, D., & Kahneman, D. (Eds.). (2002). Heuristics and biases: The psychology of intuitive judgment. New York: Cambridge University Press.
126. Evans, J. S. B., & Stanovich, K. E. (2013). Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8(3), 223-241.
127. Melnikoff, D. E., & Bargh, J. A. (2018). The mythical number two. Trends in Cognitive Sciences, 22(4), 280-293.
128. Gigerenzer, G., Todd, P. M., & the ABC Research Group. (1999). Simple heuristics that make us smart. New York: Oxford University Press.
129. Griffiths, T. L., Lieder, F., & Goodman, N. D. (2015). Rational use of cognitive resources: Levels of analysis between the computational and the algorithmic. Topics in Cognitive Science, 7(2), 217-229.
130. Lieder, F., Griffiths, T. L., Huys, Q. J., & Goodman, N. D. (2018). The anchoring bias reflects rational use of cognitive resources. Psychonomic Bulletin & Review, 25(1), 322-349.
131. Lewis, D. (1983). Extrinsic properties. Philosophical Studies, 44, 197-200.
132. Barr, R. A., & Caplan, L. J. (1987). Category representations and their implications for category structure. Memory & Cognition, 15, 397-418.
133. Caplan, L. J., & Barr, R. A. (1991). The effects of feature necessity and extrinsicity on gradedness of category membership and class inclusion relations. British Journal of Psychology, 82, 427-440.

134. Weatherson, B., & Marshall, D. (2018). Intrinsic vs. extrinsic properties. In E. Zalta (Ed.), The Stanford encyclopedia of philosophy. Retrieved from http://plato.stanford.edu/entries/intrinsic-extrinsic/