This article was downloaded by: [University of California, Los Angeles (UCLA)] On: 10 June 2014, At: 11:46 Publisher: Routledge Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK
Educational Psychologist Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/hedp20
Developing Children's Early Competencies to Engage With Science

William A. Sandoval (Graduate School of Education & Information Studies, University of California, Los Angeles), Beate Sodian (Department of Psychology, Ludwig-Maximilians-University, Munich, Germany), Susanne Koerber (Department of Psychology, Freiburg University of Education, Freiburg, Germany), and Jacqueline Wong (Graduate School of Education & Information Studies, University of California, Los Angeles)

Published online: 19 May 2014.
To cite this article: William A. Sandoval, Beate Sodian, Susanne Koerber & Jacqueline Wong (2014) Developing Children's Early Competencies to Engage With Science, Educational Psychologist, 49:2, 139-152, DOI: 10.1080/00461520.2014.917589
To link to this article: http://dx.doi.org/10.1080/00461520.2014.917589
EDUCATIONAL PSYCHOLOGIST, 49(2), 139–152, 2014 Copyright © Division 15, American Psychological Association ISSN: 0046-1520 print / 1532-6985 online DOI: 10.1080/00461520.2014.917589
Developing Children’s Early Competencies to Engage With Science
William A. Sandoval, Graduate School of Education & Information Studies, University of California, Los Angeles
Beate Sodian, Department of Psychology, Ludwig-Maximilians-University, Munich, Germany
Susanne Koerber, Department of Psychology, Freiburg University of Education, Freiburg, Germany
Jacqueline Wong, Graduate School of Education & Information Studies, University of California, Los Angeles
Science educators have long been concerned with how formal schooling contributes to learners' capacities to engage with science after school. This article frames productive engagement as fundamentally about the coordination of claims with evidence. Such coordination requires a number of reasoning capabilities: evaluating the strength of evidence, critiquing methods and other factors upon which evidence evaluation rests, evaluating sources and potential biases, and so on. Although the general discourse on education commonly suggests students are bad at such things, we review cognitive development research demonstrating that children display a variety of capabilities, even at early ages, that formal science instruction can productively build upon. We use this research to suggest some possibilities for formal schooling to develop children's capacities for evaluating claims within the pursuit of personally meaningful goals. We conclude with observations about useful directions for research that our analysis opens.
Should you vaccinate your child? How do you weigh the benefits and risks of vaccination? Do vaccines really cause autism or other disorders? Many parts of the world have laws that require vaccination prior to the start of public schooling, but many also permit parents to avoid this requirement. Is this good public policy? What risks do unvaccinated children pose to public health? What about the resurrection of extinct species, like the mammoth or carrier pigeon; is that a good idea? What should be the limits, if any, on genetic testing?

Correspondence should be addressed to William A. Sandoval, Graduate School of Education & Information Studies, University of California, Los Angeles, 2339 Moore Hall, Box 951521, Los Angeles, CA 90095-1521. E-mail: [email protected]

Science educators have long been concerned with how formal schooling contributes to students' capacities to engage with such science-related questions after they leave school. The history of this concern can be traced in the history of conceptualizations of scientific literacy and the means to develop it in learners (Laugksch, 2000). Scientific literacy has always been a complex, even vague, idea. Yet there is a general shift in science education away from a view of scientific understanding as composed of bits of knowledge and toward being able to act in ways seen as scientific. This shift is manifested in the United States through
a focus on scientific and engineering practices as key outcomes in new science education standards (National Research Council [NRC], 2012), and in European calls for a science education that enables participation in society (Bos et al., 2003). What we mean by scientific literacy now focuses more on what people should be able to do and less on what they know.

For a parent, vaccination is not a scientific issue, or even a medical one. It is a highly personal issue intimately grounded in your ideas of how best to care for your children. Coming to a reasoned decision about vaccination comes down to evaluating competing claims and their evidence. We view the overarching practice of basic importance to this issue and other examples of "citizen thinking" (Jenkins, 1999) to be evaluating claims, or the coordination of claims with evidence (Kuhn, 1993).

As citizens, we are confronted all the time with claims about the world. It is a given that many of these claims, such as about the causes, magnitude, and possible effects of climate change, are not directly testable by most of us. We lack the background conceptual knowledge and the means to generate relevant data. Instead, we have to figure out how to evaluate other people's arguments in order to come to our own conclusions. We encounter a wide range of science-related circumstances in our everyday lives, from the seemingly personally trivial (Is it safe to get a Brazilian blowout?) to the globally consequential (What should we do about climate change?). In such situations, most of us are not trying to do science in the sense that scientists do; we are not trying to advance scientific knowledge. Instead, we are trying to use such knowledge to figure out what to do about something. Consequently, assertions that children are little scientists or that science is the refinement of everyday thinking notwithstanding, thinking as a scientist and thinking scientifically are not the same thing.
The metaphor of child as scientist has been used to characterize conceptual change in childhood by drawing an analogy between theory change in the history of science and the child's developing understanding of foundational domains like biology (Carey, 1985). Whether the metaphor applies to practices of evidence evaluation has been disputed (Kuhn, 1989). The metaphor of the child as scientist acknowledges that the seeds of scientific reasoning lie in what appear to be fundamental human capacities for cognition, but it obscures two very different aspects of the social nature of scientific thinking. First, scientists do their thinking within highly developed communities designed for the purpose, including well-developed social infrastructures, technological machinery, and expertise. To call this sociotechnical infrastructure a refinement of everyday thinking grossly underestimates its role and value in the production of scientific knowledge and could hardly explain the success of science as a cultural institution. It also underestimates the ways in which individuals become socialized into particular scientific communities and their reasoning practices (Knorr-Cetina, 1999).
Second, to think "scientifically" in everyday settings is to employ certain reasoning practices, such as claim-evidence evaluation, in ways that scientists may employ, but typically for different goals and with access to a very different configuration of social and material resources. These shifts in goals and resources change the form of "scientific" reasoning laypeople engage in everyday settings. We are interested here in this everyday approximation of "scientific thinking" with respect to evaluating science claims within the social situations in which they might arise, and how formal science education might make such thinking more productive.

A claim can be an assertion about a state of the world (the earth is round) or about a causal relation between variables (smoking causes cancer). In science, claims are typically derived from and embedded in theories. Theories are coherent sets of interrelated concepts and propositions that provide an explanatory framework for a domain of phenomena. Typically, alternative theories differ not just in the causal mechanism they propose but also in the way they conceptualize a domain. Historically, key concepts of an old theory often cannot be expressed in the conceptual framework of the contemporary theory.

Evidence is a tricky term to define. In psychological and science education research, as we discuss next, evidence is used typically to mean data. We see data as an important kind of evidence, especially for science, but not the only kind of evidence people, including scientists, consider when they evaluate claims. Indeed, in everyday life we are all but forced to evaluate claims without access to data, and thus rely on ways to evaluate the testimony adduced to argue for or against particular claims (cf. Bromme, Kienhues, & Porsch, 2010; Chinn, Buckland, & Samarapungavan, 2011).
Coordinating claims and evidence requires a number of reasoning practices to evaluate the strength of evidence, critique methods and other factors upon which evidence evaluation rests, evaluate sources and potential biases, and so on. Although the general discourse on education commonly suggests students are bad at such things, or at least fail to learn them in school, developmental research on cognition suggests children display a variety of capabilities, even at early ages, that can productively link to these scientific literacy goals.

The practice-based view we present here reflects historical shifts in scholarly views of science, and of cognition. Science studies over the last several decades demonstrate the profoundly social character of scientific practice and scientific knowledge. In psychology, emergent Vygotskian perspectives highlight the socially situated character of thinking and learning. The convergence of these perspectives and their implications for science education are discussed by Duschl (2008). We acknowledge the influence of these perspectives here because in what follows we focus deliberately on the individual thinker, the individual citizen who must evaluate claims for him- or herself. Clearly,
individuals harness a variety of material and social resources (books, websites, other people, etc.) to evaluate the science claims they encounter in everyday activity. People learn cognitive and material ways and means for evaluating claims through their participation in cultural activities and their appropriation of culturally meaningful practices (Gutierrez & Rogoff, 2003; John-Steiner & Mahn, 1996). Much of the developmental research we present next does not come from this explicitly situated perspective, but it does make clear that children acquire a range of capabilities for evaluating claims and evidence, even at an early age.
PRACTICES OF CLAIM EVALUATION

What is involved in evaluating claims? What does it mean to coordinate claims with evidence? Two kinds of strategies can be distinguished. One is to provide a plausible mechanism (smoking causes cancer through the ingestion of tar full of chemical carcinogens). Plausibility judgments critically depend upon a person's knowledge of the phenomena in question. Another strategy is to justify a claim by providing empirical evidence relevant to its evaluation. The kind of empirical evidence that can be provided to evaluate a claim depends on the type of claim. Propositions about a determinate state of affairs can be evaluated by inspecting the relevant state(s) of the world (although this can be extremely difficult to do). Causal claims cannot be proven right or wrong by appealing to single states of affairs (data points). Rather, they are rendered more or less plausible, typically by inspecting patterns of covariation between presumed cause and effect (the rise in average global temperature corresponds to a rise in atmospheric CO2). Research on scientific reasoning in children and lay adults has focused predominantly on laypersons' understanding of the relation of causation and covariation (Koslowski, 1996; Zimmerman, 2000).

The following is a set of reasoning practices that we argue compose the evaluation of claims. To reiterate, we are focusing on the effort of individuals to make sense of scientific claims in their everyday activity and make no claims to represent the social and institutional practices of science. We orient our discussion of these practices from the perspective of one having to evaluate claims made by another, what might be called the "critic" role rather than the "constructor" role (Ford & Forman, 2006). We consider that both the claim itself must be evaluated, and evidence is evaluated in relation to the claim. That is, the coordination of claims and evidence is a practice of justifying our belief in particular claims.
What we mean by evidence includes data, per se, but also includes evaluations of methods to produce data, the sources of claims and evidence, and how evaluations of these several aspects must be coordinated. We argue that coordination of claims and evidence requires their separation: We must be able to distinguish the claim from the evidence for or against it.
Our view of the practice of claim evaluation skews toward the methodological and epistemic. Views of science now accept that concepts, evidence, and methods are tightly interrelated (cf. Duschl, 2008). Yet the layperson, almost by definition, is routinely going to be short on relevant conceptual knowledge, and must make up this gap with useful understanding of the epistemic criteria to which methods and the claims they produce are held. This implies that school must provide people with opportunities to interrogate claims about the world in relation to the evidence for those claims and the methods by which such evidence has been produced. Such opportunities clearly cannot be divorced from what science educators typically call "content"—the facts, concepts, and theories of science. Our views of claim evaluation are thus aligned with new reform efforts in this way, but we think it likely that the understanding of scientific practices that may result from these reforms has most leverage for those situations where people encounter science "content" they have not learned.

Judging Plausibility of Mechanism

A key aspect of judging the credibility of a claim is making a judgment of the plausibility of potential causal mechanisms that might explain the claim. That is, when we come across a claim that "A causes B," we tend to wonder how, and we find such claims more believable when we can come up with a plausible mechanism to explain the proposed relation. Developmental research we summarize next shows this to be the case from early childhood. Indeed, the apparent implausibility of many scientific claims is partially what makes them so hard for children to learn. Effective plausibility judgments require some relevant conceptual knowledge. How much conceptual knowledge might be required to enable a justified inference of plausibility is clearly quite specific to the situation one might be trying to reason about.
It is possible to suspect the claim that vaccines cause autism simply because evidence for such a link is weak, without understanding how vaccines work or how mercury might affect brain chemistry and development. At the same time, it is obvious that such conceptual understanding enables more accurate judgments of plausibility, and can even point toward the limits of current scientific understanding (e.g., of the causes of autism). The layperson, however, must often assume her or his conceptual knowledge is insufficient to enable an independent judgment of plausibility. Therefore we seek other sources of justification for plausibility judgments, including evaluations of evidence, assessments of methods used to produce evidence, and considerations of the sources of claims.

Evaluating Evidence

There are two broad aspects involved in the evaluation of evidence as it may pertain to a particular claim. First we might wonder if we have the right evidence to evaluate a
claim, a judgment of relevance. Second, we might ask if we have enough evidence to evaluate the claim, a judgment of sufficiency. Relevance and sufficiency both require conceptual knowledge to judge effectively, but in the absence of such knowledge we employ other resources to evaluate the relevance and sufficiency of evidence.

One resource is the extent to which an argument for a claim provides persuasive justifications to link evidence to claims, including discounting alternatives. Part of what makes evidence compelling for or against a given claim is the extent to which the purported links between claim and evidence can be explained. That is, explicit justifications linking a claim to evidence provide resources for making the claim more plausible. This partially explains how frauds like the vaccine–autism link get published in the first place: They provide a compelling story linking observed data to plausible causal explanations of that data. In the sciences, the guards against such frauds are social and institutionalized. An important aspect of developmental research to consider, therefore, is how children make sense of justification, and how science instruction can help children understand scientific means of justification and evidence evaluation.

Another resource for evaluating evidence is an aspect we highlight separately: the assessment of the methods used to produce evidence. We consider methods separately because of their importance in judging evidence as being "scientific" or not.

Assessing Methods

Judgments of the quality of evidence include evaluation of the methods used to produce it. Issues of experimental control, sample size and selection, and so on, are used to evaluate the quality of evidence. There is also the issue of whether the methods used to produce evidence are appropriate to the claim being evaluated.
A judgment of appropriateness of method may require a considerable amount of specific, topical knowledge (e.g., about climate change), although it may be that such judgments can be made reasonably well with what we might call an epistemic understanding of methods somewhat independent of particular topical domains. For instance, Cook and Sinha (2006) distinguished between experimental methods that tend to produce causal descriptions (that X causes Y), rather than causal explanations (how X causes Y). Such knowledge of the strengths and weaknesses of different methods can be brought to bear even when one’s understanding of the topic at hand is limited. That said, it seems likely this sort of understanding of methods develops only through a good deal of experience with using or evaluating empirical methods to make and evaluate specific claims. One issue that may make the assessment of methods particularly difficult for the layperson is the sheer diversity of methods in the natural sciences. Indeed, methodological
debates in education research highlight that views of science and scientific method are easily oversimplified (cf. Rudolph, 2014). The research we review next suggests that interrogating the relations between methods and the evidence they produce is an important area of need in science instruction.

Considering Sources of Claims

Independent of the evidence for or against a claim, the source of any given claim is an important attribute to be evaluated. Source evaluation includes efforts to detect biases that may influence what kind of evidence is adduced in support of a claim, and what evidence is ignored, discounted, or distorted. Another consideration is the degree of expertise attributed to a source, in relation to the problem at hand. Distinct from considerations of both expertise and bias is a notion of trust one might attach to a source, either in general or in a specific situation. Adults appear to be strongly influenced by a variety of linguistic markers in texts when evaluating sources of information and evidence, as discussed by Britt, Richter, and Rouet (this issue).

These four aspects clearly intertwine in the evaluation of any particular claim. There may, as well, be other aspects one might consider important that we have not mentioned. Our intent here is not to lay out an epistemologically complete description of the processes of claim evaluation. We simply assert these aspects are central to the practice of evaluating claims and suggest important aspects of cognition to study and to take advantage of in science instruction. In the next section we selectively summarize developmental research on children's cognition with respect to these aspects of claim evaluation.
We then connect that summary to relevant work in science education to make our own claims about how science instruction through secondary school might be organized to promote people’s capacities to evaluate scientific claims when they encounter them in adulthood, regardless of what further science education they may pursue. We focus on the lay adult, although there are no doubt interesting questions one could ask about how trained scientists evaluate claims outside their own fields of expertise.
THE DEVELOPMENT OF CLAIM EVALUATION PRACTICES

Developmental psychologists have framed practices of evaluating claims as at the heart of scientific thinking, and research has examined both how children evaluate their own claims and those of others. A good deal of this research suggests children develop crucial underpinnings to mature scientific reasoning by the time they enter school. Some of this research is reviewed in the NRC's (2007) report on elementary science education. People make causal inferences
from infancy and employ a range of deductive and inductive reasoning strategies (Gopnik & Wellman, 2012). Thus, from very early in childhood people are engaged in justifying claims, to themselves or others, and in some fashion coordinating claims and evidence for or against them. In this section, we review recent research that highlights children's emergent capabilities. We use the terms claim, hypothesis, and theory interchangeably here, because in our view what developmental psychologists call hypotheses and theories are usually at the grain size of what we call a claim: a single causal proposition.

Despite these early competencies in causal reasoning, in the developmental literature children's scientific reasoning skills, defined as their abilities for intentional knowledge seeking (Kuhn & Franklin, 2006) as well as their understanding of the nature of scientific knowledge, have been described as severely deficient (Dunbar & Klahr, 2012; Kuhn, 2010; Kuhn & Franklin, 2006; Zimmerman, 2007). This view is based on a large body of research on preadolescent children's ability to coordinate causal claims with covariation data, indicating that children typically do not test hypotheses in a systematic way, trying to produce an effect rather than aiming at understanding its causes. Children often fail to control for confounding variables, and they distort or ignore empirical evidence if the data do not fit their prior beliefs (Kuhn, Amsel, & O'Laughlin, 1988). Based on such findings, it has been argued that children and even some lay adults may lack a metaconceptual understanding of "theories," "hypotheses," and "claims," on one hand, and "data" and "empirical evidence," on the other (Kuhn, 1989). They may well be able to use empirical observations (e.g., covariation of events) to form causal beliefs, but they may be unable to reflect on the relation of their belief and the evidence on which it is based.
Rather, the only claim evaluation practice that may be accessible to children and some adults is plausibility judgment.

Plausibility Judgment

Deanna Kuhn (2001) has noted that plausible explanation is the "clear victor" (p. 1) over empirical evidence in how people tend to justify claims. People throughout their life span seek mechanistic explanations to justify causal attributions, to the extent that covariation evidence matching a causal claim is severely discounted if plausible mechanisms to explain that covariation cannot be generated (Brem & Rips, 2000; Cheng & Novick, 1992; Koslowski, 1996). Kuhn (2001) reviewed research suggesting a broad developmental trend from a reliance on plausibility judgments toward an evaluation of evidence (i.e., drawing inferences from data). How people assess plausibility has been neglected as a factor in most research on children's and adults' scientific reasoning. Notably, Koslowski's (1996) work focused on the effects of plausibility on children's and adults'
reasoning with data. Of importance, both children and adults tended to neglect or discount patterns of evidence when they could not generate a plausible causal explanation for a pattern of data, or when given explanations were not seen as plausible. Koslowski (2012) argued that what might seem to indicate an inability to distinguish theory from evidence is that both children and adults offer explanations to justify an opinion, especially when evidence is unavailable, incomplete, or inconsistent. Information is often not recognized as evidentially relevant unless it can be incorporated into a broader causal framework that, in turn, affects the information people will gather. Furthermore, Koslowski (2012) pointed out that theories, including alternative theories, affect the way in which anomalous data are detected and evaluated. In children, prior knowledge or theory may lead to a failure to detect anomalous data at all (Chinn & Malhotra, 2002). Thus, theory-based (plausibility) reasoning and data evaluation are intricately related, and should be studied in conjunction.

Recent developmental research has started to do so. When children are encouraged, for instance, to explain anomalous data (inconsistent evidence), they tend to exhibit exploratory, hypothesis-testing behavior. In a within-subject design, Legare (2012) encouraged 2- to 6-year-old children to explain consistent and inconsistent outcomes after having been familiarized with the effects of different objects on a novel apparatus (light boxes). Independent of age, children tended to prefer causal explanations in the inconsistent over the consistent condition, which in turn was related to an increased amount of exploratory behavior. Thus, it appears that the study of theory-based reasoning in children may be equally important to the field of scientific reasoning as the work on their inferences from data.
Recent work on causal learning and the theory view of cognitive development has made great progress in identifying the mechanisms of theory formation in young children (Gopnik & Wellman, 2012). Even infants use statistical regularities to make inferences about causal structure. Preschoolers make valid causal inferences from the outcomes of interventions, and they integrate their prior knowledge with new evidence in forming causal theories. A recent study even found advantages of preschoolers' learning over adults' learning of causal (conjunctive) relations, suggesting that children are less biased by prior assumptions. In contrast to adults, they tended to more readily consider unlikely possibilities and to pay attention to current evidence (Lucas, Bridgers, Griffiths, & Gopnik, 2014). This Bayesian inference, however, appears to be implicit and unconscious. It is unclear when and how children begin to engage in explicit theory-based reasoning in the context of claim evaluation.

Plausibility judgments are based on domain-specific knowledge. What appears to be a plausible physical mechanism will not be a plausible one in psychological or biological reasoning. Thus, the development of claim evaluation
skills is deeply intertwined with the development of domain-specific conceptual knowledge. Although the research on theory formation and plausibility judgments has generally used familiar domains to study scientific reasoning independent of conceptual understanding, the interaction between children’s conceptual development and their ability to evaluate theory-laden claims has rarely been addressed in developmental work (e.g., Lazonder, Hagemans, & De Jong, 2010; Schauble, Glaser, Raghavan, & Reiner, 1991).
Evidence Evaluation

It has been argued that children fail to distinguish claims from evidence because they lack a metaconceptual distinction between them. This argument has been critically evaluated in recent developmental research. One reason to doubt that children lack such a fundamental metaconceptual differentiation is that research on young children's developing understanding of the mind clearly indicates that by the age of 4 or 5 years, children differentiate beliefs from reality and understand how false beliefs originate from false or deficient information (see Wellman, 2010, for a review). Thus, in their preschool years, children acquire an understanding of their own and other people's mental representations, and they differentiate mental representations from states of reality. But does the child who distinguishes a story character's false belief about the location of a piece of chocolate ("the chocolate is in the red box") from reality ("the chocolate is in the green box") understand "the chocolate is in the red box" as a hypothesis that is falsified by the evidence that the chocolate is in the green box? Sodian, Zaitchik, and Carey (1991) argued that false belief understanding requires a differentiation of a mental representation from reality, whereas hypothesis–evidence differentiation requires an understanding of the inferential relation between the two:

To frame a belief as a hypothesis requires imagining alternative states of the world and judging whether each is consistent with the hypothesis or not. Conclusive evidence allows one to distinguish between alternative hypotheses. The hypothesis tester has to specify how alternative states of the world would bear on the truth or falsity of the hypothesis in question, that is which inferences would be warranted on the basis of a specific piece of evidence. (p. 754)
Sodian et al. (1991) presented first and second graders with a task that tested their understanding of the inferential relation between a piece of evidence and a hypothesis. Children were told a story about two brothers who had a mouse in their house that they had not been able to see. They held contradictory hypotheses about the size of the mouse: One thought it was a big mouse, and the other one thought it was a small one. They came up with the idea to leave a piece of cheese in a box at night to see whether it was gone
the next morning. The box could have either a large opening, such that both a big and a little mouse could fit in, or a small opening, such that only a small mouse could fit in. Children were presented with two tasks, an effect production task (to feed the mouse) and a hypothesis-testing task (to test the hypotheses about its size). Fifty-five percent of the first graders and 86% of the second graders passed both tasks; that is, they distinguished between effect production and hypothesis testing, they chose a conclusive test over an inconclusive one, and they reasoned correctly about the inferences that could be drawn from the outcome of the conclusive test. Thus, in early elementary school, children can specify how alternative states of the world (cheese gone, cheese still there) bear on the truth or falsity of a hypothesis (mouse is small vs. mouse is big). These findings indicate that by elementary school age, children begin to engage in practices of claim evaluation, at least when the claim is about a single determinate state of affairs and the evidence is conclusively sufficient. Such deterministic reasoning is less typical of scientific reasoning than probabilistic reasoning, as is required in the evaluation of claims about causal relations between variables. Ruffman, Perner, Olson, and Doherty (1993) showed that preschoolers around the age of 5 years were able to separate a causal claim (e.g., chewing gum is bad for your teeth) from patterns of covariation data relevant to evaluating the claim. Children were able to correctly infer a story figure's causal belief (e.g., chewing gum is good for your teeth) from the evidence available to the story figure (instances of healthy and unhealthy teeth in persons who had or had not chewed gum), even when they themselves knew the evidence was "fake," that is, when the evidence available to the story figure had been manipulated to lead her to a false belief about a causal relationship.
In a similar study, Koerber, Sodian, Thoermer, and Nett (2005) found that even most 4-year-olds could infer a story figure's belief from perfect and imperfect patterns of covariation data, independently of their own belief. Preschoolers were able to interpret patterns of noncovariation evidence when they were prompted to entertain a noncausal belief ("Peter thinks it does not matter for the health of the teeth whether or not one chews gum"). Thus, it appears that even preschoolers possess a basic understanding of causal claim evaluation. It remains unclear how deep their understanding of covariation evidence is at this early age, as the findings on the interpretation of imperfect covariation are contradictory. It is possible that young children represent perfect patterns of covariation as conclusive evidence for a claim in a deterministic manner (Piekny & Mähler, 2013). Do young children understand what it means to test a causal belief? Recent research on causal learning indicates that by the age of 4 years children distinguish between confounded and unconfounded interventions and recognize implicitly, in their exploratory behaviors, that confounded ones may not be causally informative (Kushnir & Gopnik,
2005). Yet even in elementary school, children's explicit understanding of hypothesis testing appears to be deficient, as they do not spontaneously produce adequate experimental strategies, most notably the control-of-variables strategy. However, when presented with a choice task, some third and most fourth graders were able to distinguish a confounded from a controlled experiment and to justify their choice (Bullock, Sodian, & Koerber, 2009; Bullock & Ziegler, 1999). Moreover, most third and fourth graders spontaneously generated some planned comparison to obtain relevant evidence, even though they did not control for confounds and therefore did not produce sufficient evidence. There was large individual variation in the onset of spontaneous production of the control-of-variables strategy, with the majority of students producing it only in adolescence. An early intuitive understanding of interpretive frameworks, as well as an understanding of experimental design features (assessed independently in interviews), proved to be an important predictor of spontaneous strategy production in the longitudinal study by Bullock et al. (2009). Given that even elementary school students possess some understanding of hypothesis testing and evidence evaluation practices, it appears strange that adolescents and adults often fail to generate relevant evidence when asked to evaluate a real-world theory of their own (Kuhn, 1991). Because adults may regard the evidence relevant to evaluating such theories as very complex, they may be reluctant to generate ideas they know are not accurate. In a study of theory-based argumentation in 6- and 11-year-old children, Sodian and Barchfeld (2011) tested whether children possess a rudimentary understanding of the theory–evidence relation in (a) conceiving of alternatives to one's own theory, and (b) recognizing empirical evidence as relevant to theory evaluation.
Surprisingly, most children had no difficulty generating alternatives to their own theory. When asked to explain why some students are aggressive, participants would propose, for instance, that they catch it from their siblings; when asked whether another person could have a different explanation, they would speculate that one could believe that playing video games made children aggressive. It has been shown that young children take alternative explanations into account in causal reasoning tasks (Schulz & Bonawitz, 2007). Sodian and Barchfeld’s (2011) findings indicate that elementary school children can also generate alternative explanations spontaneously, if they are familiar with a content domain. However, as expected, children of both age groups were poor at generating evidence relevant to testing their theory. In the 6-year-old group, most answers could not be coded as theory elaborations, as observed with high frequency in adults by Kuhn (1991); rather, children produced sets of facts or observations that they were unable to relate to their theory (such as typical behaviors produced by aggressive children). In 11-year-olds, however, about 60% of the participants provided some ideas about how to gather data relevant to evaluating their theory such as the
145
suggestion to film children on the playground with a hidden camera, or to "just ask them why they are doing this." Although such responses do not meet the criteria for "genuine evidence" as defined by Kuhn (1991), they indicate that children understand that empirical data are relevant to theory evaluation. Most responses indicated epistemologically naïve beliefs in direct and unproblematic access to the truth (e.g., through direct observation), and/or naïve beliefs about the content domain (i.e., causes of behavior). Thus, children may not be able to specify the kinds of data that would be relevant to evaluating their theory, because their causal knowledge is vague and lacks a precise understanding of plausible mechanisms, but they do appear to understand that empirical data can be brought to bear on real-world claims of their own about a familiar phenomenon. Again, the work on children's understanding of the hypothesis–evidence relation highlights the importance of children's domain-specific theories for scientific reasoning skills to be applied in an adequate way.

Methods Assessment

Children's understanding of reliable and valid scientific methods has been most intensively investigated with respect to their understanding of experiments, most notably their ability to produce and understand the control-of-variables strategy (Chen & Klahr, 1999). As described earlier, even third and fourth graders were able to generate planned comparisons, and they were even better when they had to recognize (and justify) the best answer in a choice task than when they had to spontaneously produce the answers (Bullock & Ziegler, 1999). As noted above, an early intuitive understanding of interpretive frameworks and of experimental design features predicted spontaneous strategy production in Bullock et al.'s (2009) longitudinal study.
In this longitudinal study, participants were also tested on detecting and explaining different flaws in an experimental design (e.g., lack of a control group, lack of experimental validity, or missing variation of the focal variable). At age 12 (the youngest age at which this test was administered), one third to one half of the children identified and articulated experimental errors. Performance varied with the kind of error, a missing control group being the hardest to identify. Not only can elementary school children distinguish between a controlled and a confounded experiment in a choice task (Bullock et al., 2009; see earlier); with instruction, upper-elementary-aged children can also learn to control variables to isolate causal effects (Kuhn, Schauble, & Garcia-Mila, 1992; Schauble, Glaser, Duschl, Schulze, & John, 1995). Metz (2004, 2011) has engaged very young children in sustained science investigations over the course of an entire school year and assessed their conceptions of certainty
regarding their own claims. These children, in kindergarten and first grade, express several sources of uncertainty in their own empirical work, including possible measurement errors and experimental confounds. Perhaps not surprisingly, they often fail to generate strategies to resolve such uncertainty, but the fact that they are aware of it suggests an initial grasp of the tentative nature of making and evaluating claims in the face of inherent methodological complexity. Extensive work by Lehrer and Schauble demonstrates that by fifth and sixth grade, children can develop sophisticated capacities to reason about experimental design and measurement in relation to particular research questions when such practices are pursued through sustained instruction (Lehrer & Schauble, 2004; Lehrer, Schauble, & Lucas, 2008). Koslowski's (1996) studies of claim evaluation included experiments to assess people's sensitivity to issues like sample size. By adolescence, children find larger samples more trustworthy than small ones. A study by Masnick and Morris (2008) showed that by the age of 9 years, children take sample size as well as within- and between-group variability into account when drawing inferences from data. In contrast, younger children tend to draw inferences from single instances and do not take sample size into account. On the whole, although interest in children's ideas about methods for generating data, beyond controlled experimentation, is increasing, there remains a good deal of room for research in this area. Issues of experimental design and data evaluation are not domain independent in science. Developmental research in this area has conceptualized "the scientific method" in a simplified way, as a domain-independent tool of wide application. This may help to disentangle dimensions of reasoning, but it is a simplification of scientific reasoning.
For example, judging the appropriateness of a sample, in both size and composition, stands in relation to the question being asked and the kinds of measurements or interventions that might be used in an experiment.

Sources of Justification

The research reviewed so far makes clear that by school age, children can interrogate claims in relation to evidence that might bear on them. In the scientific reasoning area, very little research has addressed children's judgments of the epistemic status of different sources of justification. However, research on the development of trust in young children (Harris, 2012) has revealed source evaluation abilities in very young children. For instance, preschool children tend to prefer a stranger's claim over their mother's when the available perceptual evidence is clearly consistent with the stranger's and not the mother's claim (Corriveau et al., 2009). Work on claim evaluation in science tasks carries this one step further: Do children take an authoritative source as justification for a claim over available evidence or a plausible explanatory mechanism? Sandoval and Çam
(2011) found that third- and fourth-grade students preferred covariation data over authoritative sources, with plausible mechanisms preferred only when data did not show clear correlations. Students' reasons for preferring data over other sources of justification, however, centered mainly on the credibility of having data, rather than on features of the data or its collection per se. Similarly, Koerber, Osterhaus, and Sodian (2013) found that second graders, when asked to evaluate a (wrong) claim they strongly supported, relied more readily on data than on a plausible mechanism when changing their prior belief. Specifically, second graders who held a common misconception (e.g., "eating carrots is better for good vision than eating spinach") and who were presented with a contrary statement ("eating both carrots and spinach improves vision") changed their belief more often when this new claim was accompanied by counterevidence (data displayed in graphs) presented by a fictitious authority (a researcher) than when it was accompanied by an explanation of a plausible mechanism ("eating both carrots and spinach leads to the production of a certain substance in our body, namely vitamin A, which itself is good for improving vision"). Although there is a good deal of research available now on how adults evaluate source information when reading texts (Britt et al., this issue), a good deal more research is needed on how children evaluate different sources of justification for claims, and how such source evaluations interact with evaluations of evidence and plausible mechanism. Our summary highlights children's emerging capacities for evaluating claims scientifically, but there is abundant evidence that children and adults often do not evaluate claims properly (cf. Dunbar & Klahr, 2012). What explains this?
One feature to note is that children tend to do well on claim evaluation tasks in domains they are familiar with, a feature not shared with many science learning situations. A corollary is that the causal situations in these studies are deterministic and relatively simple, especially compared to causal explanations in the sciences (Perkins & Grotzer, 2005). Moreover, as previously alluded to, causal mechanisms offered by science are often intuitively implausible, or at least hard to comprehend. Scharrer, Bromme, Britt, and Stadtler (2012) recently showed that adults interpreted the same causal accounts differently depending on the comprehensibility of the text, preferring simpler texts with less technical jargon. Moreover, it is not just the causal mechanisms in these studies that are relatively simplistic; they also cover a very narrow band of methods and kinds of data (easily controlled experiments and their covariation data).
DEVELOPING CLAIM EVALUATION PRACTICES THROUGH FORMAL SCHOOLING Our review of the developmental research, like others (cf. NRC, 2007), suggests that children’s practices of claim
evaluation depend upon both some level of conceptual understanding of the topic being reasoned about and some means to evaluate evidence as related to that topic. This view of scientific thinking is aligned with current views of professional science and, consequently, with new science education reforms. One might think, then, that the latest reforms, if allowed to flourish, will naturally lead people to be better able to engage science in their everyday lives, in terms of competent claim evaluation. Such optimism should be tempered, however, by a consideration of the evidence available from research on instructional approaches aligned with new reforms. From our analysis, improvements in people's capacity to evaluate claims derive from being able to make good judgments about plausible mechanism and to evaluate the relevance and sufficiency of available evidence in relation to the methods and sources of its production. It is probably also the case, although the developmental research we have reviewed does not speak clearly to this, that why people evaluate claims is likely to influence how they do it. Thus, besides evidence that science instruction improves students' practices of claim evaluation, evidence is needed that students see such practices as relevant to their interests outside of the classroom. Educational research over the last few decades has devoted considerable effort to understanding how to bring students' prior interests and knowledge into the classroom, but much less to examining how what then gets learned in school gets taken back out into everyday life. This is more than a simple issue of transfer; it has to do with the perception of scientific practices as being relevant to everyday concerns. Even the newest science education reform documents provide little articulation between school and everyday life but continue to rely on assertions of relevance and value that are mostly untested.
We see three trends in science education with at least the potential, and some evidence, that they can productively develop children’s capabilities to evaluate science claims in ways they could use outside of school.
Argumentation and Modeling

The inquiry reforms in science education have morphed in the last decade into more specific foci on scientific argumentation and modeling. Both argumentation and modeling involve making and evaluating claims about some state of the world, and adducing forms of evidence for or against those claims; students' empirical investigations are oriented toward making arguments or models. Much of this burgeoning research focuses on the difficulties of organizing productive argumentation and modeling in classrooms. Learning to do these things well takes considerable time spent in well-organized learning experiences, but such learning appears within reach of elementary students (e.g., Lehrer et al., 2008; Metz, 2011; Ryu & Sandoval, 2012).

Such interventions are directly aimed at practices of coordinating claims and evidence, but argumentation and modeling research as it is typically done in science education may not necessarily lead to productive claim evaluation outside of school. First, these approaches are aimed at typical school purposes of learning particular science, without any necessary connection to outside aims or interests. Second, the current boom in argumentation and modeling research focuses on students' construction of arguments or models under conditions in which the resources for getting relevant evidence are made available somehow, as is access to plausible causal mechanisms (via the science concepts to be learned). These idealized conditions are hardly what most of us find when we encounter some science-related question in our everyday lives, encounters where we must evaluate arguments based on imperfect knowledge and possibly a lack of information or evidence. Third, science educators have begun to raise concern that argumentation as a pedagogical strategy may be narrowly routinized in ways that actually inhibit an understanding of science (McDonald & Kelly, 2011). These concerns taken together emphasize the importance of instruction aimed at the explicit epistemics of argumentation and modeling, including not only current efforts to explicate their role in science but extending those to considerations of the value and use of such practices in everyday situations. Especially given the difficulties of understanding empirical methods in relation to the claims they advance, explicit considerations of the epistemic aspects of data production and evaluation are probably needed. Ryder (2001) described, for example, how people's encounters with medical research are often hampered by ignorance of the meaning and purposes of methodological features like double-blind studies, placebos, and so on. Students need experiences that, if not focused on these specific features, at least consider how methodological features are related to data and the inferences one can draw from them (e.g., Lehrer et al., 2008).

Instruction in Socioscientific Issues

One area of school science instruction that explicitly tries to connect school science to personal or political issues outside of school is known as instruction in socioscientific issues (SSI). SSI are the kinds of problems we sketched at the start of this article: personal or social issues that are not necessarily scientific but upon which science can be helpfully brought to bear. Where argumentation research typically focuses on the means to achieve the best explanations of nature, SSI instruction more explicitly frames the evaluation of science claims in relation to solving everyday issues. Researchers interested in SSI reasoning have explicitly examined how research participants, typically secondary or university students, attend to causal mechanism and evidence in news reports or other scenarios constructed by
researchers. The general pattern of findings suggests students are inconsistent in their attention to available explanatory mechanisms and evidence for them (e.g., Kolstø, 2001; Kolstø et al., 2006; Korpan, Bisanz, Bisanz, & Henderson, 1997; Ratcliffe, 1999). These and related studies show that sometimes people evaluate causal claims and the evidence for or against them, and other times they do not. We still know very little about the causes of this kind of variation. Recent work by Nielsen (2012a, 2012b) confirms what we might have suspected: People co-opt science to advance their own positions, often with little attention to evaluating the science claims themselves. As with research on argumentation and modeling, the bulk of SSI research has documented the difficulties students have in reasoning scientifically about such issues. From our perspective on claim evaluation, SSI instruction represents an underexploited resource. Because SSI entail both science claims and other kinds of claims and evidence, the explicit contrast between scientific arguments and other kinds could be a means to explicate the epistemics of scientific claim evaluation and a context for students to directly engage the value of using science to consider personally and socially relevant issues. Recently, Lederman, Antink, and Bartos (2014) described an approach to teaching genetics through SSI with explicit attention to epistemic issues that suggests the kind of instruction we envision.

Community-Oriented Science

A third line of research relevant to our concerns of claim evaluation is what we call, for lack of a better phrase, community-oriented science instruction. We mean a range of instructional interventions that explicitly aim to move science instruction out of the classroom and into the community. These efforts explicitly place science in a subordinate role relative to solving everyday problems or promoting sociopolitical action (e.g., Aikenhead, 2006; Roth & Desautels, 2002).
Roth and Lee (2004), for example, described middle school students working with adults and scientists in their community to preserve a local creek. In this case, students' science learning was driven by their specific, individual interests in the community problem. Such projects espouse a view of science learning that is highly localized, grounded in particular problems of particular communities, and centrally concerned with engaging students with science in public ways. Such place-based pedagogies seem to hold a great deal of promise for making science more meaningful to students (Feinstein, Allen, & Jenkins, 2013). Our view is that such work is currently marginalized in science education, perhaps because its local nature seems to reject normative conceptions of learning. The impetus for this sort of community-based work is community action, and students' empowerment within such action. Consequently, this research rarely attempts to assess learning or competence in ways familiar to developmental
psychologists or mainstream science educators. This is a drawback, because we do not really know whether such instruction develops practices of claim evaluation. Nothing about this seems inherent to a community-oriented approach; we simply observe that this has been the approach taken in such work to this point. Clearly, we are only scratching the surface of the ways in which formal schooling could develop the competencies children display for evaluating claims scientifically, and there are other ways of framing the issue that lead to different, albeit related, recommendations (e.g., Feinstein et al., 2013). New reforms move toward the kinds of practices of interest to us here, in large part because they are derived from the same corpus of developmental work. Still, a great deal of research is needed to find out how students apply such school instruction outside of school.
OPEN QUESTIONS AND FUTURE DIRECTIONS FOR PSYCHOLOGICAL RESEARCH

From our analysis, we see three areas of open questions that psychological research can helpfully pursue and whose answers could generate real improvement in practices of science education. These are all areas of current research, but not necessarily framed in terms of claim evaluation as we have described it here. Greater attention is needed to how people assess plausibility, evidence, methods, and sources across a variety of contexts and throughout the life span.

Resource Use in Everyday Science Reasoning

Generating a more extensive empirical base of knowledge about how people actually reason with and about science in everyday settings is crucial to articulating a clearer relation between school science and everyday life (Feinstein, 2011). It is not that there is no such empirical base. Ryder (2001) synthesized a broad range of case studies of people's everyday encounters with science and found that people generally needed to understand more of the epistemic bases of science than particular science concepts: They needed to understand methods and their relations to particular kinds of claims, how scientists judge uncertainty, and so on. More of this kind of research is needed, and we suggest it must pay detailed attention to the kinds of resources people seek and how they use them, particularly in relation to their own goals. We mean the material and social resources people can access as they reason through a real science-related issue. Material resources include things like media reports and an enormous array of Internet sources of varying credibility. Some of these sources may provide something close to primary data, the use of which is the main focus of reform efforts, but we suggest an important
question concerns the extent to which people seek and evaluate primary data relevant to their everyday concerns. Feinstein (2012) provided one example of how such research might be conducted. He examined how parents of children newly diagnosed on the autism spectrum used material (books, Internet) and social (doctors, other parents with special needs children) resources to learn about the disorder and its management. His analysis shows how the resources that people seek are directly tied to questions they are trying to answer, only some of which are directly science questions. The analyses in these examples, however, are not cognitive analyses of claim evaluation. In terms of how people evaluate science claims within such complex, real concerns like caring for a child with autism, it would be useful to have finer-grained accounts of how people identify and evaluate claims and evidence within their encounters with different types of material and social resources, and how the goals pursued within situated activity influence claim evaluation. It would be important to understand, further, when people perceive themselves to be engaged with scientific claims, contrasted with other sorts of claims. How does the science related to crop production or nutrition inform people’s choices to buy organic foods, for example? Do people even think about that choice in these terms? Of course, the SSI research pointed to earlier addresses science-related everyday contexts. It would be quite productive for such research to link more directly with cognitive development research in order to examine in detail how students evaluate plausibility, evidence, methods, and sources in SSI contexts. When reading about genetically modified foods, for example, how do students identify claims and evidence, and how do they evaluate them? Do students tend to discount scientific information in such contexts because they find available scientific mechanisms implausible? 
Are they sensitive to inconsistencies in evidence? SSI research has not so far been framed in these terms, but doing so would provide crucial links to developmental research and suggest pathways of instruction.

Everyday Science Reasoning Across the Life Span

Very little research within education or psychology has explored how people's reasoning about science in their everyday lives might change over the life span. Bricker and Bell (2012) provided one of the few studies to look at children's reasoning about science, including how they think about evidence, in both school and everyday contexts. The elementary children in their sample identified school as one source of ideas about evidence, whereas popular television shows (e.g., crime dramas) provided another source for thinking about the role of evidence in arguments (cf. Maier, Rothmund, Retzbach, Otto, & Besley, this issue). Bricker and Bell also found, consistent with our review, that children used a range of linguistic markers to
identify evidence in relation to claims in their arguments, in and out of school. Similarly, Porsch, Bromme, and Pollmeier (2010) found that elementary school children differentiate between different types of sources, depending on the school subject and task difficulty. It would be very helpful to have more detailed information about how children make judgments about what counts as good evidence, including both relevance and sufficiency, in everyday reasoning contexts related to science, and to relate such reasoning to the science learned in school. Research on adults' everyday science reasoning seems to occur primarily within the field of science communication. This work often explores how linguistic features of media reports influence people's assessments of trustworthiness. For example, people find media reports more trustworthy when claims are hedged, and especially when the scientists responsible for the research provide the hedges (Jensen, 2008). An emerging body of work on multiple document comprehension is related to our interests in claim evaluation (Britt et al., this issue; Goldman & Scardamalia, 2013). A major challenge for research and education is that work relevant to understanding how people evaluate science claims is spread across scholarly fields that are mostly not in contact with one another. This special issue derives from an effort to promote such contact, but it remains the case that promoting connections between related areas of study would benefit our understanding of everyday practices of science claim evaluation and thereby improve science education. Judgments of trustworthiness, for example, include facets of source evaluation, like hedges, but also more particular evaluations of plausibility and evidence. We currently seem to know very little about how these facets of evaluation interact.
Research on these facets tends to proceed in small steps, varying one aspect of evaluation at a time, such as hedging or the type of argument support (Scharrer et al., 2012), usually with samples large enough to measure patterns of response but not processes of reasoning. Research on reasoning processes could productively take examples from this sort of work but look in more depth at how children and adults process claims and evidence.

Cognitive Consequences of Practice-Oriented Instruction

The consequences of the shift toward practice-oriented instruction in science are profound for both science education and science education research. The conceptual change models that have dominated science education research pay very little attention to the kinds of practices described in the latest standards, whereas the sociocultural perspectives on science and on learning that drive much of these latest reforms typically pay little attention to concepts as cognitive structures. We might put the question thus: What do individuals come to know and be able to do as a result of their engagement in particular science practices in school? In science education, research from the practice-oriented perspective has tended to examine changes in collective (i.e., classroom) discourse, such as the appropriation of argument discourse (Palincsar, Anderson, & David, 1993; Rosebery, Warren, & Conant, 1992). Recently, Ryu and Sandoval (2012) showed how individual children's improvement in constructing and evaluating arguments was tied to their sustained engagement in specific classroom practices of argument and justification. A great deal more such work is needed to show what students learn from their engagement in new practice-oriented science education reforms. Longitudinal studies of such instruction, and of the developmental trajectories it supports, would help to disentangle what Metz (2011) referred to as "robust developmental constraints" from instructional experiences and their consequences. Further, we need work that connects such school learning to everyday encounters with science. Ryu and Sandoval (2012), for example, had no evidence that children's increased competence at evaluating arguments would extend to out-of-school contexts, or even to socioscientific issues related to the science they learned. Without assessments of whether promising instructional approaches actually lead to everyday competence, educational improvement is likely to be haphazard.
CONCLUSION

Some have argued that scientific literacy for most people is practically unattainable (e.g., Shamos, 1995). Our review of the developmental research related to the evaluation of science claims suggests to us the opposite: that a productive competence to engage with science is within the reach of formal schooling through high school, as children display a range of such competence even prior to entering school. The issue, as we see it, is that typical science instruction appears to do little to build upon the competencies children bring to school. New reforms seem pointed in the right direction, informed as they are by the research reviewed here and elsewhere (e.g., NRC, 2007). We argue that these reforms must include explicit efforts to connect formal science knowledge and practice to the everyday reasoning contexts in which they can be brought to bear, including socioscientific issues and community-oriented projects. These connections may require researchers to reconsider what normative practice should look like and what might count as effective uses of science. Psychological research that can describe variations in productive engagement with science will improve both our accounts of the development of so-called "scientific thinking" and educational efforts to promote it.
ACKNOWLEDGMENTS

We thank Susan R. Goldman, Rainer Bromme, Clark Chinn, and two anonymous reviewers for their helpful feedback on this article.
FUNDING

This article emerged from a conference on public engagement with science jointly funded by the German Research Foundation (DFG, award BR 1126/5-1) and the U.S. National Science Foundation (NSF, award 1065967), and organized by Rainer Bromme, Dorothe Kienhues, Susan R. Goldman, Anne Britt, and William Sandoval. The preparation of this article was partially supported by DFG awards to Sodian (SO 213/31-3) and Koerber (KO 2276/4-3), and by an NSF award to Sandoval (0733233). The views expressed here are ours, of course, and do not represent the official views of either agency.
REFERENCES

Aikenhead, G. S. (2006). Science education for everyday life. New York, NY: Teachers College Press.
Bos, W., Lankes, E. M., Prenzel, M., Schwippert, K., Walther, G., Valtin, R., & Voss, A. (2003). To which questions does a combined interpretation of the results yielded by both PISA and IGLU provide well grounded answers? Zeitschrift für Pädagogik, 49, 198–212.
Brem, S. K., & Rips, L. J. (2000). Explanation and evidence in informal argument. Cognitive Science, 24, 573–604.
Bricker, L., & Bell, P. (2012). Argumentation and reasoning in life and in school: Implications for the design of school science learning environments. In M. S. Khine (Ed.), Perspectives on scientific argumentation: Theory, practice, and research (pp. 117–133). Dordrecht, The Netherlands: Springer.
Britt, M. A., Richter, T., & Rouet, J.-F. (this issue). Scientific literacy: The role of goal-directed reading and evaluation in understanding scientific information. Educational Psychologist, 49.
Bromme, R., Kienhues, D., & Porsch, T. (2010). Who knows what and who can we believe? Epistemological beliefs are beliefs about knowledge (mostly) to be attained from others. In L. D. Bendixen & F. C. Feucht (Eds.), Personal epistemology in the classroom: Theory, research, and implications for practice (pp. 163–193). Cambridge, England: Cambridge University Press.
Bullock, M., Sodian, B., & Koerber, S. (2009). Doing experiments and understanding science: Development of scientific reasoning from childhood to adulthood. In W. Schneider & M. Bullock (Eds.), Human development from early childhood to early adulthood: Findings from a 20 year longitudinal study (pp. 173–197). New York, NY: Psychology Press.
Bullock, M., & Ziegler, A. (1999). Scientific reasoning: Developmental and individual differences. In F. E. Weinert & W. Schneider (Eds.), Individual development from 3 to 12: Findings from the Munich Longitudinal Study. Cambridge, England: Cambridge University Press.
Carey, S. (1985). Conceptual change in childhood. Cambridge, MA: MIT Press.
Chen, Z., & Klahr, D. (1999). All other things being equal: Acquisition and transfer of the control of variables strategy. Child Development, 70, 1098–1120.
Cheng, P. W., & Novick, L. R. (1992). Covariation in natural causal induction. Psychological Review, 99, 365–382.
Chinn, C. A., Buckland, L. A., & Samarapungavan, A. (2011). Expanding the dimensions of epistemic cognition: Arguments from philosophy and psychology. Educational Psychologist, 46, 141–167.
Chinn, C. A., & Malhotra, B. A. (2002). Children's responses to anomalous scientific data: How is conceptual change impeded? Journal of Educational Psychology, 94, 327–343.
Cook, T. D., & Sinha, V. (2006). Randomized experiments in educational research. In J. Green, G. Camilli, & P. B. Elmore (Eds.), Handbook of complementary methods in education research (pp. 551–566). Washington, DC: American Educational Research Association.
Corriveau, K. H., Harris, P. L., Meins, E., Fernyhough, C., Arnott, B., Elliott, L., … De Rosnay, M. (2009). Young children's trust in their mother's claims: Longitudinal links with attachment security in infancy. Child Development, 80, 750–761.
Dunbar, K., & Klahr, D. (2012). Scientific thinking and reasoning. In K. J. Holyoak & R. G. Morrison (Eds.), The Oxford handbook of thinking and reasoning (pp. 701–718). Oxford, England: Oxford University Press.
Duschl, R. A. (2008). Science education in 3-part harmony: Balancing conceptual, epistemic and social goals. Review of Research in Education, 32, 268–291.
Feinstein, N. (2011). Salvaging science literacy. Science Education, 95, 168–185.
Feinstein, N. (2012). Making sense of autism: Progressive engagement with science among parents of young, recently diagnosed autistic children. Public Understanding of Science. Advance online publication. doi:10.1177/0963662512455296
Feinstein, N., Allen, S., & Jenkins, E. W. (2013). Outside the pipeline: Reimagining science education for nonscientists. Science, 340, 314–317.
Ford, M., & Forman, E. A. (2006). Redefining disciplinary learning in classroom contexts. Review of Research in Education, 30, 1–32.
Goldman, S. R., & Scardamalia, M. (2013). Managing, understanding, applying, and creating knowledge in the information age: Next-generation challenges and opportunities. Cognition and Instruction, 31, 255–269.
Gopnik, A., & Wellman, H. M. (2012). Reconstructing constructivism: Causal models, Bayesian learning mechanisms, and the theory theory. Psychological Bulletin, 138, 1085–1108. doi:10.1037/a0028044
Gutierrez, K. D., & Rogoff, B. (2003). Cultural ways of learning: Individual traits or repertoires of practice. Educational Researcher, 32, 19–25.
Harris, P. L. (2012). Trusting what you're told: How children learn from others. Cambridge, MA: Harvard University Press.
Jenkins, E. W. (1999). School science, citizenship and the public understanding of science. International Journal of Science Education, 21, 703–710.
Jensen, J. D. (2008). Scientific uncertainty in news coverage of cancer research: Effects of hedging on scientists' and journalists' credibility. Human Communication Research, 34, 347–369.
John-Steiner, V., & Mahn, H. (1996). Sociocultural approaches to learning and development: A Vygotskian framework. Educational Psychologist, 31, 191–206.
Knorr-Cetina, K. (1999). Epistemic cultures: How the sciences make knowledge. Cambridge, MA: Harvard University Press.
Koerber, S., Osterhaus, C., & Sodian, B. (2013, April). Evidence-based reasoning in the light of contrary beliefs. Paper presented at the biennial conference of the Society for Research in Child Development, Seattle, WA.
Koerber, S., Sodian, B., Thoermer, C., & Nett, U. (2005). Scientific reasoning in young children: Preschoolers' ability to evaluate covariation evidence. Swiss Journal of Psychology/Schweizerische Zeitschrift für Psychologie/Revue Suisse de Psychologie, 64, 141–152.
Kolstø, S. D. (2001). "To trust or not to trust…"—Pupils' ways of judging information encountered in a socio-scientific issue. International Journal of Science Education, 23, 877–901.
Kolstø, S. D., Bungum, B., Arnesen, E., Isnes, A., Kristensen, T., Mathiassen, K., … Ulvik, M. (2006). Science students' critical examination of scientific information related to socioscientific issues. Science Education, 90, 632–655.
Korpan, C. A., Bisanz, G. L., Bisanz, J., & Henderson, J. M. (1997). Assessing literacy in science: Evaluation of scientific news briefs. Science Education, 81, 515–532.
Koslowski, B. (1996). Theory and evidence: The development of scientific reasoning. Cambridge, MA: MIT Press.
Koslowski, B. (2012). Scientific reasoning: Explanation, confirmation bias and scientific practice. In G. Feist & M. Gorman (Eds.), Handbook of the psychology of science and technology (pp. 151–192). Dordrecht, The Netherlands: Springer.
Kuhn, D. (1989). Children and adults as intuitive scientists. Psychological Review, 96, 674.
Kuhn, D. (1991). The skills of argument. Cambridge, England: Cambridge University Press.
Kuhn, D. (1993). Science as argument: Implications for teaching and learning scientific thinking. Science Education, 77, 319–337.
Kuhn, D. (2001). How do people know? Psychological Science, 12, 1–8.
Kuhn, D. (2010). What is scientific thinking and how does it develop? In U. Goswami (Ed.), Handbook of childhood cognitive development (2nd ed., pp. 472–534). Oxford, England: Blackwell.
Kuhn, D., Amsel, E., & O'Loughlin, M. (1988). The development of scientific thinking skills. Orlando, FL: Academic.
Kuhn, D., & Franklin, S. (2006). The second decade: What develops (and how). In D. Kuhn & R. S. Siegler (Vol. Eds.), Handbook of child psychology: Vol. 2. Cognition, perception and language (pp. 953–993). Hoboken, NJ: Wiley.
Kuhn, D., Schauble, L., & Garcia-Mila, M. (1992). Cross-domain development of scientific reasoning. Cognition and Instruction, 9, 285–327.
Kushnir, T., & Gopnik, A. (2005). Young children infer causal strength from probabilities and interventions. Psychological Science, 16, 678–683.
Laugksch, R. C. (2000). Scientific literacy: A conceptual overview. Science Education, 84, 71–94.
Lazonder, A. W., Hagemans, M. G., & De Jong, T. (2010). Offering and discovering domain information in simulation-based inquiry learning. Learning and Instruction, 20, 511–520.
Lederman, N. G., Antink, A., & Bartos, S. (2014). Nature of science, scientific inquiry, and socio-scientific issues arising from genetics: A pathway to developing a scientifically literate citizenry. Science & Education, 23, 285–302.
Legare, C. H. (2012). Exploring explanation: Explaining inconsistent evidence informs exploratory, hypothesis-testing behavior in young children. Child Development, 83, 173–185. doi:10.1111/j.1467-8624.2011.01691.x
Lehrer, R., & Schauble, L. (2004). Modeling natural variation through distribution. American Educational Research Journal, 41, 635–679.
Lehrer, R., Schauble, L., & Lucas, D. (2008). Supporting development of the epistemology of inquiry. Cognitive Development, 23, 512–529.
Lucas, C. G., Bridgers, S., Griffiths, T. L., & Gopnik, A. (2014). When children are better (or at least more open-minded) learners than adults: Developmental differences in learning the forms of causal relationships. Cognition, 131, 284–299.
Maier, M., Rothmund, T., Retzbach, A., Otto, L., & Besley, J. C. (this issue). Informal learning through science media usage. Educational Psychologist, 49.
Masnick, A. M., & Morris, B. J. (2008). Investigating the development of data evaluation: The role of data characteristics. Child Development, 79, 1032–1048.
McDonald, S., & Kelly, G. (2011). Beyond argumentation: Sense-making discourse in the science classroom. In M. S. Khine (Ed.), Perspectives on scientific argumentation: Theory, practice, and research (pp. 265–281). Dordrecht, The Netherlands: Springer.
Metz, K. E. (2004). Children's understanding of scientific inquiry: Their conceptualization of uncertainty in investigations of their own design. Cognition and Instruction, 22, 219–290.
Metz, K. E. (2011). Disentangling robust developmental constraints from the instructionally mutable: Young children's epistemic reasoning about a study of their own design. Journal of the Learning Sciences, 20, 50–110.
National Research Council. (2007). Taking science to school: Learning and teaching science in grades K-8. Washington, DC: National Academy Press.
National Research Council. (2012). A framework for K-12 science education: Practices, crosscutting concepts, and core ideas. Washington, DC: National Academy Press.
Nielsen, J. A. (2012a). Co-opting science: A preliminary study of how students invoke science in value-laden discussions. International Journal of Science Education, 34, 275–299.
Nielsen, J. A. (2012b). Science in discussions: An analysis of the use of science content in socioscientific discussions. Science Education, 96, 428–456.
Palincsar, A. S., Anderson, C., & David, Y. M. (1993). Pursuing scientific literacy in the middle grades through collaborative problem solving. Elementary School Journal, 93, 643–658.
Perkins, D. N., & Grotzer, T. A. (2005). Dimensions of causal understanding: The role of complex causal models in students' understanding of science. Studies in Science Education, 41, 117–166.
Piekny, J., & Mähler, C. (2013). Scientific reasoning in early and middle childhood: The development of domain-general evidence evaluation, experimentation, and hypothesis generation skills. British Journal of Developmental Psychology, 31, 153–179.
Porsch, T., Bromme, R., & Pollmeier, J. (2010). Was muss man tun, um sicher die richtige Lösung zu finden? Quellenpräferenzen von Grundschulkindern in verschiedenen Fachkontexten [What needs to be done to find the right answer? Source preferences of elementary school children dealing with tasks from different school subjects]. Zeitschrift für Entwicklungspsychologie und Pädagogische Psychologie, 42, 90–98. doi:10.1026/0049-8637/a000009
Ratcliffe, M. (1999). Evaluation of abilities in interpreting media reports of scientific research. International Journal of Science Education, 21, 1085–1099.
Rosebery, A. S., Warren, B., & Conant, F. R. (1992). Appropriating scientific discourse: Findings from language minority classrooms. Journal of the Learning Sciences, 2, 61–94.
Roth, W.-M., & Desautels, J. (2002). Science education as/for sociopolitical action: Charting the landscape. In W.-M. Roth & J. Desautels (Eds.), Science education as/for sociopolitical action (pp. 1–16). New York, NY: Peter Lang.
Roth, W.-M., & Lee, S. (2004). Science education as/for participation in the community. Science Education, 88, 263–291.
Rudolph, J. L. (2014). Why understanding science matters: The IES research guidelines as a case in point. Educational Researcher, 43, 15–18.
Ruffman, T., Perner, J., Olson, D. R., & Doherty, M. (1993). Reflecting on scientific thinking: Children's understanding of the hypothesis-evidence relation. Child Development, 64, 1617–1636.
Ryder, J. (2001). Identifying science understanding for functional scientific literacy. Studies in Science Education, 36, 1–44.
Ryu, S., & Sandoval, W. A. (2012). Improvements to elementary children's epistemic understanding from sustained argumentation. Science Education, 96, 488–526.
Sandoval, W. A., & Çam, A. (2011). Elementary children's judgments of the epistemic status of sources of justification. Science Education, 95, 383–408.
Scharrer, L., Bromme, R., Britt, M. A., & Stadtler, M. (2012). The seduction of easiness: How science depictions influence laypeople's reliance on their own evaluation of scientific information. Learning and Instruction, 22, 231–243.
Schauble, L., Glaser, R., Duschl, R. A., Schulze, S., & John, J. (1995). Students' understanding of the objectives and procedures of experimentation in the science classroom. Journal of the Learning Sciences, 4, 131–166.
Schauble, L., Glaser, R., Raghavan, K., & Reiner, M. (1991). Causal models and experimentation strategies in scientific reasoning. Journal of the Learning Sciences, 1, 201–238.
Schulz, L., & Bonawitz, E. B. (2007). Serious fun: Preschoolers play more when evidence is confounded. Developmental Psychology, 43, 1045–1050.
Shamos, M. H. (1995). The myth of scientific literacy. New Brunswick, NJ: Rutgers University Press.
Sodian, B., & Barchfeld, P. (2011). Development of cognitive flexibility and epistemological understanding in argumentation. In J. Elen, E. Stahl, R. Bromme, & G. Clarebout (Eds.), Links between beliefs and cognitive flexibility: Lessons learned (pp. 141–156). Berlin, Germany: Springer.
Sodian, B., Zaitchik, D., & Carey, S. (1991). Young children's differentiation of hypothetical beliefs from evidence. Child Development, 62, 753–766.
Wellman, H. M. (2010). Understanding the psychological world: Developing a theory of mind. In U. Goswami (Ed.), Blackwell handbook of childhood cognitive development (2nd ed., pp. 167–187). London, England: Blackwell.
Zimmerman, C. (2000). The development of scientific reasoning skills. Developmental Review, 20, 99–149.
Zimmerman, C. (2007). The development of scientific thinking skills in elementary and middle school. Developmental Review, 27, 172–223. doi:10.1016/j.dr.2006.12.001