Belfer Center for Science & International Affairs

Impacts Working Group Theme Paper: Assessing Impacts: Framing, Communication, Credibility

Sheila Jasanoff, Clark Miller, and James McCarthy

Draft for Comments

June 1998

Global Environmental Assessment Project
Environment and Natural Resources Program

CITATION, CONTEXT, AND REPRODUCTION PAGE

This draft paper is a work in progress. No further citation is allowed without permission of the author(s). Comments are welcome and may be directed to Sheila Jasanoff, Department of Science and Technology Studies, Cornell University, 632 Clark Hall, Ithaca, NY 14853, telephone (607) 255-6049, fax (607) 255-6044, and email [email protected].

The Global Environmental Assessment (GEA) project is a collaborative team study of global environmental assessment as a link between science and policy. The Team is based at Harvard University. The project has two principal objectives. The first is to develop a more realistic and synoptic model of the actual relationships among science, assessment, and management in social responses to global change, and to use that model to understand, critique, and improve current practice of assessment as a bridge between science and policy making. The second is to elucidate a strategy of adaptive assessment and policy for global environmental problems, along with the methods and institutions to implement such a strategy in the real world.

The GEA Project is supported by a core grant from the National Science Foundation (Award No. SBR-9521910) for the "Global Environmental Assessment Team." Additional support is provided by the Department of Energy (Award No. DE-FG02-95ER62122) for the project "Assessment Strategies for Global Environmental Change," the National Institute for Global Environmental Change (Award Nos. 901214-HAR, LWT 62-123-06518) for the project "Towards Useful Integrated Assessments," the Center for Integrated Study of the Human Dimensions of Global Change at Carnegie Mellon University (NSF Award No. SBR-9521914) for "The Use of Global Environmental Assessments," the Belfer Center for Science and International Affairs at Harvard University's Kennedy School of Government, the International Institute for Applied Systems Analysis, Harvard's Weatherhead Center for International Affairs, and Harvard's Environmental Information Center.

The views expressed in this paper are those of the authors and do not imply endorsement by any of the supporting institutions.

Publication abstracts of the GEA Project can be found on the GEA Web Page at http://environment.harvard.edu/gea. Further information on the Global Environmental Assessment project can be obtained from the Project Associate Director, Nancy Dickson, Belfer Center for Science and International Affairs, Kennedy School of Government, Harvard University, 79 JFK Street, Cambridge, MA 02138, telephone (617) 496-9469, telefax (617) 495-8963, email [email protected].

ABSTRACT

Impact assessments have emerged as one of the most theoretically engaging as well as practically challenging topics in global environmental assessment. As the salience of global environmental issues rises, demand grows for high-quality impact assessments that will enable policymakers to make more informed, efficient, and equitable policy choices. Considerable intellectual and societal resources accordingly are being invested in impact assessments, at both national and global levels. Yet, experience suggests that impact assessments are often deeply contested and routinely fall short in their efforts to inform, let alone meaningfully steer, policy decisions. This workshop seeks to advance our current understanding of the problems besetting impact assessment by drawing on theoretical resources from the GEA project and related areas of the social sciences and by applying these to actual assessment contexts. In the first part of the workshop, our primary focus will be on the issues of framing, knowledge communication, and credibility. More specifically, we will explore how impacts are framed so as to highlight certain aspects as being worthy of greater attention than others, and how such frames are redefined over time. We will ask how impact framings come to be communicated among disparate social actors and how they are institutionalized in social practices and belief systems. Finally, we will examine why impact assessments become controversial and how credibility is secured in processes of impact assessment. In the second part of the workshop, we will explore some of the practical consequences of the previous discussion for the design and conduct of impact assessments. Specifically, we will consider: (1) methods of valuation in impact assessment, and how they incorporate or reproduce tacit problem framings; (2) procedural mechanisms that institutionalize particular ways of framing or conceptualizing impacts; and (3) processes through which impact assessments are communicated to, and taken up by, wider publics.

TABLE OF CONTENTS

WORKSHOP PLAN AND ORGANIZATION
DEFINING IMPACTS: FRAMING AND METHODS
    FRAMING
    METHODS OF VALUATION
IMPACTS IN POLICY: DYNAMICS AND INSTITUTIONALIZATION
COMMUNICATING IMPACTS: CREDIBILITY AND UPTAKE
QUESTIONS TO CONSIDER
    METHODS AND TECHNIQUES OF VALUATION
    INSTITUTIONALIZATION
    POLICY UPTAKE AND PUBLIC UNDERSTANDING
SELECTED BIBLIOGRAPHY OF BACKGROUND READINGS
    ISSUE FRAMING
    KNOWLEDGE DYNAMICS
    TRUST AND CREDIBILITY
REFERENCES

ACRONYM LIST

CFC   chlorofluorocarbons
DOE   Department of Energy (US)
EPA   Environmental Protection Agency
GEA   Global Environmental Assessment (Project)
IPCC  Intergovernmental Panel on Climate Change


WORKSHOP PLAN AND ORGANIZATION

In the first part of the workshop, we intend to explore several theoretical issues that grow out of our work on GEA and that bear more specifically on questions of impact assessment. We will do this through three working sessions, each of half-day length, organized around a theme that emerged as important in last year's GEA research, and a background reading that elaborates on that theme. The first theme is issue framing, which we will explore through an in-depth discussion of William Cronon, "A Place for Stories: Nature, History, and Narrative." The second theme is knowledge dynamics, which we will explore through J. Fairhead and Melissa Leach, "Rethinking the Forest-Savanna Mosaic: Colonial Science and Its Relics in West Africa." The third theme is trust and credibility, which we will explore through Sheldon Krimsky and Alonzo Plough, "The Release of Genetically Engineered Organisms into the Environment: The Case of Ice Minus." A brief introduction to each reading, and a list of questions to ponder while reading them, may be found at the end of this paper.

In the second part of the workshop, we intend to explore some of the more practical implications of the discussions held in the first part. To establish a common knowledge base, beyond the GEA research papers, we will open the second half of the workshop with detailed discussions by one or two practitioners of impact assessment. We will then examine in subsequent sessions three issues related to the design and conduct of such assessments: (1) methods of valuation in impact assessment, (2) the institutionalization of impact assessment in decision-making processes, and (3) the credibility and uptake of impact assessment findings by various audiences. In each session, a social scientist will open the proceedings, followed by a general discussion.

The second half of the workshop will close with a discussion of the future of impact assessments carried out under the auspices of the IPCC. Our goal in this workshop is not to provide recommendations to the IPCC or to limit our own thoughts and conclusions to this particular case. Nevertheless, given the timeliness of our project, we hope that the discussions at Bar Harbor will prove fruitful in stimulating the thoughts of those who go on to design the IPCC's activities. To that end, James McCarthy will lead the final session with an elaboration of the IPCC's plans for its next report. We will conclude with a discussion of the kinds of goals the IPCC might set for itself for the medium to long term.

INTRODUCTION: CONTEXTUALIZING IMPACT ASSESSMENT

How do human societies decide which problems of environmental change are worthy of concern and which can be addressed through policy? How, in particular, do we select out from the continuous, overlapping, simultaneous streams of events that descend from the past into the future those things we call causes and those things we call impacts? What kinds of narratives do we use to bound environmental problems, so as to render them intelligible as objects meriting societal intervention?


These somewhat theoretical questions have very practical implications for processes of impact assessment. Decisions about how to define impacts can have dramatic consequences for the choices we make about how to evaluate and respond to environmental change. These definitions influence both the analytic choices (e.g., which data and models are relevant and appropriate) and the deliberative mechanisms (e.g., which groups should be involved and by what means) that constitute the assessment process.

For illustration, let us turn briefly to the example of stratospheric ozone depletion. A half century ago, when chlorofluorocarbons and halons were being developed and introduced into a host of industrial processes as refrigerants, solvents, fire retardants, and more, little thought was given to their possible impact on the atmosphere. Over time, however, people became more interested in environmental protection, and, in the early 1970s, a small group of scientists became concerned that the rapidly increasing use of CFCs might be damaging the Earth's protective ozone layer. Two decades later, world leaders signed the London Amendments to the Montreal Protocol and agreed to eliminate the production and use of these chemicals.

The history of CFCs illustrates the interplay of framing, institutionalization, and credibility that occurs whenever environmental impacts appear on the policy agenda. For fifty years, CFCs were framed as safe, highly useful compounds. That framing was taken up and institutionalized into the production and operation of refrigerators, home and automobile air conditioners, electronic devices, styrofoam containers, spray cans, and a host of other uses that helped maintain the profitability of some of the world's largest corporations. Existing methodologies of risk assessment, focusing primarily on human health effects, likewise helped to institutionalize CFCs as safe. It is little wonder, then, that it turned out to be difficult to establish the credibility of the alternative framing of the physical impacts of CFCs as a threat to stratospheric ozone. It was necessary, first, to change the beliefs and preferences of such diverse audiences as the chemical industry, national regulatory bodies, and vast publics in developing countries desiring conveniences such as an inexpensive refrigerator in every home. Relatively quick agreement on the status of CFCs as a global hazard obviated the need to obtain international consensus on the socioeconomic impacts of ozone depletion.

The case of climate change is proving to be very different. Here the social and economic dimensions of impact assessment are very much on the policy agenda. Under the Global Change Research Act of 1990, US National Assessments are required to consider the effects of global change on "the natural environment, agriculture, energy production and use, land and water resources, transportation, human health and welfare, social systems and institutions, and biological diversity." In a similar vein, IPCC Working Group II will assess "the scientific, technical, environmental, economic and social aspects" of the vulnerability of, and impacts on, ecological systems, socioeconomic sectors, and human health.

To what extent might theoretical ideas developed in the GEA project offer useful starting points for these inquiries? The question is timely because, despite the importance that policymakers and concerned publics attach to the task of impact assessment, there are many signs that this remains an intensely problematic enterprise. Although impact assessments are regularly commissioned, it is hard to document cases in which they have triggered environmental actions in an orderly fashion.
Prior work in the GEA project provides tantalizing hints of possible explanations. For example, impacts that are indeterminate or controversial, such as extreme events, may simply be left out of the assessment process (Patt 1997). Other GEA work (Long and Iles 1997) suggests that impact assessments should be seen less as policy instruments than as sites for knowledge organization and social self-definition. Through these processes, societies interactively and dynamically define their notions of agency, causation, and responsibility in relation to the environment. This workshop aims to build on prior GEA work with a view to elucidating the factors that contribute to the contested character of impact assessments and their indeterminate utility for policy.

DEFINING IMPACTS: FRAMING AND METHODS

Framing

To investigate how actors and institutions define causes and impacts in certain ways and not others, the GEA project has adopted the concept of framing from recent work in sociology, policy analysis, and science studies. For our purposes here, framing refers to the process of selecting out and making sense of particular, salient phenomena to be considered as impacts from among the wide array of biophysical and social processes taking place at any given time. Any such selection not only singles out certain aspects of a problem for attention, but also connects it with explicit or implicit theories of how things happen (causation) and what things matter (salience). Framing is thus related to the organization of knowledge that people have about the world in the light of their underlying attitudes toward key social values (e.g., risk, nature, freedom), their notions of agency and responsibility (e.g., individual autonomy, corporate responsibility), and their judgments about the reliability, relevance, and weight of competing knowledge claims (Miller, Jasanoff et al. 1997).

Impact framing involves drawing boundaries between "cause", "effect", and "solution" in environmental discourses. By drawing these boundaries in particular ways and not others, assessments can help shape how societies respond to environmental problems. For example, framing impacts in terms of the effects of human actions on valued natural systems (e.g., the encroachment of development on pristine wilderness) easily lends itself to the conclusion that the human actions causing the problem must be stopped. By contrast, framing impacts in terms of the effects of environmental changes on human societies (e.g., the Dust Bowl or the possible impacts of ozone depletion on skin cancer rates) lends itself more easily to the recognition that such effects have complex, possibly uncontrollable causes and to recommendations for strengthening the adaptability of communities.

Framing the impacts of stratospheric ozone depletion in terms of the ozone hole, for example, as opposed to skin cancer or the agricultural effects of increasing UV radiation, was consistent with a broad public view that the primary threat was to the stability of nature on a global scale. Few people—scientists, government officials, or citizens—needed any more specific information on impacts once the global threat was established. On the other hand, if the issue had been successfully reframed in terms of skin cancer, it might have been perceived as a matter of individual choice. Anyone going outside without adequate protection from UV could have been viewed as accepting a voluntary risk. A framing that stressed agricultural effects, by contrast, might have fostered more attention to local and regional adaptability rather than the elimination of CFCs.


Similarly, the Intergovernmental Panel on Climate Change, in its 1990 assessment, drew implicit boundaries by dividing its labor into three working groups—science, impacts, and response strategies. These three categories included, respectively: (science) the effects of greenhouse gas emissions on long-term weather patterns through changes in the climate system; (impacts) the effects of changing weather patterns on human societies and natural ecosystems and possible adaptation strategies; and (response strategies) possible strategies for reducing greenhouse gas emissions. By framing impacts as the effects on human societies and their possible adaptation responses, the IPCC contributed to a narrative that viewed greenhouse gas emissions as the problem, and reductions in those emissions as the primary solution. By contrast, there is pressure from several directions to adopt an alternative framing that views greenhouse gases as only one of multiple causes (natural and anthropogenic) leading to changing local and regional weather patterns and to disruptions of human resource management strategies. From this perspective, strengthening local, national, regional, and global institutions for coping with the effects of inevitable flood, famine, drought, and desertification may be the most appropriate form of response to climate change.

Another aspect of impact framing involves the classification and categorization of impacts. Different categories of impacts may reflect different perceived uncertainties in how impacts are evaluated. For example, Fankhauser and Tol differentiate between market and non-market impacts, and between impacts for which a willingness-to-pay method can be used and those for which it cannot, based largely on their view of the perceived uncertainties of different economic methods (Fankhauser and Tol 1996). Different categories of impacts may also reflect perceived differences in how policy formulation and implementation should treat different kinds of environmental and social outcomes.

The frequent subdivision of impacts into those affecting natural systems and those affecting human systems also reflects differences in the degree of objectivity attached to the natural and social sciences in contemporary policymaking. In general, the evaluation by social scientists of the impacts of environmental changes on human societies is recognized as a value-laden process. Which changes matter, in other words, is always recognized as a potentially contested issue. On the other hand, the evaluation by natural scientists of the impacts of human actions on natural systems is typically viewed as unproblematic, even though, as Tony Patt, Alex Farrell, and Terry Keating point out in their research for GEA, natural science impact assessments must also make value judgments about what changes matter (Patt 1997; Farrell and Keating 1998; Keating and Farrell 1998; Patt 1998). To come at this from another direction, nature, to a much greater degree than society or the economy, has been bounded off in much of contemporary policymaking as a domain about which experts are qualified to speak (Jasanoff 1990). Thanks to this narrowing of expertise, assessments of natural impacts can often achieve a higher degree of coherence and authority in public discourse than assessments of social impacts—even when similar assumptions pervade the underlying data (Patt 1998). Such differences in authority may influence not only the definition of impacts but also the techniques for analyzing them (see below).

Methods of valuation

While previous GEA research and last year's summer workshop suggested the importance of framing as a conceptual tool, they also revealed the difficulty that can arise in trying to grasp just what constitutes differences in frame and how frames can influence the design of assessment processes. To refine our approach to this issue, we intend to focus during the second half of this year's workshop on the specific question of how framing is embedded in specific analytic choices, such as methods for valuing impacts.

The assessment of environmental risk necessarily involves some "method" of assigning value to changes in natural or human systems. That, in important respects, is what the concept of risk (or precaution) implies—something valued is (or may be) in danger of change, possibly for the worse. The method used to assign value may not be quantitative, it may not be formalized, it may not even be formalizable, but it is there. By excavating the methods used to value change in impact assessment, we may gain important insights into underlying conceptions of what changes matter; in short, how frames and assessments fit together.

Valuation begins with the selection of particular impacts. At the most basic level, relative valuations of different kinds of impacts can be expressed in the objects of valuation exercises. Keating and Farrell note, for example, in their research for GEA, that U.S. and European impact assessments for tropospheric ozone pollution focus almost exclusively on very different kinds of impacts: U.S. assessments on human health; European assessments on vegetation degradation (Keating and Farrell 1998). Iles (1998) illustrates similar dynamics in his study of acidification impact frames in Europe.

The selection of particular impacts can also reflect judgments about the reliability of preexisting analytic methods. Pascal Bader notes, for example, in his work for GEA, that the availability of economic valuation techniques tends to give greater weight to market impacts than to non-market impacts. Both Bader and Patt also note that formal natural science methods for assessing impacts on nature tend to be given greater weight in public discourses than formal social science methods for assessing impacts on society (Bader 1998; Patt 1998). As noted above, this tends to reflect the greater perceived objectivity of the natural sciences relative to the social sciences. By the same token, inexplicit or non-formal techniques of valuation (e.g., legislative decisions to attach transcendental value to human life or health, as discussed, for example, by Sagoff 1988) receive the least weight in formal assessment processes.

Methods of valuation may be linked to broader social framings through the methods used to calculate change, analytically or otherwise. What assumptions, for example, are made about the baseline scenario (or status quo) from which such change is to be measured, and about the direction of change? A relevant example is that the impacts of climate change on the global economy may be valued quite differently if expressed as a 2% decrease in global GNP in the year 2050 (calculated from a baseline scenario that grows each year) or as the difference between a 70% and a 72% increase in global GNP in the year 2050 (calculated from current global GNP levels).
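To make the baseline point concrete, the following is a minimal numeric sketch. The figures (today's GNP indexed to 100, an assumed 72% no-impact growth to 2050, and an assumed loss of 2% of 2050 output) are illustrative assumptions, not results from this paper; the sketch only shows how the same projected future can be reported either as a small percentage loss against a grown baseline or as slightly slower growth measured from today.

```python
# Minimal sketch with illustrative, assumed figures (not from the paper):
# the same projected climate damage can be framed against a grown 2050
# baseline or against current GNP, with very different rhetorical force.

current_gnp = 100.0          # index today's global GNP to 100 (assumption)
baseline_growth = 0.72       # assumed no-impact growth to 2050: +72%
damage_share_2050 = 0.02     # assumed impact: 2% of 2050 baseline output

baseline_2050 = current_gnp * (1 + baseline_growth)        # 172.0
impacted_2050 = baseline_2050 * (1 - damage_share_2050)    # 168.56

# Frame 1: loss measured against the grown 2050 baseline.
loss_vs_baseline = (baseline_2050 - impacted_2050) / baseline_2050
print(f"{loss_vs_baseline:.1%} decrease relative to the 2050 baseline")        # 2.0%

# Frame 2: the identical future reported as growth from current levels.
growth_with_impacts = impacted_2050 / current_gnp - 1
print(f"growth of {growth_with_impacts:.1%} instead of {baseline_growth:.0%}")  # 68.6% vs 72%
```

Both statements describe the same 2050 economy; which one an assessment leads with is itself a framing choice about the baseline and the direction of change.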

IMPACTS IN POLICY: DYNAMICS AND INSTITUTIONALIZATION

Institutionalization is essential to the orderly working of policy systems. As complex communications processes, assessments in particular critically depend on the assessors' institutional capacity to produce, validate, and authoritatively transmit information produced through assessments. This capacity is also essential to processes of collective learning: without institutionalized ways of responding to experience, we would lack the ability to interpret or reflect on circumstances. In an uninstitutionalized world, each signal would be a random, hence meaningless, event. Yet, as our introductory example of CFCs suggests, institutionalized problem definitions are by their nature resistant to critical investigation and redefinition.

The unexplained death of bald eagles, a chemical explosion in India, the eruption of ethnic violence in the Punjab, low-yielding rice crops in Indonesia, and the repeated failures of policies to curb deforestation and desertification in West Africa: recent writings suggest that these very different "outcomes" can all be linked to uncritically institutionalized framings of environmental issues and problems (Lansing 1991; Shiva 1991; Jasanoff 1994; Brockington and Homewood 1996; Fairhead and Leach 1996). Institutionalization thus appears as one of the most important issues for investigation in theoretical work on impact assessment.

In thinking about institutions for impact assessment, it is important to go beyond the formal mandates of bodies such as the US Environmental Protection Agency (EPA) or the IPCC, and to ask how, in their day-to-day practices, such bodies define, analyze, learn about, or reformulate particular framings of environmental impacts. For example, different local or national traditions of decision-making may influence the degree to which nonexperts are given voice in proceedings that are bounded off as "scientific" or "technical"; further, procedural choices, such as the degree of access to decision-making documents and bodies, may constrain the ways in which impact framings are questioned or reviewed. Work on decision-making cultures has shown that the connections between procedural choices and decision outcomes can be quite subtle. Thus, although US decision processes provide exceptionally generous latitude for public participation, the commitment to particular modes of rational discourse (e.g., quantitative risk assessment) in effect excludes certain viewpoints and may foreclose certain possibilities for reframing issues (Jasanoff 1986, 1990).

Another contribution that assessment institutions make is to characterize and bound uncertainty in ways that enable action. As Miller et al. point out in their work for GEA last year, uncertainty can emerge dynamically from numerous different parts of the communication processes that constitute modern assessments. Even the assessment of natural impacts, for example, necessitates the design of decision-making processes in ways that will be viewed as broadly legitimate by relevant communities (Miller et al. 1997). In our discussions this year, it will be important to discuss how the legitimating practices of assessment institutions, such as public and scientific review procedures, address the multiple forms of possible uncertainty in impact assessments. To what extent do assessment institutions appear capable of learning from past failings and institutionalizing new responses?

COMMUNICATING IMPACTS: CREDIBILITY AND UPTAKE

Viewing assessments as dynamic processes for communicating about the risks of environmental change leads naturally to issues of trust and credibility. How are trust and credibility assured in particular communications contexts? How can assessments enhance the credibility of knowledge and contribute to the building of trust among scientists, policymakers, and citizens?


Last year, considerable GEA research highlighted the importance of recognizing that the credibility of scientific knowledge could not be assumed to be identical to the certainty that scientists attach to it (Miller et al. 1997). As the historian of science Steven Shapin has noted, credibility and trust are secured very differently among experts, between experts and policymakers, and between experts and the lay public (Shapin 1996). Credibility and trust may also be secured quite differently among different social groups and in different legal and political cultures (Jasanoff 1986; Nelkin 1992). For instance, Kandlikar and Sagar showed in their GEA work on Indian climate science last year that knowledge gained in one cultural setting may not be transferable without loss of credibility to another (Kandlikar and Sagar 1997). This year, GEA research has expanded upon last year's findings in two important ways: first, by exploring in more detail what is meant by trust and credibility; and, second, by exploring some of the specific dimensions of assessments that affect the credibility of scientific knowledge as it "travels" from one culture to another.

What does it mean for scientific knowledge to be credible? And how is credibility secured? GEA research suggests that answers to these questions are always situationally specific in several ways. First, the determinants of credibility depend, in part, on who is communicating with whom. We have already noted that the credibility of communications among experts, between experts and policymakers, and between experts and the lay public may involve very different criteria. Thus the same assessment may be viewed quite differently by different groups. Clark Miller, in his research for GEA, found that the same audiences may judge the credibility of assessments carried out by different groups according to different criteria. Brazilian scientists predicting seasonal climatic variability were held to much higher standards of accuracy than so-called "rain prophets" (Miller 1998).

The willingness of the Brazilian public to give greater latitude for error and uncertainty to the rain prophets than to the scientists reflects another very important aspect of trust and credibility revealed by this year's GEA research: the level of credibility attached to a particular knowledge claim in a particular circumstance will depend on the risk assumed by the recipient in either accepting or rejecting that knowledge. Considerable latitude for error may be more acceptable in situations where little is at stake. By contrast, greater certainty may be required before knowledge is allowed to affect decisions that could influence highly valued concerns. Or, as VanDeveer points out in his research for GEA, policymakers may be willing to accept (or at least appear to accept) scientific knowledge produced by others if they fear that rejecting those knowledge claims may lead to adverse consequences. In Eastern Europe, this has led many countries to rapidly adopt Western framings of environmental issues—even when there may be clear awareness of their inappropriateness for local circumstances—in order to help secure Western environmental aid and/or accession to the European Union (VanDeveer 1998).

Previous writing on the credibility of scientific knowledge has focused mainly on the processes by which that knowledge was produced and communicated to user communities. Less time has been spent examining the ability of audiences to critically assess knowledge claims made by others for their reliability and relevance to those communities' decision-making contexts and goals. In GEA discussions, however, scientists and technical experts have regularly noted the importance of their own ability to evaluate new knowledge claims against basic physical criteria, already known facts, and familiar processes of peer review. GEA work on the uptake of Western environmental assessments and the extension of assessment communities to developing countries suggests that this kind of critical engagement with knowledge claims may also play an important role among policymakers and citizens, as well as scientists (Kandlikar and Sagar 1997; Miller 1998; VanDeveer 1998). Such findings are consistent with recent research on the interactions between local and supralocal or translocal knowledge communities. For example, the ability of citizens to manipulate data themselves and to reconcile the production of scientific claims with their own knowledge about local dimensions of the issue can play an important role in their choices of whether or not to accept knowledge claims made by others, as well as their satisfaction with the choices of others based on that knowledge (Fischhoff 1996; Cash 1998; Moser 1998; Wynne 1995).

Seen in the light of this research, the uptake by communities of new knowledge claims, new methods for assessing impacts, and new frames poses as many challenges to assessors as the production of knowledge and the definition of methods. One approach that assessors are increasingly using to respond to these challenges is to increase the participation of user communities in the production and validation of knowledge in assessment processes. Here again, new GEA research, building upon last year's findings, has begun to identify some of the ways in which participation can enhance trust and credibility. Farrell and Keating, for example, found in their research on tropospheric ozone regulation in the U.S. and Europe that the personal involvement of various individuals in the production of knowledge claims helped secure the belief that those claims did not reflect biases against those individuals' interests (Farrell and Keating 1998). Botcheva, in her research on Eastern Europe, found that the participation of various individuals helped secure the credibility of particular assessments by mobilizing social networks in which those individuals were already trusted and credible members (Botcheva 1998). Miller, in his research on developing countries, found that participation helped secure trust and credibility by allowing participants to work out standards and practices for mutually credible communication (Miller 1998). Participation led in this way to the formation of new "knowledge communities" or "communities of belief" (Miller, Jasanoff et al. 1997; Jasanoff and Wynne 1998).

Overall, then, assignments of trust and credibility should be seen as provisional, not final, judgments. They may change over time as various conditions change. Judgments of credibility can also involve significant gradations, from outright rejection, to acceptance under duress, to acceptance for low-risk but not high-risk decisions, to full belief. These judgments depend on (and may change as a result of changes in) such factors as: (1) characteristics of the process of knowledge production and validation; (2) the social relationship between producer and user (e.g., how rejection or acceptance of knowledge claims will affect that relationship); (3) the uses to which the knowledge is to be put; (4) the capacity (perceived or real) of the user to verify or critique knowledge claims; and (5) the participation of users in knowledge production and validation.
These findings suggest a number of considerations that we would like to explore further in connection with impact assessment processes. First, they suggest the importance of long-term relationships. Trust and credibility are built up and maintained over long periods of time as impact frames and results are institutionalized into decision-making processes. This may have both advantages and disadvantages. Impact assessments may gain credibility more rapidly by playing into already existing frames and broad patterns of establishing trust and credibility. In so doing, however, they may be constrained in such a fashion that they do not adequately reflect the full nature of the problem at hand or do not enroll all of the relevant social groups.

Second, they suggest the importance of participation to the success of assessments—although again with a caveat. Participation can enhance trust and credibility if the modes of participation adequately reflect the expectations of those participating. If participants view either their own participation or someone else's as having been inappropriate, then backlash can occur.

Finally, they suggest that credibility is neither static nor single-valued. In particular, apparent acceptance of knowledge claims need not reflect shared consensus about the underlying science or the underlying views of nature and society. Rather, it may simply reflect an unwillingness, inability, lack of opportunity, or lack of interest in making a critique (Wynne 1995).

QUESTIONS TO CONSIDER

We describe below some of the questions that we hope to address over the course of the workshop. We also propose three background readings which, in conjunction with this theme paper, will, we hope, help orient the week's discussions. We look forward to seeing you in Bar Harbor.

Methods and techniques of valuation

What techniques and methods are used to value environmental impacts? Who uses them? What assumptions, if any, do they incorporate about social organization or function? How do they come to appear objective or taken for granted? How are choices among valuation techniques made? Whose values do dominant valuation techniques represent? Can we institutionalize processes by which the biases, values, and social assumptions embedded in various choices are made more transparent and subjected to periodic review and reassessment?

Institutionalization

How do institutions charged with responding to impacts create persuasive causal narratives (frames) out of complex historical processes to guide their actions? How in particular do they select relevant participants and audiences, and how do they organize deliberation and debate? What are the consequences of making such choices for impact assessment? How receptive are institutions to new knowledge about impacts, and what mechanisms do they incorporate for testing and redefining impact frames? Can learning about impact assessment lead to more robust, equitable, and value-sustaining or enhancing social responses to environmental change? How might such learning be promoted?

Policy uptake and public understanding

Here we propose to discuss two sets of questions concerning the communication of impact assessments. The first set concerns communicating about impact assessments amongst scientists, social scientists, and government officials. The second set concerns communicating about impacts among assessors, users, and broader publics.


How are discussions about impact assessments conducted among scientists and social scientists, and among assessors and those who study assessments? What is the value of such discussion? Can this value be better achieved and institutionalized over time, and, if so, how?

How should communities of assessors and users be communicating about impacts? What are the different ways the IPCC might cast its impact assessments? What are the different techniques and methods available? What are the potential needs for knowledge of impacts and how are these needs identified? How might the IPCC process be generalized or serve as a model for broader impact response processes? What do these alternatives imply about who should be involved in communicating about the IPCC assessments, their form, their content, and their use?

SELECTED BIBLIOGRAPHY OF BACKGROUND READINGS

The first part of the workshop will involve three in-depth discussions of background readings, designed to help the group elicit and illuminate the relevance to impact assessment of some of the more theoretical questions raised by various social science literatures. The following is an annotated bibliography of the three readings, giving a brief introduction to each set of questions raised above. We plan to use these to initiate the discussions at Bar Harbor. Participants should feel free to raise additional themes and questions of interest as the discussion proceeds.

Issue framing

Cronon, W. (1992). "A Place for Stories: Nature, History, and Narrative." Journal of American History (March): 1347-1376.

In this essay on the historiography of the Great Plains, Cronon presents and critiques alternative "stories" that historians have told about the relationship between human and natural potentialities in that region. A story, as discussed by Cronon, is in effect a particular framing of events, incorporating tacit ideas of causation and agency. For instance, was the Dust Bowl of the 1930s a natural disaster that tested individual and community spirit, or was it only the most recent version of a cyclical progression that has required human beings in the region to adapt continually to nature's vagaries? While remaining sensitive to the possibility of multiple framings, Cronon eschews relativism and explores how the meaningless events of natural reality acquire ordered, social reality through story-telling. His essay is thus especially relevant to our own thinking about impact assessment as a form of structured, policy-relevant story-telling.

Knowledge dynamics

Fairhead, J. and M. Leach (1996). Rethinking the Forest-Savanna Mosaic: Colonial Science and its Relics in West Africa. The Lie of the Land: Challenging Received Wisdom on the African Environment. M. Leach and R. Mearns. London, International African Institute.

In this paper, anthropologists James Fairhead and Melissa Leach challenge existing narratives of deforestation in West Africa. Drawing on ethnographic and ecological research, they argue that forest patches around villages, which scientists have generally taken to be remnants of a previously more extensively forested landscape, actually reflect local cultivation of forests in areas that are naturally savannas. Their account of how Western scientists and policymakers arrived at and maintained their belief in widespread deforestation illustrates the dynamics of knowledge uptake, use, and change in scientific and social institutions. It is especially relevant to our discussions as a case study of contact among disparate knowledge communities, and the problems of knowledge transfer.

Trust and credibility

Krimsky, S. and A. Plough (1988). Environmental Hazards: Communicating Risks as a Social Process. Dover, MA, Auburn House. Chapter 3, The Case of Ice Minus.

This paper is a case study of the first deliberate release of a genetically modified organism in the United States. The case squarely raises issues of trust and credibility because it aroused intense public opposition, including successful legal action, in spite of virtually unanimous expert agreement that the release was "safe." In their analysis, the authors call attention to the different conceptions and expectations of risk analysis and risk communication held by the various parties to the controversy. A contrast is drawn between "scientific" and "cultural" rationality in responding to risk information. Whereas scientific rationality stresses formal, objective analysis, cultural rationality builds on historical experience and possibly broader framings of potential hazards. The paper also looks at the role of the media in communication about risk. It is relevant to our discussions in that it points to the possible coexistence of competing interpretive frameworks, each with its own rationality.


REFERENCES

Bader, P. (forthcoming). Targets and Strategies: The Role of Economic Assessments in European Climate Policy. 1998 ENRP Discussion Paper, Harvard University.

Botcheva, L. (forthcoming). Doing is Believing: Participation and Use of Economic Assessments in the Approximation of EU Environmental Legislation in Eastern Europe. 1998 ENRP Discussion Paper, Harvard University.

Brockington, D. and K. Homewood (1996). Wildlife, Pastoralists, and Science: Debates Concerning the Mkomazi Game Reserve, Tanzania. The Lie of the Land: Challenging Received Wisdom on the African Environment. M. Leach and R. Mearns. London, International African Institute.

Cash, D. (forthcoming). Assessing and Addressing Cross-Scale Environmental Risks: Information and Decision Systems for the Management of the High Plains Aquifer. 1998 ENRP Discussion Paper, Harvard University.

Fairhead, J. and M. Leach (1996). Rethinking the Forest-Savanna Mosaic: Colonial Science and its Relics in West Africa. The Lie of the Land: Challenging Received Wisdom on the African Environment. M. Leach and R. Mearns. London, International African Institute.

Fankhauser, S. and R. Tol (1996). "Climate Change Costs: Recent Advancements in the Economic Assessment." Energy Policy 24(7): 665-673.

Farrell, A. and T. Keating (forthcoming). Multi-Jurisdictional Air Pollution Assessment: A Comparison of the Eastern United States and Western Europe. 1998 ENRP Discussion Paper, Harvard University.

Fischhoff, B. (1996). "Public Values in Risk Assessment." Annals of the American Academy of Political and Social Science 545: 75-84.

Iles, A. (forthcoming). The Evolution of Acidification Impact Frames in Europe: Interactions Between Regional and National Assessment of Forest Conditions. 1998 ENRP Discussion Paper, Harvard University.

Jasanoff, S. (1986). Risk Management and Political Culture.

Jasanoff, S. (1990). The Fifth Branch: Science Advisers as Policymakers. Cambridge, MA, Harvard University Press.

Jasanoff, S. (1994). Learning from Disaster: Risk Management After Bhopal. Philadelphia, PA, University of Pennsylvania Press.

Jasanoff, S. and B. Wynne (1998). Science and Decision-making. Human Choice and Climate Change: The Societal Framework. S. Rayner and E. Malone. Columbus, OH, Battelle Press.

Kandlikar, M. and A. Sagar (1997). Climate Change Science and Policy: Lessons from India. Cambridge, MA, GEA, Harvard University.


Keating, T. and A. Farrell (forthcoming). Problem Framing and Model Formulation: The Regionality of Tropospheric Ozone in the US and Europe. 1998 ENRP Discussion Paper, Harvard University.

Lansing, J. S. (1991). Priests and Programmers: Technologies of Power in the Engineered Landscape of Bali. Princeton, NJ, Princeton University Press.

Long, M. and A. Iles (1997). Assessing Climate Change Impacts: Co-Evolution of Knowledge Communities and Methodologies. Cambridge, MA, GEA, Harvard University.

Miller, C. (forthcoming). Extending Assessment Communities to Developing Countries. 1998 ENRP Discussion Paper, Harvard University.

Miller, C., S. Jasanoff, et al. (1997). Shaping Knowledge, Defining Uncertainty: The Dynamic Role of Assessments. Cambridge, MA, GEA, Harvard University.

Moser, S. (forthcoming). Talk Globally, Walk Locally: The Cross-Scale Influence of Global Change Information on Coastal Zone Management in Maine and Hawai'i. 1998 ENRP Discussion Paper, Harvard University.

Nelkin, D. (1992). Controversy: Politics of Technical Decisions. Newbury Park, CA, Sage Press.

Patt, A. (1997). Assessing Extreme Outcomes: The Strategic Treatment of Low Probability Impacts of Climate Change. Cambridge, MA, GEA, Harvard University.

Patt, A. (forthcoming). Analytic Frameworks and Politics: The Case of Acid Rain in Europe. 1998 ENRP Discussion Paper, Harvard University.

Sagoff, M. (1988). The Economy of the Earth. Cambridge, Cambridge University Press.

Shapin, S. (1996). "Cordelia's Love: Credibility and the Social Studies of Science." Perspectives on Science: Historical, Philosophical, Social 3(3): 255-275.

Shiva, V. (1991). The Violence of the Green Revolution: Third World Agriculture, Ecology, and Politics. London, Zed Books.

VanDeveer, S. (forthcoming). European Politics with a Scientific Face: Transition Countries, Integrated Environmental Assessment, and LRTAP Cooperation. 1998 ENRP Discussion Paper, Harvard University.

Wynne, B. (1995). Misunderstood Misunderstandings: Social Identities and the Public Uptake of Science. Misunderstanding Science? The Public Reconstruction of Science. A. Irwin and B. Wynne. Cambridge, UK, Cambridge University Press.
