RAGNAR FJELLAND
FACING THE PROBLEM OF UNCERTAINTY
(Accepted in revised form May 23, 2001)
ABSTRACT. In a certain sense, uncertainty and ignorance have been recognized in science and philosophy from the time of the Greeks. However, the mathematical sciences have been dominated by the pursuit of certainty. Therefore, experiments under simplified and idealized conditions have been regarded as the most reliable source of knowledge. Normally, uncertainty could be ignored or controlled by applying probability theory and statistics. Today, however, the situation is different. Uncertainty and ignorance have moved into focus. In particular, the global character of some environmental problems has shown that these problems cannot be disregarded. Therefore, scientists and technologists have in many ways come into a new situation. The Chernobyl accident is a dramatic example; problems such as a possible greenhouse effect, a possible reduction of the ozone layer, and so on are all of the same type. These are problems of a totally different kind from those scientists and technologists are traditionally trained to deal with. In these cases, the standard use of statistics has to change, the burden of proof should be reversed, one should draw on different kinds of expertise, and, in general, science should be “democratized.”
KEY WORDS: philosophy, philosophy of science, risk, science and society, science studies, uncertainty
INTRODUCTION: UNCERTAINTY AND CONTROVERSY
In 1978 it was discovered that Love Canal, a suburban neighborhood of Niagara Falls in the US state of New York, was built over the waste disposal site of a former chemical factory. New York State’s Department of Health found more than eighty different chemicals at the chemical waste site. Ten of these substances were known to be potentially carcinogenic (Levine, 1982: 41). The course of events followed a pattern that later became standard in such instances: The director of health declared a state of emergency due to the danger to the public health. He justified this on the grounds that he was convinced that the toxic chemical substances from the waste site represented a danger to the residents in the area. Soon after that, the governor of New York offered to buy the houses situated nearest to the dump site and to help the residents relocate. The Department of Health immediately began a massive investigation into the health of the residents; this included blood tests, questionnaires, and an inquiry into the incidence of ill health among the inhabitants. Early in the Fall of 1978 the preliminary results of the
study were made public; government representatives assured the residents that the rest of Love Canal was a safe place to live (Paigen, 1982: 29). Dr. Beverly Paigen, a researcher with a nearby government institute, became interested in the case. She had done research on the relationship between chemicals and cancer, and in cooperation with the residents she made preliminary investigations. The results showed a clear geographic distribution in the incidence of ill health, and it seemed to follow the outline of old drainage ditches. Paigen divided all the homes into two categories: “Wet houses” were houses that were built over or adjacent to the drainage ditches. “Dry houses” were all the other houses. The result was striking. Women in “wet” houses had three times as many spontaneous abortions as women in “dry” houses. Birth defects, asthma, urinary tract infections, and the number of psychiatric cases were many times higher in “wet” areas than in “dry” ones. These results also had implications for the kinds of remedial actions that should be taken: people in the “wet” homes should be evacuated first. This went against the course of action the Department of Health had chosen. They had purchased the houses that were closest to the waste site, and were going to let the rest of the houses remain. To make a long and dramatic story short, Beverly Paigen and the residents of Love Canal won their case. In the summer of 1980, Congress allocated additional funds, authorizing the President to use up to 15 million US dollars to relocate the remaining residents. After a number of negotiations between different parties, Love Canal was for the most part vacated in 1981. Beverly Paigen gave her version of the controversy in an article a few years later. She relates that she had originally believed that the case was a matter of scientific disagreement that could be resolved by having the involved researchers come together and compare the data, experimental design, and statistical analyses. But she was wrong. She continues,
But I should learn that facts made little difference in resolving our disagreements – the Love Canal controversy was mainly of a political nature, and it raised a series of questions that had more to do with values than science (Paigen, 1982: 29).
The research group had claimed that it had used a “conservative” scientific approach. Paigen relates that she first saw the problem when, in a conversation with a representative of the research team, she discovered that they disagreed about how this should be interpreted in every single case they discussed. The researcher emphasized that those who maintained that there was a relationship between the chemicals from the waste site and the illnesses in Love Canal would have to take the responsibility for spending even more than the 40 million US
dollars already invested in safeguarding the waste site and moving the residents. To him, this implied that the researchers must be very cautious in concluding that Love Canal was an unsafe place to live. Paigen, on the contrary, claimed that since a mistake could result in severe consequences for the health of the residents, the researchers must be very careful in concluding that Love Canal was a safe place to live. She insisted that underestimating the danger was worse than groundless fear (Savan, 1988: 59).
In this paper, I shall try to answer two questions: 1) Does uncertainty represent something genuinely new in science? 2) What are the implications of recognizing uncertainty for science and science policy?
THE PURSUIT OF CERTAINTY
Let me start with the first question. No doubt, one strong driving force in Western science and philosophy is the pursuit of certainty. This pursuit is motivated by a mathematical scientific ideal, and we can draw a line from Plato, via Galileo and Einstein, to Hawking and “theories of everything.” The empirically minded reader may find it strange that I associate modern science with Plato’s theory of ideas. Nevertheless, as the French historian of science Alexandre Koyré has pointed out, Platonism represents an essential aspect of Galileo’s science and philosophy of science (Koyré, 1943: 34). Furthermore, Galileo’s importance to modern science is indisputable. I want to point out, though, that the historical account is not essential. For the reader not interested in the history of science, it is sufficient to point to the importance of idealization in the mathematical sciences. The common denominator is mathematics. Plato’s theory of knowledge was inspired by geometry as the paradigm of knowledge, and, according to Galileo, “the book of nature” is written in the language of mathematics. However, there is an important difference between Plato on the one side and Galileo and modern science on the other. Whereas Plato’s reality was immaterial, Galileo’s reality was material. Galileo called objective reality “primary sense qualities.” Today we would rather use the term “matter.” The essential property of matter is that it can be described mathematically. But this property presupposes another property that is one of the most important preconditions of modern science: matter can be divided. Therefore, one of the most important characteristics of modern science is that complex objects are reduced to their parts, and ultimately to simple elements. When the whole has been reduced to its parts, the parts may be put together again.
Galileo recognized that a mathematical description requires measurements, and that measurements require controlled laboratory experiments. The aim of the controlled laboratory experiment is to keep all or most factors constant. Only one or a few factors are varied at a time. These ideal conditions increase certainty. According to the traditional view, controlled experiments are merely simplification and purification of natural situations. We have to leave out some factors to make the problems manageable. Afterwards we “add back” the factors that were left out, and in this way we come closer to natural situations. However, we do not only remove complicating factors. We impose artificial conditions on the object as well, because the ideal conditions are normally not realized in everyday life. Therefore, “adding back” may not be an easy task. There is an alternative, though. We may realize the ideal conditions through technology. From this point of view, technology is a way of reducing uncertainty. It is interesting to note that Galileo was aware of the intimate relationship between the ideal conditions required to carry out experiments, and technology. In Dialogues Concerning Two New Sciences, he pointed out that his own results had been proved in the abstract, and when applied to concrete cases they would yield false results. The horizontal motion would not be uniform, a freely falling body would not move according to the law, and the path of a projectile would not be a parabola. However, speaking of the difficulties arising from these limitations, he immediately adds,
. . . in order to handle this matter in a scientific way, it is necessary to cut loose from these difficulties; and having discovered and demonstrated the theorems, in the case of no resistance, to use them and apply them with such limitations as experience will teach. And the advantage of this method will not be small; for the material and shape of the projectile may be chosen, as dense and round as possible, so that it will encounter the least resistance in the medium (Galileo, 1638/1954: 251).
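To make the role of idealization concrete, the following minimal sketch (not from Galileo or the original text) compares the range of a projectile computed under the idealized assumption of no resistance with the range when air resistance is “added back.” The launch speed, angle, and drag coefficient are illustrative assumptions.

```python
import numpy as np

# Idealized projectile motion vs. motion with air resistance "added back."
# Launch speed, angle, and drag coefficient are assumed values chosen only
# for illustration.
g = 9.81                          # gravitational acceleration, m/s^2
v0, angle = 50.0, np.radians(45)  # assumed launch speed and angle
dt = 0.001                        # time step for a simple Euler integration

def horizontal_range(drag_per_mass):
    """Integrate the motion until the projectile returns to the ground."""
    x, y = 0.0, 0.0
    vx, vy = v0 * np.cos(angle), v0 * np.sin(angle)
    while y >= 0.0:
        speed = np.hypot(vx, vy)
        ax = -drag_per_mass * speed * vx        # quadratic air drag
        ay = -g - drag_per_mass * speed * vy
        vx, vy = vx + ax * dt, vy + ay * dt
        x, y = x + vx * dt, y + vy * dt
    return x

print(f"no resistance (parabola): {horizontal_range(0.0):6.1f} m")
print(f"with air resistance:      {horizontal_range(0.002):6.1f} m")
```

Even a modest drag term shortens the range considerably, which is why theorems proved “in the case of no resistance” must be applied “with such limitations as experience will teach.”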
To put it simply: Technology to a large extent realizes the ideal conditions of the laboratory at a larger scale. We can see how this works in biotechnology by looking at the so-called “green revolution” of the 1960s as an example. It involved the development and introduction of new plant varieties that gave larger yields per acre than the traditional varieties did. This first happened with wheat in Mexico, and later with wheat and rice in Asia. The new “high-yielding varieties” could make use of fertilizer in far higher concentrations than the traditional varieties could, and they had noticeably faster maturation rates. One crucial factor was that the plants received the correct amount of water at the correct time. They were also less resistant to a number of diseases and parasites. To summarize: If the new varieties were to give higher yields, a “technological package,”
in the form of the correct amounts of fertilizer, water, pesticides, and plant protection products, was needed. If these things were not in place, the process could go wrong. The desired effects can, in other words, only be achieved if one also has control over the environment. What was needed to carry out the “green revolution” was to realize controlled laboratory conditions in agriculture.1
UNCERTAINTY CANNOT BE IGNORED
Although we can control parts of nature in this way, the inescapable problem is that there will always be something outside the system we control. A factory is a typical example of a controlled system. However, the control is normally far from perfect. First, the production process itself is full of risks, for example the risk of explosions and chemical hazards. Second, there is the area around the factory. Traditionally this was heavily polluted. Although regulations have reduced local pollution, the problem is often moved to other places. In particular, heavily polluting production is often moved from the rich countries to third world countries, where regulations are absent or less strict. Third, there are the uses of the products, the disposal of the worn-out products, and so on. Therefore, when an area is subject to technical control, there is always a large area that escapes control.
In what follows, I shall use the term “natural conditions” in contrast to the simplified and idealized conditions of the laboratory and the factory. However, the word “natural” does not imply that the conditions are prior to human intervention. Therefore, according to this terminology the conditions in Love Canal may be referred to as “natural.” The problem of “adding back” from the simplified and idealized conditions of the laboratory to natural conditions has been recognized by ecologists. Laboratory experiments have limited value in ecology because the artificial conditions sometimes prevent important natural effects from appearing and they may magnify incidental and trivial effects. To quote an ecologist:
Laboratory studies are effective in isolating a response to a factor but the response may not be ecologically relevant and the number of potential factors that could be investigated is so large that study of any isolated factors may be futile (Peters, 1991: 138).
1 Cf. the following quotation from Ian Hacking: “In fact, few things that work in the laboratory work very well in a thoroughly unmodified world – in a world which has not been bent toward the laboratory” (Hacking, 1992: 59). Hacking refers to Latour, 1987.
If laboratory experiments fail, field experiments might do the job. They are something between a laboratory experiment and the natural system. Because they are closer to the natural systems, they are popular in ecology. However, it looks as if we get nothing for free. There is a trade-off between control of the conditions on the one hand and relevance to natural situations on the other: the better controlled the field experiments, the less relevant they are to natural situations.
The problem of uncertainty may also be formulated in the language of risk assessment. We must (at least) distinguish between two different situations: uncertainty and ignorance. When we have uncertainty, it means that we know what can go wrong (when we also know the probabilities, we are talking about risk). However, there are often situations where we have no idea of what can go wrong. These situations are characterized by ignorance. In risk assessment, it is desirable to reduce uncertainty to risk, because it enables the application of the mathematical methods of risk analysis (probability theory, statistics, and the like). This requires simplification and idealization, either in the form of experiments as described earlier or by applying mathematical models. However, we face a problem similar to the one in ecology: the reduction of uncertainty may increase ignorance (Wynne, 1992: 114).
Mathematically speaking, the problem is non-linearity. The mathematical sciences have, since the time of Galileo, largely concentrated on linear or approximately linear systems. One reason is that the analytical tools of mathematics can be used. However, when the interactions between the parts of a system or the factors determining a process are non-linear, the situation is changed. This was first observed in the study of chaos. Chaos is characterized by sensitive dependence on initial conditions (the “butterfly effect”): small uncertainties in the determination of the initial conditions of a system may increase exponentially until they are of the same magnitude as the parameters of the system. In that case, “adding back” does not work, and predictions are limited. This was the conclusion of Edward Lorenz’s classical article, “Deterministic Nonperiodic Flow,” from 1963, where he investigated a simple model of a natural system: the atmosphere. He concluded,
When our results concerning the instability of non-periodic flow are applied to the atmosphere, which is ostensibly non-periodic, they indicate that prediction of the sufficiently distant future is impossible by any method, unless the present conditions are known exactly. In view of the inevitable inaccuracy and incompleteness of weather observations, precise very-long range forecasting would seem to be non-existent (Lorenz, 1963/1989: 378).
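As a concrete illustration of sensitive dependence on initial conditions, here is a minimal sketch (not part of Lorenz’s article) that integrates Lorenz’s 1963 model twice from initial states differing by one part in a hundred million; the separation between the two trajectories grows until it is comparable to the size of the attractor itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lorenz's 1963 convection model with his classical parameter values.
def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0.0, 40.0, 4001)   # sample every 0.01 time units
run_a = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0],
                  t_eval=t_eval, rtol=1e-10, atol=1e-12)
run_b = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0 + 1e-8],
                  t_eval=t_eval, rtol=1e-10, atol=1e-12)

# The tiny initial difference is amplified roughly exponentially.
for t in (0, 10, 20, 30, 40):
    i = int(t / 0.01)
    separation = np.linalg.norm(run_a.y[:, i] - run_b.y[:, i])
    print(f"t = {t:2d}   separation = {separation:.2e}")
```

The exact numbers depend on the integrator and its tolerances, but the qualitative picture is the one Lorenz described: beyond a certain horizon, prediction requires knowing the present state exactly.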
However, organic nature is in general not chaotic, but complex. Complexity and chaos are not the same. It is sometimes said that complexity
arises at “the edge of chaos.” But they do have non-linearity in common, rendering impossible both “adding back” and exact predictions.
Let me return to my first question, whether uncertainty is genuinely new in science. The answer is simply “no.” From the very beginning of philosophy and science there were alternative schools of thought that emphasized uncertainty. The contemporaries of Plato, the Sophists, and even Plato’s own teacher, Socrates, stressed both uncertainty and ignorance. Therefore, we can draw an alternative line, from Socrates, via the Renaissance Humanism of Erasmus and Montaigne, to the present situation (cf. Toulmin, 1990). In a certain sense, the two aspects, certainty and uncertainty, are combined in the theories of probability and statistics. The birth of the mathematical theory of probability is usually dated to 1654, to a discussion between the two mathematicians Pascal and Fermat. Statistics was developed in the nineteenth century, and integrated into most empirical sciences during the twentieth century (Gigerenzer et al., 1989). Furthermore, the recognition of uncertainty is the very foundation of one of the most basic and influential theories of contemporary science, quantum mechanics. According to quantum mechanics, uncertainty cannot be eliminated for theoretical reasons.
However, what is genuinely new today is the recognition that uncertainty cannot be tamed or ignored. Previously, unintended side-effects of industrial production that were outside our control could to a large extent be ignored. However, the global character of some environmental problems has shown that there is no “outside”: the biosphere is finite. Therefore, scientists and technologists have in many ways come into a new situation. The Chernobyl accident is a dramatic example. However, problems such as a possible greenhouse effect, a possible reduction of the ozone layer, and so on are all of the same type. These are problems of a totally different kind from those scientists and technologists are traditionally trained to deal with (Funtowicz and Ravetz, 1993: 742).
SOME CONSEQUENCES OF RECOGNIZING UNCERTAINTY
I shall now address the second question, and elaborate some consequences of recognizing uncertainty.
Statistics
My first point concerns the use of statistics. It is theoretically almost trivial, but important in practice. One of the standard methods for establishing causal connections is the use of significance tests. We compare one group that has been exposed to the alleged cause with a similar group that has not been exposed to the same cause. This is based on Mill’s method of difference. However, if we do not have complete control of all the factors, we must have recourse to statistics. We start with a “null hypothesis”: We assume that the observed difference between the two groups has come about by chance. Only if the probability of this happening by chance is lower than the significance level do we conclude that there is a real difference, and reject the null hypothesis. We may make two kinds of error due to the statistical nature of the problem: A type I error is rejecting a true null hypothesis. This is equivalent to claiming that there is an effect that in reality does not exist (“false positive”). A type II error is accepting a false null hypothesis. This is equivalent to overlooking an effect that really exists (“false negative”).
It is important to note that there is an asymmetry built into the standard uses of statistics concerning the two types of error. We can only claim that there is a significant difference when there is a high probability that the difference cannot be blamed on statistical chance. In all other cases, we must refrain from claiming that there is a difference. This is similar to the burden of proof in criminal cases, where the defendant is presumed innocent until proven guilty. Doubt should benefit the defendant. When using statistical methods, we assume that there is no statistical difference until the opposite is proven. This is what the researchers from the Department of Health in Love Canal called a “conservative” scientific approach. They followed the traditional approach.
Although the relationship between a type I and a type II error depends on the specific problem, in general there is a trade-off between them: if we decrease the probability of one, the probability of the other increases. In traditional significance tests, the probability of a type I error is controlled (it is given by the significance level), thus leaving the probability of a type II error open. If researchers inform us that they have not found a statistically significant difference, it is important additional information to know how large the probability of overlooking a real difference is. In statistics textbooks, it is therefore emphasized that the probability of a type II error should always be evaluated. In practice, however, this rule is often violated. The Canadian marine biologist R. M. Peterman examined a total of 400 articles in two journals during the period 1987–1989. Of these, 160
contained at least one instance where a null hypothesis was not rejected, that is, where a significant difference had not been demonstrated. Of these, 83 articles contained recommendations as if the null hypothesis were true, in spite of the fact that only one article had evaluated the probability of a type II error. Peterman shows that in examples within marine biology, the probability of a type II error can easily reach 50%. This implies that one gives recommendations about situations under the assumption that there is no difference, when, in fact, there may be a 50% likelihood that one has overlooked a difference (Peterman, 1990: 8).
Let us imagine that a group of researchers is given the assignment of examining whether there is a difference in the incidence of illness between two groups, and they give the answer T0: “We have not found any (statistically significant) difference.” However, this information lacks something. Two possibilities come to mind here, either T1: “We have not found any difference, and we most likely would have found it if it existed,” or T2: “We have not found any difference, but we most likely would not have found it even if it existed.” Needless to say, T0 would carry much more weight if the correct interpretation were T1 than if it were T2. Therefore, it is a serious problem when researchers answer T0, the majority of politicians and others interpret it as T1, and the correct interpretation is T2. This was probably the case in many of the examples reported by Peterman, and it is not unreasonable to assume that this was also the case in Love Canal.
One possible reason for the neglect of type II errors is the fact that estimating a type II error is not just a matter of standard procedures. A type II error can only be calculated relative to an alternative hypothesis. An adequate estimate of a type II error should take several possible alternative hypotheses into consideration, and their plausibility and the costs of possible errors should be discussed as well. There is something healthy in this. To carry out these calculations, science has to be put into context. For example, what may be an acceptable cost of an error in one situation may be unacceptable in a different situation. To put the burden of proof on the one who asserts that there is an effect may be justified in basic science, but it turns out differently when used in applied science. In environmental questions, it is often the case that those affected by the pollution have the burden of proof, while those who pollute are given the benefit of the doubt. The traditional approach of minimizing type I errors favors the producer and puts the burden of proof on the consumer or victim of pollution. For example, when Beverly Paigen and the residents of Love Canal claimed that the toxic substances from the dump site were the cause of illness, it was they who had the burden of proof.
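The point can be made concrete with a small simulation. The sketch below is a hypothetical illustration, not the Love Canal data; the group size, effect size, and noise level are assumed values. It estimates by Monte Carlo how often a standard two-sample t-test at the 5% significance level overlooks a real difference.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 30             # residents sampled per group (assumed)
true_effect = 0.4  # true difference in mean illness score, "wet" vs "dry" (assumed)
sigma = 1.0        # spread within each group (assumed)
alpha = 0.05       # significance level: the type I error rate we control
trials = 10_000

missed = 0
for _ in range(trials):
    dry = rng.normal(0.0, sigma, n)
    wet = rng.normal(true_effect, sigma, n)
    _, p_value = stats.ttest_ind(wet, dry)
    if p_value >= alpha:    # null hypothesis not rejected
        missed += 1         # type II error: a real difference is overlooked

beta = missed / trials
print(f"type I error (set by design):   {alpha:.2f}")
print(f"estimated type II error (beta): {beta:.2f}")
print(f"estimated power (1 - beta):     {1 - beta:.2f}")
```

With these assumed numbers the type II error comes out well above 50%, so the honest report would be T2: no difference was found, but it would most likely not have been found even if it existed. Reversing the burden of proof amounts to requiring that beta, rather than alpha, be kept small.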
Some have argued that in such cases the burden of proof should be turned around. This is the basis for, among other things, the precautionary principle:
Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation (Rio Declaration, Principle 15).
What the precautionary principle means in practical political terms, and what it implies for the sciences, is far from clear. However, in the Love Canal case, one might shift the burden of proof from those who maintained that there was a causal connection between the toxic chemicals at the waste site and the health problems to those who maintained that no such connection had been established. This could be done by minimizing type II errors instead of type I errors.2
2 For example, Kristin Shrader-Frechette argues that type II errors should be minimized instead of type I errors (Shrader-Frechette, 1991).
Models
The use of statistics is the technical part of the problem. It is more difficult to come to terms with other kinds of uncertainty. For example, in the case of Love Canal, the controversy did not only concern the use of statistics. Any application of statistics must be based on some kind of model. In the case of Love Canal, the research group of the Department of Health had applied a model based on the basic assumption that the toxic wastes from the chemical site would spread radially outward, with the dump site as the center. According to this model, a decrease in the incidence of ill health was expected as the distance from the center increased. From a theoretical point of view, this assumption was not implausible. However, the research group should have been much more cautious in applying the model, and they should have been open to alternatives. The hypothesis that the toxic wastes might migrate along the drainage ditches was rejected without serious consideration (Savan, 1988: 57).
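To see how much the choice of model matters, consider the following sketch on synthetic data; the numbers are purely hypothetical, not the actual Love Canal measurements. Illness risk is generated by proximity to the drainage ditches, and the radial model then sees almost nothing, while the “wet”/“dry” comparison shows a clear difference.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic households: distance from the dump site, and whether the house
# sits on or next to an old drainage ditch. The ditches cut across the area,
# so "wet" status is unrelated to radial distance here. All rates are assumed.
n = 2000
distance = rng.uniform(0, 1000, n)     # metres from the dump site
wet = rng.random(n) < 0.3              # "wet" houses
p_ill = np.where(wet, 0.20, 0.05)      # illness risk depends only on "wet"
ill = rng.random(n) < p_ill

# Radial model: compare illness rates near and far from the dump site.
near, far = ill[distance < 500].mean(), ill[distance >= 500].mean()
print(f"radial model: near {near:.3f}  vs  far {far:.3f}")

# Drainage-ditch model: compare "wet" and "dry" houses directly.
print(f"ditch model:  wet  {ill[wet].mean():.3f}  vs  dry {ill[~wet].mean():.3f}")
```

Both analyses use the same data and the same statistics; only the assumed geometry of exposure differs, and that assumption decides what the analysis can detect.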
Experts trained in a field have a tendency to apply the kinds of models that conform to their field. The following example has been taken from Brian Wynne’s “Uncertainty and Environmental Learning”: In May of 1986 a cloud of radioactive material from the Chernobyl accident passed over Cumbria in northern England. Heavy rains caused a large amount of radioactive cesium to fall over an area used to raise sheep. The authorities in charge assured everyone that there was no cause for concern, but in spite of this, six weeks after the rains, a ban against selling meat from sheep that
had grazed in the area was imposed because of the high levels of radioactivity found in the meat. Experts claimed, however, that the radioactivity would rapidly decrease, and that the ban would be lifted in a few weeks. Yet, even after six years the level of radioactivity was so high in some of the affected areas that restrictions had to be upheld.
How could the experts be so wrong? Their predictions were based on extrapolations from the behavior of cesium in alkaline clay soil to the acid peat soil of Cumbria. Measurements showed that the dispersion of cesium in these types of soil was fairly similar, and on that basis they assumed that cesium would sink so far down into the ground that after a short period of time there would be no problem. This was based on the assumption that the radiation would come from the cesium in the soil and would be absorbed by people or animals who happened to be in the area. Under this assumption, it was the physical transport of cesium in the soil that was important. However, this assumption was wrong. The sheep got cesium in their bodies through the grass they ate. The important question was, therefore, not how the cesium was dispersed throughout the soil but whether it was absorbed into the vegetation. Here there proved to be a significant difference between alkaline clay soil and acid peat. In alkaline clay soil, cesium adsorbs onto aluminum silicate molecules so that it does not get absorbed into the vegetation, whereas in peat it remains chemically mobile and can therefore be taken up into the vegetation. The experts did not consider these possibilities, and that was the cause of their mistaken predictions (Wynne, 1992: 121).
Should not a model that took into consideration, for example, chemical properties, have been used from the outset? The answer is, of course, yes. But to understand why the experts made such an apparently elementary error we have to take into consideration that they had been trained as physicists. Physicists are used to thinking in terms of physical transport and radiation. Chemists are trained to think in terms of chemical reactions and chemical mobility. The problem is that it is not a part of professional training to learn about the limits of the models and methods of a field.
A serious obstacle to coming to terms with this problem is the fact that Thomas Kuhn’s description of the scientific community is to a large extent valid. We do not have to accept the more controversial parts of Kuhn’s theory in order to agree that scientists are trained within a “paradigm.” Parts of the paradigm will be the tacit knowledge that is imperative to everyday scientific work. This kind of knowledge cannot be articulated as explicit rules. Kuhn himself uses Michael Polanyi’s term “tacit knowledge” (Kuhn, 1970: 187). When experts deal with situations that fit into their paradigm, this works fine. But when confronted with situations that
are not so easily accommodated within the expert’s paradigm, it becomes a source of error. Because experts in the same field are trained within the same paradigm, they are usually blind to many of their own tacit assumptions. However, experts from other fields may immediately be aware of some of the tacit assumptions of the field. Therefore, in cases involving complexity and uncertainty, it is imperative to draw on various kinds of expertise. In particular, situations involving complex systems are new to most researchers. In the mathematical sciences one is trained to deal with idealized situations and use simple models. The physicist Per Bak tells a story to demonstrate how inadequate this way of thinking may be:
The obsession among physicists to construct simplified models is well illustrated by the story about the theoretical physicist asked to help a farmer raise cows that would produce more milk. For a long time, nobody heard from him, but eventually he emerged from hiding, in a very excited state. “I now have figured it all out,” he says, and proceeds to the blackboard with a piece of chalk and draws a circle. “Consider a spherical cow . . .” Here, unfortunately, it appears that universality does not apply. We have to deal with the real cow (Bak, 1997: 45).
In their important book Uncertainty and Quality in Science for Policy (1990), Silvio Funtowicz and Jerome Ravetz argue that science has to enter a “post-normal” phase to adequately address problems where uncertainty and “decision stakes” are high. In the book, they develop a conceptual scheme to deal with the new challenges. I shall not go into technical details, but just mention one aspect of “post-normal” science that is relevant to my discussion: the use of what they call “extended peer communities.” “Extended peer communities” implies an extension of the traditional scientific community to include non-experts as well. However, this does not mean that non-experts should invade the research laboratories and carry out research. It does mean, though, that non-experts should take part in discussions of priorities, evaluation of results, and policy debates. One reason for including non-experts is that they are sometimes closer to the problem. For example, persons directly affected by an environmental problem will have a keener awareness of its symptoms, and a more pressing concern with the quality of official reassurances, than those in any other role. Thus they perform a function analogous to that of professional colleagues in the peer-review or refereeing process in traditional science, which otherwise might not occur in these new contexts (Funtowicz and Ravetz, 1993: 752). The arguments in favor of extended peer communities are similar to Paul Feyerabend’s arguments for a democratization of science.3
3 Cf. “Laymen can and must supervise Science” (Feyerabend, 1978: 96).
I regard it as a
continuation of an important element in the Socratic tradition. We know that it was part of Socrates’s strategy to pretend that he was more ignorant than he actually was. By asking apparently naive questions of an expert, one may reveal tacit assumptions that the expert himself is not aware of.
Many scientists are skeptical of public debates about controversial scientific and technological questions, like nuclear power and genetically modified food, and allege that public opinion is often based on prejudice and lack of information. No doubt this is sometimes the case. But there are at least two reasons for not keeping these kinds of questions away from the public. First, non-experts may be wrong because they are prejudiced or lack the required information. But experts may also be wrong. Some of their errors may be corrected by bringing in non-experts. To put it simply: The public may be wrong because it is too far away from the technical problems, whereas experts may be wrong because they are too close. The “tunnel vision” of experts is at least as great a problem as the ignorance of non-experts. The second reason is that ordinary people are affected by the decisions that are made. The questions of global warming, the ozone layer, radioactive waste, and genetically modified food concern everybody, experts as well as non-experts. These questions are too important to be left only to the experts.
CONCLUSION
In this paper, I have addressed two questions: 1) Does uncertainty represent something genuinely new in science? and 2) What are the implications of recognizing uncertainty for science and science policy? My answer to the first question was that although uncertainty has been recognized since the Greeks, the present situation is genuinely new in the sense that uncertainty has moved into focus. I pointed to two consequences of this recognition, one concerning the use of statistics (and more generally the burden of proof) and one concerning the use of models (in particular idealized models). I argued that in some cases the burden of proof should be reversed, one should draw on various kinds of expertise, and science should be “democratized.”
It can be argued that these consequences do not influence science and “the scientific method,” but only science policy. In a certain sense this is true, but it depends on what is meant by “the scientific method.” What is affected is not science per se, but a dominating ideal of what science should be, emphasizing measurements, mathematics, idealized models, laboratory experiments, exact predictions, and reductionism. When it is recognized that this scientific ideal is too narrow even for the
mathematical (or “exact”) sciences, the toxicologist or ecologist should have few reservations about abandoning it as well. However, one might take a step further and argue that the root of uncertainty is complexity. Therefore, to come to terms with the new situation, a new science of complexity is required. An increasing number of authors argue in this way (for a small selection, see Nicolis and Prigogine, 1989; Bak, 1997; Auyang, 1998). This is an important question, but it goes beyond the scope of this paper.
ACKNOWLEDGEMENTS
A preliminary version of this paper was presented at the 11th International Congress of Logic, Methodology and Philosophy of Science, Krakow, 20–26 August 1999. I have benefited greatly from discussions with Silvio Funtowicz and Jerome Ravetz. However, they have not read this manuscript, and should not be held responsible for errors that might occur. I also want to thank Judith Ann Larsen, who has proofread the text and has suggested many improvements to the language. Finally, I want to thank the anonymous referees of the journal for useful comments.
REFERENCES
Auyang, Sunny Y., Foundations of Complex-System Theories (Cambridge University Press, Cambridge, 1998).
Bak, Per, How Nature Works. The Science of Self-Organized Criticality (Oxford University Press, Oxford, 1997).
Feyerabend, Paul, Science in a Free Society (NLB, London, 1978).
Funtowicz, Silvio and Jerome Ravetz, Uncertainty and Quality in Science for Policy (Kluwer, Dordrecht, 1990).
Funtowicz, Silvio and Jerome Ravetz, “Science for the Post-Normal Age,” Futures (September, 1993), 739–755.
Galilei, Galileo, Dialogues Concerning Two New Sciences (1638), translated by Henry Crew and Alfonso de Salvio (1914) (Dover Publications, New York, 1954).
Gigerenzer, Gerd, Zeno Swijtink, Theodore Porter, Lorraine Daston, John Beatty, and Lorenz Krüger, The Empire of Chance. How Probability Changed Science and Everyday Life (Cambridge University Press, Cambridge, 1989).
Hacking, Ian, “The Self-Vindication of the Laboratory Sciences,” in Andrew Pickering (ed.), Science as Practice and Culture (The University of Chicago Press, Chicago and London, 1992), 29–64.
Koyré, Alexandre, “Galileo and Plato” (1943), in A. Koyré, Metaphysics and Measurement (Johns Hopkins Press, Baltimore and London, 1968).
Kuhn, Thomas, “Postscript – 1969,” The Structure of Scientific Revolutions (University of Chicago Press, Chicago, 1970).
Latour, Bruno, Science in Action (Open University Press, Milton Keynes, 1987).
Levine, Adeline Gordon, Love Canal: Science, Politics, and People (Lexington Books, Lexington, MA, 1982).
Lorenz, Edward N., “Deterministic Nonperiodic Flow,” Journal of the Atmospheric Sciences 20 (1963), reprinted in Predrag Cvitanovic (ed.), Universality in Chaos, 2nd ed. (Adam Hilger, Bristol and New York, 1989), 367–378.
Nicolis, Grégoire and Ilya Prigogine, Exploring Complexity (Freeman and Company, New York, 1989).
Paigen, Beverly, “Controversy at Love Canal,” The Hastings Center Report (June, 1982), 29–37.
Peterman, Randall M., “Statistical Power Analysis Can Improve Fisheries Research and Management,” Canadian Journal of Fisheries and Aquatic Sciences 47 (1990), 1–15.
Peters, Robert Henry, A Critique for Ecology (Cambridge University Press, Cambridge, 1991).
Savan, Beth, Science under Siege: The Myth of Objectivity in Scientific Research (CBC Enterprises, Montreal, 1988).
Shrader-Frechette, Kristin, Risk and Rationality (University of California Press, Berkeley, 1991).
Toulmin, Stephen, Cosmopolis. The Hidden Agenda of Modernity (The Free Press, New York, 1990).
Wynne, Brian, “Uncertainty and Environmental Learning,” Global Environmental Change (June, 1992), 111–127.
Center for the Study of the Sciences and the Humanities
University of Bergen, Allegt. 32, 5020 Bergen, Norway
E-mail: [email protected]