ULRIKE HAHN and MIKE OAKSFORD
A BAYESIAN APPROACH TO INFORMAL ARGUMENT FALLACIES
ABSTRACT. We examine in detail three classic reasoning fallacies, that is, supposedly “incorrect” forms of argument. These are the so-called argumentum ad ignorantiam, the circular argument or petitio principii, and the slippery slope argument. In each case, the argument type is shown to match structurally arguments which are widely accepted. This suggests that it is not the form of the arguments as such that is problematic but rather something about the content of the examples by which they are typically illustrated. This leads to a Bayesian reanalysis of these classic argument forms and a reformulation of the conditions under which they do or do not constitute legitimate forms of argumentation.
“The most remarkable feature of the history of the study of fallacies is its continuity. Despite the waves of disinterest and rebellion that phase and punctuate it, and despite fundamental changes in logical doctrine, the tradition has been unquenchable. The lesson of this must be that there is something of importance in it.” Hamblin (1970, 190).
Synthese (2006) 152: 207–236. Knowledge, Rationality & Action 241–270. DOI 10.1007/s11229-005-5233-2. © Springer 2006

Lists of so-called fallacies of argumentative discourse date back to Aristotle’s Sophistical Refutations and his Topics, and have received further additions throughout the ages. There is disagreement on the appropriate definition of “fallacy” (see e.g., van Eemeren and Grootendorst 2004 for discussion), but the kinds of things intended are clear: petitio principii (or ‘question begging’), arguments from authority, ad hominem arguments, or Locke’s argumentum ad ignorantiam (argument from ignorance). These fallacies have accumulated in logic textbooks and have been dubbed “informal” fallacies because it has not been possible to give “a general or synoptic account of the traditional fallacy material in formal terms” (Hamblin 1970, 191). The fallacies’ existence in logic textbooks has been guaranteed by the fact that they seem to highlight important considerations outside formal logic. Like the Aristotelian Topics, the fallacies shall be “superannuated when we see what their function is and how it can be fulfilled in a modern idiom” (Hamblin 1970, 191). This question has boiled down to whether or not traditional (logical) theories of formal inference can provide analytical accounts of some or, better, all of the fallacies. Such analytical treatment has seemed elusive because some of the fallacies are logically valid, but seem to fail as arguments nevertheless. For example, begging the question is deductively valid (or can be rendered so by making implicit premises explicit). Moreover, examples have emerged where a supposedly fallacious argument form does not, in fact, seem to be fallacious. Walton (1989) remarks that, “. . . it is increasingly a problem endemic to the field of logic that, in some cases, arguments that seem to be of the same general type as the fallacies are (in some sense) reasonably good arguments, that are, and ought to be, acceptable as legitimate ways of rationally persuading someone to accept a conclusion. Thus the field of informal logic is at a crisis point. How can we generally distinguish between the fallacious and non-fallacious instances of these arguments?” (7–8).
Depending on the nature of such exceptions there seem to be two paths: (i) try to refine the formulation of the structural characteristics of the argument, such that the seeming counterexample is now no longer part of the fallacy, as now more exactly defined, or (ii) accept that fallacies are fallacies not because of their structure alone. A popular response to the problem has been that informal logic deals with arguments in context – and it is the context that is responsible for the differential acceptability of arguments with the same structure. Such an approach is provided by the pragma-dialectical or pragmatic theories of argumentation (e.g., van Eemeren and Grootendorst 1984, 1992, 2004; Walton 1995, 1998). Fallacies arise because an argumentation technique is misused by a participant so as to ‘get the best’ of another participant in the dialogue (Walton 1991). For example, begging the question is a fallacy in that it amounts to pressing ahead too aggressively without allowing a respondent to raise legitimate critical questions (Walton 1991). More generally, in the pragma-dialectical approach traditional fallacies are understood by considering the kinds of dialogues in which they occur. Dialogues come in a variety of conventional types. Evaluation of an argument as fallacious depends on the nature of the dialogue type. An argument is acceptable if it contributes to the goals of that type of dialogue. So status as a fallacy derives from how an argument is used, not from its form or its specific content alone. What is a non-fallacious argument in the context of one kind of dialogue may be fallacious in the context of another.
What this kind of account does not predict is that given an unvarying context of a given dialogue type such as “critical discussion” (e.g., van Eemeren and Grootendorst 1984, 1992), arguments of the same structure could still be more or less fallacious. For several classical fallacies we seek to demonstrate that they can be made to vary more or less continuously in strength from clearly uncompelling, and so fallacious, to convincing, without changes in the type of dialogue in which they are embedded. Specifically, we argue that the strength of an argument of a particular structure – e.g., petitio principii – varies as a function of its content (see also, Powers 1995; Ikuenobe 2004). Standard textbook examples of fallacies are weak arguments, not due to the structure of the argument type in question or to the context in which they occur, but because of the content of individual instantiations of the argument. This content dependence suggests that adequately capturing the classic fallacies requires a normative theory of argument strength. Such a theory is desirable above and beyond its ability to explain fallacies; it should be an essential ingredient of any rational theory of argumentation. As we will see in the Discussion section, this is a view shared by many in the AI literature on uncertain reasoning, where argument strengths have been introduced for largely technical reasons. We seek to demonstrate that Bayesian inference can provide a normative theory of argument strength that can capture differences in acceptability for three different fallacies (the use of probabilities for argument strength is also suggested in Kohlas 2003). Our project bears comparison to recent proposals by Ikuenobe (2004) who has argued for a unified approach to the fallacies. 
Ikuenobe (2004, 193) argues that, “a fallacy is fundamentally an epistemic error, involving the failure to provide in the form of a premise, adequate proof for a belief or proposition in the form of a conclusion.” However, he does not propose any explicit epistemic principles about what constitutes adequate proof or evidence. We argue that Bayesian inference can provide a useful account of the epistemic adequacy of proof or what we call “argument strength.” Ikuenobe (2004) also focuses on the importance of content and prior knowledge in determining the adequacy of an argument, factors that are also directly relevant to our Bayesian approach. The focus on epistemic principles also relates our project to attempts to capture essential characteristics of scientific reasoning in Bayesian terms (Howson and Urbach 1989; Earman 1992) and to Kuhn’s (1993) attempts to relate scientific and informal reasoning.
Our approach is not in opposition to those that have sought to explicate procedural rules of argumentation, whether in the form of the pragma-dialectical approach or others (e.g., Alexy 1989). We are not disputing the importance of these projects for deriving normative theories of argumentation. Rather, our point is that because procedural accounts do not address argument strength they do not fully capture the questions raised by the classic fallacies. Even where all rules of a particular discourse type are obeyed some arguments will still seem more compelling than others. Normative theories of argumentation would ideally include an account of this difference in the form of a rational theory of argument strength. Like these other projects our concerns are normative. By using intuitive examples, we hope to demonstrate that it would be rational for someone to be differentially convinced by slightly different versions of the same argument. That is, not only are people differentially convinced by these different versions, on our view they should be if they follow the standard laws of probability theory. In detail, the paper proceeds as follows. We first consider why we adopt a specifically Bayesian approach to argument strength. We then consider three kinds of fallacy in detail: the argument from ignorance, question begging or petitio principii, and slippery slope arguments. In each case we start from sample textbook arguments that are weak, and then identify stronger versions with the same structure. We then show how Bayesian inference can capture the differences in relative strength. Finally, we show how this approach can be extended to other fallacies and discuss other possible approaches in the literature.
1. WHY BAYESIAN AND WHY PROBABILISTIC?
We describe our approach as “Bayesian” not only because we use Bayes’ rule. We also regard the probabilities that figure in these calculations as subjective degrees of belief and we treat prior probabilities as important in the reasoning process (Oaksford and Hahn 2004). There are three fundamental reasons to think that this is the only interpretation under which the probability calculus is likely to provide real insight into an account of argument strength. First, the important role a Bayesian analysis assigns to prior belief is an instance of a fundamental aspect of argumentation – the nature of the audience, which has been assumed to be a
crucial variable for any rational reconstruction of argumentation (e.g., Perelman and Olbrechts-Tyteca 1969). Audience relativity has the consequence that a fallacy for one person may not be a fallacy for someone else because their prior beliefs differ. Ikuenobe (2004) makes the same point using the argument that all cases of killing a living human being are bad and abortion is a case of killing a living human being, therefore, abortion is bad. This argument may provide adequate proof for someone who already believes that a fetus is a living human being. However, for someone who does not believe this proposition, this argument is weak or provides inadequate proof for the conclusion. Second, one might argue that Bayes’ rule provides the means of bringing people’s priors into alignment with the world and so priors can be disregarded. However, this is only the case if there is sufficient legitimate evidence relevant to deciding the argument. But for most arguments in everyday life this is simply not the case. Indeed, the value of the Bayesian approach is that in most argumentative contexts there simply isn’t sufficient mutually agreed evidence to bring people’s beliefs into alignment “with the facts.” Third, much of the evidence adduced will relate to singular events. For example, an argument over who killed Kennedy will have to appeal to many events that likewise can only have happened once, e.g., what is the probability that Oswald was hired by the Mafia? Consequently, providing a general probabilistic account of argument strength will require assigning single event probabilities, which only makes sense from a Bayesian subjective perspective. However, one might question not only why we adopt a Bayesian approach but why we adopt a probabilistic approach at all.
There are many artificial intelligence approaches to argumentation that, while not tied to probabilities, offer a quantitative or qualitative approach to argument strength (for a review, see Prakken and Vreeswijk 2002). Moreover, these approaches have the advantage that they deal with the structure of the propositions that make up an argument and transmit argument strengths over the logical relations between evidence statements and the conclusions they purport to support. There are two points to make. First, our argument is that the fallacies are differentially strong irrespective not only of pragmatic dialogical structure but also of propositional structure. Thus, in providing an account of the fallacies, we see no particular advantage in appealing to structural theories. Moreover, we are not proposing a complete theory
of argumentation, which would indeed need to deal with structure. Rather we have the more limited goal of presenting an account of argument strength that can account for the fallacies. We argue that our Bayesian account can succeed in this more limited respect. This of course leaves open the possibility that other accounts can also deal with the fallacies we consider. That a Bayesian account of argument strength can address the fallacies provides a challenge to other approaches to show whether they can also address them. Second, there is a good prima facie case for using Bayesian probability theory as an account of argument strength. Some of these other accounts appeal to ways of dealing with uncertainty which are not all consistent with normative probability theory. As we are concerned with an account of when the fallacies should or should not be accepted, sticking with the well-understood normative theory in this domain is the conservative option (Pearl 1988). Only if we encounter examples in considering the fallacies that suggest we need to move beyond the realms of probability theory would it be sensible to adopt another account. As we will see, our Bayesian approach seems to do very well in accounting for the differential strength of the various fallacies. Consequently, we see no need to appeal to alternative theories. In sum, there are good reasons to stick with probability theory and with a Bayesian approach in particular. We return to the issue of alternative theories in the Discussion section, but now we turn to our account of the argumentative fallacies.

2. THE ARGUMENT FROM IGNORANCE
A commonly used example of the argument from ignorance is:

(1) Ghosts exist because no one has proved that they do not.
This argument seems unacceptable. However, we argue that this is not because the argument structure it embodies is fallacious as has traditionally been assumed from a logical perspective. Surveying the definitions of the argument from ignorance, Walton (1996) identifies the following form for the argument:

If A were true (false), it would be known (proved, presumed) to be true (false).
A is not known (proved, presumed) to be true (false).
Therefore, A is (presumably) false (true).
In general, of course, lack of knowledge, evidence or proof is not sufficient to establish that a proposition is false. Indeed, if it were, then all kinds of absurd conclusions could be licensed by such arguments. For example, that we have no evidence that flying pigs do not exist outside our solar system does not mean that we should conclude that they do. Likewise, that we have no evidence that flying pigs do exist outside our solar system does not mean that we should conclude that they do not. Both these arguments are instances of the argument from ignorance, and both seem to be fallacious. Importantly, however, the second argument seems intuitively more compelling. This seems to be due to our prior belief in the nonexistence of flying pigs. As we will show, our Bayesian analysis readily captures this aspect of human intuition regarding the validity of these arguments. Walton (1996) identifies three basic types of the argument from ignorance where fallacies may arise: negative evidence, epistemic closure, and shifting the burden of proof.

2.1. Negative Evidence

The first type of the argument from ignorance that Walton (1996) identifies is based on negative evidence. We use the example of testing drugs for safety and first show how this example fits the scheme for this argument:

If drug A were toxic, it would produce toxic effects in legitimate tests.
Drug A has not produced toxic effects in such tests.
Therefore, A is not toxic.

There are three basic intuitions about the argument from ignorance that we believe are captured by a Bayesian analysis. Each intuition shows that arguments of this form, set in the same context, nonetheless vary in argument strength dependent on the content of the materials. First, negative arguments should be acceptable but are generally less acceptable than positive arguments. So (2) is intuitively more acceptable than (3).

(2) Drug A is toxic because a toxic effect was observed (positive argument).

(3) Drug A is not toxic because no toxic effects were observed (negative argument, i.e., the argument from ignorance).
One could argue that the difference between (2) and (3) is actually that, with respect to the conditional if a drug produces a toxic effect, it is toxic, (2) is a valid inference by modus ponens whereas (3) is an invalid denying the antecedent inference. Of course, such an account does not get round the fact that the intuition is not that (3) is unacceptable, because invalid, but that (3) is generally fine but just less acceptable than (2). Second, a Bayesian analysis also captures the intuition that people’s prior beliefs should influence argument acceptance (see the “flying pigs” example above). The more the conclusions of an argument are believed initially, the more acceptable the argument. Third, the more evidence compatible with the conclusions of these arguments, the more acceptable they are, i.e., (4) is more acceptable than (5).

(4) Drug A is not toxic because no toxic effects were observed in 50 tests.

(5) Drug A is not toxic because no toxic effects were observed in one test.
We now show how each of these intuitions is compatible with a Bayesian analysis. Demonstrating the relevance of Bayesian inference for negative vs. positive arguments involves defining the conditions for a legitimate test. Let e stand for an experiment where a toxic effect is observed and ¬e for an experiment where a toxic effect is not observed; likewise let T stand for the hypothesis that the drug produces a toxic effect and ¬T for the alternative hypothesis that the drug does not produce toxic effects. The strength of the argument from ignorance is given by the conditional probability that the hypothesis, T, is false given that a negative test result, ¬e, is found, P(¬T|¬e). This probability is referred to as negative test validity. The strength of the argument we wish to compare with the argument from ignorance is given by positive test validity, i.e., the probability that the hypothesis, T, is true given that a positive test result, e, is found, P(T|e). These probabilities can be calculated from the sensitivity (P(e|T)) and the selectivity (P(¬e|¬T)) of the test and the prior belief that T is true (P(T)) using Bayes’ theorem. Let n denote sensitivity, i.e., n = P(e|T), l denote selectivity, i.e., l = P(¬e|¬T), and h denote the prior probability of drug A being toxic, i.e., h = P(T):
(6) P(T|e) = nh / (nh + (1 − l)(1 − h))

(7) P(¬T|¬e) = l(1 − h) / (l(1 − h) + (1 − n)h)

Sensitivity corresponds to the “hit rate” of the test and 1 minus the selectivity corresponds to the “false positive rate.” There is a trade-off between sensitivity and selectivity which is captured in the receiver operating characteristic curve (Green and Swets 1966) that plots sensitivity against the false positive rate (1 − selectivity). Where the criterion is set along this curve will determine the sensitivity and selectivity of the test. Positive test validity is greater than negative test validity as long as the following inequality holds:

(8) h²(n − n²) > (1 − h)²(l − l²)
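Inequality (8) is stated without derivation; it follows from (6) and (7) by cross-multiplication (both denominators are positive), a step the text leaves implicit:

```latex
\begin{align*}
P(T\mid e) > P(\neg T\mid \neg e)
&\iff \frac{nh}{nh+(1-l)(1-h)} > \frac{l(1-h)}{l(1-h)+(1-n)h}\\
&\iff nh\,[\,l(1-h)+(1-n)h\,] > l(1-h)\,[\,nh+(1-l)(1-h)\,]\\
&\iff nhl(1-h) + (n-n^{2})h^{2} > nhl(1-h) + (l-l^{2})(1-h)^{2}\\
&\iff h^{2}(n-n^{2}) > (1-h)^{2}(l-l^{2})
\end{align*}
```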
Assuming maximal uncertainty about the toxicity of drug A, i.e., P(T) = .5 = h, this means that positive test validity, P(T|e), is greater than negative test validity, P(¬T|¬e), when selectivity (l) is higher than sensitivity (n) and n + l > 1. These are conditions often met in practice for a variety of clinical and psychological tests (Oaksford and Hahn 2004). One can see why by considering cancer screening tests. A high false positive rate, i.e., low selectivity, means that many people sent for further testing will not have cancer. High selectivity is therefore clearly desirable. High sensitivity, at this stage, is not as important as further more sensitive tests will be conducted given a positive result. So, in a variety of settings, positive arguments are stronger than negative arguments. Prior belief in the conclusion should also strongly affect argument strength. So if someone’s prior belief that the drug produces toxic effects is high, e.g., P(T) = .8, then for all combinations of sensitivity and selectivity, positive test validity is higher than negative test validity. But for someone who initially disbelieved this hypothesis, e.g., P(T) = .2, for all combinations of sensitivity and selectivity, negative test validity would be higher than positive test validity. Consequently, a Bayesian account captures the intuition that people’s prior beliefs should also influence the acceptability of the argument from ignorance. As we discussed in the section Why Bayesian and Why Probabilistic?, the importance of priors to a Bayesian analysis captures the audience relativity of argumentation (e.g., Perelman and Olbrechts-Tyteca 1969).
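These claims can be checked numerically. The following is a minimal sketch (the function names are ours, not the authors’) of equations (6) and (7), evaluated at the sensitivity and selectivity used in the text (n = .7, l = .9) for the three priors discussed:

```python
def positive_validity(n, l, h):
    """P(T|e), equation (6): strength of the positive argument."""
    return n * h / (n * h + (1 - l) * (1 - h))

def negative_validity(n, l, h):
    """P(not-T|not-e), equation (7): strength of the argument from ignorance."""
    return l * (1 - h) / (l * (1 - h) + (1 - n) * h)

n, l = 0.7, 0.9  # selectivity exceeds sensitivity, as in the text
for h in (0.2, 0.5, 0.8):  # prior belief that drug A is toxic
    print(h, positive_validity(n, l, h), negative_validity(n, l, h))

# h = .5: positive validity (.875) exceeds negative validity (.750)
# h = .8: positive (about .966) far exceeds negative (about .429)
# h = .2: negative (about .923) exceeds positive (about .636) --
#         the prior reverses which argument form is stronger
```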
Bayesian updating also captures the intuition that the acceptability of an argument should be influenced by the amount of evidence. Figure 1 shows the results of five negative (¬e) or five positive (e) trials on P(T|e) and P(¬T|¬e) with selectivity (P(¬e|¬T) = .9) greater than sensitivity (P(e|T) = .7) for three different priors, i.e., P(T) = .2, .5, and .8 (indicated as trial 0). As the number of trials accumulates, people should become equally convinced by the argument from ignorance as by its positive counterpart, even when participants start off strongly believing the positive conclusion, i.e., that the drug has toxic effects.

Figure 1. Bayesian updating of positive and negative argument strength for three different priors (P(T) = .2, .5, and .8) with sensitivity (P(e|T)) set to .7 and selectivity (P(¬e|¬T)) set to .9.

In sum, a Bayesian analysis of the negative evidence version of the argument from ignorance directly captures the three intuitions with which we began concerning the effects of different contents on the strength of these arguments. We now turn to what is called the epistemic closure form of the argument from ignorance.

2.2. Epistemic Closure

Walton (1992a) first drew attention to parallels between the argument from ignorance and the negation-as-failure procedure (Clark 1978) in AI. Knowledge-based systems frequently support the inference that a proposition is false – so its negation is true – because it cannot be proved from the contents of the data base. This type of inference relies on the concept of epistemic closure (De Cornulier 1988; Walton 1992a) or the closed world assumption (e.g., Reiter 1980, 1985). Our Bayesian analysis can be generalized to the epistemic closure form of the argument from ignorance because the subjective probabilities involved may vary with other beliefs as well as with objective experimental tests. We now illustrate the epistemic form of the argument from ignorance and show that it conforms to Walton’s schema using his example of a railway timetable:

If the train stops at Hatfield, it should indicate this on the timetable.
The timetable has been searched and it does not indicate that the train stops at Hatfield.
Therefore, the train does not stop at Hatfield.

Epistemic closure cases depend on how convinced someone is that the search of the relevant knowledge base has been exhaustive and that it is epistemically closed. The greater the confidence in the reliability of this search, the higher someone’s degree of belief in the conclusion. If a closed world assumption can be made, then the epistemic argument from ignorance is deductively valid. For example, if all relevant information about where trains stop is displayed in the timetable and the search has been exhaustive, then the conclusion that the train does not stop at Hatfield, because it does not say it does in the timetable, follows deductively (by “stop” here we mean “scheduled stop;” the train may stop at Hatfield in an emergency but this possibility gives you no grounds to board the train unless you intend pulling the emergency cord). However, there may be grounds to believe that the epistemic closure condition is met in varying degrees.
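The negation-as-failure idea can be sketched in a few lines. This is our own toy illustration, not code from the paper, and the timetable contents are invented: a proposition is treated as false exactly when it cannot be proved from the database, provided the closed world assumption holds.

```python
# Toy closed-world database of scheduled stops (invented for illustration)
timetable = {"King's Cross", "Stevenage", "Peterborough"}

def stops_at(station, schedule=timetable, closed_world=True):
    """Negation as failure: failure to prove 'stops_at' proves its negation,
    but only under the closed world assumption."""
    if station in schedule:
        return True
    # The station is not provable from the database.
    return False if closed_world else None  # None: merely unknown

print(stops_at("Stevenage"))                      # True
print(stops_at("Hatfield"))                       # False, via closure
print(stops_at("Hatfield", closed_world=False))   # None: no conclusion
```

Without the closed world assumption the query returns no verdict at all, which mirrors the text’s point: the strength of the inference tracks one’s confidence that the database is complete and the search exhaustive.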
The following three arguments have exactly the same form and presumably could occur in exactly the same contexts. But their varying content intuitively suggests varying degrees of argument strength.

(9) The train does not stop at Hatfield because my friend, who rarely travels on this line, says she cannot recall it ever stopping there.

(10) The train does not stop at Hatfield because the railway guard says he cannot recall it ever stopping there.

(11) The train does not stop at Hatfield because the railway guard checked the computer timetable, which does not show that it stops there.
In each of (9)–(11), there is a question of how complete the knowledge base is and how likely an individual is to conduct an exhaustive search. It seems intuitively clear that (11) is a stronger argument than (10), which is a stronger argument than (9). That is, someone’s degree of belief that this train does not stop at Hatfield would be greater in (11) than in (9). Moreover, if someone needed to get to Hatfield in a hurry for an important meeting and this train was on the platform to which they had been directed, it would be rational to be less willing to board given (11) than (9) (although even (9) might give someone pause for thought). These arguments can be integrated into a Bayesian account straightforwardly. From (9) to (11) the probability that Hatfield would be mentioned if the train stopped there increases, i.e., sensitivity increases from (9) to (11). As sensitivity rises, so do positive and negative test validity.
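The effect just described can be made concrete with equation (7). In this sketch (our illustration; the sensitivity values assigned to the three sources are invented), selectivity and the prior are held fixed while sensitivity rises from a casual traveller’s memory to the computer timetable:

```python
def negative_validity(n, l, h):
    """P(not-T|not-e), equation (7)."""
    return l * (1 - h) / (l * (1 - h) + (1 - n) * h)

l, h = 0.9, 0.5  # fixed selectivity and prior; only sensitivity varies
for label, n in [("(9) friend's memory", 0.3),
                 ("(10) guard's memory", 0.7),
                 ("(11) computer timetable", 0.99)]:
    print(label, negative_validity(n, l, h))

# Negative test validity climbs from about .56 to .75 to .99:
# the same argument form grows stronger as sensitivity increases.
```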
2.3. The Burden of Proof

A third and final class of arguments from ignorance is characterized as illegitimate attempts to shift the burden of proof (Walton 1992a). Indeed, several authors wish to restrict the argument from ignorance to this final case (e.g., Copi and Cohen 1990). Consequently, it merits further inspection. In light of the fact that some arguments from negative evidence can be reasonable, violation of the burden of proof is invoked to explain arguments such as the classic ghost example (1). The idea is that the pragmatics of argument (at least for the ‘information-seeking’ discourse relevant here) demand that whoever makes a claim has to provide reasons for this claim when challenged. Pointing out that no one has disproved their existence as a reason for believing in ghosts is an illegitimate attempt to shift that burden onto the other party instead of providing an adequate reason oneself. However, this idea of shifting the burden of proof does not explain why the ghost example is a fallacy, it merely labels it. As we’ve seen in the preceding sections, negative evidence can constitute a good reason for believing something.
What’s more, there are combinations of test sensitivity, selectivity and priors where negative evidence is more compelling than positive evidence. This means one has to be able to explain why negative evidence vis-à-vis ghosts is not of this kind. Without such an explanation it remains entirely unclear why it is not an adequate reason in this case also and as such does not shift the burden of proof. Consequently, ‘shifting the burden of proof’ doesn’t explain an argument’s weakness; it presupposes it (for a general analysis of the burden of proof in argumentation see, e.g., Prakken et al. 2005). The real reason we consider negative evidence on ghosts to be weak is the lack of sensitivity (ability to detect ghosts) we attribute to our tests, as well as our low prior belief in their existence.
3. CIRCULARITY
The next main fallacy we consider is also one of the most widely cited: “question begging” or circularity, where the conclusion is already contained in the premises. A standard response to this fallacy has been to point out that it constitutes a deductively valid argument as any proposition implies itself. Consequently, its fallaciousness must stem from its pragmatic failure (Walton 1985). However, few textbook examples actually involve a direct statement of the conclusion as a premise, so that this view seems questionable. More common in real discourse are examples such as:

(12) God exists, because the bible says so, and the bible is the word of God.

In contrast to saying:

(13) God exists, because God exists.
(12) is deductively valid only by adding an implicit premise, that the bible being the word of God presupposes the existence of God. This form of presupposition is referred to as “epistemic circularity” and is viewed as less problematic than “logical circularity” (see e.g., Shogenji 2000). In fact, examples which would seem to follow the structure of (12) can be found in philosophical and scientific discourse as well as in everyday argumentation. Brown (1993, 1994) discusses examples from astronomy but examples like (12) occur wherever unobservable entities are involved:

(14) Electrons exist, because we can see 3 cm tracks in a cloud chamber, and 3 cm tracks in cloud chambers are signatures of electrons.
Given (14), (12) looks like a (less compelling) instance of the classic inference to the best explanation (Harman 1965; see also Josephson and Josephson 1994). Consequently it is of great interest whether there can be instances where the asserted relationship between a hypothesis and evidence allows an increase in the posterior probability of the presupposition itself. Shogenji (2000) provides a Bayesian analysis which demonstrates that presupposing the hypothesis under consideration in order to evaluate data can raise the probability of that same hypothesis. For a hypothesis to be confirmed by Bayes’ theorem, the probability of the evidence E given the hypothesis H, P(E|H), must be greater than the probability of the evidence alone, P(E). These relationships can include additional background knowledge (B), which is routinely necessary in both inductive and deductive contexts. So, more accurately, the probability of the evidence given the hypothesis and the background knowledge, P(E|B & H), must be greater than the probability of the evidence given the relevant background knowledge B alone, P(E|B). This seems to preclude any form of self-dependent justification. The background knowledge already contains the hypothesis as a presupposition, as it is needed to interpret the evidence. So this situation seems to correspond to a comparison between P(E|B & H) and P(E|B & H), instead of the desired comparison between P(E|B & H) and P(E|B). As the relevant probabilities are now equal, this situation cannot raise the probability of the hypothesis. However, such an analysis is misleading. To make this point, Shogenji distinguishes the “proper background knowledge” B*, which excludes the hypothesis H under consideration, from the full set of background beliefs, B, which includes H. Presupposing a hypothesis in evaluating the data is the case of P(E|H & B* & H), which needs to be greater than the probability of the evidence given all relevant background knowledge.
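Shogenji’s point can be rendered with toy numbers (ours, purely illustrative): conditioning on H twice picks out the same event as conditioning on it once, so “presupposing” H while evaluating E collapses into ordinary Bayesian confirmation.

```python
# Illustrative probabilities, all relative to proper background knowledge B*
p_h = 0.5              # P(H | B*): prior for the hypothesis
p_e_given_h = 0.9      # P(E | H & B*)
p_e_given_not_h = 0.3  # P(E | not-H & B*)

# P(E | B*): probability of the evidence without presupposing H
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # = 0.6

# Ordinary confirmation: P(E | H & B*) > P(E | B*), so E raises belief in H
posterior = p_e_given_h * p_h / p_e  # P(H | E & B*) = 0.75 > 0.5
print(p_e_given_h > p_e, posterior)

# Presupposition adds nothing: P(E | H & B* & H) is just P(E | H & B*),
# since {H and H} is the same condition as {H}. Using the hypothesis twice
# is no worse, probabilistically, than using it once.
```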
But this second probability need not include the prior hypothesis, i.e., it does not need to be rendered as P(E|B* & H). This is because, as the expanded conditional P(E|H & B* & H) makes clear, in the case of presupposition the same hypothesis is used twice to predict the probability of the evidence, as opposed to splitting it across the roles
of background knowledge and hypothesis. In other words, we can rewrite the second conditional P(E|B* & H) as P(E|B*). We are already taking into account our double use of the hypothesis in the term P(E|H & B* & H), which is equivalent to P(E|H & B*). In other words, using the same hypothesis twice is no worse in probabilistic terms than using it once. So, these cases of presupposition are as circular (or not) as ordinary Bayesian confirmation. Lest this still seem mysterious, we consider a computational model that embodies this style of reasoning. The model in question is the stochastic interactive activation model of word recognition (McClelland and Rumelhart 1981). A sketch of this model is provided in Figure 2. The model is designed to identify words, specifically four-letter words, given particular featural inputs. The model has a layer in which each unit represents one of the four-letter words of English, a further layer in which each unit represents a letter of English in each of the four positions, and a layer in which there is a unit for each possible feature of these letters in the font with which it is familiar (e.g., vertical line on left, horizontal line
Figure 2. The interactive activation model of word recognition.
at top, bottom or middle, etc.). Given a featural input, the model will assign the most likely interpretation of that input even in cases where it is degraded, e.g., “C-T” or “C\T” as opposed to “CAT”. It does so by virtue of a process of interactive activation running from featural units up through letter units to word-level units, as well as back down in the opposite direction, which eventually reaches an equilibrium state corresponding to the “best” interpretation. The model is simultaneously using its possible hypotheses about words to interpret the observation regarding the second letter of CAT, while using that observation to evaluate competing hypotheses about letters and words. Yet the process can be given a straightforward Bayesian interpretation (see McClelland 1998 for details); it also provides a simple mechanism whereby this kind of inference could be achieved computationally by the perceptual system. The model’s interpretation is that the word in question is “cat.” Treating the letter in second position as A given just the feature “-” is a guess, but it is the best one possible, and the one that corresponds to the true posterior probability of that being the case. We can use the simple world of this example to highlight the factors that make non-viciously circular inferences of this kind stronger or weaker. The probability of the interpretation “CAT” and “A” is influenced in particular by

(A) The number of other words with C-T patterns that do not have an A there, as this reduces the probability P(“A”|C-T).

(B) The number of words in the language that have an A in that position, but are not CAT, as this also influences the degree to which the hypothesis “CAT” makes the observation of an “A” more likely.
The influence of (A) can be seen readily by contrasting

(15) The strength of the inference from “C” “-” “T” to “CAT” (alternatives: CUT, COT; but none have the feature “-”), with

(16) The strength of the inference from “B” “-” “G” to “BAG” (alternatives: BUG, BOG, BIG, BEG; BEG also has the feature “-”).
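The contrast between (15) and (16) can be reproduced with a toy Bayesian lexicon model – a simplified three-letter stand-in for the interactive activation network. The lexicon, the uniform priors, and the assumption that the degraded feature “-” is compatible with the letters A and E are all invented for illustration:

```python
# Toy Bayesian reconstruction of inferences (15) and (16). Lexicon, priors,
# and the feature-to-letter mapping are assumptions made for illustration.
LEXICON = ["CAT", "CUT", "COT", "BAG", "BUG", "BOG", "BIG", "BEG"]
FEATURE_CONSISTENT = {"-": {"A", "E"}}  # assumed: feature "-" fits A and E

def posterior(first, feature, last, prior=None):
    """P(word | first letter, middle feature, last letter); uniform prior by default."""
    prior = prior or {w: 1 / len(LEXICON) for w in LEXICON}
    likelihood = {
        w: 1.0 if (w[0] == first and w[2] == last
                   and w[1] in FEATURE_CONSISTENT[feature]) else 0.0
        for w in LEXICON
    }
    z = sum(likelihood[w] * prior[w] for w in LEXICON)  # normalizing constant
    return {w: likelihood[w] * prior[w] / z for w in LEXICON if likelihood[w] > 0}

print(posterior("C", "-", "T"))  # {'CAT': 1.0} -- no competitor shares the feature
print(posterior("B", "-", "G"))  # {'BAG': 0.5, 'BEG': 0.5} -- BEG also matches
```

With no competitor sharing the feature, C-T settles on CAT with certainty, while B-G splits its posterior between BAG and BEG, which is exactly the contrast between (15) and (16). Factor (B) can be explored by passing a skewed prior for a hypothetical language in which A dominates that position.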
The role of (B) can be seen by comparing (16) in English with a corresponding inference in a hypothetical language in which all or most words have an A in that position. Here, the background beliefs (the base rate of A in that position throughout the language) already fully support the observation of an “A”, making the hypothesis “CAT” irrelevant to the prediction of “A”. So in conclusion, how strong an argument one takes (12) above to be depends on how many alternative explanations (and their probabilities) one sees for the existence of the bible, and on one’s prior belief in the existence of God. The presupposition involved makes the argument in a sense circular, but this is the “circularity” involved in Bayes’ theorem. This does not preclude it from providing a framework for scientific reasoning (e.g., Howson and Urbach 1989), and this should hold for everyday argumentation all the more.

4. THE SLIPPERY SLOPE
(17) Legalizing cannabis will ultimately lead to increased use of cocaine or heroin, hence it should remain banned.
Slippery slope arguments such as the above pervade everyday discourse. Roughly, they are arguments that urge one to resist A, because allowing it could lead to B which is clearly objectionable, often because A and B are connected by a series of gradual intervening steps along which no stopping point can be defined (Lode 1999). Critical thinking books routinely list slippery slope arguments as fallacies to be avoided (e.g., McMeniman 1999; Bowell and Kemp 2002), as do classic treatments of argument (Walton 1989) and many logic textbooks (see, Walton 1992a for references). Again, the emphasis from those who view the argument as fallacious is on logical reconstruction – slippery slope arguments seem deceptively logical to the uninitiated but unravel on closer scrutiny. However, specific areas of practical reason present a far more differentiated picture. In particular, law has seen both extensive use of slippery slope arguments in judicial and legislative contexts and considerable debate about the argument form’s merits (see e.g., Lode 1999; Volokh 2003). Within the domain of practical reason, slippery slope arguments are not without their detractors (see, e.g., Burg 1991; see also, Lode 1999; Volokh 2003). However, there are sufficient examples where
such arguments are legitimate to rule out their being generally fallacious (Lode 1999; Volokh 2003). So, whether a particular slippery slope argument is compelling may depend on its specific content.1 Slippery slope arguments are not structurally unsound. They are to be viewed not as formal proofs but as practical arguments about likely consequences (Walton 1992a), and their relative strength or weakness rests on the probabilities associated with them.2 It is widely held that these are often low in real-world examples and that this is to blame for the reputation of slippery slope arguments in general. For example, Schauer (1985) remarks that the “exaggeration present in many slippery slopes makes it possible for the cognoscenti to sneer at all slippery slope arguments, and to assume that all slippery slope assertions are vacuous”. But it is not necessary that the asserted consequences are unlikely to occur. Volokh (2003) and Lode (1999) present examples where there is good reason to believe that the probability of the feared consequence will be raised because there is a simple, plausible causal mechanism that could bring it about. Moreover, they also present examples where there is reason to believe that a precedent has had a historically facilitating effect for subsequent legal change. For example, the issuing of a patent on a transgenic mouse by the US Patent and Trademark Office in 1988 seems to be the result of a slippery slope set in motion by the US Supreme Court’s decision in Diamond v. Chakrabarty (Lode 1999, 511–512). This decision allowed a patent for an oil-eating microbe, and granting a patent for the mouse seems unthinkable without the earlier Chakrabarty decision. The focus of most recent literature, especially in legal philosophy, has been on trying to provide criteria for the evaluation of slippery slope arguments. This typically involves trying to classify them into different kinds.
We argue that the slippery slope is a form of consequentialist argument (see e.g., Govier 1982) about the desirability of a particular event, policy or action. Consequently, such arguments can be captured by normative Bayesian decision theory (Savage 1954).3 This theory allows for the evaluation of alternatives by combining the ‘utilities’ of their outcomes with the probabilities of those outcomes occurring. Utilities are the subjective values we assign to these outcomes (the assignment of utilities must obey certain axioms if outcomes that are undesirable to the decision-maker are to be avoided).4 In Bayesian decision theory, the relationship between subjective probabilities and utilities is direct, i.e., probabilities represent the odds at which
you would treat differently valued lotteries as of equal expected value (Ramsey 1931; Savage 1954). Slippery slope arguments make reference both to utilities – the undesired future consequence – and (at least implicitly) to their probability of occurrence. The strength of the argument will be determined by both these factors. So, if the consequence of the debated action is perceived to be more or less neutral, it will give little grounds for avoiding that action, even if the consequence is certain. Likewise, if the consequence is highly undesirable (high negative utility), it will give little grounds for not taking the action if the probability of its occurrence is near zero. The relative strength of a slippery slope argument will depend on the extent to which the negative expected utility of its undesirable potential consequence exceeds that of the alternative against which it is compared. An intuitive example of these two factors at work is given by the following. Assuming that an increase in heroin consumption and an increase in listening to reggae music are equally likely consequences,

(18a) Legalizing cannabis will lead to an increase in heroin consumption.

(18b) Legalizing cannabis will lead to an increase in listening to reggae music.
(18a) would nevertheless constitute a stronger argument against the legalization of cannabis than would (18b). Likewise, given a shared outcome, (18c) is stronger than (18d), as the transition to another hard drug seems more probable:

(18c) Legalizing cocaine will lead to an increase in heroin consumption.

(18d) Legalizing cannabis will lead to an increase in heroin consumption.
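The two contrasts in (18a)–(18d) can be summarized as a toy expected-utility calculation. All probabilities and utilities below are invented; only their ordering matters:

```python
# Expected (dis)utility of a feared consequence: probability times utility.
# All numbers are assumptions made for the sake of the example.
def expected_utility(p_consequence, utility):
    return p_consequence * utility

U_HEROIN = -100.0  # assumed: strongly negative utility
U_REGGAE = -1.0    # assumed: near-neutral utility

# (18a) vs (18b): equal probabilities, different utilities.
p_equal = 0.2
eu_18a = expected_utility(p_equal, U_HEROIN)
eu_18b = expected_utility(p_equal, U_REGGAE)
assert eu_18a < eu_18b  # (18a) is the stronger argument against legalization

# (18c) vs (18d): shared outcome, different transition probabilities.
eu_18c = expected_utility(0.5, U_HEROIN)  # cocaine -> heroin: more probable
eu_18d = expected_utility(0.2, U_HEROIN)  # cannabis -> heroin: less probable
assert eu_18c < eu_18d  # (18c) is the stronger argument

print(eu_18a, eu_18b, eu_18c, eu_18d)
```

The more negative the expected utility of the feared consequence relative to the alternative under comparison, the stronger the slippery slope argument.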
The Bayesian account can also be applied to current classifications of slippery slope arguments. For example, “arbitrary result slippery slope arguments” (e.g., Williams 1985; Walton 1992b; Lode 1999) maintain that a currently debated policy or practice should be refused because adopting it will mean stepping on a slope of gradual change without a non-arbitrary stopping place. This represents a special case where the feared outcome occurs with certainty (or is claimed to), as it is incurred simply by stepping on the slippery slope:5
“Suppose that some tax relief or similar benefit is allowed to couples only if they are legally married. It is proposed that the benefit be extended to some couples who are not married. Someone might not object to the very idea of the relief being given to unmarried couples, but nevertheless argue that the only non-arbitrary line that could be drawn was between the married couples and the unmarried, and that as soon as any unmarried couple was allowed the benefit, there would be too many arbitrary decisions to be made. . . ” (from Williams 1985)
More common are slippery slope arguments where the feared consequence is only a possible, long-term event – what Lode (1999) calls “empirical” (or “predictive”) slippery slopes. Their evaluation necessarily involves an assessment of the probabilities involved. A crucial step is identifying the kinds of causal mechanisms that might link current actions or events and their projected consequences. Volokh (2003) has compiled a list of mechanisms that might support particular classes of slippery slope argument. These provide a valuable starting point for further empirical investigation. Given such knowledge of general mechanisms it should be possible to estimate the probabilities involved in specific arguments, or at least to clarify the kind of further information that would be required in order for this to be possible. In summary, the slippery slope is a kind of consequentialist argument whose strength depends both on the probabilities and the utilities involved. Maximization of expected utility provides a formal framework for their combination. Graded variation in argument strength arises through changes both to the utilities and to the probabilities involved, as shown in examples 18a–d above.

5. EXTENDING THE ACCOUNT
So far, we have provided an in-depth analysis of three key fallacies. We have shown that their graded variation in argument strength seems well captured by a Bayesian account. It has been a key question in the study of fallacies whether any one of them can be given a formal treatment. The ultimate prize, however, would be a formal treatment of all fallacies. Here we consider whether this goal is attainable and whether our Bayesian account can provide a general theory of argument strength. Contemporary lists of reasoning fallacies include well over 20 different fallacies (see e.g., Woods et al. 2004). It is unlikely that they all share common, underlying properties. Indeed, we have
identified at least two fallacies which seem best treated within a pragma-dialectical framework. These are the classic fallacies of “multiple questions” (also known as “complex questions” or “leading questions”, or plurium interrogationum) and “false dilemma” (also known as bifurcation or the black and white fallacy). Multiple questions can be characterized as any question that contains hidden, illicit assumptions which make it difficult for the respondent to counter false or unjustified propositions. For example,

(19) “[Reporter’s question] Mr. President: Are you going to continue your policy of wasting taxpayer’s money on missile defense?” (taken from the Internet Encyclopedia of Philosophy6)
Probabilistic considerations may be involved in determining when an assumption is unsupported or “illicit.” However, the impact of a leading question on an argument seems well captured as a move which seeks to unfairly disadvantage an opponent in a way that violates pragmatic rules of rational discourse. Exactly the same would seem to hold for the fallacy of false dilemma, which can be characterized as any argument that inappropriately assumes that there exist only two alternatives, e.g.,

(20) Do you jog or are you sedentary? (see Woods et al. 2004).
Contemporary lists of fallacies also contain numerous fallacies of reasoning with probabilities and with causes. These look like they fall rather trivially within the range of a probabilistic account, given, in particular, recent advances on Bayesian networks for causal reasoning (see e.g., Pearl 1988, 2000). What then of the remainder? We provide a sketch here of two further classic, core fallacies to give a sense of how our account might be extended. The first of these is the argumentum ad populum or “appeal to popular opinion”. Once again, different subgroups have been distinguished here. Most recently, Woods et al. (2004) distinguish appeals to “common knowledge” from what they call “boosterism.” In the latter, one’s premises are tailored specifically to appeal to the sentiments or prejudices of those whom the argument is intended to persuade. This case is difficult to see as fallacious on our Bayesian account, with its emphasis on priors. This also applies to classic treatments of argumentation within the
rhetorical tradition, which have stressed the importance of the nature of the audience (Perelman and Olbrechts-Tyteca 1969). Furthermore, on our account, it is important to distinguish whether the agreement with popular sentiment has the character of a consequence, in which case it must be captured in terms of utilities, or whether it is to provide evidential support. A consequentialist treatment is required for cases such as appeals to popular opinion in guiding action in a democratic context (e.g., Walton 1998), where the argument can be clearly non-fallacious. Where popular opinion is advanced as evidential support for the truth or falsity of a statement, the basic probabilistic framework applies directly. The degree of acceptability reduces to the degree to which popular opinion correlates with truth or falsity on this issue. There is considerable evidence of group processes leading to more accurate estimates and decisions than those of individuals. For example, in logical reasoning, people identify the logical response to a reasoning problem more reliably in a group than individually (Moshman and Geil 1998). Surowiecki (2004) lists studies in which groups of varying intellectual ability and expertise have been found to outperform individual experts or even groups of experts. These effects might be explained by the fact that large samples statistically lead to reductions in error variance. In other words, there is considerable scope for revising some of the traditional distrust of “mass opinion” that underlies the classification of ad populum arguments as fallacious. The final argument we want to briefly sketch is the argumentum ad misericordiam, which uses an appeal to pity or sympathy to support a conclusion, i.e.,

(21) “We should have sympathy for the wretched situation of this person P, therefore we ought to accept the conclusions of the argument that P maintains.” (Walton 1998)
This argument again poses difficulties because, though it appears fallacious, sound examples exist, e.g., appeals for donations for research to cure a crippling childhood illness (Walton 1998). Here it seems justified to arouse pity in the potential donor. Once again, our account suggests that two fundamentally different cases must be distinguished. The first is where the situation to be pitied, or pity itself, takes on the role of evidence. The second is where pity or the pitiful situation is a consequence. In other words, pity can be claimed to be relevant to facts or for choice. Cases such
as the appeal for medical donations are instances of consequentialist arguments, and what we have said about these in the slippery slope section above applies directly. However, pitiful situations can also be relevant for facts, e.g.,

(22) Drugs are dangerous – look at the terrible state they have got me into.
Furthermore, there will probably often be some ambiguity in arguments as to whether the to-be-pitied state is to count as evidence, as consequence, or as both:

(23) I feel awful, so I must be right in maintaining that the earth is flat.
Both characterizations are likely to converge: the state of the earth is generally assumed to be evidentially independent of my feelings, and alleviating my pitiful state will typically have low utility in the context of altering beliefs. The argumentum ad misericordiam makes clear a thread running through many of the fallacies – relevance (see also, Walton 1995, 1998, 2004). How well a Bayesian account of the fallacies will fare will depend largely on the degree to which probabilistic notions of relevance turn out to be appropriate. Looking at research within statistics and AI, there are some grounds for optimism here. Pearl (1988, 2000) has argued that the conditional independence axioms provide a characterization of informational relevance in a wide range of circumstances. The long-term project of assessing these claims for argumentation will be crucial to whether or not a formal treatment of all, or most, of the classic fallacies is possible. Currently, we claim no more than that the available evidence suggests that our Bayesian account extends well beyond the three fallacies we have described in detail. Consequently, such an account provides a promising basis for a general theory of argument strength.
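The probabilistic notion of relevance at issue can be made concrete: E is informationally relevant to H just when conditioning on E changes the probability of H. A minimal sketch, with invented joint distributions loosely echoing examples (22) and (23):

```python
# Probabilistic relevance in the spirit of Pearl (1988): E is relevant to H
# iff P(H | E) differs from P(H). Both joint distributions are invented.
def prob(joint, pred):
    return sum(pr for world, pr in joint.items() if pred(world))

def relevant(joint, h, e):
    """True iff conditioning on e changes the probability of h."""
    p_h = prob(joint, h)
    p_h_given_e = prob(joint, lambda w: h(w) and e(w)) / prob(joint, e)
    return abs(p_h_given_e - p_h) > 1e-9

claim = lambda w: w[0]    # e.g., "the earth is flat"
feeling = lambda w: w[1]  # e.g., "I feel awful"

# A world in which the claim and the feeling are independent (cf. (23)):
independent = {(c, f): (0.01 if c else 0.99) * (0.3 if f else 0.7)
               for c in (False, True) for f in (False, True)}
print(relevant(independent, claim, feeling))  # False: pity carries no evidence

# A world in which the two are correlated (cf. (22)):
correlated = {(True, True): 0.25, (True, False): 0.05,
              (False, True): 0.10, (False, False): 0.60}
print(relevant(correlated, claim, feeling))   # True: the state is evidence
```

On this criterion, the pitiful state in (22) is evidentially relevant while the feeling in (23) is not, matching the informal verdicts above.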
6. DISCUSSION
We have suggested that many reasoning fallacies are not simply acceptable once an appropriate context is specified, but rather that they vary in strength depending on content even when the context remains fixed. We have also suggested that the examples we have
provided seem consistent with a Bayesian approach to argument strength. This approach seems consistent with the three reasoning fallacies we have discussed in detail, i.e., the argument from ignorance, circularity, and the slippery slope. It also seems to extend naturally to other reasoning fallacies and is consistent with other Bayesian approaches in the literature (Shogenji 2000). This approach suggests that pragma-dialectical theories need to be supplemented with an explicit account of argument strength. This integration is an important project for future research. There are, however, some further issues we address before concluding. The continuity of everyday and scientific reasoning has been assumed in many disparate literatures on informal argumentation. Our Bayesian approach trades on similar approaches to scientific inference (e.g., Howson and Urbach 1989; Earman 1992), which, following other authors in the area (Kuhn 1993), we have suggested can be extended to informal argument. This approach, therefore, may be regarded as standing or falling with Bayesian approaches to scientific inference, which have been strongly criticized (e.g., Miller 1994; Sober 2002). Of course, the jury is still out on this debate in the philosophy of science, and it is not an issue we can resolve here. If, as we would argue, Bayesian approaches to scientific inference represent the current best explanation, then a similar account of informal argument strength seems equally viable. Thus, to the extent that Bayesian accounts can explain the patterns of reasoning observed in science as well as the reasoning fallacies, these accounts are mutually supportive. However, the purpose of these patterns of inference may be fundamentally different in each case, such that criticism of Bayesianism in the philosophy of science may not hit our similar position on informal argument.
For example, the purpose of much informal reasoning is to persuade an audience, not necessarily to establish the Truth. Thus objections to the subjectivism of the Bayesian approach in the philosophy of science may not apply to informal reasoning. As we mentioned in the section Why Bayesian and Why Probabilities?, in AI there are many other accounts that introduce argument strengths as part of a computational account of argumentation and defeasible reasoning, i.e., reasoning in which conclusions can be defeated by subsequent argument. Moreover, it is the inclusion of argument strengths that provides these systems with their nice default properties (see, e.g., Gabbay 1996; Fox and Parsons 1998; Pollock 2001; in particular, see Prakken and Vreeswijk 2002 for a
recent overview). Some of this discussion focuses on technical issues about how adopting an argumentation or dialectical approach can provide solutions to problems of uncertain or retractable reasoning (see the special issue of the Journal of Logic and Computation, vol. 13(3), on computational dialectics). Further, there is a great deal of technical work looking at the relationship between probability and logic (see the special issue of the Journal of Applied Logic, vol. 1, on combining probability and logic). Here the debate seems to be between those who think probability theory is logic and address the issues involved in making good on this claim (e.g., Howson 2003; Kyburg 2003) and those who explore alternative formalisms to interface between probability and logic (e.g., Fox 2003). Fox (2003), for example, explicitly advocates logics of argumentation as the appropriate interface. Even audience relativity (Perelman and Olbrechts-Tyteca 1969) is being addressed by allowing that different audiences may place a different ordering over the values they attach to a set of arguments (Bench-Capon 2003). While we have explicitly eschewed issues of structure in this paper, any complete account of argumentation is going to have to deal with structure along the lines proposed by these recent developments in the relationship between logic and probability theory. Our demonstration that most of the fallacies can be dealt with from the normative perspective of Bayesian probability and decision theory serves as an existence proof that they can be provided with a formal treatment. Other AI approaches may then also be able to explain the fallacies in similar ways (for overviews of different formalisms for uncertainty see, e.g., Krause and Clark 1993; Parsons 2000). Thus we see no incompatibility between the approach we have taken and these other accounts.
Any formalism that provides similar mechanisms for representing subjective degrees of uncertainty and a means of updating these in argument would appear able to account for the fallacies. These other AI approaches also tend to introduce argument strengths – and now values (Bench-Capon 2003) – to tackle technical issues in computational reasoning systems. We think that our approach to the fallacies shows that these developments are well motivated from more than just a technical perspective. They allow these systems to make appropriate distinctions about the degree to which these patterns of reasoning are fallacious or not. Thus, there is more than just technical value to the introduction of argument strengths: they potentially allow such systems to address one of
the most enduring questions in the study of human reasoning, the nature of informal reasoning fallacies. However, one aspect of these approaches is that they tend to use logical entailment to determine whether one proposition is relevant to another, and so can confer an element of plausibility. As we pointed out in discussing the argumentum ad misericordiam, it is well recognized in argumentation theory that many fallacies are fallacies of relevance. As we saw, such arguments can establish relevance despite the lack of any logical connection between propositions describing the to-be-pitied state and the recommended action. Consequently, existing theories of argumentation that transmit plausibilities via logical consequence relations may not capture all the ways in which relevance relations are established in argument. Recent accounts of informational relevance (e.g., Pearl 1988, 2000) may be a promising line to follow in developing notions of relevance with greater applicability to argumentation.
7. CONCLUSIONS
We have sought to demonstrate in this paper, for several classical fallacies, that they can be made to vary more or less continuously in strength from clearly uncompelling, and in that sense fallacious, to convincing, without changes in the type of dialogue in which they are embedded. This suggests strongly that pragma-dialectical considerations alone are not sufficient to explain the fallacies. Specifically, we have sought to show that it is not just context that affects the strength of an argument of a particular structure, as put forth by pragma-dialectical theories, but rather that strength varies as a function of content. Standard textbook examples of fallacies are weak arguments not due to the structure of the argument type in question, or simply to the context in which they occur, but because of the particular content of individual instantiations of the argument. Bayesian inference provides a powerful analytic framework for capturing the variation in argument strength demonstrated for these fallacies and helps identify what sources of information are relevant to evaluating strength. Thus, while there may be alternative theories, we think these features recommend Bayesian inference for consideration as a general, normative theory of argument strength.
NOTES

1 By contrast, Walton (1992a) emphasizes the dependence on context, in line with recent emphasis on the pragmatic analysis of argumentative discourse, e.g., van Eemeren and Grootendorst (1984). Whether these two approaches, emphasis on content and emphasis on context, will turn out to be rivals or complementary treatments of different aspects of argumentation remains to be seen.
2 Such a probabilistic interpretation seems to be disavowed by Walton (1992a), however: “nor are they [slippery slope arguments] best conceived of as inductive arguments that predict an outcome based on probability. These practical arguments involve a kind of presumptive reasoning that makes them best seen as relative to the specific circumstances in a given situation” (p. 14).
3 This framework allows for a formalization of the components identified, e.g., by Lode (1999, 512) and Holtug (1993), and of their interrelationships.
4 The fact that utilities are subjective captures the fact that, as noted by Lode (1999), “ideological differences” can have a considerable impact on how compelling different individuals find a given slippery slope argument.
5 This description applies more generally to what Lode (1999) calls “rational slippery slope arguments”. Their strength depends entirely on the relative utilities involved.
6 http://www.iep.utm.edu/f/fallacies.htm
REFERENCES
Alexy, R.: 1989, A Theory of Legal Argumentation, Clarendon Press, Oxford.
Bench-Capon, T. J. M.: 2003, ‘Persuasion in Practical Argument Using Value-Based Argumentation Frameworks’, Journal of Logic and Computation 13, 429–448.
Bowell, T. and Kemp, G.: 2002, Critical Thinking: A Concise Guide, Routledge, London.
Brown, H.: 1993, ‘A Theory-Laden Observation can Test a Theory’, British Journal for the Philosophy of Science 44, 555–559.
Brown, H.: 1994, ‘Circular Justifications’, PSA 1, 406–414.
Burg, W. van der: 1991, ‘The Slippery Slope Argument’, Ethics 42, 42.
Cherniak, C.: 1986, Minimal Rationality, MIT Press, Cambridge, MA.
Clark, K. L.: 1978, ‘Negation as Failure’, in H. Gallaire and J. Minker (eds.), Logic and Databases, Plenum Press, New York, pp. 293–322.
Copi, I. M. and Cohen, C.: 1990, Introduction to Logic (8th ed.), Macmillan Press, New York.
De Cornulier, B.: 1988, ‘Knowing Whether, Knowing Who, and Epistemic Closure’, in M. Meyer (ed.), Questions and Questioning, Walter de Gruyter, Berlin, pp. 182–192.
Earman, J.: 1992, Bayes or Bust?, MIT Press, Cambridge, MA.
Eemeren, F. H. van and Grootendorst, R.: 1984, Speech Acts in Argumentative Discussions. A Theoretical Model for the Analysis of Discussions Directed Towards Solving Conflicts of Opinion, De Gruyter, Berlin.
Eemeren, F. H. van and Grootendorst, R.: 1992, Argumentation, Communication, and Fallacies, Lawrence Erlbaum, Hillsdale, NJ.
Eemeren, F. H. van and Grootendorst, R.: 2004, A Systematic Theory of Argumentation. The Pragma-Dialectical Approach, Cambridge University Press, Cambridge.
Fox, J.: 2003, ‘Probability, Logic and the Cognitive Foundations of Rational Belief’, Journal of Applied Logic 1, 197–224.
Fox, J. and Parsons, S.: 1998, ‘Arguing about Beliefs and Actions’, in A. Hunter and S. Parsons (eds.), Applications of Uncertainty Formalisms (Lecture Notes in Artificial Intelligence 1455), Springer Verlag, Berlin, pp. 266–302.
Gabbay, D.: 1996, Labelled Deduction Systems, Oxford University Press, Oxford.
Geiger, D. and Shenoy, P. P. (eds.): 1997, Uncertainty in Artificial Intelligence: Proceedings of the Thirteenth Conference, Morgan Kaufman, San Francisco, CA.
Govier, T.: 1982, ‘What’s Wrong with Slippery Slope Arguments’, Canadian Journal of Philosophy 12, 303–316.
Green, D. M. and Swets, J. A.: 1966, Signal Detection Theory and Psychophysics, Wiley, New York.
Hamblin, C. L.: 1970, Fallacies, Methuen, London.
Harman, G.: 1965, ‘The Inference to the Best Explanation’, Philosophical Review 64, 88–95.
Holtug, N.: 1993, ‘Human Gene Therapy: Down the Slippery Slope’, Bioethics 7, 402–419.
Howson, C.: 2003, ‘Probability and Logic’, Journal of Applied Logic 1, 151–165.
Howson, C. and Urbach, P.: 1989, Scientific Reasoning: The Bayesian Approach, Open Court, La Salle, IL.
Ikuenobe, P.: 2004, ‘On the Theoretical Unification and Nature of the Fallacies’, Argumentation 18, 189–211.
Josephson, J. R. and Josephson, S. G.: 1994, Abductive Inference: Computation, Philosophy, Technology, Cambridge University Press, Cambridge.
Kohlas, J.: 2003, ‘Probabilistic Argumentation Systems: A New Way to Combine Logic with Probability’, Journal of Applied Logic 1, 225–253.
Krause, P. and Clark, D.: 1993, Representing Uncertain Knowledge: An Artificial Intelligence Approach, Intellect Books, Oxford, UK.
Kuhn, D.: 1993, Connecting Scientific and Informal Reasoning, Merrill-Palmer.
Kyburg, Jr., H. E.: 2003, ‘Are there degrees of belief?’, Journal of Applied Logic 1, 139–149.
Lode, E.: 1999, ‘Slippery Slope Arguments and Legal Reasoning’, California Law Review 87, 1469–1544.
McClelland, J. L. and Rumelhart, D. E.: 1981, ‘An Interactive Activation Model of Context Effects in Letter Perception: Part 1. An Account of Basic Findings’, Psychological Review 88, 375–407.
McClelland, J. L.: 1998, ‘Connectionist Models and Bayesian Inference’, in M. Oaksford and N. Chater (eds.), Rational Models of Cognition, Oxford University Press, Oxford, pp. 21–53.
McMeniman, L.: 1999, From Inquiry to Argument, Allyn & Bacon, Needham Heights, MA.
Miller, D.: 1994, Critical Rationalism: A Restatement and a Defence, Open Court, La Salle, IL.
Moshman, D. and Geil, M.: 1998, ‘Collaborative Reasoning: Evidence for Collective Rationality’, Thinking and Reasoning 4, 231–248.
Oaksford, M. and Chater, N.: 1991, ‘Against Logicist Cognitive Science’, Mind & Language 6, 1–38.
Oaksford, M. and Hahn, U.: 2004, ‘A Bayesian Approach to the Argument from Ignorance’, Canadian Journal of Experimental Psychology 58, 75–85.
Parsons, S.: 2000, Qualitative Methods for Reasoning Under Uncertainty, MIT Press, Cambridge, MA.
Pearl, J.: 1988, Probabilistic Reasoning in Intelligent Systems, Morgan Kaufmann, San Mateo, CA.
Pearl, J.: 2000, Causality, Cambridge University Press, Cambridge.
Perelman, C. and Olbrechts-Tyteca, L.: 1969, The New Rhetoric: A Treatise on Argumentation, University of Notre Dame Press, Notre Dame, IN.
Pollock, J. L.: 2001, ‘Defeasible Reasoning with Variable Degrees of Justification’, Artificial Intelligence 133, 233–282.
Powers, L.: 1995, ‘The One Fallacy Theory’, Informal Logic 17, 303–314.
Prakken, H. and Vreeswijk, G. A. W.: 2002, ‘Logics for Defeasible Argumentation’, in D. M. Gabbay and F. Guenthner (eds.), Handbook of Philosophical Logic, 2nd edn., Vol. 4, Kluwer Academic Publishers, Dordrecht/Boston/London, pp. 219–318.
Prakken, H., Walton, D. and Reed, C.: 2005, ‘Dialogues about the Burden of Proof’, in Proceedings of the Tenth International Conference on Artificial Intelligence and Law, Bologna, Italy, June, Association for Computing Machinery, New York, pp. 115–124.
Ramsey, F. P.: 1931, The Foundations of Mathematics and Other Logical Essays, Routledge and Kegan Paul, London.
Reiter, R.: 1980, ‘A Logic for Default Reasoning’, Artificial Intelligence 13, 81–132.
Reiter, R.: 1985, ‘On Reasoning by Default’, in R. Brachman and H. Levesque (eds.), Readings in Knowledge Representation, Morgan Kaufmann, Los Altos, CA, pp. 401–410.
Savage, L. J.: 1954, The Foundations of Statistics, John Wiley, New York.
Schauer, F.: 1985, ‘Slippery Slopes’, Harvard Law Review 99, 361–383.
Shogenji, T.: 2000, ‘Self-dependent Justification Without Circularity’, British Journal for the Philosophy of Science 51, 287–298.
Sober, E.: 2002, ‘Bayesianism: Its Scope and Limits’, in R. Swinburne (ed.), Bayes Theorem, Oxford University Press, Oxford.
Surowiecki, J.: 2004, The Wisdom of Crowds, Doubleday, New York.
Volokh, E.: 2003, ‘The Mechanisms of the Slippery Slope’, Harvard Law Review 116, 1026–1137.
Walton, D. N.: 1985, ‘Are Circular Arguments Necessarily Vicious?’, American Philosophical Quarterly 22, 263–274.
Walton, D. N.: 1989, Informal Logic, Cambridge University Press, Cambridge, UK.
Walton, D. N.: 1991, Begging the Question: Circular Reasoning as a Tactic in Argumentation, Greenwood Press, New York.
Walton, D. N.: 1992a, ‘Nonfallacious Arguments from Ignorance’, American Philosophical Quarterly 29, 381–387.
Walton, D. N.: 1992b, Slippery Slope Arguments, Oxford University Press.
Walton, D. N.: 1995, A Pragmatic Theory of Fallacy, The University of Alabama Press, Tuscaloosa/London.
Walton, D. N.: 1996, Arguments from Ignorance, Pennsylvania State University Press, Philadelphia, PA.
Walton, D. N.: 1998, The New Dialectic: Conversational Contexts of Argument, University of Toronto Press, Toronto.
Williams, B.: 1985, ‘Which Slopes are Slippery?’, in M. Lockwood (ed.), Moral Dilemmas in Modern Medicine, Oxford University Press, Oxford, pp. 126–137.
Woods, J., Irvine, A. and Walton, D. N.: 2004, Argument: Critical Thinking, Logic and the Fallacies, Revised edn., Prentice Hall, Toronto.

Ulrike Hahn
School of Psychology
Cardiff University
Cardiff, CF10 3AT
Wales, United Kingdom
E-mail: [email protected]

Mike Oaksford
School of Psychology
Birkbeck College London
London, WC1E 7HX
United Kingdom
E-mail: [email protected]