Abduction Without Minimality

Abhaya C. Nayak (1) and Norman Y. Foo (2)

(1) Computational Reasoning Group, Department of Computing, Macquarie University, NSW 2109, Australia
[email protected]
(2) Knowledge Systems Group, School of Computer Science and Engineering, The University of New South Wales, NSW 2052, Australia
[email protected]
Abstract. In most accounts of common-sense reasoning, only the most preferred of the models allowed by the evidence are retained (and the rest eliminated) in order to enhance inferential power. One problem with this strategy is that the agent's working set of models shrinks quickly in the process. We argue that, instead of rejecting all the non-best models, the reasoner should reject only the worst models, and we then examine the consequences of adopting this principle in the context of abductive reasoning. Apart from providing the relevant representation results, we indicate why an iterated account of abduction is feasible in this framework.
Keywords: belief revision, common-sense reasoning, philosophical foundations
1 Introduction

In many approaches to common-sense reasoning [6], belief change [3] and abductive reasoning [10], appeal is made to the principle of minimal change. This principle can be viewed as the commonsensical principle of selecting the best from the available set of alternatives [12]. In a recent work [9], Nayak et al. have advocated the adoption of the principle of rejecting the worst in lieu of the principle of selecting the best in the context of AGM style belief revision [1, 3]. The aim of this work is to extend this idea to abductive reasoning – in particular, to explore the consequences of discarding the "choose the best" principle in favour of the "reject the worst" principle in the context of abductive belief change [10].

This paper is organised as follows. In the next section we argue that the principle of selecting the best is inappropriate in contexts of a certain character, and should be discarded in favour of the principle of rejecting the worst. In Section 3 we briefly present the account of abductive belief change due to Pagnucco [10] and argue that it is one of those contexts in which adopting the principle of selecting the best has perilous consequences. Section 4 explores the consequences of adopting the principle of rejecting the worst in the context of abductive belief change. Section 5 is devoted to soundness and completeness results for this approach. We end with a brief discussion of the feasibility of an iterated account of abductive belief change in the proposed framework.
2 The Perils of Choosing the Best

The principle of choosing the best essentially says that if there are multiple available ways of attaining a certain (desired) goal, one should choose those that one considers best (according to some contextually defined preference criteria). This principle is very appealing indeed. A moment's reflection shows that the appeal of this principle lies in a simple linguistic fact, namely, that the expression "best item" more or less means an item that should be chosen if offered as an alternative. The principle "select the best" is hence a glaringly obvious but entirely content-less principle. It means that one should select what should be selected, and hence is only as good as the underlying interpretation of the concept "best". More to the point, there is the underlying assumption that one already knows what is best in the given context. In particular, if one does not know exactly what the best item in the choice set is, and one considers x to be only a first approximation to what might be the best, the principle "select the best" has no prescriptive force as to whether or not one should select x.

Let us now consider a concrete situation. Suppose you are planning to fly from Australia to Europe and you are considering which airline to choose. Your choice set, of course, is the set of airlines that provide service from Australia to Europe. The simple suggestion, "Choose the best airline", is not of much help since it does not tell you which airline to choose. A bit of soul searching might explicate your criteria of choice – (low) price, (good) service, (fewer) stopovers, (fewer) hours of waiting at airports, (convenient) departure and arrival times, (good) safety record.[1] Now, if you knew how to quantify these properties of an airline, and the relative importance of these properties (as weights) so far as your choice is concerned, then you could presumably take the weighted sum of these figures as the desirability of an airline and easily determine which airline is best. In other words, given an exhaustive list of the preference criteria, their relative importance as weights, and the relevant properties of individual airlines as quantities, the criteria in question can be combined into a single (read ultimate) preference criterion. Applying the principle of selecting the best, we can then find the desired airline to be contacted. (If there is more than one, we can devise some tie-breaking mechanism.)

In practice, however, the required quantifications may not be available. If so, we are not dealing with a single (read ultimate) preference criterion but with a bunch of them, and we have to use these criteria sequentially to determine the item to be selected. The question then is whether the principle of selecting the best can be applied in this situation. In our favourite example, suppose that the criteria in question were arrived at (and applied) in the given sequence. Table 1 encodes information about the preference among airlines with respect to the different criteria.[2] For instance, according to this table, Air France and Quantas offer the best price, followed by British Airways, which is in turn followed by Lufthansa. On the other hand, JAL and Swiss Air offer the worst price, whereas KLM and Singapore Airlines offer the next to worst.

[1] This is not necessarily the only set of criteria you are going to consider – you might come up with more criteria, e.g. the type of frequent flyer program offered by the airline, and want to add them to the list later without recomputing the best airline from scratch. But without loss of generality, let us pretend that the list in question is complete.
[2] This table is completely fictitious, and has nothing to do with what really is or is not the case.
Price      Service    Stopovers   Waiting    Timing     Safety
AF, QA     JAL        BA          SW, KLM    LU, SA     LU
BA         SW, SA     SW, LU      LU         SW         SW, KLM
LU         QA, KLM    KLM, JAL    SA, JAL    JAL        JAL, BA
KLM, SA    BA, LU     AF, QA      QA, AF     AF, KLM    SA
JAL, SW    AF         SA          BA         QA, BA     QA, AF

Table 1. Preference over airlines based on different criteria. Each column lists the airlines from most to least preferred (AF = Air France, QA = Quantas, BA = British Airways, LU = Lufthansa, SA = Singapore Airlines, SW = Swiss Air, KLM, JAL).
Suppose you consider low price as the primary factor. By applying the principle of selecting the best, the choice set is shrunk to just Air France and Quantas. Next you come up with the criterion "good service". Since your choice set is now {Air France, Quantas} and Quantas fares better than Air France on the service count, your choice set is reduced to the singleton set {Quantas}. After that you are stuck with Quantas, no matter how terrible its safety record is, no matter how inconvenient its timing is, etc. – unless you are prepared to go back to the original choice set and apply the criteria in a different sequence.[3] In fact, according to our table, Quantas has the worst safety record, the worst timing, and is only next to worst in both waiting and number of stopovers. You are still committed to choosing this airline due to the principle of choosing the best! If instead of price you had started with the quality of service, you would have been stuck with JAL right at the outset, although it is the most expensive among the airlines, is only mediocre as far as stopovers, waiting time and timing are concerned, and is next to worst in safety record.

The perils of the "select the best" approach are obvious. By selecting the best, we severely restrict the choices available for future selection, and end up choosing an option which is possibly not at all preferable on some other count. The way out of this peril is equally obvious – we should follow some approach which is less restrictive. One way to achieve this is, instead of rejecting every option which is non-best, to reject every option that is worst. Applying this alternative principle to our pet example, when we consider the criterion of price, we reject JAL and Swiss Air, and are still left with six other airlines. Next, on the service count we eliminate Air France, on the stopover count we reject Singapore Airlines, on the waiting count British Airways, on the count of timing Quantas (BA has already been eliminated at this point), and on the safety count KLM (at this point SW, JAL, BA, SA, QA and AF are no longer available for elimination). Thus we end up selecting Lufthansa, which is ranked mediocre on the count of price, second best on the counts of stopovers and waiting, best on the counts of timing and safety, and next to worst only on the count of service. Many would agree that this is a lot more sensible choice than Quantas, given Table 1. A small computational sketch of the two policies is given at the end of this section.

[3] But that does not solve the problem, only postpones it!

We have thus noticed that there are contexts in which the "reject the worst" principle seems more sensible than the "select the best" principle. We will conclude this section with a sketchy outline of the features which, when present, make a context more appropriate for the "reject the worst" principle as opposed to the "select the best" principle.

– First of all, these principles apply to a choice context. If no choice is at issue, then these principles are irrelevant in that context.
– Given that a choice is to be made, the set of alternatives (the choice set) is clearly specified – and the choice must be made from members of that set. For instance, in our example, since Cathay Pacific is not an available option, the agent is not allowed to choose Cathay Pacific.
– It is understood that the choice being made is not necessarily the final choice. It is possible that the agent might be required to narrow down the choices further in the light of hitherto unavailable criteria.

We will maintain that the above three are the salient features of a choice context in which the "choose the best" principle should be given up in favour of the "reject the worst" principle. In the next section we will show that abductive reasoning is a context with these features, and hence is appropriate for the "reject the worst" principle.
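To see the two policies side by side, here is a small computational sketch (not part of the original argument). Table 1 is hard-coded as numeric ranks, where rank 1 is most preferred; the airline codes and function names are ours, introduced only for illustration. Applied criterion by criterion in the order of the example, "select the best" collapses to Quantas after the second step, while "reject the worst" ends with Lufthansa.

```python
# Sequential filtering by preference criteria.  "Select the best" keeps only
# the top-ranked options at each step; "reject the worst" removes only the
# bottom-ranked ones.  Rankings follow Table 1 (rank 1 = most preferred).

RANKS = {  # criterion -> {airline: rank}
    "price":     {"AF": 1, "QA": 1, "BA": 2, "LU": 3, "KLM": 4, "SA": 4, "JAL": 5, "SW": 5},
    "service":   {"JAL": 1, "SW": 2, "SA": 2, "QA": 3, "KLM": 3, "BA": 4, "LU": 4, "AF": 5},
    "stopovers": {"BA": 1, "SW": 2, "LU": 2, "KLM": 3, "JAL": 3, "AF": 4, "QA": 4, "SA": 5},
    "waiting":   {"SW": 1, "KLM": 1, "LU": 2, "SA": 3, "JAL": 3, "QA": 4, "AF": 4, "BA": 5},
    "timing":    {"LU": 1, "SA": 1, "SW": 2, "JAL": 3, "AF": 4, "KLM": 4, "QA": 5, "BA": 5},
    "safety":    {"LU": 1, "SW": 2, "KLM": 2, "JAL": 3, "BA": 3, "SA": 4, "QA": 5, "AF": 5},
}

def select_best(options, rank):
    best = min(rank[o] for o in options)
    return {o for o in options if rank[o] == best}

def reject_worst(options, rank):
    worst = max(rank[o] for o in options)
    kept = {o for o in options if rank[o] != worst}
    return kept or options          # if all remaining options are tied, keep them all

def run(policy, criteria):
    options = set(RANKS["price"])   # start with all eight airlines
    for c in criteria:
        options = policy(options, RANKS[c])
    return options

order = ["price", "service", "stopovers", "waiting", "timing", "safety"]
print(run(select_best, order))      # {'QA'}  -- stuck with Quantas after two steps
print(run(reject_worst, order))     # {'LU'}  -- Lufthansa survives every cut
```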
3 Abduction by Choosing the Best

Recently a very interesting account of abduction has been offered by Pagnucco [10] as an extension of the classic AGM system of belief change [1]. We will briefly recount the AGM system of belief change, followed by Pagnucco's account of abductive belief change.

3.1 Belief Change

In the AGM system, a belief state is represented as a theory (i.e., a set of sentences closed under your favourite consequence operation), new information (epistemic input) is represented as a single sentence, and a state transition function, called revision, returns a new belief state given an old belief state and an epistemic input. If the input in question is not belief contravening, i.e., does not conflict with the given belief state (theory), then the new belief state is simply the consequence closure of the old state together with the epistemic input. In the other case, i.e., when the input is belief contravening, the model utilises a selection mechanism (e.g. an epistemic entrenchment relation over beliefs, a nearness relation over worlds or a preference relation over theories) in order to determine what portion of the old belief state has to be discarded before the input is incorporated into it.

From here onwards we will assume a finitary propositional object language $L$.[4] Let its logic be represented by a classical logical consequence operation $Cn$. The yielding relation $\vdash$ is defined via $Cn$ as: $\Gamma \vdash x$ iff $x \in Cn(\Gamma)$. The AGM revision operation $*$ is required to satisfy the following rationality postulates. Let $K$ be a belief set (a set of sentences closed under $Cn$), the sentence $x \in L$ the evidence, $*$ the revision operator, and $K^*_x$ the result of revising $K$ by $x$.

[4] A finitary language is a language generated from a finite number of atomic sentences. Note that the number of sentences in such a language is still not finite.
$(1^*)$ $K^*_x$ is a theory
$(2^*)$ $x \in K^*_x$
$(3^*)$ $K^*_x \subseteq Cn(K \cup \{x\})$
$(4^*)$ If $K \not\vdash \neg x$, then $Cn(K \cup \{x\}) \subseteq K^*_x$
$(5^*)$ $K^*_x = K_\perp$ iff $\vdash \neg x$
$(6^*)$ If $\vdash x \leftrightarrow y$, then $K^*_x = K^*_y$
$(7^*)$ $K^*_{x \wedge y} \subseteq Cn(K^*_x \cup \{y\})$
$(8^*)$ If $\neg y \notin K^*_x$, then $Cn(K^*_x \cup \{y\}) \subseteq K^*_{x \wedge y}$
Motivation for these postulates can be found in [3]. We call any revision operation that satisfies the above eight constraints "AGM rational". These postulates can actually be translated into constraints on a non-monotonic inference relation $\mid\!\sim$ [5].

3.2 Semantics of Belief Change

There are various constructions of an AGM rational revision operation. The one we will present is equivalent to the construction via "Systems of Spheres" (SOS) propounded by Adam Grove [7]. Let $\mathcal{M}$ be the class of maximally consistent sets $w$ of sentences in the language in question. The reader is encouraged to think of these maximal sets as worlds, models or scenarios. (We will use the following expressions interchangeably: "$w \models x$", "$x$ allows $w$" and "$w \in [x]$", where $w$ is an element of $\mathcal{M}$ and $x$ is either a sentence or a set of sentences.) Given the belief set $K$, denote by $[K]$ the worlds allowed by it, i.e., $[K] = \{w \in \mathcal{M} \mid K \subseteq w\}$. (Similarly, for any sentence $x$, let $[x]$ be the set of "worlds" in which $x$ holds.) A system of spheres is simply represented by a connected, transitive and reflexive relation (total preorder) $\sqsubseteq$ over the set $\mathcal{M}$ such that $[K]$ is exactly the set of $\sqsubseteq$-minimal worlds of $\mathcal{M}$. Intuitively, $w \sqsubseteq w'$ may be read as: $w$ is at least as good/preferable as $w'$ (or, $w'$ is not strictly preferred to $w$). We define the Grove revision function $*_G$ as: $[K^{*_G}_x] = \{w \in [x] \mid w \sqsubseteq w' \text{ for all } w' \in [x]\}$, whereby $K^{*_G}_x = \bigcap [K^{*_G}_x]$. It turns out that the AGM revision postulates characterise the Grove revision operation $*_G$.[5] A visual representation of the crucial case in the Grove construction is given in Figure 1.
In this figure, the area marked $[x]$ represents the models allowed by the evidence $x$. The area $[K]$ represents the models currently entertained by the agent, and the broken circles demarcate models according to their perceived plausibility. The farther a model is from the centre, the more implausible it is. The shaded part of $[x]$ represents the least implausible of the models allowed by the evidence $x$ – hence it is identified with $[K^*_x]$.

[5] Readers acquainted with Grove's work will easily notice that, given a system of spheres $S$, the relation $\sqsubseteq$ can be generated as: $w \sqsubseteq w'$ iff for every sphere $S' \in S$ that has $w'$ as a member, there exists a sphere $S'' \subseteq S'$ with $w$ as a member. On the other hand, given a total preorder $\sqsubseteq$ on $\mathcal{M}$, a system of spheres $S_\sqsubseteq$ can be generated as follows: a set $S \subseteq \mathcal{M}$ is a sphere in $S_\sqsubseteq$ iff, given any member $w$ of $S$, if $w' \sqsubseteq w$ then $w'$ is also a member of $S$. It is easily noticed that the $\sqsubseteq$-minimal worlds of $\mathcal{M}$ constitute the central sphere, and for any sentence $x$, the $\sqsubseteq$-minimal members of $[x]$ constitute $[K^{*_G}_x]$ in the corresponding SOS.

Fig. 1. Minimality based revision – the principal case.
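As a small illustration of the Grove construction (not part of the original paper), the following sketch assumes worlds are propositional valuations over two atoms and encodes the total preorder as a numeric rank (smaller = more plausible); the revised set of worlds is then simply the set of rank-minimal worlds allowed by the evidence. The atoms, the rank function and the function names are our own illustrative choices.

```python
from itertools import product

# Worlds over two atoms p, q, represented as dicts of truth values.
ATOMS = ("p", "q")
WORLDS = [dict(zip(ATOMS, bits)) for bits in product([True, False], repeat=2)]

# The total preorder over worlds, encoded as a rank (smaller = more plausible).
# The agent believes p & q, so [K] is exactly the set of rank-0 worlds.
def rank(w):
    return 0 if w["p"] and w["q"] else 1 if w["p"] else 2

def models(sentence):
    """[x]: the worlds satisfying `sentence` (a Python predicate on worlds)."""
    return [w for w in WORLDS if sentence(w)]

def grove_revision(sentence):
    """[K*_x]: the rank-minimal worlds among [x], as in the Grove construction."""
    xs = models(sentence)
    if not xs:
        return []                   # inconsistent evidence: no worlds survive
    best = min(rank(w) for w in xs)
    return [w for w in xs if rank(w) == best]

# Revising by ~q: the most plausible ~q-worlds are the p & ~q worlds.
print(grove_revision(lambda w: not w["q"]))
# [{'p': True, 'q': False}]
```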
Viewed from this semantic angle, belief change is about preferential choice: $[K^{*_G}_x]$ essentially identifies the subset to be chosen from $[x]$ as the set of worlds that are $\sqsubseteq$-best in $[x]$. We introduce the following notation for later use.

Definition 1. A subset $T$ of $\mathcal{M}$ is said to be $\sqsubseteq$-flat just in case $w \sqsubseteq w'$ for all members $w, w'$ of $T$. In this case, the members of $T$ are called $\sqsubseteq$-equivalent. $w \sqsubset w'$, on the other hand, is used as an abbreviation for $(w \sqsubseteq w') \wedge (w' \not\sqsubseteq w)$.

3.3 Minimality based Abduction

In Section 3.2 we offered a constructive approach to belief change via a similarity relation $\sqsubseteq$ among worlds. There is a well known alternative to this construction based on a binary relation over sentential beliefs, known as "epistemic entrenchment" [4]. This relation may be viewed as ranking the beliefs based on their comparative strengths of acceptance. Alternatively, a constructive approach to belief change can also be based on the pairwise comparison of the disbeliefs with respect to their strength of denouncement [2]. None of these approaches allows any nontrivial comparison among plausible hypotheses that have neither been accepted nor rejected by the agent (henceforth plausibilities). A case can be made, however, that these plausibilities, namely the hypotheses about which the agent has suspended judgement, can be meaningfully compared with respect to their plausibility. After all, the whole Bayesian tradition is based on the probabilistic comparison of such plausibilities! If we grant that plausibilities can be meaningfully compared with each other, there is an interesting spin-off with respect to the Grovian systems of spheres. Let us say, for a start, that of two plausibilities $x$ and $y$, the former is more plausible iff some $x$-validating scenario is preferable (closer to reality) to every $y$-validating scenario. However, since $x$ and $y$ are plausibilities, the most preferred $x$-validating and $y$-validating worlds are members of $[K]$, and $[K]$ is $\sqsubseteq$-flat! So in order to allow meaningful comparison of plausibilities, we have to supplement the Grovian measure (primarily over $\mathcal{M} \setminus [K]$) with a measure over $[K]$. That is precisely what Pagnucco does in [10] in order to offer us an account of abduction.
Pagnucco effectively ignores (with good reason) the extra-$[K]$ system of spheres and introduces an intra-$[K]$ system of spheres, and examines the consequences of adopting a minimality-based belief change operation with respect to the latter. The result is not belief revision proper, since the pieces of evidence that are of interest here are not disbeliefs but plausibilities, and hence are consistent with the current knowledge. Since the result is in general stronger than classical expansion, it is closest to what has been called abduction or inference to the best explanation in the literature [11]. The following figure provides a visual representation of the abductive process suggested in [10].
Fig. 2. Minimality based Abduction.
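To fix intuitions, here is a hedged sketch of the minimality-based abductive expansion just described: restrict attention to the $x$-worlds of $[K]$ and keep only the best of them under the intra-$[K]$ ordering, again encoded as a numeric rank. The worlds, the hypotheses h1 and h2, and the function name are hypothetical, introduced only for illustration.

```python
# Minimality-based abductive expansion, sketched: restrict attention to the
# x-worlds of [K] and keep only the best of them under the intra-[K] ranking
# (smaller rank = more plausible).

def abductive_expansion_best(K_worlds, intra_rank, x):
    """[K+_x]: the intra_rank-minimal worlds of [K] that satisfy x."""
    candidates = [w for w in K_worlds if x(w)]
    if not candidates:                  # evidence contradicts K: leave [K] unchanged
        return list(K_worlds)
    best = min(intra_rank(w) for w in candidates)
    return [w for w in candidates if intra_rank(w) == best]

# Three K-worlds, identified by which of the hypotheses h1, h2 they satisfy.
K_worlds = [{"h1": True, "h2": True}, {"h1": True, "h2": False}, {"h1": False, "h2": True}]
intra_rank = lambda w: 0 if w["h1"] and w["h2"] else 1   # the h1 & h2 world is most plausible

# Evidence x = h1 (consistent with K): only the single best h1-world survives,
# so the agent ends up accepting h2 as well -- stronger than classical expansion.
print(abductive_expansion_best(K_worlds, intra_rank, lambda w: w["h1"]))
# [{'h1': True, 'h2': True}]
```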
Pagnucco has examined the properties of this abduction operation. Let $K$ be the current belief set, $x$ the evidence and $+$ the abductive expansion operation. The following list fully characterises this operation.

$(1^+)$ $K^+_x$ is a theory
$(2^+)$ If $\neg x \notin K$, then $x \in K^+_x$
$(3^+)$ $K \subseteq K^+_x$
$(4^+)$ If $K \vdash \neg x$, then $K^+_x = K$
$(5^+)$ If $K \not\vdash \neg x$, then $\neg x \notin K^+_x$
$(6^+)$ If $K \vdash x \leftrightarrow y$, then $K^+_x = K^+_y$
$(7^+)$ $K^+_x \subseteq Cn(K^+_{x \vee y} \cup \{x\})$
$(8^+)$ If $\neg x \notin K^+_{x \vee y}$, then $K^+_{x \vee y} \subseteq K^+_x$
The motivation behind these properties can be found in [10].

3.4 Failure of Minimality based Abduction

Despite its innovative approach, Pagnucco's suggestion succumbs to a serious problem. It has long been recognised that any belief change operation should satisfy the principle of category matching: the object that undergoes change must result in an object of the same category. Without this property, there is no guarantee that the resultant object can face up to another change. This was a major problem with the classical AGM approach to belief change; extensions of this approach avoid this myopic problem [8]. However, Pagnucco's approach has not addressed this issue. In it, a structured object (the ordered set $[K]$) undergoes an epistemic change and results in an unstructured object ($[K^+_x]$) which, in turn, cannot handle further abductive change. In Section 2 we outlined some features in whose presence the "reject the worst" principle is more appropriate than the "select the best" principle. It is easily verified that the context of abductive belief change has all these features. Hence we suggest that we give up the "choose the best" principle in the context of abduction and adopt the "reject the worst" principle instead.
4 Abduction by Rejecting the Worst

In the case of abduction, the crucial test is what happens when the evidence $x$ is consistent with the current knowledge $K$. Accordingly, we will pretend that $\mathcal{M} \setminus [K]$ is $\sqsubseteq$-flat, although $[K]$ itself is, in general, not $\sqsubseteq$-flat. This assumption is granted in Pagnucco's account as well, and is similar in spirit to the assumption in [9] that $[K]$ is $\sqsubseteq$-flat. For convenience, we will assume that $K$ is consistent. The case where the evidence is inconsistent with $K$ is a boundary case, and it does not really matter how we deal with it. In this case, Pagnucco disallows any change in the current knowledge. We, on the other hand, will stick to the classical AGM approach and assume that in this case the resultant state is inconsistent. (This accords well with the "reject the worst" principle – assuming that the worlds outside $[K]$ are all equally preferred, they are all rejected, so the resultant set is empty.) Suppose now that the evidence $x$ is consistent with the current knowledge $K$, namely $[K] \cap [x] \neq \emptyset$. Then the choice set in question is the set of worlds $[K] \cap [x]$. If not all members of this set are equally preferred (or dispreferred), then according to the "reject the worst" principle, at least one member of this set will be rejected, and the rest will be returned as $[K^+_x]$.[6] Accordingly, given an appropriate total preorder $\sqsubseteq$ on $\mathcal{M}$ for a belief set $K$, we define the non-minimal abduction operation $\oplus_\sqsubseteq$ (the subscript is henceforth dropped for readability except when the context is confusing) as follows:
Definition 2 (from $\sqsubseteq$ to $\oplus$). Where $\sqsubseteq$ is a total preorder on $\mathcal{M}$ and $K$ a belief set such that $[K] = \{w \mid w \sqsubset w' \text{ for some } w' \in \mathcal{M}\}$,

$$[K^\oplus_x] = \begin{cases} \emptyset & \text{if } [K] \cap [x] = \emptyset \\ [K] \cap [x] & \text{else, if } [K] \cap [x] \text{ is } \sqsubseteq\text{-flat} \\ \{w \in [K] \cap [x] \mid w \sqsubset w' \text{ for some } w' \in [K] \cap [x]\} & \text{otherwise.} \end{cases}$$

[6] We should not use the symbol $+$ here, since we have already used it to denote Pagnucco's abductive expansion. Later on we use a more appropriate symbol, $\oplus$.
This definition separates three distinct ways of processing the evidence, as pictured in Figure 3. The first case is represented by $[z]$: all the models in $[z]$ are eliminated. The area $[y]$ represents the second case: since the models in $[K] \cap [y]$ cannot be discriminated on the basis of $\sqsubseteq$ alone, none of them is eliminated. The principal case, namely the third, is represented by $[x]$: here, among the models provided by $[K] \cap [x]$, the most implausible ones are eliminated and the rest are retained, perhaps for future scrutiny.
Fig. 3. Abduction Without Minimality.
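The following sketch implements the three cases of Definition 2 under the same rank-based encoding of the intra-$[K]$ ordering used earlier; the world names w1–w3 are hypothetical. Note that, unlike the choose-the-best operation, it discards at most the worst grade of worlds.

```python
# The three cases of Definition 2, sketched with a rank-encoded intra-[K]
# ordering (smaller rank = more plausible).

def abductive_expansion_reject_worst(K_worlds, intra_rank, x):
    """[K⊕x] as in Definition 2."""
    candidates = [w for w in K_worlds if x(w)]
    if not candidates:                                # [K] and [x] do not intersect
        return []
    ranks = {intra_rank(w) for w in candidates}
    if len(ranks) == 1:                               # [K] ∩ [x] is flat: keep everything
        return candidates
    worst = max(ranks)                                # principal case: drop the worst grade
    return [w for w in candidates if intra_rank(w) != worst]

K_worlds = ["w1", "w2", "w3"]
intra_rank = {"w1": 0, "w2": 1, "w3": 2}.get          # three plausibility grades
x = lambda w: True                                    # the evidence holds at every K-world

print(abductive_expansion_reject_worst(K_worlds, intra_rank, x))
# ['w1', 'w2']  -- only the worst world w3 is rejected; w1 is still above w2
```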
How good is $\oplus$, as defined above, as an abduction operation? We suggest that any abduction operation must have the following basic properties:
$(1^\oplus)$ $K^\oplus_x$ is a theory
$(2^\oplus)$ $x \in K^\oplus_x$
$(3^\oplus)$ $K \subseteq K^\oplus_x$
$(4^\oplus)$ If $K \not\vdash \neg x$, then $K^\oplus_x \not\vdash \neg x$
$(5^\oplus)$ If $K \not\vdash \perp$, then $K^\oplus_x = K_\perp$ iff $K \vdash \neg x$
$(6^\oplus)$ If $K \vdash x \leftrightarrow y$, then $K^\oplus_x = K^\oplus_y$
The first three of these properties are obvious requirements for any expansion operation, abductive or otherwise. Given $(2^\oplus)$, the fourth condition says that evidence consistent with the current knowledge cannot introduce inconsistency into one's body of knowledge. The fifth property says that an abductive process results in an inconsistent body of knowledge exactly when the evidence in question conflicts with the current knowledge. The sixth property says that the syntactic representation of the evidence is irrelevant to the result of an abductive expansion, modulo the consequences of the current knowledge. It is interesting to compare these basic properties with the first six postulates proposed by Pagnucco [10]. The basic difference lies in the corresponding second properties. Unlike $(2^\oplus)$, Pagnucco's Success postulate is conditional upon the evidence being consistent with the current knowledge. When the evidence is inconsistent with the current knowledge, expansion is a boundary case, and how it is handled should not be given much importance. Accordingly, while we retain the AGM property of Success at the cost of allowing possible expansion into inconsistency, Pagnucco avoids such silly expansion at the cost of losing Success. This explains the difference between the fourth properties of abduction in the two systems. As will be reported in Section 5, our abduction operation $\oplus$, apart from satisfying these basic postulates, also satisfies the following five supplementary postulates for abductive expansion.
$(7.1^\oplus)$ If $K^\oplus_x \not\subseteq Cn(K \cup \{x, y\})$, then $K^\oplus_{x \wedge y} \subseteq Cn(K^\oplus_x \cup \{y\})$
$(7.2^\oplus)$ If $K^\oplus_y = Cn(K \cup \{y\})$, then $K^\oplus_{x \wedge y} \subseteq Cn(K^\oplus_x \cup \{y\})$
$(7.3^\oplus)$ If $K^\oplus_x \cap Cn(K \cup \{y\}) \subseteq Cn(K \cup \{x\})$, then $K^\oplus_{x \wedge y} \subseteq Cn(K^\oplus_x \cup \{y\})$
$(8^\oplus)$ If $K^\oplus_x \not\vdash \neg y$, then $Cn(K^\oplus_x \cup \{y\}) \subseteq K^\oplus_{x \wedge y}$
$(9^\oplus)$ If $K \not\vdash \neg x$, $K^\oplus_x \vdash \neg y$ but $K \cup \{x\} \not\vdash \neg y$, then $K^\oplus_{x \wedge y} = Cn(K \cup \{x, y\})$.

Postulates $(7.1^\oplus)$–$(7.3^\oplus)$ tell us under what conditions a piece of evidence $y$ loses its inferential power in the presence of another piece of evidence $x$. For instance, $(7.1^\oplus)$ may be paraphrased as follows: given some background knowledge $K$, if $x$ is able to explain something that cannot be classically inferred from $x$ and $y$ together, then there is nothing that does not classically follow from $y$ in the presence of what $x$ alone explains, and yet is jointly explainable by $x$ and $y$ together. Postulate $(8^\oplus)$, on the other hand, says that $x$ and $y$ jointly fail to explain something that follows from $y$ in the presence of what is explainable by $x$ only if $y$ conflicts with something that is explained by $x$. Finally, postulate $(9^\oplus)$ essentially says that even though evidence $y$ does not conflict with $x$ (and the background knowledge $K$), if $y$ conflicts with something explainable in terms of $x$, then $x$ and $y$ jointly have no abductive force.

One striking difference between our supplementary postulates and those in [10] is that, while in the latter constraints are sought for handling disjunctive evidence (i.e., on $K^+_{x \vee y}$), in the current approach constraints are sought on the result of processing conjunctive evidence (i.e., on $K^\oplus_{x \wedge y}$). The primary reason for this is that we wanted the connection between the properties of the non-minimal revision operation in [9] and those of $\oplus$ to be made obvious. We believe, however, that the same effect can be achieved by putting constraints on $K^\oplus_{x \vee y}$ as are achieved by constraining $K^\oplus_{x \wedge y}$ in the postulates $(7.1^\oplus)$–$(9^\oplus)$.
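As a quick, machine-checkable sanity check of the basic postulates (it is of course no substitute for the proofs reported in the next section), one can identify theories and sentences with their sets of models over a small fixed set of worlds and test the postulates by brute force; postulate $(1^\oplus)$ is automatic in this representation. The model below, with four hypothetical worlds and a rank-encoded ordering, is ours and is chosen only for illustration.

```python
from itertools import chain, combinations

# Brute-force sanity check of the basic postulates on one tiny model.
# Theories and sentences are identified with their sets of models, so
# "x is in K" becomes "[K] is a subset of [x]", and Cn(K + x) corresponds
# to [K] ∩ [x].

WORLDS = frozenset({"w1", "w2", "w3", "w4"})
K = frozenset({"w1", "w2", "w3"})                # [K]: the non-maximal worlds
RANK = {"w1": 0, "w2": 1, "w3": 2, "w4": 3}      # smaller = more plausible

def expand(x):
    """[K⊕x] as in Definition 2, for the sentence whose set of models is x."""
    cands = K & x
    if not cands:
        return frozenset()
    ranks = {RANK[w] for w in cands}
    if len(ranks) == 1:
        return cands
    worst = max(ranks)
    return frozenset(w for w in cands if RANK[w] != worst)

def sentences():
    ws = sorted(WORLDS)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(ws, r) for r in range(len(ws) + 1))]

for x in sentences():
    assert expand(x) <= x                        # (2): the evidence is accepted
    assert expand(x) <= K                        # (3): K is included in the result
    if K & x:
        assert expand(x)                         # (4): consistent evidence stays consistent
    assert bool(expand(x)) == bool(K & x)        # (5): inconsistency exactly on conflict
    for y in sentences():
        if K & x == K & y:
            assert expand(x) == expand(y)        # (6): syntax-irrelevance modulo K

print("basic postulates hold on this model")
```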
5 Technical Results

In Section 4 we proposed and discussed a list of abduction postulates. It remains to be seen whether the proposed postulates in fact capture the semantic intuition behind Definition 2. The following results show that the postulates $(1^\oplus)$–$(9^\oplus)$ indeed characterise the operation in question. The proofs have been omitted due to space constraints. Our first result, the soundness result, shows that the expansion operation $\oplus$ defined from the total preorder $\sqsubseteq$ via Definition 2 in fact satisfies properties $(1^\oplus)$–$(9^\oplus)$.

Theorem 1. Let $\sqsubseteq$ be a total preorder and let the operation $\oplus = \oplus_\sqsubseteq$ be defined from $\sqsubseteq$ in accordance with Definition 2. Then $\oplus$ satisfies the properties $(1^\oplus)$–$(9^\oplus)$.
Next we show the completeness result to the effect that, given an abductive expansion operation $\oplus$ that satisfies $(1^\oplus)$–$(9^\oplus)$ and a fixed belief set $K$, we can construct a binary relation $\sqsubseteq_{\oplus,K}$ with the desired properties. (We will trade off rigour against readability, and normally drop the subscripts.) In particular, we will show that, where $\sqsubseteq$ is the relation so constructed: (1) $\sqsubseteq$ is a total preorder over $\mathcal{M}$, (2) the SOS (system of spheres) corresponding to $\sqsubseteq$ is one whose maximal elements are exactly the members of $\mathcal{M} \setminus [K]$, and (3) $K^\oplus_x = K^{\oplus_\sqsubseteq}_x$ for any sentence $x$.

Definition 3 (from $\oplus$ to $\sqsubseteq$). Given an abductive expansion operation $\oplus$ and a belief set $K$, $w \sqsubseteq_{\oplus,K} w'$ iff either $w' \notin [K]$, or both $\{w, w'\} \subseteq [K]$ and $w \in [K^\oplus_x]$ whenever $w' \in [K^\oplus_x]$, for every sentence $x$ such that $\{w, w'\} \subseteq [K] \cap [x]$.

Theorem 2. Let $\oplus$ be an abductive expansion operation satisfying $(1^\oplus)$–$(9^\oplus)$ and $K$ a belief set. Let $\sqsubseteq$ be generated from $\oplus$ and $K$ as prescribed by Definition 3. Then $\sqsubseteq$ is a total preorder on $\mathcal{M}$ such that $[K]$ is the set of $\sqsubseteq$-non-maximal elements of $\mathcal{M}$.

Theorems 1 and 2 jointly show that the postulates $(1^\oplus)$–$(9^\oplus)$ exactly characterise the abductive expansion operation constructed from the $\sqsubseteq$ relation. Furthermore, the total preorder $\sqsubseteq_{\oplus,K}$ constructed from a given non-minimal expansion operation $\oplus$ and belief set $K$ is the desired $\sqsubseteq$ in the sense that the non-minimal expansion operation constructed from it, in turn, behaves like the original operation $\oplus$ with respect to the belief set $K$.

Theorem 3. Let $\oplus$ be a non-minimal abductive expansion operation satisfying postulates $(1^\oplus)$–$(9^\oplus)$ and $K$ an arbitrary belief set. Let $\sqsubseteq$ be defined from $\oplus$ and $K$ in accordance with Definition 3, and let $\oplus' = \oplus_\sqsubseteq$ be defined from $\sqsubseteq$, in turn, via Definition 2. Then for any sentence $x$ (and the originally fixed belief set $K$) it holds that $K^{\oplus'}_x = K^\oplus_x$.

Conversely, if one starts with a total preorder $\sqsubseteq$, constructs an expansion operation $\oplus$ from it via Definition 2, and then constructs a total preorder from that $\oplus$ in turn via Definition 3, one gets back the original relation $\sqsubseteq$.
Theorem 4. Let $\sqsubseteq$ be a total preorder on $\mathcal{M}$ and $[K]$ the set of $\sqsubseteq$-non-maximal members of $\mathcal{M}$. Let $\oplus$ be defined (for $K$) from $\sqsubseteq$ via Definition 2, and let $\sqsubseteq' = \sqsubseteq_{\oplus,K}$ be defined from $\oplus$, in turn, via Definition 3. Then $w \sqsubseteq w'$ iff $w \sqsubseteq' w'$ for any two worlds $w, w' \in \mathcal{M}$.
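A sketch of the construction in Definition 3 may also help: assuming finitely many worlds, every set of worlds serves as $[x]$ for some sentence $x$, so the relation $\sqsubseteq_{\oplus,K}$ can be recovered by brute force. The toy expansion used below is a rank-based reject-the-worst operation with hypothetical worlds w1–w4; it is there only to check that the recovered relation orders the worlds as expected.

```python
from itertools import chain, combinations

# Recovering the preorder from the expansion operation, per Definition 3.
# We assume finitely many worlds, so every set of worlds plays the role of
# [x] for some sentence x, and the quantification over sentences becomes a
# loop over subsets of worlds.

def all_subsets(worlds):
    ws = list(worlds)
    return chain.from_iterable(combinations(ws, r) for r in range(len(ws) + 1))

def preorder_from_expansion(worlds, K_worlds, expand):
    """Return the pairs (w, w') with w below-or-equal w', as in Definition 3.
    `expand(x_worlds)` should return [K⊕x] for the sentence whose models are
    x_worlds."""
    K = set(K_worlds)
    leq = set()
    for w in worlds:
        for w2 in worlds:
            if w2 not in K:
                leq.add((w, w2))         # any world is at least as good as a non-K world
            elif w in K:
                ok = all(w in expand(x) or w2 not in expand(x)
                         for x in map(set, all_subsets(worlds))
                         if {w, w2} <= (K & x))
                if ok:
                    leq.add((w, w2))
    return leq

WORLDS = ["w1", "w2", "w3", "w4"]
K_WORLDS = ["w1", "w2", "w3"]                          # w4 lies outside [K]
RANK = {"w1": 0, "w2": 1, "w3": 2, "w4": 3}

def expand(x_worlds):
    """A toy reject-the-worst expansion over [K], used to test the recovery."""
    cands = [w for w in K_WORLDS if w in x_worlds]
    if not cands:
        return []
    ranks = {RANK[w] for w in cands}
    if len(ranks) == 1:
        return cands
    return [w for w in cands if RANK[w] != max(ranks)]

leq = preorder_from_expansion(WORLDS, K_WORLDS, expand)
print(("w1", "w2") in leq, ("w2", "w1") in leq)        # True False: w1 is strictly better
```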
6 Discussion: Iterated Abduction

In this paper we examined the consequences of adopting the "reject the worst" principle in the context of abductive belief change. This was done with the intent of extending a recently proposed account of abduction [10] so that iterated abduction can be accommodated in the resulting framework. In the account of abduction provided in [10], there is no room for iterated abduction. This is because Pagnucco's minimality based abduction relies on the degree of plausibility (of sentences that are at the time neither believed nor disbelieved) but provides no such measure in the resultant belief state. Graphically speaking (see Figure 2), $[K^+_x]$, the candidate for $[K]$ in the next generation, is devoid of any structure, making it impossible to generate a measure of plausibility.[7] So, after one round of abduction, this account will reduce to classical AGM expansion.

[7] Strictly speaking, it will generate a flat plausibility measure.

The account of abduction provided in this paper addresses this shortcoming. In general, there is enough structure left in $[K^\oplus_x]$ to make further rounds of abduction possible. Of course, after each round of abduction, less structure is left in the new $[K]$, and eventually it will flatten out. Thus it would appear as if our account merely postpones the problem of iterated abduction. Such a conclusion, however, is rather premature. Often, the evidence we handle is not consistent with our current knowledge. In such a case, it is understood that the agent should use a revision operation (instead of an expansion operation). And if the agent uses the non-minimal revision operation of [9], then, although the plausibility measure is currently flat, the revision is likely to inject structure into one's plausibility measure. A rigorous presentation of this material is beyond the scope of this paper, and is the subject of a different work.
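The contrast with respect to iteration can be seen in a few lines. Under the same rank-encoded intra-$[K]$ ordering used in the earlier sketches (hypothetical worlds w1–w4), choosing the best flattens the structure in one step, whereas rejecting the worst leaves enough structure for further rounds.

```python
# Iterating the two abductive policies over the same intra-[K] ranking
# (smaller rank = more plausible).  Rejecting the worst leaves structure
# behind for a further round; choosing the best flattens immediately.

RANK = {"w1": 0, "w2": 1, "w3": 2, "w4": 3}

def choose_best(worlds):
    best = min(RANK[w] for w in worlds)
    return [w for w in worlds if RANK[w] == best]

def reject_worst(worlds):
    ranks = {RANK[w] for w in worlds}
    if len(ranks) == 1:                      # already flat: nothing to reject
        return list(worlds)
    worst = max(ranks)
    return [w for w in worlds if RANK[w] != worst]

K = ["w1", "w2", "w3", "w4"]                 # [K], with four plausibility grades

# One round of abduction by evidence compatible with all of [K]:
print(choose_best(K))                        # ['w1']             -- flat from now on
print(reject_worst(K))                       # ['w1', 'w2', 'w3'] -- still ordered

# A second round remains discriminating under reject-the-worst:
print(reject_worst(reject_worst(K)))         # ['w1', 'w2']
```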
References

1. Carlos E. Alchourrón, Peter Gärdenfors, and David Makinson. On the logic of theory change: Partial meet contraction and revision functions. Journal of Symbolic Logic, 50:510–530, 1985.
2. Didier Dubois and Henri Prade. Belief change and possibility theory. In Peter Gärdenfors, editor, Belief Revision, pages 142–182. Cambridge University Press, 1992.
3. Peter Gärdenfors. Knowledge in Flux: Modeling the Dynamics of Epistemic States. Bradford Books, MIT Press, Cambridge, Massachusetts, 1988.
4. Peter Gärdenfors and David Makinson. Revisions of knowledge systems using epistemic entrenchment. In Proceedings of the Second Conference on Theoretical Aspects of Reasoning About Knowledge, pages 83–96, 1988.
5. Peter Gärdenfors and David Makinson. Nonmonotonic inference based on expectations. Artificial Intelligence, 65:197–245, 1994.
6. Michael R. Genesereth and Nils J. Nilsson. Logical Foundations of Artificial Intelligence. Morgan Kaufmann, 1987.
7. Adam Grove. Two modellings for theory change. Journal of Philosophical Logic, 17:157–170, 1988.
8. Abhaya C. Nayak. Iterated belief change based on epistemic entrenchment. Erkenntnis, 41:353–390, 1994.
9. Abhaya C. Nayak and Norman Y. Foo. Reasoning without minimality. In Hing-Yan Lee and Hiroshi Motoda, editors, Proceedings of the Fifth Pacific Rim International Conference on Artificial Intelligence (PRICAI-98), pages 122–133. Springer Verlag, 1998.
10. Maurice Pagnucco. The Role of Abductive Reasoning within the Process of Belief Revision. PhD thesis, University of Sydney, 1996.
11. Gabrielle Paul. Approaches to abductive reasoning: An overview. Artificial Intelligence Review, 7:109–152, 1993.
12. Yoav Shoham. Reasoning About Change. MIT Press, Cambridge, Massachusetts, 1988.