Expressing Independence in a Possibilistic Framework and its Application to Default Reasoning

Salem Benferhat, Didier Dubois and Henri Prade

Abstract. Possibility theory offers a general framework for dealing with default rules of the form "generally, if p, then q", which are modelled by constraints expressing that having p and q true is strictly more possible than having p and ¬q true. Possibilistic logic then offers an inference machinery for default reasoning which obeys the classical postulates for nonmonotonic reasoning, including rational monotony. In this paper, after presenting an overview of the possibilistic approach to default reasoning, we discuss how to express independence information of the type "in context p, the truth or the falsity of r has no influence on the truth of q". A brief discussion of the modelling of independence in possibility theory is provided. Taking independence assumptions into account leads to supplementing the given set of defaults with more specific defaults; this allows us to deal with independence information in a way homogeneous with defaults, and to solve blocking of property inheritance problems.
1 INTRODUCTION
In [2], a method has been proposed to encode default rules of the form "normally, if p then q" in the framework of possibilistic logic. However, commonsense reasoning does not only depend on generic pieces of knowledge pervaded with exceptions. It also takes advantage of (contextual) independence assumptions of the form: the fact that r is true (or is false) does not affect the validity of the rule "normally, if p then q". In this paper we investigate how to model such independence assumptions in the possibilistic framework, where, as already said, default rules are handled in a satisfactory way. However, it is not completely clear what kind of notion of independence can be expressed in possibility theory, nor how it can be done. Indeed there is no well-established concept of independence in possibility theory, in contrast with probability theory where it plays a key role, as for instance in belief networks [16]. The paper starts with a motivating example showing how possibility theory and possibilistic logic apply to the treatment of default rules; this is also the opportunity to give the necessary background. Then we investigate how independence can be defined in possibility theory, discussing several possible definitions which depart from probabilistic independence. After determining a definition of independence appropriate for default reasoning, we show how to introduce independence assumptions in our possibilistic knowledge base. Lastly, some relations with other works and lines for further research are pointed out.

1 Institut de Recherche en Informatique de Toulouse (I.R.I.T.) – C.N.R.S., Université Paul Sabatier, 118 route de Narbonne, 31062 Toulouse Cedex, France
2 BACKGROUND AND EXAMPLE
Let us consider the following (usual) set of default rules d1: "birds fly", d2: "penguins do not fly", d3: "penguins are birds", symbolically written {d1: b → f, d2: p → ¬f, d3: p → b}.
2.1 Possibility Theory
Each default p → q can be viewed as a constraint expressing that the situation where p ∧ q is true has a greater plausibility than the one where p ∧ ¬q is true; in other words, we want to express that in the context where p is true, q is more possible or plausible than ¬q. So we need a qualitative relation for comparing plausibility levels. We use a qualitative possibility relation ≥∏ defined on a Boolean algebra of propositions [15][3][6]. Such a relation is assumed to be i) complete (∀p, ∀q, p ≥∏ q or q ≥∏ p); ii) transitive; and to satisfy iii) T >∏ ⊥, where T and ⊥ represent tautology and contradiction respectively, and >∏ is the strict part of the ordering ≥∏; iv) T ≥∏ p ≥∏ ⊥, ∀p; and the characteristic axiom v) p ≥∏ q ⇒ r ∨ p ≥∏ r ∨ q, ∀p, ∀q, ∀r. In the finite case, the only numerical counterparts to possibility relations are possibility measures [22], such that ∀p, ∀q, ∏(p ∨ q) = max(∏(p), ∏(q)). By convention, ∀p, ∏(p) ∈ [0,1] and ∏(⊥) = 0. Let Ω be the finite set of interpretations of our propositional logic language. If this language is built from the literals p1, …, pn, these interpretations correspond to the possible worlds where the conjunctions ∗p1 ∧ … ∧ ∗pn are true, where ∗ stands for the presence of the negation sign ¬ or its absence. Let Ω = {ω1, …, ωk} with k = 2^n. In the finite case a possibility measure ∏ is equivalent to a possibility distribution π from Ω to [0,1] according to the relation ∏(p) = max{π(ω) | ω ⊨ p}, letting π(ωr) = ∏(∗r p1 ∧ … ∧ ∗r pn), where ∗r denotes the assignment of negation signs corresponding to ωr, and ω ⊨ p means that ω makes p true. By convention π represents some background knowledge about where the real world is: π(ω) = 0 means that ω is not possible, and π(ω) = 1 that nothing prevents ω from being the real world. When π(ω) > π(ω'), ω is a preferred candidate to ω' for being the real world (ω is said to be more possible than ω'). π is thus a convenient encoding of a preference relation that can embody concepts such as normality, typicality, consistency with available knowledge, qualitative likelihood, etc. ∏(p) evaluates the extent to which p is consistent with the available knowledge expressed by π. By duality, a necessity measure N is associated with ∏ by ∀p, N(p) = 1 − ∏(¬p). N expresses certainty, since when ¬p is impossible, p becomes certain. We have N(p) = min{1 − π(ω) | ω ⊨ ¬p}.

© 1994 S. Benferhat, D. Dubois and H. Prade. ECAI 94. 11th European Conference on Artificial Intelligence. Edited by A. Cohn. Published in 1994 by John Wiley & Sons, Ltd.
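To make the duality between ∏ and N concrete, here is a small Python sketch (not from the paper; the distribution values are purely illustrative) computing both measures from a possibility distribution over worlds:

```python
# Sketch: possibility measure Pi and dual necessity measure N computed
# from a possibility distribution pi over worlds (illustrative values).
# A world is a pair of truth values (b, f) for "bird" and "flies".

pi = {
    (True, True): 1.0,    # typical flying bird: fully possible
    (True, False): 0.5,   # non-flying bird: somewhat exceptional
    (False, True): 1.0,
    (False, False): 1.0,
}

def Pi(prop):
    """Pi(p) = max{ pi(w) | w satisfies p } (0 if p has no model)."""
    return max((v for w, v in pi.items() if prop(*w)), default=0.0)

def N(prop):
    """N(p) = 1 - Pi(not p): the certainty of p."""
    return 1.0 - Pi(lambda *w: not prop(*w))

print(Pi(lambda b, f: b and f))      # 1.0
print(N(lambda b, f: (not b) or f))  # 0.5 -- "birds fly" is somewhat certain
```

As the text notes, the numerical values matter only through their ordering: any rescaling preserving the order of π would yield the same comparisons.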
2.2 Generation of a Normality Ordering
The set of three defaults in our example will be represented by the following set C of constraints: (b ∧ f >∏ b ∧ ¬f), (p ∧ ¬f >∏ p ∧ f), (p ∧ b >∏ p ∧ ¬b). Let Ω be the set of possible interpretations {ω0: ¬b ∧ ¬f ∧ ¬p, ω1: ¬b ∧ ¬f ∧ p, ω2: ¬b ∧ f ∧ ¬p, ω3: ¬b ∧ f ∧ p, ω4: b ∧ ¬f ∧ ¬p, ω5: b ∧ ¬f ∧ p, ω6: b ∧ f ∧ ¬p, ω7: b ∧ f ∧ p}. Then the set of constraints C' on models is:
C'1: max(π(ω6), π(ω7)) > max(π(ω4), π(ω5))
C'2: max(π(ω5), π(ω1)) > max(π(ω3), π(ω7))
C'3: max(π(ω5), π(ω7)) > max(π(ω1), π(ω3)).
Let >π be a ranking of Ω such that ω >π ω' iff π(ω) > π(ω'). Any finite consistent set of constraints {pi ∧ qi >∏ pi ∧ ¬qi} induces a partially defined ranking >π on Ω, which can be completed according to the principle of minimum specificity. The idea is to assign to each world ω the highest possibility level (forming a well-ordered partition of Ω) without violating the constraints. The ordered partition of Ω associated with >π using the minimum specificity principle can be obtained by the following procedure [2]:
a. i = 0
b. While Ω is not empty repeat b.1–b.4:
b.1. i ← i + 1
b.2. Put in Ei every model which does not appear in the right-hand side of any constraint of C',
b.3. Remove the elements of Ei from Ω,
b.4. Remove from C' any constraint containing elements of Ei.
Let us now apply this algorithm. The set of models which do not appear in the right-hand side of any constraint of C' is E1 = {ω0, ω2, ω6}. We remove the elements of E1 from Ω and we remove the constraint C'1 from C' (since assigning the highest possibility level to ω6 makes C'1 always satisfied). Restarting the procedure, we find successively the two sets {ω4, ω5} and {ω1, ω3, ω7}. Finally, the well-ordered partition of Ω is: {ω0, ω2, ω6} >π {ω4, ω5} >π {ω1, ω3, ω7}. Let E1, …, Em be the obtained partition. A numerical counterpart to >π can be defined by π(ω) = (m + 1 − i)/m if ω ∈ Ei, i = 1, …, m. In our example we have m = 3 and π(ω0) = π(ω2) = π(ω6) = 1; π(ω4) = π(ω5) = 2/3; π(ω1) = π(ω3) = π(ω7) = 1/3. Note that the numerical scale is purely a matter of convenience, and any other numerical counterpart such that π(ω) > π(ω') iff ω >π ω' will work as well; namely, [0,1] is used as an ordinal scale. From π, we can compute the necessity degree N(p) for any proposition p. For instance, N(¬p ∨ ¬f) = min{1 − π(ω) | ω ⊨ p ∧ f} = min(1 − π(ω3), 1 − π(ω7)) = 2/3, while N(¬b ∨ f) = min(1 − π(ω4), 1 − π(ω5)) = 1/3 and N(¬p ∨ b) = min(1 − π(ω1), 1 − π(ω3)) = 2/3.

2.3 Encoding Defaults in Possibilistic Logic

The method then consists in turning each default pi → qi into a possibilistic clause (¬pi ∨ qi, N(¬pi ∨ qi)), where N is computed from the possibility distribution π induced by the set of constraints corresponding to the default knowledge base. Then we apply the possibilistic inference machinery for reasoning with the defaults together with the available factual knowledge. We first recall the basic principles of possibilistic inference and the guarantees we have about the conclusions which can be inferred by this method. Possibilistic logic (see [4] for a general exposition) manipulates first-order logical formulas weighted by lower bounds of necessity measures or possibility measures. We only consider necessity-weighted formulas here. Inference at the syntactic level in possibilistic logic is performed by means of a weighted version of the resolution principle, here stated for propositional formulas [4]:
(p ∨ q α); (¬p ∨ r β) ⊢ (q ∨ r min(α,β)).
Proving (p α) from a possibilistic knowledge base B = {(pi αi), i = 1, …, n} comes down to deriving the contradiction (⊥ β) from B ∪ {(¬p 1)} with a weight β ≥ α. It will be denoted by B ⊢ (p β). We can also compute the degree of inconsistency of B as Inc(B) = max{α | B ⊢ (⊥ α)}. When Inc(B) = 0, B is consistent, and this is equivalent to the classical consistency of the set of formulas in B without taking the weights into account. This inference method is as efficient as classical logic refutation by resolution, and has been implemented in the form of an A*-like algorithm. It is sound and complete (see, e.g., [4]) with respect to the possibilistic semantics of the knowledge base B, represented by the possibility distribution πB defined by
∀ω ∈ Ω, πB(ω) = min{1 − αi | ω ⊨ ¬pi, i = 1, …, n}.
Namely, B ⊢ (p α) if and only if the degree of inconsistency of the possibility distribution associated with B ∪ {(¬p 1)} is greater than α; equivalently, NB(p) ≥ α in the sense of πB. This possibility distribution is not necessarily normal (i.e., such that ∃ω, πB(ω) = 1), and 1 − maxω∈Ω πB(ω) is called the degree of inconsistency of the possibilistic knowledge base. Using the weights computed in Section 2.2 for the defaults, πB completely agrees with the ordered partition {E1, E2, E3}. Besides, given a possibility distribution π on Ω, a notion of preferential entailment ⊨π can be defined in the spirit of Shoham [19]'s proposal:
Automated Reasoning
p ⊨π q ⇔ all the worlds which maximize π, among those which satisfy p, satisfy q.
We restrict this definition to propositions p such that ∏(p) > 0. It can then be established that [7]:
p ⊨π q if and only if {ω ⊨ p | π(ω) = ∏(p) > 0} ⊆ {ω | ω ⊨ q}, if and only if ∏(p ∧ q) > ∏(p ∧ ¬q).
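Both the minimum-specificity ranking of Section 2.2 and this characterization of ⊨π can be replayed mechanically. A small Python sketch (an illustration, not the authors' implementation) ranks the eight worlds from C'1–C'3 and then tests the entailments:

```python
# Sketch: minimum-specificity ranking (Section 2.2) plus the preferential
# entailment test  p |=_pi q  iff  Pi(p & q) > Pi(p & not q).
# Worlds w0..w7 encode (b, f, p) as bits: b = 4, f = 2, p = 1.

def rank(worlds, constraints):
    worlds = set(worlds)
    cs = [(set(l), set(r)) for l, r in constraints]
    partition = []
    while worlds:
        # step b.2: keep every world absent from all right-hand sides
        level = {w for w in worlds if all(w not in r for _, r in cs)}
        partition.append(level)
        worlds -= level                                  # step b.3
        cs = [(l, r) for l, r in cs if not (l & level)]  # step b.4
    return partition

# C'1: {w6,w7} > {w4,w5};  C'2: {w5,w1} > {w3,w7};  C'3: {w5,w7} > {w1,w3}
levels = rank(range(8), [({6, 7}, {4, 5}), ({5, 1}, {3, 7}), ({5, 7}, {1, 3})])
pi = {w: (len(levels) - i) / len(levels)
      for i, lev in enumerate(levels) for w in lev}

VAL = {w: (bool(w & 4), bool(w & 2), bool(w & 1)) for w in range(8)}  # (b, f, p)
Pi = lambda prop: max((pi[w] for w in range(8) if prop(*VAL[w])), default=0.0)

def pref_entails(p, q):
    return Pi(lambda *w: p(*w) and q(*w)) > Pi(lambda *w: p(*w) and not q(*w))

print(levels)  # [{0, 2, 6}, {4, 5}, {1, 3, 7}]
print(pref_entails(lambda b, f, p: b, lambda b, f, p: f))      # True: birds fly
print(pref_entails(lambda b, f, p: p, lambda b, f, p: not f))  # True: penguins do not
```

The numeric values 1, 2/3, 1/3 reproduce those of Section 2.2; as noted there, they only matter as an ordinal scale.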
Preferential possibilistic entailment can also be captured at the syntactic level, and we can prove [7]:
p ⊨π q for π = πB if and only if B ∪ {(p 1)} ⊢ (q α) with α > Inc(B ∪ {(p 1)}).
It was established [2] that any nonmonotonic inference relation obeying the following postulates: reflexivity, left logical equivalence, right weakening, the AND and OR rules, and rational monotony (see [12]), can be represented by a possibilistic entailment ⊨π. Besides, ⊨π is equivalent to the 1-entailment proposed by Pearl [17] in System Z, and indeed the ranking procedure of the interpretations, of the formulas and of the defaults described above is equivalent to the one based on System Z. We can now finish the treatment of our example. Notice first that the possibilistic knowledge base equivalent to our set of defaults is Σ = {(¬b ∨ f 1/3), (¬p ∨ b 2/3), (¬p ∨ ¬f 2/3)}. Then the following derivation, knowing with certainty that Tweety is a penguin (p 1) and a bird (b 1), gives the optimal degree of inconsistency of Σ ∪ {(p 1), (b 1)}, which is equal to 1/3:
(¬p ∨ b 2/3); (¬b ∨ f 1/3) ⊢ (¬p ∨ f 1/3)
(¬p ∨ f 1/3); (¬p ∨ ¬f 2/3) ⊢ (¬p 1/3)
(¬p 1/3); (p 1) ⊢ (⊥ 1/3).
By refutation, the following derivation shows that ¬f is truly a logical consequence of Σ ∪ {(p 1), (b 1)}, i.e., Tweety does not fly, since adding the piece of information (f 1) yields a degree of inconsistency equal to 2/3, which is higher than 1/3. Indeed {(¬p ∨ ¬f 2/3), (p 1)} ⊢ (¬f 2/3) and {(f 1), (¬f 2/3)} ⊢ (⊥ 2/3).
N.B.1: This framework can cope with the irrelevance problem; e.g., knowing only that Tweety is a bird (b 1) and is red (r 1), we can still deduce that Tweety flies since Σ ∪ {(b 1), (r 1)} ⊨π (f 1/3) with 1/3 > Inc(Σ ∪ {(b 1), (r 1)}) = 0. This is not surprising: ⊨π satisfies rational monotony.
N.B.2: It is also possible to accommodate hard rules, i.e., rules without exceptions, in our framework. Indeed a hard rule p ⇒ q is equivalent to the constraint ∏(p ∧ ¬q) = 0. See Benferhat [1] for details.
However, as an expanded version of the example shows, there are cases where we are not able to deduce expected conclusions. Let us add to our set of defaults a fourth default expressing that "birds have feet", noted b → ft. As we can check below, we cannot deduce that a penguin has feet. Indeed the set of defaults {b → f, b → ft, p → b, p → ¬f} leads to the following possibilistic partition of Ω in a 3-level stratification:
E1 = {ω0: ¬p ∧ ¬b ∧ ¬f ∧ ¬ft, ω1: ¬p ∧ ¬b ∧ ¬f ∧ ft, ω2: ¬p ∧ ¬b ∧ f ∧ ¬ft, ω3: ¬p ∧ ¬b ∧ f ∧ ft, ω4: ¬p ∧ b ∧ f ∧ ft}
E2 = {ω5: ¬p ∧ b ∧ ¬f ∧ ¬ft, ω6: ¬p ∧ b ∧ ¬f ∧ ft, ω7: ¬p ∧ b ∧ f ∧ ¬ft, ω8: p ∧ b ∧ ¬f ∧ ft, ω9: p ∧ b ∧ ¬f ∧ ¬ft}
E3 = {ω10: p ∧ ¬b ∧ ¬f ∧ ¬ft, ω11: p ∧ ¬b ∧ ¬f ∧ ft, ω12: p ∧ ¬b ∧ f ∧ ¬ft, ω13: p ∧ ¬b ∧ f ∧ ft, ω14: p ∧ b ∧ f ∧ ¬ft, ω15: p ∧ b ∧ f ∧ ft}.
It can be easily seen that it will not be possible to deduce that a penguin has feet, since the two preferred models where penguin is true are ω8, where ft is true, and ω9, where ft is false. The fact that penguins form a non-typical subclass of birds, due to the failure of the fly property, leads to the assumption that this subclass is exceptional in other respects as well. However, we would like to be able to draw the conclusion that penguins have feet, since intuitively this has nothing to do with the failure of the fly property. A natural idea is then to try to express some kind of independence between the fact of having feet and the fact of flying, for birds.

3 IN SEARCH OF INDEPENDENCE

3.1 Non-Interactivity in Possibility Theory

In possibility theory, there exists a form of independence between variables, called non-interactivity, first introduced in [21]. Given two normal possibility distributions πX and πY restricting the possible values of two variables X and Y on their respective domains ΩX and ΩY, the two variables X and Y are said to be non-interactive iff the possibility distribution restricting the pair (X,Y) is given by π(X,Y)(ω,ω') = min(πX(ω), πY(ω')). In this case π(X,Y) can be rebuilt from its projections, since maxω' π(X,Y)(ω,ω') = πX(ω) and maxω π(X,Y)(ω,ω') = πY(ω'). Note that π(X,Y) is the largest (i.e., least specific) possibility distribution obeying the constraints πX ≥ π(X,Y) and πY ≥ π(X,Y). What is expressed is that X and Y are logically independent, in the sense that preferences can be expressed independently on X and Y. It corresponds to a lack of information on the possible links between X and Y, since π(X,Y) ≤ min(πX, πY) in any case. For instance, consider the particular case where ΩX = {p,¬p} and ΩY = {q,¬q}; non-interactivity between the variables means altogether ∏(p ∧ q) = min(∏(p),∏(q)); ∏(¬p ∧ q) = min(∏(¬p),∏(q)); ∏(p ∧ ¬q) = min(∏(p),∏(¬q)); ∏(¬p ∧ ¬q) = min(∏(¬p),∏(¬q)). Since we have ∏(p) = ∏((p ∧ q) ∨ (p ∧ ¬q)) = max(∏(p ∧ q), ∏(p ∧ ¬q)), etc., the only solutions are such that
∃(x,y) ∈ {p,¬p} × {q,¬q} ∪ {q,¬q} × {p,¬p}, ∏(x ∧ y) = ∏(x ∧ ¬y) ≤ min(∏(¬x ∧ y), ∏(¬x ∧ ¬y)) and max(∏(¬x ∧ y), ∏(¬x ∧ ¬y)) = 1.
This leaves a lot of freedom, and we may for instance be in a state of total ignorance about one of the variables, say Y (∏(q) = ∏(¬q) = 1), while the other variable is either somewhat certainly true (∏(p) = 1, ∏(¬p) = α) or somewhat certainly false (∏(p) = α, ∏(¬p) = 1). This kind of independence is too closely related to the idea of lack of information to be what we need to express, which is, in our case, a supplementary piece of information.
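A minimal numeric illustration of non-interactivity (the values are hypothetical): the joint distribution built by min is the least specific one whose projections give back the two normalized marginals:

```python
# Sketch: non-interactive joint distribution pi_(X,Y) = min(pi_X, pi_Y).
pi_X = {"p": 1.0, "not p": 0.4}   # X somewhat certainly true
pi_Y = {"q": 1.0, "not q": 1.0}   # total ignorance about Y

joint = {(x, y): min(a, b)
         for x, a in pi_X.items() for y, b in pi_Y.items()}

# The projections (maxima over the other variable) recover the marginals:
proj_X = {x: max(joint[x, y] for y in pi_Y) for x in pi_X}
proj_Y = {y: max(joint[x, y] for x in pi_X) for y in pi_Y}
print(proj_X == pi_X, proj_Y == pi_Y)  # True True
```

This mirrors the observation above: total ignorance on Y coexists with partial certainty on X, which is why non-interactivity conveys an absence of information rather than the positive independence information we are after.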
3.2 Independence Based on Conditioning

Independence is related, in the probabilistic framework, to the idea of conditioning. A purely ordinal notion of conditioning can be defined for possibility and necessity measures by means of an equation similar to Bayesian conditioning:
∏(p ∧ q) = min(∏(q|p), ∏(p))  (1)
when ∏(p) > 0. ∏(q|p) is defined as the greatest solution to this equation, in accordance with the minimum specificity principle. It leads to
∏(q|p) = 1 if ∏(p ∧ q) = ∏(p); ∏(q|p) = ∏(p ∧ q) otherwise,
when ∏(p) > 0. If ∏(p) = 0, then ∏(q|p) = 1, ∀q ≠ ⊥. The conditional necessity measure is simply defined as N(q|p) = 1 − ∏(¬q|p). This notion of conditioning contrasts with Dempster's rule of conditioning; the latter is obtained when min is changed into product in equation (1), viewing a possibility measure as a special case of Shafer [18]'s plausibility function. Note that conditional necessity enables a simple expression of the possibilistic entailment, since [2]:
p ⊨π q iff ∏(p ∧ q) > ∏(p ∧ ¬q) iff N(q|p) > 0.
A natural way of expressing independence in terms of conditional necessity is then the following. Accepting q is not affected by r iff
N(q) = N(q|r) > 0.  (A)
We restrict ourselves to the case where N(q) > 0, since when N(q) = 0 and N(¬q) > 0 it is enough to consider ¬q. If N(q) = N(¬q) = 0, then q does not appear in the knowledge base (it is unknown), and in this paper we consider only independence between facts already present in the knowledge base. Note that (A) can be written ∏(¬q) = ∏(¬q|r) < 1. It is not related to the equality ∏(q) = ∏(q|r), which in turn entails ∏(r ∧ q) = min(∏(q), ∏(r)); the converse does not hold. (A) also implies that ∏(r ∧ q) = min(∏(q), ∏(r)). It is easy to verify that (A) is equivalent to the set of three conditions
∏(q) > ∏(¬q); ∏(q ∧ r) > ∏(¬q ∧ r); ∏(r ∧ ¬q) ≥ ∏(¬r ∧ ¬q).
It means that "generally q is true" and "generally, if r is true then q is true", but also that it is forbidden to claim that when q is false then r is false. The latter condition appears too restrictive in the setting of exception-tolerant generic rules, since claiming that r does not affect our acceptance of q should leave us free to accept r or not when q is false. Moreover, the third condition corresponds to the negation of a default rule (¬(¬q → ¬r)), something we cannot handle yet in the setting of possibilistic logic, since it is not a constraint of the same nature as before. A less restrictive condition of independence, which does not require an extended representational power, is the following: r does not question the acceptance of q iff
N(q) > 0 ⇒ N(q|r) > 0.  (B)
Clearly (A) entails (B). Note that (B) is sensitive to negation, i.e., we cannot change r into ¬r in (B), contrary to probabilistic independence. Similarly, q and r do not play symmetric roles, as is the case in probability theory, i.e., (B) is not equivalent to N(r) > 0 ⇒ N(r|q) > 0. Hence if we want to express that q is accepted whether r is true or false, we have to explicitly state that N(q) > 0 ⇒ N(q|r) > 0 and N(q|¬r) > 0. Note that this negation-insensitive definition can also be expressed using an equivalence instead of ⇒, since it is always true that N(q|r) > 0 and N(q|¬r) > 0 imply N(q) > 0. The form of independence expressed by (B) can be conditioned on a given context, say p. It leads to the definition: q is certain in context p independently of r iff
N(q|p) > 0 ⇒ N(q|p ∧ r) > 0.  (CB)
Condition (CB) is thus equivalent to
∏(q ∧ p) > ∏(¬q ∧ p) ⇒ ∏(q ∧ p ∧ r) > ∏(¬q ∧ p ∧ r).
In other words, assuming that the default p → q is independent of the truth of r just amounts to adding to p → q a more specific default, namely p ∧ r → q. Two defaults p ∧ r → q and p ∧ ¬r → q must be added if p → q is claimed to hold regardless of the truth or falsity of r. Let us first consider two simple examples before going back to our penguin example. Let our knowledge base be made of the two defaults p → q and p → r, corresponding to the graph of Figure 1 (arrows from p to q and from p to r). Knowing that p is true, we can derive q and r. Knowing p and ¬r, we would like to still deduce q, if q is known not to depend on ¬r in the context p. However, while we do conclude q from p and r, we cannot conclude q from p and ¬r using the results of Section 2. Thus, applying (CB) to p → q and ¬r leads to adding the constraint ∏(q ∧ p ∧ ¬r) > ∏(¬q ∧ p ∧ ¬r). By symmetry, applying (CB) to p → r with respect to ¬q leads to adding ∏(r ∧ p ∧ ¬q) > ∏(¬r ∧ p ∧ ¬q). Altogether, the principle of minimum specificity leads to the refined ordering ∏(p ∧ q ∧ r) > ∏(p ∧ ¬q ∧ r) = ∏(p ∧ q ∧ ¬r) > ∏(p ∧ ¬q ∧ ¬r). Then clearly, knowing p and ¬r, we still conclude q, since the interpretation which makes p ∧ q ∧ ¬r true is strictly preferred to the one which makes p ∧ ¬q ∧ ¬r true. Now consider the set of defaults {p → q, q → r}, corresponding to the graph of Figure 2 (the chain from p to q to r). The problem here is that the approach of Section 2 does not allow us to conclude q in the presence of p and ¬r. It can be easily seen that applying (CB) to p → q with respect to ¬r, i.e., adding ∏(p ∧ q ∧ ¬r) > ∏(p ∧ ¬q ∧ ¬r), is sufficient to reach this objective.
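The effect of the two (CB) constraints on the first example can be replayed with the minimum-specificity procedure of Section 2.2; a hypothetical Python sketch, where worlds are named by the atoms among p, q, r that they satisfy:

```python
# Sketch: refining the ranking of {p -> q, p -> r} with the two (CB)
# constraints, using the minimum-specificity procedure of Section 2.2.
# A constraint (L, R) encodes: max of pi over L > max of pi over R.
def rank(worlds, constraints):
    worlds = set(worlds)
    cs = [(set(l), set(r)) for l, r in constraints]
    out = []
    while worlds:
        level = {w for w in worlds if all(w not in r for _, r in cs)}
        out.append(level)
        worlds -= level
        cs = [(l, r) for l, r in cs if not (l & level)]
    return out

W = ["pqr", "pq", "pr", "p", "qr", "q", "r", ""]   # "" = all atoms false
base = [({"pqr", "pq"}, {"pr", "p"}),   # p -> q
        ({"pqr", "pr"}, {"pq", "p"})]   # p -> r
cb   = [({"pq"}, {"p"}),   # (CB): q accepted in context p even given not-r
        ({"pr"}, {"p"})]   # (CB): r accepted in context p even given not-q

print(len(rank(W, base)))       # 2 levels: worlds pq, pr, p end up tied
print(len(rank(W, base + cb)))  # 3 levels: world p drops below pq and pr
```

The refined three-level ranking is exactly the ordering ∏(p ∧ q ∧ r) > ∏(p ∧ ¬q ∧ r) = ∏(p ∧ q ∧ ¬r) > ∏(p ∧ ¬q ∧ ¬r) stated above.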
Finally, let us go back to our penguin example, which corresponds to Figure 3. In order to overcome the blocking of property inheritance, we can state that in the context b, the property ft does not depend on p, i.e., add the default b ∧ p → ft, i.e., ∏(b ∧ p ∧ ft) > ∏(b ∧ p ∧ ¬ft), which reads max(π(ω8), π(ω15)) > max(π(ω9), π(ω14)) and comes down to π(ω8) > π(ω9), i.e., ω9 is put in E3. The same conclusion could be reached by requiring that in the context b, ft is independent of ¬f, since this leads to requiring ∏(b ∧ ft ∧ ¬f) > ∏(b ∧ ¬ft ∧ ¬f), i.e., max(π(ω8), π(ω6)) > max(π(ω5), π(ω9)). This constraint, along with the principle of minimum specificity, leads to putting both ω5 and ω9 in E3. Clearly this constraint also leads us to conclude that penguins have feet.
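The claim that adding b ∧ p → ft restores the desired conclusion can be verified mechanically; a hypothetical Python sketch re-ranking the sixteen worlds of the extended example with the Section 2.2 procedure:

```python
# Sketch: checking that the added default b & p -> ft lets us conclude
# that penguins have feet (illustrative, reusing the Section 2.2 procedure).
from itertools import product

WORLDS = list(product([False, True], repeat=4))   # tuples (p, b, f, ft)

def models(pred):
    return {w for w in WORLDS if pred(*w)}

def default(lhs, rhs):
    """Constraint Pi(lhs & rhs) > Pi(lhs & not rhs), as (L, R) world sets."""
    return (models(lambda *w: lhs(*w) and rhs(*w)),
            models(lambda *w: lhs(*w) and not rhs(*w)))

def rank(worlds, cs):
    worlds, out = set(worlds), []
    while worlds:
        level = {w for w in worlds if all(w not in r for _, r in cs)}
        out.append(level)
        worlds -= level
        cs = [(l, r) for l, r in cs if not (l & level)]
    return out

P  = lambda p, b, f, ft: p
B  = lambda p, b, f, ft: b
F  = lambda p, b, f, ft: f
FT = lambda p, b, f, ft: ft

defaults = [default(B, F), default(B, FT), default(P, B),
            default(P, lambda *w: not F(*w)),
            default(lambda *w: P(*w) and B(*w), FT)]  # the added b & p -> ft

levels = rank(WORLDS, defaults)
pi = {w: len(levels) - i for i, lev in enumerate(levels) for w in lev}
Pi = lambda pred: max(pi[w] for w in models(pred))

# Penguins have feet: Pi(p & ft) > Pi(p & not ft)
print(Pi(lambda *w: P(*w) and FT(*w)) > Pi(lambda *w: P(*w) and not FT(*w)))
# -> True
```

With the extra constraint, ω9 (p ∧ b ∧ ¬f ∧ ¬ft) indeed drops to the lowest stratum, so the only preferred penguin world has ft true.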
4 CONCLUSION AND RELATED WORKS

We have seen that the proposed notion of independence amounts to adding new constraints to a set of constraints representing defaults. Since these new constraints are equivalent to the expression of new and more specific defaults, they are of the same type as the others, and the procedure for building the least specific possibility distribution compatible with the augmented set of constraints, as well as the possibilistic logic machinery, still apply. However, a question of interest will be to avoid, in a given configuration of defaults, independence assumptions leading to an inconsistent set of constraints. Another question is whether it is possible to encode the independence relations in graphical structures, such that the new defaults could be generated by a topological check on the graph drawn from the conditional knowledge base. Several authors have recently considered independence outside the probabilistic framework. Goldszmidt and Pearl [11] have tried to use Bayesian network methods in the framework of Spohn functions. Contrary to the graphs used here, their nodes represent propositional variables, not literals; moreover, these authors focus on acyclic networks having a causal flavor. Clearly, in conditional knowledge bases, those assumptions should not be required. Fonck [10] has thoroughly investigated the counterpart of Bayesian networks in possibility theory, both at the theoretical level (graphoïd and semi-graphoïd structures for possibilistic independence) and at the algorithmic level; she uses a symmetrized version of (A) for defining independence. Studeny [20] proposes a comparative overview of the properties of conditional independence in various calculi. Fariñas del Cerro and Herzig [9] study notions of independence in connection with updating problems; their notion of interference between formulas is negation-insensitive and enables conditional logics to be enriched with a frame axiom. More recently, these authors [8] have investigated several notions of independence that can be expressed in the setting of possibility theory (especially the non-interactivity of Section 3.1, and ∏(q|p) = ∏(q)); their idea is to relate independence to revision and contraction of knowledge bases. Lastly, Lehmann [13]'s current investigations are very close to ours in their intent, since he proposes to express pieces of information of the form "in the context p, the truth of r cannot undermine my belief in q", with a view to generalizing his "rational closure" construction [14]. He insists on a conditional contraposition property (which in our model reads: N(q|p) > 0 ⇒ N(q|p ∧ r) > 0 entails N(¬r|p) > 0 ⇒ N(¬r|p ∧ ¬q) > 0) that is forbidden by (A) and enabled by (B). The relationships between the above works and the proposal made here should be laid bare in the future.

ACKNOWLEDGEMENTS

This work has been partially supported by the European ESPRIT Basic Research Action n° 6156 entitled "Defeasible Reasoning and Uncertainty Management Systems (DRUMS-II)".

REFERENCES

[1] S. Benferhat, 'Handling hard rules and default rules in possibilistic logic', Proc. 5th IPMU'94, Paris, July 4-8, 1994.
[2] S. Benferhat, D. Dubois and H. Prade, 'Representing default rules in possibilistic logic', Proc. KR'92, Cambridge, MA, 1992, 673-684.
[3] D. Dubois, 'Belief structures, possibility theory and decomposable confidence measures on finite sets', Comput. Artif. Intell., 5(5), 403-416, (1986).
[4] D. Dubois, J. Lang and H. Prade, 'Possibilistic logic', in: Handbook of Logic in Artificial Intelligence and Logic Programming (D.M. Gabbay et al., eds.), Oxford University Press, 1994, 439-513.
[5] D. Dubois and H. Prade, 'Necessity measures and the resolution principle', IEEE Trans. on Systems, Man & Cyber., 17, 474-478, (1987).
[6] D. Dubois and H. Prade, 'Epistemic entrenchment and possibilistic logic', Artificial Intelligence, 50, 223-239, (1991).
[7] D. Dubois and H. Prade, 'Possibilistic logic, preferential models, non-monotonicity and related issues', Proc. IJCAI'91, Sydney, 1991, 419-424.
[8] L. Fariñas del Cerro and A. Herzig, 'Conditional possibility and dependence', Proc. 5th IPMU'94, Paris, July 4-8, 1994.
[9] L. Fariñas del Cerro and A. Herzig, 'Interference logic = conditional logic + frame axiom', Int. J. Intel. Syst., 9, 119-130, (1994).
[10] P. Fonck, Réseaux d'inférence pour le raisonnement possibiliste. Dissertation, Université de Liège, Belgium, 1993.
[11] M. Goldszmidt and J. Pearl, 'Rank-based systems', Proc. KR'92, Cambridge, MA, 1992, 661-672.
[12] S. Kraus, D. Lehmann and M. Magidor, 'Nonmonotonic reasoning, preferential models and cumulative logics', Artificial Intelligence, 44, 167-207, (1990).
[13] D. Lehmann, Representing commonsense information. Personal communication, 1993.
[14] D. Lehmann and M. Magidor, 'What does a conditional knowledge base entail?', Artificial Intelligence, 55(1), 1-60, (1992).
[15] D.K. Lewis, 'Counterfactuals and comparative possibility', J. Philosophical Logic, 2, (1973).
[16] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[17] J. Pearl, 'System Z: A natural ordering of defaults with tractable applications to default reasoning', Proc. TARK'90, Morgan Kaufmann, 1990, 121-135.
[18] G. Shafer, A Mathematical Theory of Evidence. Princeton University Press, Princeton, NJ, 1976.
[19] Y. Shoham, Reasoning About Change. MIT Press, Cambridge, MA, 1988.
[20] M. Studeny, 'Formal properties of conditional independence in different calculi of AI', Proc. ECSQARU'93, LNCS 747, Springer Verlag, Berlin, 1993, 341-348.
[21] L.A. Zadeh, 'The concept of a linguistic variable and its application to approximate reasoning', Information Sciences, 8, 199-249, (1975).
[22] L.A. Zadeh, 'Fuzzy sets as a basis for a theory of possibility', Fuzzy Sets and Systems, 1, 3-28, (1978).