Implicit Degree of Support for Finite Lower-Upper Conditional Probabilities Extensions

Andrea Capotorti
Dip. di Matematica e Informatica, Università di Perugia - Italy

Maroussa Zagoraiou
Dip. di Scienze Statistiche "Paolo Fortunati", Università di Bologna - Italy
[email protected] [email protected]
Abstract

In this paper we propose a measure for the implicit degree of support for coherent extensions of probabilistic models based on partial conditional lower-upper assessments. This degree of support is induced by the different extension bounds that arise operationally in automated computational inference procedures. These bounds are induced by the "extreme" distributions compatible with an initial assessment, i.e. those that strictly attain at least one of the constraints imposed by the modeler. The minimum and the maximum of these extension bounds determine the so-called "least commitment" coherent extension interval, while the intermediate values can be used to grade the support of the initial assessment for tighter intervals. The appropriateness of the degree of support is determined by the coherence of particular sub-intervals of the wider "least commitment" coherent extension range. The proposed degree of support can be helpful in finding appropriate extension bounds whenever the standard procedure results in bounds that are too wide to be useful.

Keywords: Coherent lower-upper conditional probability assessments, inference, degree of support for conditional probability distributions.
1 Introduction
During recent decades the use of imprecise models to deal with uncertainty has increased dramatically. The value of these developments has been most striking in problems where the information at hand is not detailed enough to allow us to adopt standard statistical tools. Initially outlined in the pioneering work of de Finetti [8] and later by Walley [11], imprecise models have by now been applied to a wide range of subjects. (For an exhaustive overview, refer to recent ISIPTA symposia [9].) Of particular interest are coherent lower-upper conditional probability assessments. These have been developed in the work started in [5] and subsequently described fully in the book by Coletti & Scozzafava [7]. Their relevance derives both from their flexibility and from their potential to incorporate several different approaches to uncertainty. Nonetheless, when these procedures are used in their full generality they can lead to conclusions that are too vague to be useful. This can happen especially when they are applied to problems with really scarce information. Hence, in the present paper we propose a systematic procedure to shrink inference conclusions that are identified by extension bounds. Our result relies on further analysis of the coherence characterization of an agreeing (compatible) class of conditional probability distributions [5]. Within the class of compatible distributions are the so-called "extreme" distributions, i.e., those that strictly attain at least one of the constraints imposed by the
initial model. Starting from an assessment of n conditional lower-upper probabilities, the extreme distributions number no more than 2n, one for each specific bound. Each of these extreme distributions produces its own extension interval of conditional probability values for a new inferential target (conditional event). The aforementioned characterization theorem uses the various extension intervals generated by the extreme distributions to build the final coherent extension interval via their convex combination. The resulting coherent interval represents the bounds for the conditional probability of the inferential target induced by the initial full class of agreeing distributions. It is sometimes named either "least commitment" or "natural" extension, because it is based only on the initial assessment and is obtained by coherence alone, without any additional consideration or imposition. Sometimes this general procedure produces very wide intervals, even the vague extension [0, 1] that expresses the absence of probabilistic influence of the initial model on the inferential target. In a previous paper [4] we proposed a coherent shrinkage procedure for the case where the different extension intervals generated by the extreme distributions have a non-empty intersection. In the present paper we complete the reasoning by proposing a coherent shrinkage of the general extension interval also in the case where the different extension intervals have an empty intersection. The difference is that now we find an interval that guarantees the coherence of any enlargement, whereas in our previous argument we found that any sub-interval of the "core" was coherent. The detection of surely coherent bounds for the inferential target that reduce the vague intervals generated by the "least commitment" procedure can be a helpful support in a decision process. Deeper analysis and the use of reasonable assumptions remain the most valid tools to reduce vagueness; see for example [1]. However, the procedure we propose here can provide support that makes explicit information that is
otherwise hidden inside the initial class of agreeing distributions. It is also an "objective" procedure, in the sense that the result is induced only by the initial model, without any external influence. Nonetheless, our procedure requires thoughtful use as a tool: its meaningfulness and appropriateness depend on the specific application, and it cannot be adopted systematically and uncritically. Apart from the coherence, the different extension intervals induced by the extreme distributions naturally produce a "degree of support" on any sub-interval of the most general one. We propose a measure of degree of support that is based on "how many" extreme distributions are compatible with the proposed range. The rest of the paper is organized as follows. In the next three subsections we give the basic notions about inference through coherent lower-upper conditional probability assessments. Although everything is already fully described in [7], we report what suffices to make the contribution as self-contained as possible. Section 2 contains the main results about the coherent choice of sub-intervals of the "natural extension" and the "degree of support" that naturally derives from them. We conclude with Section 3, where a medical application illustrates a potential use of the results obtained.
1.1 Preliminaries
For the sake of simplicity we will use conditional and unconditional events, but everything can be easily generalized to (finite) random variables, conditional or not. (See for example what has been done with conditional previsions in [3].) The initial information, usually a knowledge and/or rule base, is represented through a conditional lower-upper probability assessment (E, LC, p). The first component of an assessment is a generic list of n conditional events E = (E_1|H_1, ..., E_n|H_n). Note that some E_i|H_i could actually be unconditional, and in such a case H_i coincides with the sure event Ω. In the following we will also refer to the set
U_E = {E_1, ..., E_n, H_1, ..., H_n}

of unconditional events appearing as components of the elements of E. Incompleteness of the information can have two origins: firstly, the E_i's might not describe all possible combinations of situations; secondly, the different circumstances H_i might overlap or might not cover all possibilities. To evaluate this, it is crucial to know the extent of the logical relations existing among the events in U_E. In general, the list LC of the logical constraints among U_E appears as the second component of an assessment. Such relationships LC represent constraints that limit the possible atoms¹ in the problem. The atoms A_r, with r = 1, ..., a ≤ 2^{2n}, are elementary events (i.e. they form a partition) obtained by full combinations of affirmed or negated events in U_E. Note that in the three-valued logic associated with conditional events, each element E_i|H_i ∈ E generates a partition with only three elements {E_iH_i, ¬E_iH_i, ¬H_i}. Thus, the couples of atoms containing E_i¬H_i and ¬E_i¬H_i can be joined together, leading to an expression for the atoms as

A_r = \widehat{E_1|H_1} ∧ ... ∧ \widehat{E_n|H_n},    (1)

where each \widehat{E_i|H_i} varies among E_iH_i, ¬E_iH_i, or ¬H_i. This actually reduces the upper bound for the number of atoms to 3^n. Moreover, as is usual in conditional contexts (see [6] and [7, §11.3]), we will refer only to atoms spanned by U_E and inside the disjunction ∨_{i=1}^{n} H_i: only elementary situations contemplated in some of the considered hypotheses must be involved to check the consistency of the assessment². Hence the proper upper bound for the number of atoms a is 3^n − 1.

¹ In some disciplines atoms are called possible worlds, while in [10] they are referred to as the realm.
² For those familiar with Walley's notation, a similar motivation is used to introduce the consistency property of Avoiding Uniform Loss rather than Avoiding Sure Loss for conditional previsions. See [11, §7.1.3].
In the sequel we will also need to use the characteristic vectors of the events. These are vectors whose components are 1 or 0 depending on whether the corresponding atom implies the event or not. We will denote such vectors with the same letter as the event, but in boldface lower-case. Hence, e_i and h_i denote the characteristic vectors of E_i and H_i, respectively, while their juxtaposition e_ih_i represents the characteristic vector of the conjunction E_iH_i. (For the sake of simplicity, in the following we omit the usual conjunction operator ∧.) Introducing a vector of variables x = (x_1, ..., x_a), where each component x_r is associated with the possible values for the probability of the atom A_r, it is possible to rebuild the possible values of probability for any event in U_E, say E_i, simply by

P(E_i) = Σ_{A_r ⊆ E_i} P(A_r) = e_i · x,    (2)
where · represents the row-column matrix product. Notice that the atoms, and consequently the characteristic vectors, are not part of the conditional probability assessment: they are not asserted directly by the analyst as input to the problem. Rather, they are defined implicitly by the first two components E and LC. Nonetheless, they are important because they are the main operational tool involved in the inferential process. The last component of an assessment is a vector of numerical ranges p = ([lb_1, ub_1], ..., [lb_n, ub_n]). The extreme points of each closed interval [lb_i, ub_i] represent lower and upper bounds for the probability of the corresponding conditional event E_i|H_i. These are usually estimated by expert beliefs, by literature reports or by collected data. Note that some of the numerical ranges [lb_i, ub_i] may degenerate to a single value p_i, representing a precise assessment.
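As a concrete illustration (a sketch of ours, not part of the paper's formalism; the three-symbol coding and all function names are our own choices), the following Python fragment enumerates the candidate atoms of equation (1) for small n and evaluates event probabilities via (2):

```python
# Sketch (ours): atoms as full combinations of the three-valued outcomes
# E_i H_i, (not E_i) H_i, (not H_i) of each conditional event, as in (1).
from itertools import product

import numpy as np

# Outcome codes for each E_i|H_i: 'EH' = E_i H_i, 'eH' = (not E_i) H_i, 'h' = not H_i
OUTCOMES = ("EH", "eH", "h")

def atoms(n):
    """All candidate atoms for n conditional events, dropping the one
    outside the disjunction of the H_i's (all components equal 'h'),
    hence at most 3**n - 1 atoms. Logical constraints in LC would
    remove further atoms; none are imposed in this sketch."""
    return [a for a in product(OUTCOMES, repeat=n) if any(c != "h" for c in a)]

def char_vectors(ats, i):
    """Characteristic vectors e_i h_i and h_i of E_i H_i and H_i:
    component r is 1 iff atom A_r implies the event."""
    eh = np.array([1 if a[i] == "EH" else 0 for a in ats])
    h = np.array([1 if a[i] != "h" else 0 for a in ats])
    return eh, h

# Example with n = 2 conditional events: 3**2 - 1 = 8 atoms.
ats = atoms(2)
e0h0, h0 = char_vectors(ats, 0)
x = np.full(len(ats), 1.0 / len(ats))   # a candidate distribution over the atoms
print(e0h0 @ x, h0 @ x)                 # P(E_0 H_0) and P(H_0), as in (2)
```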
1.2 Coherence
If we cannot adopt a unique probabilistic model for the assessment (E, LC, p), it is still possible to search for the class P of full conditional probability distributions that are compatible with the assessments we can make. Various properties can be required of P: in the present paper we look for a class such that p coincides with the convex envelope of P restricted to E, i.e. such that, for all E_i|H_i ∈ E,

∀ P ∈ P:  lb_i ≤ P(E_i|H_i) ≤ ub_i,    (3)

∃ P', P'' ∈ P  s.t.  P'(E_i|H_i) = lb_i,  P''(E_i|H_i) = ub_i.    (4)
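For a finite family of candidate distributions, conditions (3) and (4) can be checked directly. A minimal sketch of ours (names and the tolerance are our own choices), assuming each member of the family is given simply as the vector of its values P(E_i|H_i):

```python
# Minimal sketch (ours): check that a finite family of conditional
# probability values satisfies (3) (all values inside the bounds) and
# (4) (every bound actually attained by some member of the family).
def agrees(family, bounds, tol=1e-12):
    """family: list of sequences, family[k][i] = P_k(E_i|H_i);
    bounds: list of pairs (lb_i, ub_i)."""
    for i, (lb, ub) in enumerate(bounds):
        vals = [P[i] for P in family]
        inside = all(lb - tol <= v <= ub + tol for v in vals)            # (3)
        attained = min(vals) <= lb + tol and max(vals) >= ub - tol       # (4)
        if not (inside and attained):
            return False
    return True
```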
Practically speaking, the third component p of the assessment represents a set of numerical constraints that all the admissible models (the conditional probabilities P ∈ P) must satisfy (inequalities (3)). Such constraints must be tight enough that their bounds are actually reached by some of the admissible models (equalities (4)); that is to say, there are no redundant constraints. In the following, such a class P of probability distributions will be said to agree with the assessment (E, LC, p). The existence of the class guarantees the coherence of the assessment. Those distributions that attain at least one of the equalities in (4) will be named in the following the "extreme" distributions of P. As has already been stated in [5], and in particular in [7, §15.2], the existence of P can be checked operationally by the satisfiability of a class of sequences of linear systems {S_α^j}, with j = 1, ..., 2n and α = 1, ..., α_j, having a common structure like

(e_jh_j − p_jh_j) · x^α = 0    if h_j · x^{α−1} = 0
(e_kh_k − lb_kh_k) · x^α ≥ 0   ∀ E_k|H_k ≠ E_j|H_j
(e_kh_k − ub_kh_k) · x^α ≤ 0   s.t. h_k · x^{α−1} = 0
x^α ≥ 0,  x^α ≠ 0    (5)
where 0 is the null vector, E_j|H_j equals E_i|H_i for both the consecutive odd and even indexes j = 2i − 1 and j = 2i (or, conversely, i = ⌊(j+1)/2⌋), while the value p_j that appears in the first equation equals lb_i for the odd indexes j = 2i − 1 and ub_i for the even ones j = 2i. Hence, to each event E_i|H_i ∈ E there are associated two sequences of linear systems S_α^{2i−1} and S_α^{2i}. This ensures that, according
to (4), the bounds lb_i and ub_i can actually be attained. Of course, whenever the bounds degenerate to a single value p_i, the two sequences coincide. Hence, it could be said that the existence of P is ensured by the existence of its extreme distributions. Note that sequences of linear systems are necessary to allow the conditioning events H_i to have induced probabilities that are not bounded away from 0. This procedure partitions E into different zero layers³ indexed by α. Such linear systems reflect an attempt to determine unconditional probability distributions through which to construct the agreeing class P (i.e. a set of conditional probabilities satisfying (3) and (4)). The set of all possible solutions to the class of sequences of linear systems {S_α^j} implicitly induces the sought class P. Hence, if such a set of solutions is not empty, the assessment (E, LC, p) is said to be coherent, otherwise not. Note that this coherence notion is almost the same as that usually adopted in imprecise probability frameworks, i.e. p coincides with its natural extension (see [11]). The only difference occurs in the proper treatment of conditional events E_i|H_i whose conditioning event H_i can have probability not bounded away from zero.

³ Here we report only basic notions; for a deeper exposition of this aspect refer again to [7], in particular to §12 and §15.
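Operationally, the feasibility of one system of the form (5) can be checked with an off-the-shelf LP solver. The following is a simplified sketch of ours: it handles only the first zero layer (α = 1), replacing the condition x ≠ 0 by the normalization Σx = 1, so the sequence mechanism for conditioning events of zero probability is deliberately omitted; all matrix names are our own.

```python
# Simplified sketch (ours): feasibility of the first layer of a system S^j
# in (5), ignoring deeper zero layers. Rows of EH and H are the
# characteristic vectors e_i h_i and h_i over the a atoms; lb, ub the
# assessed bounds; pj is the bound (lb_i or ub_i) selected by index j.
import numpy as np
from scipy.optimize import linprog

def first_layer_feasible(EH, H, lb, ub, j, pj):
    """Is there some x >= 0 with sum(x) = 1 satisfying
    (e_j h_j - pj h_j) . x = 0 and the lb/ub inequalities of (5)?"""
    n, a = EH.shape
    # Inequalities rewritten as A_ub . x <= 0; the j-th rows are subsumed
    # by the equality below, so keeping them is harmless.
    A_ub = np.vstack([-(EH - lb[:, None] * H),   # (e_k h_k - lb_k h_k) . x >= 0
                      EH - ub[:, None] * H])     # (e_k h_k - ub_k h_k) . x <= 0
    b_ub = np.zeros(2 * n)
    # Equality for the selected bound, plus normalization sum(x) = 1
    # (at the first layer this replaces the weaker condition x != 0).
    A_eq = np.vstack([EH[j] - pj * H[j], np.ones(a)])
    b_eq = np.array([0.0, 1.0])
    res = linprog(c=np.zeros(a), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * a)
    return res.status == 0   # status 0: a feasible solution was found
```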
1.3 Extension
In practical applications, when information comes from different sources, checking the coherence of the assessment (E, LC, p) is a compulsory step. Once coherence has been assured, it is possible to perform inference on any conditional event K|F judged important enough to warrant conclusions in the problem. Generally, K represents some hypothesis to judge whenever there is some evidence F. In this context, inference reduces to computing the coherent extension of p to K|F, obtainable as the closed interval [\underline{P}(K|F), \overline{P}(K|F)]
with \underline{P}(K|F) = min_{P∈P} P(K_*|F_*) and \overline{P}(K|F) = max_{P∈P} P(K^*|F^*), where K_*|F_* and K^*|F^* are the greatest and the smallest conditional events logically dependent on U_E contained in and containing⁴ K|F, respectively. Although this is theoretically simple, from the practical point of view it is more subtle. In fact, following a method similar to that depicted in the previous subsection, we are required to perform sequences of optimizations {O_α^j}. Thanks to the possibility of exploiting zero probabilities and thanks to proper normalization conditions, all the optimization problems in {O_α^j} reduce to linear programs⁵. The number of sequences is the same as that of the linear systems (5): there are two sequences, with j = 2i − 1 and j = 2i, for each conditional event E_i|H_i ∈ E. Hence there are actually "at worst" 2n of them, on account of the already stated observation that the pair of sequences coincides whenever coherence requires a precise value for p_i ∈ p. Each sequence is actually composed of two programs, one pertaining to K_*|F_* and the other to K^*|F^*, that end with a pair of optimal values lb^j_{K|F} and ub^j_{K|F}. These values represent the minimum and the maximum, respectively, for the sought probability P(K|F) under the specific equality constraint in (4) associated with E_j|H_j. Hence they can be thought of as specific extension bounds for P(K|F) related to each extreme distribution of P. Each optimization program consists of two parts. In the first part the goal is to reach the deepest zero-layer of the target K_*|F_* (or of K^*|F^*). This is attained by taking a fictitious objective function, e.g. minimize Ω · x, while the constraint

f_* · x^α = 0    (resp. f^* · x^α = 0)    (6)
is imposed, together with those of (5).

⁴ For the formal definition of inclusion among conditional events refer to [7, §10.2].
⁵ Once more, for a full description of the technique refer to [7, §14.1].
As long as solutions exist, the program continues to pass from one zero-layer to the subsequent one. Finally, either the constraints admit no solution, so that lb^j_{K|F} = 0 (resp. ub^j_{K|F} = 1), or the program passes to the second part, where an actual linear program is performed. This optimization step has as objective function

minimize k_*f_* · x^α    (resp. maximize k^*f^* · x^α)    (7)

under the constraints (5) plus

f_* · x^α = 1    (resp. f^* · x^α = 1)    (8)
instead of (6). The final coherent interval [\underline{P}(K|F), \overline{P}(K|F)] for P(K|F) results from the convex combination of all the intervals [lb^j_{K|F}, ub^j_{K|F}], i.e.

\underline{P}(K|F) = min_{j∈{1,...,2n}} lb^j_{K|F}  and  \overline{P}(K|F) = max_{j∈{1,...,2n}} ub^j_{K|F}.
The main difficulty of such procedures is the usually huge number a of atoms. In fact, it has been shown that already the problem of checking coherence for unconditional precise assessments is NP-complete. Nonetheless, thanks to a smart use of null probabilities and to the notion of locally strong coherence, the complexity problem has been faced in [2]: abstract problems have been solved with O(n³) logical satisfiability tests in place of solving the linear systems and the optimization problems, which have O(3^n) unknowns. Despite such promising results, a systematic study of the complexity of this last procedure remains to be made.
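Schematically, the assembly of the least commitment extension from the per-sequence optima can be rendered as below. This is a sketch of ours; `solve_extension_lp` is a hypothetical helper standing in for a whole sequence of linear programs {O_α^j}, not a function of the paper's software:

```python
# Sketch (ours): combine the per-sequence optimal values into the
# least commitment coherent extension, as in Subsection 1.3.
# solve_extension_lp(j, which) is a hypothetical helper that runs the
# sequence of linear programs O^j_alpha and returns the optimal value
# lb^j_{K|F} (which='min') or ub^j_{K|F} (which='max').

def least_commitment_interval(n, solve_extension_lp):
    lbs, ubs = [], []
    for j in range(1, 2 * n + 1):       # two sequences per assessed E_i|H_i
        lbs.append(solve_extension_lp(j, which="min"))   # lb^j_{K|F}
        ubs.append(solve_extension_lp(j, which="max"))   # ub^j_{K|F}
    # The convex combination of the intervals [lb^j, ub^j] is their hull:
    return min(lbs), max(ubs), list(zip(lbs, ubs))
```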
2 Vague Extensions and their Improvement
As already stated in the previous section, the coherent bounds for a new event K|F,

\underline{P}(K|F) = min_{P∈P} P(K_*|F_*),
\overline{P}(K|F) = max_{P∈P} P(K^*|F^*),    (9)
are obtained by taking into account all the admissible distributions in P. Operationally, they result from the convex combination of all the single optimization bounds {lb^j_{K|F}, ub^j_{K|F}}, with j ∈ {1, ..., 2n}, obtained in the different sequences of linear programs {O_α^j} associated with the extreme distributions of P.
Unfortunately, as has been noticed in some practical applications, the range of values between the bounds in (9) can be large, even reaching the uninformative interval [0, 1]. Hence a procedure that introduces a way to shrink the range of the extension is of interest. Recall that the differences among the optimal values obtained in the different sequences derive from the restrictions (4), applied by each couple of sequences to a different conditional event E_j|H_j ∈ E. It is possible that the values inside the natural extension [\underline{P}(K|F), \overline{P}(K|F)] are not covered by the same number of "extreme" extension sub-intervals [lb^j_{K|F}, ub^j_{K|F}]. This leads to different "degrees of coverage" for the values inside the natural extension, which influence the soundness of any restriction. At any rate, it is not appropriate to introduce a "point-wise" degree of support: imprecise probabilities require reasoning by intervals. In fact, it would be wrong to focus attention on each single value p_{K|F} ∈ [\underline{P}(K|F), \overline{P}(K|F)], because precise values are only thought of as extreme conclusions. Moreover, we are not assured that the single values p_{K|F} are coherent by themselves. On the contrary, we need to introduce a degree of support for any sub-interval [lb*_{K|F}, ub*_{K|F}] ⊂ [\underline{P}(K|F), \overline{P}(K|F)], because these will be the elements used for our goal. It is hence important to select from [\underline{P}(K|F), \overline{P}(K|F)] the "core" of the coherence, i.e. those sub-intervals that by themselves guarantee the coherence of a particular restriction [lb*_{K|F}, ub*_{K|F}]. In a previous work [4] we found that, whenever the intersection of the intervals [lb^j_{K|F}, ub^j_{K|F}] is not empty, the core consists of all the sub-intervals of such a common intersection. This can be rigorously expressed through the following proposition and lemma:

Proposition 1 Let (E, LC, p) be a coherent conditional assessment, K|F ∉ E a new conditional event, and {lb^j_{K|F}, ub^j_{K|F}}_{j=1}^{2n} the set of optimal values obtained by the linear programs {O_α^j}, as described in Subsection 1.3, performed on K|F. If the interval

[lb*_{K|F}, ub*_{K|F}] = ∩_{j=1}^{2n} [lb^j_{K|F}, ub^j_{K|F}]

is not empty, then the conditional assessment (E*, LC*, p*) with

E* = E ∪ {K|F};
LC* = LC ∪ {logical constraints among K, F and U_E};
p* = p ∪ {[lb*_{K|F}, ub*_{K|F}]},

is coherent.

Lemma 1 If (E*, LC*, p*) is a coherent extension restriction obtained as in Proposition 1, then any assessment (E*, LC*, p**), where p** = p ∪ {[lb**_{K|F}, ub**_{K|F}]} with [lb**_{K|F}, ub**_{K|F}] ⊆ [lb*_{K|F}, ub*_{K|F}], is still coherent.

Here we can complete the picture for the case in which the intersection of the intervals [lb^j_{K|F}, ub^j_{K|F}] is empty. We merely report the following proposition; its proof will be reported elsewhere.

Proposition 2 Let (E, LC, p) be a coherent conditional assessment, and K|F ∉ E and {lb^j_{K|F}, ub^j_{K|F}}_{j=1}^{2n} as in Proposition 1. If the interval ∩_{j=1}^{2n} [lb^j_{K|F}, ub^j_{K|F}] is empty then, denoting lb*_{K|F} = min_{j∈{1,...,2n}} ub^j_{K|F} and ub*_{K|F} = max_{j∈{1,...,2n}} lb^j_{K|F}, the conditional assessment (E*, LC*, p*) with

E* = E ∪ {K|F};
LC* = LC ∪ {logical constraints among K, F and U_E};
p* = p ∪ {[lb*_{K|F}, ub*_{K|F}]},

is coherent.

Associated with Proposition 2 we have the following lemma:

Lemma 2 If (E*, LC*, p*) is a coherent extension restriction obtained as in Proposition 2, then any assessment (E*, LC*, p**), where p** = p ∪ {[lb**_{K|F}, ub**_{K|F}]} with [lb**_{K|F}, ub**_{K|F}] ⊇ [lb*_{K|F}, ub*_{K|F}], is still coherent.

Hence, in both cases of empty or non-empty intersection of the [lb^j_{K|F}, ub^j_{K|F}], it is possible to select the reference sub-interval [lb*_{K|F}, ub*_{K|F}] of the coherent extension [\underline{P}(K|F), \overline{P}(K|F)] with

lb*_{K|F} = max_j lb^j_{K|F}   if ∩_j [lb^j_{K|F}, ub^j_{K|F}] ≠ ∅
lb*_{K|F} = min_j ub^j_{K|F}   if ∩_j [lb^j_{K|F}, ub^j_{K|F}] = ∅
                                                              (10)
ub*_{K|F} = min_j ub^j_{K|F}   if ∩_j [lb^j_{K|F}, ub^j_{K|F}] ≠ ∅
ub*_{K|F} = max_j lb^j_{K|F}   if ∩_j [lb^j_{K|F}, ub^j_{K|F}] = ∅

By Lemmas 1 and 2, it is possible to identify the aforementioned "core" of the coherence as

H = {[l, u] : lb*_{K|F} ≤ l ≤ u ≤ ub*_{K|F}}   if the intersection is not empty,
H = {[lb*_{K|F}, ub*_{K|F}]}                    if the intersection is empty.
                                                              (11)

The relevance of the "core" H is that any sub-interval [a, b] of the least commitment coherent extension [\underline{P}(K|F), \overline{P}(K|F)] must contain one of its elements in order to be adopted as a tighter coherent extension for P(K|F). Note that in the case of empty intersection the core H reduces to the single interval [lb*_{K|F}, ub*_{K|F}]. As a result of this understanding, it is possible to introduce a degree of support of the "core" H for any sub-interval [a, b] ⊆ [\underline{P}(K|F), \overline{P}(K|F)]. Such a degree of support will be maximum for the elements of the "core" H, minimum for incoherent extensions, and intermediate for sub-intervals that also contain values outside H. This can be formalized in the following way:

Definition 1 Let I = {[a, b] ⊆ [\underline{P}(K|F), \overline{P}(K|F)]} be the set of sub-intervals of the least commitment coherent extension, with [lb*_{K|F}, ub*_{K|F}] and H defined as in (10) and (11) respectively. Then it is possible to introduce the degree of support function π_H : I → [0, 1] defined as

π_H([a, b]) = 1   if [a, b] ∈ H
π_H([a, b]) = 0   if ∄ h_i ∈ H s.t. h_i ⊂ [a, b]
π_H([a, b]) = 1 − [(b − ub*_{K|F}) I_{ub*_{K|F} ≤ b} + (lb*_{K|F} − a) I_{a ≤ lb*_{K|F}}] / (b − a)   otherwise,

where I_A denotes the usual indicator function.

In this case-by-case definition we have first the full belonging of [a, b] to H, secondly the incoherence of [a, b], and finally a "penalization" of support for those sub-intervals [a, b] that have some part not covered by elements of H. Such penalization increases with the part of [a, b] not supported by H.
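A compact computational rendering of (10) and of Definition 1 could look as follows (a sketch of ours: names are our own, and the core H is handled implicitly through the reference interval [lb*_{K|F}, ub*_{K|F}] and an empty/non-empty flag rather than as an explicit set of sub-intervals):

```python
# Sketch (ours) of the reference interval (10) and the degree of
# support pi_H of Definition 1.

def reference_interval(intervals):
    """intervals: the 2n extreme extension sub-intervals [lb^j, ub^j].
    Returns (lb*, ub*, nonempty) as in (10)."""
    max_lb = max(lb for lb, _ in intervals)
    min_ub = min(ub for _, ub in intervals)
    if max_lb <= min_ub:                 # common intersection not empty
        return max_lb, min_ub, True
    return min_ub, max_lb, False

def degree_of_support(a, b, lb_star, ub_star, nonempty):
    """pi_H([a, b]) as in Definition 1, with H encoded implicitly."""
    if nonempty:
        if lb_star <= a and b <= ub_star:
            return 1.0                   # [a, b] is itself an element of H
        covered = max(a, lb_star) <= min(b, ub_star)  # some h_i of H fits inside [a, b]
    else:
        if (a, b) == (lb_star, ub_star):
            return 1.0                   # here H is the single interval [lb*, ub*]
        covered = a <= lb_star and ub_star <= b
    if not covered:
        return 0.0                       # incoherent restriction: no h_i of H inside [a, b]
    # Penalize the parts of [a, b] falling outside [lb*, ub*]; the max(., 0)
    # terms coincide with the indicator products of Definition 1.
    penalty = max(b - ub_star, 0.0) + max(lb_star - a, 0.0)
    return 1.0 - penalty / (b - a)
```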
This degree of support π_H can be a valid tool for the choice of a particular shrinkage of the extension interval. Of course, such a choice cannot be performed on the basis of π_H alone, but needs further motivations and justifications. Our proposal suggests an objective tool for assessing these further motivations, based solely on the initial assessment and on the differences among the extreme distributions of P.
3 An example
In [4] we reported a medical application of the shrinkage process in the case of non-empty intersection. Here we conclude the paper with an example of a medical application in the case of empty intersection. The example is extremely simplified for lack of space, but it suffices to show the potential of our proposal. Let T denote the event of a positive result of a specific medical test for the detection of an infection, e.g. SARS or TBC. The actual presence of the infection in an individual is denoted by the event I. T and I are logically independent; hence there are no logical relations specified in LC. The initial assessment is reported in Table 1.

Table 1: A simple initial assessment for the reliability of a medical test

  statement   cond. prob. values or bounds
  I           [.005, .010]
  T|I         1.0
  T|¬I        [.001, .005]

The expectation it specifies is a mild incidence of the infection, an almost perfect sensitivity of the test with an expected absence of false negatives, and a high specificity with a small
amount of false positives. Based on these inputs, we can compute⁶ the least commitment coherent expected fraction of positive tests as [\underline{P}(T), \overline{P}(T)] = [.005995, .014500]. Although this interval is already quite tight, it can be "improved" by applying Proposition 2. In fact, we have the following four "extreme" extension sub-intervals, induced by the only two imprecise constraints:

[lb^1_T, ub^1_T] = [.005995, .009975]
[lb^2_T, ub^2_T] = [.010990, .014950]
[lb^3_T, ub^3_T] = [.005995, .010990]
[lb^4_T, ub^4_T] = [.009975, .014950]

Their intersection is empty. Thus, the reference interval reduces to

[lb*_T, ub*_T] = [.009975, .010990].

⁶ All the computations were performed with the software "Check Coherence Interface" developed by the Research Group of the Italian MIUR Cofin Project PAID (PArtial Information and Decision) and freely downloadable from http://www.dipmat.unipg.it/~upkd/paid/software.html
This results from the fact that lb*_T is the minimum of the upper bounds of the individual intervals, while ub*_T is the maximum of the lower bounds. According to (11), this interval coincides with the "core" H: it is the unique sub-interval of [\underline{P}(T), \overline{P}(T)] with maximum degree of support, π_H([lb*_T, ub*_T]) = 1. All sub-intervals [a, b] ⊂ [\underline{P}(T), \overline{P}(T)] that do not contain [lb*_T, ub*_T] are incoherent and have minimum support π_H([a, b]) = 0. Finally, any sub-interval [lb**_T, ub**_T] ⊆ [\underline{P}(T), \overline{P}(T)] that contains [lb*_T, ub*_T] has an intermediate degree of support, with a minimum at π_H([\underline{P}(T), \overline{P}(T)]) = .119342.
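As a numerical check (again a sketch of ours, reusing the `reference_interval` and `degree_of_support` helpers sketched in Section 2), the figures of this example can be reproduced from Table 1: for this two-event problem each extreme sub-interval follows from total probability, P(T) = P(T|I)P(I) + P(T|¬I)(1 − P(I)), with the corresponding bound held fixed:

```python
# Sketch (ours): reproduce the example's figures. First the four extreme
# sub-intervals via total probability, fixing each imprecise assessment
# in turn at one of its bounds; then the reference interval and the
# degree of support, using the helpers of Section 2.
pI, pTnI = (.005, .010), (.001, .005)   # bounds for P(I), P(T|not I); P(T|I) = 1

def pT(p_i, p_t_given_not_i):
    return p_i + p_t_given_not_i * (1.0 - p_i)

intervals = [(pT(v, pTnI[0]), pT(v, pTnI[1])) for v in pI]    # P(I) fixed at lb, ub
intervals += [(pT(pI[0], v), pT(pI[1], v)) for v in pTnI]     # P(T|not I) fixed at lb, ub
# -> [.005995, .009975], [.010990, .014950], [.005995, .010990], [.009975, .014950]

lb_star, ub_star, nonempty = reference_interval(intervals)
print(lb_star, ub_star, nonempty)       # 0.009975 0.01099 False: empty intersection

a, b = .005995, .014500                 # the least commitment extension for P(T)
print(degree_of_support(a, b, lb_star, ub_star, nonempty))    # ~0.119342
```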
References

[1] A. Capotorti, F. Lad, G. Sanfilippo. Reassessing accuracy rates for asbestosis diagnosis using median decision procedures. Submitted to The American Statistician.

[2] A. Capotorti, L. Galli, B. Vantaggi. How to use locally strong coherence in an inferential process based on upper-lower probabilities. Soft Computing, 7(5) (2003) 280–287.

[3] A. Capotorti, T. Paneni. An operational view of coherent conditional previsions. In: S. Benferhat, P. Besnard (eds.), Symbolic and Quantitative Approaches to Reasoning with Uncertainty, ECSQARU 2001, LNAI 2143 (2001) 132–143.

[4] A. Capotorti, M. Zagoraiou. Coherent restrictions of vague conditional lower-upper probability extensions. LNAI 3571 (2005) 750–762. Available online at http://dx.doi.org/10.1007/11518655_63

[5] G. Coletti, R. Scozzafava. Characterization of coherent conditional probabilities as a tool for their assessment and extension. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 4(2) (1996) 103–127.

[6] G. Coletti, R. Scozzafava. Conditioning and inference in intelligent systems. Soft Computing, 3 (1999) 118–130.

[7] G. Coletti, R. Scozzafava. Probabilistic Logic in a Coherent Setting. Series "Trends in Logic", Kluwer, Dordrecht (2002).

[8] B. de Finetti. Teoria della probabilità. Einaudi, Torino (1970). (English translation: Theory of Probability, Vols. 1 and 2. Wiley, Chichester, 1974.)

[9] Proceedings of the International Symposia on Imprecise Probabilities and Their Applications. The International Society for Imprecise Probability Theory and Applications. (Electronic versions available at http://www.sipta.org/isipta/)

[10] F. Lad. Operational Subjective Statistical Methods: A Mathematical, Philosophical, and Historical Introduction. John Wiley, New York (1996).

[11] P. Walley. Statistical Reasoning with Imprecise Probabilities. Chapman and Hall, London (1991).