International Journal of Approximate Reasoning 27 (2001) 27–59

www.elsevier.com/locate/ijar

Information sharing between heterogeneous uncertain reasoning models in a multi-agent environment: a case study ☆

Xudong Luo a,*, Chengqi Zhang b, Ho-fung Leung a

a Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong, China
b School of Computing and Mathematics, Deakin University, Geelong VIC 3217, Australia

Received 1 November 2000; received in revised form 1 February 2001; accepted 1 March 2001

Abstract

The issue of information sharing and exchanging is one of the most important issues in the areas of artificial intelligence and knowledge-based systems, or even in the broader areas of computer and information technology. This paper deals with a special case of this issue by carrying out a case study of information sharing between two well-known heterogeneous uncertain reasoning models: the certainty factor model and the subjective Bayesian method. More precisely, this paper discovers a family of exactly isomorphic transformations between these two uncertain reasoning models. More interestingly, among the isomorphic transformation functions in this family, different ones can handle different degrees to which a domain expert is positive or negative when performing such a transformation task. The direct motivation of the investigation lies in a realistic consideration. In the past, expert systems exploited mainly these two models to deal with uncertainties. In other words, a lot of stand-alone expert systems which use the two uncertain reasoning models are available. If there is a reasonable transformation mechanism between these two uncertain reasoning models, we can use the Internet

☆ The work described in this paper was partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (RGC Ref. No. CUHK4304/98E). The work is also partially supported by the Postdoctoral Fellowship Scheme of the Chinese University of Hong Kong. The paper is an expansion and revision of paper [45].
* Corresponding author.
E-mail addresses: [email protected] (X. Luo), [email protected] (C. Zhang), [email protected] (H.-f. Leung).
0888-613X/01/$ - see front matter © 2001 Elsevier Science Inc. All rights reserved.
PII: S0888-613X(01)00032-9


to couple these pre-existing expert systems together so that the integrated systems are able to exchange and share useful information with each other, thereby improving their performance through cooperation. Also, the issue of transformation between heterogeneous uncertain reasoning models is significant in the research area of multi-agent systems, because different agents in a multi-agent system could employ different expert systems with heterogeneous uncertain reasonings for their action selections, and information sharing and exchanging between different agents is unavoidable. In addition, we make clear the relationship between the certainty factor model and probability theory. © 2001 Elsevier Science Inc. All rights reserved.

Keywords: Multi-agent; Distributed expert system; Knowledge sharing; Uncertainty; Algebra

1. Introduction

The problem-solving ability of expert systems [29] is greatly improved through cooperation among different expert systems in a distributed expert system [51]. Sometimes these different expert systems may use different uncertain reasoning models [92]. In each reasoning model, the uncertainties of propositions take values on a set, and these sets differ between models. For example, the set is the interval [−1, 1] in the certainty factor model [44,52,80] used in the seminal expert system MYCIN [78] for diagnosing bacterial infections, while the set is the interval [0, 1] in the subjective Bayesian method [10] used in another seminal expert system, PROSPECTOR [11], for determining site potential for mineral exploration. So, to achieve cooperation among these expert systems, the first step is to transform the uncertainty of a proposition from one uncertain reasoning model to another if they use different uncertain reasoning models [43,90,91,93,95]; the second step is then to synthesise the transformed results [97,98]. In other words, transformation among different uncertain reasoning models is the foundation for cooperation among heterogeneous expert systems, and so it is a very important and very interesting problem. Recently, a few papers have addressed this topic. They fall into the following two categories.
· Quantitative methods. Zhang and Orlowska [96] showed that the sets of propositional uncertainties in several well-known uncertain reasoning models, with appropriate operators, are semi-groups with individual unit elements. The further work of Zhang [91] used this result to establish transformation criteria based on homomorphisms, and to define transformation functions approximately satisfying these criteria. These functions work well between any two of the uncertain reasoning models used in EMYCIN [52], PROSPECTOR and MYCIN [78]. Hájek [25] also tried to build an isomorphism


between the certainty factor model and the subjective Bayesian method, but he implicitly assumed that in the subjective Bayesian method the unit element is always 0.5. Unfortunately, the unit element is the prior probability of a proposition, and so varies across propositions. In [95], for the case where the unit element in the subjective Bayesian method can take any value in [0, 1], we gave an isomorphic transformation function between the certainty factor model and the subjective Bayesian method. In addition, in [94], we presented work dealing with the problem of sharing information between the widely used certainty factor model and Bayesian networks.
· Qualitative methods. Parsons and Saffiotti [57] try to use a qualitative method to attack the issue of transformation between different uncertain reasoning models. They outline a kind of interlingua which is weak enough to be subsumed by different uncertainty representation languages. However, their interlingua is only qualitative and too weak, and will never produce results as accurate as quantitative methods. In fact, their interlingua can only express that a value increases, decreases or does not change.
Based on the work [95], this paper further constructs a family of isomorphic transformation functions which can exactly transform uncertainties between the certainty factor model and the subjective Bayesian method, under the condition that in the subjective Bayesian method the unit element can take any value in [0, 1]. The significance of our family of isomorphic transformations is that it can capture the following nice intuitions.
Intuitively, a value representing belief would be transformed into a bigger value by a domain expert with an optimistic attitude than by an expert with a pessimistic attitude, while a value representing disbelief would be transformed into a smaller value by a domain expert with an optimistic attitude than by an expert with a pessimistic attitude. In particular, the motivation for investigating the transformation between the certainty factor model and the subjective Bayesian method lies in a realistic consideration. Recent dramatic progress in the Internet makes it possible to integrate stand-alone pre-existing expert systems into a distributed multi-agent environment. Investigating the issue of information sharing between heterogeneous reasoning models can facilitate the integration of stand-alone expert systems through the Internet. In the past, expert systems exploited mainly the certainty factor model and the subjective Bayesian method to deal with uncertainties. In other words, a lot of stand-alone systems using these two models pre-exist. If there is a reasonable transformation mechanism between the certainty factor model and the subjective Bayesian method, so that the models can share heterogeneous uncertain information, we can use the Internet to couple these pre-existing expert systems together so that their problem-solving capability is enhanced through sharing useful information with each other. Similar work on coupling pre-existing systems together can also be found in the


research area of multi-agent systems. For example, the work of Jennings [31] provides a multi-agent architecture for cooperation among possibly pre-existing and stand-alone systems. The ARCHON project [7,32–35,69,85], which Jennings led, is another example: it provides an architecture for integrating multiple pre-existing expert systems to exchange information and, therefore, increase overall performance. However, the problem of sharing information between agents based on heterogeneous uncertain reasoning models has not been addressed in these works. On the other hand, the World Wide Web has provided a convenient medium for the delivery of a variety of knowledge-based systems (KBSs), including expert systems. Web-based KBSs solve well the problem of availability, i.e., having the expertise provided by a KBS at any place and time where it is needed [23,24]. Some tools and languages for supporting web-based KBS applications have been developed, for example the Java Expert System Shell [16]. Moreover, researchers have developed some specific web-based expert systems. For instance, Grove and Hulse [22–24] developed a web-based expert system, the Reptile Identification Helper, which makes expert herpetological advice available through the Internet for workers who are attempting to identify specimens sighted in the field. Indeed, applications of web-based expert systems can be found in distinct application domains including business/industry, education/research, government and medical informatics [24]. However, many issues associated with web-based KBSs still need to be dealt with. One of them is communication and cooperation among KBSs; specific problems include the lack of common protocols for domain-level knowledge sharing and exchanging for cooperation among web-based KBSs [23]. Our work in this paper can be regarded as an attempt to solve this problem. The rest of this paper is organised as follows.
In Section 2, we show that the transformation between two heterogeneous uncertain reasoning models used in rule-based systems can be based on a homomorphic (especially isomorphic) mapping between the algebraic structures corresponding to the uncertain reasoning models. In Section 3, we briefly review the algebraic structures corresponding to the certainty factor model and the subjective Bayesian method. In Section 4, under these criteria, we discover a family of isomorphic functions which can exactly transform the uncertainties of a proposition between the certainty factor model and the subjective Bayesian method for any value of the prior probability of the proposition. This solves one of the key problems in the area of distributed expert systems, which are special multi-agent systems, because it offers a perfect solution for cooperation among different expert systems using the certainty factor model and the subjective Bayesian method. In Section 5, we discuss the significance of the work, especially in the area of multi-agent systems. In Section 6, we summarise the paper.


2. The criteria for transformations

In a multi-agent system, if different intelligent agents use different uncertain reasoning models, then in order to share information the uncertainty value of a proposition needs to be transformed from one model to another when these agents cooperate to solve problems. This section considers how to judge which transformations are reasonable. In an uncertain reasoning model used in rule-based systems, the propagation of uncertainties depends on five operations: AND, OR, NOT, IMPLY, and parallel combination. The parallel combination operation is specific to an uncertain reasoning model, because uncertain reasoning involves combining the uncertainties of the same proposition from different sources, and this operation is used to do so. Intuitively, the order of transformation and parallel combination should be irrelevant. Suppose in a multi-agent system there are three agents ES1, ES2 and ES3. ES1 and ES2 employ the same uncertain reasoning model, while ES3 employs a different one. The following events are possible:
(1) ES1 and ES2 each output to ES3 a piece of uncertainty information about the same proposition H. That is, these two pieces of information are first transformed from the uncertain reasoning model (used by ES1 and ES2) into the other model (used by ES3), and then a parallel combination is performed in the model used by ES3.
(2) Suppose in ES1 there are the two rules H1 → H and H2 → H, and there is enough information to allow ES1 to use them and obtain two pieces of uncertainty information about the proposition H. Clearly, ES1 should first perform a parallel combination on these two pieces of information, then transform the result from the model used by ES1 to the model used by ES3, and finally output it to ES3.
In the above two events, if the two pairs of information about H are the same, then evidently the results obtained by ES3 should be the same.
That is, the result of transformation after parallel combination should be the same as that of parallel combination after transformation. In other words, the parallel combination operation should be preserved under the transformation function. In an uncertain reasoning model, the set of all possible estimates for the uncertainty of propositions should contain the following three special elements: ⊤, ⊥ and e.
(1) ⊤ represents that proposition H is known to be true; e.g., in the certainty factor model ⊤ = 1, and in the subjective Bayesian method ⊤ = 1;


(2) ⊥ represents that proposition H is known to be false; e.g., in the certainty factor model ⊥ = −1, while in the subjective Bayesian method ⊥ = 0; and
(3) e, called the unit element, represents the uncertainty of proposition H without observations; e.g., in the certainty factor model e = 0, while in the subjective Bayesian method e = P(H).
Obviously, on transforming uncertainty estimates from one uncertain model to another, these special values should correspond to each other. In an uncertain reasoning model, the set of estimates for uncertainties of propositions is an ordered set. Obviously, on transforming uncertainty estimates from one uncertain model to another, the order relations should be preserved. Having discussed the intuitions, we can now define the criteria formally. In two different uncertain reasoning models, (1) let the sets of possible uncertainty values of proposition H be U1 and U2, respectively; (2) let the order relationships on U1 and U2 be ≤1 and ≤2, respectively; (3) let the uncertainty be described by ⊤1 and ⊤2, respectively, when proposition H is known to be true, and by ⊥1 and ⊥2, respectively, when it is known to be false; (4) let the parallel combination operators on U1 \ {⊤1, ⊥1} and on U2 \ {⊤2, ⊥2} be ⊕1 and ⊕2, respectively; and (5) let the unit elements corresponding to proposition H be e1 and e2, respectively.

Definition 1. A map F : U1 → U2 is said to be an h-transformation from (U1 \ {⊤1, ⊥1}, ⊕1) to (U2 \ {⊤2, ⊥2}, ⊕2) if it satisfies:
1. F(⊕1(x, y)) = ⊕2(F(x), F(y)) for all x, y ∈ U1 \ {⊤1, ⊥1};
2. F(⊤1) = ⊤2;
3. F(⊥1) = ⊥2;
4. F(e1) = e2;
5. for all x1, x2 ∈ U1 \ {⊤1, ⊥1}, if x1 ≤1 x2, then F(x1) ≤2 F(x2).

In the above definition, Item 1 is the preservation of parallel combination operations, Items 2–4 are the correspondences between special elements, and Item 5 is the preservation of order relationships. Before further discussion, it is useful to recall several basic concepts in algebra.
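Definition 1 lends itself to a direct numerical check. The Python sketch below is our own; the helper names (`preserves`, `cf_combine`, `p_combine`, `cf_to_p`) are of our choosing, and the three concrete maps anticipate the formulas of Sections 3 and 4 (`cf_to_p` is the a = 1 member of the family, Corollary 1):

```python
# Sketch of a numeric test for criterion 1 of Definition 1.  The helper
# names are ours; cf_combine (Section 3.1), p_combine (Section 3.2) and
# cf_to_p (Corollary 1, a = 1) anticipate later sections.

def preserves(F, comb1, comb2, pairs, tol=1e-9):
    """Criterion 1: F(comb1(x, y)) == comb2(F(x), F(y)) on sample pairs."""
    return all(abs(F(comb1(x, y)) - comb2(F(x), F(y))) < tol for x, y in pairs)

def cf_combine(x1, x2):                     # parallel combination in MYCIN
    if x1 > 0 and x2 > 0:
        return x1 + x2 - x1 * x2
    if x1 < 0 and x2 < 0:
        return x1 + x2 + x1 * x2
    return (x1 + x2) / (1 - min(abs(x1), abs(x2)))

def p_combine(x1, x2, prior):               # parallel combination in PROSPECTOR
    num = x1 * x2 * (1 - prior)
    return num / ((1 - x1) * (1 - x2) * prior + num)

def cf_to_p(x, prior):                      # the a = 1 transformation
    if x > 0:
        return prior / (1 - x * (1 - prior))
    return (1 + x) * prior / (1 + x * prior)

prior = 0.6
pairs = [(-0.7, 0.3), (0.2, 0.5), (-0.4, -0.1)]
assert preserves(lambda x: cf_to_p(x, prior), cf_combine,
                 lambda u, v: p_combine(u, v, prior), pairs)
# Criteria 2-4: special elements map to special elements.
assert abs(cf_to_p(1.0, prior) - 1.0) < 1e-12
assert abs(cf_to_p(-1.0, prior)) < 1e-12
assert abs(cf_to_p(0.0, prior) - prior) < 1e-12
```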
(1) If X is a set and ∘ is an operation on X, then the pair (X, ∘) is called an algebraic structure. In particular, an algebraic structure (X, ∘) is a group if ∘ is associative, has a unit element, and every element has an inverse element.
(2) Let (X, ∘X) and (Y, ∘Y) be two algebraic structures. A mapping f : X → Y is called a homomorphism if

$$f(\circ_X(x_1, x_2)) = \circ_Y(f(x_1), f(x_2)) \quad \forall x_1, x_2 \in X.$$

Furthermore, if f is a 1–1 mapping, it is called an isomorphism between (X, ∘X) and (Y, ∘Y).


Therefore, an h-transformation between two uncertain reasoning models is a homomorphism between the two algebraic structures corresponding to the two uncertain reasoning models.

3. Algebra structures of uncertain reasonings

Since the criteria for reasonable transformations are based on the algebraic structures corresponding to uncertain reasoning models, before discussing how to construct transformation functions between the certainty factor model and the subjective Bayesian method, this section first discusses their algebraic structures.

3.1. The certainty factor algebra

In the certainty factor model, the set of uncertainties of any proposition is the interval [−1, 1], and the combination operation ⊕CF on (−1, 1) is defined as

$$\oplus_{CF}(CF(H,S_1), CF(H,S_2)) = CF(H, S_1 \wedge S_2),$$

where CF(H, S1 ∧ S2) is given by

$$CF(H, S_1 \wedge S_2) = \begin{cases} CF(H,S_1) + CF(H,S_2) - CF(H,S_1)\,CF(H,S_2) & \text{if } CF(H,S_1) > 0,\ CF(H,S_2) > 0, \\ CF(H,S_1) + CF(H,S_2) + CF(H,S_1)\,CF(H,S_2) & \text{if } CF(H,S_1) \le 0,\ CF(H,S_2) \le 0, \\ \dfrac{CF(H,S_1) + CF(H,S_2)}{1 - \min\{|CF(H,S_1)|, |CF(H,S_2)|\}} & \text{if } CF(H,S_1) \cdot CF(H,S_2) < 0. \end{cases} \tag{1}$$

Theorem 1. ((−1, 1), ⊕CF) is a group.

Proof. It is easy to verify that the operator ⊕CF on (−1, 1) is closed and satisfies the associative and commutative laws. The unit element is 0 and the inverse element of x is −x [90]. So ((−1, 1), ⊕CF) is a group. □

The above group, called the certainty factor group, can be described explicitly as follows:
(1) set: (−1, 1);
(2) operator ⊕CF : (−1, 1) × (−1, 1) → (−1, 1), given by

$$\oplus_{CF}(x_1, x_2) = \begin{cases} x_1 + x_2 - x_1 x_2 & \text{if } x_1 > 0,\ x_2 > 0, \\ x_1 + x_2 + x_1 x_2 & \text{if } x_1 < 0,\ x_2 < 0, \\ \dfrac{x_1 + x_2}{1 - \min(|x_1|, |x_2|)} & \text{if } x_1 x_2 \le 0; \end{cases}$$


(3) unit element: 0;
(4) for all x ∈ (−1, 1), the inverse element of x is x⁻¹ = −x.
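The certainty factor group can be transcribed directly. The following Python sketch is our own (the name `cf_combine` is of our choosing), with numeric checks of the unit and inverse elements:

```python
# The certainty factor combination operator of Eq. (1), together with
# numeric checks of the unit element 0 and the inverse element -x.

def cf_combine(x1: float, x2: float) -> float:
    """Parallel combination in the certainty factor group ((-1, 1), CF)."""
    if x1 > 0 and x2 > 0:
        return x1 + x2 - x1 * x2
    if x1 < 0 and x2 < 0:
        return x1 + x2 + x1 * x2
    return (x1 + x2) / (1 - min(abs(x1), abs(x2)))  # mixed signs or zero

assert cf_combine(0.4, 0.0) == 0.4                  # unit element
assert abs(cf_combine(0.4, -0.4)) < 1e-12           # inverse element
# Associativity on a sample triple:
assert abs(cf_combine(cf_combine(0.5, -0.3), 0.2)
           - cf_combine(0.5, cf_combine(-0.3, 0.2))) < 1e-12
```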

3.2. The subjective Bayesian algebra

In the subjective Bayesian method, the set of uncertainties of any proposition H is the interval (0, ∞), and the combination function ⊗O on (0, ∞) is defined as

$$\otimes_O(O(H \mid S_1), O(H \mid S_2)) = O(H \mid S_1 \wedge S_2), \tag{2}$$

where O(H | S1 ∧ S2) is given by

$$O(H \mid S_1 \wedge S_2) = \frac{O(H \mid S_1)\, O(H \mid S_2)}{O(H)}. \tag{3}$$

In the above combination function, O represents odds. The relationship between odds and probability is given by

$$O(x) = \frac{P(x)}{1 - P(x)}. \tag{4}$$

By using relationship (4), we can turn (3) into (6). In other words, we can transform the subjective Bayesian method from the form of odds to the form of probability: the set of uncertainties of any proposition H is (0, 1), on which the combination function ⊗P is defined as

$$\otimes_P(P(H \mid S_1), P(H \mid S_2)) = P(H \mid S_1 \wedge S_2), \tag{5}$$

where P(H | S1 ∧ S2) is given by

$$P(H \mid S_1 \wedge S_2) = \frac{P(H \mid S_1)\, P(H \mid S_2)\, P(\neg H)}{P(\neg H \mid S_1)\, P(\neg H \mid S_2)\, P(H) + P(H \mid S_1)\, P(H \mid S_2)\, P(\neg H)}. \tag{6}$$

Theorem 2. ((0, ∞), ⊗O) is a group.

Proof. We can verify that the operator ⊗O on (0, ∞) is closed, associative, and commutative [90]. Moreover, the unit element exists, and for any element x ∈ (0, ∞), its inverse element x⁻¹ exists [90]. Thus ((0, ∞), ⊗O) is a group. □

The above group can be described explicitly as follows:
(1) set: (0, ∞);
(2) operator ⊗O : (0, ∞) × (0, ∞) → (0, ∞), given by

$$\otimes_O(x_1, x_2) = \frac{x_1 x_2}{O(H)};$$

(3) unit element: O(H) for proposition H (a constant);
(4) for all x ∈ (0, ∞), the inverse element of x is

$$x^{-1} = \frac{O(H) \cdot O(H)}{x}.$$

In contrast with the certainty factor model, the operator ⊗O explicitly involves the unit element of proposition H, because in the subjective Bayesian method different propositions have different unit elements.

Theorem 3. ((0, 1), ⊗P) is a group.

Proof. We can verify that the operator ⊗P on (0, 1) is closed, associative, and commutative [90]. Moreover, the unit element exists, and for any element x ∈ (0, 1), its inverse element x⁻¹ exists [90]. Thus ((0, 1), ⊗P) is a group. □

The above group, called the subjective Bayesian group, can be described explicitly as follows:
(1) set: (0, 1);
(2) operator ⊗P : (0, 1) × (0, 1) → (0, 1), given by

$$\otimes_P(x_1, x_2) = \frac{x_1 x_2 (1 - P(H))}{(1 - x_1)(1 - x_2) P(H) + x_1 x_2 (1 - P(H))};$$

(3) unit element: P(H) (a constant);
(4) for all x ∈ (0, 1), the inverse element of x is

$$x^{-1} = \frac{P(H)^2 (1 - x)}{x(1 - 2P(H)) + P(H)^2}.$$
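The unit and inverse elements of Theorem 3 can be verified numerically. A minimal Python sketch, with names of our own:

```python
# The subjective Bayesian group ((0, 1), P): unit element P(H) and the
# inverse element formula above, checked on a few sample points.

def p_combine(x1, x2, prior):
    num = x1 * x2 * (1.0 - prior)
    return num / ((1.0 - x1) * (1.0 - x2) * prior + num)

def p_inverse(x, prior):
    return prior * prior * (1.0 - x) / (x * (1.0 - 2.0 * prior) + prior * prior)

prior = 0.3
for x in (0.1, 0.5, 0.9):
    assert abs(p_combine(x, prior, prior) - x) < 1e-12             # unit element
    assert abs(p_combine(x, p_inverse(x, prior), prior) - prior) < 1e-12  # inverse
```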

Theorem 4. ((0, 1), ⊗P) is a group isomorphic to ((0, ∞), ⊗O).

Proof. The map fO→P : (0, ∞) → (0, 1), defined as

$$f_{O \to P}(x) = \frac{x}{1 + x},$$

is an isomorphism from ((0, ∞), ⊗O) to ((0, 1), ⊗P). In fact, fO→P is clearly a 1–1 map, and

$$f_{O \to P}(\otimes_O(x_1, x_2)) = f_{O \to P}\!\left(\frac{x_1 x_2}{O(H)}\right) = \frac{x_1 x_2}{O(H) + x_1 x_2} = \frac{\frac{x_1}{1+x_1} \cdot \frac{x_2}{1+x_2} \cdot (1 - P(H))}{\frac{1}{1+x_1} \cdot \frac{1}{1+x_2} \cdot P(H) + \frac{x_1}{1+x_1} \cdot \frac{x_2}{1+x_2} \cdot (1 - P(H))} = \otimes_P\!\left(\frac{x_1}{1+x_1}, \frac{x_2}{1+x_2}\right) = \otimes_P(f_{O \to P}(x_1), f_{O \to P}(x_2)). \qquad \square$$

4. A family of isomorphic transformations

Having discussed the algebraic structures of the certainty factor model and the subjective Bayesian method, we can now give transformation functions between the two models.

Lemma 1. The map

$$g(x) = \begin{cases} 1 - 2^{-x} & \text{if } x \ge 0, \\ 2^{x} - 1 & \text{if } x \le 0 \end{cases} \tag{7}$$

is an isomorphism from ((−∞, ∞), +) to ((−1, 1), ⊕CF).

Hájek [25] gave the above lemma, which tells us that the set of all reals (−∞, ∞), with the usual addition +, is isomorphic to the certainty factor group ((−1, 1), ⊕CF). Hájek [25] also tried to give an isomorphism from ((−∞, ∞), +) to the subjective Bayesian group ((0, 1), ⊗P). Unfortunately, his solution is correct only in a very special case, namely P(H) = 0.5. In the following lemma, we give an isomorphism from ((−∞, ∞), +) to the subjective Bayesian group ((0, 1), ⊗P) in the general case where P(H) can take any value in [0, 1].

Lemma 2. The map

$$f(x) = \frac{2^{ax} \cdot P(H)}{P(\neg H) + 2^{ax} \cdot P(H)} \tag{8}$$

is an isomorphism from the group ((−∞, ∞), +) to ((0, 1), ⊗P), where a ∈ (0, ∞) is a constant.
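Lemma 2 can be exercised numerically. In this Python sketch (names ours), `f` is Eq. (8) and `p_combine` is the ⊗P operator of Section 3.2:

```python
# Lemma 2: f(x) = 2^{ax} P(H) / (P(~H) + 2^{ax} P(H)) maps addition on
# the reals to the operator of the subjective Bayesian group, and
# carries the unit 0 of (R, +) to the unit P(H) of ((0, 1), P).

def f(x, prior, a=1.0):
    t = 2.0 ** (a * x) * prior
    return t / ((1.0 - prior) + t)

def p_combine(x1, x2, prior):
    num = x1 * x2 * (1.0 - prior)
    return num / ((1.0 - x1) * (1.0 - x2) * prior + num)

prior, a = 0.25, 0.5
for x1, x2 in ((1.0, 2.0), (-3.0, 0.7)):
    assert abs(f(x1 + x2, prior, a)
               - p_combine(f(x1, prior, a), f(x2, prior, a), prior)) < 1e-12
assert abs(f(0.0, prior, a) - prior) < 1e-12   # unit maps to unit
```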


Proof. Clearly, f is a 1–1 map. Moreover, substituting (8) into the definition of ⊗P and simplifying,

$$\otimes_P(f(x_1), f(x_2)) = \frac{f(x_1) f(x_2)(1 - P(H))}{(1 - f(x_1))(1 - f(x_2)) P(H) + f(x_1) f(x_2)(1 - P(H))} = \frac{2^{a(x_1 + x_2)} \cdot P(H)}{P(\neg H) + 2^{a(x_1 + x_2)} \cdot P(H)} = f(x_1 + x_2).$$

Therefore, the lemma holds. □

Lemma 3. Let f1 be an isomorphism from the group (G1, ∘1) to the group (G2, ∘2), and let f2 be an isomorphism from the group (G2, ∘2) to the group (G3, ∘3). Then f1⁻¹ is an isomorphism from (G2, ∘2) to (G1, ∘1), and f2 ∘ f1 is an isomorphism from (G1, ∘1) to (G3, ∘3).

This is a basic fact in modern algebra [47].

Theorem 5. The map

$$f_{CF \to P}(x) = \begin{cases} \dfrac{P(H)}{(1 - x)^{a}(1 - P(H)) + P(H)} & \text{if } 1 > x > 0, \\[2ex] \dfrac{(1 + x)^{a} \cdot P(H)}{(1 - P(H)) + (1 + x)^{a} \cdot P(H)} & \text{if } 0 \ge x > -1 \end{cases} \tag{9}$$

is an isomorphism from ((−1, 1), ⊕CF) to ((0, 1), ⊗P).

Proof. By Lemmas 1 and 3, we know that the map

$$g^{-1}(x) = \begin{cases} -\log_2(1 - x) & \text{if } 1 > x > 0, \\ \log_2(1 + x) & \text{if } 0 \ge x > -1 \end{cases}$$

is an isomorphism from ((−1, 1), ⊕CF) to ((−∞, ∞), +). And by Lemmas 2 and 3, we know that an isomorphism from ((−1, 1), ⊕CF) to ((0, 1), ⊗P) is as follows:


$$f_{CF \to P}(x) = f \circ g^{-1}(x) = \begin{cases} \dfrac{2^{a(-\log_2(1-x))} \cdot P(H)}{P(\neg H) + 2^{a(-\log_2(1-x))} \cdot P(H)} & \text{if } 1 > x > 0, \\[2ex] \dfrac{2^{a \log_2(1+x)} \cdot P(H)}{P(\neg H) + 2^{a \log_2(1+x)} \cdot P(H)} & \text{if } 0 \ge x > -1 \end{cases} = \begin{cases} \dfrac{(1-x)^{-a} \cdot P(H)}{P(\neg H) + (1-x)^{-a} \cdot P(H)} & \text{if } 1 > x > 0, \\[2ex] \dfrac{(1+x)^{a} \cdot P(H)}{P(\neg H) + (1+x)^{a} \cdot P(H)} & \text{if } 0 \ge x > -1 \end{cases} = \begin{cases} \dfrac{P(H)}{(1-x)^{a}(1 - P(H)) + P(H)} & \text{if } 1 > x > 0, \\[2ex] \dfrac{(1+x)^{a} \cdot P(H)}{(1 - P(H)) + (1+x)^{a} \cdot P(H)} & \text{if } 0 \ge x > -1. \end{cases} \qquad \square$$

Notice that fCF→P(1) = 1, fCF→P(−1) = 0, fCF→P(0) = P(H), and fCF→P(x) is monotonically increasing. Thus, by Definition 1, the above theorem gives h-transformations from the certainty factor model to the subjective Bayesian method. By the above theorem, when a = 1 we have:

Corollary 1. The map

$$f_{CF \to P}(x) = \begin{cases} \dfrac{P(H)}{1 - x(1 - P(H))} & \text{if } 1 \ge x > 0, \\[2ex] \dfrac{(1 + x) P(H)}{1 + x \cdot P(H)} & \text{if } 0 \ge x \ge -1 \end{cases} \tag{10}$$

is an isomorphism from ((−1, 1), ⊕CF) to ((0, 1), ⊗P).

This is a result we obtained in [95]. Graphs of this mapping are shown in Fig. 1. Hájek [25] gives only the P(H) = 0.5 member among the many transformation functions shown in Fig. 1. From Fig. 1 we can observe an interesting property: a transformation function with a bigger value of P(H) always lies above one with a smaller value of P(H). For the same value of P(H), different values of a give different isomorphisms. This is reasonable because different human experts may have different attitudes when performing a transformation from the certainty factor model to the subjective Bayesian method. Fig. 2 shows graphs of these isomorphisms for different values of a with the same value of P(H).
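The whole family (9) is easy to implement. The following Python sketch (the function name `cf_to_p` is our own, with the attitude degree a as a parameter) checks the special elements and the monotonicity required by Definition 1:

```python
# Eq. (9): the family of isomorphisms from the certainty factor model to
# the subjective Bayesian method, parameterised by the attitude degree a.

def cf_to_p(x, prior, a=1.0):
    if x > 0:
        return prior / ((1.0 - x) ** a * (1.0 - prior) + prior)
    return (1.0 + x) ** a * prior / ((1.0 - prior) + (1.0 + x) ** a * prior)

prior = 0.35
assert abs(cf_to_p(1.0, prior, 2.0) - 1.0) < 1e-12    # top maps to top
assert abs(cf_to_p(-1.0, prior, 2.0)) < 1e-12         # bottom maps to bottom
assert abs(cf_to_p(0.0, prior, 2.0) - prior) < 1e-12  # unit maps to unit
# Criterion 5 of Definition 1: monotonically increasing in x.
xs = [-0.9, -0.5, 0.0, 0.5, 0.9]
ys = [cf_to_p(x, prior, a=2.0) for x in xs]
assert ys == sorted(ys)
```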

Fig. 1. Isomorphism from the certainty factor model to the subjective Bayesian method.

From the comparison in Fig. 2, we can see that our family of isomorphisms can capture some nice intuitions of human experts.
· In real life, some people are positive: when they are in a good situation they do not feel the situation is so good, and still handle things cautiously, while when they are in a bad situation they do not lose their confidence, and so act boldly. When a < 1, the isomorphisms capture the attitude of such people. From Fig. 2 we can observe that, in the case a < 1 and for the same value of P(H), as the value of a gets smaller, the transformed value representing belief in H gets smaller, while the transformed value representing disbelief in H gets bigger. Intuitively, a value representing belief would be transformed into a smaller value (i.e., less belief) by a more cautious domain expert, while a value representing disbelief would be transformed into a bigger value (i.e., less disbelief) by a more confident domain expert.
· In real life, some people are negative: when they are in a good situation they feel the situation is better than it really is, while when they are in a bad situation they feel the situation is worse than it really is. When a > 1, the isomorphisms capture the attitude of such people. From Fig. 2 we can see that, in the case a > 1 and for the same value of P(H), as the value of a gets bigger, the transformed value representing belief in H gets bigger, while the transformed value representing disbelief in H gets smaller. Intuitively, a value representing belief would be transformed into a bigger value (i.e., more belief) by a less cautious domain expert, while a value representing disbelief would be transformed into a smaller value (i.e., more disbelief) by a less confident domain expert.
Accordingly, the value of a can be regarded as an attitude degree, measuring how positive or negative a domain expert is when performing a transformation from the certainty factor model to the subjective Bayesian method. In summary,


(1) a < 1 means that the domain expert is positive, and the smaller a is, the more positive the domain expert;
(2) a > 1 means that the domain expert is negative, and the bigger a is, the more negative the domain expert;
(3) a = 1 means that the domain expert is neutral.
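The summary above can be observed directly in code. A small Python demonstration (`cf_to_p` is our own transcription of Eq. (9)):

```python
# Attitude degree a in Eq. (9): for a belief value (CF > 0), a smaller a
# gives a smaller transformed probability; for a disbelief value
# (CF < 0), a smaller a gives a bigger one.

def cf_to_p(x, prior, a=1.0):
    if x > 0:
        return prior / ((1.0 - x) ** a * (1.0 - prior) + prior)
    return (1.0 + x) ** a * prior / ((1.0 - prior) + (1.0 + x) ** a * prior)

prior, belief, disbelief = 0.5, 0.6, -0.6
# positive (a < 1) vs neutral (a = 1) vs negative (a > 1) experts:
assert cf_to_p(belief, prior, 0.3) < cf_to_p(belief, prior, 1.0) \
       < cf_to_p(belief, prior, 3.0)
assert cf_to_p(disbelief, prior, 0.3) > cf_to_p(disbelief, prior, 1.0) \
       > cf_to_p(disbelief, prior, 3.0)
```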

Fig. 2. The comparison of isomorphisms from the certainty factor model to the subjective Bayesian method: (a) in the case P(H) = 0.2; (b) in the case P(H) = 0.5; (c) in the case P(H) = 0.8.


The above discussion concerns the transformation from the certainty factor model to the subjective Bayesian method. The following concerns the transformation from the subjective Bayesian method to the certainty factor model.

Theorem 6. The map

$$f_{P \to CF}(x) = \begin{cases} 1 - \left(\dfrac{(1 - x) P(H)}{(1 - P(H))\, x}\right)^{1/a} & \text{if } 1 \ge x > P(H), \\[2ex] \left(\dfrac{x (1 - P(H))}{(1 - x) P(H)}\right)^{1/a} - 1 & \text{if } 0 \le x \le P(H) \end{cases} \tag{11}$$

is an isomorphism from ((0, 1), ⊗P) to ((−1, 1), ⊕CF).

Proof. Note that fP→CF = fCF→P⁻¹, and so by Lemma 3 the conclusion holds. □

Since fP→CF(1) = 1, fP→CF(0) = −1, fP→CF(P(H)) = 0, and fP→CF(x) is monotonically increasing, by Definition 1 the above theorem gives h-transformations from the subjective Bayesian method to the certainty factor model. By the above theorem, when a = 1, we have:

Corollary 2. The map

$$f_{P \to CF}(x) = \begin{cases} \dfrac{x - P(H)}{x(1 - P(H))} & \text{if } 1 \ge x > P(H), \\[2ex] \dfrac{x - P(H)}{(1 - x) P(H)} & \text{if } 0 \le x \le P(H) \end{cases} \tag{12}$$

is an isomorphism from ((0, 1), ⊗P) to ((−1, 1), ⊕CF).

This is also a result we obtained in [95]. Graphs of this mapping are shown in Fig. 3.
is an isomorphism from …… 1; 1†; CF † to ……0; 1†; P †. This is also a result we obtain in [95]. The ®gures of this mapping are as shown in Fig. 3. 1

0.8

0.6

1

P(H)=0.2

0.4

0.5

Certainty

Certainty Factor

P(H)=0.4 0.2

0

–0.2

–0.5

P(H)=0.5

–0.4

P(H)=0.6

–1 1

P(H)=0.8

–0.6

0

1.5 –0.8

0.5

1 0.5

–1

0

0.2

0.4 0.6 Proability

0.8

1

Prior Proability

0

0

Probability

Fig. 3. Isomorphism from the subjective Bayesian method to the certainty factor model.
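Theorem 6 and Corollary 2 can be checked with a round trip through both transformations. The Python sketch below (names ours) re-implements Eq. (9) and Eq. (11):

```python
# Round trip: f_{P->CF} (Eq. (11)) inverts f_{CF->P} (Eq. (9)) for the
# same attitude degree a and the same prior P(H).

def cf_to_p(x, prior, a=1.0):
    if x > 0:
        return prior / ((1.0 - x) ** a * (1.0 - prior) + prior)
    return (1.0 + x) ** a * prior / ((1.0 - prior) + (1.0 + x) ** a * prior)

def p_to_cf(x, prior, a=1.0):
    if x > prior:
        return 1.0 - ((1.0 - x) * prior / ((1.0 - prior) * x)) ** (1.0 / a)
    return (x * (1.0 - prior) / ((1.0 - x) * prior)) ** (1.0 / a) - 1.0

prior, a = 0.3, 2.0
for cf in (-0.8, -0.2, 0.0, 0.4, 0.95):
    assert abs(p_to_cf(cf_to_p(cf, prior, a), prior, a) - cf) < 1e-9

# The unit element P(H) maps back to the CF unit element 0:
assert abs(p_to_cf(prior, prior, 1.0)) < 1e-12
```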


For the same value of P(H), different values of a give different isomorphisms. This is reasonable because different human experts may have different attitudes in the transformation from the subjective Bayesian method to the certainty factor model. In Fig. 4, we draw graphs of this isomorphism for different values of a with the same value of P(H). The analysis for Fig. 4 is similar to that for Fig. 2, but the value of a carries a different meaning, as follows:
(1) a < 1 means that the domain expert is negative; the smaller the value of a, the more negative the domain expert.
(2) a > 1 means that the domain expert is positive; the bigger the value of a, the more positive the domain expert.
(3) a = 1 means that the domain expert is neutral.

5. Related work

In this section, we further discuss the significance of the work in this paper.

5.1. Expert systems work for intelligent agents

Though the term intelligent agent lacks a widely accepted and precise definition, it is reserved for software/hardware entities which have some degree of reactivity, autonomy, and adaptability [83]. In this sense, expert systems are typically less complex than intelligent agents. However, an expert system can be an ingredient of an intelligent agent.¹ According to Brenner et al. [4, pp. 24–25], in a multi-agent system each agent should have a certain minimum degree of intelligence, which is formed from three main components: its internal knowledge base, the reasoning capabilities based on the contents of the knowledge base, and the ability to learn or adapt to changes in the environment. Since the basic components of an expert system are exactly an internal knowledge base and reasoning capabilities based on its contents, an expert system may be the easiest way to give an agent some intelligence (see [36, p. 39]).² In other words, reasoning is necessary for the intelligence of an intelligent agent.
Actually, this is not dicult to be understood. Generally 1

It is interesting that conversely an agent could be integrated into an expert system. For example, Brown et al. [6] integrate their intelligent interface agents into an expert system shell called PESKI (Probabilities, Experts Systems, Knowledge, and Inference) [5,26,27]. 2 In [36], Knapik and Johnson further discuss how agents can make use of expert systems that are up and running in most domains in which agents are likely to be used.

X. Luo et al. / Internat. J. Approx. Reason. 27 (2001) 27±59

43

1 α=0.3 α=0.7

0.8

α=1 0.6

α=3

0.5

Certainty Factor

0.4

Certainty Factor

1

α=2

0.2

0

0

–0.5

–0.2

α=3

–0.4

α=2

–0.6

α=1

–1 10 1

α=0.7 –0.8

(a)

–1

5

α=0.3

0

0.5

0.2

0.4 0.6 Proability

0.8

Attitudity α

1

0

Probability

0

1 α=0.3

0.8

α=0.7 α=1

0.6

1

α=2 0.5

α=3

0.2

Probability

Certainty Factor

0.4

0

–0.2

–0.5

α=3

–0.4

α=2

–1

α=1 –0.6

0

10

α=0.7

α=0.3

1

–0.8

5 0.5

(b)

–1

0

0.2

0.4 0.6 Proability

0.8

Attitudity α

1

0

0

Certainty Factor

1 α=0.3 α=0.7

0.8

α=1 1

0.6

α=2 α=3

0.2

0.5

Probability

Certainty Factor

0.4

0

0

–0.5

–0.2

–0.4

α=3

α=2

–1 10

–0.6

α=1

1

α=0.7 –0.8

5 α=0.3

(c)

–1

0

0.2

0.4 0.6 Proability

0.5 0.8

1

Attitudity α

0

0

Certainty Factor

Fig. 4. The comparison of isomorphisms from the subjective Bayesian method to the certainty factor model: (a) in the case P …H † ˆ 0:2; (b) in the case P …H † ˆ 0:5; (c) in the case P …H † ˆ 0:8.

speaking, an agent must perform two functions: perceptions of changes in the environment it situates, and actions to a€ect changes in the environment. Here the problem is how to choose a proper action against the changes of the environment. Barbara Hayes±Roth of Stanford's Knowledge Systems Laboratory insists that agents reason during the process of action selection (see [54, p. 10]).


In short, we could have

Expert System + Sensor + Effector + Communicator = Intelligent Agent.

The vivid agent system developed by Schroder and Wagner [75] is such a system without a communicator. The knowledge system of a vivid agent has an update operator and an inference operator, and allows various forms of knowledge representation, including (1) relational databases, (2) relational factbases, (3) factbases with deduction rules, (4) temporal, disjunctive, and fuzzy factbases, (5) deduction rules with negation-as-failure, (6) default rules with two kinds of negation, and (7) the rule-based specification of inter-agent cooperation. The action selection of a vivid agent is based on its knowledge.

5.2. Uncertain reasoning in intelligent agents

Expert systems can work for intelligent agents, and expert systems are always associated with uncertain reasoning. This raises a question: is uncertain reasoning necessary for intelligent agents? The answer is affirmative. Knapik and Johnson (see [36, p. 32]) make it clear why an intelligent agent must be able to deal gracefully with these uncertainties:

As Russell and Norvig point out, agents' actions based on first-order logic alone ``...almost never have access to the whole truth about their environment'' [70]. Therefore, agents cannot always know for certain what is the correct, rational action to take in the real world. There are too many uncertainties: environmental factors such as location, where to go next in the case of a mobile agent, resource uncertainties, unclear or wrong objectives and goals, faulty communication links, and so forth.

In other words, the state of the world can be uncertain, and the sensors of an agent can be uncertain (e.g., fuzzy sensors [15]); thus the agent's belief about the world can also be uncertain, and then the uncertainty about the selection and consequences of actions is unavoidable [39].
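The point made above, that an agent must select an action over uncertain beliefs about the world, can be illustrated with a minimal expected-utility sketch. The state names, probabilities, and utilities below are hypothetical, chosen only for illustration, not taken from this paper:

```python
# Minimal sketch: action selection under an uncertain belief state.
# An agent holds a probability distribution over world states (its belief)
# and picks the action maximising expected utility.

def expected_utility(action, belief, utility):
    """Sum the utility of `action` in each state, weighted by belief."""
    return sum(p * utility[(action, state)] for state, p in belief.items())

def select_action(actions, belief, utility):
    """Return the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, belief, utility))

# Hypothetical example: a mobile agent unsure whether a corridor is blocked.
belief = {"clear": 0.7, "blocked": 0.3}
utility = {("go", "clear"): 10, ("go", "blocked"): -20,
           ("wait", "clear"): -1, ("wait", "blocked"): -1}
best = select_action(["go", "wait"], belief, utility)
```

Here the uncertain belief (0.7 clear, 0.3 blocked) is exactly what an uncertain reasoning model would have to supply before any rational action choice can be made.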
Moreover, as Barbara Hayes-Roth of Stanford's Knowledge Systems Laboratory insists, action selection can be based on reasoning (see [54, p. 10]). Therefore, uncertain reasoning is necessary for intelligent agents. In fact, some researchers have already employed various uncertain reasoning models to handle uncertainty in agents. The following are some examples.
· The D-S evidence theory. (1) Parsons and Giorgini [58,61] extend the work carried out by Parsons et al. [59] on the use of argumentation in BDI agents (namely, agents capable of having Beliefs, Desires and Intentions) [67,68] to include degrees of belief. In their work, the degree of belief is expressed as a mass assignment in the D-S evidence theory [76]. If a piece of belief is received from another agent, the degree of the belief reflects the known reliability of that agent. If a piece of belief is a new observation an agent makes, the reliability of its sensors takes the place of the reliability of other agents. On updating the belief set when new information is received or sensed, the combination rule of the D-S evidence theory is used to calculate the degree of each piece of information in the belief set. (2) Similarly, Li and Zhang [40] employ the D-S evidence theory [76] to handle the issue of fusing an agent's beliefs with the information from its sensors. Further, they [41] utilise the D-S evidence theory to fuse the information that comes from an agent's own sensors with the information that is told to the agent by other agents. In their work, both pieces of information are uncertain. The work of Li and Zhang [40,41] elaborates belief fusing under uncertainty, but disregards belief revision when the new information is approximate, incomplete or erroneous. The issue of belief revision is, however, taken into account in the work of Parsons and Giorgini [58,61]. Basically, BDI agents are based on the use of mental attitudes: Beliefs, Desires and Intentions [67,68]. Neither the work of Parsons and Giorgini nor the work of Li and Zhang includes degrees of desire and intention, which are more problematic.
· Probability theory. (1) Lee and McCartney [39] use probabilistic models to handle the issue of intelligent interface agents acquiring plans for using resources from users' uncertain behaviours. (2) Thiebaux et al. [71] exploit Nilsson's probabilistic logic [56] to handle the uncertainty about the environment an agent is situated in and the uncertainty about the effect of an action that the agent performs. Additionally, it is worth specially mentioning that in 1999 Shoham [79] pointed out that intelligent agents could be based on probabilistic reasoning.
· Fuzzy theory. (1) For an issue similar to the one Thiebaux et al. [71] use Nilsson's probabilistic logic [56] to handle, Pereira et al. [63] give another proposal based on the possibility theory proposed by Dubois and Prade [8]. (2) El-Nasr et al. [13] use fuzzy logic to model the emotions of agents. Emotions are an important aspect of human intelligence; in fact, emotions play a considerable role in the human decision-making process. In their model, a fuzzy logic representation is used to map events and observations to emotion states. For example, for a pet, if its anger is High Intensity and its fear is Medium Intensity and the event is dish was taken away, then its behaviour is growl, where High Intensity and Medium Intensity are fuzzy linguistic terms.

Moreover, some researchers have developed uncertain reasoning models specially for multi-agent reasoning. (1) Kraus and Subrahmanian [37] develop a family of logics that a reasoning agent may use to perform successively more sophisticated types of reasoning about uncertainty in the world, about the actions that may occur in the world (either due to the agent itself or initiated by other agents), about the probabilistic beliefs of other agents, and about how these probabilistic beliefs change over time. (2) Xiang [88] proposes a probabilistic framework for cooperative multi-agent distributed interpretation and optimisation of communication. (3) Wong and Butz [86] propose a multi-agent probabilistic reasoning model. Unlike Xiang's model, their model is able to process input in a truly asynchronous fashion.

In addition, various other uncertainties in agents have also been dealt with. The following are some examples. (1) Xuan and Lesser [89] incorporate uncertainty in agent commitments. Commitments play a central role in multi-agent coordination. (2) Mudgal and Vassileva [53] discuss bilateral negotiation with incomplete and uncertain information. Automated negotiation is an important aspect of agent-mediated e-commerce for reaching satisfactory agreements among negotiating agents in business transactions. Mudgal and Vassileva model negotiation among negotiating agents by means of probabilistic influence diagrams [77], which are a kind of uncertain reasoning model and allow an agent to make rational choices. (3) Possibility-based approaches, proposed by Garcia-Calves, Gimenez-Funes, Godo, Rodríguez-Aguilar, Matos and Sierra (cf. [17,20,48]), provide some ways to perform multi-agent reasoning under uncertainty. In these approaches, uncertainties due to the lack of knowledge about other agents' behaviours are modelled by possibility distributions. Based on information induced from a case base composed of previous negotiation behaviours, the possibility distributions are generated by choosing the most similar situation and the most similar price from the case base with respect to the current environment. Sometimes, approximate reasoning techniques are used to fine-tune the distributions. Finally, the possibilistic decision model (e.g., [8]) is used to choose the most preferred decision with the highest global utility. These approaches can handle negotiation between two mutually influencing parties over many issues.
(4) The work of Luo et al. [46] and the work of Tyan et al. [72] both bridge fuzzy constraint satisfaction problems [9,74] and multi-agent systems, for different purposes. (5) Pinto et al. [64] extend the situation calculus with actions that have a non-deterministic or uncertain nature. The situation calculus was originally proposed by McCarthy [49,50] as a logical framework for representing knowledge about actions and the changes they provoke in the world.

5.3. Heterogeneous uncertain reasoning in multi-agent systems

Since there exist many heterogeneous uncertain reasoning models, different agents in a multi-agent system could make use of different expert systems with heterogeneous uncertain reasoning models. Therefore, the issue of transforming heterogeneous uncertain information between heterogeneous uncertain reasoning models must be taken into account. For example, suppose two agents have different sensors, so that some information cannot be sensed by one agent but can be sensed by the other. Thus, one agent must get some information through the other agent. Now suppose an agent gets some information via its own sensor and then uses its own uncertain reasoning model to analyse the information. Further, it sends the other agent the result of the analysis. If, before making a decision or performing a certain action, the other agent needs to analyse the result further with its own uncertain reasoning model, the analysis result must be transformed from one uncertain reasoning model to the other.

Barwise and Etchemendy [2] develop a computational architecture for heterogeneous reasoning. Their study is motivated by the fact that reasoning, problem solving, and the general process of acquiring knowledge are typically complicated, collaborative, and heterogeneous activities, instead of isolated and homogeneous affairs involving an agent using a single form of representation. At this point, our work is similar to theirs. However, no heterogeneous uncertain reasoning is involved in their work; on the other hand, the heterogeneous reasoning in their work is not involved in ours. So, we hope that our theory and theirs can together advance the theory and practice of collaborative, heterogeneous reasoning in multi-agent environments, and pave the way for the application of these methods to other application domains.

5.4. Information sharing

Generally speaking, the issue of information sharing and exchanging is important in developing multi-agent systems. In fact, many contributions have been made to this topic in the area of multi-agent systems, such as the knowledge sharing effort [55], i.e., sharing of knowledge (objects, attributes, etc.); global planning [12], i.e., sharing of control information; and the sharing of social knowledge defining the structure and type of the multi-agent system [21].
The issue of information transformation between different logic models in multi-agent systems is also addressed in the work of Lu and Ying [42]. Moreover, the issue of information sharing is studied by Su et al. [81] based on a modal logic. However, the issue of sharing heterogeneous uncertain information is not involved in these studies.

One of the most important efforts on knowledge sharing, which must be specially mentioned, is the Knowledge Interchange Format (KIF) proposed by Genesereth, Fikes, Patil, Patel-Schneider, Mckay, Finin, Gruber and Neches (see, e.g., [18,19,62]). KIF aims at attacking the heterogeneous language problem. KIF can be considered as an interlingua for knowledge sharing and communication among heterogeneous agents. Actually, a sending agent translates knowledge from its application-specific representation into the interlingua for communication purposes, and then a receiving agent translates knowledge from the interlingua into its own application-specific representation.


KIF provides for the expression of arbitrary sentences in the first-order predicate calculus, the representation of knowledge, the representation of non-monotonic reasoning rules, and the definition of objects, functions, and relations. Nevertheless, KIF does not support heterogeneous uncertain knowledge and contexts.

A recent work similar to KIF is the Constraint Interchange Format (CIF) proposed by Preece et al. [66]. Knowledge held in individual agents can be transformed into the common constraint language CIF for sharing. Like KIF, however, CIF does not support heterogeneous uncertain knowledge and contexts either.

5.5. Heterogeneities

Various heterogeneities in multi-agent systems are involved in some works [30,38,65,84], and some efforts have been made towards the collaboration of heterogeneous multi-agent systems (e.g., [30,82]). However, heterogeneous reasoning models have not been involved in the works mentioned above.

5.6. Differences between expert systems and agents

In 1988, Bond and Gasser [3] defined a multi-agent system as a loosely coupled network of autonomous entities, called agents, which have individual capabilities, knowledge and resources, and which interact to share their knowledge and resources, and to solve problems beyond their individual capabilities. According to this definition, expert systems and agents, and distributed expert systems and multi-agent systems, look very similar.

Firstly, between them there are just some small differences, as follows:
· Agents get information about the changes in their environment through their sensors, while expert systems do so through users acting as middlemen [87].
· Agents perform actions directly upon their environment, while expert systems do not work in this way; rather, they give feedback or advice to a third party [87].
· In a multi-agent system, each agent may have its private goal and so act selfishly towards that goal, while in a distributed expert system all expert systems have a common goal and work together towards it [40]. Of course, in a multi-agent system it is also allowed for all agents to have a common goal.

Secondly, to some extent expert systems are agents. On the one hand, although generally speaking expert systems do not interact directly with the environment they are situated in, they are able to perform inference actions autonomously. In other words, expert systems have some degree of autonomy, which is one of the essential characteristics of agents. On the other hand, as Parsons and Wooldridge [60] mentioned, some special expert systems, namely real-time (typically process control) expert systems, are agents, because such expert systems can interact directly with the environment they are situated in.

Thirdly, a sub-expert system in a distributed expert system can be regarded as a purely communicating agent to some extent. In fact, Ferber [14, p. 12], who is one of the founders of the multi-agent discipline, defines a purely communicating agent as a computing entity which
· is in an open computing system (assembly of applications, networks and heterogeneous systems),
· can communicate with other agents,
· is driven by a set of its own objectives,
· possesses resources of its own,
· has only a partial representation of other agents,
· possesses skills (services) which it can offer to other agents,
· has behaviour tending towards attaining its objectives, taking into account the resources and skills available to it and depending on its representations and on the communications it receives.
A sub-expert system in a distributed expert system holds almost all the properties in the above definition of a purely communicating agent.

In summary, to some extent expert systems can be viewed as a kind of simple agent. As a result, even outside the situation where expert systems work for agents, the topic of research in this paper can be regarded as a special topic of research on multi-agent systems.

5.7. Correctness of the certainty factor model

The issue of the correctness of the certainty factor model in the sense of probability theory has been discussed by several researchers. (1) Adams [1] and Schocken [73] proved that the formula for parallel propagation in the model is partially consistent with probability theory.
(2) In [93,95] and in the work in this paper, we proved that if the formula for parallel propagation in the model is viewed as a binary operation on (−1, 1) (denoted ⊕CF), and the pure probability parallel propagation formula is regarded as a binary operation on (0, 1) (denoted ⊕P), then the algebraic structures ((−1, 1), ⊕CF) and ((0, 1), ⊕P) are isomorphic. In other words, parallel propagation in the certainty factor model is equivalent to the pure probability one. (3) In [44], we prove that under the assumption of conditional independence, the formula for sequential propagation of certainty factors can be derived from the definition of the certainty factor strictly according to probability theory; in other words, that formula is completely consistent with probability theory. (4) Heckerman [28] was the first to reformulate the certainty factor model so as to understand its derivation from probability theory.
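The isomorphism claim in point (2) can be checked numerically. The sketch below takes ⊕CF to be the full EMYCIN combining function and ⊕P to be the conditional-independence (odds-product) combination at a uniform prior, and uses one candidate map φ; this particular φ is an illustrative assumption (one member of the family of maps discussed in this paper), not presented as the paper's unique choice:

```python
# Numeric check that ((-1,1), cf_comb) and ((0,1), p_comb) behave
# isomorphically under a candidate map phi (phi is illustrative only).
import itertools
import math

def cf_comb(x, y):
    """EMYCIN parallel combination on (-1, 1)."""
    if x >= 0 and y >= 0:
        return x + y - x * y
    if x <= 0 and y <= 0:
        return x + y + x * y
    return (x + y) / (1 - min(abs(x), abs(y)))

def p_comb(p, q):
    """Odds-product combination on (0, 1) with a uniform (1/2) prior."""
    return p * q / (p * q + (1 - p) * (1 - q))

def phi(cf):
    """Candidate isomorphism (-1,1) -> (0,1): cf -> lam/(1+lam), where
    lam = 1/(1-cf) for cf >= 0 and lam = 1+cf for cf < 0."""
    lam = 1 / (1 - cf) if cf >= 0 else 1 + cf
    return lam / (1 + lam)

# phi(x (+)CF y) should equal phi(x) (+)P phi(y), for all sign patterns.
for x, y in itertools.product([-0.9, -0.4, 0.0, 0.3, 0.8], repeat=2):
    assert math.isclose(phi(cf_comb(x, y)), p_comb(phi(x), phi(y)))
```

The loop exercises positive, negative, and mixed-sign pairs, so all three branches of the EMYCIN rule are covered by the check.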


However, the relationship between the certainty factor model and probability theory is still not fully clear. In the following we make it clearer. According to probability theory, Duda et al. [10] showed the following two lemmas:

Lemma 4. If H and S are conditionally independent given E and ¬E, that is,

P(H | E ∧ S) = P(H | E),   (13)
P(H | ¬E ∧ S) = P(H | ¬E),   (14)

then

P(H | S) = P(E | S)(P(H | E) − P(H | ¬E)) + P(H | ¬E).   (15)

Formula (15) is in fact the sequential propagation formula of probabilities.

Lemma 5. If S1 and S2 are conditionally independent given H and ¬H, that is,

P(S1 ∧ S2 | H) = P(S1 | H)P(S2 | H),   (16)
P(S1 ∧ S2 | ¬H) = P(S1 | ¬H)P(S2 | ¬H),   (17)

then formula (6) holds. Formula (6) is in fact the parallel propagation formula of probabilities.

The following theorem reveals that, under the framework of probability theory, if the original definition of the certainty factor is kept, then the original version of the sequential propagation formula can also be kept, but the parallel propagation formula must be changed a little.

Theorem 7. Let the certainty factor of any proposition B given an event A, denoted CF(B, A), be given by

CF(B, A) =
  (P(B | A) − P(B)) / (1 − P(B))   if P(B | A) > P(B),
  0                                if P(B | A) = P(B),   (18)
  (P(B | A) − P(B)) / P(B)         if P(B | A) < P(B).

(1) Sequential propagation formula. Under conditions (13) and (14), the following formula holds:

CF(H, S) =
  CF(H, E) · CF(E, S)     if CF(E, S) ≥ 0,   (19)
  −CF(H, ¬E) · CF(E, S)   if CF(E, S) < 0.

(2) Parallel propagation formula. Under conditions (16) and (17), the following formula holds:

CF(H, S1 ∧ S2) =
  CF(H, S1) + CF(H, S2) − CF(H, S1)CF(H, S2)   if CF(H, S1) > 0, CF(H, S2) > 0,
  CF(H, S1) + CF(H, S2) + CF(H, S1)CF(H, S2)   if CF(H, S1) ≤ 0, CF(H, S2) ≤ 0,
  (P(H | S1 ∧ S2) − P(H)) / (1 − P(H))         if P(H | S1 ∧ S2) ≥ P(H),   (20)
  (P(H | S1 ∧ S2) − P(H)) / P(H)               if P(H | S1 ∧ S2) < P(H),

where, in the last two (mixed-sign) cases, P(H | S1 ∧ S2) is calculated by formula (6), and the parameters in (6) are given by

P(H | S1′) = CF(H, S1′) + (1 − CF(H, S1′))P(H),   (21)
P(H | S2′) = (CF(H, S2′) + 1)P(H),   (22)

where CF(H, S1′), CF(H, S2′) ∈ {CF(H, S1), CF(H, S2)} and CF(H, S1′) > 0 > CF(H, S2′).

Proof. (1) Sequential propagation formula. In [44], we prove that under the assumption of conditional independence, i.e., (13) and (14), the sequential propagation formula (19) of certainty factors can be derived strictly from the definition (18) of the certainty factor and the sequential propagation formula (15) of probabilities. (2) Parallel propagation formula. Adams [1] and Schocken [73] prove that under conditions (16) and (17), the first two branches of the parallel propagation formula (20) of certainty factors can be derived strictly from the definition (18) of the certainty factor and the parallel propagation formula (6) of probabilities. The last two branches are straightforward from the definition (18) of the certainty factor and the parallel propagation formula (6) of probabilities. □

Notice that the developers of the EMYCIN model were unable to derive a sequential propagation formula from the definition of the certainty factor according to probability theory. Rather, formula (19) was proposed merely as an approximate formula for certainty factors; its rationality was justified by demonstrating that it satisfies certain intuitive properties consistent with the basic notion of certainty factors.
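The sequential result can be exercised numerically against a small joint probability model satisfying (13) and (14). The sketch below is illustrative only (the numbers are arbitrary); it implements definition (18) and the sequential formula (19), writing the negative-evidence branch as −CF(H, ¬E) · CF(E, S), which is the form used here (equivalently CF(H, ¬E) · CF(¬E, S), since CF(¬E, S) = −CF(E, S) under definition (18)):

```python
# Numeric check of the sequential propagation formula (19):
#   CF(H,S) =  CF(H,E)  * CF(E,S)   if CF(E,S) >= 0,
#           = -CF(H,~E) * CF(E,S)   if CF(E,S) <  0,
# with CF given by definition (18).
import math

def cf(posterior, prior):
    """Certainty factor of definition (18)."""
    if posterior > prior:
        return (posterior - prior) / (1 - prior)
    if posterior < prior:
        return (posterior - prior) / prior
    return 0.0

def cf_sequential(cf_he, cf_hne, cf_es):
    """Sequential propagation formula (19)."""
    return cf_he * cf_es if cf_es >= 0 else -cf_hne * cf_es

# Arbitrary consistent joint model: prior of E and conditionals of H.
p_e = 0.3        # P(E)
p_h_e = 0.9      # P(H|E)
p_h_ne = 0.2     # P(H|~E)
p_h = p_e * p_h_e + (1 - p_e) * p_h_ne        # P(H) = 0.41

for p_e_s in (0.8, 0.1):                      # P(E|S): one case per sign
    p_h_s = p_e_s * p_h_e + (1 - p_e_s) * p_h_ne   # formula (15)
    direct = cf(p_h_s, p_h)                   # CF(H,S) computed directly
    via_19 = cf_sequential(cf(p_h_e, p_h), cf(p_h_ne, p_h), cf(p_e_s, p_e))
    assert math.isclose(direct, via_19)
```

Both the evidence-for (CF(E, S) ≥ 0) and evidence-against (CF(E, S) < 0) branches are exercised, and each agrees with the certainty factor computed directly from the joint model via (15).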


The following theorem describes the reformulation of Heckerman under the framework of probability theory. Notice that in this reformulation nothing in the original version of the certainty factor model is kept.

Theorem 8. Let the certainty factor of any proposition B given an event A, denoted CF(B, A), be given by

CF(B, A) = (P(B | A) − P(B)) / (P(B)(1 − P(B | A)) + P(B | A)(1 − P(B))).   (23)

(1) Sequential propagation formula. Under conditions (13) and (14), the following formula holds:

CF(H, S) = −2 CF(H, E) CF(H, ¬E) CF(E, S) / (CF(H, E) − CF(H, ¬E) − CF(E, S)(CF(H, E) + CF(H, ¬E))).   (24)

(2) Parallel propagation formula. Under conditions (16) and (17), the following formula holds:

CF(H, S1 ∧ S2) = (CF(H, S1) + CF(H, S2)) / (1 + CF(H, S1)CF(H, S2)).   (25)

Proof. (1) Sequential propagation formula. In [28], Heckerman proves that under the assumption of conditional independence, i.e., (13) and (14), the sequential propagation formula (24) of certainty factors can be derived strictly from the definition (23) of the certainty factor and the sequential propagation formula (15) of probabilities. (2) Parallel propagation formula. In [28], Heckerman proves that under conditions (16) and (17), the parallel propagation formula (25) of certainty factors can be derived strictly from the definition (23) of the certainty factor and the parallel propagation formula (6) of probabilities. □

The following theorem means that, under the framework of probability theory, if we want to keep the original version of the parallel propagation formula in the certainty factor model, the definition of the certainty factor must be changed to the isomorphism (11) from the certainty factor group to the subjective Bayesian group.

Theorem 9. Let the certainty factor of any proposition B given an event A, denoted CF(B, A), be given by

CF(B, A) =
  1 − ((1 − P(B | A))P(B) / ((1 − P(B))P(B | A)))^(1/α)   if 1 ≥ P(B | A) > P(B),   (26)
  ((1 − P(B))P(B | A) / ((1 − P(B | A))P(B)))^(1/α) − 1   if 0 ≤ P(B | A) ≤ P(B).

Then, under conditions (16) and (17), formula (1) holds.

Proof. It is straightforward from Theorem 6. □

Whether, under the assumption of conditional independence, i.e., (13) and (14), a sequential propagation formula of certainty factors can be derived strictly from the definition (26) of the certainty factor and the sequential propagation formula (15) of probabilities is more problematic, and is the subject of continuing work. The basic idea might be similar to that behind our work [44]. The reader is encouraged to try.
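Both reformulations can be exercised numerically. The sketch below checks Heckerman's formulas (23)-(25) and the α-parameterised definition (26) against direct probabilistic computation. It assumes that formula (6) is the usual conditional-independence combination written in odds form and that formula (1) is the original EMYCIN combining function; the numbers are arbitrary:

```python
# Numeric checks of Theorem 8 (Heckerman's CF) and Theorem 9 (alpha-family CF).
import math

def odds(p):
    return p / (1 - p)

def cf_heck(posterior, prior):
    """Heckerman's certainty factor, definition (23)."""
    return (posterior - prior) / (
        prior * (1 - posterior) + posterior * (1 - prior))

def heck_seq(c_he, c_hne, c_es):
    """Sequential propagation formula (24)."""
    return -2 * c_he * c_hne * c_es / (
        c_he - c_hne - c_es * (c_he + c_hne))

def heck_par(c1, c2):
    """Parallel propagation formula (25)."""
    return (c1 + c2) / (1 + c1 * c2)

def cf_alpha(posterior, prior, alpha):
    """Alpha-parameterised certainty factor, definition (26)."""
    lam = odds(posterior) / odds(prior)       # odds ratio
    if posterior > prior:
        return 1 - (1 / lam) ** (1 / alpha)
    return lam ** (1 / alpha) - 1

def emycin(x, y):
    """Original EMYCIN parallel combination (assumed to be formula (1))."""
    if x >= 0 and y >= 0:
        return x + y - x * y
    if x <= 0 and y <= 0:
        return x + y + x * y
    return (x + y) / (1 - min(abs(x), abs(y)))

# --- Check (24): model with P(E)=0.5, P(H|E)=0.8, P(H|~E)=0.4, P(E|S)=0.9 ---
q, x, y, e = 0.5, 0.8, 0.4, 0.9
h = q * x + (1 - q) * y                       # P(H)
p = e * x + (1 - e) * y                       # P(H|S), by formula (15)
assert math.isclose(
    heck_seq(cf_heck(x, h), cf_heck(y, h), cf_heck(e, q)), cf_heck(p, h))

# --- Checks of (25) and (26): combine P(H|S1), P(H|S2) by odds product ---
h, p1, p2 = 0.41, 0.76, 0.27
o12 = odds(p1) * odds(p2) / odds(h)           # odds-form combination (6)
p12 = o12 / (1 + o12)                         # P(H|S1 and S2) under (16),(17)
assert math.isclose(heck_par(cf_heck(p1, h), cf_heck(p2, h)), cf_heck(p12, h))
for alpha in (0.3, 1.0, 2.0):                 # (26) makes (1) exact for any alpha
    c1, c2 = cf_alpha(p1, h, alpha), cf_alpha(p2, h, alpha)
    assert math.isclose(emycin(c1, c2), cf_alpha(p12, h, alpha))
```

Note that in the last loop c1 > 0 > c2, so the check exercises the mixed-sign branch of the EMYCIN rule, the case the α-family of definitions is designed to make exactly consistent with probability theory.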

6. Summary

The expert system is one way to give an agent some intelligence. Actually, the process of action selection of an agent can be based on expert system reasoning. Thus, since various uncertain factors exist in a process of action selection, uncertain reasoning is necessary for intelligent agents. There are many heterogeneous uncertain reasoning models, and so different agents in a multi-agent system could employ different expert systems with heterogeneous uncertain reasoning models. On the other hand, information sharing and exchanging between different agents in a multi-agent system, or between different expert systems in a distributed expert system, is unavoidable. Therefore, the transformation among uncertain reasoning models is the foundation not only for a distributed heterogeneous expert system but also for a multiple heterogeneous intelligent agent system.

In this paper, we construct a family of isomorphic transformations which can exactly transform the uncertainties of a proposition between two well-known uncertain reasoning models, the certainty factor model and the subjective Bayesian method, under the condition that the prior probability of a proposition can take any value on [0, 1]. This solves one of the key problems in the area of distributed expert systems or multiple heterogeneous intelligent agent systems. Besides, among the family of isomorphic transformation maps, different ones can handle different degrees to which domain experts are positive or negative when performing a transformation task between these two heterogeneous uncertain reasoning models.

In the past, the issue of information/knowledge sharing and the issue of heterogeneities in multi-agent systems have been addressed in many studies. However, the issue of information sharing between heterogeneous uncertain reasoning models in a multi-agent environment has been addressed in few studies.

The study presented in this paper also makes clear the relationship between the certainty factor model and probability theory. That is, under the framework of probability theory, if the original version of the definition of certainty factors is kept, then the original version of the sequential propagation formula can also be kept, but the parallel propagation formula must be revised a little; if the original version of the parallel propagation formula is kept, then the definition of certainty factors as well as the sequential propagation formula must be changed.

The issues which are worth studying further include information sharing among other heterogeneous uncertain, non-monotonic and fuzzy reasoning models, reasoning maintenance among such models, and the implementation of a multi-agent system based on these theories. In addition, under the assumption of conditional independence, i.e., (13) and (14), whether the sequential propagation formula of certainty factors can be derived strictly from the definition (26) of the certainty factor and the sequential propagation formula (15) of probabilities remains open. The reader is encouraged to try.

Acknowledgements

The authors would like to thank the IJAR editor in chief, P.P. Bonissone, for his valuable comments and patience. Because of these, the quality of the paper has been improved a lot.

References

[1] J.B. Adams, Probabilistic reasoning and certainty factor, in: B.G. Buchanan, E.H. Shortliffe (Eds.), Rule-Based Expert Systems, Addison-Wesley, Reading, MA, 1984, pp. 263–271.
[2] J. Barwise, J. Etchemendy, A computational architecture for heterogeneous reasoning, in: Proceedings of the Seventh Conference on Theoretical Aspects of Rationality and Knowledge, 1998, pp. 1–11.
[3] A. Bond, L. Gasser (Eds.), Readings in Distributed Artificial Intelligence, Morgan Kaufmann, Los Altos, CA, 1988.
[4] W. Brenner, R. Zarnekow, H. Wittig, Intelligent Software Agents: Foundation and Applications, Springer, Berlin, 1998.


[5] S.M. Brown, E. Santos Jr., S.B. Banks, A dynamic Bayesian intelligent interface agent, in: Proceedings of the Sixth International Interfaces Conference, 1997, pp. 118–120.
[6] S.M. Brown, E. Santos Jr., S.B. Banks, Utility theory-based user models for intelligent interface agents, in: R.E. Merler, E. Neufeld (Eds.), Advances in Artificial Intelligence, Lecture Notes in Artificial Intelligence, vol. 1418, Springer, Berlin, 1998, pp. 378–392.
[7] D. Cockburn, N.R. Jennings, ARCHON: a distributed artificial intelligence system for industrial applications, in: G.P. O'Hare, N.R. Jennings (Eds.), Foundations of Distributed Artificial Intelligence, Sixth-Generation Computer Technology Series, Wiley, New York, 1996, pp. 319–344 (Chapter 12).
[8] D. Dubois, H. Prade, Possibility theory as a basis for qualitative decision theory, in: Proceedings of the 14th International Joint Conference on Artificial Intelligence, 1995, pp. 1924–1930.
[9] D. Dubois, H. Prade, Qualitative possibility theory and its applications to constraint satisfaction and decision under uncertainty, International Journal of Intelligent Systems 14 (1999) 45–61.
[10] R.O. Duda, P.E. Hart, N.J. Nilsson, Subjective Bayesian methods for rule-based inference systems, in: AFIPS Conference Proceedings, vol. 45, AFIPS Press, 1976, pp. 1075–1082.
[11] R.O. Duda, P.E. Hart, N.J. Nilsson, R. Reboh, J. Slocum, G. Sutherland, Development of a computer-based consultant for mineral exploration, SRI Report, Stanford Research Institute, Menlo Park, CA, October 1977.
[12] E.H. Durfee, V.R. Lesser, Negotiating task decomposition and allocation using partial global planning, Distributed Artificial Intelligence 2 (1989) 229–243.
[13] M. El-Nasr, J. Yen, T.R. Ioerger, FLAME – Fuzzy logic adaptive model of emotions, Autonomous Agents and Multi-Agent Systems 3 (2000) 217–257.
[14] J. Ferber, Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence, Addison-Wesley, Reading, MA, 1999.
[15] L. Foulloy, S. Galichet, Fuzzy sensors for fuzzy control, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 2 (1) (1994) 55–66.
[16] E. Friedman-Hill, Java Expert System Shell (Jess), Technical Report #SAND98-8206, Sandia National Labs, Livermore, CA, 1998.
[17] P. Garcia-Calves, E. Gimenez-Funes, L. Godo, J.A. Rodríguez-Aguilar, Possibilistic-based design of bidding strategies in electronic auctions, in: Proceedings of the 13th European Conference on Artificial Intelligence, 1998, pp. 575–579.
[18] M.R. Genesereth, Knowledge interchange format, in: Proceedings of the Conference on the Principles of Knowledge Representation and Reasoning, 1991, pp. 599–600.
[19] M.R. Genesereth, R.E. Fikes et al., Knowledge interchange format, Version 3.1, Reference Manual, Technical Report Logic-92-1, Computer Science Department, Stanford University, 1992.
[20] E. Gimenez-Funes, L. Godo, J.A. Rodríguez-Aguilar, P. Garcia-Calves, Designing bidding strategies for trading agents in electronic auctions, in: Proceedings of the Third International Conference on Multi-Agent Systems, 1998, pp. 136–143.
[21] N. Glaser, P. Morignot, Societies of autonomous agents and their reorganisation, in: Tschacher, Dauwalder (Eds.), Nonlinear Systems Approaches to Cognitive Psychology and Cognitive Science – Dynamics, Synergetics, Autonomous Agents, in the book series ``Nonlinear Phenomena in the Life Sciences'', World Scientific, Singapore, 1998.
[22] R.F. Grove, A.C. Hulse, An internet-based expert system for reptile identification, in: Proceedings of the First International Conference on the Practical Application of Java, 1999, pp. 165–173.
[23] R. Grove, Design and development of knowledge-based systems on the web, in: Proceedings of the Ninth International Conference on Intelligent Systems, 2000, pp. 147–150.


X. Luo et al. / Internat. J. Approx. Reason. 27 (2001) 27–59

[24] R. Grove, Internet-based expert systems, Expert Systems: International Journal of Knowledge Engineering and Neural Networks 17 (3) (2000) 129–135.
[25] P. Hájek, Combining functions for certainty degrees in consulting systems, International Journal of Man–Machine Studies 22 (1985) 59–76.
[26] R.A. Harrington, S. Banks, E. Santos Jr., Development of an intelligent user interface for a generic expert system, in: Online Proceedings of the Seventh Midwest Artificial Intelligence and Cognitive Science Conference, 1996. Available at http://www.cs.indiana.edu/event/maics96/.
[27] R.A. Harrington, S. Banks, E. Santos Jr., Uncertainty-based reasoning for a generic expert system intelligent user interface, in: Proceedings of the Eighth IEEE International Conference on Tools with Artificial Intelligence, 1996, pp. 52–55.
[28] D. Heckerman, Probabilistic interpretations for MYCIN's certainty factors, in: L. Kanal, J. Lemmer (Eds.), Uncertainty in Artificial Intelligence, North-Holland, Amsterdam, 1986, pp. 167–196.
[29] P. Jackson, Introduction to Expert Systems, third ed., Addison-Wesley, Harlow, England, 1999.
[30] W.C. Jamison, Approaching interoperability for heterogeneous multiagent systems using high order agencies, in: P. Kandzia, M. Klusch (Eds.), Cooperative Information Agents, Lecture Notes in Artificial Intelligence, vol. 1202, Springer, Berlin, 1997, pp. 222–234.
[31] N.R. Jennings, Towards a cooperation knowledge level for collaborative problem solving, in: Proceedings of the 10th European Conference on Artificial Intelligence, 1992, pp. 224–228.
[32] N.R. Jennings, T. Wittig, ARCHON: theory and practice, in: N.M. Avouris, L. Gasser (Eds.), Distributed Artificial Intelligence: Theory and Praxis, Kluwer Academic Press, Dordrecht, 1992, pp. 179–195.
[33] N.R. Jennings, J.A. Pople, Design and implementation of ARCHON's coordination module, in: Proceedings of the Workshop on Cooperating Knowledge-Based Systems, 1993, pp. 61–82.
[34] N.R. Jennings, The ARCHON system and its applications, in: Proceedings of the Second International Working Conference on Cooperating Knowledge-Based Systems, 1994, pp. 13–29.
[35] N.R. Jennings, I. Laresgoiti, E.H. Mamdani, F. Perriolat, P. Skarek, L.Z. Varga, Using ARCHON to develop real-world DAI applications for electricity transportation management and particle accelerator control, IEEE Expert 11 (6) (1996).
[36] M. Knapik, J. Johnson, Developing Intelligent Agents for Distributed Systems: Exploring Architecture, Technologies, and Applications, McGraw-Hill, New York, 1998.
[37] S. Kraus, V.S. Subrahmanian, Multiagent reasoning with probability, time, and beliefs, International Journal of Intelligent Systems 10 (1995) 459–499.
[38] S.E. Lander, Distributed search and conflict management among heterogeneous reusable agents, Ph.D. Thesis, University of Massachusetts, Amherst, MA, May 1994.
[39] J.-J. Lee, R. McCartney, Predicting user actions using interface agents with individual user models, in: H. Nakashima, C. Zhang (Eds.), Approaches to Intelligent Agents, Lecture Notes in Artificial Intelligence, vol. 1733, Springer, Berlin, 1999, pp. 154–169.
[40] Y. Li, C. Zhang, Information fusion and decision making for utility-based agents, in: Proceedings of the Third World Multiconference on Systemics, Cybernetics and Informatics and the Fifth International Conference on Information Systems Analysis and Synthesis, 1999, pp. 377–384.
[41] Y. Li, C. Zhang, Information-based cooperation in multiple agent systems, in: N. Foo (Ed.), Advanced Topics in Artificial Intelligence, Lecture Notes in Artificial Intelligence, vol. 1747, Springer, Berlin, 1999, pp. 496–498.
[42] R. Lu, M. Ying, A model of reasoning about knowledge, Science in China (Series E) 41 (5) (1998) 527–534.
[43] X. Luo, A study of information sharing of heterogeneous reasoning models in multi-agent environment, Ph.D. Thesis, The University of New England, Australia, 1998.



[44] X. Luo, C. Zhang, Proof of the correctness of the EMYCIN sequential propagation, IEEE Transactions on Knowledge and Data Engineering 11 (2) (1999) 355–359.
[45] X. Luo, C. Zhang, H.-F. Leung, A class of isomorphic transformations for integrating EMYCIN-style and PROSPECTOR-style systems into a rule-based multi-agent system, in: H. Nakashima, C. Zhang (Eds.), Approaches to Intelligent Agents, Lecture Notes in Artificial Intelligence, vol. 1733, Springer, Berlin, 1999, pp. 211–225.
[46] X. Luo, H.-F. Leung, J.H.-M. Lee, Theory and properties of a selfish protocol for multi-agent meeting scheduling using fuzzy constraints, in: Proceedings of the 14th European Conference on Artificial Intelligence, 2000, pp. 373–377.
[47] M. Marcus, Introduction to Modern Algebra, Marcel Dekker, New York, 1978.
[48] N. Matos, C. Sierra, Evolutionary computing and negotiating agents, in: Agent-Mediated Electronic Commerce, Lecture Notes in Artificial Intelligence, vol. 1571, Springer, Berlin, 1998, pp. 126–150.
[49] J. McCarthy, Situations, actions, and causal laws, Tech. Res. Memo 2, Stanford Artificial Intelligence Project, 1963.
[50] J. McCarthy, P.J. Hayes, Some philosophical problems from the standpoint of artificial intelligence, in: B. Meltzer, D. Michie (Eds.), Machine Intelligence, vol. 4, 1969, pp. 463–502.
[51] J. McDermott, Making expert systems explicit, in: Proceedings of IFIP-86, 1986, pp. 539–544.
[52] W.V. Melle, A domain-independent system that aids in constructing knowledge-based consultation programs, Ph.D. Dissertation, Report STAN-CS-80-820, Computer Science Department, Stanford University, 1980.
[53] C. Mudgal, J. Vassileva, Bilateral negotiation with incomplete and uncertain information: a decision-theoretic approach using a model of the opponent, in: M. Klusch, L. Kerschberg (Eds.), Cooperative Information Agents IV – The Future of Information Agents in Cyberspace, Lecture Notes in Artificial Intelligence, vol. 1860, Springer, Berlin, 2000, pp. 105–118.
[54] R. Murch, T. Johnson, Intelligent Software Agents, Prentice-Hall, Englewood Cliffs, NJ, 1999.
[55] M.A. Musen, Dimensions of knowledge sharing and reuse, Computers and Biomedical Research 25 (1992) 435–467.
[56] N.J. Nilsson, Probabilistic logic, Artificial Intelligence 28 (1986) 71–87.
[57] S. Parsons, A. Saffiotti, Integrating uncertainty handling techniques in distributed artificial intelligence, in: M. Clarke, R. Kruse, S. Moral (Eds.), Symbolic and Quantitative Approaches to Reasoning and Uncertainty, Springer, Berlin, 1993, pp. 304–309.
[58] S. Parsons, P. Giorgini, On using degrees of belief in BDI agents, in: Proceedings of the International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, 1998.
[59] S. Parsons, C. Sierra, N.R. Jennings, Agents that reason and negotiate by arguing, Journal of Logic and Computation 8 (3) (1998) 261–292.
[60] S. Parsons, M. Wooldridge, Rational action in autonomous agents, in: Tutorial Notes of the 14th European Conference on Artificial Intelligence, 2000.
[61] S. Parsons, P. Giorgini, An approach to using degrees of belief in BDI agents, in: B. Bouchon-Meunier, R.R. Yager (Eds.), Information, Uncertainty, Fusion, Kluwer Academic Publishers, Dordrecht, 2000.
[62] R.S. Patil, R.E. Fikes, P.F. Patel-Schneider, D. McKay, T. Finin, T. Gruber, R. Neches, The DARPA knowledge sharing effort: progress report, in: M.N. Huhns, M.P. Singh (Eds.), Readings in Agents, Morgan Kaufmann, Los Altos, CA, 1998, pp. 243–254.
[63] C.D.C. Pereira, F. Garcia, J. Lang, R. Martin-Clouaire, Planning with graded nondeterministic actions: a possibilistic approach, International Journal of Intelligent Systems 12 (1997) 935–962.



[64] J. Pinto, A. Sernadas, C. Sernadas, P. Mateus, Non-determinism and uncertainty in the situation calculus, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 8 (2) (2000) 127–149.
[65] M.V.N. Prasad, V.R. Lesser, S.E. Lander, Learning experiments in a heterogeneous multi-agent system, in: IJCAI-95 Workshop on Adaptation and Learning in Multi-agent Systems, Montreal, Canada, 1995.
[66] A. Preece, K. Hui, A. Gray, P. Marti, T. Bench-Capon, D. Jones, Z. Cui, The KRAFT architecture for knowledge fusion and transformation, Knowledge-Based Systems 13 (2000) 113–120.
[67] A.S. Rao, M.P. Georgeff, Modelling rational agents within a BDI-architecture, in: Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning, 1991, pp. 473–484.
[68] A.S. Rao, M.P. Georgeff, BDI agents: from theory to practice, in: Proceedings of the First International Conference on Multi-Agent Systems, 1995, pp. 312–319.
[69] C. Roda, N.R. Jennings, E.H. Mamdani, ARCHON: a cooperation framework for industrial process control, in: S.M. Deen (Ed.), Proceedings of the Workshop on Cooperating Knowledge-Based Systems 1990, Springer, Berlin, 1991, pp. 95–112.
[70] S. Russell, P. Norvig, Artificial Intelligence: A Modern Approach, Prentice-Hall, Englewood Cliffs, NJ, 1994.
[71] S. Thiebaux, J. Hertzberg, W. Shoaff, M. Schneider, A stochastic model of actions and plans for anytime planning under uncertainty, International Journal of Intelligent Systems 10 (1995) 155–183.
[72] C.-Y. Tyan, P.P. Wang, D.R. Bahler, The design of an adaptive multiple agent fuzzy constraint-based controller (MAFCC) for a complex hydraulic system, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 4 (6) (1996) 537–551.
[73] S. Schocken, On the rational scope of probabilistic rule-based inference systems, in: J.F. Lemmer, L.N. Kanal (Eds.), Uncertainty in Artificial Intelligence, vol. 2, North-Holland, Amsterdam, 1988, pp. 175–189.
[74] T. Schiex, H. Fargier, G. Verfaillie, Valued constraint satisfaction problems: hard and easy problems, in: Proceedings of the 14th International Joint Conference on Artificial Intelligence, 1995, pp. 631–637.
[75] M. Schroeder, G. Wagner, Vivid agents: theory, architecture, and applications, Applied Artificial Intelligence 14 (2000) 645–675.
[76] G. Shafer, A Mathematical Theory of Evidence, Princeton University Press, Princeton, NJ, 1976.
[77] R. Shachter, Probabilistic inference and influence diagrams, Operations Research 36 (4) (1988) 589–604.
[78] E.H. Shortliffe, Computer-Based Medical Consultations: MYCIN, Elsevier, New York, 1976.
[79] Y. Shoham, What we talk about when we talk about software agents, IEEE Intelligent Systems, March/April 1999, pp. 28–31.
[80] E.H. Shortliffe, B.G. Buchanan, A model of inexact reasoning in medicine, Mathematical Biosciences 23 (1975) 351–379.
[81] K. Su, C. Zhang, X. Luo, Reasoning about reasoning properties of knowledge for multi-agents, in: Proceedings of the International Conference on Intelligent Information Processing, Beijing, China, 2000.
[82] V.S. Subrahmanian, P. Bonatti, J. Dix, T. Eiter, S. Kraus, F. Ozcan, R. Ross, Heterogeneous Agent Systems, The MIT Press, Cambridge, MA, 2000.
[83] K.P. Sycara, The many faces of agents, AI Magazine 19 (2) (1998) 11–12.
[84] G. Vossen, The CORBA specification for cooperation in heterogeneous information systems, in: P. Kandzia, M. Klusch (Eds.), Cooperative Information Agents, Lecture Notes in Artificial Intelligence, vol. 1202, Springer, Berlin, 1997, pp. 222–234.



[85] T. Wittig, N.R. Jennings, E.H. Mamdani, ARCHON – a framework for intelligent cooperation, IEE-BCS Journal of Intelligent Systems Engineering – Special Issue on Real-time Intelligent Systems in ESPRIT 3 (3) (1994) 168–179.
[86] S.K.M. Wong, C.J. Butz, Probabilistic reasoning in a distributed multi-agent environment, in: Proceedings of the Third International Conference on Multi-Agent Systems, 1998, pp. 341–348.
[87] M. Wooldridge, Intelligent agents, in: G. Weiss (Ed.), Multiagent Systems – A Modern Approach to Distributed Artificial Intelligence, MIT Press, Cambridge, MA, 1999, pp. 27–77.
[88] Y. Xiang, A probabilistic framework for cooperative multi-agent distributed interpretation and optimisation of communication, Artificial Intelligence 87 (1996) 295–342.
[89] P. Xuan, V. Lesser, Incorporating uncertainty in agent commitments, in: N.R. Jennings, Y. Lesperance (Eds.), Intelligent Agents VI – Agent Theories, Architectures, and Languages, Lecture Notes in Artificial Intelligence, vol. 1757, Springer, Berlin, 2000, pp. 57–70.
[90] C. Zhang, Cooperation under uncertainty in distributed expert systems, Artificial Intelligence 56 (1992) 21–69.
[91] C. Zhang, Heterogeneous transformation of uncertainties of propositions among inexact reasoning models, IEEE Transactions on Knowledge and Data Engineering 6 (3) (1994) 353–360.
[92] C. Zhang, D.A. Bell, HECODES: a framework for heterogeneous cooperative distributed expert systems, International Journal on Data and Knowledge Engineering 6 (1991) 251–273.
[93] C. Zhang, X. Luo, Isomorphic transformation of uncertainties of propositions among the EMYCIN and PROSPECTOR uncertain models, in: Proceedings of the Second International Conference on Multi-Agent Systems, AAAI Press, 1996, p. 465.
[94] C. Zhang, X. Luo, Transformation between the EMYCIN model and the Bayesian network, in: W. Wobcke, M. Pagnucco, C. Zhang (Eds.), Agents and Multi-Agent Systems – Formalisms, Methodologies and Applications, Lecture Notes in Artificial Intelligence, vol. 1441, Springer, Berlin, 1998, pp. 205–219.
[95] C. Zhang, X. Luo, Isomorphic transformations of uncertainties for incorporating EMYCIN-style and PROSPECTOR-style systems into a distributed expert system, Journal of Computer Science and Technology 14 (4) (1999) 368–392.
[96] C. Zhang, M. Orlowska, On algebraic structures of inexact reasoning models, in: G.E. Lasker et al. (Eds.), Advances in Information Systems Research, 1991, pp. 58–77.
[97] M. Zhang, Synthesis of solutions in distributed expert systems, Ph.D. Thesis, The University of New England, Australia, 1995.
[98] M. Zhang, C. Zhang, Potential cases, methodologies, and strategies of synthesis of solutions in distributed expert systems, IEEE Transactions on Knowledge and Data Engineering 11 (3) (1999) 498–503.