
Solving Conflicting Beliefs with a Distributed Belief Revision Approach

Benedita Malheiro* and Eugénio Oliveira**
LIACC
* ISEP, DEE, Rua António Bernardino de Almeida, 4200 Porto, Portugal. [email protected]
** FEUP, DEEC, Rua dos Bragas, 4099 Porto CODEX, Portugal. [email protected]

Submitted to the AAAI-99 Workshop on Agents' Conflicts

Abstract

The ability to solve conflicting beliefs is crucial for multi-agent systems where the information is dynamic, incomplete, and distributed over a group of individuals. The individual agents have partial views and, frequently, hold disparate beliefs regarding shared information. The nature of these conflicts is dynamic and requires adequate methodologies for dynamic conflict resolution. The conflict resolution methodologies proposed here are based on an intrinsically dynamic approach: distributed belief revision. The distributed belief revision approach adopted relies on the availability of distributed reason maintenance capabilities (to detect and remove the conflicting sets of beliefs from the agents' knowledge bases) and on the ability to select preferred outcomes (in order to solve the detected conflicts). Two types of conflicting beliefs are addressed in this paper: Belief/Disbelief conflicts and Disbelief conflicts. The Belief/Disbelief type of conflict results from the assignment, by different agents, of opposite belief status to the same concept/proposition. The Disbelief type of conflict is generated by the reason maintenance mechanism and occurs when previously held conclusions are dropped. The methodology for solving Belief/Disbelief conflicts is, basically, a selection process based on the assessment of the credibility of the opposing belief status. The methodology for solving Disbelief conflicts is, essentially, a search process for a consensual alternative based on a "next best" relaxation strategy. The methodologies we propose are organized in multi-procedure sequences ordered by the amount of knowledge used for solving the conflict. The different procedures within each methodology are applied sequentially until an acceptable conflict outcome is produced.

Introduction

Multi-Agent Systems (MAS) are a natural environment for the occurrence of information conflicts: frequently, different agents hold distinct perspectives regarding the same piece of information. These disparate perspectives can either be reconcilable or incompatible, falling, respectively, into the categories of positive and negative conflicts (Oliveira, Mouta and Rocha, 1993). In particular, this paper is concerned with solving negative conflicts, which occur when different agents hold incompatible views regarding shared information. Typically, conflict resolution is based on negotiation protocols that perform a distributed search through the space of possible solutions (Jennings, Parsons, Noriega and Sierra, 1998). Negotiation processes used to reach an agreement acceptable to all parties involved include: choosing between conflicting views, by comparing the reasons behind each stance and selecting the strongest view; or building a new consensual view, by searching for fully acceptable alternative foundations for the disputed information.

We elected distributed belief revision as the adequate approach for detecting conflicts (inconsistent data sets), maintaining the consistency of the agents' knowledge bases and trying to solve the detected conflicts. This view of belief revision as a truth maintenance process followed by a mechanism for selecting one (or more) preferred solutions was proposed by Dragoni, Giorgini and Puliti (1994). Although distributed reason maintenance methodologies (also called distributed truth maintenance) are extremely important for detecting and removing conflicting data sets, they do not include any conflict resolution capabilities. As a result, specific belief revision methodologies were developed to try to solve the conflicts that occur when the agents, as a result of incomplete knowledge and partial views, hold conflicting beliefs regarding shared information.

The investigation carried out focused on solving two specific kinds of negative conflicts: Belief/Disbelief conflicts, which occur when some agents believe and others disbelieve the same piece of shared information; and Disbelief conflicts, which occur when the agents detect inconsistent sets of beliefs and have, as a result, to drop previously held conclusions (reason maintenance). While in the case of Belief/Disbelief conflicts the resolution methodology is faced with the problem of choosing a belief status, in the case of Disbelief conflicts the revision process has to try to find alternative support for the previously believed conclusions.

The concept of conflict resolution addressed in this work is dynamic, as opposed to the static, one-time conflict solving activity of typical MAS. In our setting conflicts may have multiple episodes during their existence, and they only disappear when all of the involved agents believe in the proposition.

New conflict episodes are triggered by any change regarding the conflict, either because the number of agents involved has changed or because the perspectives themselves have changed. Every time a new episode of an existing conflict is detected, a re-evaluation of the conflict is automatically performed and a new outcome is produced. Dynamic conflict resolution relies on the availability of belief revision methodologies within the system in order to abandon the outcome of the previous conflict episode and to adopt the solution of the new one. Each agent is able to maintain its individual perspective as long as it remains well founded, while, simultaneously, the global view (the outcome of the most recent conflict episode) may change as new conflict episodes are detected.

The kind of MAS we aim at is composed of autonomous cooperating agents with belief revision capabilities. In particular, the agents implemented were loosely inspired by the ARCHON architecture (Wittig, 1992), and are structured in two main layers: the intelligent system layer and the cooperation layer. The intelligent system layer is a reason maintenance system built upon an Assumption-based Truth Maintenance System (ATMS) (de Kleer, 1986), and contains the individual agent's domain knowledge. The cooperation layer includes a self model, which describes the agent's intelligent system (the knowledge and beliefs the agent has or is expected to have), and an acquaintances model, which lists the capabilities of the other agents that are relevant to the agent's problem solving activity (the tasks and results the other agents are expected to provide or share). The agents cooperate, performing both task sharing and result sharing, based on the information listed in the acquaintances model. The implemented distributed reason maintenance methodologies are described in (Malheiro and Oliveira, 1996), and include communication, representation, evaluation and accommodation policies for internal and external beliefs.

Furthermore, the knowledge represented in the implemented MAS is organized in domains with predefined characteristics. These properties include, among others: (i) lists of candidate values for the attributes of domain concepts, ordered by preference, representing the alternative values; and (ii) a default list of multiple perspective processing criteria, ordered by preference, specifying the policies that can be used to accommodate the domain's conflicting perspectives.
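For illustration only, the following sketch shows one possible representation of these domain properties; the Domain and Criterion names, and all field names, are our own and do not come from the paper.

```python
from dataclasses import dataclass, field
from enum import Enum


class Criterion(Enum):
    """Multiple perspective processing criteria, most to least demanding."""
    CON = 1  # consensus: every agent must believe
    MAJ = 2  # majority: most agents must believe
    ALO = 3  # at least one agent must believe


@dataclass
class Domain:
    """A knowledge domain with the predefined properties described above."""
    name: str
    # (i) candidate values per (concept, attribute), ordered by preference,
    # e.g. candidates[("valve", "state")] == ["open", "half-open", "closed"]
    candidates: dict[tuple[str, str], list[str]] = field(default_factory=dict)
    # (ii) default processing criteria, ordered by preference
    criteria: list[Criterion] = field(
        default_factory=lambda: [Criterion.CON, Criterion.MAJ, Criterion.ALO])
```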

Belief/Disbelief Conflicts

Belief/Disbelief conflicts result from the assignment, by different agents, of contradictory belief status to the same piece of information. To solve this type of conflict, and since conflict solving is regarded as a dynamic activity, the agents involved have to maintain two separate views: their individual perspectives and the socially generated conflict solution. While the responsibility for generating the individual perspectives rests solely with the agent itself, the conflict resolution outcome depends on the application of the Belief/Disbelief conflict solving methodologies. Three elementary processing criteria were implemented to accommodate the disparate views (a code sketch follows this list):

- The CONsensus (CON) criterion: the shared proposition will be (i) Believed, if all the perspectives of the different agents involved are believed, or (ii) Unbelieved, otherwise.
- The MAJority (MAJ) criterion: the shared proposition will be (i) Believed, as long as the majority of the perspectives of the different agents involved are believed, and (ii) Unbelieved, otherwise.
- The At Least One (ALO) criterion: the shared proposition will be (i) Believed, as long as at least one of the perspectives of the different agents involved is believed, and (ii) Unbelieved, otherwise.
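A minimal sketch of the three criteria, assuming each agent's perspective is reduced to a boolean belief status; the function names are ours.

```python
def con(perspectives: list[bool]) -> bool:
    """CONsensus: believed only if every agent believes."""
    return all(perspectives)


def maj(perspectives: list[bool]) -> bool:
    """MAJority: believed if more than half of the agents believe."""
    return sum(perspectives) > len(perspectives) / 2


def alo(perspectives: list[bool]) -> bool:
    """At Least One: believed if any agent believes."""
    return any(perspectives)
```

For instance, maj([True, True, False]) evaluates to True, while con([True, True, False]) evaluates to False.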

The methodology we propose for solving Belief/Disbelief conflicts is organized in two steps: first, it establishes the desired outcome of the social conflict episode, and second, it applies the processing criterion that solves the episode accordingly. During the first stage the conflict episode is analysed to establish the most reasonable outcome. The necessary knowledge is extracted from data dependent features such as the agents' reliability (Dragoni and Giorgini, 1997) and the endorsements of the beliefs' foundations (Galliers, 1992), allowing the most adequate processing criterion to be selected at runtime. This dynamic selection process is based on the assessment of the credibility values associated with each belief status. Two credibility assessment procedures were designed:

- The Foundations ORigin based Procedure (FOR), where the credibility of the conflicting perspectives is determined by the strength of the foundations' endorsements (observed foundations are stronger than assumed foundations) and by the reliability of the source agents (it can also be regarded as the last stage of an implicit argumentation); and
- The RELiability based Procedure (REL), where the credibility of the conflicting views is based on the reliability of the agents that provided the foundations.

The methodology for solving Belief/Disbelief conflicts starts by applying the FOR procedure. If the FOR procedure is able to determine the most credible belief status, the selected processing criterion is applied and the episode is solved. However, if the application of the FOR procedure results in a draw between the conflicting perspectives, the methodology proceeds with the application of the REL procedure. If the REL procedure is able to establish the most credible belief status, the selected processing criterion is applied and the episode is solved. Finally, if neither of the above procedures is able to solve the conflict episode, the methodology tries a last resort: the Global Domain Relaxation Procedure (GDR). This last procedure is domain dependent rather than data dependent, unlike the FOR and REL procedures, since it does not take into account the strengths and weaknesses of the conflicting perspectives that contribute to each belief status.

The sequential application of these procedures is ordered by the amount of knowledge used to establish the resulting belief status. It starts with the FOR procedure, which calculates the credibility of the conflicting perspectives based on the strength of the endorsements and on the reliability of the sources of the foundations. It proceeds with the REL procedure, which computes the credibility of the conflicting beliefs based solely on the reliability of the sources of the foundations. Finally, it ends with the GDR procedure, which is completely independent of the data involved in the conflict episode. There is no guarantee that this methodology, consisting of the application of this sequence of procedures, will solve the conflict episode. A sketch of the overall sequence is given below, after which we describe each procedure in turn.
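One way to read this sequencing is sketched below, assuming each procedure is a function that returns the episode's resolved belief status or None when it cannot decide; the names and the None convention are our assumptions.

```python
from typing import Callable, Optional

# A procedure receives the conflict episode and returns the resolved
# belief status (True = Believed, False = Unbelieved), or None when it
# cannot decide (a draw for FOR/REL; relaxation not allowed for GDR).
Procedure = Callable[[object], Optional[bool]]


def solve_belief_disbelief_episode(episode: object,
                                   procedures: list[Procedure]) -> Optional[bool]:
    """Apply the procedures in order (FOR, then REL, then GDR) and stop
    at the first decisive outcome; None means the episode remains
    unsolved, as the paper explicitly allows."""
    for procedure in procedures:
        outcome = procedure(episode)
        if outcome is not None:
            return outcome
    return None
```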

The Foundations ORigin based Procedure (FOR)

The individual agents' perspectives are based on their respective sets of foundations (the agents are ATMS based), which result from some process of observation, assumption or external communication. Since communicated perspectives also result from some process of observation, assumption or communication by other agents, ultimately the foundations set of any perspective is made solely of observed and assumed propositions. Galliers (1992) refers to this kind of information as endorsements. Within this procedure, the credibility granted a priori to observed and assumed foundations is, respectively, 1 and 1/2. These initial values are then weighted by the reliability of the source agent. As a result, each perspective is assigned a credibility value equal to the average of the credibility values of its foundations, each weighted by the reliability of its source agent. The credibility of any perspective is thus a value between 0 and 1: a credibility value of 1 means the perspective is fully credible (it depends solely on observed foundations), whereas a credibility value of 0 means that no credibility whatsoever is associated with the perspective. The FOR procedure calculates the credibility values attached to the Believed and Unbelieved status of the information item and chooses the basic multiple perspective processing criterion whose outcome results in the most credible belief status: (i) if the most credible belief status is Unbelieved, the CON criterion is applied to the conflict episode; (ii) if it is Believed, the MAJ criterion is applied when the majority of the perspectives are in favour of believing the proposition, and the ALO criterion is applied otherwise. A sketch of this credibility computation follows.
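A minimal sketch, under our reading that each foundation's a priori endorsement value (1 for observed, 1/2 for assumed) is multiplied by its source agent's reliability and the results are averaged; the Foundation type and its fields are assumptions, not the paper's.

```python
from dataclasses import dataclass

ENDORSEMENT_VALUE = {"observed": 1.0, "assumed": 0.5}


@dataclass
class Foundation:
    kind: str    # "observed" or "assumed"
    source: str  # name of the agent that provided the foundation


def for_credibility(foundations: list[Foundation],
                    reliability: dict[str, float]) -> float:
    """FOR credibility of one perspective: the average of the a priori
    endorsement values of its foundations, each weighted by the
    reliability of its source agent; the result lies in [0, 1]."""
    weighted = [ENDORSEMENT_VALUE[f.kind] * reliability[f.source]
                for f in foundations]
    return sum(weighted) / len(weighted)
```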

The agents' reliability is affected by the outcome of the episodes processed so far: an episode winning agent increases its reliability (its individual perspective coincides with the outcome of the episode), while an episode losing agent decreases its reliability (its individual perspective contradicts the result of the episode). At launch time the agents are given a reliability value of 1, which, during runtime, may vary between 0 and 1. A reliability value of 1 means that the information communicated by the agent has been the most credible, and a value near 0 means that the information issued by the agent has been less than credible.
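The paper states only the direction of the reliability change, not its magnitude; the sketch below assumes a simple multiplicative update, and the 0.9 and 1.1 factors are illustrative choices of ours.

```python
def update_reliability(reliability: dict[str, float],
                       winners: list[str], losers: list[str]) -> None:
    """After a conflict episode, reward the winning agents and penalise
    the losing ones, keeping every reliability value inside [0, 1].
    The update factors are assumptions; the paper only states that
    winners increase and losers decrease their reliability."""
    for agent in winners:
        reliability[agent] = min(1.0, reliability[agent] * 1.1)
    for agent in losers:
        reliability[agent] = max(0.0, reliability[agent] * 0.9)
```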

The RELiability based Procedure (REL)

Each conflicting perspective has a credibility value equal to the average of the reliability values of the agents that provided its foundations. The credibilities of the perspectives in favour of each belief status are summed, and the REL procedure chooses the basic multiple perspective processing criterion whose outcome results in the most credible belief status: (i) if the most credible belief status is Unbelieved, the CON criterion is applied to the conflict episode; (ii) if it is Believed, the MAJ criterion is applied when the majority of the perspectives are in favour of believing the proposition, and the ALO criterion is applied otherwise. The reliability of the agents is also affected by the outcome of the conflict episodes solved so far: episode winning agents increase their reliability, while episode losing agents decrease theirs. A sketch of the REL computation follows.
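A sketch under the same assumptions as before, with a perspective represented simply by the list of agents that provided its foundations (a representation of ours).

```python
def rel_perspective_credibility(sources: list[str],
                                reliability: dict[str, float]) -> float:
    """REL credibility of one perspective: the average reliability of
    the agents that provided its foundations (endorsements ignored)."""
    return sum(reliability[s] for s in sources) / len(sources)


def rel_status_credibility(perspectives: list[list[str]],
                           reliability: dict[str, float]) -> float:
    """Credibility of a belief status: the summed credibilities of all
    the perspectives in favour of that status."""
    return sum(rel_perspective_credibility(p, reliability)
               for p in perspectives)
```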

The Global Domain Relaxation Procedure (GDR)

The default domain multiple perspective processing criterion is pre-defined by the knowledge engineer according to the characteristics of the represented domain knowledge:

- The CON criterion is selected whenever consensus among the involved agents' perspectives about a belief is mandatory (e.g., only when every agent confirms that the power has been shut off will the system report that the power has been shut off);
- The MAJ criterion is selected whenever the belief in a shared proposition depends on the majority of the involved agents (e.g., only after receiving confirmation of a warning message regarding some possibly malfunctioning device from the majority of the agents will the system act);
- The ALO criterion is chosen whenever a single believed perspective is enough to make the shared proposition believed (e.g., the occurrence of a single serious alarm message is enough to trigger some system action).

The application of a less demanding multiple perspective processing criterion to the domain of a conflict may eventually solve the conflict episode. This approach is very coarse and depends on the characteristics of the domain: some domains may allow such a relaxation while others do not.

The ranking of the multiple perspective processing criteria according to their degree of demand is: first the CON criterion, then the MAJ criterion, and finally the ALO criterion. If this procedure is allowed, the default domain criterion is relaxed according to the order specified above, as sketched below.
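A sketch of the GDR relaxation step, with the criteria redeclared as an enumeration so that the block is self-contained.

```python
from enum import Enum
from typing import Optional


class Criterion(Enum):
    CON = 1  # consensus (most demanding)
    MAJ = 2  # majority
    ALO = 3  # at least one (least demanding)


RELAXATION_ORDER = [Criterion.CON, Criterion.MAJ, Criterion.ALO]


def relax_criterion(current: Criterion) -> Optional[Criterion]:
    """GDR step: return the next, less demanding criterion, or None when
    ALO, the least demanding criterion, cannot be relaxed any further."""
    index = RELAXATION_ORDER.index(current)
    if index + 1 < len(RELAXATION_ORDER):
        return RELAXATION_ORDER[index + 1]
    return None
```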

Disbelief Conflicts

In multi-agent systems with reason maintenance capabilities, the detection of inconsistent (invalid) sets of beliefs within the system's body of knowledge triggers the reason maintenance procedure. As a result, previously believed conclusions may become unbelieved. Although this activity is essential to the maintenance of well founded beliefs, the system should make an effort to keep believing in its conclusions as much as possible. To solve this type of conflict the system needs to know how to provide alternative support for the invalidated conclusions. This search for "next best" solutions is a relaxation mechanism called the Preference ORder based Procedure (POR).

The POR procedure is based on the relaxation of the values of the attributes of the foundations that support the detected Disbelief conflict, according to a predefined preference ranking. As already mentioned, each knowledge domain has, for some domain concepts, lists of ordered candidate attribute values. Each list contains the set of possible instances, ordered by preference (the first element is the best candidate, the second element is the second best, and so forth), for the attribute values of the specified domain concept, representing the alternative values. When a Disbelief conflict occurs, it means that the originally built support (containing the current instances of the attributes) became invalid, and a search for a new alternative support based on the "next best" strategy has to be started. This is achieved by looking for the "next best" instances for the foundations of the invalidated proposition which, if found, provide a new valid support for the concept.

When there are alternative instances for the foundations of concepts, the agents concerned exchange their next best candidates together with the corresponding preference order. These preference order values are weighted by the agents' reliability. In the case of foundations maintained by a single agent, the process is just a next best candidate retrieval operation. In the case of foundations maintained by groups of agents, a consensual candidate has to be found: if the gathered proposals are (i) identical, the new foundation has been found; (ii) different, the agents that proposed higher preference order candidates generate new next best proposals (the following, lower preference order candidates), until they run out of candidates. The alternative foundations found through this procedure are then assumed by the system. The credibility measure of the resulting new foundations is a function of the lowest preference order candidate used and of the involved agents' reliability. The reliability of the agents is not affected by the Disbelief conflict resolution activity. A sketch of the group consensus search is given below.
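A sketch under our reading of the procedure, assuming each candidate is identified by its preference order number (1 is best) and each agent holds the sorted list of order numbers it can still support; this representation is an assumption of ours.

```python
from typing import Optional


def find_consensus(available: dict[str, list[int]]) -> Optional[int]:
    """POR consensus search among a group of agents.

    Each agent holds the preference order numbers (1 = best) of the
    candidates it can still support, sorted ascending. While the
    proposals differ, every agent whose proposal is better ranked than
    the worst current proposal moves on to its next best candidate,
    until the proposals coincide or some agent runs out of candidates."""
    position = {agent: 0 for agent in available}
    while True:
        proposals = {agent: candidates[position[agent]]
                     for agent, candidates in available.items()}
        worst = max(proposals.values())
        if all(p == worst for p in proposals.values()):
            return worst  # consensual candidate found
        for agent, proposal in proposals.items():
            if proposal < worst:
                position[agent] += 1
                if position[agent] >= len(available[agent]):
                    return None  # an agent ran out of candidates
```

For example, find_consensus({"a": [1, 3], "b": [2, 3]}) returns 3: agent a relaxes from candidate 1, then agent b relaxes from candidate 2, and both meet at candidate 3.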

The Disbelief conflict solving methodology starts by applying the POR procedure. If the POR procedure is able to determine new foundations for the conflicting belief, the conflict episode is solved. However, if the POR procedure is unable to solve the conflict, the methodology proceeds with the application of the GDR procedure already presented. The sequential application of the POR and GDR procedures is again structured according to the amount of information used by each procedure: it starts with the POR procedure, which is based on the availability of next best candidates for the foundations, and finishes with the already presented GDR procedure. As before (see the sketch below), there is no guarantee that this methodology will solve the conflict episode.
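The Disbelief sequence mirrors the Belief/Disbelief driver sketched earlier; a minimal version, with POR and GDR passed in as functions that return new foundations or None (our convention):

```python
def solve_disbelief_episode(episode, por, gdr):
    """Try the POR procedure first and fall back to the GDR procedure;
    each is a function returning new foundations for the dropped belief,
    or None on failure. None overall means the episode stays unsolved."""
    outcome = por(episode)
    if outcome is None:
        outcome = gdr(episode)
    return outcome
```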

Related Work

The belief revision based methodologies for solving conflicting beliefs presented above were inspired by the work of several authors. As early as 1990, Sycara (1990) used belief revision together with persuasive argumentation to change the agents' utilities in order to reach agreements when conflicts were detected. Other authors, such as Galliers (1992) and Gaspar (1991), relied on belief revision models to solve contradictions. More recently, in a distributed belief revision scenario, Dragoni and Giorgini (1997) proposed the use of a belief function formalism to establish the credibility of the beliefs involved in conflicts and to choose the most credible belief.

The idea of using endorsements for choosing between foundational assumptions (determining preferred revisions) was proposed by Galliers (1992). She suggested that foundational assumptions could be endorsed according to their process of origin. The potential endorsements she identified include: (1) communicated assumptions, received either first-hand from sensors or second-hand from other agents; (2) given assumptions, resulting either from specific information widely believed or from information without any particular source; and (3) hypothetical assumptions, without any evidence other than a possible grounding for a belief under consideration. Competing sets of assumptions can then be compared by applying simple heuristics such as "belief states founded upon first-hand evidence are stronger than those founded on any other combination of assumptions", and so forth.

In Gaspar's (1991) model for belief revision the agents determine the preferred belief revisions according to a belief change function. The proposed belief change function is based on a set of basic principles and heuristic criteria that allow the agents to establish preferences between beliefs when a contradiction is detected.

The basic principles are the sincerity of the sending agent, the confidence of the receiving agent and the credulity of the receiving agent. The criteria proposed to order the beliefs include:

- Highest credibility of all the sources of belief support: the formulas are ordered according to the credibility of their sources. The sources of support of a formula are the agents that believe it, and the support value an agent grants to a formula corresponds to the credibility the agent has regarding the topic of the formula;
- Recency: belief priority is given to the last formula received; and
- Arrogance: the formulas are ordered according to the support provided by the agent itself. The support values are derived from the credibility the agent assigns to the respective topics of the formulas.

Dragoni and Giorgini (1997) also refer to the source of a belief as an important feature for distributed belief revision, and propose a belief function formalism to establish which of the conflicting beliefs is the most credible and which source of information is the most reliable. The belief function formalism accepts as input a list of couples plus the degree of reliability of each source (in the range [0,1]), and outputs the degree of credibility (also in the range [0,1]) of any subset of the represented pieces of information. The proposed belief function formalism transforms an array of reliability values, representing the probability that the given sources are reliable, into an array of credibility values.

Finally, the FOR and REL procedures, used to solve Belief/Disbelief conflicts, can be regarded as the final stage of an implicit argumentation based negotiation process (for definitions consult the respective proposals of Sycara (1990), Jennings, Parsons, Noriega and Sierra (1998) or Vreeswijk (1997)). When beliefs are exchanged they are sent together with their sets of foundations. These sets of foundations constitute the full list of arguments in favour of the communicated belief. The foundations also include the rules (if any) used to infer the communicated belief, as suggested by Parsons, Sierra and Jennings (1998). When a Belief/Disbelief conflict occurs, the application of the FOR or the REL procedure is a way of establishing which is the most credible set of foundations, that is, which is the strongest argument.

Conclusion

The concept of dynamic conflict resolution addressed in this paper classifies a conflict between beliefs as a multiple episode occurrence, which terminates only when all of the conflicting agents believe in the shared information. New episodes result from changes detected in the circumstances of the conflicts, either because the number of agents involved has changed or because the individual agent perspectives have changed, which results in the re-evaluation of the conflicts.

The re-assessment of the conflicting perspectives includes dropping the result of the previous episode and generating the new episode outcome. The methodologies we have designed for the resolution of the identified types of negative conflicts include:

- a methodology for solving Belief/Disbelief conflicts: a selection process based on the assessment of the credibility values of the opposing belief status; and
- a methodology for solving Disbelief conflicts: a search process for a unique consensual alternative based on a "next best candidate" strategy.

These methodologies rely on the availability of belief revision capabilities, since dynamic conflict resolution requires the MAS to abandon previous conclusions in order to adopt new ones. Although the conflicts addressed here may be considered specific to the studied framework, the methodologies developed are general and can also be applied to broader kinds of conflicts. Belief/Disbelief conflicts represent the negative type of conflict where a reasonable choice between two opposite results has to be made, while the Disbelief conflict is a negative type of conflict (resulting from the detection of a set of invalid beliefs) whose solution lies in the search for an alternative consensus. The search for a consensus is well suited to the general type of negative conflicts, especially when more than two incompatible perspectives are in conflict: the involved agents have to try to find alternative support for a joint belief. The implemented methodologies try to solve the detected conflicts but cannot guarantee, beforehand, whether their effort will be successful.

References

de Kleer, J. 1986. An Assumption-Based Truth Maintenance System. Artificial Intelligence 28(2):127-162.

Dragoni, A. F. and Giorgini, P. 1997. Belief Revision through the Belief Function Formalism in a Multi-Agent Environment. In Proceedings of the Third International Workshop on Agent Theories, Architectures and Languages, LNAI 1193, Springer-Verlag.

Dragoni, A. F., Giorgini, P. and Puliti, P. 1994. Distributed Belief Revision versus Distributed Truth Maintenance. In Proceedings of the Sixth IEEE Conference on Tools with Artificial Intelligence, IEEE Computer Press.

Galliers, R. 1992. Autonomous Belief Revision and Communication. In Gärdenfors, P. ed., Belief Revision, Cambridge Tracts in Theoretical Computer Science 29, Cambridge University Press.

Gaspar, G. 1991. Communication and Belief Changes in a Society of Agents: Towards a Formal Model of an Autonomous Agent. In Demazeau, Y. and Müller, J. P. eds., Decentralized A.I., Vol. 2, North-Holland Elsevier Science Publishers.

Jennings, N. R., Parsons, S., Noriega, P. and Sierra, C. 1998. On Argumentation-Based Negotiation. In Proceedings of the International Workshop on Multi-Agent Systems, Boston, USA.

Malheiro, B. and Oliveira, E. 1996. Consistency and Context Management in a Multi-Agent Belief Revision Testbed. In Wooldridge, M., Müller, J. P. and Tambe, M. eds., Agent Theories, Architectures and Languages, Lecture Notes in AI, Vol. 1037, Springer-Verlag.

Oliveira, E., Mouta, F. and Rocha, A. P. 1993. Negotiation and Conflict Resolution within a Community of Cooperative Agents. In Proceedings of the International Symposium on Autonomous Decentralized Systems, Kawasaki, Japan.

Parsons, S., Sierra, C. and Jennings, N. R. 1998. Agents that Reason and Negotiate by Arguing. Journal of Logic and Computation 8(3):261-292.

Sycara, K. 1990. Persuasive Argumentation in Negotiation. Theory and Decision 28:203-242.

Vreeswijk, A. W. 1997. Abstract Argumentation Systems. Artificial Intelligence 90(1-2):225-279.

Wittig, T. ed. 1992. ARCHON: An Architecture for Multi-Agent Systems. Ellis Horwood.
