Artificial Intelligence Review 17: 169–222, 2002. © 2002 Kluwer Academic Publishers. Printed in the Netherlands.
Explanation and Argumentation Capabilities: Towards the Creation of More Persuasive Agents

B. MOULIN (2,3), H. IRANDOUST (2,3), M. BÉLANGER (1) & G. DESBORDES (2)

(1) Defence Research Establishment Valcartier, Department of National Defence, Canada; (2) Computer Science Department, Laval University, Ste Foy, QC G1K 7P4, Canada; (3) Research Center on Geomatics, Casault Pavilion, Laval University (E-mail: bernard.moulin, [email protected], [email protected])

Abstract. During the past two decades many research teams have worked on the enhancement of the explanation capabilities of knowledge-based systems and decision support systems. During the same period, other researchers have worked on the development of argumentative techniques for software systems. We think that it would be interesting for the researchers belonging to these different communities to share their experiences and to develop systems that take advantage of the advances gained in each domain. We start by reviewing the evolution of explanation systems from the simple reasoning traces associated with early expert systems to recent research on interactive and collaborative explanations. We then discuss the characteristics of critiquing systems that test the credibility of the user’s solution. The rest of the paper deals with the different application domains that use argumentative techniques. First, we discuss how argumentative reasoning can be captured by a general structure in which a given claim or conclusion is inferred from a set of data and how this argument structure relates to pragmatic knowledge, explanation production and practical reasoning. We discuss the role of argument in defeasible reasoning and present some works in the new field of computer-mediated defeasible argumentation. We review different application domains such as computer-mediated communication, design rationale, crisis management and knowledge management, in which argumentation support tools are used. We describe models in which arguments are associated with mental attitudes such as goals, plans and beliefs. We present recent advances in the application of argumentative techniques to multiagent systems. Finally, we propose research perspectives for the integration of explanation and argumentation capabilities in knowledge-based systems and make suggestions for enhancing the argumentation and persuasion capabilities of software agents.
1. Introduction Persuasion (Karlins and Abelson 1970; O’Keefe 1990) is a skill that human beings have exercised ever since they first had to interact with one another in order to make their partners perform certain actions or collaborate in various activities. In human societies, persuasion has been developed into elaborate forms designed to influence people in a variety of ways. Social influence is remarkably diverse and can be observed in many established disciplines and
professional activities including law, politics, advertising, public relations, fundraising, communications and sales, to name a few. A large body of research in social psychology deals with the theme of persuasion. The Balance Theory (Heider 1946) states that when tensions arise between people or in a person, they attempt to reduce these tensions through self-persuasion or try to persuade others. From the perspective of Social Judgement Theory (Sherif and Hovland 1961), attitude change is mediated by judgemental processes: a person’s attitude is affected by the judgements she makes about whatever is being advocated. The judgemental process is viewed as composed of two steps: 1) upon reception of a message, an individual immediately evaluates where the message falls in relation to her own position; 2) the individual adjusts her particular attitude either toward or away from the received message (O’Keefe 1990). The Source Credibility Theory (Hovland et al. 1953) states that people are more likely to be persuaded when the source presents itself as being credible. Experiments tend to confirm that the credibility of the persuader increases the persuasive force of an argument (Bell et al. 1984). When it comes to computer systems, it is not common to think about programs having persuasive capabilities because usually people view computer programs as mere tools that they can use to perform tasks in an automatic way. However, close examination of research and development in the domains of knowledge-based systems, decision support systems and software agents tends to change this perspective. There are various application domains in which a computer program (we will use the term “artificial agent” or simply “agent”) must be able to make another artificial or human agent adopt a mental attitude such as “accept a piece of information or advice” or “adopt or abandon a goal”. During the past twenty years, knowledge-based systems (KBS) have been developed in various application domains to perform different tasks (such as control, diagnosis, advice giving, configuration and design) in administrative, industrial, commercial and military sectors. Because of the need to improve the decision making process and the quality of decisions, knowledge-based decision support systems (Holsapple and Whinston 1996) have increased in importance in companies and governmental agencies during the past decade. Decision makers deal with complex tasks such as analysis of complex situations, elaboration of action plans, and evaluation and selection of courses of actions. The incomplete and uncertain character of the information at hand and, in some cases, the instability of the situation and its surrounding environment are some of the factors that make these tasks so complex. Although several decision support systems have been developed and deployed, they are usually not used efficiently by decision makers because of a lack of confi-
dence in the recommendations provided by these systems. First, given that she will be held responsible for her decision, the user will tend to discard recommendations that she does not fully understand (Hollnagel 1987). But the lack of confidence can also be explained by the fact that a system’s recommendation, although technically credible, may be unacceptable for the user because different from the possible alternatives that she had foreseen (Guida et al. 1997). According to (Jiang et al. 2000) the user’s reluctance is due to the fact that the system’s recommendations are based on a decision making process which is different from that of human decision makers. These are all important obstacles to the acceptance and practical use of decision support systems. Hence, a system must be able to convince its user that its recommendation is relevant, justified and useful. To do so, the system has to explain how it reached certain conclusions and support those conclusions with sound arguments. Explanations provide insight into the system’s knowledge and capabilities. It has been shown that users are more inclined to adopt the system’s recommendations when they know how the system reached its conclusions. Ye (1995) carried out an experimental investigation in order to assess the value of different explanation types for auditors using an expert system to audit the accounts of a company. The results of this study showed that explanation facilities can have a positive impact on user acceptance. Ye indicates: “Having had the opportunity to learn about the system’s reasoning processes through the explanations, auditors appeared more convinced about the soundness of the system’s conclusions, as demonstrated by their increased belief in those conclusions”. Alvarado (1990) who developed a sophisticated model of argument comprehension for a system capable of analyzing editorial texts, claims that computers must also be able to argue (Alvarado 1990: 1): “As computer systems become more widely used to aid decision making and give expert advice, they should exhibit the same cognitive skills possessed by their human expert-counterparts. That is, computers should be able not only to evaluate given situations and present their beliefs on possible courses of actions, but also to justify their beliefs, understand opposing beliefs, and argue persuasively against them. We should not accept advice from human experts who cannot explain or defend their own points of view. Similarly, we should not accept advice from computer systems that lack these abilities”. From a theoretical point of view, explanation and argumentation are two notions that are relatively close and difficult to distinguish (Thomas 1981). For Hughes (1992: 76): “The purpose of an explanation is to show why and how some phenomenon occurred or some event happened; the purpose of an argument is to show that some view or statement is correct or true. Explana-
tions are appropriate when the event in question is taken for granted, and we are seeking to understand why it occurred. Arguments are appropriate when we want to show that something is true, usually when there is some possibility of disagreement about its correctness.” Walton (1996) indicates: “We argue that both explanation and argument contain reasoning, but that the key to the distinction lies in how that reasoning is used for a purpose. The purpose of an argument is to settle an open issue with another arguer with whom one is engaged in dialogue. Basically, the goal of an argument is to use reasoning to get this partner in dialogue to become committed to a proposition to which he was not committed at the beginning of the dialogue. The purpose of an explanation is to take something unfamiliar to this co-participant in dialogue and make it make sense to him by relating it to something with which he is familiar (or, at least, that makes some sense to him already). This is really the crux of the difference between argument and explanation, and the key to identifying each of them in a given case of discourse” (Walton 1996: 30). Explanation and argumentation are equally used in group collaborative problem solving in which participants exchange information and different point of views in order to achieve a common goal: the resolution of a problem. Group participants discuss a situation at hand, explain certain issues, contribute different opinions and present various propositions that may contradict other participants’ views. Participants have to provide sound arguments to show why a particular proposition is supported and possibly why competing hypotheses must be disregarded. Hence, certain participants have to be persuaded to change some of their mental attitudes (beliefs, goals) when they face contributions that contradict their previous beliefs or their understanding of the situation. If a person judges that the arguments presented in favor of new mental attitudes are stronger than the arguments supporting her previous contention, there are chances that she will adopt the new mental attitudes. In that case, persuasion equals to the participant’s realization of the strength of certain arguments over her previous opinion.1 Artificial agents should also be equipped with explanation and argumentation capabilities in order to be able to convince their users of the validity of their recommendations. As we will see, artificial agents can use argumentative techniques to negotiate solutions and make other artificial agents change their mental attitudes (beliefs, goals). Such techniques may also enable artificial agents to report to their users about the negotiation process in terms that are similar to human argumentation activities. Currently, there exists no artificial agent or knowledge-based system that possesses both argumentation and explanation capabilities. We will try to show the interest of this new research direction by reviewing the evolution of research domains related to
explanation and argumentation and by highlighting the elements that show a possible convergence of these domains. We start by reviewing the evolution of explanation systems from the simple reasoning traces associated with early expert systems to recent research on interactive and collaborative explanations (Section 2). We then discuss the characteristics of critiquing systems that test the credibility of the user’s solution (Section 3). The rest of the paper deals with the different application domains that use argumentative techniques. First, we discuss how argumentative reasoning can be captured by a general structure in which a given claim or conclusion is inferred from a set of data and how this argument structure relates to pragmatic knowledge, explanation production and practical reasoning (Section 4). We discuss the role of argument in defeasible reasoning and present some works in the emergent field of computer-mediated defeasible argumentation (Section 5). We review different application domains such as computer-mediated communication, design rationale, crisis management and knowledge management, in which argument support tools are used (Section 6). We describe models in which arguments are associated to mental attitudes such as goals, plans and beliefs (Section 7) before presenting recent advances in the application of argumentative techniques to multi-agent systems (Section 8). Finally, we propose research perspectives for the integration of explanation and argumentation capabilities in knowledge-based systems and make suggestions for enhancing the argumentation and persuasion capabilities of software agents (Section 9).
2. Knowledge-Based System Explanations The explanation facilities of early expert systems can be thought of as the first attempts to make users adopt new mental attitudes towards the system’s recommendations. Since the computer-based explanations of MYCIN (Shortliffe 1976), a medical diagnosis system, there is common agreement that knowledge-based systems should provide explanations to achieve user acceptance. Explanation facilities have been designed to teach, to clarify the system’s intentions or to convince the user of the relevance of the system’s results (Hayes and Reddy 1983). In all cases, from a design perspective, an explanation has to be planned and then transmitted in an appropriate way to the user. An explanation is an object to be designed and a communicative goal to be achieved (Lemaire 1992). Thus, providing explanations covers two closely related problems, one which concerns the representation of the knowledge needed to support explanations and one which concerns the techniques relative to explana-
tion production (Swartout and Moore 1993). In the next three sections, we briefly review these two themes which have characterized the development of knowledge-based systems’ explanation facilities and try to show the evolution of explanation systems towards collaborative models. 2.1. Knowledge for explanation Reasoning systems can provide three types of explanation that are trace explanations, strategic explanations and deep explanations (Chandrasekaran et al. 1989; Southwick 1991). This classification is based on Clancey’s (1983a) characterization of the epistemological roles played by knowledge in rule-based expert systems, namely structure, strategy and support. Structural knowledge defines a given rule in terms of condition and consequence, strategic knowledge determines when a given rule must be applied and finally deep knowledge justifies a rule by linking it to a causal model. We can add to this list of reasoning explanations Swartout and Smoliar’s (1987) terminological explanations. In order to explain its reasoning, the system has to make its resolution methods and/or domain knowledge explicit. In first-generation expert systems, the only explanation a system could provide was a trace of the inference rules that led to a given conclusion. The system could provide an answer to how and why questions by examining the execution trace (Scott et al. 1977; Davis 1982). Such explanations, called reasoning trace explanations, have a limited use because of the knowledge needed to interpret them. However, the major problem with these explanations is that they do not provide enough information with respect to the system’s general goals and resolution strategy. Consequently, with second-generation expert systems researchers tried to abstract from rule representations in order to provide explanations that place a system’s specific actions in context (Swartout and Moore 1993). By making the system’s higher-level control and planning information explicit, some of these systems provide strategic explanations. Instead of showing what piece of knowledge is being applied by an expert system (trace), the latter displays its problem-solving strategy: why information is gathered in a certain order, why one knowledge piece is invoked before others and how reasoning steps contribute to high-level goals. In neomycin (Clancey 1983b, Hasling et al. 1984), control between the different processing stages is represented by meta-rules. This enables the system to explain the overall strategy that it follows while performing a specific task. Southwick (1988) provides a description of the problem-solving process by presenting an ongoing trace of the topic goals which are explored by the system. In the Generic Task approach (Chandrasekaran 1986; Chandrasekaran et al. 1989; Tanner 1995), a set of general problem-solving methods is defined and the knowledge relative
to the operations performed in each generic method is built into explanation routines. Many researchers proposed to use a deep model of the domain to design explanations (Chandrasekaran and Mittal 1983; Klein and Finnin 1987). The xplain system (Swartout 1983; Neches et al. 1985; Swartout et al. 1991) provides deep explanations by using the trace that an automatic programmer leaves behind and a high-level knowledge base representing domain facts and problem-solving plans. The separation of domain model and procedural knowledge makes explicit the strategic, structural and support knowledge in the domain (Southwick 1991). Deep explanations can exploit several knowledge bases. In the Reconstructive Explanation approach (Wick and Thompson 1992; Wick et al. 1995), the knowledge base used for explanation construction is different from the one used by the problem solver, enabling the system to produce clearer explanations. Using this approach, Guida et al. (Guida et al. 1997; Guida and Zanella 1997; Zanella and Lamperti 1999), propose a justification system2 which exploits three domain models corresponding to the three epistemological types of knowledge: a teleological model which represents knowledge concerning the goals to be pursued throughout the process supported by the decision support system (DSS); a causal model which represents knowledge about the relations between the actions suggested by the DSS and the goals to be pursued at the task level and a behavioral model which contains deep knowledge about the physical or abstract processes that characterize the task level environment. This multi-modeling knowledge representation enables the system to provide a variety of justifications. Hermann et al. (1998) argue that a system does not need to provide in depth explanations if certain aspects of the system or of the domain are made transparent. The authors propose a four-layer explanatory model which includes a domain-layer (descriptive system-independent knowledge about the domain), a system-layer (general aspects of the system), a processlayer (dynamic behavior of the system) and a function-layer (functions accomplished by the system). Very few empirical studies have investigated the influence of the explanation type on the user’s perception of the system. Ye and Johnson (Ye 1995; Ye and Johnson 1995) who examined the impact of alternative types of explanations on users’ acceptance of expert system-generated advice, note that justification (i.e. explanation that provides support knowledge and requires a deep understanding of the domain) seems to be the most effective type of explanation to bring about changes in users’ attitudes towards the system. According to these authors, users request justification more frequently and spend more time examining justification than other types of explanations.
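As an illustration of these three epistemological roles, and before returning to Ye and Johnson’s analysis, the following minimal Python sketch attaches structural, strategic and support knowledge to a single rule and renders a trace explanation, a strategic explanation and a justification from it. The rule, the task and the goal names are invented for illustration; none of the systems cited above is implemented this way.

# Sketch: one invented rule carrying the three kinds of knowledge discussed above.
rule = {
    "id": "R42",
    "if": ["fever", "stiff_neck"],        # structural knowledge: condition
    "then": "suspect_meningitis",         # structural knowledge: consequence
    "task": "differential diagnosis",     # strategic knowledge: where the rule
    "goal": "rule out life-threatening infections first",  # fits in the strategy
    "support": "meningeal inflammation typically causes both fever and neck rigidity",
}                                         # 'support' holds the deep (causal) knowledge

def trace_explanation(rule, facts):
    """Trace: replay the inference step using structural knowledge only."""
    used = [f for f in rule["if"] if f in facts]
    return f"Rule {rule['id']} fired because {', '.join(used)} were established, concluding {rule['then']}."

def strategic_explanation(rule):
    """Strategy: relate the step to the system's higher-level goals."""
    return f"The system is performing {rule['task']}; it applies {rule['id']} now because its goal is to {rule['goal']}."

def justification(rule):
    """Justification: expose the support knowledge behind the inferential leap."""
    return f"{rule['then']} follows from {' and '.join(rule['if'])} because {rule['support']}."

facts = {"fever", "stiff_neck"}
for text in (trace_explanation(rule, facts), strategic_explanation(rule), justification(rule)):
    print(text)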
Viewing explanations as argument structures (Toulmin 1958; Toulmin et al. 1984), the authors observe that while trace explanations are insufficient to explain the system’s reasoning processes, strategic explanations that remind the user of the higher-level objectives provide information at a wrong level. Only justification-type explanations in which the data are supported by an underlying rationale can demonstrate that the system’s conclusions are based on sound reasoning. Basing their remark on (Toulmin et al. 1984), the authors write: “If the user does not understand the underlying rationale of an inferential leap that the system is making (hence the controversy), only justification provides the right type of information that can help bridge the gap. Providing justification for the system’s conclusions may or may not completely resolve the controversy, but making it available at least presents the opportunity to allow the system’s side of the story to be understood” (Ye and Johnson 1995: 163). 2.2. Explanation as a communicative goal Explanation is not only a matter of giving access to the knowledge contained in the system, it is also a goal to be achieved with respect to a person seeking advice or information. Viewed as a communication activity, explanations bring forth certain aspects that are related to the form, structure and the cooperative nature of this type of interaction. To be acceptable, systemgenerated explanations have to be expressive, adapted to the user’s knowledge and responsive to the user’s needs. A major part of research on explanation has thus been devoted to customizing explanations, generating well-formed explanations and designing explanatory dialogues, which show the designers’ willingness to adapt the form and content of explanations to the user’s perspective and to cooperate with her during problem solving. It is this evolution of explanation systems towards user-adapted and collaborative explanations that we present in the following sub-sections. 2.2.1. User modeling The idea that computer systems would interact more effectively with users if they had knowledge about their characteristics has received much attention in human-computer interaction and artificial intelligence research. The process of gathering information about users of computer systems and using this information to provide services or information adapted to the specific requirements of individual (or group of) users is called user modeling (McTear 1993). The user model in which this information is represented can be used to tailor the system’s responses to the user’s needs (Paris 1988) and to interpret the user’s input (Carberry 1988); hence, its relevance to explanations and explanatory dialogues. Explanations will be better understood if they are
adapted to the user’s knowledge of the domain and if they take into account the user’s goals, plans and preferences. The information contained in a user model may be relative to the user’s knowledge about the system and its procedures (Trenouth and Ford 1990) or be domain-independent and relate to the user’s profile (antecedents, interests, general knowledge) (Benyon and Murray 1993; Parnagama et al. 1997). User models can be classified along several dimensions (McTear 1993). A primary distinction can be made between models for individual users and models for generic or stereotypical classes of users (Rich 1989). A system may have several models of the same user who may be expert in one domain and novice in another. A second characteristic of user models corresponds to the way user models react to new information about users. Dynamic models can be altered during the course of interaction or at the end of the session whereas static models remain unchanged once constructed. By nature, dynamic models are short term in that they contain information that can change over time, while static models are long term given that their design is worthwhile only if their information can be preserved for future usage. Finally, user models can be defined through explicit acquisition during the user’s interaction with the system or through implicit acquisition using inference methods (Chin 1993). Many research projects have explored how user models can be constructed, refined and updated during the course of a session (Bunt 1990; Kass 1991; Kay 1993; Lehman and Carbonell 1989; Wu 1991). However, several researchers have questioned the feasibility of detailed user models and their impact on the system’s reasoning and generation of responses (Sparck-Jones 1989). The information in a user model is generally incomplete and may even be uncertain if based on default assumptions. Furthermore, not only it is difficult to keep tack of the current beliefs of the user (Southwick 1991), but the newly acquired information may turn out to be inconsistent with previously stored information. Maintaining consistency and handling uncertainty are two of the major problems encountered in user modeling (McTear 1993). 2.2.2. Explanation generation The expressive power of computer dialogues can make important differences in people’s perceptions, emotional reactions and motivations (Shneiderman 1992). Producing natural, clear and easy to read explanations which at the same time faithfully represent the system’s reasoning and knowledge has been a great challenge for the design of explanation facilities. In general, two approaches towards explanation generation can be distinguished: a bottom-up approach which modifies and enhances an existing structure and a top-down approach which constructs an explanation by
decomposing explicative goals (Giboin 1995). Coherent explanations can be generated using the first approach when the conceptual relations that need to be explained are properly represented in the system’s knowledge structures. An example of this approach is the process trace strategy (Paris 1988) or the graph traversal technique (Suthers 1991) which follows the existing links in a knowledge base. The second approach is exemplified by McKeown’s work (McKeown 1988) in which a communicative goal is achieved by means of standard patterns of discourse structure, called schemata. Each schema is composed of an ordered sequence of rhetorical predicates which guide the system’s search in the knowledge base. The information thus retrieved is then ordered and structured by discourse markers. Using this technique, it is possible to generate different explanations for the same knowledge base. However, given that all the information relative to the explanation construction has been compiled, it is impossible to modify an explanation if the user is not satisfied (Moore and Swartout 1990). The top-down strategy is also exemplified by the plan-based approach where explanation goals are achieved by means of planning operators (Cawsey 1993; Maybury 1993; Moore and Swartout 1990). Several researchers combined the advantages of both strategies to enhance generation mechanisms (Suthers 1991; Lemaire 1992; Mooney et al. 1991) or to tailor the explanations to the user’s level of expertise (Paris 1989, 1991). A recent tendency in explanation generation is the use of multiple communicative devices where textual explanations are combined with graphics or animation. These explanations are referred to as multimedia or multimodal explanations (Feiner and McKeown 1991; Maybury 1993; Daniel et al. 1999). Parallel to the development of multimedia explanations, some researchers have focused on the adaptation of the explicative discourse to the user’s vocabulary and domain knowledge (McKeown 1993; Slotnick and Moore 1995; Papamichail 1998). 2.3. Interactive and cooperative explanations Many issues that relate to explanations in human context have been addressed in works on user-adapted explanations. These include modeling the structure of human explanations (Goguen et al. 1983), providing examples in response to the user’s request for information (Rissland 1983; Rissland et al. 1984), anticipating the type of questions that a user might ask (Lehnert 1978; Hughes 1986), planning responses so as not to mislead the inquirer (Joshi and Weber 1984), identifying prior situations and explanations that are relevant to the explanation being constructed (Rosenblum and Moore 1993). The study of human-human explanations has also raised an interest in sociolinguistics and ethnomethodology (Gumpertz and Hymes 1972). Mittal
and Paris (1995) use the notion of discourse register in sociolinguistics to discuss the implications of different aspects of context (the problem solving situation, the participants involved, the mode of interaction in which communication is occurring, the discourse, the external world) on the design of explanation facilities. Forsythe (1995) uses ethnographic techniques (gathering descriptive qualitative information about real world situations) to determine users’ requirements. Cawsey (1995), however, points out that the analysis of human explanations is unlikely to be adequate to fully determine the explanation needs of a particular application and class of users because (1) human experts do not necessarily address the user’s real needs; (2) the human-system interaction context is different from the human-human context in terms of its requirements and possibilities. According to Cawsey, a mixture of techniques such as analyses of human explanations, interviews and experiments with potential users (cf. de Rosis et al. 1995) and evaluation of prototypes, might be needed. Among recent work on explanations, the design of interactive and collaborative explanations has received a great deal of attention. One of the most characteristic features of human explanation is indeed its cooperative nature and this has constituted one of the main arguments against user modeling. Ringle and Bruce (1981) argued that by focusing on user models, researchers have ignored the rich source of guidance that people use in producing explanations, namely feedback from the listener. For Draper (1987), in order to provide a good explanation, a system has to recognize the inquirer’s intentions from the discourse context. This is why many explanation systems used goal inference methods studied in pragmatic linguistics (Allen and Perrault 1980) to derive the user’s goals from a discourse segment (Pollack 1984; McKeown 1988). Viewing the explanation activity as a cooperative process “puts the burden on the explainer, who must try to deduce the asker’s goal in the context of societal conventions and evidence of the state of the asker’s knowledge and belief” (Southwick 1991). In a cooperative explanation setting, the system has to indicate to the user that her goals have been taken into account (McKeown 1988), evaluate the user’s plans and suggest better alternatives (Pollack 1986; van Beek 1987) and also consider goals and preferences which have been previously expressed (van Beek 1987) or which are available in a user model (Cohen et al. 1989). For Swartout and Moore (1993) explanations must be reactive. This means that the system must be able to respond to feedback from the user about the suitability of its explanations and recover if necessary. To handle feedback, the system must (Swartout and Moore 1993): 1) interpret the follow-up questions in the context of the ongoing interaction, which includes the previous
Figure 1. Example of interactive explanation (EDGE – Cawsey 1993).
explanations; 2) determine the origin of the failure so as to correct misunderstandings; and 3) have multiple strategies to achieve its communicative goals so that alternative responses can be provided. This mechanism enables the system to overcome the limitations of its user model. Cawsey (1993) shows that the system can even update its user model through follow-up questions as illustrated in Figure 1. Karsenty and Brézillon (1995) argue that explanations must be conceived of as part of a problem-solving process, and not just as a parallel phenomenon that does not modify the course of reasoning. This view is different from previous proposals of cooperative models, in that the role of explanation is not to help the user reach the ‘right’ interpretation, the one possessed by the machine, but to adjust both agents’ context until a compatible interpretation can be found. This is why the authors note the importance of providing spontaneous, as opposed to reactive, explanations. The use of explanations as a negotiation device that can mediate between the explainer’s and the explainee’s beliefs has also been discussed in Cawsey et al. (1992) and de Rosis et al. (1995). In Baker’s view, explanations are special sets of mutual beliefs arrived at by negotiation. Baker (1992) distinguishes cooperative and collaborative explanations as being based on two different concepts: “Cooperation involves willingness on the part of interlocutors to communicate their goals with a view to possibly obtaining shared goals, willingness to adopt the other’s goals when they do not conflict with one’s own, and the possible execution of plans to achieve goals by dividing task responsibilities. Collaboration involves continued effort to maintain a shared understanding of a problem (task) representation, the existence of shared goals, and joint action designed to achieve them.”
Explanation can be thought of as a “phenomenon which emerges from cooperative dialogue by a process of negotiation” (O’Malley 1987). Baker and his colleagues (1994) introduce the notion of “negotiated explanations” in which: • participants have at least the common goal to reach an agreement; • negotiated objects may take various forms (meaning of statements, concept definitions, methods, solutions to a given problem, etc.); • several objects might be negotiated simultaneously. Explanations are negotiated especially in cases where the participants can both contribute to a common task, when they are close to a peer-to-peer relationship. Baker et al. (1994) claim that “explanations are not knowledge structures to be translated into communicative acts, adapted to users and transmitted to them, but rather qualitatively new structures, to which both participants in the explanation dialogue may contribute. It is the joint construction of these new structures called collaborative explanations which gives negotiative explanation dialogues their emergent quality. Clearly, this view of explanation presupposes a degree of autonomy, and equal rights on the part of the explanation seeker and giver, which departs from the view of explanation as a communication from one who knows/understands to one who does not know/understand. But this is precisely the case with expert systems which are used by experts or with help-systems which provide help to users who already possess some knowledge, but not the specific knowledge required to understand the current problem.”
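To summarize the interactive behaviour discussed in this section, the following Python sketch shows an explanation dialogue loop in the spirit of Swartout and Moore’s (1993) three requirements and of Cawsey’s (1993) user-model updates through follow-up questions (Figure 1). The strategy names, the user-model fields and the feedback conventions are invented for illustration and do not reproduce any of the cited systems.

class UserModel:
    """Very small user model: only tracks concepts the user claims to know."""
    def __init__(self):
        self.known_concepts = set()

    def assume_known(self, concept):
        self.known_concepts.add(concept)

STRATEGIES = ["give_definition", "give_example", "compare_with_known_concept"]

def explain(concept, strategy, user_model):
    if strategy == "give_definition":
        return f"A {concept} is ... (definition)"
    if strategy == "give_example":
        return f"For instance, ... (example of a {concept})"
    known = next(iter(user_model.known_concepts), "something you already know")
    return f"A {concept} plays a role similar to {known}"

def explanation_dialogue(concept, user_model, feedback_script):
    """feedback_script simulates the user's follow-up reactions, in order."""
    history = []
    for strategy, feedback in zip(STRATEGIES, feedback_script):
        history.append((strategy, explain(concept, strategy, user_model), feedback))
        if feedback == "ok":                      # explanation accepted
            break
        if feedback.startswith("known:"):         # failure diagnosed: the user already
            user_model.assume_known(feedback.split(":", 1)[1])   # knows some concept
        # any other feedback: keep `history` as context and try the next strategy
    return history

um = UserModel()
for turn in explanation_dialogue("warrant", um, ["known:data", "ok"]):
    print(turn)
print("user model now records:", um.known_concepts)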
3. Critiquing as a Means of Cooperative Problem-Solving There is evidence that competent users do not have a passive attitude towards system-generated solution or advice. Experts make early judgements about a problem and rapidly generate partial solutions (Woods et al. 1990). They also have preconceptions about the consequences of a solution and the constraints it must satisfy (Pollack et al. 1982). Therefore, rather than a solution, such users want the system to provide an advisory interaction which would help them evaluate possible answers (Hollnagel 1987) and understand why the system’s solution is better than the one they expected. The system-user dialogue can therefore be viewed as a negotiation process (Pollack et al. 1982) where solving the user’s problem is finding a mutually agreed-upon solution (Karsenty and Brézillon 1995). One way of helping the user progress in the problem solving process is to assess her own solutions. This is the type of interaction in use in critiquing systems. These systems compare the user’s plans and solutions (Coombs and Alty 1984; Langlotz and Shortliffe 1983; Miller 1984) with their own and thus help the user evaluate her own solution.
Critiquing systems use two inputs: (1) the problem description provided by the user or displayed by the computer, and (2) the proposed solution to the problem (diagnosis, design, document). This second entry is what distinguishes a critiquing system (“critic” for short) from either an expert system or an expert advisory system (Silverman 1992). Once the solution has been examined, the critic provides feedback which is specific to the user’s solution. Criticism and explanation are provided before, during or after the task so that the user may improve her solution or performance. Critics are in widespread use in different types of knowledge-based systems. The use of critique and advisory support in decision support systems can be seen in Mili’s (1988) decad (decision critique and advisor) system as a facility that watches for errors and advises the user with regard to further actions. Fischer and Mastaglio (1991) developed a general architecture for knowledge-based critics that fully support cooperative problem solving. In their conceptual design, the user’s solution is analyzed by the critic and suggestions to improve it are made until the user is satisfied. While explanation and justification systems try to show the correctness and usefulness of system recommendations, a critiquing system tests the credibility of a user’s solution by examining the knowledge and judgement that she used to reach the solution. This knowledge is tested by the critic in terms of its clarity, coherence, correspondence (agreement with reality) and workability (Silverman 1992). The user acquires knowledge through the active process of mutual exchange of viewpoints with the critic. Critiquing systems provide a fertile ground for argumentation given that critical discussion is a prototype of argumentative discourse. Vahidov and Elrod (1999) propose a framework in which a decision support system has both negative and positive critiquing agents. Two software agents, called the ‘angel’ and the ‘devil’, act respectively as the opponent and the proponent of the user-suggested proposition. The devil plays the role of the devil’s advocate and criticizes the proposed solution while the angel strengthens it (Figure 2). The agents use argumentation to substantiate their claims (relate them to the current situation and the warrants) and to respond to each other. The ongoing conflict between these antagonist software agents enables the decision maker to assess the choices in terms of advantages and disadvantages and thus enhances decision quality. Kasper (1996) confirms that systems that challenge the decision-maker’s positions and assumptions and whose design is based on dialectic inquiry are very effective for user calibration (i.e. the correspondence between the subjective decision confidence one assigns to a decision and the objective quality of the decision). The “variants of dialectic inquiry improve calibration by generating competing hypotheses that result in the listing of evidence
that challenges, refutes, and/or disconfirms one’s mental representation of a problem. The identification and resolution of differences, characteristic of dialectic reasoning, are essential for user calibration, especially when problem novelty is great” (Kasper 1996: 225).

Figure 2. Example of critique (Vahidov and Elrod 1999).
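As an illustration of this critiquing scheme, the following Python sketch gives a critic its two inputs, a problem description and the user’s proposed solution, and lets a ‘devil’ and an ‘angel’ agent collect objections and supporting points in the spirit of Vahidov and Elrod (1999). The budget-planning domain and the individual critique rules are invented for illustration.

def devil(problem, solution):
    """Opponent: arguments against the user's proposal."""
    cons = []
    if solution["cost"] > problem["budget"]:
        cons.append("the proposal exceeds the available budget")
    if solution["lead_time_weeks"] > problem["deadline_weeks"]:
        cons.append("the proposal cannot be delivered before the deadline")
    return cons

def angel(problem, solution):
    """Proponent: arguments that strengthen the user's proposal."""
    pros = []
    if solution["risk"] == "low":
        pros.append("the proposal carries low technical risk")
    if solution["cost"] <= problem["budget"]:
        pros.append("the proposal fits within the budget")
    return pros

def critique(problem, solution):
    # The critic's two inputs: the problem description and the user's solution.
    return {"pro": angel(problem, solution), "con": devil(problem, solution)}

problem = {"budget": 100, "deadline_weeks": 12}
solution = {"cost": 120, "lead_time_weeks": 10, "risk": "low"}
print(critique(problem, solution))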
4. Argument Structure The argumentative process is based on an argument structure that we can globally define as a relationship between a set of premises and a conclusion. An argument structure can be a general inference rule as in the topoï theory, which is more concerned with the pragmatic knowledge underlying language use, or a model of inferential relations as in Toulmin’s model, which can account for practical reasoning. 4.1. Topoï Anscombre and Ducrot (1983) who studied the interpretative rules involved in the use of argumentative connectors such as but and however, emphasized several linguistic constructions which involve a typical relation between statements or propositions that can be called “an obligation to conclude”. As an illustration, let us examine the sentence: The weather is nice, but I am tired.
As Ducrot (1991) indicates: “The speaker, after having uttered the first proposition p, expects the addressee to draw a conclusion r. The second proposition q, preceded by a but, tends to avoid this conclusion by signalling a new fact that contradicts it. The whole movement would be: ‘p; you are thinking of concluding r; do not do so, because q’.” Anscombre and Ducrot (1983) introduced the notion of topos, which is a general rule associating a claim and a related conclusion (Anscombre 1995). The statements of a topos can be associated with a grade taken on an argumentative scale. For example, we can formulate two topoï: “The better the weather is, the more you go out”; “The more tired you are, the less you go out”. Using these topoï, we can understand why the but-sentence “but I am tired” is a rebuttal for the conclusion that could have been drawn from the information “The weather is nice”. Several authors used such an approach to study argumentative discourse (see for instance Moeschler 1985). This topoï-based approach has also been adapted to the representation of gradual knowledge in knowledge-based systems. The argumentative orientation of a statement is obtained by applying a “gradual inference rule” to certain elements of that statement; this inference rule (or topos) is considered by the speaker to be general and agreed upon by its interlocutors (Raccah 1996). Gradual knowledge3 in the form of topoï has been used in a variety of knowledge-based applications such as knowledge acquisition, knowledge validation and explanations (Galarreta and Trousse 1996). For example, Dieng (1989) used topoï to generate qualitative explanations about the inferences made by an expert system applied to dyke design. Instead of presenting the quantitative formulas used in the expert system’s rules, explanations used topoï to adapt the corresponding information to the user. For instance, instead of presenting rules containing mathematical formulas, the explanation module generated sentences such as “The bigger the swell is, the thicker the dyke is”. Such argumentative techniques have also received the attention of researchers working on user-system explanatory dialogues (Cavalli-Sforza and Moore 1992; Quillici 1991) and advisory support systems (Grasso 1997; Carenini and Moore 1999). 4.2. Toulmin’s model Toulmin’s model has been used in several research works on argumentation. Using the vocabulary of the legal domain, Toulmin proposed a structure composed of six elements that reflect the procedure by which claims can be argued for. Here are these elements:
Figure 3. An illustration of Toulmin’s argument structure.
1. Claim (C). An assertion or conclusion presented to the audience, which is potentially controversial in nature (it might not meet the audience’s initial beliefs).
2. Data (D). Statements specifying facts or previously established beliefs about the situation about which the claim is made.
3. Warrant (W). A statement which justifies the inference of the claim from the data.
4. Backing (B). A set of information which assures the trustworthiness of a warrant. A backing is invoked when the warrant is challenged.
5. Qualifier (Q). A statement that expresses the degree of certainty associated with the claim.
6. Rebuttal (R). A statement presenting a situation in which the claim might be defeated.
This structure may be represented using typical natural language markers: Given D (and since W), therefore C, unless R. W, because B. Figure 3 illustrates the argument structure proposed by Toulmin. The diagram says: X is Brazilian, therefore he speaks Portuguese, unless he has never lived in his homeland; Brazilians speak Portuguese because Brazil was a Portuguese colony.
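The six components translate directly into a small data structure. The following Python sketch instantiates it with the example of Figure 3 and renders it with the natural language markers given above; it is only an illustration, not a reconstruction of any of the systems discussed in this paper.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ToulminArgument:
    claim: str                      # C: the assertion put to the audience
    data: list                      # D: facts or previously established beliefs
    warrant: str                    # W: licenses the step from D to C
    backing: Optional[str] = None   # B: certifies the warrant if it is challenged
    qualifier: str = "presumably"   # Q: degree of certainty of the claim
    rebuttal: Optional[str] = None  # R: conditions under which the claim is defeated

    def render(self):
        text = f"Given {', '.join(self.data)} (and since {self.warrant}), {self.qualifier} {self.claim}"
        if self.rebuttal:
            text += f", unless {self.rebuttal}"
        if self.backing:
            text += f". {self.warrant}, because {self.backing}"
        return text + "."

arg = ToulminArgument(
    claim="X speaks Portuguese",
    data=["X is Brazilian"],
    warrant="Brazilians speak Portuguese",
    backing="Brazil was a Portuguese colony",
    rebuttal="X has never lived in his homeland",
)
print(arg.render())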
As Stranieri and Zeleznikow (1999) observe, an argument structure can provide a useful basis for knowledge representation within artificial intelligence because: argumentation reflects practical reasoning; arguments capture many types of inference mechanisms; arguments are linked with explanations; arguments capture plausible reasoning and arguments combine to form chains of reasoning. In the next two sub-sections, we discuss how Toulmin’s model can be used as an explanation structure and how it can be extended to account for different inferential mechanisms. 4.2.1. Using Toulmin’s model for explanations Using Toulmin’s model, Ye (1995) analyzed the characteristics of three types of explanations: trace (or line of reasoning), justification and strategy. He indicates that Toulmin’s model is significant in that it highlights the discrete response steps that an expert system explanation facility should follow in order to answer user’s queries in a convincing way. For example let us consider the typical format of a rule used in expert systems: IF Premisex (certainty factory ), THEN Conclusionz . This structure obviously corresponds to the schema (subscript variables represent the correspondence between the elements of these structures): GIVEN Datax , THEREFORE (Qualifiery ) Claimz . Certain rules might include the equivalent of a rebuttal as for example: IF Premisex AND NOT Premisey (certainty factorz ), THEN Conclusionw . This structure corresponds to the schema: GIVEN Datax , THEREFORE Qualifierz Claimw , UNLESS Rebuttaly . These components of rules are typically used in trace-based explanations. However, a rule encodes problem-solving knowledge but not the background knowledge that leads a human expert to that rule. Hence, a system composed of simple rules lacks the information to justify why a conclusion follows naturally from its premises. According to Toulmin’s model, a rule should be enhanced with a Warrant that the system can invoke to justify the leap from the premises to a conclusion. Furthermore, if the system provides the user with the ability to challenge the Warrant, a Backing should also be added to the rule. If the Backing is also challenged by the user, a new round of argument begins. Theoretically, this process can go on until either the Warrant is accepted or no further Backing can be offered. Ye (1995) observes
that Toulmin’s framework shows where justification for a line of reasoning should be focused. This can help explanation designers to identify the type of knowledge that needs to be acquired from experts and represented in the system. Ye (1995) observes that in the context of auditing, explanation serves the function of argument. “Because the primary purpose of argument is to advance a problematic claim and have it accepted, for the argument to be considered successful, the audience must be willing to replace an existing belief with a new belief. The change is unlikely to take place, unless the audience agrees with the evidence (Data) and endorses the principle that is expressed or implicit in the Warrant”. Bench-Capon et al. (1991) report favorable user response from explanations generated in this way through the use of their logic programs. Hahn (1991) proposes a system for cooperative problem solving in which two or more persons negotiate through exchanges of arguments. Using an argument editor, group participants with complementary expertise can use the different argument components (evidence, conclusion, warrant, qualifier, backing) to support or criticize solutions and hypotheses put forward by their partners. The interactive argumentation enables the group of persons to insert new knowledge that can either be tuned in with the main hypotheses or block a derivation. Thus, the participants gradually build an argument graph that can then be used as a knowledge structure for explanations. The explanations thus produced are visible and can be criticized. They can be more or less detailed depending on whether the argument structure is reduced to facts (evidence, qualifier) or if deeper domain knowledge (warrant, backing) is made explicit. Also, the explanations are dynamic given that they are the product of a pragmatic question-answering dialogue between an explainer and an explainee. At the beginning of a session, there are no or several solutions to a problem and therefore several explanations. After the negotiation process a group opinion is reached. Between the two, there are different phases of argument exchange where explanations are provided either to strengthen or counter competent hypotheses and explanations, or to create new dependencies between prior explanations. The problem solving process represented by the dynamic configuration of the argumentation graph, can be viewed as a succession of ‘explanation episodes’ which can later be customized. The systematic critique of explanations plays an important role in the argumentative process and constitutes the most interesting aspect of the proposed model.
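The successive rounds of challenge that Ye describes can be made concrete with a small sketch: an invented auditing rule is extended with a Warrant and a Backing, the rule components alone yield the trace, and the Warrant and Backing are only produced when the user challenges the previous answer. The rule content and the simple level-based protocol are assumptions made for illustration, not Ye’s implementation.

rule = {
    "data": ["the account shows unusually high year-end adjustments"],
    "qualifier": "probably",
    "claim": "the account deserves extended audit testing",
    "warrant": "unusual year-end adjustments are a recognised red flag for misstatement",
    "backing": "professional auditing standards list late adjustments among fraud risk factors",
}

def answer(rule, challenge_level):
    """challenge_level 0: trace; 1: Warrant challenged; 2: Backing challenged."""
    if challenge_level == 0:
        return f"Given {rule['data'][0]}, {rule['qualifier']} {rule['claim']}."
    if challenge_level == 1:
        return f"Because {rule['warrant']}."
    if challenge_level == 2:
        return f"That principle is backed by the fact that {rule['backing']}."
    return "No further backing can be offered; the warrant must be accepted or the claim withdrawn."

for level in range(4):          # the user keeps asking "why should I accept that?"
    print(f"round {level}: {answer(rule, level)}")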
4.2.2. Extending Toulmin’s model Toulmin’s model has been used and sometimes extended in order to facilitate the organization of complex knowledge for different purposes: retrieving information (Dick 1987; Marshall 1989; Ball 1994); explaining logic programming conclusions (Bench-Capon et al. 1991); representing user-defined tasks (Matthijssen 1995); engaging users in a dialogue (BenchCapon 1998) and representing family law knowledge in a manner that facilitates rule/neural hybrid reasoning (Stranieri et al. 1999). Johnson et al. (1993) suggest that arguments can be classified into five categories that they label Type 1 to Type 5. For each type, the backing corresponds to a distinct type of expertise and to a particular kind of reasoning. Type 1 arguments reflect axiomatic reasoning. Data and claim for these arguments are analytic truths. The supporting evidence derives from a system of axioms such as Peano’s axioms of arithmetic. Examples of what Aristotle called analytic reasoning4 would be captured as Type 1 backing and not as a different type of proof. A Type 2 argument asserts a particular diagnosis on the basis of empirical judgements that derive from a number of observations which are similar to the current evidence. Type 3 arguments are characterized by backings which reflect alternate representations of a problem. Type 4 arguments differ from Type 3 arguments in that the alternate representations are conflicting. In this case, the argument involves supporting evidence that is conflicting. An assertion is made by creating a composite representation. Type 5 arguments refer to paradigms that reflect a process of inquiry. Stranieri and Zeleznikow (1999) mention that by discerning those arguments which are based on axiomatic backing from those based on dialectical and empirical evidence, Johnson et al. (1993) have demonstrated a mechanism that developers of intelligent reasoners can use to select an appropriate inference method. Bench-Capon (1998) introduced an additional component to Toulmin’s structure, the presupposition component which represents assumptions made that are necessary for the argument but are not the object of dispute and remain outside the core of the argument. This extension is useful to model dialog games where the participants are exposed to discourse utterances that represent presuppositions that are not central to the discussion. The presuppositions can become critical in the discussion if the interlocutors do not share them. Farley and Freeman (1995) extended the warrant component in order to develop a model of dialectical reasoning that would be more formal than that proposed by Toulmin. Their main objective was to develop a system that could model the burden of proof concept in legal reasoning. The burden of proof governs the extent to which evidence is required in order to draw
a conclusion. Two types of warrants called wtype1 and wtype2 are distinguished. The wtype1 warrant classifies the relationship between assertion and data as explanatory or sign. Causal links (e.g. “Lack of rain causes drought”) are examples of explanatory warrants because they explain an assertion given data. Other types of explanatory warrants include definitional relationships or property/attribute relationships. A sign relationship represents a correlational link between data and assertion. The wtype2 warrant represents the strength with which the assertion can be drawn from data. Freeman distinguishes default-type warrants which represent default relationships such as “birds fly”, evidential warrants which are less certain and sufficient warrants which are certain and typically stem from definitions. Freeman (1994) also explicitly introduced reasoning types (modus ponens, modus tollens, abduction and contrapositive abduction) in her model. She observed that some reasoning types are stronger than others. Modus ponens and modus tollens are assigned a strong link qualification if used with sufficient warrants, whereas the same reasoning types are assigned a credible qualification if used with evidential warrants. Hence, reasoning types interact with warrant types to control the generation of arguments according to reasoning heuristics. Stranieri and Zeleznikow (1999) suggest that warrants communicate two distinct meanings. On the one hand, the warrant (i.e. Brazilians speak Portuguese) indicates a reason for the relevance of a fact (i.e. X is Brazilian); on the other hand, the warrant can be interpreted as a rule which, when applied to the fact (i.e. X is Brazilian), leads us to infer the claim (i.e. X speaks Portuguese). On the basis of the distinction between these two roles that a warrant has in an argument, Stranieri and Zeleznikow (1999) explicitly identified three features that are left implicit in Toulmin’s formulation: an inference procedure, algorithm or method used to infer an assertion from a datum; reasons which explain why a data item is relevant for a claim; reasons that explain why the inference method used is appropriate. These features are useful because they enable any inference procedure to be encoded in an argument structure, thus facilitating the development of hybrid reasoning systems. This approach has been applied to four domains in law: property proceedings following divorce by Stranieri et al. (1999), determination of refugee status by Yearwood and Stranieri (1999), and determining eligibility for government-funded legal aid and copyright law. Stranieri and Zeleznikow (2000) propose an agent-based approach to help regulate copyright. Five knowledge-based systems are described that are sufficiently flexible to protect authors’ rights without denying the public access to works for fair use purposes. Let us mention here that not all inferential methods are based on a strict warrant-data-conclusion relation. Lodder (1997) shows the importance of
both structural and procedural arguments. A structural argument indicates an explicit relation between two statements. A procedural argument is a statement that, itself or in combination with other non-structural arguments, contributes to the acceptance of another statement, without using a structural argument. For example, procedural arguments are often used in courts. To justify a decision, the authority invokes a series of statements and then uses the “magic sentence” ‘In view of the above’ to link the conclusion to these statements without making explicit the exact relations between the statements and the conclusion.
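The interaction between warrant types and reasoning types can be sketched as a small argument generator, loosely following Farley and Freeman (1995). The qualification table and the example warrant below are simplified inventions meant only to illustrate the idea, not their actual heuristics.

WARRANT_TYPES = ("sufficient", "default", "evidential")         # cf. wtype2 above
REASONING_TYPES = ("modus_ponens", "modus_tollens", "abduction")

def qualify(reasoning_type, warrant_type):
    """Invented simplification of how reasoning and warrant types combine."""
    if reasoning_type == "abduction":
        return "weak"
    if warrant_type == "sufficient":
        return "strong"
    return "credible"          # default or evidential warrants used deductively

def argue(datum, warrant, reasoning_type):
    """warrant = (antecedent, consequent, warrant_type, relation); the relation
    plays the role of wtype1 (explanatory vs. sign)."""
    antecedent, consequent, wtype, relation = warrant
    if reasoning_type == "modus_ponens" and datum == antecedent:
        claim = consequent                 # forward use of the warrant
    elif reasoning_type == "abduction" and datum == consequent:
        claim = antecedent                 # explain the observation by its usual cause
    else:
        return None
    return {"claim": claim, "strength": qualify(reasoning_type, wtype), "relation": relation}

causal = ("lack of rain", "drought", "default", "explanatory")
print(argue("lack of rain", causal, "modus_ponens"))   # credible claim: drought
print(argue("drought", causal, "abduction"))           # weak claim: lack of rain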
5. Defeasible Reasoning The problem of understanding argumentation and its role in human reasoning has been addressed by many researchers in different fields such as philosophy, logic and more recently artificial intelligence. Several philosophers and logicians developed logics that paved the way to the modern formalizations of argumentation. We only mention here some of these works, the detailed references of which can be found in Chevenevar et al. (2000): Lorenz’ strip diagrams, Lorenzen and Hintikka’s game-theoretic semantics, MacKenzie’s dialog systems, Alchourròn’s work on defeasibility deontic logic and belief revision, Adam’s logic of conditionals, Glymour and Thomason’s as well as Kraus, Lehmann and Magidor’s works on non-monotonic logic, Geffner and Pearl’s work on preferred-model semantics for any ordered set of defeasible conditionals, Dubois and Prade’s representation of defaults in possibilistic logic, Nute’s defeasible conditional logic and Delgrande’s conditional logic based on the idea of irrelevance. In the following, we present defeasible reasoning models (Pollock 1992, 1994) as well as some systems which support defeasible argumentation. The objective of defeasible logics is to construct “defeasible proofs”, called arguments, that can be partially ordered according to differences in their conclusive forces. This body of work has mainly focused on the acceptability of arguments as well as on the formalization of the criteria that determine the ‘strength of arguments’ and thus characterize undefeated arguments (Chevenevar et al. 2000). An argument is a representation of a sequence of inferences leading to a conclusion. While in standard logic an argument represents a definite proof, in defeasible reasoning an argument may represent just a provisional reason to believe a proposition: it does not provide a proof, but only a plausible support to the proposition. Considering that different arguments may support contradictory conclusions, the main problem is to determine which conclusions should be believed in the presence of conflicting arguments.
Dung (1995) presented a theory of argumentation whose central notion is the acceptability of arguments. An argument is an abstract entity whose role is determined by its so-called attack relations to other arguments. According to Dung, the idea of argumentational reasoning is that a statement is believable if it can be argued successfully against attacking arguments. Thus the beliefs of a rational agent are characterized by the relations between its “internal arguments” supporting its beliefs and the “external arguments” supporting contrary beliefs. In this context, argumentation is the process of constructing arguments for propositions representing an agent’s beliefs. Dung shows that most of the major approaches to non-monotonic reasoning in AI and logic programming are special forms of his theory of argumentation and that argumentation can be viewed as a special form of logic programming with negation as failure. Schroeder (1999) followed the work done by Dung (1995) and Prakken and Sartor (1997) on logic programming and formal argumentation systems. Schroeder used two types of negation: the implicit negation not a to express the lack of evidence for a, and the explicit negation ¬a to state that there is an explicit proof for ¬a. An extended logic program is a (potentially infinite) set of rules of the form L0 ← L1, . . ., Ll, not Ll+1, . . ., not Lm (0 ≤ l ≤ m) where each Li is an objective literal. Literals of the form not Li are called default literals. Let P be an extended logic program. An argument for a conclusion L is a finite sequence A = [rs, . . ., rt] of ground instances of rules ri ∈ P such that several conditions hold (Schroeder 1999). Schroeder defines the notions of undercut and rebut, attack and defeat of arguments along the same lines. Pollock (1992) postulated that reasoning operates in terms of reasons, which can be combined to support arguments. Pollock distinguished non-defeasible reasons and defeasible reasons, which are defined in terms of defeaters. Defeaters are new reasons that attack the justification power of a given defeasible reason. He also distinguished rebutting and undercutting defeaters. A rebutting defeater is a reason that attacks a conclusion by supporting the opposite position, while an undercutting defeater is a reason that attacks the connection existing between a reason and a conclusion. A proposition is warranted if it emerges undefeated from an iterative justificatory process in which the same argument can be successively defeated and reinstated. Loui (1987, 1995) proposed a system of argumentation where defeat of arguments is recursively defined in terms of interference, specificity, directness and evidence. Verheij (1996) analyzed the properties and roles of rules and reasons in argumentation. He studied the conditions in which an argument should be defeated at a particular stage of the argumentation process. He developed CumulA, a formalism for defeasible argumentation that allows inference (i.e.
‘forward argumentation’ corresponding to drawing conclusions) and justification (i.e. ‘backward argumentation’ corresponding to adducing reasons). Using preference relations between arguments, Amgoud and Cayrol (2000) accounted for two complementary views on the notion of argument acceptability: acceptability based on the existence of direct counter-arguments and acceptability based on the existence of defenders. An argument is said to be acceptable if it is preferred to its direct defeaters or if it is defended against its defeaters. Kraus and his colleagues (1995) proposed a logic of argumentation to reason under uncertainty. In this framework, propositions are labeled with a representation of arguments supporting their validity. Arguments can be aggregated to provide more substantial support to a proposition’s validity. The authors define strength mappings from sets of arguments to a selected set of linguistic qualifiers. Hence, propositions can be assigned to classes such as certain, confirmed, probable, plausible, supported and open. This approach provides a uniform framework incorporating a number of numerical and symbolic techniques to assign subjective confidences to propositions on the basis of their supporting arguments. Simari and Loui (1992) proposed a system for defeasible reasoning integrating a general theory of warrant. In this framework, an argument produces a warrant for its claim as the result of a process of exchanges of counter-arguments and rebuttals that subject the claim to dispute. This system was used in different works (see Chevenevar et al. (2000) for detailed references): Fallapa proposed a belief revision system using a special kind of argument; Bodanza and Simari analyzed defeasible reasoning in disjunctive databases; Delrieux introduced a system of plausible reasoning; and Augusto studied the construction of arguments for temporal reasoning. Loui and Norman (1995) showed the importance of dialectics, which in argumentation refers to a form of disputation in which a serializable resource is distributed so that one party’s use of that resource is informed by the result of the other party’s prior use of the resource. The serialized resource is typically the search for arguments or the time available for the presentation of arguments. Loui and his team tried to characterize reasoning patterns in legal argumentation and to conceptualize argument games which help clarify how resource-bounded reasoning should be performed. Certain researchers proposed dialectical approaches to automate the construction of an argument and its counter-argument, typically representing knowledge as facts and rules and using a non-monotonic logic. For example, Farley and Freeman (1995) and Gordon (1995) model a dialogue between two parties as a series of arguments and counter-arguments, rebuttals and agreements.
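To illustrate Dung’s abstract view discussed above, here is a minimal sketch (in Python; the argument names and the attack relation are made up for the example) that computes the grounded set of acceptable arguments by iterating the characteristic function: an argument is acceptable with respect to a set S if S attacks every argument that attacks it. It is only a toy reading of the acceptability idea, not a reimplementation of Dung (1995).

def grounded_extension(arguments, attacks):
    # attackers[a] = the set of arguments that attack a
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    extension = set()
    while True:
        acceptable = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension) for b in attackers[a])
        }
        if acceptable == extension:
            return extension
        extension = acceptable

# a attacks b, b attacks c: a is unattacked, and a defends c against b.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atts)))   # ['a', 'c']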
Vreeswijk (1997) proposed a thoughtful critique of previous argumentation models and presented a new approach to argumentation called the abstract argumentation system. An abstract argumentation system is a collection of “defeasible proofs”, called arguments, that is partially ordered by a relation expressing differences in conclusive force. An unstructured language without logical connectives such as negation makes arguments not (pairwise) inconsistent but (groupwise) incompatible. Incompatibility and difference in conclusive force cause defeat among arguments. The aim of the theory is to find out which arguments eventually emerge undefeated. Vreeswijk’s theory assumed that the premises of the argumentation process are indefeasible. Baroni and colleagues (2000) extend Vreeswijk’s theory by recognizing that premises can be defeated and by relaxing the implicit assumption about their strength. Lin and Shoham (1989) also proposed an abstract argument system and showed how well-known systems of default reasoning could be reformulated in their framework. Computer-mediated defeasible argumentation aims at developing programs that can mediate argument exchanges between users. Such systems can administer and supervise the argumentation process, keeping track of the raised issues, assumptions, counter-arguments and justifications. Several such systems have been developed: IACAS (Vreeswijk 1995), Room5 (Loui et al. 1997), Dialaw (Lodder 1998) and Argue! (Verheij 1998). These systems are compared in (Verheij 1998). IACAS, Room5 and Dialaw are issue-based as in Rittel’s issue-based information system (Kunz and Rittel 1970), whereas in Argue! there is no central issue and both inference and justification are allowed. All these systems implement argumentation defeasibility. IACAS and Room5 have a notion of reasons for and against conclusions. Dialaw and Argue! use the notions of rebuttal and undercut defeaters. IACAS and Dialaw have a notion of rules underlying argument steps. In Room5 and Dialaw argumentation is considered to be a game with participants. IACAS and Argue! are evaluative in the sense that the status of arguments can be determined by the system. IACAS and Dialaw have text-based interfaces, Argue! has a graphical interface and Room5 has a template-based interface. In Dialaw, the dynamic aspect of argumentation is shown by a view of the sequence of moves, whereas in IACAS, Room5 and Argue! only a view of the current stage of the argumentation process is visible.
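The rebutting/undercutting distinction used by Pollock and by systems such as Dialaw and Argue! can be sketched as follows (a purely illustrative Python fragment; the rule syntax and the "not_applicable" convention are assumptions, not the notation of any cited system).

from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    premise: str
    conclusion: str

@dataclass
class Arg:
    rule: Rule
    def conclusion(self):
        return self.rule.conclusion

def negate(p):
    return p[1:] if p.startswith("~") else "~" + p

def rebuts(a, b):
    # a rebuts b if a concludes the negation of b's conclusion
    return a.conclusion() == negate(b.conclusion())

def undercuts(a, b):
    # a undercuts b if a concludes that b's rule is not applicable,
    # i.e. it attacks the connection between reason and conclusion
    return a.conclusion() == f"not_applicable({b.rule.name})"

r1 = Rule("r1", "bird(tweety)", "flies(tweety)")
r2 = Rule("r2", "penguin(tweety)", "~flies(tweety)")
r3 = Rule("r3", "penguin(tweety)", "not_applicable(r1)")

a1, a2, a3 = Arg(r1), Arg(r2), Arg(r3)
print(rebuts(a2, a1))     # True  - supports the opposite conclusion
print(undercuts(a3, a1))  # True  - attacks the bird->flies connection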
6. Argument Support Tools

Argument structures are used in several domains in order to record, organize and display the content of debates in which human participants exchange
opinions on various topics. Argument structures are useful in that they provide users with an easy and intuitive way of grasping the main elements of a discussion. Moreover, by storing the information in an appropriate manner and making it available for future use, argument tools make it possible to represent and evaluate the general state of affairs in a given knowledge domain. In cognitive psychology, argumentation-related research aims primarily at understanding how argument structure relates to cognitive processes such as reasoning, problem solving and learning. Resnick and colleagues (1993) used argument graphs based on Toulmin’s model in order to analyze discussions of contentious issues (such as the use of nuclear power) between university students. Argument graphs were extended to include different types of reasoning such as underlying supports and backings. The authors concluded that although these argumentative discussions took place in informal situations, they conformed to argumentation norms: almost all utterances played some functional role in the argument structure. Argumentation Maps (Horn 1998) have been used by a team of logicians to display the logical structure of various fundamental debates and their wider theoretical context, compressing a huge amount of information into a pictorial form. In many domains, it is not possible to easily gather a definitive body of expertise for problem-solving (Clark 1990). When experts disagree, it is not easy or even possible to identify the “right answer”. This problem arises in domains which involve human judgement, such as design tasks and risk assessment (Alexander and Evans 1988). In such domains, the process of argumentation between experts plays a crucial role in sharing knowledge, identifying inconsistencies and focusing attention on certain areas for further examination. In order to address such problems, Clark (1991) developed an approach to problem-solving based on Toulmin’s model and applied it to the domain of geological risk assessment. In this approach, which is an alternative to the classical expert system model, problem-solving is considered as a cooperative activity based on the interaction of different, possibly conflicting chains of reasoning. The interaction involves an exchange of information between the system and the user, discussing why a particular risk is valid. Clark (1991) developed the Optimist System, which involved the ability to compare different users’ opinions, to modify the system’s model of users’ opinions and to allow the user to express her disagreement with the system’s choices. This early argumentation-based expert system was used by expert geologists, “all of whom were able to dispute and correct the system’s reasoning to their satisfaction” (Clark 1990). Adopting the principle of consistency, namely that a rational agent will make similar decisions in similar situations, Clark (1991) used precedents as a form of justification for the system’s arguments.
Precedents were stored in and retrieved from a case base, allowing a simple form of case-based reasoning (Kolodner 1993). This enabled the argumentation system to generate justifications for its decisions in order to respond to the user’s challenges. Early research on hypertexts explored the use of argumentation schemes to structure information. Although it had no reasoning capabilities, Euclid (Smolensky et al. 1987) was a hypertext system used to organize arguments in simple structures that users could manipulate. The IBIS (Kunz and Rittel 1970) approach is based on the principle that the design process is fundamentally a conversation (or deliberation) between stakeholders. The hypertext system gIBIS (Conklin and Begeman 1987; Conklin and Begeman 1988) was designed to support design deliberations among cooperating experts. Using gIBIS, users could document their deliberations in a structured way, constructing on the screen a network of the issues, positions and arguments involved in the deliberation. Although these early systems could assist users in recording and structuring their opinions, they could not construct and check arguments by themselves because of the lack of a formal representation of arguments. Following the early works on the use of hypertext systems to support expert deliberations, there has been a convergence of interest of several design research communities, especially in the domains of human-computer interaction and software engineering, on the question of how to represent design rationale (Moran and Caroll 1996; Buckingham Shum et al. 1994, 1997). A design rationale is a representation of the reasoning behind the design of an artefact. Inspired by argumentation approaches such as Toulmin’s model and Rittel’s issue-based information system (IBIS) (Kunz and Rittel 1970), these approaches are based on the conviction that visualizing the structure of an argument can provide insight into its strengths and weaknesses, thus facilitating its more rigorous construction and its communication to participants. The structure of design rationale is usually composed of typed nodes (such as “issue”, “position”, “argument”) related by typed links (such as “supports”, “objects-to”, “replaces”, “responds-to”, “generalizes”). Several graphical notations have been proposed to represent the argument structure of the design rationale and several systems have been proposed to visualize them (BCSGA 2000). In an organization, the goals of managing design rationale include lower maintenance costs on products and better designs due to the earlier discovery of inconsistencies and miscommunications. However, reported practical experiences aiming at capturing design rationale showed limited success or even failure (Shipman and Marshall 1999). Shipman and McCall (1997) indicate that there is widespread agreement that most design tasks produce rationale that can be adequately represented in argumentation schemas, using
“post-generation analysis”, i.e. after the design has been completed. Unfortunately, there are significant difficulties in getting designers to use such schemas to structure their thinking while they are performing real design tasks. This can be explained by the fact that average users have difficulty chunking and categorizing information in the terms required by the argumentation scheme (issue, argument, position, etc.). In order to deal with this problem, Shipman and Marshall (1999) propose that system designers: 1) work closely with users to better understand the use situation and the representations that best serve it; 2) identify which other services the system can offer to users with regard to the trade-offs introduced by additional formalization; 3) provide facilities that use automatically recognized (but undeclared) structures to support common user activities. While the capture of argumentative rationale remains problematic, retrieval of relevant rationale is an area where the argumentation approach has excelled. The structured representations indeed provide practical ways of indexing and retrieving rationale (Shipman and Marshall 1999). One example of an effective hypertext-based retrieval strategy for argumentation is “issue-based indexing” of rationale using the IBIS method. Another successful use of the design rationale approach is the documentation of information about design decisions: What decisions were made? By whom? When and why? This enables people outside the project team to understand what is done in the project. This approach can also be used to secure intellectual property generated in a project. Crisis management is another domain where the capture of specialists’ arguments might be useful. For example, Mani and his colleagues (Mani et al. 2000) work on the development of a system that would automatically generate briefings which present arguments along with supporting evidence to decision makers. This work is part of a larger DARPA-funded project: the Genoa project (GENOA 2000) aims at improving analysis and decision making in crisis situations by providing tools which allow analysts to collaborate on the development of structured arguments in support of particular conclusions and which help them predict likely future scenarios. These tools leverage a corporate memory (van Heijst 1996) that acts as a repository of knowledge gained as a result of analytical tasks. Argumentation knowledge can also be used to develop strategic organizational support systems which enable and support unstructured reasoning and communication within organizations (Sillince 1996). Such argumentation-based unstructured transactions can be used to maintain a cognitive model of the organization. The model can be subjected to a prescriptive theory in order to diagnose good and bad patterns in organizational cognition. Because semi-autonomous groups need to form and then dissolve when tasks have
been completed, the relationship between group members (who may be inside or outside the organization) can be formalized by means of contracts. Such contracts initiate changes to the organization’s business model. Contracts are monitored in order to enable adequate supervision and control to be exercised. An organizational interface controls when collaboration takes place (via argumentation, contracting, or other communicative acts). Sillince and Saeedi (1999) discuss the benefits of integrating argumentation support systems (ASS) into computer-mediated communication systems. They provide an overview of various tools which can be used to analyze arguments (playing various roles such as critic, tutor, diagrammer, dialog scheduler, knowledge elicitor, information sharer, design assistant and policy assistant) or to generate arguments (playing various roles such as structurer, modeler, negotiator, case transformer, conflict resolver, devil’s advocate, dialogue enabler, and legal argument generator). The authors show how ASSs can support various functions found in computer-mediated communication systems such as electronic mail, negotiation support, workgroup project management, contract management, bulletin board management, idea generation, tasking, indexing, electronic meeting support, and collaborative hypertext-based writing. Sillince and Saeedi (1999) indicate: “Because argumentation support systems are based upon an informal logic model, they are able to capture soft and tacit information. The encouragement of multiple viewpoints means that organizational conflicts can be surfaced and even engineered by means of critic agents. The performance measurement model for assessing the evidential and argumentational value of each contribution enables information sharing to be monitored and rewarded. A strength of argument model enables the assessment of argumentation quality. The graphical representation models developed by ASSs enable the externalization and communication of debates and enable coordination using a public screen”. Interestingly, the systems developed in the field of computer-mediated defeasible argumentation (CMDA) (Section 5) aim at providing users with functionalities similar to those offered by argumentation support systems. Whereas the latter aim at recording and organizing, in data structures, the textual arguments proposed by debating participants, the CMDA approaches aim at managing and coaching the argumentative process. Although CMDA approaches require a supplementary effort from users, who have to specify the relations between the proposed argument components (justification, attack, etc.), they could be used to enhance the argumentation support systems developed for design rationale and computer-mediated communication systems. These techniques could also be used to explain software agents’ negotiations to users.
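As an illustration of the typed node-and-link structure underlying IBIS-style design rationale described above, here is a minimal sketch (in Python; the node and link type names follow the vocabulary quoted above, but the code itself, including the example deliberation, is purely illustrative and does not reproduce gIBIS or any other cited system). It also shows the kind of issue-based retrieval mentioned earlier.

NODE_TYPES = {"issue", "position", "argument"}
LINK_TYPES = {"responds-to", "supports", "objects-to", "replaces", "generalizes"}

class RationaleGraph:
    def __init__(self):
        self.nodes = {}      # id -> (type, text)
        self.links = []      # (source_id, link_type, target_id)

    def add_node(self, node_id, node_type, text):
        assert node_type in NODE_TYPES
        self.nodes[node_id] = (node_type, text)

    def add_link(self, source, link_type, target):
        assert link_type in LINK_TYPES
        self.links.append((source, link_type, target))

    def rationale_for(self, issue_id):
        # issue-based retrieval: positions responding to an issue, together
        # with the arguments that support or object to each position
        positions = [s for (s, lt, t) in self.links
                     if t == issue_id and lt == "responds-to"]
        return {p: [(lt, s) for (s, lt, t) in self.links
                    if t == p and lt in ("supports", "objects-to")]
                for p in positions}

g = RationaleGraph()
g.add_node("i1", "issue", "Which database should the product use?")
g.add_node("p1", "position", "Use an embedded database")
g.add_node("a1", "argument", "No separate server to administer")
g.add_node("a2", "argument", "Harder to scale to many concurrent writers")
g.add_link("p1", "responds-to", "i1")
g.add_link("a1", "supports", "p1")
g.add_link("a2", "objects-to", "p1")
print(g.rationale_for("i1"))
# {'p1': [('supports', 'a1'), ('objects-to', 'a2')]}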
7. Arguments and Mental Attitudes

Several researchers in artificial intelligence have worked on argumentative discourse in order to build computer systems that can recognize argument structures. For example, argument systems have been built to understand editorial texts (Alvarado 1990), to engage in political dialogs (Birnbaum et al. 1980) or to model legal arguments (Ashley 1990). Alvarado (1990) proposed a theory of argument comprehension and implemented it in the system OpEd, which is able to read editorial text fragments in the domain of politico-economics and to answer questions about their argumentative content. OpEd manipulates several knowledge structures such as beliefs and belief relationships, goals, plans, scripts, causal chains of reasoning and economic entities. We discuss here only the main elements on which the argument model is based. Alvarado (1990) postulates the existence of argument units as fundamental constructs of argument knowledge; these units consist of configurations of belief attack and support relationships, where the content of each belief refers to abstract goal/plan situations. Beliefs are evaluations about plans or expectations about the effects of plans on goals. The objective is to build an internal conceptual model of editorial arguments in the form of an argument graph (Flowers et al. 1982) which explicitly represents whether beliefs in the editorial are involved in: 1) support relationships, because they provide evidence for one another; or 2) attack relationships, because they contradict one another. Three kinds of beliefs are characterized in OpEd:
• evaluative beliefs, judgements about the goodness or badness of domain-specific plans;
• causal beliefs, expectations about: a) the possible causes of the failure or achievement of domain-specific goals; and b) the positive or negative effects that may result from implementing domain-specific plans;
• beliefs about beliefs, predications about evaluative and causal beliefs.
Attack relationships between beliefs are characterized in terms of contradictions involving: 1) planning situations that cannot occur at the same time; and 2) opposite effects of a plan on goals that are interrelated. Support relationships between beliefs capture the ways in which causal domain knowledge, analogies and examples can be used to justify: 1) why plans should/shouldn’t be implemented; and 2) why plans achieve, fail to achieve or thwart goals. The argument graph makes explicit the relationships associating the beliefs in an editorial text. Arguments in editorial texts are not instances of single attack or support relationships, but rather configurations of such relationships. These configurations are represented in terms of memory structures, called argument units (AUs), which can be used in combination with domain-specific knowledge to refute an opponent’s argument about a plan on the basis of goal
achievements and goal failures. In OpEd, the comprehension process involves the recognition, access, instantiation and application of AUs. Alvarado (1990) proposes a taxonomy of AUs:
• AUs based on unrealized successes (opposite effect, actual cause, wrong solution, similar unrealized success, prototypical unrealized success);
• AUs based on realized failures (rebuttals based on major-goal failures, on equivalent goal failures, on negative spiral failures; goal-failure rebuttals involving analogies and examples);
• AUs based on realized successes (possible success, similar success, prototypical success, major success, similar major success, prototypical major success);
• AUs based on unrealized failures (impossible failure, undisturbed success, excluded failure, similar unrealized failure, prototypical unrealized failure).
Large editorial texts are composed of configurations of instantiated AUs. Alvarado distinguished two basic types of such configurations:
• Breadth-of-Support configuration: an argument in which two or more AUs are combined to present an arguer’s evaluation of a plan from multiple perspectives;
• Depth-of-Support configuration: an argument in which an AU is concatenated with a support structure to elaborate on the goals’ successes or failures underlying an arguer’s evaluation of a plan.
OpEd analyzes an editorial text by: 1) extracting the beliefs, belief relationships and AUs which underlie the text, and 2) integrating those structures into argument graphs. In order to answer a question about an editorial text, OpEd categorizes the question content into one of five conceptual question categories: belief holder, causal belief, belief justification, affect/belief and top-belief/AU. Each conceptual question category leads to the selection of search and retrieval processes which use indexing structures to access the argument graph. Search and retrieval processes use the knowledge dependencies that exist among the constructs composing the argument graph. Alvarado’s theory provides a perspective on argument modeling which is an alternative to the logical approaches that we previously presented. M. G. Dyer, in his preface to Alvarado’s book (1990), states: “A common tactic in arguments and editorials is to accuse one’s opponent of being ‘illogical’. However, protocols indicate that people do not reason by syllogisms, nor do they employ carefully constructed proof-style chains of deductive reasoning. In this book, Alvarado does not emphasize the role of formal logic per se. Instead, he is interested in modeling how a refutation or accusation is recognized as such, and how this affects the process of comprehension”.
Alvarado’s approach provides an interpretation of arguments in terms of agents’ beliefs and accounts for the way in which those arguments relate to the agents’ goals and plans. Alvarado also provides a mechanism to structure arguments into higher-level constructs, the argument units, which can help to represent the argumentation strategies used by people. This approach could provide useful insights for the development of sophisticated argumentation models for software agents.
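A minimal, purely illustrative sketch of the kind of argument graph OpEd builds is given below (in Python): beliefs about the effects of plans on goals, connected by support and attack relations, together with a crude test for one argument-unit pattern (an “opposite effect” clash, in which two beliefs ascribe opposite effects of the same plan on the same goal). The belief contents and the pattern test are assumptions made up for the example, not Alvarado’s representations.

beliefs = {
    "b1": {"holder": "editorial", "plan": "raise tariffs",
           "effect": "thwarts", "goal": "economic growth"},
    "b2": {"holder": "opponent", "plan": "raise tariffs",
           "effect": "achieves", "goal": "economic growth"},
    "b3": {"holder": "editorial", "plan": "raise tariffs",
           "effect": "achieves", "goal": "protect local jobs"},
}
relations = [("b1", "attacks", "b2"), ("b3", "supports", "b1")]

def is_opposite_effect_au(attacker, attacked):
    # two beliefs about the same plan and goal that ascribe opposite effects
    return (attacker["plan"] == attacked["plan"]
            and attacker["goal"] == attacked["goal"]
            and {attacker["effect"], attacked["effect"]} == {"achieves", "thwarts"})

for src, rel, dst in relations:
    if rel == "attacks" and is_opposite_effect_au(beliefs[src], beliefs[dst]):
        print(f"{src} attacks {dst} via an 'opposite effect' argument unit")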
8. Argumentation and Software Agents

Several recent works have proposed argument models that software agents can use to carry out negotiation activities. Argumentation is a key mechanism to bring about agreement in non-cooperative situations when agents have incomplete knowledge about each other or about the environment. Among the early works in the domain, let us mention Sycara’s Persuader system (Sycara 1990), the first agent-based system used to support conflict resolution through negotiation and mediation between agents which are not fully cooperative. Construction of arguments was performed by integrating case-based reasoning, graph search and approximate estimation of agents’ utilities. Sillince (1994) also investigated conflict resolution between software agents. These agents would make claims using tactical rules (such as fairness and commitment) and would support inconsistent beliefs until another agent attacked their beliefs with a strong argument. Recent works in multi-agent systems aim at developing negotiation protocols and agent models that make use of argumentation techniques. Six types of arguments are commonly thought to have a persuasive force in human negotiations (Karlins and Abelson 1970; O’Keefe 1990). Kraus et al. (1998) propose to use them to enable software agents to negotiate persuasively: threat, promise of future reward, appeal to past reward, appeal to “precedents as counterexamples” (to convey to the persuadee a contradiction between what she says and past actions), appeal to prevailing practice (to convey to the persuadee that the proposed action will further her goals since it has furthered others’ goals in the past), and appeal to self-interest (to convince the persuadee that taking this action will enable the achievement of a high-importance goal). Kraus et al. (1998) propose a logical model to represent the mental states of agents based on a representation of their beliefs, desires, intentions and goals. They think of argumentation as an iterative process emerging from exchanges among agents aiming at bringing about a change in intentions (a mechanism of persuasion). In order to negotiate effectively, a software agent needs mechanisms to: 1) manage its own beliefs, desires, intentions
and goals (Thomas et al. 1991); 2) reason about other agents’ beliefs, desires and intentions; and 3) influence other agents’ beliefs, intentions and behavior. When agents are non-collaborative, the process of argumentation is an iterative exchange of proposals toward reducing conflict and promoting the achievement of individual goals. Arguments are used by an agent playing the role of persuader as a means to change the preferences, intentions and actions of the agent to be persuaded. Agents manipulate formulae ϕ that express beliefs, desires, intentions, goals and preferences. Agents can exchange messages of different types using arguments: Request(ϕ1, ϕ2), Reject(ϕ1, ϕ2), Accept(ϕ1, ϕ2), where ϕ1 is respectively requested, rejected or accepted with the argument ϕ2, the argument being expressed as a formula. Agents can also exchange simple messages: Request(ϕ), Reject(ϕ), Accept(ϕ), Declare(ϕ). Argument structures can be expressed in terms of agents’ mental attitudes. Kraus et al. (1998) defined axiom schemas for each of the argument types used by software agents in negotiation. For brevity’s sake we do not present the details of the formalism here. For example, the message corresponding to a threat can be represented in a simplified way by the following formula: [t3, Sendj,i Request([t, Do(i,α)], ¬[t, Do(i,α)] → [t4, Do(j,β)])] which is interpreted in the following way: at time t3, Agent j sends to Agent i the request that Agent i do α at time t, with the argument that if Agent i does not do α at time t, then Agent j will do β at time t4. The different formulae expressing the possible circumstances in which such a threat message can be emitted by an agent are specified as preconditions of the message. We do not present these preconditions, but provide the corresponding textual interpretation (Kraus et al. 1998): Suppose that Agent j requested that Agent i do action α at time t and Agent i refused. Agent j assumes that Agent i refused to do α probably because α contradicts one of Agent i’s goals. If (according to Agent j’s beliefs) there is an action β that Agent j can perform that contradicts another goal of Agent i, and this last goal is preferred by Agent i over the first goal, then Agent j threatens Agent i that it will do β if Agent i does not do α. Kraus et al. (1998) indicate that over repeated encounters, agents might analyze each other’s patterns of behavior in order to establish an analog to the human notion of credibility and reputation. This might influence the agent’s evaluation of arguments. By observing the reaction of Agent B to its arguments, Agent A can update and correct its model of Agent B, thus refining its planning and argumentation knowledge. Using their logical model, Kraus et al. implemented a general Automated Negotiation Agent (ANA) which acts and negotiates in a simulated multi-agent environment. The user can define the agents’ mental states as well as the inference rules for argumentation generation. Once created, the agents try to accomplish their desires, using arguments when needed. This approach allows the user
to test different argument types and to assess their impact on the effectiveness of the agents’ negotiation capabilities. The user can also evaluate ways of selecting the most appropriate argument at any stage of the negotiation. Kraus et al. (1998) indicate that their work differs from research on defeasible reasoning in the sense that with ANA they do not focus on the logical defeasibility of arguments. Rather, they concentrate on how argumentation can guide negotiation by supplying agents with a mechanism to influence other agents’ beliefs and actions and to achieve coordination in situations where agents are self-interested. Parsons and his colleagues (Parsons et al. 1998) propose a generic framework based on an argumentation model inspired by Kraus et al. (1995) which enables a system to simulate an agent’s internal reasoning (the agent argues with itself in order to establish its own beliefs) and to support negotiation activities between agents. The minimal negotiation capabilities for an agent are to propose and to respond. A response can take the form of a critique or of a counter-proposal. This is not yet sufficient to justify one’s position or to persuade another agent to change its position. That is where arguments are needed. Several categories of arguments can be used: threats, rewards and appeals (Sycara 1990; Kraus et al. 1998). Parsons et al. (1998) propose a negotiation process which proceeds by exchanging proposals (kinds of solutions to a problem), critiques (which can take the form of remarks, comments, or even counter-proposals), explanations (which can be arguments as well as statements), and meta-information. The role of meta-information is to focus the local search of agents for solutions. By supplying information about why it had a particular objection to a proposal, an agent might help another agent to focus its search for another, more acceptable suggestion. Explanations are a form of meta-information, but Parsons et al. (1998) allow meta-information to be supplied in the absence of any proposal or critique. The process of negotiation starts when an agent sends a proposal to other agents. The recipient agents may either accept it, critique it, make counter-proposals or provide meta-information. The first agent can then either make a new proposal, give clarifying meta-information, or respond to one of the counter-proposals (if any) with a new counter-proposal. This process continues until a proposal or counter-proposal is acceptable to all the parties involved or until the negotiation breaks down without an agreement. Following the approach proposed by Kraus et al. (1995), Parsons and his colleagues propose an argumentation system which uses two notions of defeat (defeat by undercut and defeat by rebut) and defines five acceptability classes for rating arguments. The system works by constructing a series of logical steps (i.e. arguments) for and against propositions of interest and may be seen as an extension of classical logic: an argument in classical logic is a sequence
of inferences which prove that a conclusion is true, whereas an argument in this system of argumentation is a sequence of inferences which prove that a conclusion may be true. Parsons et al. (1998) provide an agent architecture which relies on “multi-context systems”, a multi-context system being a framework which allows distinct theoretical components to be defined and interrelated (Giunchiglia and Serafini 1994). Different contexts are used to represent different components of an agent architecture and to specify the interactions between the components by means of bridge rules established between contexts. Each agent can be seen as a multi-context system in which each of the architecture’s blocks is represented as a separate unit, an encapsulated set of axioms and an associated deductive mechanism, whose interrelationships are precisely defined via bridge rules (i.e. inference rules connecting units). Since each context contains a set of statements expressed in a particular logic along with the axioms of that logic, it is possible to move directly to an implementation in which the various contexts are concurrent theorem provers which exchange information. Arguments are built by using the inference rules of the various units and the bridge rules between units. There is an important difference between Parsons et al.’s system and other argumentation systems (Dung 1995; Kraus et al. 1995; Loui 1987; Pollock 1994). In these systems the grounds of an argument are just the formulae from which the argument is built, and it is taken for granted that an agent can build the necessary proof based on these grounds when desired. However, Parsons et al. (1998) suggest that this assumption does not necessarily hold in a multi-agent system in which different agents have different inference rules within their units and different bridge rules between them. Parsons et al. conclude that the grounds of an argument must contain complete proofs including the inference rules and the bridge rules that are used. Hence, the notation for arguments must be enriched in order to identify which inference and bridge rules are employed. Amgoud et al. (2000) present a model of inter-agent dialogue based on argumentation and claim that this model is more general than the approach presented in Parsons et al. (1998). Amgoud et al. (2000) use an argumentation model inspired by Dung’s work (1995) and enhanced with the notion of preference between arguments (Amgoud and Cayrol 2000). They use the technique of dialogue games to analyze discourse: 1) each participant has objectives and can use legal moves to reach them; 2) game moves take the form of illocutions; and 3) objectives are matters such as persuading the other player of the truth of a proposition. Each player/participant aims at convincing the other one. He can assert or retract facts, challenge the other player’s assertions, ask whether something is true or not and request that inconsistencies be resolved. In addition to the players, there is a “commitment store”
which records the statements that players have made and the challenges they have issued. The player who ends the dialog by issuing the last argument is said to have won. In Amgoud et al.’s approach (2000), dialogue rules are formulated in terms of the arguments that each player can construct, and the players are equipped with an argumentation system. The authors show how the proposed model can capture a wider range of dialogue types than the classical negotiation model, including the dialogue categories proposed by Walton et al. (1995): persuasion, inquiry, information seeking and deliberation. Jung et al. (2001) emphasize that although most currently implemented argumentation systems have performed well in small-size applications, no systematic investigation of large-scale argumentation systems has been done. One key open question is understanding if (and when) argumentation actually speeds up the convergence of conflict resolution. These authors propose a computational model of argumentation suitable for large-scale experimental investigations based on distributed constraint satisfaction. The basic idea of this approach is the following: when an agent communicates its local variable assignments to other agents, it also includes, as a justification, the local constraints that led to these assignments. These communicated local constraints are used by the other agents to propagate constraints in an attempt to speed up the conflict resolution process. Jung et al. (2001) implemented different negotiation strategies and conducted experiments with a distributed sensor network. These experiments illustrate that argumentation can indeed lead to a significant improvement in the convergence of conflict resolution if the right strategy is chosen. A rather surprising result is that, even in a cooperative setting, the most cooperative argumentation strategy is not the best in terms of convergence in negotiation. Sawamura, Umeda and their colleagues (Sawamura et al. 2000; Umeda et al. 2000) view argumentation as a fundamental principle to model reasoning methods (such as conflict-resolving reasoning, dialectical reasoning and cooperative reasoning) that agents can use to reach a consensus. They adopt a multi-paradigm logic which allows for the interplay between two logics: dialectical logic (Routley and Meyer 1976) and the logic of extended logic programming (Prakken and Sartor 1997). These logics allow the existence of conflicting arguments such as p and ¬p in an agent’s knowledge base. Arguments and counter-arguments are generated from the agents’ knowledge bases based on dialectical reasoning, which can result in different kinds of agreements such as compromise and concession. Based on these logics, Umeda and his colleagues (2000) propose a logical formulation of several reasoning schemes such as the conflict-resolving scheme, the compromise scheme, the concession scheme, the co-operative scheme and the elementary dialectical reasoning scheme of conjunctive form. In order to show how
arguments can be generated by a group of agents, the authors present an argumentation protocol which consists of three main phases: 1) generating arguments and counter-arguments, 2) establishing co-operation, and 3) making dialectical agreements. The authors developed argument-based agent systems to simulate argumentation in various domains (a dispute on the creation of a nuclear power plant, argumentation-based negotiations between a selling agent and shopping agents, and negotiation to schedule a meeting). This approach based on a multi-paradigm logic has great potential for the development of agents having persuasive capabilities. However, as the authors themselves recognize (Umeda et al. 2000: 39), it would be quite useful to take into account the results of work done in social psychology in order to develop plausible argumentative behaviours for the agents. Several studies have been done on the interpretation of political debates (Trognon and Larrue 1994) and on the interactions in collective labour negotiations (Kostulski and Trognon 1998). Finally, Schroeder (2000) claims that visualization of argumentation is a promising approach to make the complex reasoning of single or multiple agents more intuitively understandable, without requiring knowledge of the foundations of logic. One objective of the Ultimate Ratio Project (Schroeder 2000) is to visualize the argumentation process in a form understandable by people. To this end, the author uses the dynamic construction of proof trees as a metaphor for the process of argumentation. The tree construction is a representation of the argument and of the execution event (the control flow) in the logic program. The system provides an interactive animation of the argumentation process of a single agent, allowing the user to navigate through the argumentation space as well as the argumentation process, which is displayed as a 3D tree. In addition, the user can navigate through the static space of all possible arguments. For the visualization of multi-agent argumentation, agents are represented by “avatars” and their conversation is displayed by animating the messages that they exchange. The user observes message exchanges on screen and can visualize the negotiation state of the avatars. It is also interesting to examine recent work on reputation management, which is gaining importance in electronic communities. Commercial web portals such as OnSaleExchange and eBay use reputation management mechanisms: they allow users to rate and submit textual comments about sellers. The reputation of a seller is the average of the ratings obtained from its customers and is determined by the portal’s central authority. Other systems such as Yenta (Foner 1997) require that users give a rating for themselves, and the rating mechanism relies either on a central agency (direct ratings) or on a group of trusted users (collaborative ratings). In the Social Interaction Framework (Schillo et al. 1999), an agent evaluates the reputation of another agent based on direct observations as well as through other witnesses. However,
it is not clear how to find such witnesses. Yu and Singh (2000) propose an approach which gives full control to users, who can choose when to reveal their ratings, help an agent find trustworthy agents and speed up the propagation of information through the social network of agents. Yu and Singh (2000) indicate: “Most current multi-agent systems assume benevolence, meaning that the agents implicitly assume that other agents are trustworthy and reliable. Approaches for explicit reputation management can help the agents finesse their interactions depending on the reputations of other agents”. Such reputation management mechanisms could be integrated into argumentation systems, providing a way to evaluate the credibility of agents presenting controversial arguments.
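To tie together the argument-bearing negotiation messages of Kraus et al. (1998) and the reputation mechanisms just discussed, here is a minimal sketch (in Python) of a request carrying a threat as its argument, evaluated by a receiver that weights the sender’s credibility by an eBay-style average rating. The message fields, the acceptance rule and the threshold are all illustrative assumptions, not the cited formalisms.

from dataclasses import dataclass

@dataclass
class Request:
    sender: str
    receiver: str
    action: str           # what the receiver is asked to do
    argument_type: str    # "threat", "promise", "appeal", ...
    argument: str         # e.g. "if you do not do X, I will do Y"

def reputation(agent, ratings):
    # eBay-style central aggregation: the mean of the ratings received
    scores = ratings.get(agent, [])
    return sum(scores) / len(scores) if scores else 0.0

def evaluate(msg, ratings, goal_cost, threat_cost, min_reputation=0.5):
    # accept the request if the sender is credible enough and carrying out
    # the threat would cost more than giving up the conflicting goal
    credible = reputation(msg.sender, ratings) >= min_reputation
    return credible and threat_cost > goal_cost

ratings = {"agent_j": [0.9, 0.8, 0.7]}      # collected from past encounters
msg = Request("agent_j", "agent_i", "deliver parts by Friday",
              "threat", "otherwise I will cancel the standing order")
print(evaluate(msg, ratings, goal_cost=2.0, threat_cost=5.0))   # True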
9. Discussion

In this section we discuss some issues related to the integration of argumentation and explanation capabilities into knowledge-based systems, as well as the advantages of enhancing the argumentation and persuasion capabilities of software agents.

9.1. Integrating explanation and argumentation capabilities in knowledge-based systems

As we saw in the previous sections, argumentation and explanation techniques have been used separately for different applications and for various purposes. However, in human interactions, and more specifically in cooperative problem-solving contexts, argumentation and explanation are often intertwined. Reactive and proactive explanations can be embedded in argumentation episodes, while explanatory dialogues can be interrupted by argument exchanges. In fact, new discursive attitudes are triggered as the participants react or adapt themselves to the discourse context. This context includes: the propositions put forward by the participants, the previous discourse or previous discussions related to the same subject, the nature of the topic being discussed (highly important, controversial, etc.), the intellectual or emotional impact of the debated issue on the participants, the amount of information and knowledge the participants have or think they have about a given subject, the social relations the participants have with each other, etc. Individuals adopt a certain attitude towards the issues at stake and intervene accordingly with a specific type of discourse, or do not intervene at all. It would be interesting to explore how such mental and discursive attitudes could be triggered in software programs. In order to carry on natural conversations, artificial agents will have to tune their behavior to
the ongoing discourse and its surrounding context and react to it by shifting roles (explainer, questioner, arguer, listener, critic, etc.) as humans do. The choice of a certain attitude implies the search for relevant knowledge structures and the organization of this information into the desired type of discourse. Now, can the same knowledge base be used to generate argumentative and explanatory discourse? Do arguments and explanations actually consist, respectively, of forward (from premises to a conclusion) and backward (from the claim or conclusion to the premises) reasoning on the same set of propositions, or do they involve different types or levels of knowledge? If the same knowledge is used, it would be interesting to investigate how it can be structured for different discursive purposes. From a knowledge engineering point of view, it would be interesting to investigate how an argument structure such as Toulmin’s model can be used to organize the explanatory knowledge stored in knowledge-based systems. Several issues can be addressed. Considering argumentative relations such as support and attack, can we organize a knowledge base which explicitly represents these relations by associating the various facts or propositions contained in the base? Do we need other types of argumentative relations? Which mechanisms must be implemented so that this knowledge can be used both for explanatory and argumentative purposes? How sophisticated should the knowledge structures and reasoning mechanisms be? A simple system would use predefined argumentative relations associating the elements of its knowledge base. A more sophisticated system would be able to create new argumentative relations and to revise the existing ones. An even more sophisticated system would be able to reason about other agents’/users’ mental states and plans, using reasoning mechanisms and structures similar to those proposed by Alvarado (1990). Another point is that the system will have to adapt its explanations, critiques and arguments to the situation as well as to the user. We saw that a great deal of research has been devoted to this subject as far as explanations are concerned: what exactly does the system have to explain, how much detail must be included in that explanation, what would be the most appropriate explanation strategy in this situation, etc.? More dilemmas have to be solved when previous explanations have failed to achieve their communicative goal. If they are to be used in cooperative settings, argumentation systems will also have to adapt their arguments to the user’s profile. Although some aspects will be common, the problem of customizing arguments will be quite different from that of tailoring explanations. The problem is still different in the case of critiquing systems, which in fact have to counter the natural tendencies of the user. But this means again that the user’s characteristics have to be taken into account. Positive and negative critiques are constructive only when
adequately used, that is, at the right time and properly calibrated with regard to the person to whom they are addressed. Another aspect that has received a lot of attention in explanation research, but has not yet been addressed in the more recent field of argumentation systems, concerns the different argumentative techniques and strategies used by arguing humans. Yet, the theoretical basis is of course already available. Perelman and Olbrechts-Tyteca’s (1958) treatise provides an in-depth analysis of argumentation and rhetoric that can be used as a basis for genuine arguing systems/agents. These authors classify common argumentative techniques into five main categories: 1) quasi-logic arguments, 2) arguments based on “the structure of reality”, 3) arguments founding “the structure of reality”, 4) dissociations of notions and 5) interaction of arguments. Each category contains a number of argument schemes. For example, quasi-logic arguments are those arguments that look like logical and mathematical proofs but lack their exact and formal character. This category includes, among others, arguments based on a contradiction or incompatibility relation (e.g. strategies that help avoid an incompatibility or aim at presenting two claims as compatible or incompatible); arguments based on a definition or an identity relation; transitive arguments (e.g. “my friends’ friends are my friends”); arguments based on an inclusion or a part-whole relation; comparison-based arguments; probability-based arguments, etc. These schemes are quite useful in that they characterize the most commonly used argumentative techniques in human dialogues. For example, they could be used to develop techniques for tutoring systems in order to train people to argue or negotiate. Software agents could also use such schemes to generate their argumentative plans. New research is also needed to better understand the properties of “persuasive argumentation”. What are the different strategies for convincing a user to adopt a specific mental attitude? Are there argumentative or explicative strategies that are more successful than others? Is it possible to determine and formalize prototypical sequences of generic arguments? Can we identify typical situations in which certain strategies can be used with better chances of success? Another interesting issue would be to explore how negotiation argumentation models such as that of Kraus et al. (1998) could be used to enable software agents to create formal models of users’ mental attitudes and argumentative activities, and how they could exploit them in argumentation and explanation activities. Integrating argumentation and explanation facilities in knowledge-based systems raises many issues that are very challenging from a design point of view. But, adopting a cognitive perspective, there are several interesting
questions that will have to be addressed at some point: How do individuals manipulate their mental representations when explaining or arguing? How can two parties reach a common representation? Do we use the same representations for explanation purposes as those used for other high-level tasks such as planning or decision-making? The study of explanation and argumentation will not only contribute to a better understanding of mental representations per se, but will also investigate the subject in relation to different social environments. This research field is undoubtedly a very promising ground for multidisciplinary collaboration.

9.2. Enhancing the argumentation and persuasion capabilities of software agents

We saw that argumentation techniques have recently been used to develop negotiation protocols between software agents. Although few operational systems (Kraus et al. 1998) have been implemented so far, it is clear that there is a need for such systems. It is reasonable to anticipate an increasing demand for sophisticated software agents, given the widespread use of the World Wide Web and the accelerated development of e-commerce applications. Currently, most of the software agents used on the web perform elementary tasks such as information search or monitoring and elementary financial transactions. In the years to come, users will require that their software agents be able to perform more complex tasks, and commercial negotiation will probably be one of the first areas to be considered. These new applications will exploit the argumentative techniques currently developed for software agents. It is clear that if a user sends one or several software agents to different locations in order to negotiate different issues in her name, she will require that the agents be able to report on the negotiations in which they are involved. Currently, agents bring back to their users the results of simple searches or transactions. However, if a new generation of software agents is developed to participate in more sophisticated negotiations, it is likely that the agents will not be able to complete their negotiations without the intervention of their users, at least when important financial transactions are involved. Software agents will need to be able to explain what happened during their interactions with other agents, providing their users with a summary of the main steps of the negotiation (the main arguments and counter-arguments contributed by each party), to present plausible alternative actions, to describe the options and to explain which might be preferred. The user will then choose an option or give a new “negotiation mandate” to her agent. Hence, reporting
capabilities will be necessary for this new generation of agents (Moulin 1998). Earlier, we discussed models using arguments for negotiation purposes and models using arguments for defeasible reasoning. Within a negotiation argumentation model (Kraus et al. 1998), a persuader agent associates an argument with a request as a means to influence a persuadee agent’s decision by explicitly indicating what could happen (in terms of the mental attitudes of either agent) if the persuadee agent does not accept the request. Within a defeasible argumentation model, an agent uses an argument as a proof which asserts that some propositions are (defeasible) reasons to believe or disbelieve other propositions, arguments being partially ordered by relations expressing differences in conclusive force. Hence, a defeasible argumentation model mainly deals with beliefs related by support and attack relations, whereas a negotiation argumentation model deals with the ways of persuading other agents to adopt mental attitudes (beliefs, goals, intentions). Both mechanisms will be needed for negotiating agents. Also, when reporting on an ongoing argumentation, an agent will need to provide its user with explanations about the background situation and the decisions it made in relation to the topics discussed with the other agents. If the argumentation is mainly based on negotiation, explanations could be based on the argumentation schemas used by the agents, such as the threat or the promise of future reward used in Kraus et al. (1998). If the argumentation is mainly based on assessing the relative strengths of chains of propositions (facts and beliefs), explanations could be based on the argument structures (propositions or beliefs related by support and attack relations) used during the argumentation process. Agents using dialectical reasoning (Sawamura et al. 2000; Umeda et al. 2000) might be quite useful to generate such explanations. Of course, further studies will be needed in order to assess the reactions of users to such explanations. The same negotiation, explanation and reporting capabilities will be needed in domains in which negotiation and/or argumentation are crucial, such as conflict management, mediation, strategic games and other activities requiring persuasion skills. In Section 3, we mentioned an application in which positive and negative critiquing agents (the “angel” and the “devil”) argued about a user-proposed solution (Vahidov and Elrod 1999). The arguments and counter-arguments set forth by the agents help the user assess the situation in terms of advantages and disadvantages. Critiquing agents could be designed and implemented taking advantage of a multi-paradigm logic as in Sawamura et al. (2000) and Umeda et al. (2000). Pushing this approach a little further, we may think of systems which, thanks to the simulated debate performed by critiquing
Pushing this approach a little further, we may think of systems which, thanks to the simulated debate performed by critiquing software agents, will enable their users to anticipate other people's reactions to their solutions. Politics, marketing and law are good candidate application areas for such systems. It would also be feasible to create a system which enables users to participate in argumentation activities with other users as well as with software agents specialized in specific domains. In addition to their specialized domain knowledge, these software agents would have argumentation and explanation capabilities. Other agents might represent users and participate in the debates; such agents would defend their users' interests and contribute to the group's problem solving activities. As in the e-commerce setting discussed above, these agents would need several capabilities: summarize the debates in order to inform their users; detect inconsistencies or conflicts and decide when it is necessary to inform the user; present various courses of action and explain their advantages and disadvantages; and assess or critique the user's decision.

Software agents could also be used to structure the debates in which users as well as other software agents participate. Certain agents could summarize the debates, presenting the main arguments and counter-arguments contributed by each party. Other agents could record and classify arguments or sequences of arguments in a case base kept for future reference; this base could then be used to advise users about the actions to be taken or the positions to be defended. The arguments and argument sequences recorded during the debates of groups of users could be analyzed in order to detect argumentation patterns which were successful or unsuccessful during the debates. Identifying such patterns could help designers enhance the argumentation capabilities of software agents by introducing new argumentation strategies or heuristics into the systems. A long-term research goal would be to develop an argumentation analysis system (AAS) able to identify argumentation patterns, assess the impact of the corresponding strategies in debates and develop new argumentation strategies accordingly. Such a goal might seem impossible to reach considering the complexity of human debates; however, it is realistic to consider it for debates involving software agents only. If software agents could learn how to develop new argumentation strategies or how to adapt known argumentation strategies to new situations, a new step would be taken toward the creation of truly persuasive agents.
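To make the preceding discussion more concrete, the sketch below is ours and is not part of any system cited in this paper; all class names, function names and sample data are hypothetical. It illustrates, in Python, the kind of objects a defeasible argumentation model manipulates (claims related by attack relations) together with the kind of negotiation report an agent could hand back to its user. For brevity, it ignores the partial ordering of arguments by conclusive force and the reinstatement of attacked arguments that full frameworks provide.

```python
# Illustrative sketch only: the classes and functions below are hypothetical
# and are not taken from the systems discussed in this paper.
from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class Argument:
    claim: str                                       # proposition the argument supports
    premises: List[str]                              # data or beliefs from which the claim is inferred
    proponent: str                                   # agent that contributed the argument
    attacks: Set[str] = field(default_factory=set)   # claims this argument attacks


def undefeated_claims(arguments: List[Argument]) -> Set[str]:
    # Naive acceptability test: a claim is kept only if no argument attacks it.
    # Full defeasible argumentation frameworks also handle reinstatement and the
    # relative conclusive force of arguments; both are omitted in this sketch.
    attacked = {victim for a in arguments for victim in a.attacks}
    return {a.claim for a in arguments if a.claim not in attacked}


def negotiation_report(arguments: List[Argument]) -> str:
    # Summarize the main arguments and counter-arguments contributed by each
    # party, in the spirit of the reporting capability discussed in this section.
    lines = []
    for a in arguments:
        move = ("counter-argument against " + ", ".join(sorted(a.attacks))
                if a.attacks else "argument")
        lines.append(f"{a.proponent}: {move}: {a.claim} (because {'; '.join(a.premises)})")
    lines.append("Currently undefeated claims: " + ", ".join(sorted(undefeated_claims(arguments))))
    return "\n".join(lines)


if __name__ == "__main__":
    debate = [
        Argument("accept the offer at 95", ["price is below the market average"], "seller agent"),
        Argument("reject the offer at 95", ["delivery delay exceeds the mandate"], "buyer agent",
                 attacks={"accept the offer at 95"}),
    ]
    print(negotiation_report(debate))
```

A fuller implementation would attach a measure of conclusive force to each argument and let attacks succeed or fail accordingly, and would record the exchanged arguments in a case base of the kind evoked above.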
10. Conclusion

In this paper we reviewed a large body of literature dealing with two main domains of research that we think should be investigated in conjunction: explanation and argumentation systems.
We described the evolution of explanation systems from simple reasoning trace facilities to recent research on collaborative and interactive explanations. We discussed the characteristics of critiquing systems that test the credibility of the user's solution. We reviewed research on defeasible reasoning and the development of formal models of argumentation which have been applied to computer-mediated defeasible argumentation. We examined some application domains, such as computer-mediated communication, design rationale, crisis management and knowledge management, in which argumentation support tools are used. We presented recent applications of argumentative techniques to the field of multi-agent systems. Finally, we presented potential research directions. Several research issues still need to be investigated; addressing them will help researchers better understand the mechanisms underlying the activities of explanation and argumentation, and will help designers develop better systems using efficient explanation and argumentation techniques. Such systems will have improved interaction capabilities, providing their users with better advice and more persuasive arguments.
Notes

1. It is clear that in certain circumstances a person may hide her own beliefs and officially accept a proposition without being convinced of the truthfulness or correctness of the proposition. A person acting in such a way is not sincere. Because we are interested in decision support systems and knowledge-based systems, we assume that these systems act sincerely. Hence, if a system accepts another system's proposition, it will change its mental model accordingly.

2. "Note that the concept of justification is distinct from the related, but substantially different, issue of explanation. Justification is the task of convincing the user that the output of a software system is correct and useful, providing sound specific support and credible reasons. Explanation instead, is concerned in demonstrating to the user (generally not the end user) how final and intermediate results have been achieved, showing the internal processes and structures involved in generating them" (Guida and Zanella 1997).

3. Gradual knowledge can be modeled in several ways (Després 1996). One can distinguish linguistic approaches (Raccah 1996) and logical approaches. Among the logical approaches, we can distinguish symbolic models (such as non-monotonic logic) and numeric models (fuzzy logic, possibility theory). For example, using possibility theory, Dubois and Prades (1987) study gradual inference strategies. The linguistic approach uses a classical inference strategy (modus ponens), but the rules manipulated by the inference engine are gradual and expressed as topoï.

4. The Greek philosopher Aristotle distinguished two types of proofs (Barnes 1984) that he called analytic and dialectic proofs. Analytic proofs are reached by the application of sound inference rules to axioms. In contrast, dialectic proofs concern opinions that are adhered to with variable intensity. The objective of an exponent of this type of reasoning is to convince or persuade an audience to accept the claims advocated. Modern logic has been mainly concerned with analytic proofs.
Web addresses

(ARGSYS 2000). Computer-Supported Collaborative Argumentation Resource Site: http://kmi.open.ac.uk/people/sbs/csca/
(BCSGA 2000). Bibliography on Computer Supported Graphical Argumentation: http://kmi.open.ac.uk/people/sbs/csca/arg-biblio.html
(DIALARG 2000). Homepage on Dialectical Argumentation and Computational Dialectics: http://www.metajur.unimaas.nl/~bart/links/argument.htm
(GENOA 2000). Project GENOA Homepage: http://www.isx.com/newwebpage/isxcorp/programs/Genoa.html
References

Alexander, S. M. & Evans, G. W. (1988). The Integration of Multiple Experts: A Review of Methodologies. In Turban, E. & Watkins, P. R. (eds.), Applied Expert Systems, 47–53. North Holland: Amsterdam. Allen, J. F. & Perrault, C. R. (1980). Analyzing Intention in Utterances. Artificial Intelligence 15: 143–178. Alvarado, S. J. (1990). Understanding Editorial Text: A Computer Model of Argument Comprehension. Kluwer Academic Publishers. Amgoud, L. & Cayrol, C. (2000). A Reasoning Model Based on the Production of Acceptable Arguments. In Linköping Series of Articles in Computer and Information Science, 5 (http://www.ida.liu.se/ext/epa/cis/ufn-00/01/tcover.html). Amgoud, L., Maudet, N. & Parsons, S. (2000). Modeling Dialogues Using Argumentation. In Proceedings of the 4th International Conference on Multi-Agent Systems (ICMAS2000), 31–38. Boston, USA. Anscombre, J-C. (1995). Théorie des topoï. Kimé: Paris. Anscombre, J-C. & Ducrot, O. (1983). L'argumentation dans la langue. Mardaga: Bruxelles. Ashley, K. D. (1990). Modeling Legal Argument: Reasoning with Cases and Hypotheticals. The MIT Press. Baker, M. (1992). The Collaborative Construction of Explanations, Actes des Deuxièmes Journées du PRC-GDR-IA du CNRS, 25–40. Sophia Antipolis: France (In French). Baker, M., Dessalles, J-L., Joab, M., Raccah, P-Y., Safar, B. & Schlienger, D. (1994). La génération d'explications négociées dans un système à base de connaissances, Actes des Cinquièmes Journées du PRC-GDR-IA, 296–316. Teknéa: Toulouse. Ball, W. J. (1994). Using Virgil to Analyse Public Policy Arguments: A System Based on Toulmin's Informal Logic. Social Science Computer Review 12: 26–37. Baroni, P., Giacomin, M. & Guida, G. (2000). Extending Abstract Argumentation Systems Theory. Artificial Intelligence 120: 251–270. van Beek, P. (1987). A Model for Generating Better Explanations. Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics. Palo Alto, California. Bell, R. A., Zahn, C. J. & Hopper, R. (1984). Disclaiming: A Test of Two Competing Views. Communication Quarterly 32: 28–36. Bench-Capon, T. J. M., Lowes, D. & McEnery, A. M. (1991). Argument-Based Explanation of Logic Programs. Knowledge Based Systems 4: 177–183. Bench-Capon, T. M., Coenen, F. & Orton, P. (1993). Argument-Based Explanation of the British Nationality Act as a Logic Program. Computers, Law and AI 2: 53–66.
Bench-Capon, T. J. M. (1998). Specification and Implementation of Toulmin Dialogue Game. In Hage, J. C., Bench-Capon, T., Koers, A., de Vey Mestdagh, C. & Grutters, C. (eds.) Jurix 1998, 5–20. Foundation for Legal Knowledge Based Systems, Gerard Noodt Institut: Nijmegen, The Netherlands. Benyon, D. & Murray, D. (1993). Applying User Modeling to Human-Computer Interaction Design. Artificial Intelligence Review 7: 199–225. Birnbaum, L., Flowers, M. & McGuire, R. (1980). Towards an AI Model of Argumentation. In Proceedings of the AAAI Conference, 313–315. Buckingham, S. S. & Hammond, N. (1994). Argumentation-Based Design Rationale: What Use at What Cost? International Journal of Human-Computer Studies 40: 603–652. Buckingham, S. S., MacLean, A., Bellotti, V. & Hammond N., (1997). Graphical Argumentation and Design Cognition. Human Computer Interaction 12: 267–300. Bunt, H. C. (1990). Modular Incremental Modeling of Belief and Intention. Proceedings of the Second International Workshop on User Modeling. Carberry, S. (1988). Modeling the User’s Plan and Goals. Computational Linguistics 14: 23– 37. Carenini, G. & Moore, J. (1999). Tailoring Evaluative Arguments to User’s Preferences. Proceedings of the Seventh International Conference on User Modeling (UM-99). Banff, Canada, June 20–24. Cavalli-Sforza, V. & Moore, J. D. (1992). Collaborating on Arguments and Explanations. In Working Notes for the AAAI Spring Symposium on Producing Cooperative Explanation. Stanford University, CA. Cawsey, A., Galliers, J., Reece, S. & Sparck-Jones, K. (1992). The Role of Explanations in Collaborative Problem Solving. In Brézillon, P. (ed.) Proceedings of the ECAI-92 Workshop on Improving the use of Knowledge-Based Systems with Explanations. Université Paris: Paris, 6. Cawsey, A. (1993). Planning Interactive Explanations. International Journal of Man-Machine Studies 38. Cawsey, A. (1995). Developing an Explanation Component for a Knowledge-Based System: Discussion. Expert Systems with Applications 8: 527–531. Chandrasekaran, A. B. & Mittal, S. (1983). Deep versus Compiled Knowledge Approaches to Diagnostic Problem-Solving. International Journal of Man-Machine Studies 19: 425–436. Chandrasekaran, B. (1986). Generic Tasks in Knowledge Based Reasoning. IEEE Expert 1: 23–30. Chandrasekaran, B., Tanner, M. C. & Josephson, J. R. (1989). Explaining Control Strategies in Problem Solving. IEEE Expert 4: 9–24. Chesnevar, C. I., Maguitman, A. G. & Loui, R. P. (2000). Logical Models of Argument. ACM Computing Surveys (to appear). Chin, D. N. (1993). Acquiring User Models. Artificial Intelligence Review 7: 185–197. Clancey, W. J. (1983a). The Epistemology of a Rule-Based Expert System: A Framework for Explanation. Artificial Intelligence 20: 215–251. Clancey, W. J. (1983b). The Advantages of Abstract Control Knowledge in Expert System Design. Proceedings of the National Conference on Artificial Intelligence. Clark, P. (1990). Representing knowledge as Arguments: Applying Expert system Technology to Judgemental Problem-Solving, In Addes, T. & Muir, R. (eds.) Research and Development in Expert Systems III, 147–159. Cambridge University Press. Clark, P. (1991). A Model of Argumentation and its Application in a Cooperative Expert System. Ph.D. Thesis, Turing Institute, University of Strathclyde, Glasgow, U.K.
Cohen, R., Jones, M., Sanmugasunderam, A., Spencer, B. & Dent, L. (1989). Providing Responses Specific to a User’s Goals and Background. International Journal of Expert Systems 2. Conklin, J. & Begeman, M. L. (1987). gIBIS: A Hypertext Tool for Team Design Deliberation. In Smith, J. B. & Halasz, F. (eds.) Hypertext’87, 247–251. Association for Computing Machinery. Conklin, J. & Begeman, M. L. (1988). gIBIS: A Hypertext Tool for Exploratory Policy Discussion. Transactions on Office Information Systems 6: 303–331. Coombs, M. & Alty, J. (1984). Expert Systems: An Alternative Paradigm. International Journal of Man-Machine Studies 20: 21–43. Daniel, B. H., Bares, W. H., Callaway, B. C. & Lester, J. C. (1999). Student-Sensitive Multimodal Explanation Generation for 3D Learning Environments. Proceedings of the 16th National Conference on Artificial Intelligence. Davis, R. (1982). Teiresias: Applications of Meta-Level Knowledge. In Davis, R. and Lenat, D. B. (eds.) Knowledge-Based Systems in Artificial Intelligence. McGraw-Hill. Després, S. (1996). Enoncés graduels, prédicats graduels ou topoï? In (Raccah 1996), 57–77. Dhaliwal, J. S. (1993). An Experimental Investigation of the Use of Explanations Provided by Knowledge-Based Systems. Unpublished Ph.D Thesis, University of British Columbia, Management Information System Division. Dieng, R. (1989). Generation of Topoi from Expert Systems. CCAI 6:4, P.Y. Raccah (ed.), Gand. Dick, J. P. (1987). Conceptual Retrieval and Case Law. In Proceedings of the First International Conference on Artificial Intelligence and Law, 106–115. ACM Press. Draper, S. W. (1987). A User-Centered Concept of Explanation. Proceedings of the 2nd Workshop of the Explanation Special Interest Group, 15–23. Dubois, D. & Prades, H. (1987). Théorie des possibilités: Applications àla représentation des connaissances en informatique. Masson: Paris. Ducrot, O. (1991). Dire ou ne pas Dire, Principes de sémantique linguistique. Herman, Collection Savoir: Paris. Dung, P. H. (1995). On the acceptability of Arguments and Its Fundamental Role in Nonmonotonic Reasoning, Logic Programming and n-Person Games. Artificial Intelligence 77: 321–357. Farley, A. M. & Freeman, K. (1995). Burden of Proof in Legal Argumentation. Proceedings of the Fifth International Conference on Artificial Intelligence and Law, 156–164. ACM Press: USA. Feiner, S. K. & McKeown, K. R. (1991). Automating the Generation of Coordinated Multimedia Explanations. IEEE Computer 24: 33–41. Fischer, G. & Mastaglio, Th. (1991). A Conceptual Framework for Knowledge-Based Critiquing Systems. Decision Support Systems 7: 355–378. Flowers, M., McGuire, R. & Birnbaum, L. (1982). Adversary Arguments and the Logic of Personal Attacks. In Lehnert, W. G. & Ringle, M. G. (eds.) Strategies for Natural Language Processing. Lawrence Erlbaum: Hillsdale, NJ. Foner, L. (1997). Yenta: A Multi-Agent, Referral-Based Matchmaking System. In Proceedings of the First International Conference on Autonomous Agents, 301–307. Forsythe, D. E. (1995). Using Ethnography in the Design of an Explanation System. Expert Systems with Applications 8: 403–417. Freeman, K. (1994). Toward formalizing Dialectical Argumentation. PhD thesis, Department of Computer Science and Information Science, University of Oregon.
Galarreta, D. & Trousse, B. (1996). Place de l’argumentation dans la conception d’outils d’assistance à une activité de résolution de problème. In (Raccah 1996), 79–103. Giboin, A. (1995). Les explications destinées aux utilisateurs de systèmes à base de connaissances. Bulletin de l’AFIA 20: 21–30. Gilbert, N. (1989). Explanation and Dialogue. Knowledge Engineering Review 4: 205–231. Giunchiglia, F. & Serafini, L. (1994). Multi-Language-Hierarchical Logics (or: How We Can Do Without Modal Logic). Artificial Intelligence 65: 29–70. Goguen, J. A., Weiner, J. L. & Linde, C. (1983). Reasoning and Natural Explanation. International Journal of Man-Machine Studies 19: 521–559. Gordon, T. F. (1995). The Pleadings Game: An exercise in Computational Dialectics. Artificial Intelligence and Law 2: 239–292. Grasso, F. (1997). Using Dialectical Argumentation for User Modeling In Decision Support Systems. In Jameson A., Paris, C. & Tasso, C. (eds.) User Modeling: Proceedings of the Sixth International Conference, UM97. Springer: Vienna, New York. Guida, G., Mussio, P. & Zanella, M. (1997). User Interaction in Decision Support Systems: The role of justification. SMC’s Conference Proceedings, 3215–3220. Orlando, Florida, October. Guida, G. & Zanella, M. (1997). Bridging the Gap between Users and complex Decision Support Systems. Proceedings of the 3rd International Conference on Engineering of Complex Computer Systems, 229–238. Como, Italy, September. Gumpertz, J. J. & Hymes, D. (1972). Directions in Sociolinguistics: The Ethnography of Communication. Holt, Rinehart and Winston. Hahn, U. (1991). Erklärung als argumentativer Gruppendiskurs. Proceedings of Erklärung im Gespräch - Erklärung im Mensch-Maschine-Dialog. Springer-Verlag. Hasling, D. W., Clancey, W. J. & Rennels, G. (1984). Strategic Explanations for a Diagnostic Consultation System. International Journal of Man-Machine Studies 20: 3–19. Hayes, P. J. & Reddy, D. R. (1983). Steps Toward Graceful Interaction in Spoken and Written Man-Machine Communication. International Journal of Man-Machine Studies 19: 231– 284. Heider, F. (1946). Attitudes and Cognitive Organization. Journal of Psychology (21): 107–112. van Heijst, G., van der Spek, R. & Kruizinga, E. (1996). Organizing Corporate Memories. Workshop of the Tenth Banff, Workshop on Knowledge Acquisition for Knowledge-Based Systems, Banff, Canada (http://ksi.cpsc.ucalgary.ca/KAW/KAW96/KAW96Proc.html). Hermann, J., Kloth, M. & Feldkamp, F. (1998). The Role of Explanations in an Intelligent Assistant System. Artificial Intelligence in Engineering 12: 107–126. Hollnagel, E. (1987). Commentary: Issues in Knowledge-based Decision Support. International Journal of Man-Machine Studies 27: 743–751. Holsapple, C. W. & Whinston, A. B. (1996). Decision Support Systems: A Knowledge-Based Approach. International Thomson Publishing Company. Horn, R. E. (ed.) (1998). Can Computers Think? The Debate. MacroVU Press: www.macrovu.com. Hovland, C. I., Janis, I. L. & Kelley, H. H. (1953). Communication and Persuasion. Yale University Press: New Haven, CT. Hughes, S. (1986). Question Classification in Rule-Based Systems. Proceedings of Expert Systems 1986. Hughes, W. (1992). Critical Thinking. Broadview Press: Petersborough, ON. Jiang, J. J., Muhanna, W. A. & Klein, G. (2000). User Resistance and Strategies for Promoting Acceptance across System Types. Information and Management 37: 25–36.
Johnson, P. E., Zualkernan, I. A. & Tukey, D. (1993). Types of Expertise: An Invariant of Problem Solving. International Journal of Man Machine Studies 39: 6–41. Joshi, A. & Webber, B. (1984). Living up to Expectations; Computing Expert Responses. Proceedings of AAAI-84, 169–175. Austin, Texas. Jung, H., Tambe, M. & Kulkarni, S. (2001). Argumentation as Distributed Constraint Satisfaction: Applications and Results. In Müller, J. P., André, E., Sen, S., & Frasson, C. (eds.) Proceedings of the Fifth International Conference on Autonomous Agents, 324–331. Montreal Canada. Karlins, M. & Abelson, H. I. (1970). Persuasion: How Opinions and Attitudes are Changed. Springer Verlag: Berlin. Karsenty, L. & Brézillon, P. J. (1995). Cooperative Problem Solving and Explanation. Expert Systems with Applications 8: 445–462. Kasper, G. M. (1996). A Theory of Decision Support System Design for User Calibration. Information Systems Research 7: 215–232. Kass, R. (1991). Building a User Model. User Model and User Adapted Interaction 1: 203– 258. Kay, J. (1993). Reusable Tools for User Modeling. Artificial Intelligence Review 7: 241–225. Klein, D. & Finin, T. (1987). What’s in a Deep Model? Proceedings of IJCAI-87. Milan, Italy. Kolodner, J. (1993). Case-Based Reasoning. Morgan Kaufmann. Kostulski, K. & Trognon, A. (eds.) (1998). Processus de coordination dans les situations de travail collectif. Presses Universitaires de Nancy: Nancy. Kraus, P., Ambler, S., Elvang-Goransson, M. & Fox, J. (1995). A Logic of Argumentation for Reasoning under Uncertainty. Computational Intelligence 11: 113–131. Kraus, S., Sycara, K. & Evenchik, A. (1998). Reaching Agreements Through Argumentation: a Logical Model and Implementation. Artificial Intelligence 104: 1–69. Kunz, W. & Rittel, H. W. J. (1970). Issues as Elements of Information Systems, Working paper, Center of Planning and Development Research, University of California, Berkeley. Langlotz, C. P. & Shortliffe, E. H. (1983). Adapting a Consultation System to Critique User Plans. International Journal of Man-Machine Studies 19: 479–496. Leake, D. B. (1995). Towards a Goal-Driven Integration of Explanation and Action. In Ram, A. & Leake, D. G. (eds.) Goal-Driven Learning. MIT Press. Lehman, J. F. & Carbonell, J. G (1989). Learning the User’s Language: A Step Towards Automated Creation of User Models. In Kobsa, A. & Wahlster, W. (eds.) User Models in Dialogue Systems. Springer Verlag: Berlin-New York. Lehnert, W. G. (1978). The Process of Question Answering. Lawrence Erlbaum Associates. Lemaire, B. (1992). Construction d’explications: Utilisation d’une architecture de tableau noir. Actes des 4èmes Journées Nationales du PRC-GDR-Intelligence Artificielle. Marseille. Lin, F. & Shoham, Y. (1989). Argument Systems: A Uniform Basis for Nonmonotonic Reasoning. In Proceedings of First International Conference on Knowledge Representation and Reasoning, 245–255. Toronto. Lodder, A. R. (1997). On Structure and Naturalness in Dialogical Models of Argumentation. In Hage, J. C. et al. (eds.) Legal Knowledge-Based Systems. JURIX: The Eleventh Conference, 45–58. GNI: Nijmegen. Lodder, A. R. (1998). Procedural Arguments. In Oskamp, A. et al. (eds.) Legal KnowledgeBased Systems. JURIX: The tenth Conference, 21–32. GNI: Nijmegen. Loui, R. P. (1987). Defeat Among Arguments: A System of Defeasible Inference. Computational Intelligence 3: 157–365. Loui, R. P. & Norman, J. (1995). Rationales and Argument Moves. Artificial Intelligence and Law 3: 159–189.
Loui, R. P., Norman, J., Alteper, J., Pinckard, D., Craven, D., Lindsay, J. & Foltz, M. (1997). Progress on Room 5. A Testbed for Public Interactive Semi-Formal Legal Argumentation. Proceedings of the Sixth International Conference on Artificial Intelligence and Law, 207– 214. ACM: New York. Mani, I., Conception, K. & Van Guilder, L. (2000). Automated Briefing Production for Lessons Learned Systems. In Proceedings of Intelligent Lessons Learned Systems, a Workshop at AAAI 2000, Austin (Tx), 43–45. American Association of Artificial Intelligence (AAAI). Marshall, C. C. (1989). Representing the Structure of Legal Argument, In Proceedings of Second International Conference on Artificial Intelligence and Law, 121–127. ACM Press: USA. Matthijssen, L. J. (1999). Interfacing between Lawyers and Computers. An Architecture for Knowledge-Based Interfaces to Legal Databases. Kluwer Law International: The Netherlands. Maybury, M. T. (1993). Planning Multimedia Explanations Using Communicative Acts. Intelligent Multimedia Interfaces. MIT Press. McKeown, K. R. (1985). Discourse Strategies for Generating Natural-Language Text. Artificial Intelligence 27. McKeown, K. R. (1988). Generating Goal-Oriented Explanations. International Journal of Expert Systems 1. McKeown, K. R. (1993). Tailoring Lexical Choice to the User’s Vocabulary in Multimedia Explanation Generation. Proceedings of the 31st Annual Meeting of the ACL. Columbus, Ohio. McTear, M. F. (1993). User Modeling for Adaptive Computer Systems: A Survey of Recent Developments. Artificial Intelligence Review 7: 157–184. Mili, F. (1988). A Framework for a Decision Critic and Advisor. Proceedings of the 21st Hawaiian Conference on System Sciences 3: 381–386. Miller, P. L. (1984). A Critique Approach to Expert Computer Advice: ATTENDING. Pitman. Mittal, V. O. & Paris, C. L. (1995). Generating Explanations in Context: The System Perspective. Expert Systems with Applications 8: 491–503. Moeshler, J. (1985). Argumentation et conversation: Eléments pour une analyse pragmatique du discours. Hatier: Paris. Mooney, D. J., Carberry, S. & McCoy, K. F. (1991). Capturing High-Level Structure of Naturally Occurring, Extended Explanations Using bottom-up Strategies. Proceedings of the 5th International Language Generation Workshop. Dorson. Moore, J. D. & Swartout, W. R. (1990). A Reactive Approach to Explanation: Taking the User’s Feedback into Account. In Paris, C. L., Swartout, W. R. & Mann, W. C. (eds.) Natural Language Generation in Artificial Intelligence and Computational Linguistics. Kluwer Academic Publishers. Moran, T. P. & Caroll J. M. (eds.) (1996). Design Rationale: Concepts, Techniques and Use. Lawrence Erlbaum Associates: Hillsdale N.J. Moulin, B. (1998). An Awareness-Based Model for Agents Involved in Reporting Activities. In Zhang, C. & Lukose, D. (eds.), Multi-Agent Systems, Theories, Languages and Applications, Lecture Notes in Artificial Intelligence 1544, 105–121. Springer Verlag: Berlin. Neches, R., Swartout, W. R. & Moore, J. D. (1985). Enhanced Maintenance and Explanation of Expert Systems Through Explicit Models of Their Development. IEEE Transactions on Software Engineering SE-11: 1337–1351. O’Keefe, D. J. (1990). Persuasion: Theory and Research. SAGE Publications.
O’Malley, C. (1987). Understanding Explanation. Cognitive Science research Report, n. CRSP 88. University of Sussex (UK). Papamichail, K. N. (1998). Explaining and Justifying Decision Support Advice in Intuitive Terms. Proceedings of the 13th European Conference on Artificial Intelligence. Brighton, UK, August. Paris, C. L. (1988). Tailoring Object Descriptions to the User’s Level of Expertise. Computational Linguistics 14: 64–78. Paris, C. L. (1989). The Use of Explicit User Models in a Generation System for Tailoring Answers to the User’s Level of Expertise. In Kobsa. A. & Wahlster, W. (eds.) User Models in Dialog Systems. Springer Verlag: Berlin. Paris, C. L. (1991). The Role of User’s Domain Knowledge in Generation. Computational Intelligence 7. Parnagama P., Burstein, F. & Arnott, D. (1997). Modeling the Personality of Decision Makers for Active Decision Support. In Jameson, A., Paris, C. & Tasso, C. (eds.) User Modeling: Proceedings of the Sixth International Conference, UM97. Springer: Vienna, New York. Parsons, S., Sierra, C. & Jennings, N. (1998). Agents that Reason and Negotiate by Arguing, Journal of Logic and Computation 8: 261–292. Perelman, C. & Obrechts-Tyteca, L. (1958). La Nouvelle Rhétorique: Traité de l’Argumentation. Presses Universitaires de France, translated by Wilkenson J. and Weaver P. (1969). The New Rhetoric. University of Notre Dame Press: Notre Dame, Indiana. Pollack, M. E. (1984). Good Answers to Bad Questions, Goal Inference in Expert AdviceGiving. Proceedings of the Canadian Conference on Artificial Intelligence. Pollack, M. E. (1986). A Model of Plan Inference that Distinguishes Between the Beliefs of Actors and Observers. Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics. New York, NY. Pollock, J. L. (1992). How to Reason Defeasibly? Artificial Intelligence 57: 1–42. Pollock, J. L. (1994). Justification and Defeat. Artificial Intelligence 67: 377–407. Prakken, H. & Sartor, G. (1997). Argument-Based Extended Logic Programming with Defeasible Priorities. Journal of Applied Non-Classical Logics 7: 25–75. Quillici, A. (1991). Arguing over Plans. Proceedings of the AAAI Spring Symposium Series: Argumentation and Belief. Stanford, CA. Raccah, P-Y. (ed.) (1996). Topoï et gestion des connaissances. Masson: Paris. Ram, A. (1994). AQUA: Questions that Drive the Explanation Process. In Schank, R. C., Kass, A. & Riesbeck, C. K. (eds.) Case-based Explanation, chapter 7, 207–261. Lawrence Erlbaum Associates. Resnick, L. B., Salmon, M. H., Wathen, P. & Hollochak, J. B. (1993). Reasoning in Conversation. Cognition and Instruction 11: 347–364. (special issue on Discourse and Shared Reasoning). Rich, E. A. (1989). Stereotypes and User Modeling. In Kobsa. A. & Wahlster, W. (eds.) User Models in Dialog Systems. Springer Verlag: Berlin. Ringle, M. H. & Bruce, B. C. (1981). Conversation Failure. In Lehnert, W. G. & Ringle, M. H. (eds.) Knowledge Representation and Natural Language Processing, 203–221. Lawrence Erlbaum Associates: Hillsdale, NJ. Rissland, E. L. (1983). Examples in Legal Reasoning. Proceedings of the 8th International Joint Conference on Artificial Intelligence. Karlsruhe, Germany. Rissland et al. (1984). Explaining and Arguing with Examples. Proceedings of AAAI-84, Austin, Texas, 288–294.
Rissland, E., Skalak, D. & Friedman, M. (1993). Bankxx: A Program to Generate Argument Through Case-Based Search. Proceedings of the Fourth International Conference on AI and Law, 117–124. Amsterdam, the Netherlands. Rosenblum, J. A. & Moore, J. D. (1993). Participating in Instructional Dialogues: Finding and Exploring Relevant Prior Explanations. In Brna, P., Ohlsson, S. & Pain, H. (eds.) Proceedings of AI-ED 93: World Conference on Artificial Intelligence and Education, 145–152. Charlottesville, VA. de Rosis, F., Grasso, F., Berry, D. C. & Gillie, T. (1995). Mediating between Hearer's and Speaker's Views in the Generation of Adaptive Explanations. Expert Systems with Applications 8: 429–443. Routley, R. & Meyer, R. (1976). Dialectical Logic, Classical Logic and the Consistency of the World. Studies in Soviet Thought 16: 1–25. Sawamura, H., Umeda, Y. & Meyer, R. K. (2000). Computational Dialectics for Argument-Based Agent Systems. In Proceedings of the Fourth International Conference on Multi-Agent Systems (ICMAS'2000), 271–278. IEEE Computer Society. Schank, R. C. (1986). Explanation: A First Pass. In Kolodner, J. L. & Riesbeck, C. K. (eds.) Experience, Memory, and Reasoning, 139–165. Lawrence Erlbaum Associates: Hillsdale, NJ. Schillo, M. & Funk, P. (1999). Who Can You Trust: Dealing with Deception. In Proceedings of the Workshop on Deception, Fraud and Trust in Agent Societies, International Conference on Autonomous Agents, 95–106. Scott, A. C., Clancey, W. J., Davis, R. & Shortliffe, E. H. (1977). Explanation Capabilities of Knowledge-Based Production Systems. In Buchanan, B. G. & Shortliffe, E. H. (eds.) Rule-Based Expert Systems. Addison Wesley. Schroeder, M. (1999). An Efficient Argumentation Framework for Negotiating Autonomous Agents. In Proceedings of the 9th European Workshop on Modeling Autonomous Agents in a Multi-Agent World (MAAMAW'99), 140–149. Valencia, Spain. Schroeder, M. (2000). Towards a Visualization of Arguing Agents. To appear in Journal of Future Generation Computing Systems. Elsevier. Available at: http://www.soi.city.ac.uk/homes/msch Sherif, M. & Hovland, C. I. (1961). Social Judgement. Yale University Press: New Haven, CT. Shipman, F. M. III & McCall, R. J. (1997). Integrating Different Perspectives on Design Rationale: Supporting the Emergence of Design Rationale from Design Communication. Artificial Intelligence in Engineering Design, Analysis and Manufacturing (AIEDAM) 11: 141–154. Shipman, F. M. III & Marshall, C. C. (1999). Formality Considered Harmful: Experiences, Emerging Themes, and Directions on the use of Formal Representations in Interactive Systems. Computer-Supported Cooperative Work 8: 333–352. Shneiderman, B. (1992). Designing the User Interface: Strategies for Effective Human-Computer Interaction. Addison-Wesley: Reading, MA. Shortliffe, E. H. (1976). Computer-Based Medical Consultations: MYCIN. Elsevier. Sillince, J. A. A. (1994). Multi-Agent Conflict Resolution: A Computational Framework for an Intelligent Argumentation Program. Knowledge-Based Systems 7: 75–90. Sillince, J. A. A. (1996). Argumentation and Contract Models for Strategic Organization Support Systems, Decision Support Systems 16: 325–326. Sillince, J. A. A. & Saeedi, M. H. (1999). Computer-Mediated Communication: Problems and Potential of Argumentation Support Systems. Decision Support Systems 26: 287–306. Silverman, B. G. (1992). Survey of Expert Critiquing Systems: Practical and Theoretical Frontiers. Communications of the ACM 35: 106–127.
Simari, G. R. & Loui, R. P. (1992). A Mathematical Treatment of Defeasible Reasoning and Its Implementation. Artificial Intelligence 53: 125–157. Slotnick, S. A. & Moore, J. (1995). Explaining Quantitative Systems to Uninitiated Users. Expert Systems with Applications 8: 475–490. Smolensky, P., Bell, B., Fox, B., King, R. & Lewis, C. (1987). Constraint-Based Hypertext for Argumentation. In Smith, J. B. & Halasz, F. (eds.) Hypertext'87, 215–245. Association for Computing Machinery. Southwick, R. W. (1988). Topic Explanation in Expert Systems. In Kelly, B. & Rector, A. (eds.) Research and Development in Expert Systems V. Cambridge University Press. Southwick, R. W. (1991). Explaining Reasoning: An Overview of Explanation in Knowledge-Based Systems. The Knowledge Engineering Review 6: 1–19. Sparck-Jones, K. (1989). Realism about User Modeling. In Kobsa, A. & Wahlster, W. (eds.) User Models in Dialog Systems. Springer Verlag: Berlin. Stranieri, A., Zeleznikow, J., Gawler, M. & Lewis, B. (1999). A Hybrid Rule-Neural Approach for the Automation of Legal Reasoning in the Discretionary Domain of Family Law in Australia. Artificial Intelligence and Law 7: 153–183. Stranieri, A. & Zeleznikow, J. (1999). A Survey of Argument Structures for Intelligent Decision Support. In Proceedings of the International Conference ISDSS99, International Society for Decision Support Systems, recorded on CD-Rom, Monash University, Australia. Stranieri, A. & Zeleznikow, J. (2000). Copyright Regulation with Argumentation Agents. Submitted to Journal of Information and Communication Law. Suthers, D. D. (1991). Task-Appropriate Hybrid Architectures for Explanation. Computational Intelligence 7. Swartout, W. R. (1983). XPLAIN: A System for Creating and Explaining Expert Consulting Programs. Artificial Intelligence 21: 285–325. Swartout, W. R. & Smoliar, S. W. (1987). On Making Expert Systems More Like Experts. Expert Systems 4: 96–207. Swartout, W. R., Paris, C. L. & Moore, J. D. (1991). Design for Explainable Expert Systems. IEEE Expert 6: 58–64. Swartout, W. R. & Moore, J. D. (1993). Explanation in Second Generation Expert Systems. In David, J.-M., Krivine, J.-P., & Simmons, R. (eds.) Second Generation Expert Systems. Springer Verlag Publishers. Sycara, K. P. (1990). Persuasive Argumentation in Negotiation. Theory and Decision 28: 203–242. Tanner, M. C. & Keuneke, A. M. (1991). Explanation in Knowledge Systems: The Roles of the Task Structure and Functional Models. IEEE Expert 6: 5–57. Tanner, M. C. (1995). Task-Based Explanations. Expert Systems with Applications 8: 502–512. Thomas, S. N. (1981). Practical Reasoning in Natural Language, 2nd edn. Prentice-Hall: Englewood Cliffs, NJ. Thomas, B., Shoham, Y., Schwartz, A. & Kraus, S. (1991). Preliminary Thoughts on an Agent Description Language. International Journal on Intelligent Systems 6: 497–508. Toulmin, S. (1958). The Uses of Argument. Cambridge University Press: Cambridge, England. Toulmin, S., Rieke, R. & Janik, A. (1984). An Introduction to Reasoning. MacMillan: New York. Trenouth, J. & Ford, L. (1990). The User Interface – Computational Models of Computer Users. In McTear, M. F. & Anderson, T. J. (eds.) Understanding Knowledge Engineering. Ellis Horwood: Chichester. Trognon, A. & Larrue, J. (1994). Pragmatique du discours politique. Armand Colin: Paris.
Umeda, Y., Yamashita, M., Inagaki, M. & Sawamura, H. (2000). Argumentation as a Social Computing Paradigm. In Proceedings of the Third Pacific Rim International Workshop on Multi-Agent Systems (PRIMA 2000). Springer Verlag Lecture Notes in Artificial Intelligence, 46–60. Vahidov, R. & Elrod, R. (1999). Incorporating Critique and Argumentation in DSS. Decision Support Systems, vol. 26, 249–258. Elsevier. Verheij, B. (1996). Rules, Reasons and Arguments: Formal Studies of Argumentation and Defeat. Ph.D. Thesis, Maastricht University, Maastricht (Netherlands). Verheij, B. (1998). Argue! An Implemented System for Computer-Mediated Defeasible Argumentation. In Proceedings of the Tenth Netherlands/Belgium Conference on Artificial Intelligence, 57–66. CWI: Amsterdam. Vreeswijk, G. A. W. (1995). IACAS: An Implementation of Chisholm's Principles of Knowledge. Proceedings of the Second Dutch/German Workshop on Nonmonotonic Reasoning, 225–234. Delft University of Technology. Vreeswijk, G. A. W. (1997). Abstract Argumentation Systems. Artificial Intelligence 90: 225–279. Walton, D. (1996). Argument Structure: A Pragmatic Theory. University of Toronto Press. Walton, D. N. & Krabbe, E. C. W. (1995). Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning. State University of New York Press: Albany, N.Y. Wick, M. R. & Thompson, W. B. (1992). Reconstructive Expert System Explanation. Artificial Intelligence 54. Wick, M. R., Dutta, P., Wineinger, T. & Conner, J. (1995). Reconstructive Explanation: A Case Study in Integral Calculus. Expert Systems with Applications 8: 463–473. Wu, D. (1991). Active Acquisition of User Models: Implications for Decision-Theoretic Dialog Planning and Plan Recognition. User Model and User Adapted Interaction 1: 149–172. Ye, L. R. (1995). The Value of Explanation in Expert Systems for Auditing: An Experimental Investigation. Expert Systems with Applications 9: 543–556. Ye, L. R. & Johnson, P. E. (1995). The Impact of Explanation Facilities on User Acceptance of Expert System Advice. MIS Quarterly (June): Management Information Systems, 157–172. Yearwood, J. & Stranieri, A. (1999). The Integration of Retrieval, Reasoning and Drafting for Refugee Law: A Third Generation Legal Knowledge Based System. In Proceedings of the Seventh International Conference on Artificial Intelligence and Law: ICAIL'99, 117–137. ACM Press. Yu, B. & Singh, M. P. (2000). A Social Mechanism of Reputation Management in Electronic Communities. In Klusch, M. & Kerschberg, L. (eds.) Cooperative Information Agents IV: The Future of Information Agents in Cyberspace, 154–165. Springer Verlag. Zanella, M. & Lamperti, G. (1999). Justification Dimensions for Complex Computer Systems. Proceedings of the World Multiconference on Systems, Cybernetics and Informatics, vol. 8, 317–324. Orlando, Florida, June–August.