Rationality in Constructive Dialogue Management

Kristiina Jokinen*

Computational Linguistics Laboratory
Graduate School of Information Science
Nara Institute of Science and Technology
8916-5 Takayama, Ikoma, Nara 630-01 JAPAN
Email: [email protected]

* I would like to thank Graham Wilcock for fruitful discussions and comments, and Yuji Matsumoto for providing an excellent environment in which to continue the research.

Summary: The paper discusses rationality from the point of view of communication and argues that rationality is not a property which an agent can or must have, but rather an effect that arises from the agent's behaviour if it seems to conform to shared assumptions about what is an appropriate and acceptable action in a given situation. A new approach to computational dialogue management, Constructive Dialogue Management, is also sketched as the basis for system design.
Introduction
Traditionally, agents are described as capable of performing purposeful, goal-directed actions (von Wright 1971), and their rationality appears in their ability to plan appropriate actions to reach the goal. In computational dialogue management, this has led to an overemphasis on plan-based behaviour: the main task in communication is taken to be the discovery of the partner's goal and the production of a cooperative response which helps the partner to achieve her goal (Allen and Perrault 1980). No particular means are needed to evaluate the rationality of the original goal, since by default the agents' goals are taken as rational and their plans as clear. If the adopted plan becomes inappropriate later because of the changed context, reconsideration is effected only to the point where another plan decomposition is tried (Litman and Allen 1987). Thus, because the goal is fixed, the agent persists with the original goal and tries 'fanatically' to achieve it as long as the reasons for adopting the goal persist, cf. (Cohen and Levesque 1990). However, the agents' environment is not static, and thus they need to attend to the usually unpredictable changes that can occur due to other agents' acts or natural forces. An alternative view thus sees agent behaviour as fundamentally reactive: the agent evaluates contextual changes and adapts her behaviour to the changed context. In AI, e.g. (Brooks 1986), reactivity usually refers to a system that includes no symbolic representation or reasoning, while in dialogue
management, the term has been used to describe the system's ability to adapt its responses on the basis of the feedback that it gets from the user, e.g. (Moore and Swartout 1991). Although this approach relates rationality to the agent's capability to handle the requirements of the changing context, it undermines the agent's capability for plan-based behaviour: if the agent only reacts to certain stimuli, she cannot have or fulfil her own goals because the changing situation requires constant modification of the plan; imagine a busy day in the office where continuous interruptions prevent one from continuing with one's own work. On the other hand, if the agent can pursue her own goals, she is often confronted with situations where her goals contradict those of the partner. Since the agent need not benevolently agree to take up the partner's goal every time there is such a conflict, she needs to evaluate the rationality of the goals to resolve the conflict (Galliers 1989). Moreover, she needs to deliberate over whether to react to the changed situation at all. Given the agent's resource-boundedness (Bratman et al. 1988), deliberation at every step is unsatisfactory and in practice also useless: an omniscient agent who can infer all the consequences of her beliefs and take all and only appropriate actions is an idealisation which is neither rational nor possible (Cherniak 1986). Hence, the agent must have some means to evaluate the rationality of her goals and plans, a way to justify her persistence, benevolence, or non-reaction to change in a given situation.

Agents are most often engaged in situations where other agents are also present.[1] The rationality of actions and of the goals underlying them is thus judged in interactive communication situations where actions are governed by social obligations. On the one hand, there is a need or an obligation to react to the changing environment (open a window when it is too hot, find food when hungry, reply when addressed, etc.), and the agents try to change or maintain the state of affairs favourable to them. On the other hand, smoothness of interaction requires that agents should not prevent other agents from fulfilling their goals, and thus actions which show the agents' sincerity, motivation and consideration towards the other agent(s) are favoured.

[1] Actually, single-agent situations are not isolated either, but are governed by unconscious mental images and assumed requirements of other agents, or by consciousness of the absence of other agents.

Consequently, we advocate a view which we call constructive: by means of communication, the agent constructs a model of how to achieve her goals while simultaneously taking into account contextual requirements and the partners' goals. The goals are independently set to accomplish some real-world task (rent a car, assemble a pump, book a flight, etc.), but usually the distribution of knowledge is such that the goals cannot be successfully reached by the agent alone: she needs to communicate with other agents to obtain the missing information, cf. (Guinn 1994). The agent's acts are thus based on cooperation, and they become part of general social activity, constrained by normative obligations which concern rational agency.

It is tempting to view rationality as a property that agents may or may not possess, and to try to find suitable measurements for the rationality of their behaviour. However, this paper argues that rationality emerges as an effect of the agent's successful goal fulfilment, provided that epistemic requirements are counter-balanced with ethical consideration, and thus no measurable definition of rationality can be given. Instead, rationality is encoded in the reasoning processes used to achieve the goal: it is a procedural concept that does not exist without the activity that drives communication in the first place.

The paper is organised as follows. We first discuss theoretical aspects of rationality which have been overlooked in previous research: the basis of rationality in interactive communication and its relativised nature. We then describe Constructive Dialogue Management as the basis of system design: the agent's actions are seen as part of social activity, constrained by normative obligations which concern appropriate and acceptable behaviour in general. Finally, conclusions and future directions are given.
What is rationality?
Rationality and communication
In his studies of linguistic communicative interaction, (Allwood 1976) summarises the traits of a normal, rational agent in the following principles: Agenthood (intentional, purposeful and voluntary behaviour), Normality (motivated action so as not to decrease pleasure or increase pain), and Rationality (adequacy of actions and competence of the agent). These are to be understood as "statements of norms that an individual agent tries to follow in his own behavior", or "statements of assumptions that typical socialized agents make about the behaviour of other individuals." Moreover, rational, competent agents are engaged in Ideal Cooperation if they:
1. are voluntarily striving to achieve the same purposes,
2. are ethically and cognitively considering each other in trying to achieve these purposes,
3. trust each other to act according to (1) and (2) unless they give each other explicit notice that they are not.

Cooperation does not mean that the agents always try to react in the way the partner intends to evoke.[2] Rather, Ideal Cooperation captures the agents' basic attitude: their actions are based on a willingness to receive, evaluate and react to the partner's contributions. Communication may include pursuing a conflict with the partner, but if such a conflict becomes so serious that it makes any cooperation impossible, communication will break down as well.

[2] As (Galliers 1989) points out, if agents are always in agreement and ready to adopt the other's goals, they are benevolent rather than cooperative.

Cognitive consideration refers to the epistemic side of rationality: the agent's knowledge of what is a rational way to achieve joint purposes. On the basis of this knowledge, the agent can predict the partner's reaction and plan her own acts so as to follow the principles of normal, rational agenthood. However, Allwood also emphasises the ethical aspect of acts: the agent should not only act according to some norms, but also not act in such a way that other agents are unable to maintain their rationality. The ethical dimension provides a counter-force to epistemic rationality, which is often insufficient in accounting for the rationality of apparently irrational actions. An agent may choose to increase pain instead of pleasure (e.g. not disclose information which would save her own face but cause harm or embarrassment for somebody else), or choose a method which is both inefficient and unacceptable for a particular task (e.g. follow the instructions of a superior knowing that they involve a long detour), but still her behaviour can be described as rational since it shows consideration towards the overall social situation. The rationality of such seemingly irrational actions is accounted for by the act's ethics: the agent should not only attempt to fulfil her goals in the most rational way, but she should also not prevent other agents from fulfilling their goals.

Inspired by Allwood's approach to communication as rational and cooperative activity, we regard rationality as an effect that emerges if the agent's behaviour seems to conform to the principles of Ideal Cooperation. Given the goal(s) that the agents try to achieve in an interactive situation, their acts are considered rational if they successfully fulfil the goal (operational appropriateness) and do not cause obstacles to other agents (ethical acceptability). An act can be rational to different degrees, depending on how well the agent thinks the act fits goal fulfilment and ethics. If an act is inappropriate or unacceptable, its rationality can be questioned (e.g. how rational it is to increase pain or
use an inefficient method to maintain acceptability), and if the act is both inappropriate and unacceptable, it is deemed irrational. The opposite of rationality is not necessarily irrationality, but an increasing nonrationality which changes to irrationality at some non-fixed point where acts can no longer be considered either appropriate or acceptable.[3]

[3] Allwood also distinguishes a-rational behaviour, which refers to reflexes and other behaviour that is not intentional or purposeful.

Rationality describes an act's context-changing potential, not the actual contextual changes. The actual effects need not be exactly as intended (the answer may be evasive, an attempt to rent a car unsuccessful), although the act itself is considered rational: the agent is resource-bound and cannot know all the factors that influence the intended effects at the time of acting. On the other hand, achieving the intended effects does not render an act rational either: e.g. ordering a taxi without the intention to go somewhere is considered irrational (or at least a bad joke) even though the effect of the request is achieved when the taxi arrives. Rationality is thus tied to the act's assumed function in a larger context: rational acts are instruments in achieving some goal.

Given the finiteness of the agent's cognitive capacity, the problem of choice now arises: which of all the possible inferences (sound and feasible inferences that the agent can produce) are operationally appropriate (focussed on in a given situation), and which of the appropriate ones are also ethically acceptable (allow the partner to maintain her rationality) to be undertaken in the given context. Agents can opt for suboptimal action (action that is not maximally appropriate), if this is believed to be more acceptable in the context, and for unethical action (not maximally acceptable), if this is believed to be more appropriate in the context. The tradeoff between the two extremes is tied to the agents' values and commitments, and can be modelled by a cost-reward mechanism where inferences are assigned costs or rewards depending on whether or not they are based on the beliefs which the agent assumes to be currently focussed on and to which she has committed herself.

The agent's commitment to the particular belief set which she considers pertinent to the situation is important, since the agent can hold a set of beliefs which is inconsistent as a whole. She can base rationality judgements on any consistent subset of her beliefs, since a consistent set of beliefs is not sufficient for the implied action to be undertaken: it is not contradictory to describe an act as rational, yet not undertake it. For instance, one may understand that the affirmative action clause in job announcements is rational from the point of view of women's under-representation in higher managerial posts, yet not act according to it in practice. The agent can understand the rationality of an act on the epistemic level, but to undertake such an act, she must commit herself to the beliefs that form the basis for reasoning. Conversely, if the agent is committed to the beliefs but acts against the implied conclusion, her action is considered irrational.
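To make the cost-reward tradeoff between operational appropriateness and ethical acceptability concrete, the following Python fragment is a minimal sketch, not taken from the paper: the weights, thresholds, belief sets and act names are hypothetical, and the continuum labels only paraphrase the classification given above.

```python
# Hypothetical sketch: scoring candidate acts by operational appropriateness
# and ethical acceptability, with a cost-reward term for belief grounding.
from dataclasses import dataclass

@dataclass
class CandidateAct:
    name: str
    appropriateness: float            # how well the act fits goal fulfilment (0..1)
    acceptability: float              # how little it obstructs the partner's goals (0..1)
    supporting_beliefs: frozenset = frozenset()

def rationality_label(act: CandidateAct, threshold: float = 0.5) -> str:
    """Place an act on the rationality continuum described in the text."""
    appropriate = act.appropriateness >= threshold
    acceptable = act.acceptability >= threshold
    if appropriate and acceptable:
        return "rational"
    if appropriate or acceptable:
        return "questionable (nonrational to a degree)"
    return "irrational"

def score(act: CandidateAct, committed: set, focussed: set,
          w_appr: float = 0.5, w_acc: float = 0.5) -> float:
    """Cost-reward score: inferences grounded in focussed, committed beliefs
    earn a small reward; ungrounded ones incur a small cost (weights arbitrary)."""
    reward = 0.1 * len(act.supporting_beliefs & committed & focussed)
    cost = 0.1 * len(act.supporting_beliefs - committed)
    return w_appr * act.appropriateness + w_acc * act.acceptability + reward - cost

acts = [
    CandidateAct("answer the question directly", 0.9, 0.8,
                 frozenset({"partner_asked", "info_known"})),
    CandidateAct("withhold an embarrassing detail", 0.4, 0.9,
                 frozenset({"detail_harms_partner"})),
]
committed = {"partner_asked", "info_known", "detail_harms_partner"}
focussed = {"partner_asked", "info_known"}
best = max(acts, key=lambda a: score(a, committed, focussed))
print(best.name, "->", rationality_label(best))
```

The point of the sketch is only that appropriateness and acceptability are weighed jointly and that commitment to focussed beliefs shifts the balance; the numerical form of the weighting is an illustration, not part of the theory.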
Relativised nature of rationality
Although planning is needed for any successful action, the act's rationality can usually be assessed retrospectively: no careful deliberation can guarantee the success of an act, due to the incompleteness of the agent's knowledge at a given time. A statement that an act is rational is a value judgement of the act's appropriateness and acceptability rather than an objective description of the state of affairs. It tells us that the observed act is understood on an epistemic level and accepted on an ethical level, but not that the act actually is rational. It may be that the agent lacks information that would render the act irrational, such as when an agent draws a detailed map to help the partner to find a restaurant, but later learns that the restaurant has been closed down and does not exist anymore; the agent's drawing of the map is still considered rational, although it later turned out to be a waste of time. Given the finiteness of cognitive capacity, the agent cannot guarantee the absolute rationality of her actions, only their relative rationality: the agent acts in a way which she believes is rational in a given situation, although the act need not actually be rational. This is often understood by the agents themselves, who add a relativising comment such as "according to my best knowledge".

The weaker view of the agent's rationality is also related to the fact that the agent may believe that an act is rational without this being the case for the partner. It is a commonplace that rationality is relative to the goals of each agent, but what we want to argue here is the dynamic, interactive nature of this intersubjective relativisation. Rationality, being a judgement of the manner in which the agent attempts to reach her goal, is negotiable: the agents can argue about the appropriateness and acceptability of a particular act by requiring and providing explanations, motivations and reasons.[4] Hence, the agents can affect each other's views and opinions, and consequently modify or even change the reasoning behind their judgements, depending on whether convincing arguments can be provided that warrant the change. This ties rationality to a more general context of cultural values and social norms: rationality judgements are often shared by the community members, because the more or less uniform environment tends to favour particular types of actions over others. We will not push this line of argumentation further, although we want to emphasise the role that social aspects play in rationality considerations: although an agent's action may be successful in her goal satisfaction and acceptable with respect to individual people, it need not look rational from the community's viewpoint, and thus "one-sided" rationality may be mutual irrationality, ranging from eccentricity to lunacy.

[4] Moreover, this metalevel communication can be recursive: the appropriateness and acceptability of the explanations, motivations and reasons can be questioned, too.
Rational agent architectures
Constructive Dialogue Management
The theoretical framework discussed above is formalised as an approach to dialogue management that we call Constructive Dialogue Management (CDM). Communication is understood as a fundamentally cooperative activity between rational agents: the participants react to contextual changes and push forward their goals in a joint activity. Rationality is achieved by reasoning about an act's operational appropriateness and ethical acceptability, and by means of communication, the agents construct a model of how to achieve their goals while simultaneously showing consideration for the partner. Rational actions thus deal not only with the strategic formulation and execution of plans, but with the locally managed construction of a context in which the agents can jointly achieve their goals.

The architecture and information flow in a CDM system are shown in Fig. 1. The communication cycle consists of four phases: accept an input (read and parse the input), analyse the input (infer the user goal, new information and topic), react to the input (evaluate and respond to the user goal), and produce an output (generate a surface contribution). Acceptance and generation are part of the system's interface with the user, while analysis and reaction make up the inference engine. The context is a dynamic knowledge base which records the system's knowledge about the ongoing dialogue (and the whole environment in which the interaction takes place). The system also has access to two static knowledge bases not shown in the figure: world knowledge about the entities and their relations, and communicative knowledge about the rules and preferences that describe rational and cooperative communication.

The cycle identifies new information which changes the context and obliges the partner to evaluate the change with respect to her knowledge and then report back the result. The decision of how to react thus falls out as a result of the agent's complying with the communicative principles rather than of her planning the best possible act. Epistemic rationality is encoded in the evaluation process, where the partner's response is evaluated with respect to the agent's knowledge about the current situation (contextual factors deal with the agent's own expectations, her unfulfilled goals, her ability to take initiatives to clarify vagueness, misunderstandings or lack of understanding, and the coherence relation of the partner's act to the previous situation). Evaluation results in an operationally appropriate goal, regarded as a new joint purpose of the communication, which, according to the Ideal Cooperation principles, the agents then try to achieve jointly.
[Figure 1 shows the information flow in the system: the user's string of words passes through ACCEPT (read input, parse input) to a semantic representation; ANALYSE (obligations, coherence, explicitness, propositional content) yields the user goal; REACT/EVALUATE (basic requirements, joint purpose, application knowledge) produces the system goal; RESPOND (obligations, coherence, explicitness, propositional content) and GENERATE (surface generation) return a string of words to the user. All phases read and update the CONTEXT, which records the contributions, central concepts, new information, the user goal, the system goal and the message.]

Figure 1: Information flow in the system.
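As a minimal sketch of the four-phase communication cycle, the following Python fragment mirrors the information flow of Fig. 1 with a dictionary-based context; all handler names and the stubbed heuristics inside them are assumptions for illustration, not the actual system.

```python
# Hypothetical sketch of the CDM cycle: accept -> analyse -> react -> generate,
# with a dynamic context shared by all phases.

def accept(utterance: str) -> dict:
    """Read and parse the input into a (stubbed) semantic representation."""
    return {"words": utterance.split(), "semrep": utterance.lower()}

def analyse(semrep: dict, context: dict) -> dict:
    """Infer the user goal, new information and topic (stubbed heuristics)."""
    goal = "request_info" if "?" in semrep["semrep"] else "provide_info"
    return {"user_goal": goal, "new_info": semrep["words"], "topic": context.get("topic")}

def react(user_goal: dict, context: dict) -> dict:
    """Evaluate the user goal against the context and decide a system goal."""
    system_goal = "answer" if user_goal["user_goal"] == "request_info" else "acknowledge"
    return {"system_goal": system_goal, "new_info": user_goal["new_info"]}

def generate(system_goal: dict) -> str:
    """Produce a surface contribution from the system goal (stubbed)."""
    return f"[{system_goal['system_goal']}] " + " ".join(system_goal["new_info"])

def communication_cycle(utterance: str, context: dict) -> str:
    semrep = accept(utterance)
    user_goal = analyse(semrep, context)
    system_goal = react(user_goal, context)
    # The dynamic context records what the analysis and reaction contributed.
    context.update({"user_goal": user_goal, "system_goal": system_goal})
    return generate(system_goal)

context = {"topic": "car rental"}
print(communication_cycle("Can I rent a car for tomorrow?", context))
```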
However, the joint purpose describes the desired next state in a context where no communicative obligations hold, and thus it must be specified with regard to ethical acceptability: the agent's communicative competence is shown in the way she realises the same goal in different contexts, or even changes the goal to a new one if it is not acceptable. The goal intentions are thus filtered with respect to communicative obligations, which encode the ethical aspect of rationality: motivation rules describe what must be true in the context if an intention is to be included in the message, and consideration rules deal with intentions which should be included given particular facts in the context. Some obligations are given in Fig. 2 in verbal form. The set of selected intentions (the message) will be encoded in the expressive and evocative attitudes which are to be conveyed by the surface contribution.

Motivation: the speaker can support the response
1. Everything that the speaker wants to know or wants the partner to do is motivated, except if the speaker cannot take the initiative on it.
2. Everything that addresses what the partner wanted to know or wanted the speaker to do is motivated.
3. Everything that informs of an inconsistency is motivated.

Consideration: the speaker attends to the partner's needs
1. If the partner's goal cannot be fulfilled (presuppositions are false, facts are contradictory, no information exists), it is considerate to inform why (give an explanation, add compensation, initiate repair).
2. If the partner's response is unrelated, it is considerate to inform of the irrelevance, given that the speaker has unfulfilled goals.

Figure 2: Some communicative obligations.
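To make the filtering step concrete, here is a small sketch of how motivation and consideration rules might select intentions for the message; the predicates, intention types and context keys are invented for illustration and only paraphrase the verbal rules of Fig. 2, not the rule formalism used in the system.

```python
# Hypothetical illustration: filtering goal intentions through motivation and
# consideration rules to build the message.

def motivated(intention: dict, context: dict) -> bool:
    """Motivation rules: what must hold in the context for an intention
    to be included (loose paraphrase of Fig. 2)."""
    if intention["type"] == "ask" and not context.get("can_take_initiative", True):
        return False
    if intention["type"] == "inform" and intention.get("addresses_partner_goal"):
        return True
    return intention.get("informs_inconsistency", False) or intention["type"] == "ask"

def considerate_additions(context: dict) -> list:
    """Consideration rules: intentions that should be added given the context."""
    extra = []
    if context.get("partner_goal_unsatisfiable"):
        extra.append({"type": "explain", "content": context.get("failure_reason", "unknown")})
    if context.get("partner_response_unrelated") and context.get("own_goals_unfulfilled"):
        extra.append({"type": "inform_irrelevance"})
    return extra

def build_message(intentions: list, context: dict) -> list:
    """Keep the motivated intentions and add the considerate ones."""
    message = [i for i in intentions if motivated(i, context)]
    message.extend(considerate_additions(context))
    return message

intentions = [{"type": "inform", "addresses_partner_goal": True, "content": "no cars available"}]
context = {"partner_goal_unsatisfiable": True, "failure_reason": "all cars are booked"}
print(build_message(intentions, context))
```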
Conclusion
In searching for the definition of rational action, our viewpoint has been communication in general: agents are engaged in interactive communicative situations and their actions are governed by social obligations. Communication is fundamentally cooperative interaction whereby the agents plan and take initiatives to fulfil their goals, but also respond to the changes in the context in which the interaction takes place. The observation that rationality captures aspects of the agents' social competence led us to consider rationality as an effect that emerges when the agents comply with the requirements of general communicative principles, rather than a static property that the agents possess. Rationality is an instrumental concept, a sign of the agent's competence to plan, coordinate and choose actions so that her behaviour looks competent and fulfils the goal(s) which motivated the action. Following (Allwood 1976), we also emphasise the ethical aspect of acts: the agents attempt to fulfil their goals in the most appropriate way, but also act so as not to prevent other agents from fulfilling their goals. The ethical dimension provides an important counterforce to epistemic rationality, which only deals with the agent's knowledge of what is a rational way to achieve one's goals. Operational appropriateness and ethical acceptability form two extremes of a rationality continuum on which the agents can place their rationality judgements. An interesting future research topic is the formation of this continuum: how the agents' viewpoint, background assumptions, goals and social obligations affect the ranking of an action as appropriate and ethically acceptable.
References
J. F. Allen and C. R. Perrault. Analyzing intention in utterances. Artificial Intelligence, 15:143-178, 1980.

J. Allwood. Linguistic Communication as Action and Cooperation. Department of Linguistics, University of Göteborg, 1976. Gothenburg Monographs in Linguistics 2.

M. Bratman, D. Israel, and M. E. Pollack. Plans and resource-bounded practical reasoning. Computational Intelligence, 4:349-355, 1988.

R. A. Brooks. A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, 2(1):14-23, 1986.

C. Cherniak. Minimal Rationality. The MIT Press, Cambridge, Massachusetts, 1986.

P. R. Cohen and H. J. Levesque. Persistence, intention, and commitment. In P. R. Cohen, J. Morgan, and M. E. Pollack, editors, Intentions in Communication, pages 33-69. A Bradford Book. The MIT Press, Cambridge, Massachusetts, 1990.

J. R. Galliers. A theoretical framework for computer models of cooperative dialogue, acknowledging multi-agent conflict. Technical Report 172, University of Cambridge, Computer Laboratory, 1989.

C. I. Guinn. Meta-Dialogue Behaviors: Improving the Efficiency of Human-Machine Dialogue - A Computational Model of Variable Initiative and Negotiation in Collaborative Problem-Solving. PhD thesis, Duke University, 1994.

D. Litman and J. F. Allen. A plan recognition model for subdialogues. Cognitive Science, 11:163-200, 1987.

J. D. Moore and W. R. Swartout. A reactive approach to explanation: Taking the user's feedback into account. In C. L. Paris, W. R. Swartout, and W. C. Mann, editors, Natural Language Generation in Artificial Intelligence and Computational Linguistics, pages 3-48. Kluwer Academic Publishers, Norwell, Massachusetts, 1991.

G. H. von Wright. Explanation and Understanding. Routledge and Kegan Paul, London, 1971.