User Modelling in A Flexible and Robust Interface Jung-Jin Lee and Robert McCartney

Department of Computer Science and Engineering University of Connecticut Storrs, CT 06269-3155, U.S.A [email protected] and [email protected] Phone: 203-486-0005 Fax: 203-486-4817 Keywords:

Intelligent Interface, Plan Recognition, User Modeling

Abstract

The field of Human Computer Interaction (HCI) proposes several kinds of models for predicting aspects of users' behavior. Although these models deal with users' behavior with interactive computer systems, they do so rather poorly. HCI should be designed to be versatile enough to handle a broad range of interactions between a human and a machine, rather than depending on a pre-defined user model. A goal in HCI is to develop cooperative systems: for a system to cooperate with a user, it must be able to determine what the user is trying to accomplish; that is, it must recognize user plans. For a system to exhibit the intelligence required in a dialogue with a user, it needs the ability to understand and reply properly to a wide range of responses. Additionally, if a system fully participates in a dialogue, it can support ad hoc requests from a user, maintain the topic of discourse, and keep a user from digressing from the subject. Toward this goal of participation, we focus on handling a user's answer to a question posed by the system. The adaptability of a system is essential when the interaction between a user and the system is a dialogue. To be flexible and incremental, a system should be able to adapt itself by reconstituting a plan. We use meta-rules as a strategy for controlling an irregular discourse; these can be applied to unexpected responses from a user to analyze whether the responses are incomplete, ambiguous, or inexact in relation to the expectation. Further adaptability in the system is provided by the process of plan creation.

1 Introduction

The field of Human Computer Interaction (HCI) proposes several kinds of models for predicting aspects of users' behavior. Although these models deal with users' behavior with interactive computer systems, they do so rather poorly. HCI should be designed to be versatile enough to handle a broad range of interactions between a human and a machine, rather than depending on a pre-defined user model. A goal in HCI is to develop cooperative systems: for a system to cooperate with a user, it must be able to determine what the user is trying to accomplish; that is, it must recognize user plans. For a system to exhibit the intelligence required in a dialogue with a user, it needs the ability to understand and reply properly to a wide range of responses. Additionally, if a system fully participates in a dialogue, it can support ad hoc requests from a user, maintain the topic of discourse, and keep a user from digressing from the subject. Toward this goal of participation, we focus on handling a user's answer to a question posed by the system. By its nature, a dialogue-capable interface needs to be modeled dynamically [4]. The adaptability of a system is essential when the interaction between a user and the system is a dialogue. To be flexible and incremental, a system should be able to adapt itself by reconstituting a plan. We use meta-rules as a strategy for controlling an irregular discourse; these can be applied to unexpected responses from a user to analyze whether the responses are incomplete, ambiguous, or inexact in relation to the expectation. Further adaptability in the system is provided by the process of plan creation [8], which enhances the plan library over time.

2 Related Work

Our system, called PIUM (Plan-based Interface Using Meta-rules) [11], is an initiative interface, that is, an interface that actively participates in a dialogue. The work on PIUM encompasses the research of mainstream plan recognition and user modeling: providing more intelligent interfaces for interactive environments.

Work has been done addressing problem-solving in plan recognition. Kautz and Allen [10] used a graph search algorithm for this process. The algorithm is based on an action taxonomy, an exhaustive description of the ways for actions to be performed. Their algorithm works by taking observations and chaining up the abstraction and decomposition hierarchy until a top-level action (TLA) is reached. A simplicity constraint is applied to minimize the TLAs and propagates down the hierarchies. Their work runs under closed world assumptions, such as complete action hierarchies and the purposefulness of all actions. Calistri [2] executes a plan recognition process with a modified A* algorithm through a plan hierarchy. His algorithm takes one observed action and a plan hierarchy in an AND/OR tree, and searches for a path to find a top-level goal, while trying to deal with violated plans and ill-formed plans. Although his work requires no closed world assumptions, it has no means to choose one plan over the others among multiple possible plans, a shortcoming shared with Kautz's work.

Work has also been performed to create intelligent interfaces [7], [12], [13]. Carberry [3] advocates the need for dynamic construction of a plan for the analysis of naturally occurring dialogue. For dynamic plan recognition, the system uses a tree structure called a context

model to represent the system's beliefs about the user's plan through the dialogue. The context model is adjusted and expanded as its hypothesis is updated. Plan identification heuristics are used to hypothesize domain actions and to identify the coherent relationship between a hypothesized action and the user's current focus. Dynamically computed preference rules using Dempster-Shafer methods are used to rank alternative conclusions according to plausibility.

A great deal of research on intelligent interfaces has been in the domain of discourse-based consultants. TAILOR [13] uses information about a user's level of expertise to combine several discourse strategies in a single text in order to generate texts for users. The model of the user's expertise is used to tailor a reply to the user's question. KNOME [7] is the user modeling component of Unix Consultant (UC). It models a user's knowledge to tailor responses through the interaction with the user. It also maintains a model of the user's knowledge, deduces individual facts about what the user does or does not know, and then combines the evidence to adjust the likelihood ratings for the user's expertise level. A reactive method [12] for an explanation process has been proposed, using information concerning the goal structure of the explanation, assumptions made during its generation, and alternative strategies for follow-up questions. Most work has been done in a single particular domain, which requires a specific knowledge base and user models. Additionally, most systems do not support inference about what a user may know from what he or she knows. Moreover, most work does not keep track of a user's level of expertise and change its judgment of that level, even though the user's expertise may increase as time goes by.

Cohen et al. [14] use plan recognition to improve responses to users in an advice-giving setting. In their work, to improve dialogue coherence, they characterize when to engage a user in clarification dialogue with a default strategy and provide options to reduce the length of the clarification dialogue. They claim that their system never overcommits in recognizing the user's plan, so that backtracking for debugging dialogue is unnecessary. However, it still runs under the closed world assumptions of a complete plan library and user correctness.

Chien and DeJong [6] postulate that a new problem-solving concept is learned through observing, explaining, and generalizing an expert's solution. The system detects learning opportunities, constructs an explanation of the expert's solution to the particular problem, and generalizes the explanation of the observed solution. They deal with the persistence problem, that is, reasoning efficiently about the effects of actions over time, by learning which defeasible assumptions can cause failures in persistence. Their approach uses discrepancies between observed and predicted goal achievement to focus on cases in which faulty assumptions should be corrected. The system understands preventive measures by analyzing examples of failures. It determines relevant failures, detects prevent operators, and modifies the plan. They present a default reasoning method for recognizing and understanding prevention in plans for explanation-based learning. As an unexpected success may occur when the system predicts that a plan will fail, learning based only on generalized explanations of failures may lead to overly pessimistic prediction.

Charniak and Goldman [5] present a Bayesian model of plan recognition and argue that the problem of plan recognition is largely a problem of inference under conditions of uncertainty. The key to their model is a set of rules for translating plan recognition problems into Bayesian networks. It uses a marker-passing scheme as a focusing mechanism. However, the marker-passing scheme is not itself a plan recognition scheme; it merely helps in the creation

of concepts and links between concepts that might be useful, given the observation. They claim that Bayesian networks require only modest sets of numbers and only reasonable independence assumptions, countering the usual arguments against a probabilistic approach.

Most of the above work allows interaction between human and interface by placing most of the focus on inferring the user's intention in a passive way. The work on PIUM offers two-way communication by actively participating in a dialogue. Relating to general problems in plan recognition, PIUM addresses the preference among multiple plans and the validation of a pending plan through its meta rules, while it keeps the assumption of the correctness of plans.

3 A Motivating Example

PIUM is implemented in the specific domain of in-state phone information service: it performs a dialogue with persons wanting phone numbers, as in the following example.

1) INFORMATION: Which city are you looking for ?
2) CALLER     : Amherst.
3) INFORMATION: Amherst, MA ?
4) CALLER     : Yes.
5) INFORMATION: Where are you calling from ?
6) CALLER     : Storrs, CT.
7) INFORMATION: You must dial 1-413-555-1212

In item 1, the system initiates a dialogue with a default plan, while expecting a city name among 169 cities in Connecticut. If the response fulfills the expectation, then it continues the plan.

In item 2, the response results in Unsuccessful Expectation Fulfillment because the city Amherst is not in CT. The system invokes meta-rules to handle the failure by trying to determine what plan the user is trying to follow.

In item 3, the system determines (via information in its database) that there is an Amherst in Massachusetts, and that it knows a plan for getting telephone information in Massachusetts (from its plan library). PIUM hypothesizes the user's intent and attempts to get a confirmation from the user with "Amherst, MA ?", using the HYPOTHESIZE meta-rule.

In item 4, when the hypothesized intention is confirmed by the user's reply, the action part of the meta-rule is activated. As that action, PIUM searches the plan library for a plan applicable to the current situation; the retrieval shows that two different plans are available, depending upon whether the target's area equals the caller's area.

In item 5, PIUM creates and poses a new query to learn the caller's location and clarify between the two plans. The system recognizes that additional information is needed; the current plan is modified by adding the new query, and this modified plan will be stored as a new plan. With this plan creation, the system learns and is reinforced for future adaptation.

In item 6, when PIUM receives "Storrs, CT", it decides that the OUT-OF-STATE information request plan is suitable over the LOCAL information request plan.

In item 7, PIUM concludes the case by responding with the data retrieved from the database for the area code of Amherst.
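The handling of items 1 through 3 above, checking the reply against the expected set of Connecticut cities and, on failure, consulting the database to form a hypothesis for confirmation, can be sketched in Python. This is only an illustration: the actual system is written in Common Lisp, and the city sets, function name, and return tags here are invented for the example.

```python
# Toy stand-ins; PIUM's real data lives in its database, not here.
CT_CITIES = {"storrs", "new haven", "berlin"}        # stand-in for the 169 CT cities
OUT_OF_STATE_DB = {"amherst": "MA", "boston": "MA"}  # toy out-of-state directory

def check_city_reply(reply):
    """Check the reply against the current expectation (a CT city).
    On an Unsuccessful Expectation Fulfillment, consult the database
    and form a hypothesis to confirm with the user (item 3)."""
    city = reply.strip().lower()
    if city in CT_CITIES:
        return ("continue", city)                    # expectation fulfilled
    if city in OUT_OF_STATE_DB:                      # meta-rule territory
        state = OUT_OF_STATE_DB[city]
        return ("hypothesize", f"{reply.strip().title()}, {state} ?")
    return ("expectation-failure", None)

print(check_city_reply("Amherst"))   # hypothesis to confirm with the user
print(check_city_reply("Berlin"))    # a CT city: the plan continues
```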

4 The Model

Limitations of the knowledge base are an important problem in AI: for example, having a closed world or an open world model. Considering the problems in both of these models [7] leads to the point that the ideal system needs to combine the best of both. Based on this idea, PIUM starts with a closed world model but behaves as an open world model as it serves. When a system does not have complete information about an area, an expectation failure will likely occur during a dialogue. The ideal system needs to handle the expectation failure based on "the extent and limit of available knowledge, what facts are relevant in a given situation, how dependable they are, and how they are to be used" [1]. This supports the use of meta-rules.

4.1 Task-Oriented Plans

PIUM starts with an initial set of three task-oriented plans. Without having a rich set of user models, it is guided by knowing where it lacks knowledge and where a user has a misunderstanding. As an initiative interface, PIUM initiates a dialogue with a default plan instead of waiting for a user to submit a request. Users can benefit from the guidance of the system, which delimits the relevant range of a task-oriented plan in the domain. The plans are used to synthesize an instance of a task-oriented plan. Each different interaction is remembered, and each plan related to the instance is also remembered for future adaptation. The default plan is described in Figure 1; other generic plans appear in Appendix A.
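A task-oriented plan of the kind described above, a sequence of actions, each with a goal, a question to ask, and an expectation, can be rendered as data. The classes and field names below are illustrative Python, not the Lisp structures PIUM actually uses, and the two-city set stands in for the full 169-city list.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Action:
    goal: str
    ask: Optional[str]              # question posed by the system, if any
    expect: Callable[[str], bool]   # predicate over the user's response
    on_failure: str                 # failure declared when unfulfilled

@dataclass
class Plan:
    name: str
    assumption: str
    actions: List[Action]

ct_cities = {"storrs", "berlin", "new haven"}   # stand-in for the 169 cities

plan_1 = Plan(
    "DEFAULT PLAN (Plan-1)",
    "The caller's target resides in Connecticut.",
    [Action("Get the city name where the target resides.",
            "What city ?",
            lambda r: r.lower() in ct_cities,
            "Unsuccessful Expectation Fulfillment"),
     Action("Get the target's identifier.",
            "Yes ?",
            lambda r: bool(r.strip()),          # any person or business name
            "Unsuccessful Expectation Fulfillment")],
)

print(plan_1.actions[0].expect("Berlin"))   # a CT city fulfills the expectation
```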

4.2 Meta Rules

PIUM uses meta rules as control knowledge to help it decide the next plan to be followed and to lead a user's responses toward the final goal given an irregular request. In PIUM, meta rules are primarily used in dealing with a system's expectation failures. These meta rules are utilized to infer applicable plans and, if necessary, to drive plan creation. A natural language expression of a meta rule is shown in Figure 2; other meta rules are presented in Appendix A.2. For the sake of brevity, U means the user and S means the system. The details of intention and expectation are not specified in the rules; these are for the purpose of controlling the plan application. How the meta rules can be utilized is explained in the following examples. Due to space constraints, an example of an actual run is shown in Appendix B. For more information, see [11].

Here is an example of hypothesizing intention using the above meta rule:

CALLER     : Amherst
INFORMATION: Amherst, MA ?


DEFAULT PLAN (Plan-1)
Assumption : The caller's target resides in Connecticut.

Action-1
  Goal   : Get the city name where the target resides.
           [DB: Specify the city-directory relation]
  ASK    : What city ?
  EXPECT : A city name among 169 Connecticut cities.
           If the response fulfills the expectation, then continue;
           Otherwise, declare Unsuccessful Expectation Fulfillment.

Action-2
  Goal   : Get the target's identifier.
           [DB: Specify the selection condition for name]
  ASK    : Yes ?
  EXPECT : Either a name of a person or a business.
           If the response fulfills the expectation, then continue;
           Otherwise, declare Unsuccessful Expectation Fulfillment.

Action-3
  Goal     : Retrieve the target's phone number from the DB.
  DB ACTION: RETRIEVE phone # FROM ?city-directory WHERE name = ?target
  EXPECT   : Phone number(s) for the unique target.
             If the response fulfills the expectation, then continue;
             Otherwise, declare DB Retrieval Problem.

Action-4
  Goal    : Provide the target's telephone number.
  RESPOND : Answer (DB action). Successful plan execution.

Figure 1: Task-Oriented Plan

1) The Need of Hypothesizing Intention
IF     S needs to hypothesize U's current intention as K
       for the application of the plan P;
THEN   inquire of U to verify its hypothesis on K.
EFFECT if responded affirmatively, apply P;
       otherwise invalidate the hypothesis on K without applying P.

Figure 2: Natural Language Expression of A Meta-Rule
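The IF/THEN/EFFECT shape of Figure 2 can be sketched as a small function: ask the user to verify the hypothesis, and apply the plan only on an affirmative reply. The function name, argument shapes, and string-valued plans are assumptions made for this illustration; PIUM's actual rules are written in the Lk predicate-logic language.

```python
def hypothesize_rule(hypothesis, plan, confirm):
    """IF S needs to hypothesize U's intention as `hypothesis` (K) to
    apply `plan` (P), THEN ask U to verify it.
    EFFECT: apply P on an affirmative reply, else invalidate K."""
    if confirm(f"{hypothesis} ?"):      # inquire of the user
        return ("apply", plan)          # EFFECT: apply P
    return ("invalidate", hypothesis)   # EFFECT: drop K, do not apply P

# The user confirms "Amherst, MA ?", so the out-of-state plan is applied.
print(hypothesize_rule("Amherst, MA", "PLAN-2", lambda q: True))
```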

The system expects a city name in Connecticut, but the user responds "Amherst", which is not a city in Connecticut; the system then finds possible information by retrieving from the database. When the information is derived from the database rather than given by the user, the system needs to confirm the hypothesis with the user. In this case, the system hypothesizes that Amherst refers to Amherst, MA.

An example of rephrasing the unanswered question:

INFORMATION: What city ?
CALLER     : The last name is Williams, and the first initial E.
INFORMATION: Which city is E. Williams for ?
CALLER     : Oh, this is for BERLIN.
INFORMATION: The phone number is 264-8117.

Referring to the default plan in Figure 1, PIUM initiates a dialogue and receives the response 'E. Williams' from a user. Since a name of a person or a business is expected only after the system knows the region of the caller's target, the system rephrases the request, incorporating the user's response, to obtain the answer to the unanswered question, that is, the city of the caller's target.

Plan creation (as mentioned in items 4 and 5 of the motivating example) is similar to Collins' work [8]. In the situation in item 4, the target's location is in Massachusetts and the caller's location is unknown. This causes a plan failure and presents a possible linkage to plan 2 or plan 3 (Appendix A). Therefore, a new query is made to get the missing information and distinguish between the two applicable plans. Based on the user's response, if the target's area is the same as the user's area, then plan 3 will be selected; otherwise, plan 2. In order to distinguish between plan 2 and plan 3, the following actions are made. Since neither plan can handle the current features alone, after the new query is created, the actions are included in a pending plan and the modified plan is stored in the plan base as a new plan for future adaptation. In this way, the system's discourse models are enhanced as PIUM gains more experience. The newly created plan, including the following new actions, is fundamentally a merge of plan 2 and plan 3.

Action-n
  Goal   : Get the location of the caller.
           [Compare the caller's location with the target's location]
  ASK    : Where are you calling from ?
  EXPECT : A city name in conjunction with a state name.

Action-n+1
  COND : If the caller's location ≠ the target's location, Then PLAN Two
  COND : If the caller's location = the target's location, Then PLAN Three
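The plan-creation step above, finding the disparity between two applicable plans, turning it into a clarifying query, and storing the merged plan for later use, can be sketched as follows. The dictionary shapes, key names, and helper functions are all invented for this illustration.

```python
def find_disparity(plan_a, plan_b):
    """Return the assumption keys on which the two plans differ:
    the crucial missing information the new query must obtain."""
    keys = set(plan_a["assumes"]) | set(plan_b["assumes"])
    return sorted(k for k in keys
                  if plan_a["assumes"].get(k) != plan_b["assumes"].get(k))

def create_merged_plan(plan_a, plan_b, query):
    """Build a new plan whose first action asks the clarifying query
    and whose branch selects plan_a or plan_b based on the answer."""
    return {
        "name": f"{plan_a['name']}+{plan_b['name']}",
        "actions": [{
            "ask": query,
            "branch": {True: plan_b["name"],     # same location -> Plan 3
                       False: plan_a["name"]},   # different     -> Plan 2
        }],
    }

plan_2 = {"name": "PLAN-2", "assumes": {"caller_loc_equals_target_loc": False}}
plan_3 = {"name": "PLAN-3", "assumes": {"caller_loc_equals_target_loc": True}}

print(find_disparity(plan_2, plan_3))            # the fact the query must settle
merged = create_merged_plan(plan_2, plan_3, "Where are you calling from ?")
print(merged["name"])                            # stored for future adaptation
```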

5 Prototyping and Testing

The key ideas of this research are:

- posing questions to the user is a good way to distinguish plans, obtain missing information, and maintain focus in the dialogue, and
- meta-rules can enhance the adaptability of the system, and can mitigate unrealistic assumptions, such as users and knowledge bases having complete and correct information.

These ideas are applicable to recognition systems that assist users in particular problem domains, where the system has more (and better) information about the problem domain than the user. It is advantageous for a system to maintain relevance here; the goal is that the system provide useful information without confusing the user with unnecessary details. The presence of ad hoc user responses increases uncertainty, but such uncertainty as can be recognized can be handled by asking the appropriate questions instead of performing expensive inferences.

5.1 Control Strategy

As an initiative interface, PIUM opens a dialogue with its own questions, pre-stored in a default task-oriented plan. While matching the system's expectation against the user's response, meta-rules are invoked to handle the current situation if an expectation failure occurs. Handling incomplete information is pursued to infer a more suitable plan when multiple plans are applicable to the current situation. This is done by identifying the crucial information needed to distinguish one plan from the other and making a new query to obtain that information. A new plan is created based on the added information for later use.
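One way to read this control strategy is as a loop: walk the plan's actions, match each response against the expectation, and hand expectation failures to the meta-rules. The sketch below is Python with invented names (the real system is in Common Lisp), and the single rephrase-style rule stands in for the full rule set.

```python
def run_dialogue(actions, responses, meta_rules):
    """Walk the plan's actions; on an expectation failure, inspect the
    meta-rules and apply the first one that proposes a recovery."""
    transcript = []
    replies = iter(responses)
    for action in actions:
        transcript.append(("INFORMATION", action["ask"]))
        reply = next(replies)
        transcript.append(("CALLER", reply))
        if action["expect"](reply):
            continue                        # expectation fulfilled
        for rule in meta_rules:             # expectation failure
            recovery = rule(action, reply)
            if recovery is not None:
                transcript.append(("INFORMATION", recovery))
                break
    return transcript

# A one-action plan plus a rephrase-style rule (illustrative names).
plan = [{"ask": "What city ?", "expect": lambda r: r in {"Storrs", "Berlin"}}]
rephrase = lambda action, reply: f"Which city is {reply} for ?"

for speaker, line in run_dialogue(plan, ["E. Williams"], [rephrase]):
    print(f"{speaker}: {line}")
```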

5.2 Plan Representation

Each generic plan in the plan library consists of a sequence of actions, and an action carries a question posed by the system together with its expectation. An action is described in different parts: goal, to check the fulfillment of an expectation; query, to deliver a question; expect, to carry an expectation; and done, to indicate the execution of the action. Like most plan recognition systems, PIUM has task-oriented plans which have a rigid temporal ordering of actions. However, utilizing meta rules allows the temporal ordering to be flexible. When an expectation failure happens, PIUM can form a hypothesis about the situation, and the user's confirmation of that hypothesis can bring in another plan. This demonstrates an attempt to check the validity of the pending plan once it is chosen.

5.3 Meta rule Representation

The representation language used for meta rules is a predicate logic that associates predicates and events in the form of the Lk language, which attempts to combine the advantages of the foundations of a formal logic with the structural flexibility of frames [15]. A meta rule in PIUM consists of a representation of situations and a handling strategy. Each meta rule includes what PIUM knows about a current situation, how PIUM will handle this situation, and what PIUM knows as possible side effects. Which relation among the meta rules applies is constrained by the current situation.

[Diagram: the relations among the meta-plans (REPHRASE, HYPOTHESIZE, RECOVERER, CLARIFY, PERSISTENT) and the actions Get-Number, Make_Query, Switch_Plan, Check_New_Action, and Area_Code.]

Figure 3: Relations of Meta Rules

The structure of meta rules allows branching and iteration, as in Figure 3. In the current implementation, the order of the meta rules is not critical to the recovery process, since all meta rules are inspected. Moreover, since each different situation has a different strategy, no conflict among meta rules is expected in handling a failure case. Although the ordering of meta rules matters little for efficiency with the limited number of rules at this point, it stands to reason that checking the rephrase meta rule first is preferable, in that it allows PIUM to remain in the same plan.
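The preference just described, all rules are inspected, but a rephrase is tried first because it keeps PIUM in the same plan, amounts to a small selection policy. The ordering list and function below are an illustrative sketch, not PIUM's actual mechanism.

```python
# Hypothetical preference order; REPHRASE first keeps the current plan.
PREFERRED_ORDER = ["REPHRASE", "HYPOTHESIZE", "CLARIFY", "PERSISTENT"]

def select_meta_rule(applicable):
    """Among the applicable rules, pick the earliest in PREFERRED_ORDER."""
    for name in PREFERRED_ORDER:
        if name in applicable:
            return name
    return None

print(select_meta_rule({"HYPOTHESIZE", "REPHRASE"}))   # REPHRASE wins
```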

6 Conclusion

The key ideas of this research are that posing questions to the user is a good way to distinguish plans, obtain missing information, and maintain focus in the dialogue, and that meta rules can enhance the adaptability of the system and can mitigate unrealistic assumptions, such as users and knowledge bases having complete and correct information. The presence of ad hoc user responses increases uncertainty, but such uncertainty as can be recognized can be handled by asking the appropriate questions instead of performing expensive inferences. The number of meta-rules can vary with the domain, to handle a finer level of detail or to suit broader domains, while the meta rules themselves are general enough to be domain-independent. These ideas are applicable to recognition systems that assist users in particular problem domains, where the system has more (and better) information about the problem domain than the user. It is advantageous for a system to maintain relevance here; the goal is that the system provide useful information without confusing the user with unnecessary details.

This work has pointed out the need for some enhancements. For one, we need a richer representation language to be able to handle more complicated situations and plans. Additionally, we need a more robust way of inferring the closest plan as the number of plans increases.

PIUM is implemented in Macintosh Common Lisp.

Acknowledgements

Initial ideas and valuable guidance for this work were provided by Dong-Guk Shin of the University of Connecticut, who was the first author's M.S. advisor and supervised the development of PIUM. Thanks also to Mathias Bauer at DFKI for his encouragement and helpful suggestions. Valuable feedback was provided by Jaeho Lee of the University of Michigan and Karl Wurst of the University of Connecticut.

A Generic Plans and Meta-Rules

A.1 Generic Plans

OUT-OF-STATE Information Request (Plan 2)
Assumption : The caller's location is in-state.

Action-1
  Goal   : Get the city name in which the caller is interested.
           [DB: Specify the city-directory relation]
  ASK    : What city ?
  EXPECT : A city name in conjunction with a state name.
           If the response fulfills the expectation, then continue;
           Otherwise, declare Unsuccessful Expectation Fulfillment.

Action-2
  Goal     : Retrieve the target's area code from the DB.
  DB ACTION: RETRIEVE area-code FROM ?area-code-directory
             WHERE city-name = ?target AND state-name = ?state
  EXPECT   : Area-code for the unique city and state.
             If the response fulfills the expectation, then continue;
             Otherwise, declare DB Retrieval Problem.

Action-3
  Goal    : Provide the combination of the area-code and 555-1212.
  RESPOND : Answer (1-?area-code-555-1212)

LOCAL INFORMATION REQUEST (Plan-3)
Assumption : The target's location is the same as the caller's state.

Action-1
  Goal   : Get the city name in which the caller is interested.
           [DB: Specify the city-directory relation]
  ASK    : What city ?
  EXPECT : A city name in conjunction with a state name.

Action-2
  Goal    : Provide the local information operator number.
  RESPOND : Dial 1-555-1212.


A.2 Examples of Some Meta-rules

2) Choosing One from Multiple Applicable Plans
IF     U's current intention is understood as K,
       AND multiple plans exist, each of which is applicable to K;
THEN   try a clarification whose answer enables selection of the plan
       most applicable to K among the others.
EFFECT the current plan is switched over to the selected plan.

3) Handling Subordinate Discourse Engagement
IF     U's current intention is understood as K,
       AND S is within executing the K-applicable plan P,
       AND U presents a subordinate query Q relevant to P;
THEN   respond to Q appropriately.
EFFECT maintain the goal and intention of the current plan.

4) The Need to Rephrase the Unanswered Question
IF     U's response to question Q is understood as the intention K,
       AND K fulfills some other expectation E of P,
       AND K does not fulfill the pending expectation;
THEN   repeat Q while incorporating the user response into Q.
EFFECT the fact that K fulfills some expectation E of P is recognized
       and remembered.

B Sample Runs

A dialogue between PIUM and a user is described in terms of INFORMATION and CALLER turns. Two runs are listed in this appendix. The first run shows the usage of the rephrase meta rule to obtain the answer to an unanswered question, when an expectation of an upcoming action is fulfilled ahead of the expectation of the current action.

? (PIUM)
===================================================
This is the program for a cooperative & interactive
system to assist users with helpful information.
===================================================
INFORMATION: (city (val (*var*)))
CALLER     : (exist (agent (name (jjl))))
----------------------
EXPECTATION FAILURE Happens.. META-PLAN should be applied.
----------------------
Retrieving meta plans for current user's response
caller input: (EXIST (AGENT (NAME (JJL))))
APPLIED METAPLAN : (PRE-PLAN1)
PROPOSED ACTION by METAPLAN : (REPHRASE)


----------------------
Rephrase requests for user..
INFORMATION: (city (val (*var*))) (EXIST (AGENT (NAME (JJL))))
CALLER     : (city (val (new-haven)))
former-input: (EXIST (AGENT (NAME (JJL))))
City for a current input: NEW-HAVEN
----------------------
***DATABASE ACCESS***
INFORMATION: 772-3035
----------------------------------------------------------------------

In the following run, a user's request does not meet any of the expectations in the current plan (a plan failure), so an applicable plan for the current situation is retrieved from the plan library; PIUM hypothesizes the user's request and asks the user for confirmation. With the confirmation, PIUM switches from the current plan to another one, checking the validity of the pending plan. In this example, multiple plans are available to meet the request, so PIUM uses the meta rules again to choose one over the other: a clarify meta rule is executed to handle the situation. The process of clarification is done in three steps: a disparity step to find the missing or crucial piece of information that distinguishes one plan from the other, a query-creation step to get the additional information, and a selection step to clarify the situation based on that information. In accordance with the newly chosen plan, PIUM, in this example, provides information for a city outside the state of Connecticut. The case is concluded with that information, with the changes having been made within PIUM: the created query is embedded in the holding plan, which is stored as a new plan in the plan library.

? (PIUM)
===================================================
This is the program for a cooperative & interactive
system to assist users with helpful information.
===================================================
INFORMATION: (city (val (*var*)))
CALLER     : (city (val (amherst)))
----------------------
***DATABASE ACCESS***
Possible answer: (MA)
----------------------
EXPECTATION FAILURE Happens.. META-PLAN should be applied.
----------------------
Retrieving meta plans for current user's response
caller input: (CITY (VAL (AMHERST)))
applicable plan: (PLAN-2 PLAN-3)
APPLIED METAPLAN : (PRE-PLAN2)


PROPOSED ACTION by METAPLAN : (HYPOTHESIZE CALLERRESPONSE)
----------------------
Hypothesize caller's response like following:
INFORMATION: (CITY (VAL (AMHERST))) (STATE (MA)) ?
CALLER     : (agree (caller))
----------------------
Hypothesized intention is confirmed
----------------------
Current plan has been a failed plan
Confirmation of the hypotheses brings new plan switches
READY FOR SWITCHING TO AVAILABLE PLAN
----------------------
Multiple plans exist to take over a current plan
----------------------
EXPECTATION FAILURE Happens.. META-PLAN should be applied.
----------------------
Retrieving meta plans for current user's response
caller input: (AGREE (CALLER))
applicable plan: (PLAN-2 PLAN-3)

APPLIED METAPLAN : (PRE-PLAN3)
PROPOSED ACTION by METAPLAN : (CLARIFY CALLERRESPONSE)
----------------------
Clarification should be done to choose one of them
----------------------
Trying to find the disparity of applicable plans
First Status : (LOC (CALLER (CT)))
Second Status: (LOC (CALLER (OTHER)))
----------------------
Trying to make a new query to get an input from user
INFORMATION: (LOC (CALLER (*VAR*)))
CALLER     : (loc (caller (ct)))
----------------------
***DATABASE ACCESS***
INFORMATION: Dial 1-413-555-1212
------------------------------------------------------------------------


References

[1] A. Barr. Meta-knowledge and memory. HPP Technical Report 77-37, Department of Computer Science, Stanford University, Palo Alto, CA, 1977.
[2] R. Calistri-Yeh. Utilizing user models to handle ambiguity and misconceptions in robust plan recognition. User Modeling and User-Adapted Interaction, Special Issue on Plan Recognition, 1991.
[3] S. Carberry. A model of plan recognition that facilitates default inferences. In Proc. of the Second International Workshop on User Modeling, pages 1-13, Honolulu, HI, 1990.
[4] S. Carberry. Plan Recognition in Natural Language. The MIT Press, Cambridge, MA, 1990.
[5] E. Charniak and R. Goldman. A Bayesian model of plan recognition. Artificial Intelligence, 64(1):53-79, November 1993.
[6] S. Chien and G. DeJong. Recognizing prevention in plans for explanation-based learning. 1988.
[7] D. Chin. KNOME: Modeling what the user knows in UC. In User Models in Dialog Systems. Springer-Verlag, New York, NY, 1989.
[8] G. Collins. Plan creation. In Inside Case-Based Reasoning. Lawrence Erlbaum, 1989.
[9] B. Goodman and D. Litman. On the interaction between plan recognition and intelligent interfaces. User Modeling and User-Adapted Interaction, 2:83-116, 1992.
[10] H. Kautz. A formal theory of plan recognition and its implementation. In Reasoning about Plans, pages 69-125. Morgan Kaufmann, 1991.
[11] J.J. Lee. The use of meta-rules to enhance plan-based discourse modeling. Technical report, Department of Computer Science and Engineering, University of Connecticut, Storrs, CT, 1992.
[12] J. Moore and W. Swartout. Pointing: A way toward explanation dialogue. In Proceedings of the Eighth National Conference on Artificial Intelligence, Boston, MA, 1990.
[13] C. Paris. The use of explicit user models in a generation system for tailoring answers to the user's level of expertise. In User Models in Dialog Systems. Springer-Verlag, New York, NY, 1989.
[14] R. Cohen, K. Schmidt, and P. van Beek. A framework for soliciting clarification from users during plan recognition. In Proceedings of the Fourth International Conference on User Modeling, pages 11-17, Hyannis, MA, 1994.
[15] D. Shin. Lk: A language for capturing real world meanings of the stored data. In Proc. of the International Conference on Data Engineering, pages 738-745, Kobe, Japan, 1991.
