Local Plan Recognition in Direct Manipulation Interfaces

Annika Wærn
Human-Computer Interaction and Language Engineering Group
Swedish Institute of Computer Science
Box 1263, S-164 28 Kista
+46 8 752 15 14
[email protected]
ABSTRACT

Plan recognition in direct manipulation interfaces must deal with the problem that the information obtained is of low quality with respect to the plan recognition task. There are two main reasons for this: the individual interactions from the user are at a low level compared to the user's task, and users may frequently change their intentions. We present two example applications where this is the case.

The fact that users change their intentions could be used to motivate an explicit representation of user intentions. However, the low quality of the available information makes such an approach infeasible in direct manipulation interfaces. This paper addresses the problem by maintaining a plan-parsing approach to plan recognition, but making it local to the user's most recent actions by imposing a limited attention span. Two different approaches to implementation are given, in the context of the two presented applications.

Keywords
Plan Recognition, Intelligent Interfaces, Task Adaptation.

INTRODUCTION

Plan recognition can be characterised as the task of recognising the intentions of an agent, based on the agent's actions and explicit statements about intentions. The case when the agent is unaware of, and does not co-operate in, plan recognition is called keyhole plan recognition [4] (as if looking through a keyhole). If the agent is aware of the plan recognition process and actively participates in it, we talk about intended plan recognition. In a trivial sense, intended plan recognition is achieved in a direct manipulation interface, since it can accept commands and execute associated tasks. Intended plan recognition becomes a powerful concept in natural language interaction [4, 16], or in applications where agents can collaborate in performing a task [6, 7]. In direct manipulation interfaces, users typically do not view interaction as a means to negotiate a joint task with the system. Instead, users perceive the system as a tool with which they can perform a set of low-level manoeuvres, which in different ways can be utilised to reach their overall goal. Plan recognition in direct manipulation interfaces must to some extent always be keyhole.

Keyhole plan recognition in direct manipulation interfaces is problematic for two reasons. The first is that the information available from user interactions is of low quality. Since the computer system is a low-level tool with respect to the user's goal, many actions and action patterns will look the same no matter what the user's task is [18]. The second is that since users view the system as a tool, they will feel free to change their intentions without considering whether the computer should know about the change.

The latter is particularly true in information-seeking applications such as the ones discussed in this paper, since users are prone to change their task depending on what information they encounter while exploring the system.

THE NEWS FILTER APPLICATION

The problems for keyhole plan recognition are well illustrated by an application of news filtering [11] that we have studied. A news filter is a functionality that scans novel messages in an electronic news system and grades them according to the interests of a specific user. This way, users can gain rapid access to the most relevant information in a vast information source.

In [11], a filter configuration tool is described which allows users to manually set up their personal filter preferences. Pattie Maes [15] has developed news filters in which users are asked to explicitly mark messages as being interesting or uninteresting. This information is used by the system to learn about the user's preferences. This approach does not require users to configure their filters manually, but they must still provide explicit information about their preferences.

Plan recognition may provide an even more "automatic" way of detecting preferences. If it were possible to infer the user's interest in an entry or a group of entries from the user's reading pattern, this could be used to gradually build up the filtering knowledge.
[Figure 1 appears here: a task hierarchy for news reading, decomposing high-level strategies such as reading news, searching for specific information, and avoiding uninteresting information down to observable interface actions such as scanning the group list, clicking on an article, "catch up" on a group or article, "Next article", "Subject next", and keyword search in headers.]

Figure 1. A task structure depicting news reading strategies. Arcs without annotation denote abstraction. Arcs connected with arrows denote sequential decomposition. Arcs with a star (*) annotation denote actions that can be repeated an arbitrary number of times. Observable interface actions are written in italics.
This application of plan recognition provides an "extreme" keyhole plan recognition situation: the user's actions are monitored in order to accomplish a secondary task for the user. (The primary task is to read news, not to build up filtering knowledge.) In domains of this kind, we cannot presuppose beforehand that the available information will be sufficient to accomplish the plan recognition task.

An Empirical Study of News Reading

In order to investigate the feasibility of plan recognition for news filtering, we performed a minor empirical study. Our question was: can we use keyhole plan recognition to detect enough of a user's plan to find out whether an entry or a set of entries is interesting, or uninteresting, to that user? The study involved five users using the same mail reader. Two of the users were experienced with the reader and used it frequently, whereas the others were more inexperienced. All were experienced users of usenet news, though. The study is described in more detail in [19] and [20].

The study was performed in two parts. First, we videotaped two users who were "thinking aloud" while using the news reader. The collected material was analysed to compare their reading strategies with the intentions they expressed in the "think aloud" protocol. Next, we confirmed the established analysis of strategies in a second study of three "silent" interactions using new experiment subjects. This was done to test that we had captured the most common reading strategies. In the second study we detected one new reading strategy, but otherwise the users in the second study used the same interaction strategies as those in the first.

Based on these observations, we constructed a task hierarchy depicting the most frequent reading strategies for the news reading task, as shown in figure 1. It is straightforward to distinguish a set of high-level reading strategies. However, it turned out to be very difficult to recognise whether the search in either strategy succeeded, that is, whether the user really found interesting information. For example, we originally thought that the reading strategy within a single entry could be used as a key to the user's interest in the entry. If a user read for a long time, or went back and forth in an article, this would signify that the article was interesting to the user. However, we found several users who would exhibit this behaviour when they were not sure whether the article was interesting or not.

We had better success in identifying strategies that signal that an article or a topic is uninteresting, as shown in figure 1. Articles that were read for a very short time were judged uninteresting. Some subjects would select only such headers that were interesting for them, denoting that the ones they did not select were uninteresting. One of the subjects would also actively mark as "read" all messages in a topic that he found uninteresting.
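To make these observations concrete, the fragment below sketches how such "uninteresting" signals could be encoded from a logged session. It is an illustration only: the event representation, the reading-time threshold and the function name are our own inventions, not taken from the studied news reader.

    # Sketch: encoding the "uninteresting" signals observed in the study
    # (very short reading times, headers that were never selected, and
    # catching up a whole group). Event format and threshold are hypothetical.

    SHORT_READ_SECONDS = 3.0   # assumed cut-off for "read for a very short time"

    def uninteresting_articles(read_times, selected, visible, caught_up=False):
        """Return the articles judged uninteresting in one logged session.

        read_times: {article_id: seconds spent reading}
        selected:   article ids whose headers the user chose to open
        visible:    article ids whose headers were visible in the group
        caught_up:  True if the user marked the whole group as read
        """
        if caught_up:
            return set(visible)                      # the whole topic was dismissed
        skimmed = {a for a, secs in read_times.items() if secs < SHORT_READ_SECONDS}
        never_opened = set(visible) - set(selected)
        return skimmed | never_opened

    if __name__ == "__main__":
        print(uninteresting_articles({"a1": 1.2, "a2": 45.0}, {"a1", "a2"}, {"a1", "a2", "a3"}))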
What is not shown in the analysis of user strategies is that the users frequently changed strategies. Users moved from random scan to subject search (indicating an interest in a thread of articles with the same subject), or even directly from subject search to subject delete (indicating an uninteresting subject). Users of news in general estimate the likelihood of finding any interesting information to be small, and are for that reason prone to abandon strategies at an early stage, when the effort of searching seems unmotivated given the low likelihood of finding anything (or, in blunt words, when they grow bored).

In conclusion, we may say that some information can be obtained from keyhole plan recognition in this domain, but it is too weak to provide the sole basis for building up filtering characteristics. Furthermore, plan recognition in this domain must cope with users changing strategies.

PROBLEMS FOR TRADITIONAL PLAN RECOGNITION IN THE FILTER APPLICATION

Kautz and Allen [13] categorise algorithmic approaches to plan recognition into three types: explanation-based approaches, likely inference approaches, and plan parsing approaches. Of these, plan parsing approaches are foremost geared towards keyhole plan recognition, whereas explanation-based and likely inference approaches have been used foremost for intended plan recognition. Plan parsing approaches share with a natural language parser the property that they only recognise strategies that were explicitly encoded in the plan knowledge. Kautz and Allen [12, 13] relaxed this requirement somewhat by allowing the plan recognition algorithm to recognise that several strategies were executed interleaved with each other. However, to reduce the set of possible explanations, they would allow this kind of explanation only if the behaviour was inconsistent with any single strategy. Furthermore, they would deduce that the agent was pursuing both tasks, not that the agent had changed from one task to the other.
The declaratively correct approach to recognising changing intentions is to introduce meta-level plan modification operators that describe how a user can change his or her intentions from one situation to another. An embryo of this solution can be found in one of the earliest papers on plan recognition [17], and several researchers in plan recognition have used meta-level plans to express such changes [7, 14]. However, whereas this approach is perfectly appropriate (but difficult) for intended plan recognition, it runs into problems in keyhole plan recognition. The main problem is that there is very little information available that can be used to make meta-level inferences. The plans that one would like to recognise to make use of such strategies are simply not observable. There is also an anomaly between the complexity of this suggested reasoning framework and the simplicity of applications of reactive keyhole plan recognition. In the news filter example, we found that the set of recognisable strategies was very simple in structure: almost all of the identified user strategies could be recognised from one or two characteristic user actions.

LOCAL PLAN RECOGNITION USING AN ACTION WINDOW

We have addressed the filtering application by retaining the plan-parsing approach to plan recognition, but restricting its scope by giving it a limited attention span. In this paper, we use the term "local" plan recognition to denote this kind of plan recognition. In the filter domain, we defined the attention span to be a fixed-length sequence of the most recent user actions (where user actions that were entirely irrelevant for the recognition task were removed). The sequence used for plan recognition can be called an "action window".

Plan recognition using an action window can be given extremely efficient behaviour at runtime, since it can be represented by a simple lookup table. The action window is used as the key, and the table entries associate a probability value with each possible user strategy (see figure 2).

Figure 2. The structure of a runtime lookup table for local plan recognition with window size m. The variable n denotes the number of identified strategies, and e denotes the total number of table entries.

Note also that during execution, the set of windows that may possibly match after each new observation is no larger than the total number of observable actions. This makes it possible to represent the system as a state-transition graph, where the nodes are the observable action windows, yielding in effect constant access time, given some suitable indexing mechanism over the observable actions. A similar technique for implementing plan recognition is reported in [1].
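As a concrete, deliberately simplified illustration, the sketch below shows local plan recognition with an action window backed by such a lookup table. The window size, the example windows and the probability values are invented for the illustration and are not those of the actual filter prototype.

    # Sketch of local plan recognition with a fixed action window and a
    # lookup table keyed on the window contents (cf. figure 2).
    from collections import deque

    WINDOW_SIZE = 2
    IGNORED = {"resize window"}   # actions irrelevant to the recognition task

    # Each observable action window maps to a probability per user strategy.
    # The entries below are illustrative only.
    LOOKUP = {
        ("Next group", "Next article"):   {"random scan": 0.6, "random scan in group": 0.4},
        ("Next article", "Next article"): {"random scan": 0.5, "random scan in group": 0.5},
        ("Next group", "Subject next"):   {"subject search": 1.0},
    }

    class WindowRecogniser:
        def __init__(self):
            self.window = deque(maxlen=WINDOW_SIZE)

        def observe(self, action):
            """Record one user action and return the current strategy guess, if any."""
            if action in IGNORED:
                return None
            self.window.append(action)
            if len(self.window) < WINDOW_SIZE:
                return None
            return LOOKUP.get(tuple(self.window))

    if __name__ == "__main__":
        recogniser = WindowRecogniser()
        for action in ["Next group", "Next article", "Next article"]:
            print(action, "->", recogniser.observe(action))

Since each new observation can only extend the previous window by a single action, the same table can equally be unrolled into the state-transition graph mentioned above.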
Plan Compilation for Local Plan Recognition

In order to generate the runtime lookup table, we compiled it from the user strategies depicted in figure 1 by means of abstract execution. This means that we "expanded" all the possible behaviours that could result from following a specific strategy. There are two problems associated with this approach: we must estimate how likely it is that each sequence will occur in practice, and we must also stop generating sequences at some point. Similar problems have previously been addressed by probabilistic approaches to plan recognition, see for example [4]. In the filter application, we estimated the probabilities for each action sequence based on an assumption that short sequences were more probable than long ones. At each possible ending point, we estimated a fixed probability for the case that the user would continue pursuing the same strategy.

As an example, consider the strategy for random scan:

    "Next group" then repeat("Next article")

Suppose that users are involved in random scan during 80% of the time they use the news application, and that there is a 60% chance that a user selects to continue scanning the group after having read an article. Then the probability of the concrete action sequences would be computed as follows.

    Probability   Action sequence
    32%           "Next group" - "Next article"
    19%           "Next group" - "Next article" - "Next article"
    12%           "Next group" - "Next article" - "Next article" - "Next article"
    Etc.

Using such estimates, we could use a probability cut-off point to limit the generation of plan sequences. For example, if the table above is expanded three iterations further, the probability that some other sequence for the strategy would occur goes below 5%.

This way of estimating probabilities only approximates what users do in practice. In particular, it was not correct to assume that the shortest sequence was always the most probable. In reality, the most frequent action sequence length was approximately ten actions; both shorter and longer sequences were less frequent. There are many alternative approaches to obtaining a set of action sequences and their estimated probabilities. The most straightforward way is simply to collect them directly from empirical data. It is also possible to combine the strategy compilation approach that we used with more sophisticated probabilistic models.

Once a set of action sequences has been obtained and their probabilities selected, it is straightforward to construct a lookup table in the form shown in figure 2. The formula below can be used to calculate the probability of a particular sequence, given that a specific action window has been observed. The probability for a specific strategy is then simply the sum of the probabilities for its sequences.

The formula requires that the total number of action window occurrences is the same in each sequence. Since the number of window occurrences in a sequence is closely related to the length of the sequence, this is roughly equivalent to assuming that all sequences are of the same length. If this is false in a domain, the same effect can be obtained by normalising all sequences to the average sequence length, and normalising the number of window occurrences in a sequence in the same way.

Lemma. Let Nocc(Window, Sequence) denote the number of occurrences of an action window Window in an action sequence Sequence. If we assume that the total number of window occurrences in an action sequence is constant (or normalised), and that we seek the relative probability of sequences in relation to each other, the following can be proven:

    P(Sequence | Window) = P(Sequence) * Nocc(Window, Sequence) / SUM(i = 1 .. NoStrategies) P(Sequence_i) * Nocc(Window, Sequence_i)

Proof. In [19].
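The compilation step and the lemma can be illustrated by a small sketch that abstractly executes the random scan strategy with the figures used above (an 80% prior and a 60% chance of continuing), applies a 5% cut-off, and then evaluates the window formula. The function names are ours; this is not the implementation used in the filter prototype.

    # Sketch: abstract execution of "Next group" then repeat("Next article"),
    # followed by the lemma's relative probability of a sequence given a window.

    PRIOR = 0.8        # users are involved in random scan 80% of the time
    P_CONTINUE = 0.6   # 60% chance of scanning yet another article
    CUTOFF = 0.05      # stop generating sequences below 5% probability

    def compile_random_scan():
        """Expand the strategy into concrete action sequences with probabilities."""
        sequences = []
        seq = ("Next group", "Next article")
        prob = PRIOR * (1 - P_CONTINUE)        # the user stops after the first article
        while prob >= CUTOFF:
            sequences.append((seq, prob))
            seq = seq + ("Next article",)      # one more article is read ...
            prob *= P_CONTINUE                 # ... with probability 0.6
        return sequences

    def n_occ(window, sequence):
        """Nocc: number of occurrences of an action window in an action sequence."""
        w = len(window)
        return sum(1 for i in range(len(sequence) - w + 1) if sequence[i:i + w] == window)

    def p_sequence_given_window(window, sequences):
        """The lemma: normalised, Nocc-weighted sequence probabilities."""
        weights = {seq: p * n_occ(window, seq) for seq, p in sequences}
        total = sum(weights.values())
        return {} if total == 0 else {seq: w / total for seq, w in weights.items()}

    if __name__ == "__main__":
        seqs = compile_random_scan()
        for seq, p in seqs:
            print(f"{p:5.0%}  {' - '.join(seq)}")     # 32%, 19%, 12%, ...
        print(p_sequence_given_window(("Next article", "Next article"), seqs))

The probability for a whole strategy is then, as stated above, simply the sum of the resulting probabilities for its sequences.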
Selecting the Window Size

A major issue for local plan recognition is how to select the window size. One way is to choose it large enough to distinguish between all strategies. In the filter example, all strategies except two are uniquely identified after the first two actions.

If the window size required for unique identification becomes very large, this strategy will not work, both for computational reasons (the size of the lookup table grows exponentially with the window size) and because we cannot be sure that the user will maintain the same strategy for such long periods. For example, the remaining two strategies (random scan and random scan in group) in the filter application require an infinitely large window to be uniquely identified in all situations. But then, it is uncertain whether they really represent two different strategies, or whether the user can be said to be involved in the same kind of random scan in both cases. If this situation occurs, we are better off limiting the window size and accepting that we sometimes must return several alternative guesses at the user's task.

Finally, if the user switches strategy very frequently, we may want the system to be a bit slow in recognising this, and maintain its old task guesses for a while. In this situation, it is useful to select a window size that is larger than the minimum size. The plan compilation strategy cannot deal with this situation, but it can be addressed by using a standard plan recognition strategy within the window. If the window content is inconsistent with any single task, the system must select two or more tasks to explain the user's actions, in the same way as Kautz's plan recogniser did. An alternative way to deal with this situation will be presented below.

THE PUSH APPLICATION

A major problem for the intelligent filter application was that the interface did not in any way reflect that plan recognition was happening. This was due to the fact that we studied the interface of an existing news reader, but it would also have been extremely awkward to introduce such information into the interface, as plan recognition was used to support a task that was secondary to the user's immediate task. The PUSH (Plan- and User Sensitive Help) project has developed an application with an adaptive direct manipulation interface, where plan recognition directly affects the on-going dialogue. I will here only describe this application in brief. A comprehensive description of PUSH can be found in [8] and [10].

The PUSH project was aimed towards finding a hypertext representation for a very large manual on a development method for telecommunication hardware and software. Again, the central issue is filtering: selecting and presenting the information that is relevant to the particular user in a particular situation. During the knowledge acquisition phase, we investigated several means for doing such selections (see [2]). Eventually, we found that the most useful thing was to adapt the presentation to the user's task, as selected from the task hierarchy presented in figure 3.

Figure 3. The hierarchy of user tasks identified in PUSH. [The figure shows a task tree whose recoverable labels include "Learning structure" and "Learning details".]

Task Adaptation in PUSH

The prototype developed within PUSH is named POP (Push Operational Prototype). POP implements a hypertext search tool, where the hypertext structure is based on a set of objects. Each object describes a development step in SDP-TA, or a piece of code or documentation to be produced. The information in an object is partitioned into a set of predefined information entities. An information entity is a small hypertext piece, which can contain links to related information. The presentation is adapted by only displaying such information entities that are relevant to the user's current task. Other information entities are still accessible, by clicking on their headers. This is an example of page adaptation, as opposed to link adaptation, both of which are possible in adaptive hypermedia [3].

The user's task is either selected explicitly by the user, or inferred through plan recognition. To allow users to inspect and control task adaptations, the inferred task is explicitly shown as part of the answer, and the user can explicitly select another task. This makes POP an example of how a direct manipulation interface can combine features of intended and keyhole plan recognition (this is further discussed in [19]). The POP system has been evaluated in a comparative usability study [8, 9], in which an adaptive and a non-adaptive version of the system were compared; the study showed that users had a strong preference for the adaptive version of the system.
PROBLEMS FOR LOCAL PLAN RECOGNITION IN PUSH
Even though the interface of POP allows users to explicitly state their tasks, most actions are still at a low level compared to the user’s task. For example, we cannot draw any definite conclusions about a user’s task from his or her way of selecting a page of information. We can also expect that users frequently will change their tasks. In our prestudies on an existing information system in the domain [2], we found that users that started out with one particular task frequently would move over to other tasks. The most frequent reason was that they found some information that would be relevant for their needs, but needed to read something else to fully understand what they had found.
For both of these reasons, the PUSH application must be addressed in a manner similar to the filter application. Again, we used a local, plan-parsing approach to plan recognition. In the first adaptive version of POP, we implemented plan recognition exactly as in the filter domain, by compiling a window table from a small set of pre-coded strategies. The strategy library was developed based on a small number of sessions with users of a non-adaptive version of the system, with the addition of strategies that recognised the selection of a specific task, or the opening and closing of information entities.

During our first encounters with the system (prior to the actual usability study), we found that this implementation did very well at guessing the user's task from "normal" actions in the interface. However, when users explicitly selected a task, they expected it to stay valid longer than the window size allowed. On the other hand, increasing the window size led to normal actions having too persistent an effect on plan recognition. It seemed that we wanted the window size to vary with the kind of actions observed.

The problem we encountered was due to some quite significant differences between the information available in the filter domain and in the PUSH domain. In POP, there is little information to be obtained from the ordering of actions. Instead, we can use singular actions as more or less sure indicators of the user's task. The most definite information is of course obtained when a user explicitly selects a particular task. Less direct keys to the user's task occur when a user opens or closes singular hypertext fragments. If the user opens a hypertext fragment, the user is likely to be executing a task where this fragment is relevant, and if the user closes a hypertext fragment, he or she is likely not pursuing a task where this fragment is relevant. The most uncertain information is actually the information obtained from monitoring user strategies, such as their method for searching.

LOCAL PLAN RECOGNITION USING A "FADING" FUNCTION

To address these problems, we made some modifications to the plan recognition algorithm. In the second version of the plan recogniser, we abandoned the idea of maintaining a table of action windows. It was replaced by a table of strategy fragments, each one action long. Each strategy fragment was associated with the same kind of information as was previously associated with a full action window: a list of possible user tasks and an estimate of how likely each task was, given that the strategy fragment had been observed. To formulate a task guess, the plan recogniser scanned the current window and checked which strategy fragments occurred within it. But all observations were not taken to be equally important; instead the most recent "hits" were given higher rates. The likelihood attributed to a particular task by an observed strategy fragment was multiplied with a fading function, which decreased with the number of actions observed after the strategy fragment was observed.

We selected a very simple fading function, which decreases linearly with the number of actions observed after the occurrence:

    Weight = Likelihood * (W - N) / W

where N represents the number of actions observed after the recognised strategy fragment, and W the window size. To calculate the total weight for a task, we simply summed up all the weights attributed to it by the strategy fragments that occurred within the current action window.

As an example, consider again the "random scan" strategy from the filter domain:

    "Next group" then repeat("Next article")

In the fading version of local plan recognition, this strategy gives rise to two strategy fragments, the "Next group" and "Next article" actions. The "Next group" action should attribute a high likelihood to "random scan" (say, 1.0), as it only occurs in strategies associated with this task. The "Next article" actions will attribute likelihood both to the "random scan" task and to the "random scan in group" task, as they occur in strategies for both these tasks (say, 0.5). Assume that the action window size is three. Assume now that the following action pattern is observed:

    "Next group", "Next article", "Next article", "Next article"

After the first action has been observed, the plan recogniser will attribute the weight 1.0 to "random scan". After the second action has been observed, the weight attributed to "random scan" has increased to 1.2, but at the same time a weight of 0.5 has been attributed to "random scan in group". After the third action has been observed, the weight attributed to "random scan" is still approximately 1.2, but the weight attributed to "random scan in group" has gone up to 0.83. When the fourth action is observed, the first action has moved out of the action window, and the plan recogniser attributes the same weight to both tasks (0.995).
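The example can be replayed with a few lines of code. The sketch below follows the fading formula and the fragment likelihoods given above; note that with these round numbers the final tie comes out at exactly 1.0 rather than the 0.995 reported in the text, which presumably reflects slightly different constants in the actual recogniser.

    # Sketch of the "fading" version of local plan recognition, replaying the
    # example above. Strategy fragments are single actions, each attributing a
    # likelihood to one or more tasks, faded by how long ago it was observed.

    WINDOW_SIZE = 3

    # Likelihood attributed to each task by a single observed action (fragment).
    FRAGMENTS = {
        "Next group":   {"random scan": 1.0},
        "Next article": {"random scan": 0.5, "random scan in group": 0.5},
    }

    def fade(likelihood, n_after, window_size=WINDOW_SIZE):
        """Weight = Likelihood * (W - N) / W, as in the formula above."""
        return likelihood * (window_size - n_after) / window_size

    def task_weights(window):
        """Sum the faded weights of all fragments within the current action window."""
        weights = {}
        last = len(window) - 1
        for position, action in enumerate(window):
            n_after = last - position            # actions observed after this fragment
            for task, likelihood in FRAGMENTS.get(action, {}).items():
                weights[task] = weights.get(task, 0.0) + fade(likelihood, n_after)
        return weights

    if __name__ == "__main__":
        window = []
        for action in ["Next group", "Next article", "Next article", "Next article"]:
            window = (window + [action])[-WINDOW_SIZE:]   # keep only the most recent actions
            print(action, "->", {t: round(w, 2) for t, w in task_weights(window).items()})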
As is clear from the example above, the fading approach to local plan recognition still maintains a limited attention span. Imposing a fading function essentially means that we allow plan recognition to forget gradually, rather than imposing a strict cut-off point. In addition, it ensures that actions or strategies that are strong indications of a certain task affect plan recognition for a longer period than actions that give less certain information.

OPPORTUNITIES FOR MACHINE LEARNING IN PUSH

A problem for the approach to plan recognition taken in PUSH was that we initially had very little information about user plans, as the system was under construction. This is not particular to PUSH. Indeed, whenever plan recognition is used to influence the responses to users, we must expect that users may change their behaviour in response to this adaptation. This is again an effect of the fact that users modify their plans and intentions based on what they encounter during the interaction with a system. The effect is that we cannot expect to be able to fully capture the user strategies in advance. For this reason, it is interesting to investigate the possibilities for a machine learning approach to local plan recognition. Some strategies for applying machine learning algorithms to local plan recognition are discussed in [19]. Here, we will only briefly discuss the practical requirements of attempting this approach, in light of our experiences with the POP system.

A central concern for the practical application of machine learning techniques is that it must be possible to obtain correct, and sufficiently rich, empirical data. One approach is to use data collected in special experiments set up for this purpose. In the initial studies of POP, we gave the users a set of specific problems to solve. These roughly corresponded to specific tasks in the task hierarchy. These tasks and the logged sequences of actions could potentially form a ground for machine learning. However, the set-up of such experiments can be very costly. In our own study, the set of examples collected in the initial study was way too small to warrant a machine learning approach.

This approach to training has two inherent weaknesses. Obviously, it can only be used for initialising a plan library: it cannot be used during real usage of the system, and it will for this reason never fully cope with the emergent behaviour of users that interact with the adaptive system. Also, users may behave differently in an experimental situation than they would in real usage of the system. People participating in experiments often try to "perform well", which will make them less prone to get distracted from the given task by the information they encounter in the system. If this were to happen, users would perform longer sequences of actions that are directly related to the given task than they would in real usage of the system. The limited attention span of local plan recognition can to some extent compensate for, but not entirely do away with, this problem.

An alternative approach is to obtain training data during real usage of the system. The interface design of POP allows for acquiring such data, since users sometimes state their intentions explicitly during usage of the system. Such statements, together with the actions performed in close conjunction with them, can be used as training data. However, this approach requires that users sometimes actively select a task, something the users seldom did in the usability study.

CONCLUSIONS

Applications of plan recognition in direct manipulation interfaces must often base their reasoning on scarce and low-quality information obtained from the human-machine interaction. There are two reasons for this: the individual interactions from the user are at a low level compared to the user's task, and users may frequently change their plans or tasks. This paper suggests addressing this problem by imposing a limited attention span on the plan recognition process. This approach does not require meta-level knowledge about when and why users may change their intentions - it only requires an idea of how often it happens. The usefulness of this approach was exemplified by two applications: one within news filtering, and the other from an adaptive hypertext domain. We outlined two approaches to implementing local plan recognition. In the first, the limited attention span was achieved by using a fixed number of the most recent actions as a basis for plan recognition. In the second, we instead imposed a "fading function" on the sequence of observed actions, which forced the plan recognition process to gradually forget about previous user interactions.

Future Work

The "fading" approach to plan recognition is very promising. However, much work is still required to develop it into a general approach to plan recognition. Firstly, the weights calculated in the proposed algorithm cannot be given a probabilistic semantics, both because of the use of a fading function and because we compute the value for a user task as the sum of the weights obtained from all occurrences of strategy fragments for the task. As shown in the example, this sum may very well exceed one. The reason is that the strategies do not form a partition over the set of reasons for assuming a specific task. It remains to work out a formal semantics for this version of local plan recognition.
REFERENCES
1. Alexandersson, J. Plan Recognition in VERBMOBIL. Working notes from the IJCAI workshop 'Next Generation of Plan Recognition Systems' [Montreal, Canada], AAAI, 1995.

2. Bladh, M., and Höök, K. Satisfying User Needs through a Combination of Interface Design Techniques. Human-Computer Interaction, Interact'95, Chapman and Hall, Oxford, England, 1995.

3. Brusilovsky, P. Methods and Techniques of Adaptive Hypermedia. User Modelling and User-Adapted Interaction, Vol. 6, 1996.

4. Calistri, R.J. Classifying and Detecting Plan-Based Misconceptions for Robust Plan Recognition. Ph.D. thesis, Brown University, Dept. of Computer Science, Providence, Rhode Island, 1990.

5. Carberry, S. Plan Recognition in Natural Language Dialogue. MIT Press, Cambridge, Massachusetts, 1990.

6. Cohen, P.R. and Levesque, H.J. Persistence, Intention and Commitment. In Cohen, P.R., Morgan, J. and Pollack, M.E., editors, Intentions in Communication. MIT Press, Cambridge, Massachusetts, 1990.

7. Grosz, B., and Kraus, S. Collaborative Plans for Group Activities. IJCAI'93 [Chambéry, France], Morgan Kaufmann, 1993.

8. Höök, K. A Glass-Box Approach to Adaptive Hypermedia. Ph.D. thesis, Stockholm University, Dept. of Computing and System Sciences, Stockholm, 1996.

9. Höök, K. Evaluating the Usefulness of an Adaptive Hypermedia System. In Proc. of the International Conference on Intelligent User Interfaces [Orlando, Florida], ACM, 1997 (this volume).

10. Höök, K., Karlgren, J., Wærn, A., Dahlbäck, N., Jansson, C.-G., Karlgren, K., and Lemaire, B. A Glass Box Approach to Adaptive Hypermedia. User Modeling and User-Adapted Interaction, 6, 1996, to appear.

11. Karlgren, J., Höök, K., Lantz, A., Palme, J., and Pargman, D. The Glass Box User Model for Filtering. UM-94 [Hyannis, Massachusetts], Mitre Corp., 1994.

12. Kautz, H.A. A Circumscriptive Theory of Plan Recognition. In Cohen, P.R., Morgan, J. and Pollack, M.E., editors, Intentions in Communication. MIT Press, Cambridge, Massachusetts, 1990.

13. Kautz, H.A., and Allen, J.F. Generalized Plan Recognition. Proceedings of the 5th National Conference on Artificial Intelligence [Philadelphia, Pennsylvania], Morgan Kaufmann, 1986.

14. Litman, D., and Allen, J.F. A Plan Recognition Model for Subdialogues in Conversations. Cognitive Science 11, 1987.

15. Maes, P. Agents that Reduce Work and Information Overload. Communications of the ACM, 37(7), 1994.

16. Pollack, M.E. Inferring Domain Plans in Question Answering. Ph.D. thesis, University of Pennsylvania, 1986.

17. Schmidt, C.F., Sridharan, N.S., and Goodson, J.L. The Plan Recognition Problem: An Intersection of Psychology and Artificial Intelligence. Artificial Intelligence 11, 1978.

18. Suchman, L.A. Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press, 1987.

19. Wærn, A. Recognizing Human Plans: Issues for Plan Recognition in Human-Computer Interaction. Ph.D. thesis, Royal Institute of Technology, Dept. of Computing and System Sciences, Stockholm, 1996.

20. Wærn, A. and Stenborg, O. Recognizing the Plans of a Replanning User. Working notes from the IJCAI workshop 'Next Generation of Plan Recognition Systems' [Montreal, Canada], AAAI, 1995.