Towards Creating Assistive Software by Employing Human Behavior Models Frank Krüger∗, Kristina Yordanova, Christoph Burghardt, Thomas Kirste

Abstract Assistive software is becoming an increasingly important part of our everyday life. As it is not straightforward to create such a system, the engineering of assistive systems is a topic of current research, with applications in healthcare, education and industry. In this paper we introduce three contributions to this field of research. Whereas most assistive systems use intention recognition approaches based on training data and are therefore tied to specific environments and applications, we introduce a training-free approach. We do so by showing that it is possible to generate probabilistic inference systems from causal models of human behavior. Additionally, we compile a list of requirements for context aware assistive software and for human behavior modeling for intention recognition, and show that our system satisfies them. We then introduce a software architecture for assistive systems that provides support for this kind of modeling. In addition to introducing the modeling approach and the architecture, we show experimentally that our approach is suited for smart environments. The compiled list of requirements can help a software engineer create robust assistive software that is easily adaptable to changes in the environment.

1 Introduction

Assistive systems are beginning to play an important role in everyday life. From applications for smart phones and navigation systems, to the much more challenging intelligent environments where multiple devices constantly interact with the user, assistive systems promise to be part of our future. One objective of assistive systems is to proactively support everyday life. Here, a user does not need to explicitly call for support. Instead, the system infers user intentions and needs from observations, based on which it then autonomously selects appropriate actions. Support can be provided in situations where the user is not able to explicitly call for assistance: when he is occupied with a demanding primary task, when he is not aware of the situation, when he is unconscious or immobile, etc. The need for proactive support arises in a wide array of settings, from driver assistance to intelligent office facilities to smart homes for independent living.

∗ [email protected], University of Rostock, Institute of Computer Science, Mobile Multimedia Information Systems Group, Albert-Einstein-Straße 22, D-18059 Rostock


In very confined scenarios, proactive assistance can be achieved by comparatively simple sensor-actor systems – anti-lock brake systems can be regarded as an example of this. Intentions here can be read directly off the brake pedal. However, "everyday" situations in work and home environments are usually more variegated, making the development of robust and accurate systems for recognizing activities and intentions a challenging task. Usually, probabilistic models are employed [5, 24, 1], as they are able to cope with the inherent uncertainty of the sensor data available in such environments. The parameters for these models are estimated from pools of training data; this training data is acquired from test subjects performing realistic actions in realistic settings, the data is then annotated, and the parameter estimation algorithms are applied to the annotated training data. This data-driven approach is able to produce quite successful results. However, this holds only for situations that match those in the training data. Once the situation departs significantly from the training data, performance degrades. For instance, a system trained on specific locations for user activities will not understand a living environment based on a different floor plan. In addition, the data-driven approach is expensive, as it requires a sufficient amount of training data that can only be acquired using costly human test subjects. The objective of this paper is to discuss an alternative strategy for creating systems for activity and intention recognition. This strategy is based on a model-driven approach that strives to substitute prior knowledge for expensive training data. The underlying idea is that human intentions cause action sequences that can be modeled at a symbolic level, based on prior knowledge about human behavior.
Likewise, sensor models (e.g., the error distribution of a location sensor) and environment models (e.g., a floor plan) can be created from prior knowledge. From this prior knowledge we then synthesize a probabilistic model tailored to the specific recognition task. In this paper, we introduce one way to implement this approach and evaluate its applicability based on an experimental real-world application. The further structure of this paper is as follows. In Section 2, related work on assistive software is discussed and the underlying concepts necessary for understanding our approach are described. Section 3 presents a compilation of requirements for context aware assistive software as well as for human behavior modeling for intention recognition. Section 4 introduces our general architecture for assistive software, while Section 5 presents the modeling approach used for substituting training data with prior knowledge and gives an example. Later, in Section 6, the compiler that generates probabilistic models from the human behavior model of Section 5 is presented, and in Section 7 the mechanism is tested by recognizing the activities in 20 three-person meetings. Finally, a short discussion and conclusion are provided in Section 8. The main contributions of our work are: (1) the introduction of a mechanism for generating training-free probabilistic models from rule-based human behavior models; (2) the compilation of a list of requirements for context aware assistive software and human behavior modeling for intention recognition; and (3) the introduction of a general architecture for assistive software that supports the integration of our mechanism for intention recognition.

2 Related work and preliminaries

Intelligent systems, and assistive software in particular, combine aspects from various domains in order to provide adequate context aware intention recognition, which is the most important input for device cooperation and assistance. In order to explain the HBM compiler and the proposed architecture in an understandable manner, this section provides background information on assistive systems and activity recognition software. Section 2.1 describes the current state of the art in assistive systems and what we can contribute to it. Section 2.2 explains the general concept of our approach, its underlying ideas, and the contributions it could bring to the community. As our concept relies on using human behavior models (HBM) in order to generate probabilistic models, Section 2.3 provides information about existing human behavior models that could be considered candidates for our purpose.

2.1 Related work on assistive software

With the development of technology, more and more systems that provide user assistance are emerging. One such system is Opportunity Knocks [26], an automated transportation routing system that strives to improve the efficiency, safety and independence of users with mild cognitive disabilities. It uses GPS data and activity inference software in order to recognize the user's journey start and end points and the current transportation mode. This is done by employing a dynamic Bayesian network that learns from the GPS sensor data as the only observable input. Additionally, the inference software uses prior knowledge in order to detect route errors. Another assistive system is Predestination [23]: a Microsoft project that aims at predicting the driver's destination by taking into account the driver's destination history together with previous driving behaviors. The method also considers places that were not visited previously and assigns them a probability of being visited based on trends in the data. Finally, Bayesian inference is used to produce a probabilistic map of the destinations. Yet another project for smart assistance is the Intelligent Classroom at Northwestern University [13, 14]. Its aim is to assist users in a particular set of tasks by observing the user and reasoning about the world. The employed approach uses plan recognition to infer the current activities of the user. The actions are represented as processes, and a plan is a sequence of such processes. However, the approach has the problem of running into local maxima: as soon as a belief state narrows down to one process, it stays there without considering any other options. Finally, MavHome [8, 9] is an architecture for assistive homes that is composed of different layers, such as physical components, computer components, services, and applications. The user activities and goals are discovered by mining the activities from the gathered sensor data and describing the discovered activities with a symbolic representation. Later the system applies windowing to the sensor data from a new user in order to discover these activities. In order to predict an action, the algorithm computes the probability of each of the detected actions appearing in a specific behavior.


Almost all of the assistive systems described above use learned models based on training sensor data that is sometimes difficult to analyze and to provide in a format suitable for the system to learn from. Additionally, data collection and annotation is an expensive and time consuming process which, with the continuous increase in the data that sensors produce, should be reduced to a minimum or substituted by a different approach. Below we describe an alternative method for achieving this, based on human behavior models.

2.2 Analyzing intentions

Humans often act in order to achieve goals. For instance, if a presenter walks up to the stage, s/he does so because s/he has the goal to give a presentation. In such situations we can speak of goal-directed behavior. A goal is a possible world state; an intention is a goal a person is actively trying to achieve. Human behavior models that describe goal-directed behavior essentially encode the strategies humans employ in order to achieve goals. Such strategies can be called plans. A plan encodes possible action sequences; a set of plans is called a plan library. If we assume a person's actions to be observable in some way, we can use a plan library for inferring her intention: we identify the plan which best explains the observed sequence of actions. This plan's goal is then the intention we can assume as the cause of the observed actions. Observations (sensor data) are usually ambiguous and noisy. Likewise, a person's individual strategies for achieving a goal are usually not known precisely. Therefore, intention inference is necessarily fraught with uncertainty; usually probabilistic or possibilistic frameworks are employed in order to allow sound reasoning under these conditions. The research area of probabilistic plan recognition provides quite a few methods to enable such reasoning. However, the development of suitable plan libraries currently requires a significant engineering effort. Furthermore, specific modeling methods are tied to specific inference engines – thereby exposing details of the inference process at the modeling level (where such details are irrelevant). Our objective is to provide a modeling concept that (1) allows for an easy and flexible creation of plan libraries and (2) is – at least to some extent – independent of the underlying inference engine.
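To make the plan-library idea concrete, the following sketch infers an intention by checking which plans in a purely illustrative library explain an observed action prefix. The goals, action names, and exact-prefix matching rule are our own assumptions for illustration, not part of the systems discussed here.

```python
# Sketch: inferring an intention from a plan library.
# Plans, actions, and the matching rule are hypothetical.

PLAN_LIBRARY = {
    # goal -> possible action sequences (plans) that achieve it
    "give_presentation": [["enter_room", "walk_to_stage", "open_slides"]],
    "attend_meeting":    [["enter_room", "sit_down", "take_notes"]],
}

def explains(plan, observed):
    """A plan explains the observations if they are a prefix of the plan."""
    return plan[:len(observed)] == observed

def infer_intention(observed):
    """Return the goals whose plans best explain the observed actions."""
    return [goal
            for goal, plans in PLAN_LIBRARY.items()
            if any(explains(plan, observed) for plan in plans)]

print(infer_intention(["enter_room", "walk_to_stage"]))  # -> ['give_presentation']
```

With noisy sensor data, the exact prefix match would be replaced by a probabilistic score over plans, which is precisely what the probabilistic frameworks mentioned above provide.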

2.3 Related work on human behavior models

Below we describe some of the more relevant models that were investigated while searching for an appropriate modeling mechanism that could fit the concept from Section 2.2. Their relevance is based on the number of requirements, described in Section 3, that they satisfy. ACT-R is a computational theory of human cognition incorporating declarative and procedural knowledge into a production system where procedural rules act on declarative chunks. ACT-R consists of two types of modules: the perceptual-motor modules, which are responsible for the interface with the real world; and the memory modules, which are divided into declarative memory and procedural memory [25, 32, 27]. ACT-R has the advantage of using heuristics based on common conflict resolution strategies [2], which could be extremely useful in an intention recognition system. The disadvantage of ACT-R is that the modeling mechanism does not allow abstraction from the specific domain. Another human behavior modeling approach is CTT (ConcurTaskTree), a notation used for expressing hierarchical task models. With it, a compound activity is represented as a task tree, where each tree node is a task. This allows composite tasks to be decomposed into subtasks. Various temporal notations are used for constraining a task's sibling nodes [16, 32]. The advantages of CTT are parallel action execution and support for temporal modeling. On the other hand, it does not provide model abstraction and does not support causal reasoning. Similar to CTT is the Collaborative Task Modeling Language (CTML), which is used as a specification framework for collaborative applications. It has a task driven methodology that is able to model cooperation as well as the domain. Wurdel et al. showed that CTML, or task models respectively, can be applied in the field of activity recognition [30, 29]. This is done by employing the specified models to define the probabilities of the next possible action during activity execution. A probabilistic inference mechanism, having recognized the current state, then makes use of the task model and adjusts the probabilities of the next state. However, this approach shares the disadvantages of CTT. Yet another approach is Petri Nets, which are used for different purposes such as system modeling, verification and implementation. In [12, 11], however, they are regarded as a means for modeling actions. Petri Nets describe actions by expressing the way these actions change the set of conditions and restrictions on which the actions depend [17]. The advantages of Petri Nets are that they support parallelism and can achieve a level of abstraction. However, they do not support causal reasoning or the integration of context information. The last approach we investigated is the Planning Domain Definition Language (PDDL), which is mainly used in planning problems and is intended to express the elements and dynamics of a domain, namely what kinds of predicates there are, what set of actions is possible, what the structure of compound actions is, and what the effects of these actions are [15]. Although PDDL is designed for planning problems, it can also be applied in the field of human behavior models [6, 7], as human behavior can be regarded as a plan a person is following in order to achieve a goal. The advantage of PDDL is its support for causal reasoning and a level of abstraction that allows the reuse of the model. The disadvantage is that it does not support probabilistic reasoning. Later in this work we introduce requirements for HBM (Section 3.2) and compare the requirements satisfied by the modeling approaches described above (Table 1 in Section 5); in the next sections we explain in more detail which modeling mechanism we choose and how we model human behavior with it.
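The precondition/effect style of action description used by PDDL can be illustrated with a minimal sketch. The predicates and the set-based state representation below are illustrative assumptions, not the paper's actual model, and real PDDL uses its own Lisp-like syntax.

```python
# Sketch: a PDDL-style operator as precondition/effect sets over a
# symbolic state (hypothetical predicates, for illustration only).

def applicable(state, preconditions):
    """An action is applicable if all its preconditions hold in the state."""
    return preconditions <= state

def apply_action(state, preconditions, add_effects, del_effects):
    """Applying an action removes its delete effects and adds its add effects."""
    assert applicable(state, preconditions)
    return (state - del_effects) | add_effects

state = {"at(user, door)", "light(off)"}
pre   = {"at(user, door)"}
add   = {"at(user, desk)"}
dele  = {"at(user, door)"}

state = apply_action(state, pre, add, dele)
print(sorted(state))  # -> ['at(user, desk)', 'light(off)']
```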

3 Requirements for context aware assistive software

Developing adequate assistive software is a challenging task, because different aspects such as sensors and algorithms for sensor interpretation have to be connected with context information through various application components. As such a system, in which the sensor outputs are directly incorporated in the application, is difficult to modify or adapt to new contexts, usually some level of abstraction is used. Systems developed in that manner have the advantage of supporting modules that can be shared between applications. For example, an explicit high-level context model that contains context information from different sources can be shared by different system applications. Context modeling plays an important role in such systems, as the context module is the one being modified when changes in the infrastructure are made. It also allows a uniform representation of the information that can be interpreted by all the components dealing with different applications. Thus, it is essential that the context modeling approach meet some requirements that ensure successful module abstraction as well as its usage in various context-aware applications. Another important aspect of context modeling is human behavior modeling, as it plays an essential role in systems dealing with user intention recognition. Most assistive software applications aim not only at recognizing the current user state, but also at inferring the user's goal, so that the system can provide adequate assistance. To achieve this, the context model has to contain information about various human behaviors, so that the system is able to recognize achievable goals from the viewpoint of the current user state. In order to develop assistive software that is able to support various applications, in this section we discuss the requirements for context awareness as well as the human behavior modeling requirements and some application specific system requirements.

3.1 Requirements for context awareness

In their paper "Context Awareness", Indulska et al. [20] describe the importance of context in assistive software and discuss the requirements that context modeling approaches should meet in order to be applicable in a variety of context-aware applications and to achieve an abstraction useful for software engineering. Below we discuss these requirements and their importance for an assistive system.

Imperfect context information: Context-aware applications have the common problem of imperfect context information, which could be due to noise in the sensor data, sensor malfunction, or inaccurate algorithms for extracting context information from the sensors. It could also be caused by incorrect information provided by the users, such as an incomplete or wrong agenda. Thus, a context model should be able to represent information that is incomplete, imprecise or ambiguous. Additionally, it should have some sort of quality indicators, so that a malfunctioning sensor can be traced back and repaired.

Context histories: Often the information about the current state is not enough for the proper functioning of assistive software. It may also require information about past and future contexts. Therefore, a context modeling approach should be able not only to represent histories but also to reason about them. This information is essential in assistive applications where behavior patterns are to be detected or where the intention of a user is to be inferred.

Software engineering: A context model benefits software development when it is introduced in the early stages of the software engineering lifecycle. It can then be refined incrementally during the lifecycle, thus introducing the types of context information required by the application and the data constraints. Additionally, it can be used to evaluate the suitability of the already developed context sensing infrastructure, and to derive further software or hardware requirements. Furthermore, the context model can be used for producing different use cases for software testing of the context-aware functionality.

Runtime querying and reasoning: One form of the context model is the runtime model, which is queried by the context-aware applications. The runtime model deals with problems such as how to represent the information at runtime so that it can be reasoned upon in order for the system to support decision making. The model should contain information about the existing context types and their characteristics, as well as concrete context information. It should also be easily extendable so that it can cope with reasoning in an evolving environment.

Interoperability: One of the characteristics of smart environments is that context-aware applications may be faced with the problem of communicating with components that were unknown to the software designer. Such components could be new applications, new devices, or new sensing hardware. Thus the context-aware applications should be able to exchange information with a component even when it was previously unknown. This requires either transforming the information into different representations, using a shared context modeling approach, or supporting transformation between different modeling approaches.

Recognition of semantic goals: A common practice in activity recognition is the detection of labels, i.e. names that are associated with specific data patterns without any further meaning. However, in order for a system to be able to perform strategy synthesis for assisting the user, semantic goals should be recognized: not only an activity, but also the plan (or path) that leads to achieving the goal. That way the system can generate a plan based on the user's semantic goal and assist him / her in achieving this goal.

In addition to the requirements listed above, we go deeper into the modeling of context information and histories in order to define a more fine-grained set of requirements for human behavior modeling.

3.2 Requirements for human behavior modeling

Models of human behavior play an important role in assistive systems for several reasons. From a psychological viewpoint, human behavior modeling helps to better understand human actions, to find what caused these actions and how they affect the world. Additionally, given an activity, it is possible to detect the behavior to which it belongs, which is an important ability in systems that monitor users and try to detect anomalies in their behavior. Finally, HBM is essential for software providing proactive assistance, because having recognized a behavior, the system is then able to plan the best way in which to assist the user. In previous work we analyzed different activity datasets and identified the HBM requirements needed for a successful model that is able to improve the intention recognition of an assistive system [31, 32]. Below we describe the requirements we have derived and discuss their importance for behavior models in assistive software.


Procedural modeling: The requirements for procedural modeling are important because they allow the description of the dynamics of user activities, such as activities composed of simpler actions, sequential activities, and repeating activities. They are also necessary for representing situations where one activity is interleaved with another, or where an activity is interrupted and continued after a second activity is executed.

Parallel execution modeling: Humans often execute more than one action simultaneously. The requirements for parallel execution modeling are therefore necessary for expressing parallel processes and activities that are executed in parallel and have to be synchronized. They are also necessary for representing multiple users acting in parallel to achieve a common goal, or competing / independent goals.

Probabilistic modeling: Models whose purpose is simulation do not require probabilistic modeling. However, when the purpose is intention recognition and assistance, it is important that the model be able to represent not only one possible execution sequence but all probable sequences and their probabilities. It is also necessary for the model to describe the probability of a given observation actually being observed, and to describe the state of the observed world. Another requirement for probabilistic modeling is the ability to model action durations, so that when the activity duration is over, the system can assume that a new activity has started.

Causal modeling: The ability to model causality is essential, because it describes the cause of an activity, the requirements that have to be met before it is executed, as well as its effects on the environment. Causal modeling is usually expressed with the help of preconditions, which describe the state the world has to be in for the activity to be executed, and effects, which describe the state of the world after the activity's execution. Additionally, causal modeling captures all further relevant information about the state of the world and the user that can help the system recognize the current user state and his / her intentions.

Modeling purpose: There are various human behavior models, and they serve different purposes. For assistive software, however, it is important that the model's purpose be causal inference; if that is not the case, the model has to be easy to adjust for this purpose. Additionally, it has to be able to perform state and parameter estimation, and to detect errors in the human behavior, so that the system can assist the user in correcting them. Finally, it should also be able to detect unknown actions and to cope with them.

Model abstraction: For the model to be easily reused in different application domains, it should have an abstract representation of activities that can later, depending on the specific use case, be parameterized by domain- and user-specific parameters. This ensures that the software engineer will not be forced to change the whole model whenever a new application for the assistive software arises, but only the domain-specific parameters.

Having defined the requirements the system, and specifically the model, should satisfy, the next section describes in detail the general framework for assistive software we propose, the requirements it satisfies, and its advantages and functionality.
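Several of these requirements, probabilistic action durations in particular, can be made concrete with a small sketch. The activity name and the distribution parameters below are invented for illustration.

```python
# Sketch: probabilistic action durations (hypothetical parameters).
# When an activity's sampled duration has elapsed, the model assumes
# that a new activity may have started.

import random

DURATION_PARAMS = {"prepare_coffee": (60.0, 15.0)}  # assumed mean, std in seconds

def sample_duration(activity):
    """Draw a duration for the activity from an assumed Gaussian model."""
    mean, std = DURATION_PARAMS[activity]
    return max(1.0, random.gauss(mean, std))

def activity_finished(activity, elapsed, sampled_duration):
    """The activity is assumed finished once its sampled duration elapsed."""
    return elapsed >= sampled_duration

d = sample_duration("prepare_coffee")
print(activity_finished("prepare_coffee", elapsed=d + 1.0, sampled_duration=d))  # -> True
```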


4 A general framework for assistive software

Assistive technology is designed to support users in fulfilling their tasks. An assistive software system therefore has to monitor its environment together with the user in order to infer the user's intention. Assisting the user means adjusting the environment to his needs without obstructing him in any manner. To meet the requirements for context awareness and assistive software in general listed in Section 3.1, we designed the software architecture shown in Fig. 1. The software system consists of two main components that use probability distributions of user intentions for communication. The division of assistive software into two components is not a new approach: as Heider et al. [18] stated, goal-based interaction requires two functionalities, namely intention recognition and strategy analysis. This essential separation is also implemented in our architecture in order to provide better flexibility and interoperability with new system elements.

Intention Recognition component: The intention recognition component takes sensor data from the external environment monitoring component and creates a probability distribution over possible user goals, making use of the high level human behavior model. It leverages probabilistic temporal models to infer user intentions. The model itself has to be adjusted to the specific problem domain and environment specifics, such as available sensors and other domain specific elements.

Strategy Synthesis component: The strategy synthesis component employs classical planning strategies to support the most likely user goal. It takes a probability distribution over possible user goals from the intention recognition component, as well as some additional environment specific information such as cost functions for single device functions, and generates a plan for the devices to follow in order to support the user.
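The interface between the two components, a probability distribution over user goals flowing from intention recognition into strategy synthesis, could be sketched as follows. The goal names, candidate plans, and cost function are hypothetical, introduced only to illustrate the data flow.

```python
# Sketch of the component interface: intention recognition emits a goal
# distribution; strategy synthesis consumes it together with cost functions.
# All goal and action names below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class GoalDistribution:
    probabilities: dict  # goal name -> probability, summing to 1.0

def most_likely_goal(dist: GoalDistribution) -> str:
    return max(dist.probabilities, key=dist.probabilities.get)

def synthesize_strategy(dist: GoalDistribution, cost_of_action) -> list:
    """Pick the cheapest (hypothetical) device plan for the most likely goal."""
    goal = most_likely_goal(dist)
    candidate_plans = {  # assumed plans per goal, for illustration only
        "give_presentation": [["dim_lights", "start_projector"],
                              ["start_projector"]],
    }.get(goal, [[]])
    return min(candidate_plans, key=lambda p: sum(map(cost_of_action, p)))

dist = GoalDistribution({"give_presentation": 0.8, "leave_room": 0.2})
plan = synthesize_strategy(dist, cost_of_action=lambda a: 1.0)
print(plan)  # -> ['start_projector']
```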
Once a strategy for assistance is created, the external environment controller starts to realize it by controlling the environment.

[Figure: the Environment Monitor and Environment Controller connect to the Assistive Software System, which contains the Strategy Synthesis and Intention Recognition components; the Intention Recognition component comprises a Generic Filter and a Model.]

Figure 1: General Architecture for Assistive Software

Fig. 1 also illustrates that the intention recognition component itself consists of two subcomponents, the model and the generic filter. We split the intention recognition component into these two parts to ensure the reusability of the filter implementation independently of the concrete scenario.

The generic filter component provides a generic implementation of common Bayesian filtering methods such as hidden Markov models or particle filters. These probabilistic methods rely on the specification of a prior state distribution and a process model. Later, in Section 6, we explain how we generate these from a PDDL description. The introduced architecture fulfills the requirements listed in Section 3.1. The support for context histories as well as for imperfect context information is provided by the kind of algorithm that was chosen: we chose temporal probabilistic models, which take the history of the context into account. Due to the probabilistic component, this group of algorithms can also cope with noisy and ambiguous sensor data. The support for software engineering is provided by the strict separation of the software framework into components. By creating only the model from a human readable specification of human behavior, we ensure that only a small part has to be adjusted in different situations. The software framework as well as all other components can be software-tested using a dummy model instead of a specific one. As described later, we also separate the process model from the observation model. This enables us to simply exchange the kind of sensor information used with the same model. The requirement for runtime querying and reasoning is fulfilled by the framework itself, which provides a general way of representing data such as parameters and time series. The requirement for interoperability – meaning that the software should provide assistance even if devices like sensors or actors fail – is not supported directly, but is easy to provide on the basis of the architecture. The environment controller illustrated in Fig. 1 is able to observe not only sensors but also the presence and absence of devices. This can be used either as a trigger for the software to exchange the observation model, if the device was a sensor, or for the strategy synthesis to remove or add the device to the list of available actors. Having described the general framework of our architecture, in the next sections we focus on the intention recognition component and its elements. Section 5 motivates the usage of PDDL as a human behavior model and illustrates the approach with an example, while Section 6 explains the process of mapping PDDL to a particle filter or HMM, as well as the observation model linking the observations to the underlying runtime model.
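As an illustration of what the generic filter computes, the following is a minimal discrete Bayesian (HMM-style) filter step built from the three ingredients mentioned above: a prior state distribution, a process model, and an observation model. The states and all numbers are invented for illustration.

```python
# Sketch: one predict/update cycle of a discrete Bayesian filter over a
# hypothetical two-state activity space (numbers are illustrative).

def filter_step(belief, transition, obs_likelihood):
    """Predict with the process model, then update with the observation."""
    states = belief.keys()
    predicted = {s: sum(belief[p] * transition[p][s] for p in states)
                 for s in states}
    unnorm = {s: predicted[s] * obs_likelihood[s] for s in states}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

prior      = {"presenting": 0.5, "listening": 0.5}
transition = {"presenting": {"presenting": 0.9, "listening": 0.1},
              "listening":  {"presenting": 0.2, "listening": 0.8}}
obs        = {"presenting": 0.7, "listening": 0.1}  # P(sensor reading | state)

posterior = filter_step(prior, transition, obs)
print(posterior["presenting"] > posterior["listening"])  # -> True
```

A particle filter replaces the exact sums with a sampled approximation of the same update, which is why both can share the process and observation models generated from the PDDL description.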

5 Engineering human behavior models

As already explained in the previous sections, there are two main approaches to obtaining models for activity recognition. The first is to learn the model from training data; the second is to use a high-level abstract human behavior model that is later parameterized by domain-specific parameters. In order to avoid the problems connected with the first approach, we pursue the second option, namely an abstract model that is able to capture the dynamics of user behavior. From a software engineering point of view, such a model should satisfy the requirements for context awareness from Section 3.1; from a modeling point of view, it should satisfy the human behavior modeling requirements from Section 3.2. Thus, in this section we discuss the Planning Domain Definition Language (PDDL) as a human behavior modeling approach and its advantages over other HBM formalisms; finally, we give an example by modeling a three-person team meeting with PDDL.

Requirements                     CTML  CTT  Petri N.  ACT-R  PDDL

procedural modeling
  composition                     +     +      +        +      +
  hierarchy                       +     +      +        +      +
  sequences                       +     +      +        +      +
  loops                           +     +      +        +      -
  interleaving activities         -     -      -        -      +
  choice                          +     +      +        +      +
  constraints                     +     +      +        +      +
  enabling                        +     +      +        -      +
  disabling                       +     +      +        -      +
  priority                        +     +      -        +      +
  independence                    +     +      -        +      +
  suspend                         +     +      -        -      +
  resume                          +     +      -        -      +

parallel execution
  parallelism                     +     +      +        +      +
  synchronisation                 +     +      +        +      +

probabilistic modeling
  observation models              -     -      -        -      -
  prob. for action seq.           -     -      -        -      -
  durations of activities         -     -      -        -      -

causal modeling
  preconditions                   +     -      -        +      +
  effects                         +     -      -        +      +
  relation to prior knowl.        -     -      -        +      +

modeling purpose
  causal inference                -     -      -        +      +
  state estimation                -     -      -        -      -
  parameter estimation            -     -      -        -      -
  detecting errors                -     -      -        +      +
  unknown actions                 -     -      -        +      +

model abstraction                 -     -      -        -      +

Table 1: Requirements for human behavior modeling satisfied by CTML, CTT, Petri Nets, ACT-R, and PDDL

5.1 Why PDDL?

In a previous work [32] we investigated 19 different human behavior models and compared their suitability for intention recognition according to the requirements in Section 3.1. We came to the conclusion that PDDL is one of the best choices for our purposes. Table 1 shows which requirements the models from Section 2.3 satisfy; it can be seen that PDDL supports more requirements than the other approaches. PDDL not only supports most of the requirements, but is also able to generate human behavior models without the software engineer explicitly specifying the execution sequence. This is due to the manner in which actions are specified: every action is represented by preconditions that have to be met before the action can be executed, and by


effects that describe what is changed in the environment by executing the action. Thus, it is not necessary to define an explicit execution sequence; instead, the action descriptions, called operators, are fed into a planner and, after the goal is defined, the planner provides a plan satisfying all preconditions and effects and fulfilling the goal. It is also easy to handle overlapping activities, either by using parallelism when the activities are executed by more than one person, or by executing subactivities that satisfy common preconditions of the overlapping activities. Additionally, the operators allow a level of abstraction that is usually difficult to reach with other HBMs. This is achieved by PDDL's ability to define abstract objects that are later parameterized with domain-specific constants. Furthermore, PDDL easily captures any context and metadata relevant for the execution of activities, which makes the approach well suited for context-aware activity recognition applications. The only requirements PDDL does not support are those for probabilistic modeling. However, this drawback can be overcome by mapping the model into a probabilistic model such as a particle filter or an HMM. We describe the process of generating both particle filters and HMMs from a PDDL model in Section 6.

5.2 Engineering models with PDDL

As explained above, PDDL describes actions with operators based on preconditions and effects; once the action specifications are defined, the PDDL planner takes care of generating the human behavior model without the need for manual model engineering. Here we introduce the components of the model and exemplify their usage for HBM by introducing the three-person team meeting scenario and modeling it in PDDL.

PDDL model components. A PDDL description consists of two parts, a domain specification and a problem specification, which are placed in two separate .pddl files. In the domain description, sometimes called operator description, all the operators, i.e. actions, are described in terms of preconditions and effects, as well as all the predicates which represent the world state and are used for defining the preconditions and effects. The second part of the PDDL model description is the fact file, sometimes called problem file, which contains all constants, in other words the domain-specific parameters. Additionally, it describes the initial state of the world, namely the predicates that are true before the plan is generated, as well as the goals that the user wants to achieve in his plan, i.e. the predicates, together with the concrete constants, that have to be true when the goal is achieved.

Team meeting scenario. Having defined the general structure of a PDDL model, we now introduce the team meeting scenario, with which we will exemplify how human behavior can be modeled with PDDL. The scenario is based on our experimental environment, a smart meeting room, which we use as a testbed for human-computer interaction in an intelligent


ambient environment [4]. This evaluation environment contains various sensors and devices that are able to sense changes in the environment. The sensor infrastructure consists of cameras, an ultra-wideband positioning system with active tags, and mobile acceleration sensors that provide information about the user's position. Fig. 2 shows a picture of the smart meeting room where students are having a team meeting.

Figure 2: Our evaluation environment

The scenario unfolds as follows: three users who are about to have a meeting enter our smart meeting room. The first person goes to the stage, connects his notebook to the projector and prepares his presentation. In the meantime the other two users go to their respective seats, sit down and prepare to listen to the presentation. While the first user is presenting, the other two are listening. When the presentation is over, the presenter moves to his seat and the second person replaces him on the stage in order to prepare his presentation. The process repeats for the second and the third user until, after the last presentation, a discussion takes place; afterwards all three team members leave the room.

PDDL model of the team meeting scenario. In order to describe the above scenario in PDDL, all the actions have to be identified and specified as PDDL operators. After a short analysis, the derived actions are enterRoom, move, sit, listen, preparePresentation, present, discuss and leaveRoom. The next step is to describe them as operators. For example, Fig. 3 shows the PDDL operator for the action discuss, where the precondition for executing it is that the presenter has finished his presentation and all attendees are seated. If the precondition is satisfied, the action is executed, and the effect is hasDiscussed. The predicates in this case are isSeated, hasPresented and hasDiscussed; the abstract objects are ?who of type user and ?x of type user. It can be seen that the operator is abstract, as it does not contain concrete user specifications but rather variables that can be substituted by any user. All the remaining activities are represented in a similar manner. If we want to attach weights to our operators, or durations to the longer-lasting activities in the model, we do so by using PDDL extensions for duration [10] and priority. Fig. 4 shows the extended operator of the same action discuss as in Fig. 3.
The extension :prior allows us to weight the operator relative to all other operators.


(:action discuss
    :parameters (?who - user)
    :precondition (and (isSeated ?who)
                       (hasPresented ?who)
                       (forall (?x - user) (isSeated ?x)))
    :effect (hasDiscussed ?who))

Figure 3: Action discuss described as a PDDL operator

Furthermore, the duration, defined by :duration, causes the action to take 5 times longer than the same activity without a specified duration.

(:durative-action discuss
    :parameters (?who - user)
    :prior (2)
    :duration (= ?duration 5)
    :precondition (and (isSeated ?who)
                       (hasPresented ?who)
                       (forall (?x - user) (isSeated ?x)))
    :effect (hasDiscussed ?who))

Figure 4: Action discuss described as a PDDL operator with extensions for duration and operator weight

Finally, in order to define the model purpose, or the goal that has to be reached, the fact file is specified. Fig. 5 shows an example of the information in the fact file for the team meeting scenario. The specific parameters in this case are the three users A, B and C, and the places, namely the three seats, the stage and the door. Then the initial state of the world is defined with the keyword :init; in our example the initial world state is that all three users are at the door. Finally, the goals of the users are specified: here the goal is that all of them should have left the room after presenting and discussing. The operators' preconditions and effects ensure that before the goal is achieved, all of them have been at the stage and presented their slides, as well as discussed them afterwards. After the PDDL model is created, in Section 6 we describe how this model is compiled into probabilistic models and how the derived model is integrated into our general framework for assistive software.

6 The HBM Compiler

The architecture proposed in Section 4 enables us to exchange single components of the software system. The probabilistic model, which strongly depends on the concrete application, can be created independently from the other components and incorporated later. As we already saw, the main drawback of PDDL is the lack of probabilistic modeling. In this section we discuss how to generate probabilistic models, such as hidden Markov models or particle filters, from the description of human behavior in PDDL.

(:objects A B C - user
          stage seat1 seat2 seat3 door - place)
(:init (isAt A door)
       (isAt B door)
       (isAt C door))
(:goal (and (hasLeft A)
            (hasLeft B)
            (hasLeft C)))

Figure 5: Facts definition in PDDL

Figure 6: HMM generated from PDDL

6.1 PDDL to Hidden Markov Models

Hidden Markov models are probabilistic models, namely dynamic Bayesian networks with a finite and discrete state space. HMMs are state of the art in recognizing activities [5, 24, 1]. To generate an HMM from the PDDL model, the state space has to be generated. It is built by applying every feasible operator to the world state. A world state is one possible assignment of all predicates from the PDDL description. The number of possible world states is then given by the number of grounded predicates together with the number of applicable operators. To start the model with a valid configuration, the prior state distribution has to be specified. Here we simply use the initial world state specification from the PDDL problem description. For example, Fig. 5 shows the initial state of the world as well as the

goals of the users. The prior probability of these states is calculated by heuristics, where the prior probability of states unrelated to the initial world state is set to zero. The heuristics are built on common conflict resolution strategies used in production systems like ACT-R, as described in Section 2.3:
Salience: an operator may be prioritized by applying a weight to it.
Recency: the most recent operator may be prioritized.
Refractoriness: an operator that was applied once should not be applied again; this strategy helps to avoid infinite loops.
Specificity: the operator that fulfills the most predicates of the goal state is preferred over others.
As we already explained in Section 5, planners usually generate only one of N possible execution sequences. Thus, here we employ our own planner, which generates a directed acyclic graph that contains all valid plans. This graph is then translated into the transition function of the HMM by using the heuristics to determine the probabilities. If the domain to model involves multiple users that might interact in parallel, the different users have to be defined by creating object instances of type user. Using multiple agents further extends the state space of the HMM: it is built by creating the Cartesian product of the states created for the single-agent case over the agents. This algorithm ensures that each possible action contained in the model has an appropriate state in the HMM and can thus be inferred. The HBM compiler generates HMMs from PDDL descriptions; Fig. 6 shows such an HMM generated from the domain and fact files described in Section 5. During the generation, a heuristic for calculating the probabilities has to be selected. After creating an internal representation of the state space, the prior distribution, the duration model and the process model, the compiler takes an observation model and creates an executable HMM. Using this approach to generate HMMs usually creates huge state spaces.
In the example from Fig. 6 the number of states is small. However, as the complexity of the HMM filtering process depends on the number of states, this approach is not suited for situations with many states. Another disadvantage of this approach is the use of expanded states as the duration model [21]. With the ESHMM (expanded state HMM) approach, only a small set of duration probability distributions can be modeled, and durational states are expanded into a grid of states, which further accelerates the growth of the state space. In the next section we describe how to generate particle filters from the PDDL model to counteract these disadvantages.
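The compile-time state-space construction described above can be sketched as a breadth-first exploration over world states. The following C fragment is a simplified illustration (predicates encoded as a bitmask, operators with add-effects only, all names hypothetical), which also makes the state-space growth tangible:

```c
/* Sketch of compile-time state-space enumeration (names hypothetical):
 * a world state is one assignment of all grounded predicates, encoded
 * here as a bitmask; an operator is feasible when all of its
 * precondition bits are set.  The HMM states are the world states
 * reachable from the initial state. */
typedef struct {
    unsigned pre;  /* grounded predicates required by the operator */
    unsigned add;  /* grounded predicates made true by its effects */
} Op;

/* Breadth-first exploration; returns the number of reachable states. */
static int enumerate_states(unsigned init, const Op *ops, int nops,
                            unsigned *states, int max) {
    int n = 0;
    states[n++] = init;
    for (int i = 0; i < n; i++) {
        for (int k = 0; k < nops; k++) {
            if ((states[i] & ops[k].pre) != ops[k].pre)
                continue;                     /* operator not feasible */
            unsigned next = states[i] | ops[k].add;
            int seen = 0;
            for (int j = 0; j < n; j++)
                if (states[j] == next) { seen = 1; break; }
            if (!seen && n < max)
                states[n++] = next;           /* new HMM state */
        }
    }
    return n;
}
```

Every additional grounded predicate potentially doubles the number of world states, which is why the Cartesian product over multiple agents quickly becomes prohibitive.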

6.2 PDDL to Particle Filter

Particle filter [3] methods approximate the state space with a set of weighted samples. Like other Bayesian filtering approaches, one has to specify the possible state space, namely the structure of a state, the prior state distribution, and the process model. In addition to these common parts, it is necessary to specify methods that take a set of weighted samples and calculate the most likely state. To track the current configuration of the environment, including the states of the agents, we introduce the world state. This world state is constructed by combining all


predicates from the PDDL problem description, grounded with every applicable parameter. This world state enables us to check preconditions and to apply effects. In contrast to the HMM approach, in the particle filter approach the state space is not explored at model compile time but at runtime, using a planner. This is the main advantage of the particle filter, as we explore only a small subset of the state space. Every PDDL action contains a list of preconditions and effects which may depend on parameters. The first step in compiling the specification is to apply the possible parameters to the actions to create the grounded operators, as shown in Fig. 7. Given the list of preconditions, a method is created that checks whether the current world state fulfills these preconditions. The action discuss in Fig. 3 applied to a user named A, for example, results in the method checkdiscussA in Fig. 8. Only if all preconditions are fulfilled, namely that all other users are still in the room and sitting, and that A has finished giving the presentation, can the operator discuss be applied to user A. In addition to the preconditions, each action defines effects that have to become true when the action is executed. The list of effects of an action is translated into a method that applies the effects to the current world state; Fig. 9 illustrates the resulting execution method for the discussA operator.

(:action discussA
    :precondition (and (isSeatedA)
                       (hasPresentedA)
                       (isSeatedB)
                       (isSeatedC))
    :effect (hasDiscussedA))

Figure 7: Grounded discuss operator

static bool checkdiscussA(WorldState *w) {
    if ((w->isSeatedC) && (w->isSeatedA) &&
        (w->isSeatedB) && (w->hasPresentedA))
        return true;
    return false;
}

Figure 8: Method to check if operator discussA is applicable

static void execdiscussA(WorldState *w) {
    w->hasDiscussedA = true;
}

Figure 9: Method to execute the operator discussA

Our approach is to explore the state space online. Based on the prior state distribution, which is directly given by the problem specification, we employ a partial-order

planner to explore the possible operators. If priority information (salience) is not specified for the actions, we use the introduced heuristics to weight them. Similarly to generating HMMs with the HBM compiler, creating the particle filter starts with producing an internal representation of the model. After adding the observation model (see Section 6.3), the model can be used in our framework. As already mentioned, the HMM approach has the disadvantage that the state space has to be explored at compile time. One advantage of the particle filter based approach is that the state space is explored dynamically. Another advantage is that the developer of the PDDL model is not limited to one kind of duration model: any probability distribution can be used for durational tasks.
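Putting the generated pieces together, a single particle-filter step might look as follows. This is a minimal sketch under strong simplifying assumptions (a single operator, deterministic application, and a hypothetical toy_lik observation model); the real filter chooses among all applicable operators and resamples particles by weight:

```c
/* Minimal sketch of the runtime particle-filter step: each particle
 * carries a world state, applicable grounded operators (the generated
 * check/exec pairs) are explored online, and particles are reweighted
 * by the observation likelihood.  WorldState and the operator are
 * reduced to three predicates for illustration. */
typedef struct { int isSeatedA, hasPresentedA, hasDiscussedA; } WorldState;
typedef struct { WorldState w; double weight; } Particle;

/* Simplified check/exec pair (the full preconditions also require
 * the other users to be seated, as in Fig. 8). */
static int  checkdiscussA(const WorldState *w) {
    return w->isSeatedA && w->hasPresentedA;
}
static void execdiscussA(WorldState *w) { w->hasDiscussedA = 1; }

/* Toy observation model: high likelihood once the discussion state
 * is reached (purely illustrative). */
static double toy_lik(const WorldState *w) {
    return w->hasDiscussedA ? 1.0 : 0.1;
}

/* Propagate one particle: apply an applicable operator (the state
 * space is explored at runtime), then reweight with p(obs | state). */
static void particle_step(Particle *p,
                          double (*obs_lik)(const WorldState *)) {
    if (checkdiscussA(&p->w))
        execdiscussA(&p->w);
    p->weight *= obs_lik(&p->w);
}
```

Because only the states actually reached by the particles are ever materialized, the full Cartesian state space of the HMM approach never has to be built.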

6.3 Observation model

In the previous sections we explained how to create the process model of the probabilistic models from PDDL descriptions, but omitted the definition of the observation model. As we propose a training-free approach to infer users' activities, here we also derive the observation model from prior knowledge. An observation model has to be specific to an environment; therefore we describe how to create observation models of different levels of complexity using only prior knowledge about the concrete environment. Fig. 10 illustrates a very simple approach for creating an observation model for smart environments. Here the position of every seat and canvas is known. Based on the assumption that people give presentations in front of canvases and listen to presentations sitting on a seat, we can create a bivariate normal distribution whose mean is the location of the seat (or the location in front of the canvas). The observation model for moving around the room can be created by taking the location of the center of the room with a large variance in x and y direction.

Figure 10: Simple observation model

If we have additional knowledge about the size and the orientation of the objects, we can determine an individual covariance matrix for each object. This model is also built from simple prior knowledge, but it can be updated very easily, as we only take a number of salient objects, such as chairs, into account. In this more complex approach we model

the movement of the users by using a Gaussian mixture model that can be created by analyzing a detailed floor plan of the specific environment. This approach is illustrated in Fig. 11.

Figure 11: Complex observation model

Another approach for modeling the users' movement was introduced in [19], where a detailed floor plan with particular waypoints was created; the movement path of the user was then calculated by searching for the shortest path between those points. Having described all elements of the HBM Compiler and the framework for assistive software, in the next section we test its performance.

6.4 Interpreting Results at Runtime

Filtering algorithms for probabilistic models are able to provide a likelihood value (or log-likelihood, for easier calculation). This value indicates how well a model describes the current situation, and observing it can help to recognize accidental events. An example of this kind of event is the fall of an elderly person. If the system is designed to detect this kind of event, falling has to be modeled and the system will detect it. Since a fall does not belong to the normal behavior of elderly people, however, it might not be modeled; in such a case an abrupt change in the evolution of the observed likelihood indicates "abnormal" behavior, such as accidental events. In addition, a model should contain a default action with very low probability to cope with such situations. In Section 4 we discussed that the single components of the software system, the intention recognition and the strategy synthesis part, communicate using probability distributions over user goals. In our approach a single filter tool is able to follow one set of goals and therefore detect the activity of users following this goal. Again the likelihood, which describes how well the model fits the observation data, can be used: different filter tools, all sharing the same action description but following different sets of goals, run in parallel, and at each time step the probability distribution is calculated from the likelihoods of all instances. This is feasible due to the finite number of possible goals in such environments [22].
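The combination of the parallel filter instances into a goal distribution then reduces to normalising their likelihoods. A minimal sketch (function name hypothetical):

```c
/* Sketch of combining the parallel filter instances: each instance
 * follows one set of goals and reports a likelihood; normalising the
 * likelihoods yields the probability distribution over the finite
 * set of goals that is passed on to the strategy synthesis. */
static void goal_distribution(const double *lik, double *post, int n) {
    double sum = 0.0;
    int i;
    for (i = 0; i < n; i++)
        sum += lik[i];
    for (i = 0; i < n; i++)
        post[i] = (sum > 0.0) ? lik[i] / sum : 1.0 / n; /* uniform fallback */
}
```

In practice the log-likelihoods reported by the filters would be exponentiated (with a shared offset for numerical stability) before this normalisation.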


7 Evaluation

To analyze the resulting filter tool and evaluate the results of the HBM Compiler and the modeling approach itself, an experiment was conducted in the Smart Appliance Lab at the University of Rostock, described in Section 5. The experiment involved three students, each of whom presented a short proposal; each of the three presentations lasted between 1 and 2 minutes. After the presentations a short discussion took place, and the experiment ended when the students left the smart meeting room. The whole meeting was recorded with UbiSense sensors that provide the users' locations. This is done using location tags that transmit ultra-wideband radio signals, which are captured by a fixed network of receivers [28]. The sensor data is then filtered using the probabilistic model generated by the HBM Compiler and the corresponding observation model. A more detailed analysis of the results of this modeling approach, including a comparison with handcrafted HMMs and particle filters, is presented in [33]. Although the meeting is relatively short compared to an actual meeting, it exhibits all the behavior dynamics occurring in a real meeting and is fully sufficient for testing the performance of the HBM Compiler.

7.1 Results

In order to quantify the influence of the compromises we had to make in order to create the whole model in a fully automatic fashion, we prepared the particle filter models with three different observation models of increasing complexity. To compare the single evaluations we calculate the accuracy by dividing the number of correctly recognized states by the number of all states. Our example meeting had the structure shown in Table 2, which lists the order of actions for each person (A, B, C).

A.state           B.state           C.state
moveAdoorstage    moveBdoorseat     moveCdoorseat
presentA          listenB           listenC
moveAstageseat    moveBseatstage    listenC
listenA           presentB          listenC
listenA           moveBstageseat    moveCseatstage
listenA           listenB           presentC
listenA           listenB           moveCstageseat
discussA          discussB          discussC
moveAseatdoor     moveBseatdoor     moveCseatdoor

Table 2: Sequence of actions that we observed

The users enter the room simultaneously. B and C head for their seats while A immediately prepares his presentation. After A has finished his presentation, he returns to his seat while B takes the stage. For the final discussion, everybody takes his seat, and after closing the meeting, everybody leaves the smart environment. Note that all users act concurrently and independently.


7.1.1 Dummy Observation Model

The first objective is to test whether the observed parallel action sequence is actually a valid plan in terms of our PDDL model, and whether the derived particle filter can track our observation sequence. In order to test this hypothesis, we derive a "dummy" observation model, where we assign each action a constant probability distribution function (pdf), yielding 1 if the observation matches the state and 0 otherwise. Here we do not use the data measured by the sensors, but use the annotated states as the observation sequence; the observed sequence is given in Table 2.

Figure 12: Results from the dummy observation model with 2000 particles, accuracy=1.0

If we apply the particle filter with the dummy observation model to our observations, we obtain the results shown in Fig. 12. The accuracy of our filtering is 100%, as the constant pdf removes all ambiguous hypotheses.

7.1.2 Simple Observation Model

The more interesting results come from a more realistic particle filter where both the transition model and the observation model are derived from a prior description. For our measurement runs, we used the observation model from Fig. 10. The results of our filtering are given in Fig. 13. The accuracy of our model compared to the dummy observation model is markedly decreased, as we have to cope with the ambiguous observations from our real-time location system. Furthermore, the simple observation model (especially the simplistic treatment of all move actions) shows that the particle filter has difficulties deciding whether A, B, or C moves to the stage or to a seat. This accounts for the poor accuracy during the move actions. In our experiment we reached an accuracy of about 0.62.


Figure 13: Results from the simple observation model with 10,000 particles, accuracy=0.62

7.1.3 Complex Observation Model

In order to quantify the impact of the synthetic observation model, and especially the simplistic treatment of all move actions, we approximated the observations for each action as bivariate Gaussian mixtures of our observations. This was done by analyzing previous meetings. Approximating the observations with a mixture of Gaussians allows us to follow the movement of persons in the room even around corners (see Fig. 11 for comparison). Furthermore, we can distinguish better between a "move-to-stage" and a "move-to-seat" action. The results of applying our observations to the complex observation model are given in Fig. 14. The result is somewhat better than with the simple observation model, especially during the move phases, which accounts for the gap in accuracy between the two. The analysis of the experiment shows that we are able to reach an accuracy of about 0.794. To close this gap in the future, research in the direction of more intelligent, parametrizable movement models would be advisable.

8 Discussion and conclusion

The main contribution of this paper consists of three parts. First, we gathered a list of requirements that have to be fulfilled by assistive software and context-aware software in general. Second, we showed that our approach of modeling human behavior with PDDL can be used to create probabilistic models that do not need to be trained to achieve a good recognition rate for the user's state. Third, we proposed a software architecture that fulfills the requirements for context awareness and can be used, together with a runtime model, to assist users.


Figure 14: Results of the complex observation model with 10,000 particles, accuracy=0.794

Gathering requirements is an important task in order to check whether a piece of software fulfills them. The list in Section 3.1 contains requirements for the kind of model used, such as support for context histories, as well as requirements for the engineering process itself, such as support for interoperability. Fulfilling this list of requirements can help to create assistive software in a flexible way that is easy to adapt to new components in the system. Additionally, we presented an approach to create assistive software by specifying the behavior of the users with PDDL and afterwards using this specification to create probabilistic models that are able to cope with noisy and ambiguous sensor data. We discussed multiple approaches for modeling human behavior and showed, on the basis of a list of requirements, that PDDL is adequate for our needs; in addition, we introduced a modeling approach for human behavior with PDDL. Furthermore, the introduced architecture, which benefits from the split between intention recognition and strategy synthesis, is able to fulfill the gathered requirements for assistive software and context awareness, which makes it robust enough to cope with a changing environment and easily adjustable to new components. Finally, we illustrated that our approach, depending on the kind of observation model, is able to recognize the users' state with an accuracy of about 80%. This shows that a probabilistic model generated from an HBM can be a powerful tool for intention recognition that is able to compete with a learned model, and that accurate activity recognition can be achieved without the high cost of training the applied probabilistic model. However, further research has to be done on using durative actions as well as on automatically creating observation models independently of the human behavior model itself.
The modeling approach can be extended by defining atomic action templates as proposed in [31], which enables the system designers to create a library of


small action templates that can be used to model composite activities. Such an approach could solve some planning problems that cannot be resolved when only coarse-grained activities are available. In addition, better results might be obtained by using better heuristics for the transition model. Furthermore, as the experiments conducted in this work did not compare the performance of the generated models with hand-crafted models, an additional paper was published comparing generated models of different complexity to hand-crafted ones [33]; there it was shown that, depending on the modeling strategies and the model complexity, the generated models are able to outperform trained HMMs and particle filters. Regarding the experiment complexity, we are currently working on a more challenging example for the system, where a non-staged three-person meeting takes place for about an hour and the system is required to recognize the participants' actions and intentions. Additionally, in the future we intend to deploy and test the assistive architecture in a real system as well as to apply the approach in other domains. In our previous work [31] we already showed that we are able to model activities from the elderly care domain with PDDL, and in the future we intend to employ the model in our intention recognition system in order to monitor the activities of a nurse. Further research also has to be done in the direction of deploying an application into a smart environment; it has to be shown that the architecture is easy to maintain and to deploy in new environments.

References

[1] D. Aarno and D. Kragic. Motion intention recognition in robot assisted applications. Robotics and Autonomous Systems, 56(8):692–705, August 2008.
[2] J. Anderson. The Architecture of Cognition. Harvard University Press, Cambridge, MA, 1983.
[3] M. S. Arulampalam, S. Maskell, and N. Gordon. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing, 50(2):174–188, 2002.
[4] S. Bader, G. Ruscher, and T. Kirste. A middleware for rapid prototyping smart environments. In Proceedings of the 12th ACM International Conference Adjunct Papers on Ubiquitous Computing, Copenhagen, Denmark, 2010.
[5] M. Buettner, R. Prasad, M. Philipose, and D. Wetherall. Recognizing daily activities with RFID-based sensors. In Proceedings of the 11th International Conference on Ubiquitous Computing, Orlando, FL, USA, 2009.
[6] C. Burghardt, M. Giersich, and T. Kirste. Synthesizing probabilistic models for team activities using partial order planning. In KI'2007 Workshop: Towards Ambient Intelligence: Methods for Cooperating Ensembles in Ubiquitous Environments (AIM-CU), Osnabrück, Germany, 2007.

24

[7] C. Burghardt and T. Kirste. Synthesizing probabilistic models for team-assistance in smart meetings rooms. In Adjunct Proceedings of the 2008 ACM Conference on Computer Supported Cooperative Work, CSCW’08, San Diego, CA, USA, 2008. [8] D. J. Cook, G. M. Youngblood, and G. Jain. Algorithms for smart spaces. In A. Helal, M. Mokhtari, and B. Abdulrazak, editors, The Engineering Handbook of Smart Technology for Aging, Disability and Independence, pages 767–783. John Wiley & Sons, Inc., 2008. [9] S. Das and D. Cook. Designing smart environments: A paradigm based on learning and prediction. In S. Pal, S. Bandyopadhyay, and S. Biswas, editors, Pattern Recognition and Machine Intelligence, volume 3776 of Lecture Notes in Computer Science, pages 80–90. Springer Berlin / Heidelberg, 2005. [10] D. E. Smith. The case for durative actions: a commentary on pddl2.1. Journal of Artificial Intelligence Research, 20(1):149–154, 2003. [11] J. Fix and D. Moldt. A reference architecture for modelling of emotional agent systems. In Proceedings of The Seventh German Conference on Multi-Agent System Technologies (MATES), Hamburg, Germany, 2009. [12] J. Fix, C. von Scheve, and D. Moldt. Emotion-based norm enforcement and maintenance in multi-agent systems: Foundations and petri net modeling. In Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS’06, Hakodate, Japan, 2006. [13] D. Franklin. Cooperating with people: The intelligent classroom. In Proceedings of the Fifteenth National Conference on Artificial Intelligence, AAAI’98, Madison, WI, USA, 1998. [14] D. Franklin and K. Hammond. The intelligent classroom: providing competent assistance. In Proceedings of the Fifth International Conference on Autonomous Agents, AGENTS’01, Montreal, Canada, 2001. [15] M. Ghallab, A. Howe, C. A. Knoblock, D. V. McDermott, A. Ram, M. Veloso, D. Weld, and D. Wilkins. Pddl — the planning domain definition language. 
AIPS98 planning committee, 78(4):1–27, 1998. [16] M. Giersich, P. Forbrig, G. Fuchs, T. Kirste, D. Reichart, and H. Schumann. Towards an integrated approach for task modeling and human behavior recognition. In J. Jacko, editor, Human-Computer Interaction. Interaction Design and Usability, volume 4550 of Lecture Notes in Computer Science, pages 1109–1118. Springer Berlin / Heidelberg, 2007. [17] C. Girault and R. Valk. Petri Nets for Systems Engineering: A Guide to Modeling, Verification and Applications. Springer, 2002. ISBN 3540412174. [18] T. Heider and T. Kirste. Smart environments and self-organizing appliance ensembles. In Mobile Computing and Ambient Intelligence: The Challenge of Multimedia, Dagstuhl Seminar Proceedings, Dagstuhl, Germany, 2005. Internationales Begegnungs- und Forschungszentrum (IBFI). 25

[19] A. Hein, C. Burghardt, M. Giersich, and T. Kirste. Model-based inference techniques for detecting high-level team intentions. In B. Gottfried and H. Aghajan, editors, Behaviour Monitoring and Interpretation - BMI: Smart Environments, volume 3 of Ambient Intelligence and Smart Environments, pages 257 – 288. IOS Press, Amsterdam, 2009. [20] J. Indulska and K. Henricksen. Context awareness. In A. Helal, M. Mokhtari, and B. Abdulrazak, editors, The Engineering Handbook of Smart Technology for Aging, Disability, and Independence, Computer Engineering Series. John Wiley & Sons, Inc., 2008. ISBN 0471711551. [21] M. T. Johnson. Capacity and complexity of hmm duration modeling techniques. Signal Processing Letters, IEEE, 12(5):407–410, 2005. [22] T. Kirste. Making use of intentions. Technical Report CS-01-11, Institut für Informatik, Universität Rostock, Rostock, Germany, March 2011. ISSN 9445900. [23] J. Krumm and E. Horvitz. Predestination: Inferring destinations from partial trajectories. In Proceedings of The International Conference on Ubiquitous Computing, UbiComp’06, pages 243–260, Orange County, CA, USA, 2006. [24] T. Maekawa, Y. Yanagisawa, Y. Kishino, K. Ishiguro, K. Kamei, Y. Sakurai, and T. Okadome. Object-based activity recognition with heterogeneous sensors on wrist. In P. Floréen, A. Krüger, and M. Spasojevic, editors, Pervasive Computing, volume 6030 of Lecture Notes in Computer Science, pages 246–264. Springer Berlin / Heidelberg, 2010. [25] M. Matessa. Interactive models of collaborative communication. In Proceedings of the Twenty-third Annual Conference of the Cognitive Science Society, pages 634–638, Edinburgh, Scotland, 2001. Lawrence Erlbaum Associates, Inc. [26] D. J. Patterson, L. Liao, K. Gajos, M. Collier, N. Livic, K. Olson, S. Wang, D. Fox, and H. Kautz. Opportunity Knocks: A System to Provide Cognitive Assistance with Transportation Services. 
In Proceedings of The International Conference on Ubiquitous Computing, UbiComp 2004, pages 433–450. Springer, 2004. [27] A. Serna, H. Pigot, and V. Rialle. Modeling the progression of alzheimer’s disease for cognitive assistance in smart homes. User Modeling and User-Adapted Interaction, 17(4):415–438, September 2007. [28] Ubisense. Real-time location systems (RTLS) and geospatial consulting. http://www.ubisense.net, Retrieved: 24th January 2012. [29] M. Wurdel, C. Burghardt, and P. Forbrig. Supporting ambient environments by extended task models. In Proceedings of AMI’07 Workshop on Model Driven Software Engineering for Ambient Intelligence Applications, Darmstadt, Germany, 2007.

26

[30] M. Wurdel, D. Sinnig, and P. Forbrig. Ctml: Domain and task modeling for collaborative environments. Journal of Universal Computer Science, 14(19):3188– 3201, 2008. [31] K. Yordanova. Modelling human behaviour using partial order planning based on atomic action templates. In Proceedings of The International Conference on Intelligent Environments, IE’11, Nottingham, UK, 2011. [32] K. Yordanova. Toward a unified human behaviour modelling approach. Technical Report CS-02-11, Institut für Informatik, Universität Rostock, Rostock, Germany, May 2011. ISSN 0944-5900. [33] K. Yordanova, F. Krüger, and T. Kirste. Context aware approach for activity recognition based on precondition-effect rules. In In Proceedings of the workshop COMOREA at the PerCom conference, Lugano, Switzerland, 2012. accepted.

27