Towards Systematic Development of Symbolic Models for Activity Recognition in Intelligent Environments

Kristina Yordanova and Thomas Kirste1

Abstract. In a world where assistive systems are becoming very popular, it is important for such systems to provide adequate and accurate recognition of the user's needs, actions, and goals. Recently, a number of approaches for activity recognition that utilise probabilistic symbolic models have been proposed. Such approaches rely on the combination of symbolic models and probabilistic inference techniques in order to recognise the user activities in situations with uncertainty. One problem with such approaches is that the models are often developed based on the designer's intuition instead of on structured guidelines. This leads to a variety of problems – from the inability to track errors, through the inability to reproduce given results, to the enormous amount of time needed to produce satisfactory results. To resolve this problem, here we propose a structured development process for probabilistic symbolic human behaviour models that aims at improving model documentation, problem traceability, and results reproducibility. To illustrate the development process, a problem from the activities of daily living is modelled. In general, the work provides practical guidelines for creating successful models for activity recognition.
1 Introduction and Motivation
Activity recognition (AR) plays an important role in our everyday life. To achieve accurate AR, different approaches can be applied, e.g. utilising probabilistic methods like Dynamic Bayesian Networks [16], using ontology-based approaches [13], or task trees [6]. Recently, there is also a number of emerging approaches for context-aware activity and intention recognition that utilise probabilistic symbolic human behaviour models (PSHBM) [7, 15, 12, 11, 9]. Such approaches encode prior knowledge about the user behaviour in the form of rules that are later expanded to form the model state space. To perform activity or intention recognition, the transitions between the states are assigned probabilities in order to cope with ambiguous data and missing or erroneous sensor readings. For example, Hiatt et al. use the ACT-R production system [1], on which probabilistic simulation analysis is performed to determine the different execution paths and their probabilities. Alternatively, Ramírez et al. use the Planning Domain Definition Language (PDDL) to encode the user behaviour in the form of precondition-effect action templates. Later, the model is mapped onto a partially observable Markov decision process and the transition probabilities are assigned based on the distance from the current state to the goal [12]. A similar approach is the one proposed by Krüger et al., where
1 University of Rostock, Germany, email: {kristina.yordanova,thomas.kirste}@uni-rostock.de
a PDDL-like notation is used, but the model is mapped either onto a Hidden Markov Model or onto a particle filter [9]. A general problem in the field of activity recognition is that researchers are usually more interested in runtime modelling than in the process of developing the models [8]. This is even more true in the case of PSHBM, as they are relatively new to this field and there are few records of how to model with them in order to achieve the desired results. In contrast to ontological approaches for activity recognition, for which there are established development processes [4], there is little information about the process by which state-of-the-art PSHBM have been produced. Krüger et al. investigate the need for tool support for such models, but a structured process is not discussed [10]. In this work we propose a novel development process for probabilistic symbolic human behaviour models for activity recognition that aims to define a structured way of producing and evaluating such models, and that bridges the gap between developing probabilistic and developing symbolic structures. To do that, in the next section we give a short introduction to probabilistic symbolic human behaviour models. In Section 3 we introduce the development process, while in Section 4 we illustrate the process by modelling a cooking task. Finally, in the last section we discuss future work related to the process.
2 Probabilistic Symbolic Human Behaviour Models
Symbolic human behaviour models describe the underlying system behaviour in terms of states and transitions between these states. Such models have an initial world state and a goal state that the system strives to reach by making transitions from state to state. More formally, a symbolic state space model M is a tuple (Pr, S, A, Pl), where Pr is the set of predicates, which are boolean functions that provide statements about the model's world state. The predicates in the model then build up the model states. Given a state s ∈ S, where S is the model state space, each state is a different combination of the values of all available predicates. For example, if we have 3 predicates in the model, each of which can be either true or false, then a possible state is s = (true, true, false). The model's state space S is then the set of all valid combinations of the existing predicates, which results in 8 states in the example above. There is one special state in the model, s_0, which is the initial model state, or the state of the world before any action has been executed. Furthermore, there is a special subset of states g ⊆ S called the goal states, which represent all the predicates that have to hold in order for the goal to be reached. To move from one state to another, the model has transitions between states, which are denoted by A. In the context
of activity and intention recognition we call them actions. For two states s, s′ ∈ S we say that s′ is reachable from s by a ∈ A if the result of applying a in s is s′. We denote this by s′ = a(s). Pl = {p_1, ..., p_n} is the set of all possible plans from the initial to the goal state, where p_i = (a_1, ..., a_m) for a goal g ⊆ S and an initial state s_0 ∈ S is a finite sequence of actions a_j such that s_l ∈ g, where s_l = a_m(··· a_2(a_1(s_0)) ···). We say that p_i achieves g. In order to reason in a probabilistic manner about the action being executed, one has to assign probabilities to the transitions between the states. This is done by assuming that the probability of the initial state is 1, while the probabilities of the actions that can be executed from a given state are determined by the underlying heuristics. The probability of applying an action a in state x can be computed in different ways, e.g. by using the goal distance, cognitive heuristics, etc. Generally it can be described by
p(a|x) ∝ exp( Σ_{k=1}^{3} λ_k f_k(a, x) )   (1)

f_1(a, x) = log γ(a(x)),   (2)
f_2(a, x) = log s(a),   (3)
f_3(a, x) = δ(a(x)).   (4)
Here γ(a(x)) is the revisiting factor, which is 0 if the resulting state of applying the action a to the state x has already been visited. This factor is determined by the history of each single running hypothesis. s(a) is the saliency of the action a, which is specified in the action template. The third feature δ(a(x)) is the goal distance of the state x′ = a(x) that will be reached if action a is applied to state x. By using the weights λ_k, each feature can be weighted.

Furthermore, as an action takes more than a single time step, probabilistic action durations allow each action to contain a duration density function. This provides the possibility of encoding additional a priori knowledge in the action definition. The probability of finishing the execution of an action α in the state s in the time interval (a, b) is given by

P(a < δ < b | δ > a) = (F(b) − F(a)) / (1 − F(a))   (5)

where F denotes the cumulative distribution function of the duration density p(δ|α, s). This enables each plan to sample from F in each step whether the action should be stopped or not.
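As a sketch of Formula 5, assume a Gaussian duration density (the move action in Section 4 uses a mean of 20 and a standard deviation of 5 time steps); the normal CDF and the concrete numbers below are illustrative, not part of the original model:

```python
import math

def normal_cdf(x: float, mean: float, sd: float) -> float:
    """Cumulative distribution function of a Gaussian, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

def p_finish_in_interval(a: float, b: float, mean: float, sd: float) -> float:
    """P(a < delta < b | delta > a) = (F(b) - F(a)) / (1 - F(a)), i.e. Formula 5."""
    fa = normal_cdf(a, mean, sd)
    fb = normal_cdf(b, mean, sd)
    return (fb - fa) / (1.0 - fa)

# Probability that a 'move' action (mean 20, sd 5 time steps) that is still
# running at step 15 finishes within the next 10 steps.
p = p_finish_in_interval(15, 25, mean=20, sd=5)
```

The conditioning on δ > a is what allows the inference engine to decide at every time step whether a still-running action should terminate.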
3 Development Process for Probabilistic Symbolic Human Behaviour Models for Activity Recognition
Activity recognition generally relies on data analysis processes in order to develop the AR system. Such processes include data collection, feature extraction, model building and optionally training, and finally performance evaluation [5]. However, they provide no explicit guidelines for building the model, but rather for selecting algorithms for the given problem. In software engineering, on the other hand, there is a number of established development processes that can be applied to model development, e.g. the waterfall model [14] or Boehm's spiral model [3]. They, however, do not consider the question of how to combine symbolic with probabilistic structures, or what effects such a combination has on model development. Here we consider exactly these problems and discuss how to develop PSHBM by combining software engineering and data analysis techniques.
Figure 1. Process for developing human behaviour models for activity recognition.
The development process we propose consists of five phases, which include the model development, its evaluation, and documentation. The goal of the process is to systematically develop a working model with high activity recognition performance, and at the same time to increase the ease of tracking and identifying modelling problems and of finding alternative solutions. It also aims at providing better model documentation, as in the field of activity recognition a detailed model documentation is often missing [10]. Fig. 1 shows the process.

It can be seen that the process has two layers. The first one resembles a waterfall model. The first phase in this layer is the analysis phase, which consists of understanding and analysing the problem domain, identifying the model objectives, and later deriving the model requirements. It also takes care of collecting the data to be analysed, of deriving an actions ontology, and later of creating the data annotation based on the actions ontology. The second phase is the model design, where modelling solutions for the objectives are selected. The third phase is the model implementation, which can be done with e.g. Computational Causal Behaviour Models [9], ACT-R, PDDL, etc. The fourth phase is the model validation, in which the implemented model is validated against an existing plan or a set of plans and improved based on the annotation. As the developed model aims at recognising user activities, the modelling process contains one more phase, namely the model evaluation. In it the model is employed for recognising user activities, and its performance is computed by comparing the recognised activities to the ground truth. Additionally, the modelling objectives derived in the first phase are used as success criteria in order to evaluate the
model. The second layer in the development process is introduced due to the need to cope with the model's structure, which is a combination of causal and probabilistic elements. It consists of three phases. The first is the development of the causal model and the corresponding observation model; the second is the development of the action selection heuristics; and the third is the development of the action durations. Each of the three phases spans three phases of the first layer, namely design, implementation, and validation.

Were the model composed of only a symbolic representation, the waterfall model could easily be adjusted to accommodate the development process. This is due to the fact that on the symbolic level one can model each action without influencing the rest of the actions. In that manner one can recursively perform analysis, design, implementation, validation, and evaluation until the desired model behaviour is achieved. However, when probabilities are present, it is not possible to develop the probabilistic structure together with the symbolic structure, as a change in the probabilistic structure of one action directly affects the probabilistic structure of all remaining actions. In that sense, the symbolic structure of the model has to be developed before the designer can proceed with the probabilistic structure.

Furthermore, it is essential to distinguish between the development of action selection heuristics and the development of action durations, because before proceeding with the action durations one has to ensure that the correct action can be selected at all. Incorrectly chosen action selection heuristics could assign too small a probability to the correct action, rendering it improbable even though it belongs to a valid execution sequence. Considering the probabilistic structure, it is likewise incorrect to start with assigning action durations before assigning correct action selection heuristics: if an action has incorrect heuristics, it will never be selected, and thus its duration can never be validated. This interplay between causal structure, action heuristics, and action durations results in the need for the second model layer, where the symbolic structure is developed first, then the action selection heuristics, and finally the corresponding action durations. Furthermore, the process takes care that each phase is carefully documented, so that the model development and evaluation, as well as the decisions made, can easily be backtracked.
4 Modelling the Cooking Task: An Example
To illustrate the process, we go through its steps and give examples from a typical activity recognition problem, namely the preparation of a meal. The cooking task is a typical kitchen task assessment problem [2], where the aim is to detect whether the person is executing the task in the correct order; if we stretch it further to an assistance problem, the system would additionally detect inaccuracies in the user behaviour and assist her in correcting her mistakes and successfully achieving the task objective. This has applications where the user suffers from Alzheimer's or a similar disease that is still in its early stages, and she wants to preserve her independent lifestyle as long as possible.

Seven cooking tasks were recorded. Each of the tasks lasted about 7 minutes, and although the task at hand, namely cooking, was staged, the behaviour of the participants while achieving their goal was left to themselves. This resulted in different execution paths leading to the goal state and increased the model variability needed to recognise the correct behaviour. Additionally, the datasets contained between 636 and 1207 samples.
Figure 2. The layout for the cooking task and the objects being manipulated
4.1 Analysis
During the analysis phase, the following questions are answered.

What is the problem to be modelled? The answer to this question should provide information about what kind of behaviour is to be modelled; how many users are involved; what their roles in the problem domain are; what the environment is with which the user(s) is (are) interacting; and which elements of the environment are to be modelled and which are to be omitted. In the context of the kitchen task, the result of this question would be the following. The scenario to be modelled is as follows: a person is cooking a carrot soup in a kitchen supplied with the necessary kitchen appliances. The person starts by washing her hands, then cutting the carrot, putting it into the pot and cooking it. After the meal is ready, she serves it in a plate, pours water into a glass, and sits at the table to have lunch. Finally, after the person has eaten and drunk the water, she stands up, goes to the sink, and washes her utensils. The fine-grained actions that take place (such as fill plate, fill glass, move, etc.) can be executed in any causally correct order. The locations in the kitchen are sink, counter, and table, which can be reached only by walking from one place to another. Additionally, there are fixed locations, or places, which can be reached from certain locations only by moving the hand. The places are cupboard and oven, which can be reached from the counter. Furthermore, different objects with varying functions and properties are used: cutting board, pot, plate, glass, bottle, knife, spoon, sponge, and additionally water and carrot (see Fig. 2).

What is the application for which the model will be used? The general answer to this question is that the model is, of course, designed for activity recognition. However, there are different applications of activity recognition, e.g.
it could provide information to an additional assistive component that proactively assists the user; it could monitor people who may exhibit erroneous behaviour caused by illness or age, etc. In the context of the kitchen scenario, the application is the recognition of fine-grained actions and the corresponding objects being manipulated, so that the information can be used for assisting cognitively impaired people in preparing their meal.

What are the model objectives? The model objectives describe the
results the model has to achieve in order to be successful. These could be reducing the modelling time, improving the model robustness, increasing the model performance, etc. They should be clearly defined, as they will later be used as success criteria for the model evaluation. For example, in the kitchen task the objective is to accurately recognise the fine-grained user actions and the corresponding objects being manipulated.

How are the objectives measured and what are the success criteria? In order to be able to use the objectives as success criteria, each objective should be assigned a measurement unit and a threshold value, which indicates the limit under or above which the objective is successfully accomplished. For example, in the kitchen scenario we would measure our objective with the accuracy rate the model produces, and we would require, e.g., a recognition rate of at least 75%.

What are the actions that will be modelled? Based on the problem description, one should compile a list of actions that the user is able to execute within the given problem domain. These actions will be the building blocks of the corresponding user behaviour. In the kitchen example, the actions are wash, wait, move, take, put, cut, fill, turn on, cook, turn off, open, close, sit down, eat, drink, stand up.

What kind of sensors do we need to capture these actions? Based on what is to be modelled and what we want to recognise, an appropriate sensor infrastructure should be selected. For example, in the kitchen task we need a sensor infrastructure that can detect not only the action but also the object being manipulated.

Properties (move) – objects: none; locations: from, to; instantiations: move-sink-counter, move-sink-table, move-table-counter, ...
Figure 3. Actions ontology for the cooking problem and the concrete action move
Additionally, in this phase the data is collected and cleaned. After the data collection, the actions ontology is created. It describes the involved actions, their properties, and their structure. For example, Fig. 3 shows the ontology for the action move. It can be seen that the action and the parameters it can take are described, as well as the instantiations of the action combined with the relevant parameters. Finally, based on this ontology, the annotation of the datasets is produced. It should follow the action-name construction defined in the ontology. Furthermore, it should be causally correct and without time intervals with missing annotation.
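The ontology entry for move in Fig. 3 (no objects; location parameters from and to) can be sketched as data from which the grounded instantiations are generated. The encoding below is a hypothetical illustration, not the notation used in the paper:

```python
from itertools import permutations

# Hypothetical encoding of the Fig. 3 ontology entry for 'move':
# the action takes no objects and two location parameters, 'from' and 'to'.
LOCATIONS = ["sink", "counter", "table"]

def instantiate_move(locations):
    """Generate all grounded move actions, e.g. 'move-sink-counter',
    following the action-name construction move-from-to."""
    return [f"move-{src}-{dst}" for src, dst in permutations(locations, 2)]

actions = instantiate_move(LOCATIONS)
# 3 locations yield 6 grounded instantiations of move-from-to.
```

Generating the instantiations from the ontology rather than writing them by hand keeps the annotation labels and the model's action names consistent by construction.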
4.2 Second development layer
The next three phases of the development process – design, implementation, and validation – fall under the second development layer. As explained before, the need for this layer comes from the combination of symbolic and probabilistic approaches. The exact procedure is explained below; the numbers in it correspond to the numbers in Fig. 4.
Figure 4. The second layer of the development process that includes designing, implementing and validating a model.
1. Choose design solutions for the actions to be modelled (e.g. causal relations between the actions, representation of context information, observations that correspond to the actions). For example, a causal relation would be that the carrot cannot be cut before being washed; an observation that corresponds to wash would be one that indicates that the carrot is being manipulated and that the user is at the sink.
2. Based on the problem description, create a minimal model that is able to explain the problem. In this case we use Computational Causal Behaviour Models, which represent the actions with precondition-effect rules and are later compiled into a probabilistic model [9]. Avoid model overfitting by considering all variations of the behaviour simultaneously, rather than concentrating on only one of the datasets. One can create a minimal model by incrementally adding all actions with only the minimal set of necessary constraints.
3. Compile the model in order to check for syntactic and semantic errors.
4. Validate the model with the plans generated from the annotation. An example of a plan can be seen in Fig. 5. If the validation is successful, continue to the next step; otherwise return to step 1 or 2.
5. Validate the model with the collapsed observations generated from the annotation. This shows whether the inference engine is able to find the plan given the observations. If the step is successful, continue to step 6, otherwise to step 1 or 2; if the action selection heuristics were already implemented and validated, continue to step 8; if the heuristics were unsuccessfully validated, return to step 6 or 7.
6. Choose appropriate action selection heuristics. These could be the goal distance, action weights, cognitive heuristics, etc. (see Formula 1). For example, we can decide that the action eat has higher importance than the rest of the actions, so we assign a weight to it using the saliency option (Formula 3).
7. Implement the action selection heuristics if applicable for the problem. Continue to step 4.
0,(wash hand) 1,(wait) 2,(move sink counter) 3,(take carrot counter) 4,(move counter sink) 5,(wash carrot) 6,(move sink counter) 7,(take knife counter) 8,(put carrot cutting-board)
8. Choose appropriate action durations based on the domain knowledge and on an appropriate duration probability distribution. For example, given the layout of the kitchen and the distances between the locations, we can represent the duration of the action move as a Gaussian distribution with a mean of 20 time steps and a standard deviation of 5 time steps.
9. Implement the action durations.
10. If the model with its durations compiles, validate the durations by using the durative observations generated from the annotation.
11. If the durations validation was successful, one can now enrich the model by adding more context information, creating more complex causal relations, or adding more constraints to reduce the model size, etc., and repeat the process from the start.
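Steps 6 and 7 correspond to Formulas 1–4: a log-linear combination of the revisiting factor, the saliency, and the goal distance. A minimal sketch of the normalised action probabilities follows; the feature values and weights are illustrative assumptions, not values from the paper:

```python
import math

def action_probabilities(actions, features, weights):
    """p(a|x) proportional to exp(sum_k lambda_k * f_k(a, x)) -- Formula 1.

    `features` maps each action to its feature tuple (f_1, f_2, f_3):
    log revisiting factor, log saliency, and a goal-distance feature.
    """
    scores = {a: math.exp(sum(w * f for w, f in zip(weights, features[a])))
              for a in actions}
    total = sum(scores.values())
    return {a: s / total for a, s in scores.items()}

# Illustrative features: 'eat' is unvisited, salient (weight 2), and close
# to the goal; 'wait' is unvisited, neutral, and further from the goal.
feats = {
    "eat":  (0.0, math.log(2.0), -1.0),
    "wait": (0.0, math.log(1.0), -3.0),
}
probs = action_probabilities(["eat", "wait"], feats, weights=(1.0, 1.0, 1.0))
```

Note that a revisiting factor γ of 0 gives f_1 = log 0 = −∞, which correctly drives the probability of re-entering an already visited state to zero.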
Figure 6. Average accuracy, precision, and specificity for the seven recorded datasets. Figure 5. Extract of a plan from the cooking task
4.3 Evaluation
The last phase is the model evaluation, where the model's performance is evaluated. The outputs of this phase are the evaluation results, their visualisation (e.g. in the form of box plots, heat maps, simple plots, etc.), and the evaluation of the success criteria in case such were defined in the analysis phase. The evaluation is performed with evaluation scripts that allow results reproducibility. The corresponding metrics are calculated by comparing the estimated activities to the ground truth (namely the annotation). The measurements used for evaluating the model were accuracy, namely the degree of closeness of an estimated behaviour to the actual behaviour; precision, namely the proportion of positive test results that are correctly recognised; and specificity, which represents the proportion of actual negative instances that are identified as such. Fig. 6 shows the average accuracy, precision, and specificity for the 7 recorded datasets in the kitchen example. It can be seen that the accuracy is between 75% and 85% depending on the dataset. This indicates that our objective of an accuracy above 75% is also achieved.
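The three measurements can be computed per activity class by comparing the estimated label sequence against the annotation. A minimal sketch for a single class (the label sequences below are illustrative):

```python
def class_metrics(estimated, ground_truth, activity):
    """Accuracy, precision, and specificity for one activity class,
    comparing an estimated label sequence against the annotation."""
    tp = fp = tn = fn = 0
    for est, ref in zip(estimated, ground_truth):
        if est == activity and ref == activity:
            tp += 1            # correctly recognised positive
        elif est == activity:
            fp += 1            # falsely recognised as the activity
        elif ref == activity:
            fn += 1            # missed positive
        else:
            tn += 1            # correctly recognised negative
    accuracy = (tp + tn) / len(ground_truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, precision, specificity

# Illustrative sequences: the estimate mislabels one 'eat' sample as 'cut'.
est = ["wash", "cut", "cut", "eat"]
ref = ["wash", "cut", "eat", "eat"]
acc, prec, spec = class_metrics(est, ref, "cut")
```

Averaging these per-class values over all activities and all datasets yields the summary numbers plotted in Fig. 6.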
4.3.1 Documentation and results reproducibility

Although not a phase, the documentation plays an important role in the development process. It ensures that each step of the development is appropriately documented and that no information is lost. This later serves as a log for easier problem traceability, provides better model understanding, and ensures results reproducibility at a later point. Documentation and results reproducibility are two aspects of model development that are often underestimated in the field of activity recognition. The documentation is often produced at the end of the development process, resulting in inaccuracies or the inability to correctly reconstruct the development process. Additionally, the usage of different parameters, or of slightly varying evaluation metrics that are not well documented, makes it impossible to reproduce the obtained results. Table 1 shows the documentation produced in
Table 1. Documentation provided in the different development phases

Phase | Documentation
Analysis | problem description; actions to be modelled; environment elements to be modelled; datasets description; annotation description; requirements specification; success criteria and hypotheses; description of the collapsed actions' observations; log of changes
Design | causal relations between actions; conceptual modelling solutions; action durations; action heuristics; relation between high level actions and observations; log of changes
Implementation | description of modelled actions; description of available context; description of model parameters; log of changes
Validation | plans to be validated; log of changes
Evaluation | evaluation scripts; description of parameters used for results reproducibility; description and interpretation of model results in terms of accuracy, precision, recall, specificity, etc.; visualisation of model results; log of changes
each step of the process. It can be seen that, among the other outputs, there is always a log of changes. It ensures that any changes made throughout the model development are carefully documented; otherwise the designer is often unable to discover the reasons behind changes, or the problems resulting from them. Another essential point is results reproducibility. To ensure it, all scripts, tools, models, and parameters involved in producing given results should also be included in the model documentation. It is recommended to automate the procedure of storing all involved experiment elements, as this decreases the possibility of manual errors. Furthermore, experience showed that it is essential to create evaluation scripts when calculating the model performance; otherwise the results often cannot be reproduced because a slightly different formula or function was used.
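The automated storage of experiment elements can be as simple as serialising every parameter and result alongside the names of the evaluation scripts used. The sketch below is a hypothetical illustration; the file name and record fields are assumptions, not tooling from the paper:

```python
import json
import time

def log_experiment(path, parameters, results, scripts):
    """Store everything needed to reproduce a run: the parameters,
    the results, and the evaluation scripts that produced them."""
    record = {
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
        "parameters": parameters,   # e.g. heuristic weights, duration means
        "results": results,         # e.g. accuracy per dataset
        "scripts": scripts,         # scripts used to compute the metrics
    }
    with open(path, "w") as f:
        json.dump(record, f, indent=2)
    return record

# Hypothetical run: log the saliency weight, duration parameters, and accuracy.
record = log_experiment(
    "experiment_log.json",
    parameters={"saliency_eat": 2.0, "move_duration_mean": 20, "move_duration_sd": 5},
    results={"accuracy": 0.81},
    scripts=["evaluate.py"],
)
```

Because the record is written by the same code that runs the evaluation, the stored parameters cannot drift from the ones actually used, which is precisely the manual error the process tries to eliminate.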
5 Discussion and Conclusion
In this work we presented a development process for probabilistic symbolic human behaviour models for activity recognition. In a field where the stress is put on runtime modelling instead of on model development, designers often have to rely on their intuition and learn by trial and error how to build high-performance models. We addressed this problem by introducing a structured way of developing such models that ensures results reproducibility and better error tracking. It should also reduce the time needed for developing a working and well-performing model, as it helps the designer avoid pitfalls typical of such approaches.

One drawback of the process is that its efficiency has not been compared to state-of-the-art development processes. In the future we intend to validate the proposed process by comparing its efficiency with that of a typical waterfall model. The choice of the waterfall model is based on our experience that designers intuitively follow this model when developing their applications. On the other hand, there are no existing activity recognition processes that deal with the development of the model in a software engineering sense, so we have no other baseline development process with which to compare the proposed one. In conclusion, the proposed development process provides useful guidelines for producing successful and well-documented PSHBM with reproducible results. Furthermore, in the future we hope to show that the process also provides better efficiency compared to state-of-the-art development processes.
REFERENCES

[1] J. R. Anderson, The Architecture of Cognition, Harvard University Press, Cambridge, MA, 1983.
[2] Carolyn Baum and Dorothy F. Edwards, 'Cognitive performance in senile dementia of the Alzheimer's type: The kitchen task assessment', The American Journal of Occupational Therapy, 47(5), 431–436, (1993).
[3] Barry W. Boehm, 'A spiral model of software development and enhancement', Computer, 21(5), 61–72, (May 1988).
[4] Liming Chen and Ismail Khalil, 'Activity recognition: Approaches, practices and trends', in Activity Recognition in Pervasive Intelligent Environments, eds., Liming Chen, Chris D. Nugent, Jit Biswas, and Jesse Hoey, volume 4 of Atlantis Ambient and Pervasive Intelligence, 1–31, Atlantis Press, (2011).
[5] Richard O. Duda, Peter E. Hart, and David G. Stork, Pattern Classification, Wiley and Sons, 2001.
[6] Martin Giersich, Peter Forbrig, Georg Fuchs, Thomas Kirste, Daniel Reichart, and Heidrun Schumann, 'Towards an integrated approach for task modeling and human behavior recognition', in Human-Computer Interaction. Interaction Design and Usability, ed., Julie Jacko, volume 4550 of Lecture Notes in Computer Science, 1109–1118, Springer Berlin / Heidelberg, (2007).
[7] Laura M. Hiatt, Anthony M. Harrison, and J. Gregory Trafton, 'Accommodating human variability in human-robot teams through theory of mind', in Proc. of the Int. Joint Conference on Artificial Intelligence, pp. 2066–2071, Barcelona, Spain, (2011). AAAI Press.
[8] Jadwiga Indulska and Karen Henricksen, 'Context awareness', in The Engineering Handbook of Smart Technology for Aging, Disability, and Independence, eds., A. Helal, M. Mokhtari, and B. Abdulrazak, Computer Engineering Series, 585–606, John Wiley & Sons, Inc., (2008).
[9] Frank Krüger, Kristina Yordanova, Albert Hein, and Thomas Kirste, 'Plan synthesis for probabilistic activity recognition', in Proc. of the Int. Conference on Agents and Artificial Intelligence, eds., Joaquim Filipe and Ana L. N. Fred, pp. 283–288, Barcelona, Spain, (February 2013). SciTePress.
[10] Frank Krüger, Kristina Yordanova, Veit Köppen, and Thomas Kirste, 'Towards tool support for computational causal behavior models for activity recognition', in Proceedings of the 1st Workshop "Situation-Aware Assistant Systems Engineering: Requirements, Methods, and Challenges" (SeASE 2012), held at Informatik 2012, pp. 561–572, Braunschweig, Germany, (September 2012).
[11] Miquel Ramírez and Hector Geffner, 'Probabilistic plan recognition using off-the-shelf classical planners', in Proc. of the Nat. Conference on Artificial Intelligence, pp. 1211–1217, Atlanta, Georgia, USA, (July 11–15 2010).
[12] Miquel Ramírez and Hector Geffner, 'Goal recognition over POMDPs: Inferring the intention of a POMDP agent', in Proc. of the Int. Joint Conference on Artificial Intelligence, pp. 2009–2014, Barcelona, Spain, (2011). AAAI Press.
[13] P. C. Roy, S. Giroux, B. Bouchard, A. Bouzouane, C. Phua, A. Tolstikov, and J. Biswas, 'A possibilistic approach for activity recognition in smart homes for cognitive assistance to Alzheimer's patients', in Activity Recognition in Pervasive Intelligent Environments, eds., L. Chen, C. D. Nugent, J. Biswas, J. Hoey, and I. Khalil, volume 4 of Atlantis Ambient and Pervasive Intelligence, 33–58, Atlantis Press, (2011).
[14] W. W. Royce, 'Managing the development of large software systems: Concepts and techniques', in Proceedings of the 9th International Conference on Software Engineering, ICSE '87, pp. 328–338, Los Alamitos, CA, USA, (1987). IEEE Computer Society Press. Reprinted in Proceedings of the International Conference on Software Engineering (ICSE) 1989, ACM Press, pp. 328–338.
[15] J. Gregory Trafton, Laura M. Hiatt, Anthony M. Harrison, Franklin P. Tamborello, Sangeet S. Khemlani, and Alan C. Schultz, 'ACT-R/E: An embodied cognitive architecture for human-robot interaction', Journal of Human-Robot Interaction, 2(1), 30–55, (2013).
[16] Tim L. M. van Kasteren, Athanasios Noulas, Gwenn Englebienne, and Ben Kröse, 'Accurate activity recognition in a home setting', in Proceedings of the 10th International Conference on Ubiquitous Computing, UbiComp '08, pp. 1–9, New York, NY, USA, (2008). ACM.