A Framework for the Specification of Multifunctional, Adaptive, and Realistic Virtual Companions for Later Life*†

Dennis Maciuszek and Nahid Shahmehri
Department of Computer and Information Science, Linköping University, S-581 83 Linköping, Sweden
{denma, nahsh}@ida.liu.se
Summary: Independence in everyday life is a concern of many frail older people. There is a vision that in the near future this group will considerably gain independence and improve their quality of life by relying not only on human caregivers – who cannot grant round-the-clock assistance – but in addition on smart computer-based help capable of imitating human care-giving behaviour. This vision is expressed in the metaphor of creating ‘virtual companions’ for later life. In the first part of this paper, we point out three major requirements that have to be met by assistive high technology before it can claim to realise the vision: (1) Instead of solving a specific problem in a specific way, an elderly person's virtual companion has to be multifunctional. (2) Instead of serving only a small target group, a virtual companion should be able to adapt its behaviour to many elderly users’ situations of life. (3) To supply realistic help, a virtual companion should not be specified by software engineers alone, but in dialogue with the user, caregivers, and experts. In the second part of the paper, we suggest a framework for creating virtual companions that do meet the requirements.

Keywords: elderly, independence, quality of life, diversity, assistive technology, high technology, computer applications, personal assistance, virtual companion, multiagent system, adaptivity, user modelling, user-centred software design
1 Introduction

Independence in everyday life is a major concern of many frail older people. It is usually desired to sustain the ability to care for oneself as much as possible, and to be able to stay in one’s own home, i.e. to avoid institutionalisation (Willis, 1996). This kind of independence may be sought for psychological, but also for practical reasons. Neither a professional nor an informal caregiver can assist an older person on a round-the-clock basis. For instance, home care service in the municipality of Linköping, Sweden is usually limited to assistance with personal hygiene in the morning and evening, a safety alarm and telephone service, and up to six hours per month of cleaning, laundry, shopping, and food preparation assistance (Linköpings kommun, 2002). In the US, 64% of older people living in the community and in need of long-term care receive assistance only from informal caregivers, i.e. family and friends (Liu et al., 2000). Obviously, these people have their own lives to take care of as well. There is a vision that in the near future frail older people will considerably gain independence and improve their quality of life by relying not only on human caregivers, but in addition on intelligent computer-based assistance for situations of everyday life (Shahmehri, 2001). Inspired by recent advances in communications, home automation, pervasive computing, and artificial intelligence, this vision of new high-technology aids goes beyond the view of assistive technology as low-technology tools like the helpful walker or hearing aid. As ‘technology is increasingly either occupying or sharing the role traditionally occupied by human caregivers’ (Miller et al., 2001), researchers’ ambitions are developing towards creating computer-based help that imitates real care-giving behaviour. This vision has been summarised in expressions like ‘automation as caregiver’, the title of an AAAI 2002 workshop, or a ‘virtual companion’ for an older person. The latter metaphor has been used e.g. at the Department of Applied Computing, Dundee, Scotland to describe a constant helper enhancing the independence of a person with dementia, and on a (no longer online) Web site by Daniel Drew that promoted the vision in general. It should be noted here that the idea of a virtual companion is not tied to a certain type of user interface. Instead, a companion might appear as a desktop application, on different screens on the wall, as a robotic pet, as an invisible voice in the room, or in a mobile phone. What counts is the software behind it and the interaction with the user that it offers. Haigh & Yanco (2002) and Dewsbury et al. (2002) provide surveys of current research on computer-based ‘automation as caregiver’ that addresses smart homes, assistive robotics, telecare and telehealth, as well as human factors issues. Studying the literature, we were impressed by the wealth of ongoing projects and the functionality of realised products, yet we find that there is some way to go before the vision of virtual companions for later life can come true.

* This research is done in collaboration with the Institute for the Study of Ageing and Later Life at Linköping University Campus Norrköping.
† The work is supported in part by the Swedish Council for Working Life and Social Research.
Previously (Shahmehri, 2001), we identified the need for multi-modal communication, the need for user support, issues of ownership and control, and adaptivity and adaptability as general problems in the design of intelligent IT for older people. In this paper, we adopt the particular concept and vision of the virtual companion, and propose a framework that could, in our opinion, realise it. In Section 2, we identify three major requirements that must be met by such technology before it can make the big difference to many old people's independence and quality of life that researchers envision. Section 3 presents our framework for the specification of multifunctional, adaptive, and realistic companions. We explain how it addresses the requirements. Section 4 discusses related work. Section 5 concludes our reasoning and outlines future work.
2 Requirements on Virtual Companions for Later Life

2.1 Functionality

The usual assistive device solves a specific problem in a specific way. Yet, this will often not suffice to ensure independent living for a frail older person. It would be a simplifying assumption that a person depending on care-giving assistance is characterised by a single problem for which a single solution exists. Instead, he or she will have a variety of needs and qualify for a variety of intervention strategies. As for the variety of needs, data from 1997 (Gabrel, 2000, Table 4) show 89.3% of US American nursing home residents receiving assistance with two or more activities of daily living (ADLs). 88.7% of the residents received assistance with two or more instrumental activities of
daily living (IADLs). In 1996 (Munson, 1999, Table 2), 51.3% of American home care users received assistance with two or more ADLs. 28.1% received assistance with two or more IADLs. ADLs (Katz et al., 1963) include bathing, dressing, toileting, transferring, continence, and eating (the statistics additionally included walking). IADLs (Lawton & Brody, 1969) cover the activities of using the telephone, shopping, food preparation, housekeeping, laundry, transportation, handling medication, and handling finances (the statistics used only subsets of these). Notice that even if a person only receives help with one ADL or IADL, e.g. doing the laundry, this could in fact include several services, like carrying the clothes, explaining how to operate the washing machine, etc. Furthermore, everyday life is more than functioning in ADLs and IADLs. Think of safety concerns, the basic need for social contact, or the desire to fill life with meaning. Consequently, a person’s dependency does not just result from a number of functional limitations. Gerontology recognises many more facets of dependency, e.g. economic, social, or emotional dependency (Baltes, 1996). Baltes studied dependency on a behavioural level, finding that dependency need not be a necessary consequence of some weakness, but may be socially learned. Dependent behaviours are often modifiable, even reversible. Moreover, Baltes argues for a form of behavioural dependency that should not be regarded as negative, but instead as a strategy towards successful ageing, namely self-regulated dependency. The idea is that the frail older person makes decisions concerning where and how he or she becomes dependent or independent. He or she may deliberately choose to depend on others in some areas of everyday life, while at the same time abandoning selected activities and replacing them with activities he or she is capable of performing independently. A person may compensate for limitations, e.g.
by depending on people, or by using technology that supports independence. The older person may further try to optimise his or her skills in mastering certain types of situations on his or her own. We observe that in everyday life there are no standard solutions that grant independence regarding certain standard problems, but rather different possibilities in different situations. If we have the ambition to create virtual companions capable of substantially complementing human care, then our systems must be flexible enough to support a variety of application areas through a variety of functions. Our smart homes must not only prevent fire in the kitchen, but also give cooking instructions. Our medication reminder systems must not only remind, but also train the user not to forget. Otherwise, we risk assistive technology that is of help only occasionally, or that helps in undesirable ways. An obvious alternative to multiple functions – and applications of these – within the same system would be to provide a number of artefacts, one for each occasion. This, however, is not practicable, considering the combinatorics of pairing a larger number of functions with a larger number of application areas. An integrated implementation, in which modules and data can be shared and function code can be reused, is easier and cheaper to develop and maintain. It is easier to expand, as new applications can be integrated into an established structure, and thus be made compatible from the start. Notice that this is more than giving the companion a uniform user interface. Usability will already be supported by consistent behaviour inside the system. Our first requirement is multifunctionality, so that several needs are addressed, and applications become compatible with each other.
In Section 3, we suggest a multifunctional framework capable of integrating many of the isolated ideas for automated assistance found in technology today, thus forming a more holistic computer-based aid.
2.2 Target group

The usual assistive device for older people serves a small target group. This is not efficient in terms of productivity, and it does not fulfil the ambition to work on technology for ‘the elderly’. That statement may sound contradictory, considering the growing percentage of older citizens in modern societies. Yet, the assumption that ‘old people are all alike’ is a myth. Elders demonstrate greater heterogeneity than any other age group. Their collections of experiences and different lifestyles make ‘the elderly’ a very diverse group (Ferrini & Ferrini, 2000). Diversity also prevails on the level of functioning. Returning to the statistics collected by Gabrel (2000, Table 4) and Munson (1999, Table 2), the only ADLs with which both at least one third of nursing home residents and at least one third of home care users receive assistance are bathing and dressing. Percentages for assistance with the different IADLs are rather high for nursing home residents (from 62.2% to 77.1%), but only assistance with shopping and light housework is more or less common (84.3% and 38.9% respectively) among users of home care. Notice further that e.g. ‘light housework’ is a wide area, and actual needs will differ. Frail old people differ not only in the ADLs and IADLs they need help with, but also in how much help they need. For example, the IADL assessment scale (Lawton & Brody, 1969) measures performance in housekeeping relative to five levels of dependency: ‘maintains house alone or with occasional assistance’, ‘performs light daily tasks’, ‘performs light daily tasks but cannot maintain acceptable level of cleanliness’, ‘needs help with all home maintenance tasks’, and ‘does not participate in any housekeeping tasks’. Occupational therapists (Levine & Brayley, 1991) assess individual performance in yet more detail, e.g. from every little component of the vacuum cleaning activity to the role of being a competent housewife or house husband.
On the lower, physical and cognitive levels, too, the diversity that is found disproves popular beliefs about ‘the elderly’ or ‘all elderly’. It is due to improved health care that, for instance, only a minority of seventy-year-olds in Sweden are without teeth today (Steen, 1991). And by no means is every old person forgetful. Confusion and significant memory loss are not a normal part of ageing. They do occur for some people, and should then be investigated for causes like dementia, depression, toxic medication, and others (Ferrini & Ferrini, 2000). Several studies (Moskovitch, 1982; Maylor, 1990; Craik, 2000) have demonstrated that whereas older adults tend to perform worse than younger subjects in experiments that assess prospective memory, they often equal or outperform young people in realistic, everyday prospective memory tasks. These are interesting observations, as prospective memory is relied on in tasks such as remembering to take medicine or to turn off the water tap in the bathroom. Tasks like these constitute typical situations to which assistive technology is applied. Östlund (2002) points out what the common misconception of elderly people as a homogeneous group means for them as a user group of IT. In her opinion, it leads to ad-hoc descriptions of the users, so that some applications will work, and others will not. Notice that another risk would be an application working ‘too well’: by being helped too much, its user would no longer learn in everyday life situations (Intille, 2002), and would thus lose independence. Östlund (2002) suggests that in order to avoid disappointments, developers of technology must acquire knowledge about their users’ socio-economic situation, competence, health, and gender. In addition, she highlights the importance of the local context into which technology is integrated.
As software engineers, we might state that in the case of assistive technology, where the success of an implementation depends so much on the satisfaction of user needs and a seamless integration into the home environment, a thorough requirements analysis is compulsory. Yet, how can one analyse the requirements of a customer group – ‘the elderly’ – that is decidedly heterogeneous? We believe that assistive technology for larger groups must be so flexible that it can adapt itself to each particular user’s individual and changing needs. An adaptive companion would act like a human caregiver who keeps a medical journal and gets to know his or her client in everyday interaction. Connecting to our reasoning above, we require a companion that can provide many possible functions, so that every person can use a certain subset of these, and be helped individually by appropriate degrees of automation, in his or her individual environment. An alternative to a generic system adapting to as many users as possible is to develop isolated devices for small, clearly defined target groups. Some researchers work specifically on assistive technology for people with dementia or a certain disability. With this strategy, it might happen, though, that needs related to one person’s independence and quality of life, but not his or her disability, are missed. In addition, a disability will have different effects on different people. Section 3 suggests a generic framework for the creation of advanced assistive software that adapts to a user's needs, skills, and the people and artefacts in his or her environment.

2.3 Expertise

The usual assistive device is created by engineers. Yet, what do we computer scientists know about older people? Of course, we can evaluate our products in usability studies with elderly users. But that can only improve the quality of our finished products, or products already in development. Instead, we should have the right ideas from the very beginning.
We have argued previously that representatives of the elderly must be involved from the outset; otherwise there is a risk that they simply will not be able to use the products we create (Ställdal et al., 2001, Section ‘Smart homes mean more independence’). Studying the literature on intelligent IT developed for the elderly, we noticed, however, that quite often a project does not begin with a collection of requirements; rather, a certain invention inspires and guides the development of an artefact. Certainly, robots, sensors, and home networks are good things to have, and probably they are quite useful for old people, or for applications that old people will be using. Yet, this is often taken for granted. The need is not shown. This is not surprising, because naturally we write about what we understand best – the technical details – and what interests us the most – technical innovations. Of course, we can start intensive research on the needs of older people, but it will take a long time before we become experts. Yet, expert knowledge is the basis on which our new, intelligent applications will have to work. Engineers need to engage in dialogue with users and experts to fill the knowledge gaps in their models and algorithms, as well as in their initial product ideas. Our own group, for instance, benefits from collaboration with the Institute for the Study of Ageing and Later Life in Norrköping. The opinion stated above is supported by Östlund (2002), who writes that the development of supportive technology for older people is usually technology-driven. Technical applications and expectations of how they will be used are described, while the elderly as a group are not. Östlund argues that failures of new technology often derive from the engineers’ lack of knowledge regarding the context of application of a piece of technology. Elderly users and gerontologists are often not included in the discussion.
Co-operation among different actors and multidisciplinary approaches, however, are a key factor in producing a positive outcome. For these reasons, we state the integration of the involved actors’ expertise in each configuration and upgrade of a virtual companion as our third requirement.
Section 3 suggests a framework for the creation of realistic virtual companions. The methodology attached to it is technology-driven, yet the technology drives a dialogue with users, experts, and caregivers, who may guide each specification and contribute their knowledge. The technology does not restrict interdisciplinary dialogue, but provides a common language for it.
3 A Framework for the Specification of Virtual Companions

3.1 Multifunctionality

Virtual companion software is supposed to assist its elderly user in different situations of everyday life. Assistance shall be given through interaction with the user and the environment he or she lives in. According to our first requirement, a variety of application areas are to be supported via a variety of functions. In order to determine a set of application areas that a number of older people would need assistance with, and a first set of functions that would provide interaction patterns supporting these application areas, we studied the literature. Firstly, we looked into publications that have older people describe functional limitations and their responses (Rogers et al., 1998), that investigate meaningful recreational and creative activities (Hazan, 1992), that analyse communication between older persons and caregivers (Cedersund & Nilholm, 2000), and that summarise needs of older adults from the perspective of occupational therapists (Glogoski & Foti, 2001). Secondly, we learnt about existing technology described in the surveys (Haigh & Yanco, 2002; Dewsbury et al., 2002) and found by us on the Web. In addition, we sought inspiration from a companion application in another domain (Fleck et al., 2002). In the end, we recorded as potential needs, and thus candidate areas of application, those shown in Figure 1. Note that the hierarchical structures of our taxonomies will be expanded further in the future, when they become part of a larger ontology. As typical interaction patterns in dealing with the needs, and thus candidate functions for the companion, we identified those in Figure 2. Notice that the same function, e.g. a trainer, can be applied to different application areas, e.g. one trainer trains food preparation, and another one trains memory performance. Likewise, the same application area may be supported e.g. by both training and guiding.
Figure 1: Application areas
Figure 2: Functions
A virtual companion is more than a mere tool that is being operated. It assumes the role of an active caregiver who acts on his or her own initiative. When realising the identified interaction patterns and supporting the identified needs, it has to exhibit rather complex behaviour. We therefore decided to regard each basic function of the companion as a software agent. An agent shall be understood as a software module capable of (1) autonomous action, e.g. to actively help when a new need is detected, (2) deliberation, i.e. reasoning on the basis of goals and knowledge, e.g. to solve a problem, and/or (3) reactivity, i.e. responding to outside stimuli, e.g. a sensor reading violating safety constraints, and for some agents also (4) learning, i.e. the capability of updating knowledge or stimulus-response rules on the basis of experience. The companion system as a whole is seen as a multiagent system. In a multiagent system, many agents are (1) active concurrently, and they can (2) communicate and work together in order to realise the global behaviour. Figure 3 displays a multiagent system that represents a generic multifunctional companion. Its core functionality is realised by communicating function agents. Each function agent behaves on the basis of a set of goals, knowledge, stimuli, and responses. These are stored in global databases, so that a piece of data may be shared among several agents. The multiagent system interacts with the user via a user interface. It is embedded in the environment through sensors and actuators, as well as interfaces for assisting users.
Figure 3: Multifunctional framework
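The agent model described above can be sketched as a minimal data structure. This is purely illustrative: the class and field names are our own shorthand for the goals, knowledge, stimuli, and responses discussed in the text, not part of any implemented system.

```python
from dataclasses import dataclass, field

@dataclass
class FunctionAgent:
    """One function applied to one application area (cf. Figures 1 and 2)."""
    function: str            # e.g. "monitor", "trainer"
    application_area: str    # e.g. "safety", "food preparation/IADL"
    goals: set = field(default_factory=set)        # long-term goals, shareable via global databases
    knowledge: dict = field(default_factory=dict)  # facts the agent deliberates over
    stimuli: set = field(default_factory=set)      # outside events the agent reacts to
    responses: dict = field(default_factory=dict)  # stimulus -> reactive behaviour

    def react(self, stimulus):
        """Reactivity: respond to an outside stimulus, e.g. a sensor reading."""
        handler = self.responses.get(stimulus)
        return handler(self) if handler else None

# A safety monitor that warns when the cooker overheats
monitor = FunctionAgent(
    function="monitor",
    application_area="safety",
    stimuli={"cooker_overheated"},
    responses={"cooker_overheated": lambda self: "warning: cooker overheated"},
)
print(monitor.react("cooker_overheated"))  # -> warning: cooker overheated
```

A sensor reading outside the agent's stimulus set simply yields no response, leaving it to other concurrently active agents.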
One function agent
• realises one function (from the taxonomy of Figure 2)
• in support of one application area (from Figure 1, e.g. safety or food preparation).
An instance of a virtual companion may include several function agents realising the same function, e.g. a food preparation/IADL trainer and a memory/cognitive comfort trainer. Likewise, there may be several function agents supporting the same application area, e.g. a food preparation/IADL trainer and a food preparation/IADL guide. The function of a function agent is the interaction pattern that the agent follows in order to assist the user. The goals, knowledge, stimuli, and responses available to a function agent depend on its application area. At present, we suggest ten functions. More may be added, if necessary. We will now explain each function by outlining its interaction pattern, and by naming a typical example from existing technology.
• Monitor: Continuously reads from sensors, and generates a warning when readings are out of the usual range. The Swedish Handicap Institute's smart home (Elger & Furugren, 1998) can warn e.g. when a door or a window has been left open, when a water tap has not been turned off, or when the cooker is overheated. The corresponding function agent supporting this application area would be a safety monitor.
• Reminder: Reminds the user when an event occurs that he, she, or an assisting user has defined previously. Companies like e-pill offer a wide selection of programmable electronic medication reminders (e-pill, 2003). In our framework, this would be realised as a medication/IADL reminder agent.
• Supplier of activities: Engages the user in an interactive activity. If a result is created, it may be stored to be looked at again later. Libin and Cohen-Mansfield (2002) have investigated the positive effects that a robotic cat may have on isolated and understimulated persons with dementia. This can be regarded as an emotional concerns supplier of activities.
• Mediator: A difficult user interface is simplified. Translates simple input into commands, and difficult output into information that the user can easily grasp. Pieper and Kobsa (1999) built an interface that enables a bed-ridden manually impaired old person to use a word processor by talking to it and reading from the ceiling. This application would be a software usage/IADL mediator (the original IADLs include only telephone usage as a comparable activity).
• Operator: A task is done by the automation instead of the older user. Generates commands to achieve the user's current goal, then forwards them. Observe that here a ‘goal’ denotes a short-term goal, to be achieved in this one situation – or in terms of temporal logic ‘sometime’. The Gloucester Smart House (Adlam, 2001) issues warnings like Elger & Furugren (1998). In addition, it turns off a tap or the cooker automatically, if the user does not do so in time. This would be realised as a combination of agents – a safety monitor working together with a safety operator. The monitor would ask the operator to take care of the problem. After that, it would not become inactive, but concurrently carry on monitoring.
• Communicator: Calls a person who might help achieve a current goal. The goal may have been formulated by the user or another agent. The MORE mobile phone (fortec et al., 2000) allows its user to make special alarm calls to be handled at a service centre. We would call this a safety communicator. A communicator function need not only work in one direction, for instance the MORE technology enables its service centre staff to locate a caller through his or her phone. In combination with other agents, e.g. a monitor or an operator, a communicator could, via a residential gateway (Herzog & Shahmehri, 2001), provide care staff with different forms of access to a user and his or her home environment.
• Guide: Uses its knowledge about a current difficult task to generate helpful advice, instructions, or suggestions, so that the user can achieve his or her goal. The COACH talking bathroom (Mihailidis, 2002) is installed in a demented user's bathroom. When necessary, it gives him or her cues showing the next step in a handwashing task. This would be called a bathing/ADL guide.
• Trainer: Helps the user optimise his or her skills. Teaches via theoretical instructions and interactive exercises with feedback on performance. Rebok et al. (1996) showed that non-demented elderly can improve memory performance through computerised memory training. One can therefore implement memory/cognitive comfort trainers.
• Informer: Retrieves and presents information about a current user interest or goal from a database or the Web. The SAID project‡ includes a subsystem (Bogdanovich et al., 2003) capable of retrieving information from the Web that suits an older user's needs. Its ontology basis covers a number of areas, from recreational to health interests. In that case, the corresponding function agent is a generic informer.
• Conversation aid: Supports or has a conversation with the user – simply for fun, to write a text, communicate thoughts to someone else, etc. Results can be stored. As an example, memory-impaired people are handicapped in conversations. The Circa multimedia tool (Alm et al., 2003) helps such persons express themselves during reminiscence sessions. Reminiscence sessions can be relaxing or create biographical stories. Hence, a function agent realising this is a recreation and creativity conversation aid.

‡ http://www.eptron.es/projects/said/index.html
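The monitor/operator combination described for the Gloucester Smart House example can be sketched as simple message passing between two agents. The class and message names below are our own invention, not taken from any cited system; the point is only that the monitor delegates and then keeps monitoring.

```python
# A safety monitor that warns, asks a safety operator to take over,
# and continues monitoring concurrently (sketch, our own naming).
class SafetyOperator:
    def __init__(self):
        self.inbox = []          # problems delegated by other agents

    def act(self):
        # carry out the task instead of the user, e.g. turn off a running tap
        return ["turn_off_" + problem for problem in self.inbox]

class SafetyMonitor:
    def check(self, readings, operator):
        warnings = []
        for reading in readings:
            if reading in {"tap", "cooker"}:    # reading out of the usual range
                warnings.append(f"warning: {reading}")
                operator.inbox.append(reading)  # delegate to the operator
        return warnings                         # monitoring itself carries on

operator = SafetyOperator()
monitor = SafetyMonitor()
print(monitor.check(["tap"], operator))  # -> ['warning: tap']
print(operator.act())                    # -> ['turn_off_tap']
```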
The set of available functions for supported application areas will be different for each personal instance of a virtual companion. This is why the system is called a framework. On its meta-level it does not define one companion, but potential companions that may be defined. Still, it is not only a theoretical construct, as we are planning to implement application area and function components which can then be selected and combined to form working agent systems.

3.2 Adaptivity

The concept of the virtual companion shall appeal to a wide target group. According to our second requirement, this is achieved by enabling a companion to adapt its functionality and behaviour to an individual user's changing needs, skills, and the environment he or she lives in. In our framework, adaptation to an individual user means two things, namely
• the selection of which function agents should be activated in a companion, according to the user’s needs or goals,
• the adaptation of the behaviour of an individual function agent, according to the user’s skills or preferences.
If the user has just been prescribed a lot of new medication, a previously inactive medication reminder could become active. An individual food preparation/IADL guide must adapt its assistive behaviour so that it compensates for the user’s weaknesses, but at the same time does not seize control. A user may very well be able to peel a potato, but not know how long it must boil. A physically weak user who walks rather slowly may be assisted by a physical comfort operator that opens doors; however, this only makes sense if it keeps the doors open long enough. Adaptation to a user’s environment is important as well. The environment may create needs, like a wheelchair that is difficult to manoeuvre. Or it may help in assistance, e.g. a lamp that may be lit. A function agent must therefore know the local environment and its behaviour.
Moreover, individual goals and preferences as well as the environment context may change. For example, the user may have had an accident, or a technical artefact has been reconfigured. Such changes must become visible in the profile. The standard approach to forming a basis for adaptivity is user modelling. Data about the user would be stored in relation to a well-defined structure of goals and preferences and be made available to the function agents. These would adjust their behaviour with regard to the data. Yet, our virtual companion is supposed to imitate user-caregiver dialogues. As the caregiver has already been modelled as several function agents with goals and the capability of agent-agent communication, it is a logical decision to represent the user by an agent as well. The
user model data becomes an active user agent that derives goals, and behaves towards achieving these according to knowledge about the user. Activation and deactivation of function agents can then happen by matching long-term user agent goals with long-term goals of the function agents. Notice the difference between long-term goals to be achieved ‘always’ and short-term goals to be achieved ‘sometime’ (cf. 3.1). Adaptation of one function agent’s assistance can happen by communication and sharing of goal-directed behaviour between function agent and user agent. Figure 4 shows areas of user preferences that may become user agent knowledge. Examples of helpful information would be medication that a user has been prescribed (health) or his or her functional competence in the food preparation IADL. The areas were identified by applying insights from literature and experience with user modelling – for instance in a live help system (Åberg et al., 2001) – to a set of prepared scenario cases.
Figure 4: User agent knowledge
In analogy to the user agent, our framework represents people and things in the environment as environment agents. These are mainly reactive. Their main use will be the simulation of environment responses to stimuli emitted by function agents, in order to guarantee a good and reliable outcome of the companion's actions. Whereas one user agent is enough to represent the companion's client, the environment can be arbitrarily complex. Figure 5 displays a small selection of possible environment types.
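A minimal sketch of such a reactive environment agent, under the assumption that stimulus-response rules can be stored as a simple lookup table (the rule contents and class names are invented):

```python
# Sketch of a reactive environment agent: it maps stimuli emitted by
# function agents to simulated responses, so a function agent can check
# the predicted outcome before acting on the real environment.

class EnvironmentAgent:
    """Mainly reactive: responds to stimuli via stimulus-response rules."""
    def __init__(self, name, rules):
        self.name = name
        self.rules = dict(rules)   # stimulus -> predicted response

    def simulate(self, stimulus):
        # Predict the environment's response to a stimulus; unknown
        # stimuli are assumed to have no effect.
        return self.rules.get(stimulus, "no effect")

lamp = EnvironmentAgent("bedside lamp", {
    "switch on": "room lit",
    "switch off": "room dark",
})

print(lamp.simulate("switch on"))   # room lit
print(lamp.simulate("dim"))         # no effect
```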
Figure 5: Environment types
Figure 6 shows the virtual companion framework with one added user agent and several added environment agents. The new agents exist on the fringe of the system, where they communicate both with its core, i.e. the function agents, and with the outside. The user agent adapts interactive companion behaviour to user goals and preferences via the user interface of the system. The set of environment agents makes the companion sensitive to the environment context by reading from sensors and changing internal representations of the environment accordingly. In addition, they carry out commands from function agents that intend to manipulate the environment. Finally, the environment agents are connected to user interfaces for assisting persons.
Figure 6: Multifunctional and adaptive framework
3.3 Realistic specification

Before a companion system is used, it needs to be configured with an initial set of applications that are useful to the individual user. In addition, later upgrades will become necessary. Our third requirement demands that such specification of an individual's companion be done in dialogue with those actors who possess the required knowledge about older people in general as well as about the particular case. The elderly user, his or her informal or professional caregivers, and experts in gerontology, nursing, occupational therapy, architecture, and traditional technology know the person's needs. Software engineers provide the possibilities in the form of the designed and implemented framework. Together, they can shape a realistic set of aids for the individual user. While the group consisting of the elderly person, caregivers, and experts may be present at each individual configuration and upgrade, the few computer engineers working on the product cannot. We must therefore provide ways for this group to perform configurations on their own. In our approach, we suggest complementing the multifunctional and adaptive framework with a language for composing specifications. The language must be understandable and usable, so that the group including the old person, caregivers, and experts can identify needs and then turn them into a specification. The definition of specifications will be supported by a tool that translates high-level input made by caregivers and experts into working companion code. We conclude this section by stating some requirements on the expressiveness of the companion specification language. Basically, it must be possible to describe every desired agent in the system by as few technical details as are sufficient for determining its behaviour. In the case of an upgrade, new agent descriptions would be added, or existing agents would be deactivated, changed, and activated again.
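To make the expressiveness requirements concrete, a specification in such a language might look roughly like the following sketch. The field names and the dictionary encoding are our own illustration, not the actual language, which remains to be designed:

```python
# Hypothetical high-level companion specification, to be translated
# into working agent code by a tool. Every agent is described by only
# the few details needed to determine its behaviour.

companion_spec = {
    "function_agents": [
        {"function": "monitor", "application_area": "safety"},
        {"function": "trainer", "application_area": "food preparation/IADL"},
    ],
    "user_agent": {
        "long_term_goals": ["not cause a fire", "be trained in cooking"],
    },
    "environment_agents": [
        {"type": "hardware appliance", "instance": "microwave oven"},
    ],
}

def upgrade(spec, agent_description):
    # An upgrade simply adds a new agent description; existing agents
    # could likewise be deactivated, changed, and activated again.
    spec["function_agents"].append(agent_description)

upgrade(companion_spec, {"function": "conversation aid",
                         "application_area": "emotional concerns"})
print(len(companion_spec["function_agents"]))   # 3
```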
The manual part of describing function agents, the user agent, and environment agents will be similar to the following process. For a better understanding of its aims, imagine an elderly person, Mrs N (Glogoski & Foti, 2001), who lives with her daughter, but needs to be independent and safe from 8 a.m. to 3:30 p.m. Mrs N has a severe walking disability. She needs to be protected from falls, and from causing a fire in the kitchen. She is motivated to learn cooking from the wheelchair, but she often feels sad, due to aches and pains and the recent deaths of friends. For each function agent, a new instance of one of the ten predefined functions (Figure 2) is created. This determines the agent's general behaviour, including autonomy, learning, and possible communication with other function agents, the user agent, and environment agents. For Mrs N, a monitor, a trainer, and a conversation aid might be chosen. The monitor agent, for instance, would autonomously detect unusual situations, issue warnings, and delegate problems by communicating with a guide, operator, or communicator, if those were also created. Secondly, one of
the application areas (Figure 1, with future refinement of its hierarchy in mind) is chosen by making the predefined set of goals, knowledge, stimuli, and responses that defines the area available to the function agent. In the end, Mrs N might receive a safety monitor, a food preparation/IADL trainer, and an emotional concerns conversation aid helping her express her sadness. The final, third step is the most complex one, because it requires more than making choices. It involves the manual definition of additional knowledge and stimulus-response rules that are unique to the specific instance of the function agent for the specific, chosen activities performed by the specific user. An example would be adding new recipe knowledge to the database and making it available to the food preparation/IADL trainer if its current recipes do not suit Mrs N's taste or diet.

To specify the user agent, one instance of a generic user agent is created. This determines the agent's behaviour, including autonomy, learning, and possible communication with function agents and the user interface. In a second step, an initial set of long-term goals is chosen for the agent. These are the user's needs, which determine which function agents will be active right after the initial system configuration. Mrs N would, for example, have the goals of not causing a fire and of being trained in cooking. The set of goals should match a subset of the goals of already specified function agents. In the third, most complex step, knowledge about the user that can be expressed in terms of Figure 4 is inserted into the database and made available to the user agent. Examples would be definitions of Mrs N's walking disability and her competence in the food preparation IADL.

For each environment agent, a new instance of an environment agent is created. This determines the agent's behaviour, including autonomy, learning, and possible communication with function agents and the actual environment.
Secondly, one environment type (Figure 5) is chosen by making the predefined goals, knowledge, stimuli, and responses of that type available to the new environment agent. For Mrs N, this could be a hardware appliance (a microwave oven), a sensor world (cooker and smoke detector in the kitchen), a person (her daughter), a room (the kitchen), a vehicle (the wheelchair), and so on. Environment types will mainly provide stimulus-response rules, due to their reactive nature. Finally, in the third step, new stimulus-response rules and possibly items of knowledge are added. Stimulus-response rules could be operations on the microwave plus their outcomes. Knowledge might include the ground-plan of the kitchen.
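Under the assumption that functions, application areas, and case-specific knowledge are represented as simple Python structures, the three specification steps above might be sketched for one of Mrs N's function agents as follows (all names and contents are invented for the example):

```python
# Hedged sketch of the three-step function-agent specification:
# (1) instantiate a predefined function, (2) attach a predefined
# application area's goals and knowledge, (3) add case-specific knowledge.

PREDEFINED_AREAS = {
    # Step 2 material: each area comes with predefined goals and knowledge.
    "food preparation/IADL": {
        "goals": {"be trained in cooking"},
        "knowledge": {"recipes": ["boiled potatoes"]},
    },
}

class Trainer:
    """One of the ten predefined functions (step 1)."""
    def __init__(self):
        self.goals = set()
        self.knowledge = {}

    def attach_area(self, area_name):
        area = PREDEFINED_AREAS[area_name]
        self.goals |= area["goals"]
        # Copy so case-specific additions do not alter the predefined area.
        self.knowledge.update({k: list(v) for k, v in area["knowledge"].items()})

    def add_knowledge(self, key, items):
        # Step 3: manual, case-specific additions, e.g. recipes that
        # suit Mrs N's taste or diet.
        self.knowledge.setdefault(key, []).extend(items)

trainer = Trainer()                                   # step 1
trainer.attach_area("food preparation/IADL")          # step 2
trainer.add_knowledge("recipes", ["vegetable soup"])  # step 3
print(trainer.knowledge["recipes"])   # ['boiled potatoes', 'vegetable soup']
```

The same pattern (instantiate, attach a predefined type, add case-specific details) applies to the user agent and the environment agents.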
4 Related Work

Like our own approach, the Independent LifeStyle Assistant (I.L.S.A.; Haigh et al., 2002a; Haigh et al., 2002b) is a multiagent system. Its idea is similar, but the distribution of responsibilities among agents differs. Instead of centring on interaction patterns in function agents, I.L.S.A. has domain agents that cover all functions for one application area, e.g. medication management or eating. I.L.S.A. is multifunctional; however, it includes fewer types of functions than our taxonomy. I.L.S.A.'s monitoring, operation, communication, and simple guidance support a person's independence within the home environment, but not so much interactive problem solving for a more general improvement of quality of life. I.L.S.A. does not, for example, support conversation or supply meaningful activities. The system is adaptive to the user and to the environment. It can be configured and upgraded, and its open architecture allows software producers to add new functionality. Care experts were included in an initial collection of requirements.

Researchers at the Rehabilitation Engineering Research Center on Technology for Successful Aging in Florida (Mann & Helal, 2002; Giraldo et al., 2002) are developing a companion
in the form of a smart phone. The phone further acts as a ‘magic wand’ for controlling a smart environment. Their preliminary set of applications contains basically the functions also identified by us; only a supplier-of-activities type of function is not mentioned. The set includes a number of application areas; however, there is no method for combining functions with application areas, and no taxonomy. A software architecture to enable a wide range of applications is planned, though. The smart phone integrates well into the home environment through a reactive event-condition-action engine. Adaptivity on the user side seems to be limited, except for their shopping application (Shekar et al., 2003), which adapts to nutrition profiles and shopping habits. Administrative tools for upgrading the smart environment and integrating the phone are being developed. Apparently, this does not include customisation of the set of available applications for a person. General user requirements were collected in a survey with elders.

The large-scale Nursebot project (Baltus et al., 2000; Pollack et al., 2002) is building virtual companions in the form of personal service robots. Multiple functions including monitors, reminders, communicators, guides, and informers have been realised or are planned. The most advanced applications are a reminder for different ADLs and IADLs, and a walking guide. Yet, no formal method for the integration of different functions, and no taxonomy of functions, is suggested. As for adaptivity, isolated solutions are described for the different applications. The reminder relies on personal data, and the walking guide learns a map of the environment. There is no automated activation and deactivation of applications. Applications are directed at a specific target group, such as cognitively impaired users in the case of the reminder. Specification of the reminder's user data is done by a caregiver, but no general specification method is reported.
In summary, the companion systems above are each in certain aspects and to a certain extent multifunctional, adaptive, and realistic. They are not companion creation tools, however. CUSTODIAN (Dewsbury et al., 2001) is a creation tool for designing smart home environments for individual residents. These, on the other hand, offer fewer functions than a companion would, and once in use, the smart homes do not adapt to their elderly users. CUSTODIAN allows the specification and visual simulation of a smart home. One such specification involves choosing one standard template based on the individual's general level of functioning, and then adding extras to it. The authors do not recommend more varied and detailed specification unless it is done by a competent and careful engineer. Consequently, their approach lumps together quite different elderly users, yet for the design of basic smart home architectures this might suffice.
5 Conclusions and Future Work

We described the vision of a virtual companion that can increase the independence and improve the quality of life of a frail elderly user. We discussed problems in the design of advanced assistive technology, and provided a list of three major requirements that helped us define a framework by which companions can be realised. These will be multifunctional, adaptive, and specified in dialogue with the user, caregivers, and experts. The framework is defined as a multiagent system complemented by a systematic specification method, which will enable the group of the user, his or her caregivers, and experts to create their own virtual companions. We have reported on work in progress. Our next objective is to realise an implementation of a specification tool based on the framework. At the first stage, this will be a prototype based on subsets of functions and application areas. Our long-term goal is to obtain a system that will allow a validation of the properties multifunctionality, adaptivity, and realistic specification.
References

Åberg, J., Shahmehri, N., & Maciuszek, D. (2001). User modelling for live help systems. In Proceedings of the Second International Workshop on Electronic Commerce (Welcom ‘01), 164–179, Heidelberg, Germany.
Adlam, T. (2001). The Gloucester Smart House project – A case study. In HomeNet 2001, Brussels, Belgium.
Alm, N., Dye, R., Gowans, G., Campbell, J., Astell, A., & Ellis, M. (2003). Multimedia communication support for people with dementia. In Include 2003, London, UK.
Baltes, M. M. (1996). The Many Faces of Dependency in Old Age. Cambridge, UK: Cambridge University Press.
Baltus, G., Fox, D., Gemperle, F., Goetz, J., Hirsch, T., Magaritis, D., Montemerlo, M., Pineau, J., Roy, N., Schulte, J., & Thrun, S. (2000). Towards personal service robots for the elderly. In Workshop on Interactive Robotics and Entertainment, Pittsburgh, PA, USA.
Bogdanovich, A., Fischer, K., Winterstein, S., & Zinnikus, I. (2003). SAID: Social aid interactive developments – Intelligent agent subsystem. Talk at DFKI, Saarbrücken, Germany, 7 January 2003.
Cedersund, E. & Nilholm, C. (2000). Samtal i äldreomsorgen. Samspelet mellan omsorgspersonal och äldre med Alzheimers sjukdom. (Conversation in care for the elderly. The interplay between care staff and older people with Alzheimer’s disease.) Lund, Sweden: Studentlitteratur. In Swedish.
Craik, F. I. M. (2000). Age-related changes in prospective memory: Empirical findings and theoretical implications. In 1st International Conference on Prospective Memory, Hatfield, UK.
Dewsbury, G., Sommerville, I., Rouncefield, M., & Clarke, K. (2002). Bringing IT into the home: A landscape documentary of assistive technology, smart homes, telecare and telemedicine in the home in relation to dependability and ubiquitous computing. DIRC working paper PA7 1.1 (Lancaster University, UK).
Dewsbury, G., Taylor, B., & Edge, M. (2001). Designing safe smart home systems for vulnerable people. In Dependability and Healthcare Informatics (DIRC) Conference, Edinburgh, UK.
e-pill (2003). e-pill Medication Reminders. Product catalogue.
Elger, G. & Furugren, B. (1998). An ICT and computer-based demonstration home for disabled people. In TIDE 1998 Conference, Helsinki, Finland.
Ferrini, A. F. & Ferrini, R. L. (2000). Health in the Later Years. Boston, MA, USA: McGraw-Hill.
Fleck, M., Frid, M., Kindberg, T., O’Brien-Strain, E., Rajani, R., & Spasojevic, M. (2002). From informing to remembering: Ubiquitous systems in interactive museums. Pervasive Computing, 1(2), 13–21.
fortec, BENEFON, ifADo, IMS, & Stakes (2000). MORE – MObile REscue phone. Reports and demonstration to other interested groups. MORE D11 report.
Gabrel, C. S. (2000). Characteristics of elderly nursing home current residents and discharges: Data from the 1997 National Nursing Home Survey. Advance data from vital and health statistics no. 312. Hyattsville, MD, USA: National Center for Health Statistics.
Giraldo, C., Helal, S., & Mann, W. (2002). mPCA – A mobile patient care-giving assistant for Alzheimer patients. In First International Workshop on Ubiquitous Computing for Cognitive Aids (UbiCog ‘02), Gothenburg, Sweden.
Glogoski, C. & Foti, D. (2001). Special needs of the older adult. In Pedretti, L. W. & Early, M. B., editors, Occupational Therapy: Practice Skills for Physical Dysfunction, fifth edition, Chapter 51, 991–1012. St. Louis, MO, USA: Mosby.
Haigh, K. Z., Geib, C. W., Miller, C. A., Phelps, J., & Wagner, T. (2002a). Agents for recognizing and responding to the behaviour of an elder. In AAAI 2002 Workshop on Automation as Caregiver: The Role of Intelligent Technology in Elder Care, 31–38, Edmonton, Alberta, Canada.
Haigh, K. Z., Phelps, J., & Geib, C. W. (2002b). An open agent architecture for assisting elder independence. In The First International Joint Conference on Autonomous Agents and MultiAgent Systems (AAMAS), 578–586, Bologna, Italy.
Haigh, K. Z. & Yanco, H. A. (2002). Automation as caregiver: A survey of issues and technologies. In AAAI 2002 Workshop on Automation as Caregiver: The Role of Intelligent Technology in Elder Care, 39–53, Edmonton, Alberta, Canada.
Hazan, H. (1992). Managing Change in Old Age. The Control of Meaning in an Institutional Setting. Albany, NY, USA: State University of New York Press.
Herzog, A. & Shahmehri, N. (2001). Towards secure e-services: Risk analysis of a home automation service. In Proceedings of the 6th Nordic Workshop on Secure IT Systems (NordSec), 18–26, Copenhagen, Denmark.
Intille, S. S. (2002). Designing a home of the future. Pervasive Computing, 1(2), 80–86.
Katz, S., Ford, A. B., Moskowitz, R. W., Jackson, B. A., & Jaffe, M. W. (1963). Studies of illness in the aged. The index of activities of daily living. A standardized measure of biological and psychological function. Journal of the American Medical Association, 185, 914–919.
Lawton, M. P. & Brody, E. M. (1969). Assessment of older people: Self-maintaining and instrumental activities of daily living. Gerontologist, 9, 179–185.
Levine, R. E. & Brayley, C. R. (1991). Occupation as a therapeutic medium. A contextual approach to performance intervention. In Christiansen, C. H. & Baum, C. M., editors, Occupational Therapy. Overcoming Human Performance Deficits, first edition, Chapter 22, 590–631. Thorofare, NJ, USA: Slack.
Libin, A. & Cohen-Mansfield, J. (2002). Robotic cat NeCoRo as a therapeutic tool for persons with dementia: A pilot study. In Proceedings of the 8th International Conference on Virtual Systems and Multimedia, Creative Digital Culture, 916–919, Seoul, Korea.
Linköpings kommun (2002). TAXA Hemtjänst – särskilt boende. Avgifter inom äldre- och handikappomsorg. (Rates Home-help service – special accommodation. Fees within care for the elderly and the disabled.) In Swedish.
Liu, K., Manton, K. G., & Aragon, C. (2000). Changes in home care use by disabled elderly persons. The Journals of Gerontology Series B: Psychological and Social Sciences, 55(4), S245–S253.
Mann, W. & Helal, A. (2002). Smart phones for the elders: Boosting the intelligence of smart homes. In AAAI 2002 Workshop on Automation as Caregiver: The Role of Intelligent Technology in Elder Care, 74–79, Edmonton, Alberta, Canada.
Maylor, E. A. (1990). Age and prospective memory. Quarterly Journal of Experimental Psychology, 42A, 471–493.
Mihailidis, A. (2002). Using computerized cognitive devices to increase the independence of people with dementia: Present and future. In Designing for Diversity in Dementia Care, 177–184, Toronto, Ontario, Canada.
Miller, C. A., Dewing, W., Krichbaum, K., Kuiack, S., Rogers, W., & Shafer, S. (2001). Automation as caregiver: The role of advanced technologies in elder care. In Proceedings of the 45th Annual Meeting of the Human Factors and Ergonomics Society, Minneapolis, MN, USA.
Moskovitch, M. (1982). A neuropsychological approach to memory and perception in normal and pathological aging. In Craik, F. I. M. & Trehub, S., editors, Aging and Cognitive Processes, 55–78. New York, NY, USA: Plenum.
Munson, M. L. (1999). Characteristics of elderly home health care users: Data from the 1996 National Home and Hospice Care Survey. Advance data from vital and health statistics no. 309. Hyattsville, MD, USA: National Center for Health Statistics.
Östlund, B. (2002). The deconstruction of a target group for IT-innovations: Elderly users’ technological needs and attitudes towards new IT. Nätverket – Kulturforskning i Uppsala, 11, 77–93.
Pieper, M. & Kobsa, A. (1999). Talking to the ceiling: An interface for bed-ridden manually impaired users. In Altom, M. W. & Williams, M. G., editors, CHI99 Extended Abstracts, Video Demonstrations, 9–10, Pittsburgh, PA, USA.
Pollack, M. E., Brown, L., Colbry, D., Orosz, C., Peintner, B., Ramakrishnan, S., Engberg, S., Matthews, J. T., Dunbar-Jacob, J., McCarthy, C. E., Thrun, S., Montemerlo, M., Pineau, J., & Roy, N. (2002). Pearl: A mobile robotic assistant for the elderly. In AAAI 2002 Workshop on Automation as Caregiver: The Role of Intelligent Technology in Elder Care, 85–92, Edmonton, Alberta, Canada.
Rebok, G. W., Rasmusson, D. X., & Brandt, J. (1996). Prospects for computerized memory training in normal elderly: Effects of practice on explicit and implicit memory tasks. Applied Cognitive Psychology, 10, 211–223.
Rogers, W. A., Meyer, B., Walker, N., & Fisk, A. D. (1998). Functional limitations to daily living tasks in the aged: A focus group analysis. Human Factors, 40(1), 111–125.
Shahmehri, N. (2001). Intelligent systems and the elderly – problems and possibilities. In Conference on Ageing, Care and Welfare of Elderly and how IT can improve Quality of Life, Stockholm, Sweden.
Shekar, S., Nair, P., & Helal, A. (2003). iGrocer – A ubiquitous and pervasive smart grocery shopping system. In Proceedings of the ACM Symposium on Applied Computing (SAC), Melbourne, FL, USA.
Ställdal, E., Caesar, M., & Eriksson, R. (2001). Ageing, elderly care and how IT can be used to enhance the quality of life for the elderly. Report from a Swedish-Japanese scientific seminar held in Stockholm.
Steen, B. (1991). Det nya åldrandet. (The new ageing.) Socialmedicinsk tidskrift, 68(2–3), 117–120. In Swedish.
Willis, S. L. (1996). Everyday problem solving. In Birren, J. E. & Schaie, K. W., editors, Handbook of the Psychology of Aging, fourth edition, Chapter 16, 287–307. San Diego, CA, USA: Academic Press.