Adaptive Interfaces for Supportive Ambient Intelligence Environments

Julio Abascal, Isabel Fernández de Castro, Alberto Lafuente, and Jesus Maria Cia

Laboratory of Human-Computer Interaction for Special Needs
Informatika Fakultatea, University of the Basque Country-Euskal Herriko Unibertsitatea
Manuel Lardizabal 1, 20018 Donostia, Spain
{julio.abascal,isabel.fernandez,alberto.lafuente}@ehu.es
[email protected]
Abstract. The Ambient Intelligence paradigm offers an excellent way to define Ambient Assisted Living systems for all kinds of users. In particular, people with physical, sensory or cognitive restrictions are expected to benefit from the support of intelligent environments. Nevertheless, the huge diversity of users' characteristics makes it very difficult to develop interfaces that are adequate for all of them to interact successfully with the environment. Even if a "Design for All" approach is assumed and adaptive interfaces are adopted, it is almost impossible to fulfill the diverse, and frequently contradictory, requirements of the different users. This paper presents an experience of designing adaptive interfaces oriented to the needs of elderly people living in an intelligent environment. These interfaces are integrated into an architecture intended to build complex Ambient Intelligence environments that share resources –mainly hardware and heterogeneous networks– and knowledge.

Keywords: Adaptive Human-Environment Interfaces for Elderly People, Ambient Intelligence, Ambient Assisted Living.
K. Miesenberger et al. (Eds.): ICCHP 2008, LNCS 5105, pp. 30–37, 2008. © Springer-Verlag Berlin Heidelberg 2008

1 Introduction

The fast development of Ambient Intelligence in recent years has enabled the emergence of the Ambient Assisted Living concept, focused on environments that proactively support their users. However, despite the numerous attractive scenarios that have been devised and carefully described, the design of the interface between the user and the system is still unclear. Most authors agree on the need to direct the work towards so-called "natural interfaces". In most cases this means that the user will speak to the system. Therefore the system has to be able to understand spoken natural language and probably to perform some gesture recognition. The environment should communicate with the user mainly by means of voice messages, even if some complex information may require written messages displayed on large mural screens or on small displays built into wearable devices (watches, glasses, mobile telephones, etc.). Nevertheless, something which is "natural" for many users may be "banned" for others. For instance, some people with sensory, motor or cognitive restrictions can experience difficulties uttering sentences that the system can understand. In addition, they may have varying degrees of difficulty hearing, reading or understanding the messages coming from the environment. Therefore, they require different ways to interact with the system.

Among people who can experience restrictions, elderly people constitute a collective with enormous diversity. With the passing of time, people tend to experience physical, sensory or cognitive restrictions and, in some cases, they may come to be included in the collective of people with disabilities. Nevertheless, a disability-oriented approach is not adequate for elderly people. Therefore, the design of the user-system interface for elderly people living in Ambient Assisted environments requires a specific approach that takes into account their special physical and cognitive features.

This paper presents an architecture to build adaptive interfaces to support elderly people living in intelligent environments. As part of this architecture, intelligent agents interact with the users and share knowledge with other intelligent applications by means of a novel Intelligent Services Interface level. The proposal has been tested in the PIAPNE project, which is briefly described.
2 Requirements of the Interfaces for Elderly People

As an extension of the requirements described in [1], in order to interact appropriately with elderly people an interface should be Efficient, Supportive, Rehabilitative, Adaptive, Non-disruptive and Ethically Aware:

• Efficient: It must be able both to collect requests from users and to communicate the necessary information in an effective way.
• Supportive: Users must be empowered by the interface. The system must sustain their ability to carry out by themselves any tasks they desire and can do. In addition, the system must help them manage the tasks they cannot do or are not interested in.
• Rehabilitative: When possible, the system must stimulate users to recover functions and abilities that are restricted, and must maintain those that are active.
• Adaptive: The communication must be adapted to the modalities the user can handle and understand (taking into account that users may have physical, sensory or cognitive restrictions impeding the use of some standard input/output devices or modalities [2]).
• Non-disruptive: Explicit communication with the user must be initiated by the system only when strictly necessary, avoiding disturbing the execution of other tasks, unless urgent safety-related actions are required.
• Ethically aware: The system cannot take decisions on behalf of the user without his or her consent. Sensitive information about the user (such as health habits, behavior patterns, etc.) must be encrypted during transmission and storage, and erased when it is no longer necessary. In addition, "informed consent" is required from people who are being monitored or tracked [3].
Some of these requirements are not technological. Nevertheless, they cannot be ignored in the design. Systems developed without taking into account psychological, social or ethical "side effects" may have a considerable impact on the autonomy, privacy or way of life of their users. Frequently these effects are deeply rooted in the conceptual design of the system and cannot be removed without a complete redesign.
3 Adaptive User Interfaces for Elderly People

Adaptive interfaces are able to modify their appearance and behavior in order to adjust themselves to the user's current needs and interests. Knowledge of the current values of a number of selected parameters that characterize the user and the interaction context can be used to optimize the communication. These parameters are usually grouped in models and stereotypes that allow reasoning about the desirable functionality of the interface for each situation. Therefore, adaptive interfaces must include mechanisms to make assumptions about the current status of several observable and relevant parameters. For elderly people living in an intelligent environment, we have designed three models: a User Model, a Task Model and a Context Model. The objective is to determine how the system has to react when a specific user is in a particular place doing a specific task.

The User Model records physical, sensory and cognitive characteristics of the user, such as Hearing (normal, hard of hearing, deaf), Vision (normal, low vision, blind), Reaction Capacity (slow, medium, fast), Movement Precision (low, high), Speech (normal, dysarthric), Orientation Capacity (low, high), etc. In addition, user profiles and stereotypes can be defined for the meaningful combinations of these parameter values. Changes in some of these parameters are extremely slow (e.g. hearing or vision capacity), while other parameters can change suddenly (e.g. reaction time and movement precision may decay very quickly due to fatigue, whilst they may improve due to interest or motivation). Other parameters vary smoothly with the learning process (e.g. orientation capacity). Therefore, an adaptive interface continuously estimates these parameters and compares them with the user profiles.
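The paper does not prescribe a concrete representation for these parameters; as a minimal illustrative sketch (the class, parameter names and stereotype definitions below are hypothetical, not taken from the implementation), they could be encoded and matched against stereotypes as follows:

```python
from dataclasses import dataclass

# Hypothetical sketch: the paper does not prescribe this representation.
@dataclass
class UserModel:
    hearing: str             # "normal", "hard_of_hearing", "deaf"
    vision: str              # "normal", "low_vision", "blind"
    reaction: str            # "slow", "medium", "fast"
    movement_precision: str  # "low", "high"
    speech: str              # "normal", "dysarthric"
    orientation: str         # "low", "high"

# Stereotypes: meaningful combinations of parameter values (partial profiles).
STEREOTYPES = {
    "low_vision_elder": {"vision": "low_vision", "reaction": "slow"},
    "hard_of_hearing_elder": {"hearing": "hard_of_hearing"},
}

def matching_stereotypes(user: UserModel) -> list[str]:
    """Return the stereotypes whose constraints the current estimates satisfy."""
    return [name for name, constraints in STEREOTYPES.items()
            if all(getattr(user, key) == value for key, value in constraints.items())]
```

As the interface re-estimates the fast-changing parameters (reaction time, movement precision), the set of matching stereotypes changes with them, which is what drives the adaptation.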
In this way it can make assumptions about the current state of the user and decide the modality and the type of devices needed to communicate with him or her.

The Task Model records the set of well-identified and described tasks that can be performed (e.g. eating, sleeping, using the toilet, etc.) and the restrictions (of time, place, etc.) for each one. In order to recognize whether a user is performing a specific task, tasks are characterized by patterns of values collected by sensors. When the system has detected which task is currently being performed by the user, it can verify the presence of potential safety risks. In addition, the interface can make assumptions that allow it to disambiguate the user's requests and commands. Task modeling also eases the communication with the user for everyday support: when the user is performing a particular task, only the possible actions related to that task are offered to the user.

The Context Model is built as a topological map, similar to the ones used by mobile robots. With the data received from the location system, this model determines
exactly where in the map the user is. Taking into account the restrictions for users, places and times, the intelligent application can verify whether a task is being performed in an appropriate place, or whether unexpected activities are happening in a particular position. From the point of view of the interface, the location of the user allows the system to select the particular input/output devices that must be used for communication. In addition, it helps to disambiguate location-dependent user requests such as "Switch on the lights".

3.1 User Interface for Elderly People Support

For our purpose we consider semi-autonomous elderly people living in a flat provided with sufficient infrastructure, such as diverse types of networks, sensors and processors. The interface must be clear and supportive. It should help elderly people to carry out everyday tasks, reminding them of times, places and procedures. In addition, it should be able to easily understand and carry out users' commands. The mode of interaction depends on the user's sensory and cognitive characteristics. The adaptive interface provides coherent multimedia messages that can be easily understood by the users. In the current implementation, voice messages and text messages (appearing on a TV screen) are being used and tested. The user requests services or issues commands by means of voice orders or a TV remote control. Users wear a discreet tag in order to be located and to have their body position monitored, so that falls, strokes, etc. can be detected. In addition, a network of heterogeneous sensors provides environment information such as temperature, humidity, and the status of the lights, gas, water, etc.

This scenario can be extended to one where elderly people with dementia live in a residential institution. In this case, the user interface must be very simple, since the tasks they can perform autonomously are reduced. However, care personnel require advanced interfaces to monitor users' safety and health.
4 Sharing Information between the Environment and the Interface

The technological support for Ambient Assisted Living –to help elderly people– may be provided by Ambient Intelligence systems through ubiquitous, distributed and wearable computing. Therefore, it is necessary to build a technological infrastructure that allows the design of assistive environments able to supply coherent assistance to the users. Our approach to Ambient Assisted Living is based on the specification of different intercommunicating levels. This architecture allows the interconnection of diverse hardware elements through heterogeneous wired and wireless networks. A middleware level provides interoperability functions and solves presentation and network compatibility issues [4]. In this architecture, diverse adaptable user interfaces share the infrastructure and interact with the other levels through the Intelligent Services Interface, in order to handle user input/output devices, networks and applications, for communication and control.
Table 1. Use of the models by applications and interfaces
              Intelligent Applications                     Adaptive User Interfaces

User Model    - Characterization of the user: likes,       - Modality for communication
                capacities, permissions, restrictions...   - User interface device selection

Task Model    - Task requirements for safety verification  - Reduction of the choices for
                (Is the user allowed to perform this         message production
                task?)                                     - Requests' disambiguation
              - Task performing assistance

Context Model - User location (Where is the user?)         - Reduction of the choices for
              - Spatial requirements for safety              message production
                verification (Is the task done in the      - Requests' spatial disambiguation
                appropriate place? Is the user allowed     - User guidance
                in this place [at this time]?)
              - Guidance of smart wheelchair
It cannot be forgotten that in this kind of environment multiple intelligent applications run in parallel to provide support to the user. Many of these applications need to handle models of the user, the task or the environment (see Table 1). Nevertheless, maintaining separate models can waste processing effort and become a source of inconsistency. Therefore, in our approach the models are shared among the diverse intelligent applications, including the adaptive interface.

Fig. 1. The proposed architecture: user interfaces 1..n and intelligent applications 1..m interact through the Intelligent Services Interface, which runs on a middleware layer over heterogeneous networks 1..k and their hardware
Taking into account the functions and behavior of adaptive interfaces in the proposed architecture, each interface instance is conceived as an intelligent application. All applications communicate and exchange or share information through a level called the Intelligent Services Interface (Figure 1), which provides modeling information to the intelligent applications.
In this way, interfaces may be designed as proactive intelligent agents able to interact with other intelligent applications and to mediate between the user and the system. This approach greatly simplifies the design of the interface, and allows the inclusion of new agents with new capabilities.
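The paper does not detail the API of the Intelligent Services Interface; one simplified, hypothetical way to realize such a shared model level with change notification would be:

```python
from collections import defaultdict
from typing import Any, Callable

class IntelligentServicesInterface:
    """Hypothetical sketch: a shared store for the User, Task and Context
    models, with change notification so every intelligent application
    (including the adaptive interface) works on one consistent copy."""

    def __init__(self) -> None:
        self._models: dict[str, dict[str, Any]] = {"user": {}, "task": {}, "context": {}}
        self._subscribers: dict[str, list[Callable[[dict[str, Any]], None]]] = defaultdict(list)

    def get(self, model: str) -> dict[str, Any]:
        # Return a copy so callers cannot corrupt the shared model.
        return dict(self._models[model])

    def update(self, model: str, **changes: Any) -> None:
        # Apply changes and notify every subscribed application.
        self._models[model].update(changes)
        for callback in self._subscribers[model]:
            callback(self.get(model))

    def subscribe(self, model: str, callback: Callable[[dict[str, Any]], None]) -> None:
        self._subscribers[model].append(callback)
```

Under this design, an interface agent could subscribe to the user model and adjust its modality whenever another application updates an estimate, avoiding the duplicated, possibly inconsistent models discussed above.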
5 The PIAPNE Environment

To test this approach, the PIAPNE intelligent environment recreates a smart house where elderly people (possibly with physical, sensory or cognitive restrictions) receive intelligent environment support to maintain an independent way of life. The system is adaptive and context sensitive. In order to provide precise people-location information, the project used a room where an accurate radio-frequency-based location system was deployed1. The location system collects the position of people wearing a discreet tag with 30 cm accuracy and sends these data to the application every 10 seconds. A snapshot of the user location application can be seen in Figure 2.
Fig. 2. User location application
Diverse intelligent agents supervise users' activities, relating them to the place where they are being carried out and detecting risky situations. In this way, most dangers can be avoided and the user (or another responsible person) can be warned. In addition, the user can receive help to perform everyday tasks, either when the system detects a need for assistance or at the user's request.
1 This location system was developed and tested by the Assistive Technology Research Group of the University of Zaragoza [5].
By means of a stimulus/reaction mechanism, agents are linked to the current status of the environment. In this way, their collective performance produces a kind of intelligent behavior. The domain requires the following features: immediate answers for real-time performance, emergent activation to trigger alarms, and modulation by the allocation of specific tasks to each agent. The system has a mixed architecture combining an action-reaction rule architecture with a subsumption architecture that allows the activation or inhibition of agents by other agents at a higher level in the hierarchy.

The intelligent agents are implemented in JESS. Each of them has a set of action-reaction rules. The knowledge base is composed of facts containing information collected from the environment by a network of sensors. In addition, it contains the current and previous tasks and the users' characteristics. The PIAPNE environment is described in more detail in [1]. See below a fragment of the "cooking lunch" task recognizer written in JESS.

Fragment of the "cooking lunch" task recognizer written in JESS

;********************************************************
; TASK: COOKING
;********************************************************
(defrule TASK-RECOGNIZER: cooking-lunch
  ?imp
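Outside JESS, the combination of action-reaction rules with subsumption-style inhibition described above can be sketched as follows (a hypothetical simplification for illustration, not the PIAPNE implementation):

```python
# Hypothetical sketch: action-reaction agents with subsumption-style
# inhibition, where a higher-level agent can suppress lower-level ones.
class Agent:
    def __init__(self, name, level, condition, action):
        self.name = name
        self.level = level          # higher-level agents may inhibit lower ones
        self.condition = condition  # predicate over the fact base (stimulus)
        self.action = action        # reaction fired when the condition holds
        self.inhibited = False

def run_cycle(agents, facts):
    """One stimulus/reaction cycle: higher-level agents fire first and may
    inhibit lower-level agents before those get a chance to react."""
    fired = []
    for agent in sorted(agents, key=lambda a: -a.level):
        if not agent.inhibited and agent.condition(facts):
            fired.append(agent.name)
            agent.action(facts, agents)
    return fired

def inhibit(agents, name):
    """Subsumption-style suppression of a named agent for this cycle."""
    for agent in agents:
        if agent.name == name:
            agent.inhibited = True
```

For example, a high-level alarm agent triggered by a smoke fact could inhibit the everyday-assistance agent, so that only the alarm reaction reaches the user in that cycle.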