User Activity Synthesis in Ambient Intelligence Environments

Nikolaos Georgantas, Valérie Issarny
INRIA UR Rocquencourt, Domaine de Voluceau, 78153 Le Chesnay, France
{Nikolaos.Georgantas,Valerie.Issarny}@inria.fr

In Adjunct Proceedings of the 2nd European Symposium on Ambient Intelligence (EUSAI 2004), November 8-10, 2004, Eindhoven, The Netherlands.

ABSTRACT

Ambient Intelligence (AmI) has opened new perspectives for the enactment of human activities related to accessing information and computation. We present in this paper our approach, based on Web services, towards the dynamic synthesis of user activities within an AmI environment. We introduce a detailed architectural model allowing for precise modeling of environment functionality and of its integration into user activities. We illustrate our modeling approach by applying it to a demanding AmI scenario.

Keywords

Ambient Intelligence, architectural model, service composition, middleware, Web services

INTRODUCTION

Ambient Intelligence (AmI) is systemically realized as a synergistic combination of intelligence-aware human-machine interfaces and ubiquitous computing & networking. Our work described in this paper is part of the effort of the IST Ozone project (http://www.extra.research.philips.com/euprojects/ozone/); within Ozone, we have in our previous work elaborated the WSAMI middleware [2] (http://www-rocq.inria.fr/arles/download/ozone), which supports service provisioning in an AmI environment, building upon the Web services (WSs) paradigm (http://www.w3.org/2002/ws/). A WS is defined as a software entity which: (a) is deployed on the Web; (b) exposes a public interface – comprising a set of operations – described in an XML-based language; and (c) can interact with other software entities using an XML-based protocol over standard Internet transport protocols. A WS is in general stateful; that is, a specific sequence in invoking and/or servicing its operations, i.e., a conversation, must be respected to get a useful result. This very generic definition allows for the interoperation of WSs developed and deployed independently on heterogeneous platforms, which is particularly important in the open, multi-platform AmI computing environment. Building on the generic WS model, the WSAMI middleware establishes the base for the dynamic composition of mobile WSs in AmI environments. Specifically, WSAMI comprises: (i) the WSAMI XML-based language for describing mobile composite WSs; (ii) the naming & discovery service (ND) for dynamically locating requested services; and (iii) a lightweight middleware core broker, developed specifically for enabling WS deployment on wireless, resource-constrained devices.
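The notion of a conversation – an admissible ordering of operation invocations on a stateful WS – can be made concrete with a small sketch. The following Python fragment is purely illustrative and not WSAMI code; the service, its operations and its states are hypothetical stand-ins.

```python
# Illustrative sketch only: a stateful "service" whose operations must be
# invoked in a valid order (a conversation). All names are hypothetical.

class ConversationError(Exception):
    """Raised when an operation is invoked out of conversation order."""


class SlideShowService:
    """Toy stateful service: openShow() must precede nextSlide(); closeShow() ends it."""

    # Conversation constraint: state -> operations permitted in that state.
    _ALLOWED = {
        "idle": {"openShow"},
        "showing": {"nextSlide", "closeShow"},
    }

    def __init__(self, slides):
        self._slides = list(slides)
        self._state = "idle"
        self._index = 0

    def _check(self, operation):
        if operation not in self._ALLOWED[self._state]:
            raise ConversationError(
                f"{operation} not allowed in state {self._state!r}")

    def openShow(self):
        self._check("openShow")
        self._state = "showing"
        self._index = 0

    def nextSlide(self):
        self._check("nextSlide")
        slide = self._slides[self._index % len(self._slides)]
        self._index += 1
        return slide

    def closeShow(self):
        self._check("closeShow")
        self._state = "idle"


if __name__ == "__main__":
    service = SlideShowService(["facade.jpg", "dining-room.jpg"])
    service.openShow()
    print(service.nextSlide())   # respects the conversation
    service.closeShow()
    try:
        service.nextSlide()      # violates the conversation
    except ConversationError as e:
        print("rejected:", e)
```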

Nevertheless, AmI requires further advanced middleware support for the enactment of complex user activities related to accessing information and computation. A user enters an AmI environment carrying a portable device. Networked devices provide the environment’s functionality in the form of services/resources, which are dynamic. Within the environment, the user executes tasks, which are high-level user activities combining access to information and computation. A user task is executed by employing the services/resources of the environment. A scenario of a complex user task is presented in the following; it is an adapted extract from one of the Ozone demonstrator scenarios.

“…The Rocquencourt city offers cybercar transportation; these are automated vehicles that do not require any driver assistance and may be booked via the Internet. Michel waits for his friend Paul at the Rocquencourt tennis club. Michel lives in Rocquencourt, Paul in Paris. They arranged their game 3 days ago. Paul arrives at Rocquencourt by public transportation. He has already reserved a cybercar to take him to the tennis club. While waiting for the cybercar to arrive, he thinks about where they should go for lunch after the game. He uses his PDA to take a quick look at the Rocquencourt City Guide System, which provides a restaurant guide service presenting slide shows of proposed restaurants along with text critics from related periodicals. While he is watching a slide show, the cybercar arrives. He continues his restaurant browsing in the cybercar, benefiting from the cybercar’s large screen and speech recognition facility…”

We present in this paper our approach towards the support of complex user task synthesis within the AmI environment. We elaborate a fine-grained architectural model, which precisely models environment functionality in the form of software components, as well as the integration of this functionality into user tasks. Based on this model, we enhance our WSAMI middleware infrastructure to provide for dynamic, situation-sensitive composition and reconfiguration of user tasks based on the particularity of components.

In the following, the second section details the elaborated architectural model. The third section presents our approach to synthesizing user tasks, building upon the architectural model. To illustrate our approach, we apply it to the presented scenario. Finally, the fourth section concludes, discussing related work.

AN ARCHITECTURAL MODEL FOR THE AmI COMPUTING ENVIRONMENT

We model environment functionality by identifying a number of component classes. Our model incorporates: (i) modeling of the environment’s services; (ii) modeling of the environment’s resources, for example I/O devices; (iii) reasoning on patterns for the composition of services/resources, for example favoring a centralized coordination or a peer-to-peer scheme; (iv) modeling of “client” functionality, that is, functionality accessing the services/resources of the environment; (v) modeling of user interface devices, which capture user interaction; and (vi) modeling of users’ tasks and their composition from all the above functional entities. This model provides the base for managing the different forms of the environment’s functionality, allowing for the guided composition of complex functionality. We detail in the following the component classes of our model.

Service/Resource Components

We model an atomic service or resource of the environment as an elementary service/resource component [7] exposing a public WS interface, as depicted in Figure 1a. We consider I/O devices as a special class of elementary service/resource components, which we call I/O components. A number of elementary service/resource components may be composed to provide composite functionality. Service/resource components of the environment may follow a “passive” behavior model, meaning that they expect to be invoked by a controlling entity wishing to use their functionality. For service/resource components which exhibit “active” behavior comprising outgoing operations, such as a sensor control component, there may be an initialization phase, during which the controlling entity manifests itself to the component as the target of its outgoing behavior by suitable control operations on the component. Alternatively, a more generic behavior model may envisage peer-to-peer interactions among composed components. This requires that components be aware of the composition and contain inherent composition functionality. As we cannot assume this for any component, we adopt a composition scheme based on centralized coordination [5]. However, this centralized coordination scheme does not preclude peer-to-peer interaction among composed service/resource components; such interaction will normally take place under the initialization and control of the coordinating entity.

Composition is realized by employing a coordination component and a computation component, as depicted in Figure 1a. The coordination component encloses the conversation functionality with each service/resource component and is stateful. It is assumed that the coordination component does not itself contain any processing functionality on the data exchanged with the service/resource components. The computation component adds processing functionality to the coordination component and is stateless.

A combination of a coordination and a computation component provides incomplete composite functionality. We call this a generic composite service/resource component. This component exposes two public WS interfaces: (i) the low interface, which connects to the elementary components being composed; and (ii) the high interface, which allows access to the composite functionality. The result of the composition is a composite service/resource component, which exposes a public WS interface, as in the case of an elementary component. This interface is the high interface of the generic composite component.

The presented composition scheme may be applied recursively, meaning that a composite component may participate in another composition, providing a new composite component. This results in the nesting of an arbitrary number of composition levels.
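The composition scheme can be illustrated by the following Python sketch. It is a toy under assumed names, not WSAMI code: the elementary services, the operations and the stateless computation function are all invented. It shows a stateful coordinator that invokes two elementary components through their low-interface bindings, delegates data processing to a stateless function, and exposes the same kind of public interface as an elementary component, so the result could itself take part in a further (recursive) composition.

```python
# Toy sketch of the centralized coordination composition; all names are invented.

class ElementaryComponent:
    """An elementary service/resource component exposing one public interface."""
    def invoke(self, operation, **args):
        raise NotImplementedError


class SlideShowSite(ElementaryComponent):
    def invoke(self, operation, **args):
        if operation == "getSlides":
            return [f"{args['restaurant']}-slide-{i}.jpg" for i in (1, 2)]
        raise ValueError(operation)


class CriticsSite(ElementaryComponent):
    def invoke(self, operation, **args):
        if operation == "getCritics":
            return [f"Review of {args['restaurant']} from a local periodical."]
        raise ValueError(operation)


def merge_guide_entry(slides, critics):
    """Stateless computation component: pure processing of the exchanged data."""
    return {"slides": slides, "critics": critics}


class RestaurantGuideComposite(ElementaryComponent):
    """Generic composite component bound to its low-interface partners.

    The coordination part is stateful (it remembers the selected restaurant) but
    does no data processing itself; processing is delegated to the stateless
    computation function. The high interface is again a plain invoke() interface,
    so the composite can participate in another composition (recursion).
    """

    def __init__(self, slide_show, critics):
        self._slide_show = slide_show      # low-interface bindings
        self._critics = critics
        self._selected = None              # conversation state

    def invoke(self, operation, **args):   # high interface
        if operation == "selectRestaurant":
            self._selected = args["name"]
            return self._selected
        if operation == "getGuideEntry":
            if self._selected is None:
                raise RuntimeError("selectRestaurant must be invoked first")
            slides = self._slide_show.invoke("getSlides", restaurant=self._selected)
            critics = self._critics.invoke("getCritics", restaurant=self._selected)
            return merge_guide_entry(slides, critics)
        raise ValueError(operation)


if __name__ == "__main__":
    guide = RestaurantGuideComposite(SlideShowSite(), CriticsSite())
    guide.invoke("selectRestaurant", name="Chez Michel")
    print(guide.invoke("getGuideEntry"))
```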

Tasks

To provide complete user task functionality, end use functionality, that is, client application functionality, shall be added to the components we have introduced so far. End use functionality is provided by an end use coordination component, a computation component and a user interface component, as depicted in Figure 1b. This end use functionality may either connect to a single elementary/composite component or compose a number of elementary/composite components. The result of adding end use functionality to service/resource components is a task.
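A minimal sketch of this task structure, with invented names and a stub service standing in for an elementary/composite component, might look as follows; it is an illustration of the model, not an implementation from the paper.

```python
# Illustrative toy of the task model of Figure 1b; all names are invented.

class StubGuideService:
    """Stands in for an elementary/composite service/resource component."""
    def invoke(self, operation, **args):
        if operation == "getGuideEntry":
            return {"slides": ["slide-1.jpg"], "critics": ["A glowing review."]}
        raise ValueError(operation)


class ConsoleUIBackEnd:
    """Task-specific UI back-end; a generic UI front-end would sit in front of it."""
    def render(self, text):
        print(text)


def format_entry(entry):
    """Stateless computation component: pure formatting of the exchanged data."""
    return "\n".join(entry["slides"] + entry["critics"])


class GenericTaskComponent:
    """End use coordination (stateful) + computation + UI back-end, still unbound."""
    def __init__(self, ui_backend):
        self.ui_backend = ui_backend
        self.service = None          # low-interface placeholder, bound later

    def bind_service(self, service):
        self.service = service       # adding service/resource components yields a task

    def browse_restaurant(self):
        if self.service is None:
            raise RuntimeError("task is not yet bound to a guide service")
        entry = self.service.invoke("getGuideEntry")
        self.ui_backend.render(format_entry(entry))


if __name__ == "__main__":
    task = GenericTaskComponent(ConsoleUIBackEnd())
    task.bind_service(StubGuideService())
    task.browse_restaurant()
```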

The user interface (UI) component interacts with the user. We decompose the UI component into a UI front-end component and a UI back-end component. The UI front-end component is an I/O component and contains generic functionality, while the UI back-end component contains task-specific functionality. We give the following examples to illustrate this distinction:


• A graphical UI is developed as a Web application. The task-specific UI back-end is based on technologies like JSP/servlets, while the UI front-end is a generic Web browser. The two components communicate via HTML/HTTP.

• A speech recognition UI comprises a generic speech recognition front-end turning speech into text, and a task-specific back-end interpreting text into commands meaningful to the task. The two components communicate via WS interfaces.

To simplify the task-specific UI back-end component, we consider it stateless, just following the states of the end use coordination component. However, the UI front-end component is a generic I/O component, which may be stateful.
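The front-end/back-end split of the speech example can be sketched as follows. This is a hypothetical illustration in Python (the actual components are realized as Web services): a generic "speech recognition" front-end, faked here with canned text, and a task-specific back-end mapping recognized text to task commands.

```python
# Hypothetical illustration of the UI front-end / back-end split; not real WSAMI code.

class FakeSpeechFrontEnd:
    """Generic I/O component: turns 'speech' into text. Faked with canned utterances."""
    def __init__(self, utterances):
        self._utterances = list(utterances)

    def recognize(self):
        return self._utterances.pop(0) if self._utterances else ""


class SlideShowCommandBackEnd:
    """Task-specific, stateless UI back-end: maps recognized text to task commands."""
    _COMMANDS = {"next": "nextSlide", "back": "previousSlide", "critics": "showCritics"}

    def interpret(self, text):
        return self._COMMANDS.get(text.strip(" !").lower())


if __name__ == "__main__":
    front_end = FakeSpeechFrontEnd(["Next!", "critics", "order pizza"])
    back_end = SlideShowCommandBackEnd()
    for _ in range(3):
        text = front_end.recognize()
        command = back_end.interpret(text)
        print(f"{text!r} -> {command}")   # unknown utterances map to None
```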

A combination of an end use coordination, a computation and a UI back-end component provides incomplete task functionality. We call this a generic task component. This component exposes: (i) a public WS low interface that connects to elementary or composite service/resource components; and (ii) a public low interface that connects to the UI front-end component, which may be a WS interface, as in the example of the speech recognition UI, or another special interface, such as the HTML/HTTP interface of the Web browser example.

We now provide for the case in which user interaction may come from more than one source. For example, a user executing a task from his/her PDA may use, for part of the task execution, exclusively or in parallel, another user interface device: a traditional one such as a power-plugged workstation, or an advanced one such as a large interactive wall screen. We model this capability by allowing user interaction to be distributed over more than one UI component.

Figure 1a-b. Elementary/composite service/resource component; task model

SYNTHESIZING USER ACTIVITIES

We exploit the elaborated architectural model to provide for dynamic, situation-sensitive composition and reconfiguration of tasks, specializing according to the particularity of the integrated components and following policies defined by the system designer. To this end, we enhance the WSAMI middleware with an additional middleware service, called the task synthesis service (TS). To illustrate our approach within the concrete context of Ozone, we assume that a generic task component always resides on the user’s device, e.g., a PDA. Every task deployed on the PDA is registered statically with the local TS. An example of differentiation between components is that, for a specific task deployed on the PDA, TS may maintain a pre-defined default partial configuration that binds – when applicable – the task’s I/O and UI component placeholders to local – always available – I/O and UI components.

To use any task on the PDA, the user launches TS, which provides a list of the available tasks. When the user selects a specific task, TS uses ND to retrieve the required service/resource components. TS sets up the default partial configuration if no preferable remote I/O and UI components are available. Furthermore, it retrieves the rest of the required service/resource components and completes the composition. Selection among several available components may be based on machine-oriented semantic information [4] about a component; the user may also be asked to guide the selection, aided by human-oriented component semantics. The policy to be applied depends on the particularity of the components, e.g., TS may select I/O and UI components directly and request the user to select service components. While a specific task is being executed, TS periodically checks for the availability of new service/resource components by querying ND to see whether a new ND node has been discovered. If this is the case, TS uses ND to retrieve all the suitable service/resource components residing on the new node. Based on the differentiation of components and the defined policy, TS may decide whether a reconfiguration shall be performed automatically or after asking the user. For example, new powerful I/O or UI components may be directly employed.
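The selection and reconfiguration behavior just described can be approximated by the sketch below. The ND query interface, the policy encoding and the component records are simplified assumptions for illustration only; they are not the actual WSAMI or TS APIs.

```python
# Simplified sketch of task synthesis service (TS) behavior; the ND interface,
# the policy and the component records are assumptions, not WSAMI APIs.

LOCAL_DEFAULTS = [  # pre-defined default partial configuration on the PDA
    {"kind": "io", "role": "imageViewer", "name": "pda-image-viewer", "score": 1},
    {"kind": "io", "role": "textViewer", "name": "pda-text-viewer", "score": 1},
]


class FakeND:
    """Stands in for the WSAMI naming & discovery service."""
    def __init__(self):
        self._components = []

    def advertise(self, component):
        self._components.append(component)

    def lookup(self, role):
        return [c for c in self._components if c["role"] == role]


def synthesize(nd, required_roles, ask_user):
    """Build a task configuration: bind I/O and UI components automatically
    (best score wins), ask the user to choose among candidate services."""
    config = {c["role"]: c for c in LOCAL_DEFAULTS}
    for role, kind in required_roles.items():
        candidates = nd.lookup(role)
        if not candidates:
            continue
        if kind == "service" and len(candidates) > 1:
            chosen = ask_user(role, candidates)      # policy: user selects services
        else:
            chosen = max(candidates, key=lambda c: c["score"])
        if role not in config or chosen["score"] > config[role]["score"]:
            config[role] = chosen                    # possibly replacing a default
    return config


if __name__ == "__main__":
    nd = FakeND()
    nd.advertise({"kind": "service", "role": "critics", "name": "critics-A", "score": 1})
    nd.advertise({"kind": "service", "role": "critics", "name": "critics-B", "score": 1})
    roles = {"critics": "service", "imageViewer": "io", "textViewer": "io"}
    config = synthesize(nd, roles, ask_user=lambda role, cs: cs[0])
    print({role: c["name"] for role, c in config.items()})

    # A new ND node (e.g., the cybercar) is discovered: re-running synthesis lets
    # higher-scored I/O components replace the PDA defaults automatically.
    nd.advertise({"kind": "io", "role": "imageViewer", "name": "cybercar-screen", "score": 5})
    config = synthesize(nd, roles, ask_user=lambda role, cs: cs[0])
    print({role: c["name"] for role, c in config.items()})
```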

In the following, we apply the introduced task synthesis mechanism to the scenario presented in the first section.

A Closer Look at the Scenario

The restaurant guide service is a composite service component integrating two elementary service components, which enclose two distinct information services: a restaurant slide show site and a restaurant critics site. The task “accessing a restaurant guide service”, that is, a client application to the restaurant guide service, resides on the PDA and offers a classical WIMP user interface, which is a combined UI front/back-end component. The user may access the service and browse through the proposed restaurants. After selecting a specific restaurant, the user may navigate through the slide show and view the text critics by means of the PDA’s image viewer and text viewer, launched by the application. The two viewers are output components composed locally by the task. Further, the task comes along with a speech command interpretation capability – a UI back-end component – allowing the user to navigate through the slide show and view the text critics by using simple commands. This capability provides an alternative to the WIMP interface for the part of the application concerning slide and text navigation. However, there is no speech recognition unit – a UI front-end component – on the PDA, as this would require rich resources. The existing command interpretation unit may be connected to an external speech recognition unit via a WS interface. Since the speech recognition unit is not available locally, the user uses the WIMP interface to navigate through slides and text on the PDA.

To use the task “accessing a restaurant guide service” on the PDA while waiting for the cybercar, Paul launches TS and selects the specific task from the list of available tasks. As no other I/O and UI components are available, TS sets up the default configuration integrating the local image viewer and text viewer; the speech control capability will not be used. Furthermore, TS retrieves the remote restaurant guide, restaurant slide show and restaurant critics services and completes the composition; Paul is asked to select between two available restaurant critics services. When Paul gets into the cybercar, TS perceives the availability of new service/resource components. It retrieves the cybercar’s image viewer, text viewer and speech recognition unit and integrates them automatically; the first two replace the PDA’s viewers in the task’s configuration. Reconfiguration is done dynamically, and access to the restaurant guide service is not interrupted. If in the middle of a slide show, Paul may use a speech command like “next!” to ask for the next slide to be displayed on the cybercar’s large screen. Paul must still use the PDA’s WIMP interface to select another restaurant.
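The point that reconfiguration does not interrupt the task can be made concrete with a short sketch: the stateful end use coordination keeps its conversation state (here, the current slide index) while only the output binding is swapped. The classes and names below are invented for illustration.

```python
# Invented illustration: task state survives an I/O rebinding during reconfiguration.

class Viewer:
    def __init__(self, name):
        self.name = name

    def display(self, slide):
        print(f"[{self.name}] showing {slide}")


class SlideShowTask:
    """Stateful end use coordination: remembers the current slide across rebinding."""
    def __init__(self, slides, viewer):
        self._slides = slides
        self._viewer = viewer
        self._index = 0          # conversation state kept across reconfigurations

    def next_slide(self):
        slide = self._slides[self._index % len(self._slides)]
        self._index += 1
        self._viewer.display(slide)

    def rebind_viewer(self, viewer):
        """Dynamic reconfiguration: only the output component binding changes."""
        self._viewer = viewer


if __name__ == "__main__":
    task = SlideShowTask(["slide-1", "slide-2", "slide-3"], Viewer("PDA image viewer"))
    task.next_slide()                                  # shown on the PDA
    task.rebind_viewer(Viewer("cybercar large screen"))
    task.next_slide()                                  # "next!" now lands on the cybercar screen
```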

CONCLUSION

A number of research efforts have focused on the dynamic composition of functionality in mobile computing environments. In project Aura [1], an approach similar to ours has been taken. Tasks are represented as abstract service coalitions; abstract services are implemented by wrapping the environment’s existing applications to conform to Aura APIs. Diverse interaction mechanisms among component services are encapsulated into connectors. Service abstraction in Aura follows a proprietary model, while our approach builds upon the widely accepted WS standard. On the other hand, WSs define a specific interaction mechanism based on standard protocols, which is restrictive compared to Aura connectors. However, component interoperability is thus directly enabled, while the pervasiveness of the Web and the minimal requirements that WSs pose upon software components guarantee the availability of rich conforming functionality in any environment. Further, our detailed component model, identifying the different classes of functionality within the environment, allows for the guided composition of complex functionality.

In project Gaia [3], integration of the environment’s resources into an application is handled by a system infrastructure encompassing conventional operating system functions; thus, a traditional application may be partitioned onto different devices and be dynamically reconfigured while executing. Gaia builds upon distributed CORBA objects and introduces an application partitioning model that extends the traditional Model-View-Controller model [6]. Gaia’s reliance on distributed objects implies rather strong coupling among components, which is undesirable for highly dynamic environments. Gaia’s application partitioning model resembles our component model; however, our concept of a task is more general than the concept of an application: a task is a user activity that may span several applications.

Building upon the well-defined Ozone WSAMI middleware, we have elaborated a task synthesis middleware service, which provides for dynamic, situation-sensitive composition and reconfiguration of user tasks, specializing according to the particularity of the integrated components. This work has been based on a fine-grained component model, which precisely models functionality within an AmI environment.

REFERENCES

1. Joao Pedro Sousa and David Garlan. Aura: an Architectural Framework for User Mobility in Ubiquitous Computing Environments. In Proceedings of the Working IEEE/IFIP Conference on Software Architecture, Montreal, August 25-31, 2002.
2. Valerie Issarny, Daniele Sacchetti, Ferda Tartanoglu, Francoise Sailhan, Rafik Chibout, Nicole Levy, and Angel Talamona. Developing Ambient Intelligence Systems: A Solution based on Web Services. In Journal of Automated Software Engineering, 2004. To appear.
3. M. Roman, C. K. Hess, R. Cerqueira, K. Nahrstedt, and R. H. Campbell. Gaia: A Middleware Infrastructure to Enable Active Spaces. Technical Report UIUCDCS-R-2002-2265, UILU-ENG-2002-1709, University of Illinois at Urbana-Champaign, February 2002.
4. T. Berners-Lee, J. Hendler, and O. Lassila. The Semantic Web. Scientific American, May 2001.
5. B. Benatallah, M. Dumas, M. C. Fauvet, and F. A. Rabhi. Towards Patterns of Web Services Composition. In S. Gorlatch and F. Rabhi (Eds.), Patterns and Skeletons for Parallel and Distributed Computing. Springer Verlag (UK), 2002.
6. G. E. Krasner and S. T. Pope. A Description of the Model-View-Controller User Interface Paradigm in the Smalltalk-80 System. ParcPlace Systems, Inc., Mountain View, 1988.
7. G. Borriello and R. Want. Embedded computation meets the World Wide Web. In Communications of the ACM, 43(5), 2000.
