Ubiquitous Application Model

Manuel Román and Roy H. Campbell
{mroman1, rhc}@cs.uiuc.edu
Department of Computer Science, University of Illinois at Urbana-Champaign
1304 West Springfield Av., Urbana, IL 61801

Abstract

Advances in network communications and embedded technologies, together with the increasing popularity of personal digital assistants and cell phones, are turning Ubiquitous Computing into a reality. The high availability of data and services promotes a new computational environment that goes beyond the physical limits of personal computers. This paper presents a generic model for ubiquitous applications, which enables the construction of applications that span the limits of individual devices and are not tied to particular scenarios. The model takes into account issues such as user mobility, frequent changes in the execution environment, the existence of an Active Space infrastructure, and the use of digital and physical context as a valuable input source.

Key Words: Ubiquitous Computing, application model, mobility, adaptability.
1. Introduction

We envision a society in which mobile users access computational resources and services through hundreds of devices embedded in the physical environment, as well as through wearable and handheld computing devices. This vision is based on the computational model defined by ubiquitous computing [], which proposes bringing the digital world into the physical one. However, besides all the anticipated benefits, ubiquitous computing raises many questions and challenges that must be addressed in order to take full advantage of this new computational paradigm. One of the key questions is what ubiquitous computing enables, and how we (developers and users) can benefit from it. Ubiquitous computing scenarios are characterized by the coexistence of devices with different hardware properties (e.g. CPU speed, memory availability, and network communication technologies) and heterogeneous software services and infrastructures (e.g. middleware protocols and operating systems). This environment differs greatly from traditional desktop scenarios and complicates the process of creating applications. One of the most important issues is that the assumptions underlying traditional desktop applications (e.g. default availability of keyboard, monitor and mouse, and no application mobility or adaptability) no longer apply. There is an important research effort in the area of Ubiquitous Computing, especially in understanding how to orchestrate large collections of heterogeneous devices associated with particular spaces (e.g. offices, houses and classrooms). We propose an application model for ubiquitous applications that takes into account ubiquitous computing requirements,
standardizes the process of developing ubiquitous applications, and leverages existing software infrastructures for ubiquitous computing [Stanford] [MIT] [Gaia OS]. These applications are not customized for particular scenarios; instead, they can adapt to different environments according to the specific properties of those environments. The application model is based on the Model-View-Controller pattern [] and customizes it to the specific requirements of ubiquitous computing. The new application model proposes not only a new way of developing applications, but also a new way of using them. The paper is organized as follows: section 2 describes the proposed ubiquitous application model. Section 3 presents an example of a ubiquitous application we have already built based on the model and our Active Spaces software infrastructure. Section 4 lists the main properties of ubiquitous applications. Related work is described in section 5, section 6 concludes the paper, and section 7 presents our future work.
2. Ubiquitous Application Model

This section presents our proposed ubiquitous application model, which provides a standard mechanism for building applications for ubiquitous computing scenarios. The application model is based on the traditional Model-View-Controller paradigm, extending and adapting it to Ubiquitous Computing scenarios.
2.1 The Traditional Model-View-Controller

MVC [1] defines a modular scheme that models the behavior of any interactive application by clearly encapsulating (1) the model of the application domain (model), (2) the visualization of the model (view), and (3) the mechanisms to interact with the model (controller). This modular architecture simplifies the task of modifying and extending the application, as well as reusing specific components. A model has one or more attached views, which are responsible for displaying the data in some particular way. The benefit of this separation between model and view is that the same model can be rendered in different ways. The model explicitly knows about its assigned views and is responsible for updating them whenever a change in the model's state is detected. The remaining element is the controller, which allows users to interact with the application model. A controller stores a model-view pair as well as a reference to an input sensor (e.g. mouse, keyboard or pen), which captures input events generated by users. The result of an input event (e.g. left mouse button pressed) depends on the associated view, and the actions derived from the control mechanism are automatically sent to the view's associated model.
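To make the pattern concrete, the sketch below is a minimal Java rendering of MVC using the counter example discussed later in section 2.2.1. All class and method names (CounterModel, TextView, KeyboardController) are illustrative inventions of ours, not taken from any cited implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal MVC sketch; all names are illustrative.
interface View { void update(CounterModel model); }

class CounterModel {
    private int value = 0;
    private final List<View> views = new ArrayList<>();

    void attach(View v) { views.add(v); }
    int getValue() { return value; }
    void increase() { value++; notifyViews(); }
    void decrease() { value--; notifyViews(); }

    // The model knows its views and pushes every state change to them.
    private void notifyViews() { for (View v : views) v.update(this); }
}

class TextView implements View {
    public void update(CounterModel m) {
        System.out.println("Counter is now " + m.getValue());
    }
}

// The controller translates raw input events into operations on the model.
class KeyboardController {
    private final CounterModel model;
    KeyboardController(CounterModel model) { this.model = model; }
    void onKeyPressed(char key) {
        if (key == '+') model.increase();
        else if (key == '-') model.decrease();
    }
}

public class MvcDemo {
    public static void main(String[] args) {
        CounterModel model = new CounterModel();
        model.attach(new TextView());                    // same model, many views possible
        new KeyboardController(model).onKeyPressed('+'); // prints "Counter is now 1"
    }
}
```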
2.2 The Model-Presentation-Adapter-Controller-Coordinator

The concepts defined by the traditional MVC are valid for any interactive application, regardless of the specific environment where applications run. An application has a model, externalizes that model so users can perceive it, and provides mechanisms to
modify the state of the model. However, most existing implementations of MVC are customized for traditional, device-centric application environments, and it is therefore difficult to reuse them in the context of Ubiquitous Computing. The Model-Presentation-Adapter-Controller-Coordinator (MPACC) is an application model that maps the MVC pattern onto Ubiquitous Computing scenarios. This new model takes into account issues such as the absence of a single interaction device, contextual properties associated with the user and the space where the application runs, automatic model-view data type adaptation, mobility of the view, model and controller, and applications running on behalf of a user or a space instead of in the context of a particular device. MPACC standardizes the way applications for ubiquitous computing environments are built. From the user perspective, the ubiquitous application model defines a new way of using and customizing applications. Traditional MVC presents the state of the model to users by means of views, which are responsible for rendering the state in some visual form. In the ubiquitous application model, presenting the model's state to the user does not necessarily imply rendering it. The model can be externalized in any way that affects users' senses. Therefore, in this new model, the user can not only see the application's model, but also hear it, smell it, and touch it. One of the goals of this application model is to define a pattern that standardizes the way applications for ubiquitous computing environments are designed, built and assembled. The ubiquitous application model defines five elements: Model, Presentation, Adapter, Controller, and Coordinator. Figure 4 illustrates a UML diagram of the application model.
[Figure 4. The Ubiquitous Application Model: a UML class diagram relating the Coordinator, Model, Adapter, Presentation, and Controller classes, together with the Externalization and Affector abstractions and their association multiplicities.]
An application is composed of a coordinator, a model, one or more presentations, one or more controllers, and optionally one or more adapters. The coordinator is responsible for managing these elements according to specific policies. The ubiquitous application model does not impose any restriction on the implementation of the
components (e.g. they can be implemented as a collection of distributed sub-components or as a single component).

2.2.1 Model

The Model is the implementation of the application's central structure, which normally consists of data and a programmatic interface to manipulate that data. A model can be as simple as an integer representing a counter, with associated methods to increase, decrease and retrieve its value, or as complicated as a component of a medical application that stores, processes and manipulates the vital signs of a patient in a surgery room. Whenever the state of the model changes, it automatically notifies all attached presentations. The model has one or more presentations that externalize its state, an optional number of adapters that transform the state format into the presentation input format, and zero or more controllers that modify the state of the model.

2.2.2 Presentation

Presentations are software components that transform an application model into an output format that can be perceived by any human sense: touch, smell, sight, and hearing. This generalization contrasts with traditional MVC implementations, where the model is normally externalized through a view. Presentations have one associated model, zero or more controllers, and zero or one adapters.

2.2.3 Adapter

Models are not coupled to particular presentations. Models are responsible for managing a specific type of data, and export well-defined mechanisms to manipulate that data. Presentations, on the other hand, are responsible for interpreting and externalizing the data. However, in some situations, the data exported by a particular model cannot be directly manipulated by existing presentations, and has to be adapted to the requirements of specific presentations. Adapters are the components responsible for this task: they transform the data exported by a model in such a way that it can be externalized by a specific presentation. The decoupling of models and presentations makes it impossible for specific models to implement the adaptation mechanisms, because models do not know a priori what types of presentations will be attached to them. For this reason, adapters are instantiated automatically by the system (external infrastructure) when a presentation is attached to a model and the data types are incompatible. Adapters are associated with a single model, but they can be used by different presentations.

2.2.4 Controller

The Controller encapsulates the mechanisms to modify the state of the model. However, unlike the standard MVC controller, the controller defined by the ubiquitous application model provides the interface between the application model and not only input devices but, in general, any source of information (e.g. physical and digital context data) that can affect the application. The affector class depicted in figure 4 is responsible for abstracting any type of data that can
be of interest to the controller. Upon receipt of external data, the controller modifies the model accordingly. These changes are automatically propagated to all presentations associated with the model. Since a user can have more than one associated device, it is also possible to have controllers running on different devices and therefore active at the same time. This contrasts with MVC, where only one view/controller pair can be active at a time. The ability to have more than one controller active at the same time allows space-multiplexed configurations [2]. This type of configuration allows more than one controller, each one associated with a different functional aspect of the application (e.g. a wheel to control the sound level, a switch to go to the next song, and a touch screen to edit the play list of a music application). The ubiquitous application model further introduces the concept of time-space-multiplexed configurations: spatially multiplexed controllers can be time multiplexed across different applications running at different times (e.g. the same wheel, switch and touch screen can be used to control a video projection application). Controllers map input events into messages meaningful to the model. While it is possible to define those mappings programmatically (as in MVC), the ubiquitous application model allows users to configure them at run time. These dynamic controllers allow continuous customization of the behavior of the application according to the user context and requirements. We use a scripting language called Lua [] to rapidly define the actions to take upon receipt of specific events.

2.2.5 Coordinator

The Coordinator is responsible for managing the meta-level of the application. It stores references to the components that compose the application, as well as policies regarding adaptation, customization and mobility of the application. As illustrated in figure 4, the Coordinator class can have zero or more controllers associated. These controllers notify Coordinators about specific events, and as a result, the Coordinator can modify the application. For example, a user with an associated music application can specify that whenever he/she leaves a room, the controller of the application as well as the presentation (audio decoder) should automatically move to his/her PDA. When the Controller notifies the Coordinator that the user is leaving a room, the Coordinator automatically detaches the music presentation and the controller from the room (e.g. speakers, and a GUI running on a plasma display) and attaches a new music presentation and a new controller, which run on the user's PDA.
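To ground the five roles, the following Java sketch shows one possible set of interfaces for them. MPACC does not prescribe concrete signatures, so every name and method here is an assumption of ours rather than part of the model.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical MPACC interfaces; the model itself does not mandate signatures.
interface Presentation {
    void externalize(Object data);    // render, play, or otherwise affect a human sense
}

interface Model {
    void attach(Presentation p);      // attached presentations are notified on changes
    Object getState();                // state in the model's native data format
}

// Transforms the model's data type into one a given presentation can consume;
// instantiated by the infrastructure when the types are incompatible.
interface Adapter {
    Object adapt(Object modelData);
}

// An affector abstracts any input source: devices, physical or digital context.
interface Affector {
    void setListener(Controller c);
}

interface Controller {
    void onEvent(String event);       // maps input events to model operations
}

// The coordinator manages the application's meta-level: component references
// plus policies for adaptation, customization, and mobility.
class Coordinator {
    private Model model;
    private final List<Presentation> presentations = new ArrayList<>();
    private final List<Controller> controllers = new ArrayList<>();

    void setModel(Model m) { model = m; }

    void attachPresentation(Presentation p) {
        // The infrastructure would insert an Adapter here if the model's
        // data type and the presentation's input type did not match.
        presentations.add(p);
        model.attach(p);
    }

    void attachController(Controller c) { controllers.add(c); }

    // A mobility policy, e.g. "user left the room", could detach the current
    // presentation/controller and attach replacements running on the PDA.
}
```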
3. Application Example: Ubiquitous Slideshow

This section presents an application built according to the guidelines defined by the ubiquitous application model. The application exports functionality to display and control slides using any available resource in the space, including devices owned by users located in the space. While this type of application is not new, the novelty comes from the way the application is built and used. In general, for all ubiquitous applications we assume the existence of a software infrastructure for Active Spaces (in our case Gaia OS []), which allows programmatically
manipulating resources contained in the space (i.e. services, devices and tasks), and defines a standard execution environment by exporting a set of well-defined services. We also assume the existence of permanent network connectivity, both wired and wireless. For this particular example, we assume that the user interested in using the slideshow application is responsible for: (1) creating all the components required for the application, and (2) assembling all components together in order to activate the application. Although this approach gives the user full control over the application structure, it is not the common case. As explained at the end of this section, in most cases the application coordinator will automate most of these mechanisms, therefore simplifying the task of the user.

Available Infrastructure

The ubiquitous slideshow application can run in any arbitrary active space. For this particular example we used our Active Room, which is equipped with six Pentium IV 1.5 GHz machines with 256 MB of RAM, two plasma displays, one projector, a wired 100 Mbps Ethernet LAN, a wireless network (WaveLAN, 11 Mbps), two high definition TVs, audio equipment, X10 devices to control electrical appliances, badge detectors [], and I-Buttons []. Besides the devices contained in the room, for this example we also use three wireless-enabled Windows CE devices: two PocketPCs (Compaq iPAQ H3650 running Windows CE 3.0) with 32 MB of RAM and an Intel StrongARM processor running at 206 MHz, and one HandHeld Pro (Jornada 680 running Windows CE 2.11) with 16 MB of RAM and a Hitachi SH3 processor running at 133 MHz. All these devices are managed by our Active Space software infrastructure, Gaia OS.

Overall Picture

This example consists of a user (the slideshow presenter) entering the active room, and manually creating and activating the slideshow application. Figure 2 illustrates all the components of the application. When the user enters the room, her badge is automatically detected and her personal storage (including all her slideshow presentations) is automatically added to the room's storage. The user selects one of the desktops of the room to host the model of the application and the projector to display the slides, starts a GUI on her PDA (iPAQ PocketPC) that will be used as the controller, and starts another GUI on one of the plasma displays to display the list of her available slideshows. After assembling all the components, the application becomes active, and the user selects the right presentation using the controller running on her PDA and starts the presentation. As the attendees come into the room, two of them decide to receive the slides of the presentation on their PDAs (an iPAQ and a Jornada). Therefore, they start a slide viewer presentation on their devices and register it with the application, and as a result they automatically receive the currently active slide.
[Figure 2. Components of the ubiquitous slideshow application, including the Slide Show Model and the Slide Renderer Presentation.]
User Detection

When the slideshow presenter first enters the room, the Gaia OS Discovery Service [] detects her (using the badge detector devices) and uses the Gaia OS Event Manager [] to send an event notifying that a new user has entered the space. The Gaia OS Data Object Service [] (responsible, among other things, for managing the storage in Active Spaces) receives the event, interacts with the user's personal proxy, and retrieves the reference of the component that exports the files the user has specified must be available in the current Active Space. When the user leaves the space, the Discovery Service sends a new event and the Data Object Service automatically removes the files from the room's common storage.

Application Components Instantiation

Once inside the room, the user instantiates the components required for the application (note that being physically located in the room to instantiate the components is not a requirement; it could be done remotely). The application described in this example consists of a model, a coordinator, one controller, and two presentations. The model of the application (DataObjectExporter) is a component that manages a list of available slide presentations; once a presentation is selected, it creates a subcomponent that interacts with a Microsoft PowerPoint object to retrieve the slides. The coordinator simply stores references to the different components of the application and provides functionality to assemble and modify the application structure. For this particular example, the coordinator does not implement any policy and therefore has no meta-controllers associated (the user manually performs most of the tasks of the coordinator). The controller is a Microsoft MFC GUI component that allows selecting and starting presentations and navigating through the slides. One of the presentations is a Microsoft MFC GUI component that displays the list of available slideshows managed by the model. The other presentation is a GIF Presenter, responsible for rendering the slides (by default, the model exports the PowerPoint slides as GIF images). All these components are depicted in figure 2. In order to instantiate these components, the user interacts with the Gaia OS Unified Object Bus []. The Unified Object Bus (UOB) exports functionality to manage the lifecycle (i.e. creation, naming, destruction and dependency management) of heterogeneous components (e.g. CORBA components, Java components, executable components, and script components). The UOB can dynamically ship code across the system, and exports two main abstractions, namely the UOBHost and the Component Container. Component Containers are placeholders for components, and provide functionality to create, delete, inspect and manage the dependencies of components. A UOBHost is any device (e.g. desktop, workstation, or handheld device) capable of hosting the execution of Component Containers, and provides functionality to create, delete and retrieve Component Containers. The user utilizes the Active Space Browser (a graphical tool to interact with the UOB) to instantiate the components required for the application. This browser displays all available UOBHosts in the space, the Component Containers running on these UOBHosts, and the components running in every Component Container. The browser also shows a list of
services, devices, and people contained in the space. Figure 7 illustrates the Active Space Browser GUI. The user instantiates the model in the UOBHost named PC2401-2.cs.uiuc.edu. Next, she retrieves the list of available devices and chooses the plasma display as the target for the presentation component responsible for displaying the list of slideshows. The slide viewer is created in the projector device, and finally the controller is instantiated in the PocketPC device (the wireless36.cs.uiuc.edu UOBHost, according to figure 7). The Gaia OS Discovery Service detects all newly created components and automatically keeps track of their status by using a leasing mechanism. Upon component detection, the Discovery Service uses the Event Service to announce the new component (i.e. an "EnteredSpace" event), and in the same way, when a component fails to renew its lease, the Discovery Service sends another event to notify the rest of the system (i.e. a "LeftSpace" event). The Gaia OS Space Repository listens to the discovery events and automatically updates its internal database as components enter and leave the space. The Space Repository exports an API that allows searching for specific components contained in the space. Therefore, at this point the Space Repository contains the references of all components that will be used by the application.

Application Assembling

Once all the components have been created, the user has to create the application by assembling the components. The user utilizes another configuration tool called the Application Builder (figure 8). The Application Builder exports functionality to create, modify, and delete applications. This tool uses the Gaia OS Space Repository to retrieve the available components active in the space. The user first selects the Application Builder's option called "New Application". As a result, the configuration tool requests the name of the application and the name of the space where the application has to be created, and presents a list of available models running in the current space. The user enters the name of the application, selects the Slideshow Model from the list, and leaves the target space name blank (meaning that the application has to be created in the local space). With all this information, the Application Builder automatically chooses a UOBHost and instantiates the application coordinator, which is automatically assigned the reference of the selected model. After the application has been created, it is automatically discovered and registered in the Space Repository, and is therefore listed in the Application Builder. At this point the application already exists, but it does not have any associated presentations or controllers. Next, the user edits the application, which allows registering presentations and controllers with the application, and also allows defining application-specific rules (rules that affect the coordinator). The Application Builder automatically contacts the Gaia OS Space Repository and retrieves a list of compatible controllers and presentations. The user selects from the list the controller running on her PDA, the presentation component responsible for displaying the available slideshows, and the presentation component that renders the slides, and registers them with the application. As a result, the coordinator automatically stores the references of all these components, automatically assigns the model reference to the presentations and the controller, and finally attaches the presentations to the model.
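The attach-and-notify behavior this assembly produces can be mimicked in a few lines. The following self-contained Java sketch is a rough approximation; none of its class names correspond to actual Gaia OS APIs, and the slide navigation is a placeholder.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical miniature of the assembled slideshow application; all names
// are our own inventions, not Gaia OS APIs.
public class SlideshowAssembly {
    interface Presentation { void update(String slide); }

    static class SlideshowModel {
        private final List<Presentation> presentations = new ArrayList<>();
        private String currentSlide = "slide-1.gif";

        // Newly attached presentations immediately receive the current state.
        void attach(Presentation p) { presentations.add(p); p.update(currentSlide); }

        void nextSlide() {
            currentSlide = "slide-2.gif";   // placeholder navigation
            presentations.forEach(p -> p.update(currentSlide));
        }
    }

    public static void main(String[] args) {
        SlideshowModel model = new SlideshowModel();

        // Presentation on the projector renders the current slide.
        model.attach(slide -> System.out.println("Projector renders " + slide));

        // A latecomer attaches a viewer on a PDA and immediately gets the
        // currently active slide.
        model.attach(slide -> System.out.println("PDA shows " + slide));

        // The controller on the presenter's PDA advances the slides; every
        // attached presentation is notified of the change.
        model.nextSlide();
    }
}
```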
When the presentations are attached, the model automatically sends an event to notify the presentations that they have been attached to a model. As a result, the presentation displaying the slideshow list contacts the model, retrieves the available slideshows and displays the list. The other presentation does not take any specific action, since no slideshow has been selected yet. At this point, the application is active, and actions on the controller will affect the model and therefore the results displayed by the presentations. When it is time to start the presentation, the user selects, starts, and navigates through the slides using the controller on her PDA.

Application Modification

During the presentation, two of the attendees decide to receive the slides on their own PDAs. Therefore, they create a slide rendering presentation on their devices (which is automatically discovered and stored by the space) and use the Application Builder to edit the slideshow application and attach the presentations running on their devices. Once the presentations are attached to the model, they display the currently projected slide. However, in this scenario, one of the PDAs uses a slide viewer that can only parse bitmaps. The model of the application manipulates the slides as GIF images, and therefore there is a type mismatch. In this case, when the bitmap presentation is attached to the model, the ubiquitous application model infrastructure interacts with the Data Object Service, which automatically creates a GIF-Bitmap adapter. This application illustrates all the mechanisms required to create, activate and modify an application. However, we do not expect all users to follow this tedious process. Instead, we will create application templates, which contain a list of default components required for the application, as well as default configuration rules and policies. When one of these templates is chosen, the Application Builder automatically interacts with the Active Space, and instantiates, assembles and activates the application. Once the application is active, users can modify existing rules, as well as modify the application's structure. We will also provide a service in every Active Space that stores and exports these template definitions, therefore acting as an application database. This service can assist users in finding applications suitable for the space, and in learning how they were configured in the past (i.e. which resources were used and how they were used).
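Returning to the GIF-Bitmap adapter mentioned above: a rough sketch of such an adapter is shown below, transcoding a GIF byte stream into a bitmap with the standard javax.imageio library. The class name and byte-array interface are our assumptions; the paper does not show the Data Object Service's actual adapter design.

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

// Illustrative GIF-to-bitmap adapter for the type mismatch described above;
// the real adapter created by the Data Object Service is not shown in the
// paper. This sketch uses the standard javax.imageio transcoding facilities.
class GifToBitmapAdapter {
    byte[] adapt(byte[] gifData) throws IOException {
        BufferedImage image = ImageIO.read(new ByteArrayInputStream(gifData));
        if (image == null) {
            throw new IOException("input is not a decodable image");
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(image, "bmp", out);   // re-encode the slide as a BMP
        return out.toByteArray();
    }
}
```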
4. Ubiquitous Applications Issues

Ubiquitous Computing defines a new computational model that differs greatly from the one defined by traditional desktop computers. As an example, the interaction model and functionality we expect when sitting in front of our personal computer are completely different from the ones we expect when interacting with a "smart room". Therefore, the computational model introduced by Ubiquitous Computing brings new issues that must be carefully analyzed in order to understand how to build applications that take full advantage of this new paradigm. We present in this section some properties we have identified so far, which we believe are relevant when considering ubiquitous applications.

Application Ownership

Traditional applications are owned by devices. As a result, users are forced to use those particular devices in order to access the functionality exported by the
applications. Too much emphasis is placed on the devices, complicating in some cases the user-application interaction. In the case of Ubiquitous Computing, however, applications cannot be permanently associated with a single specific device, or with a collection of well-defined devices. Instead, applications instantiate themselves using the most appropriate resources according to specific properties, such as physical context (e.g. user location, light intensity, and number of people present). In this scenario, application ownership is transferred from particular devices to users and spaces. As a simple example of a user-owned application, consider a music jukebox application. The application is associated with the user, and as the user context changes, the application automatically finds and uses the most appropriate resources to continue playing and controlling the music. For example, the user may initially be in his/her office and move later to his/her car. While in the office, the music application uses a pair of computer speakers to play the music, retrieves the music from a distributed storage that contains files located at the user's home and at the office, and uses a customized GUI that runs on the user's PDA to control the song list as well as the music volume. As the user moves into the car, the application releases the speakers in the office and automatically starts using the speakers of the car. In the same way, the GUI running on the PDA and used as the controller is released, and the application automatically uses the buttons located on the steering wheel of the car to control the music. The songs, though, are still retrieved from the user's house and office. In this scenario, the user perceives a single music application that follows him/her as he/she moves. This contrasts with traditional applications, where the user would have to learn and use two different music applications, one at the office and another one in the car. This model places the main emphasis on the application itself, instead of on the devices used to execute and control it. As an example of a space application, consider an airport and an application that detects passengers entering the airport and automatically gives them personalized instructions regarding the status of their flight, as well as a navigational tool to guide them in real time to the gate. In this example, it is difficult to picture the application in the context of a single device. The application indeed runs "in the context of the airport", using the most appropriate resources at any particular time (these resources include the devices carried by the passengers, such as PDAs and phones).
Application Adaptability

Application adaptability refers to two fundamental aspects of the application: on the one hand, adapting the application's internal structure, and on the other hand, adapting the application's data. Application structure (or skeleton) adaptability is concerned with the internal configuration of the components that constitute the application. These components can normally be classified into three main groups according to the functionality they implement: some components are responsible for manipulating the data of the application, other components implement functionality that allows users to access and control that data, and finally, another group of components is responsible for presenting the data to users. Applications must be capable of adapting their internal structure as a result of external or internal events. For example, at a particular time t1, an application may be configured to receive input from a keyboard and display results on a monitor. However, at time t2, and due to some changes in the
context associated with the user, the most effective configuration of the application could consist of speech recognition as the input mechanism and audio as the output. In both configurations the application is still the same, but the way we interact with and receive results from the application changes. This structural adaptability may in some cases require data adaptation. Consider the previous example: when the application changes the output mechanism from the display (t1) to the audio output (t2), the data format also changes. While at t1 the data being displayed may be ASCII text, the data managed by the audio output at t2 may be PCM. Therefore, the application must be able to provide, or have access to, a specific service capable of transforming the data on the fly.
Application Mobility

Transferring application ownership from devices to users and spaces spans the physical limits of individual devices and makes it possible to associate a permanent computational aura with these users and spaces. However, achieving this functionality requires ubiquitous applications to be mobile, that is, capable of migrating the different pieces of the application to different devices. This mobility imposes some requirements on the application architecture: applications must be composed of distributed components capable of seamlessly communicating both locally and remotely, and the application must be able to migrate some of its components without affecting the global consistency of the application. Application mobility is not an issue in traditional applications, which are normally confined to a particular device and cannot be moved from there. The only way to continue the execution of an application on another device is by explicitly starting the application on the new device. However, users do not perceive continuity in the application execution, and in some cases different applications are involved (e.g. moving from the desktop's address book application to the PDA's). Application mobility in some cases involves adaptation: depending on the context, some components may need to be changed. For example, a user watching a movie in a living room will use an application with an MPEG decoder. However, as the user moves and gets on the bus, part of the application moves to his PDA, and therefore the MPEG component has to be replaced by another decoder appropriate for the particular PDA.
Importance of Context

Traditional applications are in most cases disconnected from the context associated with the user. Therefore, it is not possible for these applications to customize their behavior to the particular user's situation. This is one of the main differences with Ubiquitous Computing scenarios, where context plays a very important role [] [] []. However, the lack of context in traditional applications may be justified in the case of desktop computers, mainly due to the restricted human-computer interaction pattern. The way we interact with a desktop consists of going to the particular room where the computer is located, sitting in front of it, and using a keyboard and a mouse as the input mechanisms, and a display, a printer and speakers as the most common output devices. Furthermore, applications running on the desktop have no effect whatsoever on the physical space
where the user is located. In this situation, information such as user location may not be relevant, since the location is determined by the computer's location. When considering devices such as PDAs or smart phones, context becomes more important. For example, information such as user location is useful, since users do not have to be located in a particular place in order to use the device. Therefore, by using location information, applications can customize the information they present to the user (it is not uncommon to see PDAs with a GPS [] attached to them). However, even in this scenario the use of context is still quite restricted. Some PDAs incorporate a light sensor and use the sensor information to modify the screen brightness. However, it would be possible to pass this information to the applications and trigger changes in the applications themselves. For example, an e-book application could detect that the amount of light in the user's environment is insufficient and automatically offer to read the text to him/her (text-to-voice adaptation) instead of displaying it. In Ubiquitous Computing scenarios, context is a key property that has a strong effect on applications. We believe that context cannot be considered an optional source of information; instead, it must become a standard component of ubiquitous applications. As a result, ubiquitous applications can gather the state of the context, automatically adapt when required, and even change the state of the context. This requirement is essential to accomplish one of the fundamental aspects of Ubiquitous Computing, namely, bringing the digital and physical worlds together. Consider, for example, a user located in an "active space" (a space containing sensors and devices and capable of interacting and reacting according to user actions), and a music application owned by the user. As the user enters the space, some type of technology such as badges or I-Buttons [] automatically detects him/her. As a result, the music application is automatically notified. Next, the application uses the user's location to learn about the most suitable devices to play and control the music. However, the application uses not only location but also information such as the number of people in the room, the time of day and the type of space (e.g. conference room, office or user's living room) to decide the most appropriate application configuration. Some example configurations could be the application using the space's speakers (if there are no other people present in the space and the type of space allows it) or using the user's headphones instead if more people are present. However, context is not only used when the application is instantiated; it is used continuously to better adapt to different properties. For example, in the previous music scenario, while the application is playing the music using the user's office speakers, it detects that a meeting is starting in the next office. As a result, the application automatically limits the maximum volume, so the music does not disturb the people attending the meeting.
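The volume-limiting rule in this example can be expressed as a small context listener. The sketch below is our own illustration; the event names and listener interface are invented, not real Gaia OS context events.

```java
// Sketch of a context-driven rule: when a meeting starts in the adjacent
// office, cap the music volume; restore it when the meeting ends. The event
// names and listener interface are invented for illustration.
interface ContextListener {
    void onContextEvent(String type, String location);
}

class MusicVolumePolicy implements ContextListener {
    private int maxVolume = 100;   // percent of full volume

    @Override
    public void onContextEvent(String type, String location) {
        if ("MeetingStarted".equals(type) && "adjacent-office".equals(location)) {
            maxVolume = 30;        // keep the music from disturbing the meeting
        } else if ("MeetingEnded".equals(type)) {
            maxVolume = 100;
        }
    }

    int maxVolume() { return maxVolume; }
}
```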
External Infrastructure Support

Users' activities take place in particular physical spaces (e.g. meetings are held in meeting rooms, cooking takes place in a kitchen, and giving a presentation normally happens in a conference room). Therefore, it is reasonable to assume the existence of a software infrastructure that manages the space's resources and supports the execution of
ubiquitous applications or tasks, as well as the existence of network connectivity that allows interaction among all devices and services present in the space. Projects such as Interactive Workspaces at Stanford [] also assume the existence of an external infrastructure. Traditional (device-owned) applications rely on the functionality provided by the device's operating system. This operating system abstracts the complexity of the device and exports and manages all the available resources in a standard way. From the perspective of ubiquitous applications, a similar type of infrastructure is required to simplify the development and usability of such applications. We call this software infrastructure an operating system for physical spaces []. It is a meta-operating system that runs on top of existing operating systems (e.g. Windows, Linux, Unix, Windows CE and PalmOS) and presents physical spaces as programmable entities, or simply "active spaces", which hide all heterogeneity (software and hardware) and export a homogeneous programming interface to a set of well-defined services (e.g. discovery, space repository, security, event management, and data manipulation). We would like to mention that in some situations the assumption of the existence of an active space might not apply. Consider, for example, an application that allows users to communicate with remote peers. Initially, a user is interacting with a remote peer from a conference room. The application takes advantage of resources such as projectors, cameras and microphones present in the room. However, as the user leaves the room and decides to walk home, the application must readapt. While it is possible to assume the existence of an active city, the user will use his own devices to continue the remote meeting; therefore support from the active city is not required. From our perspective, the user has an associated user space, which is composed of all the devices the user is carrying (e.g. sensors, PDA, and phone). An instance of the previously defined meta-operating system runs in the context of the user space, and therefore provides support to the ubiquitous applications running in that context. In our example, the remote communication application interacts with the user's active space to move and adapt the application from the conference room to this space. As a result, the application moves the video to the PDA (adapting the video format) and the audio to the user's phone. An infrastructure such as the one proposed in this section (an OS for physical spaces) is essential to develop applications that fully exploit the potential of Ubiquitous Computing. An important benefit is that applications do not have to be developed for specific scenarios. By using the standard functionality exported by active spaces, they can run in different scenarios without having to be specifically customized.

Application Architecture

According to the properties previously mentioned, we believe it is essential to reconsider not only the way we develop applications but also the way we use them. From the development point of view, it is necessary to move away from monolithic, device-centric applications. The internal architecture of applications must be modular and highly decoupled, in such a way that it is possible to migrate, replace and adapt different
components as required by the execution environment, while maintaining application coherency and consistency. Components must also be able to interact effectively and seamlessly, both locally and remotely. A component-based architecture is a main requirement, not only from the point of view of adaptability and mobility, but also from the point of view of reusability: the same components should be usable, unmodified, in different applications. From the point of view of usability, ubiquitous applications pose a new challenge regarding the way users activate and interact with applications. Traditional device-centric applications are predefined. That is, applications already define the specific components that compose them, and can assume a default input mechanism (keyboard, mouse and pen) as well as a default output mechanism (display). While it is possible to use component-based technologies such as CORBA [], COM [] and JavaBeans [] to build such applications, this composition is done at development time: developers choose the components they require for the application and implement the application logic that defines the interaction rules among components. Because this component assembly happens at development time, users have no control over the structure of the application. Due to the nature of Ubiquitous Computing scenarios, it is not possible to create predefined applications that assume a set of well-known resources. Ubiquitous applications are dynamically assembled on demand, whenever their functionality is required. The sheer number of services and devices present in Ubiquitous Computing scenarios complicates the process of creating applications that can assume a specific set of resources. Moreover, even when it is feasible to assume a specific set of resources, users must be allowed to customize applications according to their personal preferences and requirements. We believe in the concept of ad hoc ubiquitous applications, which implies that these applications are created according to the available resources, the functionality required by users, the context of the application and the preferences of the users. As an example, consider a slideshow application. A traditional device-centric application would use by default the display of the device to render the slides, and the mouse and keyboard as the input mechanism. In the case of a ubiquitous application, though, the scenario changes considerably. Assume that the slideshow takes place in an active conference room. As the speaker enters the conference room, he/she chooses a slideshow service, selects which devices are to be used to render the slides, specifies how to control the slides (e.g. voice control, hand gesture recognition, or a GUI running on a specific device), defines a set of rules (such as starting the presentation when 80% of the audience is present), and finally assembles and activates the application (a space-owned application). As the attendees come into the conference room, some of them decide to modify the slideshow application so it automatically sends the notes of the slides to their PDAs.
5. Related Work The ubiquitous application model is influenced by a myriad of research topics such as ubiquitous computing, distributed computing, reflective middleware, operating systems,
and human-computer interaction. This section presents some relevant research that has affected the design of the ubiquitous application model. Graspable Interfaces [2] present an evolutionary model for GUIs in which physical objects are used to interact with applications, thereby making access to the digital world more natural. Tangible User Interfaces (TUIs) [3] are a generalization of graspable interfaces. This approach presents a model that combines digital and physical entities (bits and atoms). The main difference from graspable interfaces is that in the case of TUIs, physical objects can be both the input and the output of the application. In general, TUIs propose moving the GUI from the desktop's monitor to the physical space inhabited by the user; the world becomes the interface. TUIs define a new graphical user interface model called MCRpd (Model-Controller-Representation, physical and digital) [4], which is based on the traditional MVC [1]. This model reuses the model and controller components from MVC and replaces the view component with physical and digital representations. Audio and video are examples of digital representations, while physical entities such as "bricks" [5] are examples of physical representations. This approach is solely based on using physical objects as controllers (physical objects are both representations and controllers). Cooperstock et al. [6] describe the lack of flexibility of existing GUIs for interacting with a myriad of devices: it is hard for users to fully manipulate devices according to their needs. They introduce the concept of a Reactive Environment, which is normally associated with a space responsible for the low-level management of all the contained devices. They present a teleconferencing application as an example; however, this application is customized for the particular environment and for the specific devices contained in the room. Brewster et al. [7] describe how to use non-speech sounds to convey information to the user. This is an example of an application that does not rely on a visual representation to present data to users. Context has already been identified as a key property in mobile applications [8] [9] [10] [11] [12]. The behavior of applications, and especially their output, can be greatly affected by it. Context must become a standard component in application development, and for this reason a new application model is required [13] that includes context as a standard, orthogonal component. There has been previous work on formal descriptions of the Model-View-Controller [14], as well as frameworks based on a similar model and intended to provide a systematic design method [15]. These approaches provide formal tools to define the behavior of interactive applications. Ubiquitous applications are built upon an extended MVC design; therefore, all these formal tools can be applied to abstract and formally describe their behavior.
6. Conclusions

This paper presents an application model for ubiquitous applications. This new application model reuses the fundamental concepts introduced by the traditional Model-
View-Controller and adapts them to the computational model defined by Ubiquitous Computing. As a result, the new application model captures issues such as application mobility, application adaptability, dynamic reconfiguration and user-centric applications. As defined by Abowd and Mynatt [Abowd, 2000 #3], applications running in the context of Ubiquitous Computing are better described as tasks. These tasks are normally associated with users and do not have a clear time frame: they are started but do not have a clear end. The ubiquitous application model presented in this paper provides the tools to build such applications by taking advantage of properties such as mobility and adaptability. Finally, the ubiquitous application model proposes not only a new way of creating applications, but also a new way of using them. These applications are dynamically assembled, can change their structure over time to adapt to different situations, and are not confined to specific devices.
7. Future Work

At the time of writing, the ubiquitous application model is still in its initial stage. We have already implemented the application framework in the context of Gaia OS [], and have also built some prototype applications based on the model. There is still considerable work to do regarding the format of application templates, application reconfiguration rules, the services required for creating, assembling, modifying and deleting applications, and the use of scripting support to easily customize applications. We are also working on our Active Space software infrastructure, and are planning to deploy different Active Spaces in our Computer Science building. This will allow us to experiment with user-owned applications, and with how to have applications automatically move from space to space, following the user.
References

1. Glenn E. Krasner and Stephen T. Pope. A Description of the Model-View-Controller User Interface Paradigm in the Smalltalk-80 System. 1988, ParcPlace Systems, Inc.: Mountain View.
2. George W. Fitzmaurice. Graspable User Interfaces. Ph.D. thesis, Department of Computer Science, University of Toronto: Toronto, 1996.
3. Hiroshi Ishii and Brygg Ullmer. Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. In Proceedings of CHI, Atlanta, 1997.
4. B. Ullmer and H. Ishii. Emerging Frameworks for Tangible User Interfaces. IBM Systems Journal, 2000. 39(3&4).
5. George W. Fitzmaurice, Hiroshi Ishii and William Buxton. Bricks: Laying the Foundations for Graspable User Interfaces. In Proceedings of CHI, 1995.
6. Jeremy R. Cooperstock, Sidney S. Fels, William Buxton and Kenneth C. Smith. Reactive Environments: Throwing Away Your Keyboard and Mouse. Communications of the ACM (CACM), 1997. 40(9): p. 65-73.
7. Stephen Brewster, Grégory Leplatre and Murray Crease. Using Non-Speech Sounds in Mobile Computing Devices. In Proceedings of the First Workshop on Human Computer Interaction with Mobile Devices, 1998.
8. Anind K. Dey and Gregory D. Abowd. The Context Toolkit: Aiding the Development of Context-Aware Applications. In Proceedings of the Workshop on Software Engineering for Wearable and Pervasive Computing (SEWPC), Limerick, Ireland, 2000.
9. Anind K. Dey and Gregory D. Abowd. Towards a Better Understanding of Context and Context-Awareness. In Proceedings of the CHI Workshop on The What, Who, Where, When, and How of Context-Awareness, The Hague, Netherlands, 2000.
10. Anind K. Dey, Gregory D. Abowd and Daniel Salber. A Context-Based Infrastructure for Smart Environments. In Proceedings of the First International Workshop on Managing Interactions in Smart Environments (MANSE'99), Dublin, Ireland, 1999.
11. Hans-W. Gellersen, Albrecht Schmidt and Michael Beigl. Adding Some Smartness to Devices and Everyday Things. In Proceedings of the Third IEEE Workshop on Mobile Computing Systems and Applications, Monterey, California, 2000.
12. A. Schmidt, K.A. Aidoo, A. Takaluoma, U. Tuomela, K. Van Laerhoven and W. Van de Velde. Advanced Interaction in Context. In Proceedings of Handheld and Ubiquitous Computing, 1999.
13. Tom Rodden, Keith Cheverst, Nigel Davies and Alan Dix. Exploiting Context in HCI Design for Mobile Systems. In Proceedings of the First Workshop on Human Computer Interaction with Mobile Devices, 1998.
14. Richard Helm, Ian M. Holland and Dipayan Gangopadhyay. Contracts: Specifying Behavioral Compositions in Object-Oriented Systems. In Proceedings of ECOOP/OOPSLA'90, 1990.
15. D. D. Cowan and C.J.P. Lucena. Abstract Data Views: An Interface Specification. IEEE Transactions on Software Engineering, 1995. 21(3): p. 229-243.