Dynamic user interface adaptation for mobile computing devices

Mario Bisignano, Giuseppe Di Modica, Orazio Tomarchio
Dipartimento di Ingegneria Informatica e delle Telecomunicazioni
Università di Catania - Viale A. Doria 6, 95125 Catania - Italy
Email: {Mario.Bisignano, Giuseppe.DiModica, Orazio.Tomarchio}@diit.unict.it
Abstract

A large number of heterogeneous and mobile computing devices are nowadays employed by users to access the services they have subscribed to. The work of application developers, who must maintain several versions of the user interface for a single application, is becoming increasingly difficult, error-prone and time-consuming. New software development models, able to easily adapt an application to the client execution context, need to be exploited. In this work we present a framework that allows developers to specify, by means of an XML-based language, the user's interaction with the application independently of the specific execution context. Starting from such a specification, the system subsequently "renders" the actual application user interface on a specific execution environment, adapting it to the characteristics of the terminal in use.
1. Introduction

Research on ubiquitous computing originates from the early work of Weiser [16] in 1991, when he described his vision of an environment full of computing devices able to communicate with each other while remaining gracefully integrated with human users. His vision was guided by the idea that "the most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it" [16]. Many critical elements which, at that time, made that scenario a futuristic vision are now available as commercial products: consider portable and handheld computers (PDAs), smartphones, wireless communication networks (WLAN, GPRS, UMTS), and devices to remotely monitor and control various kinds of equipment [11, 3, 8]. In these environments, due to the heterogeneity and mobility of users' devices, applications cannot rely on a reliable and stable execution context. Context-aware mechanisms are needed in order to build adaptive applications and location-based services able to dynamically react to changes in the surrounding environment [12, 4]. Current approaches, both traditional models for distributed computing and the related middleware [6], do not fully satisfy the requirements imposed by these environments. One of the main requirements for building ubiquitous and pervasive computing scenarios (and the one we focus on in this paper) is the possibility for the user to access the same application on heterogeneous terminals, through different network technologies and different access techniques [11]. To accomplish this while avoiding the burden of reimplementing the same application from scratch for each device and/or network technology, new programming paradigms and the related middleware are needed, providing an adequate level of transparency to the application developer. In particular, from the developer's perspective, developing a new user interface and new content types each time a new device enters the market is not a feasible solution. In this paper we present the basic architecture of a framework whose goal is to make the presentation layer of the application, that is the user interaction, adaptive and (as far as possible) independent of the specific execution context. Several existing approaches dealing with Web content fruition [13, 5, 15] aim at designing services tailored for specific categories of devices (typically PDAs and WAP-enabled cellular phones). All these approaches, however, rely on the assumption that the application (the web browser) is already installed on the target device. Even if common techniques can be adopted, a different approach should be followed when considering generic applications to be adapted, as we set out to do in this work. In this case, research has focused on the use of metalanguages and/or metamodels for describing the interface and the user interaction, such as the approaches followed by UIML [1], AUIML [2], XUL [17], and Dygimes [9, 10].
In our framework, the presentation level (user interface), together with the user interaction model, is described at a high abstraction level in a device-independent way. Moreover, the system is supported by a set of adaptation components (renderers), each one specific to the current
Proceedings of the The 2005 Symposium on Applications and the Internet Workshops (SAINT-W’05)
0-7695-2263-7/05 $20.00 © 2005 IEEE
execution context on the client side. A given renderer is in charge of adapting the application's user interface to a specific execution environment, according to the features of the device in use. Depending on the user's needs and the application's characteristics, this step can be performed either off-line [1], distributing the result to the client later on, or dynamically on-line [14]. A prototype of the framework has already been implemented, together with a "renderer" for the Java 2 Micro Edition (J2ME) environment.
2. System architecture

The overall scenario is the one, very common nowadays, where a service provider wants to offer a set of information services to a wide variety of potential users, each of them accessing the system with whatever device they happen to have, wherever they are. Currently, this means that software developers have to build different versions of the service user interface according to the features of each user's device.
[Figure 1: block diagram of the framework, showing the Java, J2ME and XHTML interface renderers, the application interaction specification based on the XML vocabulary, the content repository, the application business logic with its media content adapter, profile management, and remote event handling through SOAP.]

Figure 1. Framework architecture
The framework relieves application developers from rewriting the user interface code for each device's execution environment. The core of the framework is an abstract XML-based specification language, which allows the developer to describe the user's interaction with the application. This description is subsequently transformed ("rendered") into the actual user interface for the specific device used by the client. The framework also performs the adaptation of media content according to the client's capabilities. An event handling system is present, which is able to manage both local events (events that can be processed locally) and remote events (events that should be processed remotely by the application business logic). Figure 1 depicts the overall architecture of the designed framework; the following paragraphs describe the features of each module and their interactions.
2.1. XML Vocabulary

A model for the user interface, the single interaction elements, their state and the associated events has been defined (the XML vocabulary). To define this model, technologies based on standard markup languages such as XML have been adopted, so as to guarantee a high level of flexibility and the possibility to customize the model. While defining this interaction model, an intent-oriented approach has been adopted, where each element interacting with the user is modeled at a high level, independently of how it is actually displayed. This is the main reason why we prefer to talk about a "user interaction specification" rather than a "user interface". When defining the items of the vocabulary we endeavored to consider all the possible combinations of objects representing the interface's interactions. The input object enables user-application interaction through the insertion of data that, furthermore, undergoes validation and restriction procedures (type, length, etc.). Among the others, objects have been designed to support multiple choices and lists of items. We also focused on the problem of content display. Only a few devices are able to display graphic and multimedia elements, since this capability is limited by the hardware (size and colours of the display) and software (transcoding libraries) features of the device. To this end, we have introduced in our vocabulary some XML tags whose attributes allow the developer to specify whether graphic and multimedia content is needed for the semantics of the application or not. Finally, some tags of the vocabulary make it possible to specify whether the event associated with a given interaction object must be handled locally or remotely on a server.
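The paper does not list the vocabulary itself; the following fragment is only a plausible sketch of what an intent-oriented interaction specification in this spirit might look like. All tag and attribute names are hypothetical:

```xml
<!-- Hypothetical intent-oriented interaction specification.
     Tag and attribute names are illustrative, not the actual vocabulary. -->
<interaction name="movieSearch">
  <!-- Abstract input with validation constraints: the renderer decides
       whether this becomes a text field, a form item, or something else -->
  <input id="title" type="string" maxlength="40" required="true"
         label="Movie title"/>
  <!-- Single selection from a list: could be rendered as a ChoiceGroup
       on J2ME, a drop-down list in XHTML, a combo box in Java SE -->
  <choice id="city" mode="single" label="City">
    <item value="CT">Catania</item>
    <item value="RM">Roma</item>
  </choice>
  <!-- Image marked as not essential to the application semantics:
       devices without graphic capabilities may silently drop it -->
  <image src="playbill.jpg" essential="false" label="Playbill"/>
  <!-- The event fired by this object must be processed remotely
       by the application business logic -->
  <trigger id="search" label="Search" event-scope="remote"/>
</interaction>
```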
2.2. User Interface Rendering

Once the application developer has specified the user-application interaction using the above XML vocabulary, a specific "interface renderer" produces the actual user interface to be displayed on the user terminal. Each interaction scenario at the application level (for example, "selection of an element from a list", "insertion of a numeric value", "pushing a button", etc.) is realized by means of an ad-hoc "renderer" (specific to each type of client execution environment) that produces a user interface satisfying the functional requirements of the application to the best allowed by the client's capabilities. The choice of the interface renderer is made dynamically, according to the client profile. The profile management module has precisely the role of managing the different client profiles, choosing the suitable interface renderer for each device. Based on the client profile, this module will choose to send to the client device either both the renderer and the XML interface description, or only the latter (if the device already owns the rendering application), or directly the rendered interface. Although it has not been fully implemented, the mechanism allows three different kinds of profiles to be taken into account: device profile, network profile, and user profile. The need for a standard for device profile description has been growing in recent years; the W3C has promoted the CC/PP (Composite Capability/Preference Profile) working group [7], which has recently issued a Recommendation (Structures and Vocabularies). This W3C Recommendation is the basis for the UAProf (User Agent Profile) specifications, defined by the Open Mobile Alliance for mobile telephony, which we have taken as a reference for the definition of the profile concept within our work. These profiles are very complex, including much information related both to the hardware features of a device and to the software environment available on it. In our implementation work we considered only a limited number of profile parameters, the ones we believe are fundamental to check the viability of the approach, e.g. screen size, number of supported colors, supported image types, etc.
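The renderer selection and the three delivery strategies can be sketched as follows. This is a minimal illustration under simplifying assumptions: the class name, the flat attribute map, and the attribute keys are hypothetical stand-ins for a real UAProf/CC/PP profile.

```java
// Minimal sketch of profile-driven renderer selection. Names and
// attributes are hypothetical; a real implementation would parse
// UAProf/CC/PP documents rather than this simplified map.
import java.util.Map;

public class ProfileManager {

    // The three delivery strategies described in Section 2.2.
    public enum Delivery { RENDERER_PLUS_SPEC, SPEC_ONLY, PRE_RENDERED }

    /** Pick a renderer from a few UAProf-like attributes. */
    public static String chooseRenderer(Map<String, String> profile) {
        String platform = profile.getOrDefault("SoftwarePlatform", "");
        if (platform.contains("MIDP")) return "J2ME";      // mobile phone
        if (platform.contains("J2SE")) return "JavaSwing"; // desktop JVM
        return "XHTML";                                    // generic browser
    }

    /** Decide what to ship to the client device. */
    public static Delivery chooseDelivery(Map<String, String> profile) {
        if (Boolean.parseBoolean(profile.getOrDefault("HasRenderer", "false")))
            return Delivery.SPEC_ONLY;          // device already owns a renderer
        if (profile.getOrDefault("SoftwarePlatform", "").contains("MIDP"))
            return Delivery.RENDERER_PLUS_SPEC; // ship MIDlet: renderer + spec
        return Delivery.PRE_RENDERED;           // render on the server side
    }

    public static void main(String[] args) {
        Map<String, String> phone = Map.of(
                "SoftwarePlatform", "MIDP-1.0",
                "ScreenSize", "128x128",
                "HasRenderer", "false");
        System.out.println(chooseRenderer(phone));  // J2ME
        System.out.println(chooseDelivery(phone));  // RENDERER_PLUS_SPEC
    }
}
```

The example mirrors the three cases in the text: ship renderer plus specification, ship the specification alone, or ship a pre-rendered interface.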
2.3. Content adaptation

What has been said so far applies to the rendering of the user interface. But when multimedia content is also considered, content adaptation becomes equally important. In this case, apart from considering the user's device features, network parameters should also be taken into account, since, for example, bandwidth can be a constraint when streaming multimedia content. The system includes the possibility to integrate a "media content adapter", capable of performing the real work of adapting a specific content item according to the global context of the end user. The techniques that could be adopted depend on the actual content format: they include image resizing, transcoding from one video/audio format to another, color depth scaling, resolution changing, etc. However, technological solutions addressing this issue are in some respects already available in the literature or in the commercial arena, and thus are beyond the scope of this paper.
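The image branch of such an adapter is straightforward to sketch. The code below is not the paper's implementation, only an illustration of the two operations mentioned later in the example application (resizing to the device screen while preserving the aspect ratio, and transcoding to PNG); class and method names are made up:

```java
// Sketch of the image part of a media content adapter: scale an image
// to the device screen and transcode it to PNG. Names are illustrative.
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

public class MediaContentAdapter {

    /** Scale to fit the device screen, preserving the aspect ratio. */
    public static BufferedImage fitToScreen(BufferedImage src,
                                            int screenW, int screenH) {
        double scale = Math.min((double) screenW / src.getWidth(),
                                (double) screenH / src.getHeight());
        scale = Math.min(scale, 1.0);          // never enlarge the image
        int w = Math.max(1, (int) (src.getWidth() * scale));
        int h = Math.max(1, (int) (src.getHeight() * scale));
        BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = dst.createGraphics();
        g.drawImage(src, 0, 0, w, h, null);    // actual resampling step
        g.dispose();
        return dst;
    }

    /** Transcode to PNG, whatever the source format was. */
    public static byte[] toPng(BufferedImage img) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(img, "png", out);
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        BufferedImage playbill =               // stand-in for a decoded JPEG
                new BufferedImage(600, 800, BufferedImage.TYPE_INT_RGB);
        BufferedImage small = fitToScreen(playbill, 128, 128);
        // 600x800 scaled by min(128/600, 128/800) = 0.16 -> 96x128
        System.out.println(small.getWidth() + "x" + small.getHeight());
    }
}
```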
2.4. Application Business Logic

The last issue regards the interaction between the user interface and the application business logic. Of course, this issue is strictly correlated to the adopted application deployment model. The framework does not impose any fixed deployment model: this means that the interface generation can be performed either on-line, when the application is requested, or off-line, by delivering the application together with the interface already rendered for the device on which it is supposed to run. If the application is stand-alone, and the user device has enough computing power, both the user interface and the application business logic can run on the client device. In this case, handling the events triggered by the user's interaction is a straightforward task, since all of them will be handled locally. But if the user device is not able to execute the business logic or, as often happens, the application is not stand-alone and needs to access remote resources, then the problem arises of managing the events (which we call remote events) generated by the interface. The developed framework manages this issue by distinguishing between local events and remote events. The former are handled locally by the user device, while the latter are handled by suitable business logic available on a remote server. This distinction can be made directly at application design time: when describing the user interaction specification, it can be stated whether a given interaction will generate a local or a remote event.
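The local/remote split described above can be sketched as a small dispatcher. This is a hand-made illustration, not the framework's actual code: the scope strings and handler interface are assumptions, and the remote handler stands in for what would be a SOAP call in practice.

```java
// Sketch of an event dispatcher: the scope declared in the XML
// interaction specification decides whether an event is consumed on
// the device or forwarded to the server. All names are hypothetical.
import java.util.HashMap;
import java.util.Map;

public class EventDispatcher {

    public interface Handler { String handle(String eventId); }

    private final Map<String, String> scopeById = new HashMap<>();
    private final Handler local;
    private final Handler remote;  // would wrap a SOAP call in practice

    public EventDispatcher(Handler local, Handler remote) {
        this.local = local;
        this.remote = remote;
    }

    /** Registered while parsing the interaction specification. */
    public void declare(String eventId, String scope) {
        scopeById.put(eventId, scope);
    }

    public String dispatch(String eventId) {
        // Events not explicitly declared remote are handled on the device.
        boolean isRemote = "remote".equals(scopeById.get(eventId));
        return (isRemote ? remote : local).handle(eventId);
    }

    public static void main(String[] args) {
        EventDispatcher d = new EventDispatcher(
                id -> "local:" + id,     // e.g. pausing a downloaded trailer
                id -> "remote:" + id);   // e.g. querying the movie database
        d.declare("search", "remote");
        d.declare("pause", "local");
        System.out.println(d.dispatch("search")); // remote:search
        System.out.println(d.dispatch("pause"));  // local:pause
    }
}
```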
3. Example application

A prototype of the overall framework, including a renderer for J2ME-enabled devices, has been implemented. In this section we provide a simple but complete example application, which shows the J2ME renderer in action and the potential of the overall framework. The application allows users to get information about a movie (the title, the plot, the showtimes, the theaters) through different mobile devices. When the user requests the service, the generic user interaction specification - developed using our XML vocabulary - is transmitted to his J2ME-enabled mobile phone. In the developed example, both the user interaction specification and the J2ME renderer are packaged and downloaded to the user device as a single MIDlet. When this MIDlet is executed, the renderer interprets the generic user interaction specification and dynamically builds and renders the actual user interface. Once the user has logged in, he can search among theaters, movies, and showtimes that match his preference data. These activities generate remote events which are processed by the application logic on the service provider side: all the communications and request processing are managed through the SOAP protocol. After the selection of a movie, the application presents all the movie's information, the playbill and the trailer. The last two items enable the content adaptation features of the framework: the service provider owns the playbill in a specific graphic format (for example, JPEG) and the movie trailer in a specific video format (for example, MPEG-1). As far as the image is concerned, the adapter transcodes it into the PNG format (currently the only one guaranteed to be supported by all MIDP 1.0 devices). Then the renderer adapts its size to the device screen size and displays it (see Figure 2a).
In this case, the image is center-aligned (if possible) and is adapted keeping its original aspect ratio, as specified by the XML user interaction description. Figure 2b shows the rendering obtained when the device does not support image display, or when the user has chosen not to display images (for example, when connected through a slow and costly network connection). Regarding the trailer, if the device supports the MMAPI (Mobile Media API) of the MIDP 2.0 profile, the user will be able to display and play it locally. The MMAPI enables the management of multimedia content on J2ME devices, allowing multimedia-rich wireless J2ME applications. In our case, the generated user interface supports the basic video player functions, such as play, stop, pause and volume control (shown in Figure 2c). All of these functions are managed as local events, since, once the trailer has been downloaded, it can be controlled locally. Of course, if the device is not MIDP 2.0-compliant, or the user has specified not to display videos, the trailer cannot be played and will not be downloaded, saving considerable network bandwidth.
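The remote events of this example travel over SOAP. As an illustration of what such a request might carry, the following sketch builds a SOAP 1.1 envelope by hand; the `remoteEvent` element, its children, and the overall message layout are purely hypothetical, since the paper does not specify the wire format.

```java
// Sketch of how a remote event might be serialized as a SOAP request.
// Element names are hypothetical; the paper only states that remote
// events are exchanged through the SOAP protocol.
public class SoapEventMessage {

    public static String envelope(String eventId, String payload) {
        return "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
             + "<soap:Envelope xmlns:soap="
             + "\"http://schemas.xmlsoap.org/soap/envelope/\">"
             + "<soap:Body>"
             + "<remoteEvent>"                 // hypothetical element
             + "<id>" + eventId + "</id>"
             + "<payload>" + payload + "</payload>"
             + "</remoteEvent>"
             + "</soap:Body>"
             + "</soap:Envelope>";
    }

    public static void main(String[] args) {
        // A "movie selected" remote event carrying the chosen title.
        System.out.println(envelope("movieSelected", "The Matrix"));
    }
}
```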
Figure 2. Example application: (a) with the playbill; (b) without the playbill; (c) with the movie trailer
4. Conclusion

Implementing dynamic user interfaces (UIs) with traditional models, by hard-coding all the requirements for adaptation to heterogeneous computing devices, is an error-prone and expensive task. The approach we propose in this work is based on a clear separation of the presentation layer from the application business logic. Furthermore, the presentation layer has been structured in such a way as to promote the automatic generation of the actual user interface starting from an abstract specification of the user interaction. For this purpose, an XML-based language has been defined, giving the application developer the means to represent the user's interaction with the application in an abstract way. This "intent-oriented" language can be interpreted by dedicated renderer modules which in turn, according to the client device's profile, generate a user interface fitting its features. The implementation of a J2ME renderer prototype has also been described, showing the viability of the approach. A standard and interoperable mechanism based on SOAP allowed us to easily manage the remote events generated by the interface as well.
References

[1] M. Abrams, C. Phanouriou, A. Batongbacal, S. Williams, and J. Shuster. UIML: an appliance-independent XML user interface language. Computer Networks, 31(11-16):1695-1708, May 1999.
[2] P. Azevedo, R. Merrick, and D. Roberts. OVID to AUIML: User-Oriented Interface Modelling. In Proc. of TUPIS'2000, York, UK, Oct. 2000.
[3] G. Banavar and A. Bernstein. Software infrastructure and design challenges for ubiquitous computing applications. Communications of the ACM, 45(12):92-96, Dec. 2002.
[4] S. Banerjee, M. Youssef, R. Larsen, A. Shankar, et al. Rover: Scalable Location-Aware Computing. IEEE Computer, 35(10):46-53, Oct. 2002.
[5] T. Bickmore and B. N. Schilit. Digestor: device-independent access to the World Wide Web. Computer Networks and ISDN Systems, 29(8-13):1075-1082, Sept. 1997.
[6] L. Capra, W. Emmerich, and C. Mascolo. Middleware for mobile computing. In Tutorial Proc. of the International Conf. on Networking 2002, Pisa, Italy, May 2002. LNCS 2497, Springer Verlag.
[7] CC/PP Specifications. Available at http://www.w3.org/Mobile/CCPP/.
[8] M. Hannicainen, T. Hamalainen, M. Niemi, and J. Saarinen. Trends in personal wireless communications. Computer Communications, 25(1):84-99, Jan. 2002.
[9] K. Luyten and K. Coninx. An XML-based runtime user interface description language for mobile computing devices. In Proc. of the 8th International Workshop on Interactive Systems: Design, Specification, and Verification, Glasgow, UK, June 2001.
[10] K. Luyten, K. Coninx, C. Vandervelpen, J. Van den Bergh, and B. Creemers. Dygimes: Dynamically Generating Interfaces for Mobile Computing Devices and Embedded Systems. In Proc. of MobileHCI 2003, Udine, Italy, Sept. 2003.
[11] M. Satyanarayanan. Pervasive computing: vision and challenges. IEEE Personal Communications, 8(4):10-17, Aug. 2001.
[12] B. Schilit, D. Hilbert, and J. Trevor. Context-aware communication. IEEE Wireless Communications, 9(5):46-54, Oct. 2002.
[13] B. Schilit, J. Trevor, D. M. Hilbert, and T. K. Koh. Web interaction using very small internet devices. IEEE Computer, 35(10):37-45, Oct. 2002.
[14] O. Tomarchio, G. Di Modica, D. Vecchio, D. Hovanyi, E. Postmann, and H. Portschy. Code mobility for adaptation of multimedia services in a VHE environment. In IEEE Symposium on Computers and Communications (ISCC 2002), Taormina, Italy, July 2002.
[15] J. Trevor, D. Hilbert, and B. Schilit. Issues in Personalizing Shared Ubiquitous Devices. In Proc. of UbiComp 2002, Göteborg, Sweden, Sept. 2002.
[16] M. Weiser. The Computer for the 21st Century. Scientific American, Sept. 1991.
[17] XUL (XML User Interface Language). http://www.mozilla.org/projects/xul/.