Creating Multiplatform User Interfaces by Annotation and Adaptation∗

Yun Ding, Heiner Litz
European Media Laboratory GmbH
Schloss-Wolfsbrunnenweg 33, 69118 Heidelberg, Germany
[email protected], [email protected]

ABSTRACT
This paper presents our novel framework, which creates user interfaces (UIs) for a variety of devices by annotating and reusing an existing UI originally designed for large devices. It distinguishes itself from previous work through its unique combination of reusing existing UIs, intuitive graphical support and an adaptation-based approach. It is extensible, allowing UI developers to build and integrate their own transformation strategies into our framework.

Categories and Subject Descriptors: D.2.2 [Software Engineering]: Design Tools and Techniques – user interfaces
General Terms: Algorithms, Design
Keywords: Multiplatform user interface development tool, single-authoring techniques.
1. INTRODUCTION
Pervasive computing environments typically involve a diversity of devices, ranging from powerful workstations to tiny cellular phones. Their differences in screen size, computing power, memory and networking capacity pose great challenges for the development of user interfaces (UIs) that are to be rendered on multiple target devices. Hand-coding UIs for each device is complex and time-consuming, and therefore not acceptable. A common and recurring situation is that UIs for large devices are already available. In this paper, we propose a new framework that supports graphical annotation of existing UIs and adapts them into customized UIs for multiple platforms. We first describe related work, with a particular focus on model-based UI development. Then we present our concepts while relating them to model-based concepts. Finally, we sum up and point out future work.
2. RELATED WORK

In model-based UI development, a developer specifies several models (e.g., task, data, presentation, platform and user models) in a declarative language. From this specification, UIs for various platforms can be generated [2]. The generation process involves different steps, such as the selection of interaction objects (IOs), the setting of their attributes (e.g., size and font), their positioning or layout inside a window, and their distribution into higher-level IOs (e.g., containers such as panes). Despite the appealing idea of automatically generating multiple UIs from a single specification, model-based UI design has not been adopted by the software industry [7]. The main reasons are the resources (e.g., time, information and complexity) required to specify the models [4] and the complexity of the system itself. To avoid having to specify the models from scratch, the concept of reverse engineering user interfaces (e.g., MOBILE [3]) allows the abstraction of a task model from a final UI: the designer annotates each window and UI component with a newly created user task, and the tasks can then be arranged into a task model. To reduce the large number of decisions a model-based system has to make based on diverse models, adaptation-based approaches such as the single-authoring technique ScalableWeb [5] and the concept of graceful degradation [1] deduce UIs for more constrained devices from a UI designed for a less constrained device.

∗The work is funded by the Klaus Tschira Foundation and the German Federal Ministry of Education and Research under the grant 01 ISC 27B for the project DynAMITE.
Copyright is held by the author/owner. IUI'06, January 29–February 1, 2006, Sydney, Australia. ACM 1-59593-287-9/06/0001.

Figure 1: Architecture of our framework

3. OUR SOLUTION

Instead of starting from formal models, we begin with an existing UI designed for devices with the largest screen size. Our main ideas are: (1) Information captured in the task, data and presentation models is visually embedded in the final UI; hence, it can to some extent be extracted from it by the system. For instance, by counting the number of buttons in a button group, the system gets an idea of the number of possible values. (2) Additional annotations on the UI, performed by the designer, provide semantics or design rationale.
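The first of these ideas, extracting presentation-model information from the final UI, can be illustrated with a small sketch. The class and method names below are purely illustrative and not part of our framework; the sketch only shows that a Swing ButtonGroup already encodes the number of possible values of a choice.

```java
import javax.swing.ButtonGroup;
import javax.swing.JRadioButton;

// Hypothetical sketch: inferring the number of possible values of a
// choice from the number of buttons in its ButtonGroup.
class PresentationExtractor {
    /** Returns the number of alternative values the group encodes. */
    static int possibleValues(ButtonGroup group) {
        return group.getButtonCount();
    }

    public static void main(String[] args) {
        ButtonGroup rooms = new ButtonGroup();
        rooms.add(new JRadioButton("1 room"));
        rooms.add(new JRadioButton("2 rooms"));
        rooms.add(new JRadioButton("3 rooms"));
        System.out.println(possibleValues(rooms)); // prints 3
    }
}
```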
Figure 1 shows the architecture of our framework. Starting from a UI designed for the source device with the largest screen size, a UI developer graphically inserts annotations. Taking the annotated UI and the constraints of the target platform as inputs, the Adaptation Engine exploits different strategies to create customized UIs for the target device.

Figure 2: A screenshot of our graphical annotation environment
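The Adaptation Engine can be pictured as a pipeline of pluggable strategies. The following sketch is an assumption about the shape of such an engine; the paper only states that the engine "exploits different strategies" and that developers can integrate customized ones, so all type and method names here are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical data carriers; the real framework works on annotated Swing UIs.
record AnnotatedUI(String description) {}
record TargetConstraints(int screenWidth, int screenHeight) {}

// A strategy decides whether it applies to a target platform and, if so,
// rewrites the annotated UI.
interface AdaptationStrategy {
    boolean isApplicable(TargetConstraints c);
    AnnotatedUI apply(AnnotatedUI ui, TargetConstraints c);
}

// The engine runs every applicable strategy in registration order.
class AdaptationEngine {
    private final List<AdaptationStrategy> strategies = new ArrayList<>();

    void register(AdaptationStrategy s) { strategies.add(s); }

    AnnotatedUI adapt(AnnotatedUI ui, TargetConstraints c) {
        for (AdaptationStrategy s : strategies) {
            if (s.isApplicable(c)) ui = s.apply(ui, c);
        }
        return ui;
    }
}
```

Registering a customized strategy via `register` is how the extensibility mentioned in the abstract could be realized under this design.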
3.1 Annotations
We use the annotations SPanel, SOrder and SGroup. Each SPanel encompasses a subarea of the UI and groups a set of logically related interaction objects. Each SPanel is associated with several properties, such as its layout and its importance, which indicates whether it is essential or optional for a particular platform such as a PC or a PDA. Figure 2 is a screenshot of our graphical annotation environment showing the GUI of a hotel reservation application. The SPanel is integrated as a new Swing component into the interface builder. The components of the GUI are grouped into seven SPanels.

A task model decomposes the user task into subtasks, which are mostly arranged in a tree-like structure. With respect to a task model, each SPanel can be considered a leaf task or a task at an arbitrarily higher level, depending on its granularity. An SPanel is in fact a graphical presentation of the objects manipulated by the associated task. For instance, the SPanels in figure 2 represent the objects manipulated by the tasks get location, get travel date, get no. of rooms and persons, and so on.

SPanels are not isolated but logically connected through the relationships between their associated tasks. By considering how the components are arranged in an existing UI, one can gain insight into these relationships. For example, if SPanel_a is placed to the left of or above SPanel_b, one might guess that the associated task Task_a should be executed before task Task_b. This ordering information is needed when the GUI is split into several pages. For instance, the SPanel start search hotel should appear on the last page. If the relationship cannot easily be derived from the layout, developers may use the annotations SOrder and SGroup to specify it explicitly. SOrder defines the temporal ordering and hence the layout sequence of the components. SGroup groups together two or more SPanels, either because they depend on each other or simply for aesthetic reasons. The SPanels of a group should appear on the same page whenever possible. With respect to a presentation model in model-based systems, SOrder and SGroup relate to both the layout of IOs within an encompassing container and their distribution into several containers.
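The three annotations above can be sketched as plain objects. The property names (importance, ordering, grouping) follow the text; the class shapes and the example panel names are assumptions for illustration, not the framework's actual API.

```java
import java.util.Arrays;
import java.util.List;

// Importance of an SPanel for a particular platform.
enum Importance { ESSENTIAL, OPTIONAL }

// An SPanel groups logically related interaction objects and carries
// properties such as its importance and the task it presents.
class SPanel {
    final String task;
    final Importance importance;
    SPanel(String task, Importance importance) {
        this.task = task;
        this.importance = importance;
    }
}

// SOrder fixes the temporal ordering, and hence the layout sequence,
// of SPanels when the layout alone is ambiguous.
class SOrder {
    final List<SPanel> sequence;
    SOrder(SPanel... panels) { this.sequence = Arrays.asList(panels); }
}

// SGroup keeps interdependent SPanels on the same page whenever possible.
class SGroup {
    final List<SPanel> members;
    SGroup(SPanel... panels) { this.members = Arrays.asList(panels); }
}
```

Mirroring the hotel reservation example, a developer might write `new SOrder(getLocation, getTravelDate, startSearchHotel)` so that the search panel always ends up on the last page.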
4. TRANSFORMATION
Although we have implemented several strategies, this paper focuses on the component transformation strategy, which is the most challenging one. In general, a transformation strategy considers (1) which components should be transformed, (2) which transformation rule should be applied, and eventually (3) how to technically perform the transformation. Here we focus on how to transform a source UI into an alternative one that consumes less space. Figure 3 shows an example.
Figure 3: An example of component transformation

The transformation process should change only the appearance of the UI while preserving its behavior. The same input must be captured and displayed, and the same actions must be executed, despite the differences in both the input methods and the supported events of the source and the transformed components. We achieve this by keeping the source UI active in the background and reusing its code for event handling. The events performed on the transformed components are converted into events on the source components and forwarded to them. Depending on the event, the source components are used to store state (i.e., the input captured from the user) or to trigger pre-defined actions.

As the transformation is unique to each mapping between a source and a transformed UI, a dedicated transformator, the entity performing the transformation, is needed for each mapping. Each transformator uses a Transformation Lookup Table (TLB) to keep the components of the source UI and the mapping between the source and the target components (see figure 4). Each TLB consists of two columns: the left column stores the source components, while the right one stores the associated transformed components. Since the aim of a transformation is to reduce the size of the UI, a source component need not necessarily be replaced by an independent component; instead, it may be mapped to an element of the transformed component. For example, the label label_name is replaced by the list element represented by the String "Name" in the TLB. Moreover, multiple source components (of the same class) might be transformed into a single component of the target UI. In our example, five textfields are replaced by a single textfield. Hence, one transformed component may be related to multiple source components; in other words, the mapping between the source and the target components is an (n : 1) mapping with n >= 1.

Figure 4: Transformation and Dependency Tables

Whenever an event occurs on a transformed component or one of its elements, the transformator uses the TLB to find the related source component in order to pass it the event. In our example, whenever the mouse moves away from the focused textfield (a Swing FocusLost event), its content should be stored in the mapped source textfield. But which source textfield is meant? To find the related original component in the source UI in the case of an (n : 1) mapping, additional contextual information is needed. For instance, the selected element ("Name" or "Surname") of the drop-down list tells whether the textfield textfield_name or textfield_surname in the source UI is currently related to the transformed textfield. There is thus a dependency relationship between the labels and the textfields, which we record in a Dependency Table (DT, right part of figure 4) over the components of the source UI. Putting it all together, the transformator of our example works as follows: whenever the focus on the transformed textfield is lost, it examines the selected list element, say "Name". It finds the related label label_name through the TLB (step 1), which points to the textfield textfield_name in the DT (step 2). The transformator then updates the state of textfield_name with the content of the transformed textfield.

There are many ways to transform a source UI into another one, and only the UI developers know the best way for their application. Thus they should be allowed to use customized transformators. In our experience, building transformators becomes quite systematic and straightforward with a TLB and a DT. The building process consists of (1) constructing a TLB and possibly a DT, (2) replacing some components of the source UI by transformed components, and (3) forwarding events as described above.
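The two-step lookup described above can be sketched as follows. To keep the sketch self-contained, the tables map component names rather than Swing component instances, and the FocusLost handling is reduced to a plain method call; the class and field names are assumptions, not the framework's actual code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the (n:1) Transformation Lookup Table (TLB)
// and the Dependency Table (DT) from figure 4.
class Transformator {
    // TLB: source component -> element of the transformed component
    final Map<String, String> tlb = new HashMap<>();
    // DT: source label -> dependent source textfield
    final Map<String, String> dt = new HashMap<>();
    // State of the source UI, kept alive in the background.
    final Map<String, String> sourceState = new HashMap<>();

    /** Invoked when focus leaves the transformed textfield. */
    void onFocusLost(String selectedListElement, String content) {
        // step 1: the TLB maps the selected element back to the source label
        String label = lookupSource(selectedListElement);
        // step 2: the DT maps that label to its dependent source textfield
        String sourceField = dt.get(label);
        // forward the input to the source UI's state
        sourceState.put(sourceField, content);
    }

    private String lookupSource(String element) {
        for (Map.Entry<String, String> e : tlb.entrySet()) {
            if (e.getValue().equals(element)) return e.getKey();
        }
        throw new IllegalStateException("no TLB mapping for " + element);
    }
}
```

With `tlb = {label_name -> "Name"}` and `dt = {label_name -> textfield_name}`, a FocusLost with "Name" selected routes the entered content into textfield_name, exactly the two-step walk described in the text.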
5. CONCLUSION

We presented our framework for creating multiplatform UIs by annotation and adaptation. We related our concepts to model-based UI development in order to validate them. However, our framework should not be viewed as "graphical model-based": we do not have explicit models, nor is that our primary intention. The strength of our framework is its unique combination of reusing existing UIs, graphical annotation and an adaptation-based approach. Reusing an existing UI also means reusing the decisions made by the designer regarding, for example, the selection of IOs or the layout. Additional graphical annotations on the UI make these decisions more explicit and reliable. Moreover, they raise the level of abstraction in the manner of reverse engineering, without explicit mention of models. Having reached this higher level of abstraction, we perform adaptation to create multiple UIs instead of using forward engineering. As a result, our system is less complex than model-based systems, and the graphical support makes it more intuitive and easier to use. With the exception of [6], we are not aware of previous work that describes how to perform component transformation and, in particular, the associated event transformation. Our TLB and DT differ from the hash table used in [6] by explicitly keeping a mapping between the source and the transformed components, which allows transformators to be built more systematically. We have implemented a prototype of our framework and will evaluate it with regard to both the quality of the results and its acceptance by UI developers.
6. REFERENCES
[1] M. Florins and J. Vanderdonckt. Graceful Degradation of User Interfaces as a Design Method for Multiplatform Systems. In Proc. 9th International Conference on Intelligent User Interfaces (IUI 2004), 2004.
[2] A. Puerta. A Model-Based Interface Development Environment. IEEE Software, 14(4):40–47, 1997.
[3] A. Puerta, E. Cheng, T. Ou, and J. Min. MOBILE: User-Centered Interface Building. In Proc. of the Conference on Human Factors in Computing Systems (CHI '99), 1999.
[4] J. Vanderdonckt and P. Berquin. Towards a Very Large Model-Based Approach for User Interface Development. In Proc. of the First International Workshop on User Interfaces to Data Intensive Systems (UIDIS '99), pages 76–85, 1999.
[5] C. Wong, H. Chu, and M. Katagiri. A Single-Authoring Technique for Building Device-Independent Presentations. In Proc. of the W3C Workshop on Device Independent Authoring Techniques, 2002. Available as http://www.w3.org/2002/07/DIAT/posn/docomo.pdf.
[6] C. Wong, H. Chu, and M. Katagiri. GUI Migration across Heterogeneous Java Profiles. In Proc. of ACM SIGCHI-NZ '02, 2002.
[7] Workshop on Making Model-Based User Interface Design Practical: Usable and Open Methods and Tools. Held with the International Conference on Intelligent User Interfaces (IUI 2004). http://www.care-t.com/events/mbui-workshop2004/.