LABORATOIRE INFORMATIQUE, SIGNAUX ET SYSTÈMES DE SOPHIA ANTIPOLIS, UMR 6070

User Interface: a Technical Component in Component-Based Models in Response to Human Computer Interaction Adaptation
Anne-Marie Dery-Pinna, Jérémy Fierstone, Michel Riveill, Emmanuel Picard
Projet RAINBOW
Research report ISRN I3S/RR–2004-06–FR
February 2004

Laboratoire I3S: Les Algorithmes / Euclide B – 2000 route des Lucioles – B.P. 121 – 06903 Sophia-Antipolis Cedex, France – Tel. (33) 492 942 701 – Fax (33) 492 942 898 – http://www.i3s.unice.fr/I3S/FR/

Résumé :

Mots clés : IHM, programmation par composants, adaptabilité

Abstract: In this paper we present our component architecture, which considers the HCI as a technical service of a business component, just like security or persistence. The dialog between UI and business components is managed by an interaction/coordination service that allows the reconfiguration of components without modifying them. Such a service has proved its value in software engineering, and we will show that it is well suited to handling the adaptation of HCI.

Keywords: HCI, Component Programming, Adaptability

User Interface: a Technical Component in Component-Based Models in Response to Human Computer Interaction Adaptation

Anne-Marie Dery-Pinna, Jérémy Fierstone, Michel Riveill, Emmanuel Picard
Rainbow project, I3S
930, route des Colles
06903 Sophia-Antipolis cedex, France
{pinna,fierston,riveill,picard}@essi.fr
http://rainbow.essi.fr

Abstract. In this paper we present our component architecture, which considers the HCI as a technical service of a business component, just like security or persistence. The dialog between UI and business components is managed by an interaction/coordination service that allows the reconfiguration of components without modifying them. Such a service has proved its value in software engineering, and we will show that it is well suited to handling the adaptation of HCI.

1 Introduction

This paper deals with Human Computer Interaction (HCI) adaptation. Its motivation comes from the new requirements of ubiquitous computing, where the desktop PC is progressively being replaced or extended by new devices such as graphic tablets, PDAs and mobile phones, whose User Interfaces (UI) cannot be identical. Current practice in HCI engineering still consists in developing one interface per target platform. This case-by-case method entails extremely high development and maintenance costs. The first problem is therefore to adapt the HCI to devices and/or to users. Another requirement, specific to mobile applications, is to be dynamically adaptable to mobile users' requirements (for example, to be able to dynamically load/unload new functional tasks). In the field of software engineering, components and reconfiguration mechanisms exist, but these results have never been applied to HCI. The second problem is therefore to adapt the HCI to business constraints.

The originality of our solution, based on a component-based architecture1, is to consider the HCI as a technical service of a business component (which contains only the application's logic) just like security or persistence. We can thus manage the dialog between UI and business components with an interaction/coordination service that allows the reconfiguration of components without modifying their source code. Such a service has proved its value in software engineering and seems well suited to handling HCI adaptation.

1 This work is financially supported by the "Réseau National en Technologie du Logiciel".

In the second section, we briefly present two examples of applications that illustrate the two kinds of targeted adaptation. The third section describes our solution to the problem of adaptation to devices or users. The fourth gives our proposal for managing adaptation to business constraints. Before concluding with our perspectives, we summarize, in sections 5 and 6, what the programmer should do in order to develop an adaptable application.

2 From Examples of Application to a Component-Based Solution

To illustrate the modelling of a UI for a device or a user, we chose the case of a multimedia game. We present in this paper all the steps needed to create a small version of the famous Naval Battle game. This version shall be usable on a workstation or a PDA, by visually impaired people or not. In such a case, the development of the UI must provide the user with a homogeneous and coherent HCI, whatever the device and whoever the user. Much research on multi-platform UIs covers this problem. Solutions are often built around specific description languages [14] or, nowadays, markup languages [7,8] that can be translated into concrete interface languages. Most of them are too systematic and do not take the plasticity of the UI into account, except recent works such as [9,12]. Nevertheless, no solution is well designed for component specification.

The need for UIs adaptable to business components is inherent to Application Service Provider constraints. It is essential to fill the gap between the component-assembly construction of the application's business side and the classical single-component construction of the application's UI, which makes HCI development and maintenance difficult. An illustrative scenario is the case of a management firm building customized applications for each client. Applications are built from an existing library of components such as Customer, Salesperson, Bill, and Stock. At design time, an application can be built from two components statically assembled together (Customer and Bill, in order to invoice the firm's customers). But sometimes the assembly should be dynamic (at runtime). In order to facilitate interactivity between the customer and the salesperson, the salesperson must be able to dynamically load a Stock business component on his PDA and confirm whether an article is still available. The new HCI has to integrate the Stock UI components.
In HCI research, no work directly addresses the problem of merging/unmerging components at the HCI level. Although classical component-based models (EJB, CCM, .Net) generally propose a static component assembly at the integration step, only research works [2,3,4] allow dynamically adding or removing a new component. The originality of our component architecture is to consider the HCI as an orthogonal property of a business component. So we split the UI into standalone components. Our proposition is based on four main parts: (a) the interaction service, which prepares components for interacting together, and its associated interaction server, which stores interaction schemas and binds/unbinds components with interactions (cf. [1] for details on this part); (b) the UI components associated with their business components (cf. 3.1); (c) the merging service for UI components (cf. 3.2); (d) the context-sensitive service, which dynamically adapts a UI component and its HCI according to the new context (cf. 5).

3 UI Components Adaptable to the Device or to the User

Our proposition consists in defining a UI component in two parts described by a UI specification: the abstract UI and one or several concrete UIs (cf. Fig. 1). In our solution (cf. 3.1), the UI specification is described with a small platform-independent language, SUNML (Simple Unified Natural Markup Language). The abstract UI is the reification of the SUNML objects (an abstract tree). Renderers project the abstract representation into concrete UIs in specific platform languages (cf. 3.2).

3.1 SUNML

SUNML has the particularity of being simple and suited to structural assembly. The few available abstract widgets of this language (Interface, Dialog, List, Menu, Element and Link) can be assembled in order to represent a larger range of abstract widgets (e.g. a matrix). Elements and links are leaf widgets. Elements contain typed data (string, double, integer, ...); this data corresponds to the UI information given to or retrieved from the user. Links represent user interactions and are bound to specific actions (e.g. a button in a GUI). Dialogs and lists are widget containers. A list contains a set of widgets of the same type (e.g. a list of elements, a list of lists of links). In contrast, a dialog can contain a set of widgets of different types (e.g. the main dialog can contain links and elements). A menu is a special list that contains only submenus and/or links; this widget was recently added to the grammar to facilitate the management of global and local menus. The grammar of this simple language is given in Appendix B.

SUNML is thus quite similar to other multi-platform markup languages: these languages all represent a UI structural description. SUNML has some tags in common with XIML [8] (element, dialog, ...), but is less mature in terms of device independence and UI abstraction. Our main focus was to provide a simple language with which to experiment with UI structural assembly.
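As an aside, the containment rules above can be pictured with a small object model. The following Python sketch is purely illustrative: the widget names come from the paper, but every attribute and constructor is an assumption of ours, not the actual SUNML reification.

```python
# Illustrative object model of the SUNML abstract widget tree.
# Widget names (Dialog, List, Element, Link) follow the paper;
# attributes and constructors are assumptions for illustration only.

class Widget:
    def __init__(self, name):
        self.name = name

class Element(Widget):
    """Leaf widget carrying typed data given to / retrieved from the user."""
    def __init__(self, name, value):
        super().__init__(name)
        self.value = value

class Link(Widget):
    """Leaf widget representing a user interaction bound to an action."""
    def __init__(self, name, action):
        super().__init__(name)
        self.action = action

class Dialog(Widget):
    """Container that may mix widgets of different types."""
    def __init__(self, name, children=()):
        super().__init__(name)
        self.children = list(children)

class List(Widget):
    """Container whose widgets must all be of the same type."""
    def __init__(self, name, children=()):
        super().__init__(name)
        assert len({type(c) for c in children}) <= 1, "a List is homogeneous"
        self.children = list(children)

# The main dialog of a tiny abstract UI, mixing an element and a link.
main = Dialog("main", [Element("name", "Alice"), Link("ok", lambda: "validated")])
print([c.name for c in main.children])  # ['name', 'ok']
```

A renderer would then walk such a tree to produce a concrete UI, e.g. in Swing or as a voice interface.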
SUNML is designed to manage structural merging and to facilitate the reuse of existing SUNML specifications. In particular, the import tag allows the reuse of an existing SUNML specification, and the select tag allows selecting only the widgets to import into the new UI component. Importation is a static way of reusing another SUNML description.

In addition, UI component merging can be a more complex operation. A structural merging/unmerging service therefore accepts merging commands in a specific UI merging language, WML (Widget Merging Language), based on specific rules. The add operator includes a widget sub-tree into another one. The select operator extracts a sub-tree of the UI. The union operator is used when the merged UI should correspond to the union of each set of widgets. The intersect operator is used when the merged UI should correspond to the intersection of each set of widgets, i.e. in order to keep only the common widgets. The substitution operator factorizes a set of widgets into a single one. WML is designed to be easily extended with user-defined algorithms. The resulting assembly becomes a new UI component, which can be assembled or rendered like any other UI component. Fig. 1 represents the result of the union merging rule:

Fig.1: Union merging rule.

In this example, an Address component is concatenated with the Customer component in order to extend its contact information. A new Customer UI component is created by this merging rule. The union operator avoids redundant information, such as the name field. This rule is based on the equals operator, which compares the semantic attribute of each field. The developer should take care with this semantic attribute, which avoids any syntactical confusion: e.g. UIs written by different developers may give the same name to two fields that are semantically different or, conversely, different names to two fields that are identical. This structural merging with the union operator (the example of Fig. 1) is expressed in a SUNML file; when no rule is specified, the select tag uses the add merging rule by default.
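The effect of the union rule on the Fig. 1 example can be mimicked in a few lines. The sketch below is only an illustration of the described behaviour: the dictionary representation of fields, the semantic key, and the union function are all our own assumptions, not the WML implementation.

```python
# Sketch of the WML union rule: merge two flat widget sets, using the
# 'semantic' attribute (not the displayed label) to detect duplicates,
# as the equals operator described above does.

def union(ui_a, ui_b):
    """Return the union of two widget lists; widgets whose semantic
    attributes are equal appear only once."""
    merged = list(ui_a)
    known = {w["semantic"] for w in ui_a}
    for w in ui_b:
        if w["semantic"] not in known:
            merged.append(w)
            known.add(w["semantic"])
    return merged

customer = [{"semantic": "person-name", "label": "name"},
            {"semantic": "customer-id", "label": "id"}]
address  = [{"semantic": "person-name", "label": "name"},
            {"semantic": "postal-address", "label": "address"}]

# The redundant 'name' field is kept only once, as in Fig. 1.
print([w["label"] for w in union(customer, address)])  # ['name', 'id', 'address']
```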

Fig.6: Add merging with a menu

To give behaviour to the Naval Battle interface, the developer registers an interaction schema on the Interaction Server (registerPattern(String islPattern)). To use the interaction service, there is nothing to change in the code: one just runs a post-compiler on all the classes that must be interactive, before the RMI post-compiler. There are three participants: the business component, the UI component, and a controller component that is used by the interaction instance:

    interaction playerInteraction (
        game.naval.Player player,
        comp.uiservice.HMIComponent hmi,
        game.naval.PlayerInteraction controler) {
      player.chooseCase(int row, int column) ->
          player._call; controler.chooseCaseEffect(),
      hmi.manageEvent(comp.uiservice.HMIEvent evt) ->
          controler.manageEvent(evt)
    }

There are generally two kinds of rules: business to UI and UI to business. Business to UI manages the UI modification (rule 1). UI to business (rule 2) deals with abstract events, an abstraction of all concrete events, which have to be dispatched to call business methods. For each click on a case, an event is launched. The manageEvent method redirects it to call chooseCase with the right arguments. The chooseCase method then updates the UI.

If we want to dynamically add new ways of selecting a case, e.g. by giving its row and column explicitly, without stopping the application, we have to:
1. Define the SUNML for this particular component and then just use a simple WML rule: nb + cmp (nb stands for the Naval Battle UI and cmp for the new component; the + operator represents the add merging rule).
2. Render the SUNML to create the concrete UI component; this operation calls the merging/unmerging service.
3. Add behaviour to this component using an existing interaction schema that binds the business component and the UI.

We also decided to dedicate this game to blind people, so the SUNML is rendered using the voice renderer (the voice renderer has a first step translating the UI into a textual UI, as illustrated in Fig. 7).
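The two kinds of rules can be mimicked with direct method calls. In the real system the binding is performed by the interaction service, not hard-coded; all class and method names below are hypothetical Python counterparts of the schema above, given only to show the dispatching.

```python
# Sketch of the two interaction rules:
#   rule 2 (UI -> business): an abstract event is redirected to chooseCase
#   rule 1 (business -> UI): the business method's effect updates the UI
# All names are hypothetical; the real binding goes through the
# interaction server, not through these direct calls.

class PlayerUI:
    def __init__(self):
        self.marked = []

    def manage_event(self, event, player):
        # UI -> business: redirect the abstract event, with the right arguments
        player.choose_case(event["row"], event["column"], self)

    def show_case(self, row, column):
        # business -> UI: reflect the business effect on the interface
        self.marked.append((row, column))

class Player:
    def choose_case(self, row, column, ui):
        # business logic would go here; it then updates the UI
        ui.show_case(row, column)

ui, player = PlayerUI(), Player()
ui.manage_event({"row": 2, "column": 5}, player)  # a click on case (2, 5)
print(ui.marked)  # [(2, 5)]
```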

Fig.7: The Swing UI component and the sound UI component

Once the voice UI component is created, the interaction schema is used to bind this new component to the business component; we then obtain two UI components for the same business component. Modifying one of them modifies the other. If the use context changes and the user dynamically creates the voice component, the program can automatically unbind the graphical one.
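The resulting behaviour, several UI components kept consistent through one business component, amounts to a classic observer scheme. The following is a minimal sketch under assumed names; in the real system the synchronisation goes through the interaction server rather than through direct notification.

```python
# Two UI components (e.g. Swing and voice) bound to one business component.
# Modifying the business state notifies every bound UI; unbinding one
# (e.g. on a context change) leaves the other untouched.
# All names are assumptions for illustration.

class BusinessComponent:
    def __init__(self):
        self.uis = []
        self.state = None

    def bind(self, ui):
        self.uis.append(ui)

    def unbind(self, ui):
        self.uis.remove(ui)

    def update(self, value):
        self.state = value
        for ui in self.uis:          # every bound UI reflects the change
            ui.render(value)

class UIComponent:
    def __init__(self, kind):
        self.kind, self.shown = kind, None

    def render(self, value):
        self.shown = value

game = BusinessComponent()
swing, voice = UIComponent("swing"), UIComponent("voice")
game.bind(swing); game.bind(voice)
game.update("hit in B4")             # both UIs are updated
game.unbind(swing)                   # context change: drop the graphical UI
game.update("miss in C7")            # only the voice UI follows
print(swing.shown, voice.shown)      # hit in B4 miss in C7
```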

7 Conclusion

In this paper, we have briefly presented the specificity of our component-based architecture. One of the main advantages of SUNML is that it provides an HCI component-based construction which allows the reuse of existing HCI components. SUNML is simple, but this could be considered a disadvantage because of the limited usability of the generated UIs. Moreover, the renderers do not yet integrate any checking of interface plasticity. The main difficulty will be to solve the plasticity problems of component merging.

Our objective is to propose a programming environment integrating HCI adaptation. The first implementation of this environment already contains a SUNML editor and parser, Swing and voice renderers, a merging service, the Noah server and the Noah service. The programming environment is under development. We will improve its facilities for changing one of the basic components (e.g. the SUNML parser) and for adding new components (e.g. a renderer). We are therefore optimistic about the integration of HCI research results such as user interface plasticity [10] and context specification tools [11].

In the future, we aim to deal with distributed HCIs. In this case, the abstract part could be located on the server side and the concrete parts on several devices. For this purpose, we intend to use the Noah server, which allows the components to run on different devices at the same time. Finally, the HCI could be made migratable, thanks to the reification of the abstract user interface, which can maintain the HCI data.

References
1. A-M. Pinna-Dery, M. Blay-Fornarino, B. Arcier, L. Mule, and S. Moisan. Distributed access knowledge-based system: Reified interaction service for trace and control. In Proc. of DOA 2001, Rome, Italy, pages 76-84, September 17-20, 2001.
2. M.T. Segarra, F. André. A Framework for Dynamic Adaptation in Wireless Environments. In Proc. of TOOLS Europe 2000, Mont St. Michel, St. Malo, France, June 2000.
3. E. Bruneton, T. Coupaye and J-B. Stefani. Recursive and Dynamic Software Composition with Sharing. In Proc. of the 7th ECOOP International Workshop on Component-Oriented Programming (WCOP'02), Malaga, Spain, June 10-14, 2002.
4. R. Pawlak, L. Seinturier, L. Duchien, G. Florin. JAC: A Flexible and Efficient Solution for Aspect-Oriented Programming in Java. In Proc. of Reflection 2001, LNCS 2192.
5. O. Nano, M. Blay-Fornarino, A-M. Dery, and M. Riveill. An abstract model for integrating and composing services in component platforms. In Seventh International Workshop on Component-Oriented Programming, ECOOP 2002, Malaga, June 10, 2002.
6. L. Bass, R. Little, R. Pellegrino, S. Reed, R. Seacord, S. Sheppard. The Arch Model: Seeheim Revisited (version 1.0). The UIMS Tool Developers Workshop (April 1991), in ACM SIGCHI Bulletin, Vol. 24, No. 1, January 1992.
7. C. Phanouriou. UIML: a Device-Independent User Interface Markup Language. PhD thesis in Software Engineering, Virginia Polytechnic Institute and State University, Blacksburg, Virginia, September 2000.
8. A. Puerta, J. Eisenstein. XIML: a common representation for interaction data. In Proc. of IUI'02, pp. 214-215 (see also http://www.ximl.org).
9. G. Calvary, J. Coutaz, D. Thevenin, Q. Limbourg, N. Souchon, L. Bouillon, J. Vanderdonckt. Plasticity of User Interfaces: A Revised Reference Framework. In Proc. of TAMODIA 2002, Bucharest, July 18-19, 2002.
10. G. Calvary, J. Coutaz, D. Thevenin. A Unifying Reference Framework for the Development of Plastic User Interfaces. IFIP WG2.7 (13.2) Working Conference, EHCI'01, Toronto, May 2001, Springer Verlag, LNCS 2254, M. Reed Little, L. Nigay, Eds., pp. 173-192.
11. A.K. Dey, D. Salber, G.D. Abowd. A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications. Human-Computer Interaction (HCI) Journal, Vol. 16(2-4), 2001, pp. 97-166.
12. F. Paternò, C. Santoro. One Model, Many Interfaces. In Proc. of CADUI 2002, Valenciennes, France, May 2002.
13. S. Lavirotte, J-Y. Tigli. Mobility in e-learning context: Distributing Information Depending on Geographical Localization and User Profile. In Proc. of ICDE 2003, IEEE Computer Society, Bangalore, India, March 5-8, 2003.
14. D. Grolaux, P. Van Roy and J. Vanderdonckt. QTk: An Integrated Model-Based Approach to Designing Executable User Interfaces. In Proc. of Design, Specification and Verification of Interactive Systems (DSVIS'01), Glasgow, June 2001.
15. D. Thevenin. Adaptation en Interaction Homme-Machine : Le cas de la Plasticité. Ph.D. thesis, Université Joseph Fourier, Grenoble, December 21, 2001.
16. J. Coutaz, G. Rey. Foundations for a theory of contextors. In Proc. of CADUI'02, ACM, 2002, pp. 283-302.

Appendix A: IDE screenshots

There are several views corresponding to the different use cases: the edition view, the manual fusion view, the WML fusion view and the interaction view.

1. The "Edition" view

The developer can use this view to build the user interface description. This view includes a SUNML editor and a set of projection menus in which the developer can find all the available renderers.

2. The "Manual Fusion" view

The developer can use this view to build new interfaces by assembling existing ones (built by him/her, by other developers, or already available with the application). The result is one or several new interfaces. This view includes a set of merging options (union, intersection, ...).

3. The "WML Fusion" view

The developer can also use this view to build new interfaces automatically by applying a WML pattern, passing the existing user interfaces to merge as parameters. The result is one or several new interfaces. This view includes a WML merging tool that manages all the registered WML patterns and can apply a pattern to given abstract trees.

4. The "Interaction" view

In order to see all the interactions between software components (business, controllers, and UI), we use an interaction service/server called Noah, designed and maintained by the Rainbow team. Noah has an IDE, NoahEditor, which is useful for the developer and the assembler to see all the instantiated interactions.

Appendix B: Grammar of SUNML