EUROCON 2005
Belgrade, Serbia & Montenegro, November 22-24, 2005

Model-Driven Development of User Interfaces: Promises and Challenges

Zeljko Obrenovic, Dusan Starcevic

Zeljko Obrenovic is with the Center for Command Information Systems, Serbia & Montenegro (phone: 381-11-461221; fax: 381-11-3248681; e-mail: obren@fon.bg.ac.yu, www.beotel.yu/obren). Dusan Starcevic is with the Faculty of Organizational Sciences, University of Belgrade, Serbia & Montenegro (phone: 381-11-8950827; fax: 381-11-461221; e-mail: [email protected]).

Abstract: In this paper we discuss the promises and challenges of model-driven development (MDD) of user interfaces. MDD's defining characteristic is that the development's primary focus and products are models rather than computer programs. This is very important for HCI, as user interface development is a complex, multidisciplinary activity, while implementation platforms do not directly support many important high-level HCI concepts. Joining MDD and HCI can bring many advantages to both fields. For HCI, it would enable more efficient development of user interfaces, usage of existing standards and tools, and wider acceptance of HCI results. Conversely, integrating human-computer interaction accomplishments into the MDD approach can lead toward more complete and more accessible solutions. Still, there are many open problems. One of the main challenges is the absence of a formal description of relevant HCI concepts, which in MDD should serve as the metamodel.

Keywords: Human-computer interaction, model-driven development, user interfaces.

I. INTRODUCTION

Trends and industry standards in the software engineering community open new possibilities for improving the analysis, design, and implementation of user interfaces. Many human-computer interaction (HCI) researchers have proclaimed the inadequacy of existing widespread user-interface paradigms, which have not changed fundamentally for nearly two decades [1]. New devices and usage scenarios require radically different interaction styles and, consequently, better development support. Existing user interface development environments cannot support many of the emerging requirements. In this paper we discuss the promises and challenges of model-driven development of user interfaces. Model-driven software development [2] is still not widespread, but its potential is large, and it may represent the first true generational shift in basic programming technology since the introduction of compilers [3]. MDD's defining characteristic is that the development's primary focus and products are models rather than computer programs. The models are expressed using concepts that are much less

bound to the underlying implementation technology and are much closer to the problem domain than most popular programming languages. This makes the models easier to specify, understand, and maintain, and in some cases it might even be possible for domain experts, rather than computing technology specialists, to produce systems. Models are also less sensitive to the chosen computing technology and to evolutionary changes in that technology. These aspects are very important for HCI, as user interface development is a rather complex, multidisciplinary activity, while implementation platforms do not directly support many important high-level HCI concepts.

II. EXISTING "MODEL-DRIVEN" APPROACHES IN HCI

Abstract modeling of user interfaces and separating the user interface from the application have been topics of investigation in the HCI field for many years. For example, the investigation of automatic techniques for generating interfaces attracted much work in the past. The main idea was to allow interfaces to be specified at a very high level, while the details of the implementation would be provided by the system. Motivations for this included the hope that programmers without user interface design experience could implement only the functionality and rely on these systems to create high-quality user interfaces. Early examples of these model-based tools include Cousin and HP/Apollo's Open-Dialogue, where the system generated the dialogs to display and request the data; these evolved into model-based systems such as Mike, Jade, UIDE, ITS, and Humanoid [4]. Another promising approach was the concept of user interface management systems from the early 1980s. The term "user interface management system" was coined as an analogy to database management systems [5]. The main idea was to abstract the details of input and output devices, providing automatically generated implementations of interfaces, and generally allowing interfaces to be specified at a higher level of abstraction. In the last few years there has also been much work in the area of abstract user interface representations, mostly based on XML, such as the User Interface Markup Language (UIML), the Extensible Interface Markup Language (XIML), W3C XForms, and the Alternate Interface Access Protocol (AIAP) [6]. Many recent initiatives, such as the IFIP WG 1 initiative, have proposed various means of integrating HCI and software engineering [7]. However, although these and many similar approaches have been proposed, almost none of them succeeded in


achieving wider acceptance. Very often, the proposed solutions required the developer to learn a new programming language. Sometimes, the proposed solutions addressed problems that were not so important in the era of desktop PC dominance; the standardization of user interface elements on the desktop paradigm in the late 1980s made abstractions from input devices mostly unnecessary, and developers tended to use more specialized tools. A more elaborate overview of successful and unsuccessful development approaches in HCI can be found in Myers' paper [4].

III. WHAT CAN MDD BRING TO HCI?

As we have seen, approaches similar to MDD have been

proclaimed many times in the past. Still, they have had little or no fundamental impact in the end. Is there any reason to expect otherwise in this case? The situation has changed: computers are now used in a variety of situations, and new requirements for device and platform independence and for more flexible modeling of interaction are emerging. On the other hand, modeling technology has progressed: we now better understand how to model, while the necessary automation technologies have matured and industry-wide standards have emerged [3]. The techniques and tools for doing MDD successfully have now reached a degree of maturity where it is practical even in large-scale applications, and code generators can be seamlessly integrated into existing software production environments and processes. Speaking practically, the MDD approach could introduce the following advantages for HCI researchers and practitioners:

* Better theoretical background. The model-driven development community has invested much effort in defining and clarifying modeling and metamodeling terms. Clear and concise definitions are a very important aspect of understanding model-driven development.

* Standards. To support MDD, the Object Management Group (OMG) and others have developed various standards for modeling and model interchange, such as the Model-Driven Architecture (MDA), the Unified Modeling Language (UML), the Meta Object Facility (MOF), XML Metadata Interchange (XMI), and the Common Warehouse Metamodel (CWM). There are also various programming interfaces, such as the Java Community Process interface/platform standards, including JMI, JOLAP, and JDMAPI, as well as various other standards, such as World Wide Web Consortium (W3C) standards and Web services models. Standardization provides a significant impetus for further progress because it codifies best practices, enables and encourages reuse, and facilitates interworking between complementary tools. It also encourages specialization, which leads to more sophisticated and more potent tools.

* Tools. Many tools that support various elements of model-driven development have already appeared, many more are expected to appear soon, and some are even available as free open-source systems. The possibility of using existing tools is very important for HCI, as most previous HCI approaches involved the development of novel tools, requiring significant learning effort and meeting resistance among programmers who had to learn new tools and languages.

* Better acceptance among ordinary software engineers and programmers. By integrating HCI knowledge into the MDD approach, software developers would not need to change their development environment.

It is also important to mention that the concepts of models and metamodels are strongly related to the concept of an ontology as used in knowledge management communities [8]. By using models and ontology concepts, we inherit the possibility of using other knowledge management technologies, such as data mining, which can help us find interesting patterns in data.

IV. MODEL-DRIVEN ARCHITECTURE FOR HCI

The OMG proposes a four-layered approach to creating model-driven architectures (see www.omg.org/mda/), which can serve as a conceptual framework for the introduction of HCI knowledge into software development. By using MDA, we can rely on various existing standards and products. As shown in Table 1, the key feature of this architecture is the HCI metamodel. The role of the HCI metamodel is similar to that which the Common Warehouse Metamodel (CWM) plays in the integration of business data, where it provides a common metamodel of the data warehousing and business intelligence domain.

TABLE 1. MAPPING THE METAMODEL OF HCI TO OMG'S MDA LEVELS.

OMG MDA Level       Metamodeling Architecture
M3 Metametamodel    The Meta Object Facility (MOF)
M2 Metamodels       The Metamodel of HCI
M1 Models           Platform-Specific Models (XHTML, WML, SMIL, VRML, Java...)
M0 Objects, data    Content data (WML, XHTML, SMIL files...)

Due to its multidisciplinary nature, the HCI field requires knowledge from many domains, including computer science, medicine, psychology, cognitive science, and sociology. The HCI community has gained much knowledge from these domains, and although many things are still unknown, the main problem today is not the absence of knowledge but its efficient usage. Currently, most of these data are available as text, in different formats. To enable broader and more efficient use of HCI knowledge, it is important to formalize and unify the existing knowledge base. The main obstacle to efficient integration of HCI knowledge into software development processes is the absence of a unified formal model of HCI concepts that could abstract existing platform-specific models and introduce new concepts not present in the currently available platform-specific models. The HCI metamodel should provide a common and


standardized language for sharing and reusing knowledge about phenomena from the various domains relevant to the design of HCI solutions. The HCI metamodel should have two main roles: to abstract existing platform-specific UI metamodels, and to introduce new, higher-level HCI-related concepts. Speaking practically, the metamodel of HCI should achieve [8]:

* a set of precisely defined terms and structured definitions of HCI domain concepts instead of mere text-based information;

* high expressiveness, enabling us to efficiently describe each aspect of multimedia;

* coherence and interoperability of the resulting knowledge bases, using standard modeling and storage technologies;

* scalability of metamodels, providing us with means for definition at different levels of abstraction.

As there is a great semantic gap between existing platform metamodels and HCI concepts, the absence of a unified, widely accepted metamodel of HCI is currently a crucial obstacle to the creation of a broadly accepted model-driven architecture for user interface development. Still, instead of using one unified HCI metamodel, many applications have used simpler versions of the HCI metamodel, creating similar but more focused architectures. In the next section we present two such applications.
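To make the idea of structured definitions of HCI domain concepts more tangible before turning to the applications, the following sketch renders, in plain Java, one possible reading of the core notions used later in this paper: interfaces produce effects, and constraints restrict them. This is our illustration, not the paper's metamodel; all class and effect names are illustrative assumptions, and the real metamodel defines such concepts as UML elements rather than Java classes.

import java.util.EnumSet;
import java.util.Set;

// Hypothetical effect vocabulary; illustrative only.
enum Effect { VISUAL_TEXT, VISUAL_ICON, AUDIO_SPEECH, HAPTIC_VIBRATION }

// A user interface described by the set of effects it produces.
class InterfaceModel {
    final Set<Effect> producedEffects;
    InterfaceModel(Set<Effect> produced) { this.producedEffects = produced; }
}

// A constraint (user ability, device, or environment) described by the
// effects it restricts.
class InteractionConstraint {
    final Set<Effect> restrictedEffects;
    InteractionConstraint(Set<Effect> restricted) { this.restrictedEffects = restricted; }

    // The interface is appropriate if it uses no restricted effect.
    boolean permits(InterfaceModel ui) {
        return ui.producedEffects.stream().noneMatch(restrictedEffects::contains);
    }
}

public class EffectCheck {
    public static void main(String[] args) {
        InterfaceModel visualMenu =
                new InterfaceModel(EnumSet.of(Effect.VISUAL_TEXT, Effect.VISUAL_ICON));
        InteractionConstraint lowVision =
                new InteractionConstraint(EnumSet.of(Effect.VISUAL_TEXT, Effect.VISUAL_ICON));
        // Prints false: the purely visual menu is inappropriate for this
        // constraint, so the content should be repurposed.
        System.out.println(lowVision.permits(visualMenu));
    }
}

Such a check anticipates the comparison of produced and restricted effects described in Section V.B.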


V. APPLICATIONS

In this section we present some of our results in model-driven development of user interfaces. First, we discuss modeling and transformations of human-computer interaction at different levels of abstraction, and then adaptation and content repurposing. Both examples use a simplified version of the HCI metamodel, creating semantically richer frameworks for the description and transformation of user interface content.

A. Modeling human-computer interaction at different levels of abstraction

MDD can enable efficient modeling at different levels of abstraction, which can clearly benefit the development of interactive systems, as implementation platforms usually use primitives that are far removed from important HCI concepts such as modality or multimodal integration. In view of this, we proposed a unified approach for describing human-computer interaction with UML in terms of sensory, motor, perceptual, and cognitive effects [9]. In our framework we introduce two key concepts: interaction modalities and interaction constraints. User interfaces are described as multimodal systems that produce and exploit various effects, while constraints are described as limitations on the usage of these effects. The proposed model-driven framework does not define any specific interaction modality, such as speech, gesture, or graphics, nor any specific constraint, such as low vision, immobility, or particular environment conditions; instead, it defines a generic, unified approach for describing such concepts. The framework therefore focuses on the notions of an abstract modality and an abstract constraint, defining their common characteristics regardless of their specific manifestations. To define the basic concepts, we created a unified metamodel of multimodal interaction and constraints, which is a simplified version of the HCI metamodel described in the previous section. This metamodel represents an abstract, higher-level view of the various aspects of interaction for people with or without disabilities. With the proposed approach, developers can concentrate on more generic concepts, providing solutions for different levels of availability of specific effects. Limiting environment characteristics or the limited abilities of a user are viewed as a break or a decrease of throughput in interaction channels. This work extends our previous work in modeling multimodal human-computer interaction [9]. Our modeling framework allows for the creation of views of the same interaction modality at different levels of abstraction. For example, we can create an abstract menu model that presents a highly abstract view using high-level concepts, while visual or speech menu models introduce specialized, lower-level details. This specialization can go further, for example, by creating models that describe how visual or audio menus are implemented on a specific implementation platform, for example, using the Java Swing, Microsoft MFC, Microsoft SAPI, or JavaSpeech packages (Figure 1).
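Complementing Figure 1, the following sketch illustrates this layering in Java. It is our illustration rather than code from the framework: the names AbstractMenu, TextMenu, and SpeechMenu are assumptions, and the speech specialization stubs out the engine calls that a real JavaSpeech or Microsoft SAPI binding would make.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Scanner;

// Abstract, platform-independent menu: the application depends only on
// this high-level concept, not on any concrete modality.
interface AbstractMenu {
    void addItem(String id, String label);
    String selectItem(); // presents the menu and returns the chosen item id
}

// Visual specialization, sketched as a console menu; a Swing or MFC
// model would refine the same abstract concepts with visual details.
class TextMenu implements AbstractMenu {
    private final Map<String, String> items = new LinkedHashMap<>();
    public void addItem(String id, String label) { items.put(id, label); }
    public String selectItem() {
        items.forEach((id, label) -> System.out.println(id + ") " + label));
        System.out.print("Select: ");
        return new Scanner(System.in).nextLine().trim();
    }
}

// Speech specialization: the same abstract menu realized as spoken
// prompts; speak() and listen() stand in for a real speech engine.
class SpeechMenu implements AbstractMenu {
    private final Map<String, String> items = new LinkedHashMap<>();
    public void addItem(String id, String label) { items.put(id, label); }
    public String selectItem() {
        items.forEach((id, label) -> speak("Say " + id + " for " + label));
        return listen();
    }
    private void speak(String prompt) { System.out.println("[TTS] " + prompt); }
    private String listen() { return new Scanner(System.in).nextLine().trim(); }
}

public class MenuDemo {
    public static void main(String[] args) {
        AbstractMenu menu = new TextMenu(); // or new SpeechMenu()
        menu.addItem("1", "Open document");
        menu.addItem("2", "Exit");
        System.out.println("Chosen: " + menu.selectItem());
    }
}

The application code in main() is written entirely against AbstractMenu, so switching between the visual and speech modalities does not affect it.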


Fig. 1. Models of a menu at different levels of abstraction.

Creating models at different levels of abstraction is a very important tool in the development of complex systems. In this way, the application can be developed independently of the user interface, providing and using only an abstract interface for the selection of items, e.g., the abstract menu interface. By mapping concepts from different menus, it is possible to transform or adapt models and content, which we describe in more detail in the next section. Model-driven development and its associated tools can bring many advantages to this way of modeling:

* Models can have richer semantics, according to the HCI metamodel. In the case of UML, this is achieved with UML extension mechanisms such as stereotypes, tagged values, and constraints (Figure 2).

* Models can be exchanged using standard XML-based interchange formats. In this way, various tools can share the same model, and it is also possible to develop custom tools that access or change the model.

* Models can be mapped to a platform-independent API specification and accessed programmatically. In this way, models can be transformed from one form into another. For example, we have developed several tools that transform high-level platform-independent models, such as the model in Figure 2, into platform-specific models, for example, Java platform models. These transformations are based on a mapping between platform-specific primitives and high-level HCI concepts (a minimal sketch of such a mapping follows this list).

* Models can be shared among tools that support the given standards. We integrated our results into environments such as Rational Rose and Enterprise Architect, and we were also able to use ontology tools such as Protege 2000.
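As promised above, here is a minimal sketch of the rule-based mapping idea behind such transformations. The rule table and concept names are hypothetical; our actual tools map model elements such as UML stereotypes to platform primitives, not plain strings.

import java.util.Map;

// Hypothetical PIM-to-PSM mapping: high-level HCI concepts are rewritten
// into Java platform primitives by looking them up in a rule table.
public class MenuModelTransformer {

    // Transformation rules: abstract concept -> platform-specific primitive.
    private static final Map<String, String> ABSTRACT_TO_SWING = Map.of(
            "AbstractMenu",     "javax.swing.JMenu",
            "AbstractMenuItem", "javax.swing.JMenuItem",
            "Highlight",        "javax.swing.JMenuItem with selection color");

    public static String toPlatformSpecific(String abstractConcept) {
        String primitive = ABSTRACT_TO_SWING.get(abstractConcept);
        if (primitive == null) {
            throw new IllegalArgumentException("No mapping rule for: " + abstractConcept);
        }
        return primitive;
    }

    public static void main(String[] args) {
        // Prints "javax.swing.JMenu".
        System.out.println(toPlatformSpecific("AbstractMenu"));
    }
}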

B. Adaptation and Content Repurposing

The question of model transformations lies at the center of the MDA approach. In MDA, models are initially expressed in a platform-independent modeling language and are later translated into platform-specific models by mapping the PIMs to some implementation platform using formal rules. The transformation of content models can be specified by a set of rules defined in terms of the corresponding higher-level metamodels. The transformation engine itself may be built on any suitable technology, such as XSLT tools. By connecting models of user interfaces, user profiles, and other interaction constraints, we can analyze and transform content in various ways. For example, by comparing the effects that the interface produces with the effects that the constraints restrict, it is possible to see whether the effects used by the user interface will be appropriate for some user group or for some situation. We have therefore explored several model-driven content repurposing approaches that take into account the sensory, perceptual, motor, or cognitive effects of content. Content repurposing is a relatively new discipline that takes content designed for a particular platform and automatically repurposes it to suit another. In content repurposing, we maintain a single copy of the content in its original form and repurpose it to fit the desired scenario in an automated fashion [10]. The main idea of our approach is that existing content can be analyzed in order to create a higher-level description using the concepts from our metamodel. If the original content is not appropriate for the user or situation, we can repurpose it into a new form, changing improper modalities while trying to keep the higher-level effects contained in the user interface. A more detailed description of our previous work in this area can be found in [8]. We have analyzed several model-driven repurposing strategies, such as perceptual content repurposing, which attempts to keep the perceptual preferences of the original content. To illustrate this, Table 2 gives simple examples of possible mappings of some high-level perceptual concepts into primitives available on textual platforms such as XHTML or WML, and on text-to-speech platforms such as JavaSpeech, the Microsoft Speech API, and VoiceXML. Based on this mapping, we have been developing tools that parse XHTML and HTML files, create a new file with higher-level markup using terms such as group and highlighting, and then transform this high-level markup into a text-to-speech presentation. Various other mapping strategies are also possible; for example, other approaches can concentrate on keeping higher cognitive effects.
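To show what an XSLT-based engine of this kind can look like, here is a minimal, self-contained sketch using the standard javax.xml.transform API from the JDK. The file names and the stylesheet are our assumptions; the actual tools and their mapping rules are described in [8].

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Minimal repurposing driver: an XSLT stylesheet holds the mapping rules
// from high-level markup (e.g. group, highlighting) to a text-to-speech
// platform such as VoiceXML. All file names here are illustrative.
public class ContentRepurposer {
    public static void main(String[] args) throws Exception {
        TransformerFactory factory = TransformerFactory.newInstance();
        // highlevel-to-voice.xsl would, for instance, render "highlighting"
        // as louder or slower speech prompts.
        Transformer transformer = factory.newTransformer(
                new StreamSource("highlevel-to-voice.xsl"));
        transformer.transform(
                new StreamSource("content-highlevel.xml"),  // analyzed content
                new StreamResult("content-voice.vxml"));    // repurposed output
    }
}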

VI. DISCUSSION AND CONCLUSIONS

Model-driven development of user interfaces is a promising approach, and joining MDD and HCI can bring many advantages to both fields. For HCI, it would enable more efficient development of user interfaces, usage of existing standards and tools, and wider acceptance of HCI results. On the other hand, the software engineering and MDD communities have not paid enough attention to HCI aspects. After all, software systems are meant to be used by someone, and integrating human-computer interaction accomplishments into the MDD approach can lead toward more complete and more accessible solutions. Still, there are many open problems to be solved. One of the main challenges is the absence of a formal description of relevant HCI concepts, which in MDD should serve as the metamodel. Due to its multidisciplinary nature, the HCI field requires knowledge from many domains, including computer science, medicine, psychology, cognitive science, and sociology, and creating this metamodel should involve cooperation among experts from various fields. When this metamodel is present, as we have shown, various applications become possible. Now that modeling technologies have matured, the MDD approach could become a practical alternative for the development of user interfaces in the near future.


VII. REFERENCES

1. M. Turk and G. Robertson, "Perceptual user interfaces (introduction)", Communications of the ACM, Vol. 43, No. 3, March 2000, pp. 33-35.
2. S. J. Mellor, A. N. Clark, and T. Futagami, "Model-Driven Development", IEEE Software, Vol. 20, No. 5, September/October 2003, pp. 14-18.
3. B. Selic, "The Pragmatics of Model-Driven Development", IEEE Software, Vol. 20, No. 5, September/October 2003, pp. 19-25.
4. B. Myers, S. E. Hudson, and R. Pausch, "Past, Present, and Future of User Interface Software Tools", ACM Transactions on Computer-Human Interaction, Vol. 7, No. 1, March 2000, pp. 3-28.
5. D. J. Kasik, "A user interface management system", Computer Graphics, Vol. 16, No. 3, 1982, pp. 99-106.
6. S. Trewin, G. Zimmermann, and G. Vanderheiden, "Abstract user interface representations: how well do they support universal access?", Proceedings of the 2003 Conference on Universal Usability, Vancouver, 2003, pp. 77-84.
7. A. R. Puerta, "A Model-Based Interface Development Environment", IEEE Software, July/August 1997, pp. 40-47.
8. Z. Obrenovic, D. Starcevic, and B. Selic, "A Model-Driven Approach to Content Repurposing", IEEE Multimedia, Vol. 11, No. 1, January-March 2004, pp. 62-71.
9. Z. Obrenovic and D. Starcevic, "Modeling Multimodal Human-Computer Interaction", IEEE Computer, Vol. 37, No. 9, September 2004, pp. 62-69.
10. G. Singh, "Content Repurposing", IEEE Multimedia, Vol. 11, No. 1, January-March 2004, pp. 20-21.