User-Centred Design and Abstract Prototypes

Peter Forbrig, Anke Dittmar, Daniel Reichart, Daniel Sinnig
University of Rostock, Department of Computer Science, Albert-Einstein-Str. 21, D-18051 Rostock
[pforbrig|ad|dreichart]@informatik.uni-rostock.de

Abstract

A user-centred design approach in combination with tool-supported development of abstract prototypes is proposed. Based on task, object and user models, an editor for modelling the navigation structure of a user interface as a dialogue graph is demonstrated. This editor is based on the XIML technology and allows simulations that take into account both the temporal relations between tasks and the design decisions for the dialogue graph.

1. Introduction

The main goal of interactive software is to support users in performing their tasks. A user-centred design process is key to reaching this goal. Rapid development of user interfaces, as performed in the course of prototyping, helps developers understand the required functionality and facilitates the participation of users. As a consequence, software developers have to study the work of potential users and the objects they manipulate: they have to analyse the tasks users perform and to specify the functionality of the software according to the results of this analysis, e.g. [12]. However, in many cases the development of interactive systems starts either from a data model, as described in [8], or from an object-oriented application model such as UML. With the introduction of sequence and use-case diagrams into object-oriented development [18], the idea of scenarios and tasks became popular. Unfortunately, sequence and use-case diagrams are not able to specify the whole set of scenarios in a precise way. Task-based techniques such as ADEPT [19] have evolved over time. They focus on the process of creating design representations, which are not necessarily object representations, from information about the user's tasks. Thus task-based approaches ensure that the interactive system is compatible with the workplace it is intended to support. An approach that looks at objects as well as at scenarios in the form of a task model could combine the advantages of both techniques. Based on task, object and user models, an editor for storing the models and for specifying the navigation structure of a user interface as a dialogue graph is demonstrated. The editor is based on the XIML specification [20]. It allows simulations in the form of an abstract prototype and has the potential to evolve into the final interactive system.

2. Model-Based Development of Interactive Software

Interactive system development that takes the work of end users into account has to comprise some representation of that work. In order to develop software based on user tasks and objects, several frameworks have been introduced. For instance, in the approach of [17] the development of interactive software is based on the methodological integration of a task model, a user model, a problem domain model (data model) and an interaction model (Figure 1). In this paper we have adopted this idea. Please note that the term business-object model is used instead of problem domain model. We also define the term problem domain somewhat more broadly: from our point of view, tasks belong to the problem domain as well. Figure 1 visualizes the relations between the different models.

[Figure 1 shows the task model, user model, business-object model and interaction model; arrows between them denote the relation "is based on".]

Figure 1: Relations between models [17].

Figure 1 demonstrates that any interaction model has to be based on the task model of the application. In addition, the task model has to be modelled in mutual relationship to the user model, representing the functional roles users have to play for task accomplishment, as well as their individual perception of the tasks. The user model is also related to the business-object model and the interaction model, since the user may require different views on the data while performing a task. Besides, the interaction model has to reflect the abilities, skills, and preferences of end users. A relationship between the business-object model and the interaction model is required, since the problem domain data in the business-object model have to be presented to the end users for interactive task accomplishment.

Designers may handle interaction and object modelling separately as long as required. However, eventually the (object) architecture remains as the result of several modelling activities. The design process is a loose order of specification activities and mutual adaptation procedures in and between the different models. This approach still enables designers to start either with the interface specification or with the object-modelling activities, based on a task model. Within our TaO (Tasks and Objects) project we have defined two views which are related to the business-object model.

Definitions 1
A task model describes the static and dynamic organization of the work.
A user model characterizes the users and specifies their perception of tasks and the organization of work, the access rights to data, and their preferences for interaction modalities.
A business-object model specifies objects of the problem domain with attributes, methods and relations, as well as the behaviour of these models.
An interaction model describes the structure and behaviour of interaction devices, features, and modalities.

Within the context of a task, an object can be considered as an artefact or tool. The term artefact is used to refer to parts of the environment that are intended to be changed by performing a task. The term tool refers to things which are intended to be used during task execution. An object is often an artefact in relation to tasks in a specific task tree and a tool in relation to many other tasks in different task trees. It is assumed that the features of computer systems can be characterized by stereotypes called devices, where one device contains all features of the following one (e.g. mainframe > personal computer > notebook > palm top > mobile).

A task (Definitions 2) is described by its goal, a specification of subtasks and their temporal relations, a list of roles performing the task, an artefact which is manipulated, a set of tools supporting the task, and a device which is the target platform of the software under development. Tools and artefacts are considered to be parts of the environment. A goal is an intended result with reference to the environment. In the context of a task this means the change of attributes of an artefact.

Definitions 2
Task = (Goal, Subtasks, Temp. Relations, Role(s), Artefact, Tool(s), Device)
An artefact is an object which is essential for a task. Without this object the task cannot be performed. The state of this artefact is usually changed in the course of task performance.
A tool is an object that supports performing a task. Such a tool can be substituted without changing the intention of a task.
A goal is a state of the artefact which is the intention of performing the task.
A role is a stereotype of a person who is expected to perform the task.
A device describes the stereotype of a computer which is necessary to perform the task.

Our approach consists of a unified strategy of structured and object-oriented ideas. However, analysing the task descriptions first does not imply ignoring the importance of defining objects. There are special relationships between artefacts, parts of artefacts and tools, which are specified within the object model. These relationships can be extracted from the task model. However, they can also guide the development of the task model. Possible relationships are association, aggregation, composition and message. Experiences with a tool implemented in Java and using a database are described in [21].
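The task tuple of Definitions 2 can be sketched as a small data structure. The following is a minimal illustration only: the class and field names (`Obj`, `Task`, `temporal_relation`, etc.) are assumptions, not part of any actual TaO tooling.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Obj:
    """A business object; an artefact or a tool depending on the task context."""
    name: str

@dataclass
class Task:
    """Task = (Goal, Subtasks, Temp. Relations, Role(s), Artefact, Tool(s), Device)."""
    goal: str                                   # intended state of the artefact
    subtasks: List["Task"] = field(default_factory=list)
    temporal_relation: str = ">>"               # e.g. ">>" sequence, "|||" concurrency
    roles: List[str] = field(default_factory=list)
    artefact: Optional[Obj] = None              # essential object, changed by the task
    tools: List[Obj] = field(default_factory=list)  # substitutable supporting objects
    device: str = "personal computer"           # device stereotype (target platform)

# Example following the coffee scenario discussed later in the paper:
brew = Task(goal="coffee percolated", roles=["user"],
            artefact=Obj("coffee machine"), tools=[Obj("water tank")],
            device="notebook")
print(brew.artefact.name)
```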

3. User-Centred Design Principles and the XIML Framework

XIML (the eXtensible Interface Markup Language) [20] is a framework for specifying models for interactive systems. It is based on XML and allows the specification of tasks, objects, users and devices as well as the specification of user interfaces. From our point of view it was fascinating to represent the ideas discussed above as XIML models. The first attempt consisted of specifying tasks, tools and artefacts. This was possible without problems because XIML allows the introduction of relations between different model elements. One can introduce relations between classes of the task model and classes of the domain model. (Here, domain model is again used for what our terminology calls the business-object model.) In this way, “task_has_artefact” is a binary relation between a task and an object. In the same way, tools can be attached to tasks. It was also possible to allow more general temporal relations for tasks [5]. Relations between tasks at different levels of the hierarchy are possible as well. First, a tool was developed which reads an XIML file, presents different views for tasks, users and objects, and allows an animation according to scenarios (see Figure 2). Basic tasks (in rectangular boxes) can be executed from different points of view, such as the task view, user view or domain view. At the same time the attached artefact, the tools, the role, and the involved temporal relations can be viewed on the right-hand side.
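To illustrate the idea of relations between model elements, here is a hypothetical XIML-flavoured fragment queried with Python. The element and attribute names (`TASK`, `RELATION`, `source`, `target`) are simplified stand-ins chosen for this sketch, not the actual XIML schema.

```python
import xml.etree.ElementTree as ET

# Illustrative fragment only; the real XIML schema differs in detail.
XIML = """<XIML>
  <TASK id="order_product"/>
  <OBJECT id="shopping_cart"/>
  <RELATION name="task_has_artefact" source="order_product" target="shopping_cart"/>
  <RELATION name="task_has_tool" source="order_product" target="product_catalogue"/>
</XIML>"""

def artefact_of(root, task_id):
    """Return the object id bound to task_id via the task_has_artefact relation."""
    for rel in root.iter("RELATION"):
        if rel.get("name") == "task_has_artefact" and rel.get("source") == task_id:
            return rel.get("target")
    return None

root = ET.fromstring(XIML)
print(artefact_of(root, "order_product"))  # shopping_cart
```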

Figure 2: Start situation of animating an XIML model for percolating coffee.

Figure 3: Situation during animation.

Figure 2 illustrates the example of preparing coffee. After filling water into the tank there is only one basic task that can be performed; because of the temporal relations this is “Remove old filter”, as can be seen in Figure 3. All temporal relations can be seen in Figure 2 and Figure 3. This first experiment demonstrated the opportunities of XIML and reinforced the idea of using task models for requirements analysis and design. Let us have a look at an e-shop example and assume that the task of e-shopping can be decomposed into three subtasks: first, one has to look for the product, then check the offer and finally order the product. These subtasks are further refined. We have defined looking for the product as an unlimited sequence of search attempts followed by an optional selection. Checking the offer can either be performed by checking in detail (on a PC) or by a short check only (on a palmtop). At the moment the device information is not visualised by the tool.
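The effect of a sequential temporal relation, as in the coffee example, can be sketched as follows. This is a minimal illustration, not the tool's actual animation algorithm, and the function and task names are assumptions.

```python
# With subtasks ordered by a sequence relation ('>>'), only the first
# not-yet-performed task is enabled at any moment.
def enabled(sequence, done):
    """Return the set of currently enabled basic tasks."""
    for task in sequence:
        if task not in done:
            return {task}
    return set()          # every subtask has been performed

# Illustrative subtask names for percolating coffee:
coffee = ["fill_water", "remove_old_filter", "insert_new_filter", "switch_on"]
print(enabled(coffee, {"fill_water"}))  # only the filter-removal task is enabled
```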

Figure 4: Start situation of animating an XIML model of an electronic shop.

4. Designing Dialogues and Abstract Prototypes

There are different strategies for designing the dialogue model. One possibility is to evolve the animated task model into the final user interface.

Janus [2] uses information mainly from the object model, but most approaches are based on tasks. Teresa [13] follows the idea of grouping tasks based on preconditions, which allows an automatic generation of dialogue models. In this paper, tool support is discussed which follows the idea of grouping tasks within different views. This process is considered as an interactive design process. Within our approach we distinguish nine different phases of user interface development:

1. High-level activity modelling of the existing work situation
2. High-level activity modelling of an envisioned work situation
3. Assigning platform stereotypes to tasks
4. Designing an abstract user interface
5. Designing a dialogue graph
6. Assigning abstract user interface objects
7. Designing a concrete user interface
8. Assigning concrete user interface objects
9. Assigning representations

This approach is very similar to the one described in [13]. In our approach, however, the existing work situation is additionally modelled, and the way the abstract user interface is generated from the task model differs. Our method of explicitly designing a dialogue graph also incorporates an alternative strategy, which consists of designing a very abstract user interface. The software developer has to decide which tasks are grouped together in one view and how the transition from one view to another is specified. A dialogue graph consists of dialogue views and transitions. There are five types of dialogue views (single, multi, modal, complex and end) and two types of transitions (sequential and concurrent), which are presented in Figure 5.

Figure 5: Elements of a dialogue graph.

In contrast to a sequential transition, a concurrent transition means that both views are still visible. In order to understand the idea of a dialogue graph in a little more detail, let us examine two examples. The dialogue views are presented as rectangles rather than circles.

Figure 6: Dialogue graph for an e-shop.

In Figure 6 all dialogue views except the end node are of type single. There is a view “Start dialogue” in the upper left corner, which is specified as the entry point by a traffic-light symbol. This view has two transitions. The first transition, activated by “create_a_shop”, activates view “Run” and closes view “Start dialogue”. The second transition is related to “close_shop” and activates the “End” view. If “look_for_a_product” is performed in view “Run”, the transition to view “Search” is activated. In this case view “Search” appears and is active, but view “Run” remains visible in an inactive mode. In contrast to the sequential transitions mentioned for view “Run”, this is a concurrent transition: both views are visible afterwards. (From Figure 6 it cannot be seen how tasks are related to the transitions.) From all this emerged the idea to develop an editor that allows manipulating such graphs by attaching tasks to transitions and to views. If a task is attached to a view only, no transition will take place when the task is executed. With this editor, which was already used to create Figure 6, different dialogue graphs can be developed for one task model, and all models can be stored together in one XIML file. In order to get an impression of what this editor looks like, please have a look at Figure 7, where a very simple dialogue graph for an e-shop is presented. It consists of one view only and represents a possible solution for a mobile phone, since the interaction capabilities of such devices are very limited. On the left-hand side of Figure 7 the task model is visible.
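The firing rules just described, where a sequential transition closes its source view while a concurrent transition keeps it visible, can be sketched as follows. The class and its method names are hypothetical; the view and task names follow the e-shop example of Figure 6.

```python
class DialogueGraph:
    """Minimal sketch of dialogue-graph semantics, not the editor's actual code."""

    def __init__(self, start_view):
        self.visible = {start_view}          # views currently shown
        self.transitions = {}                # task name -> (source, target, kind)

    def add(self, task, source, target, kind):
        self.transitions[task] = (source, target, kind)

    def perform(self, task):
        source, target, kind = self.transitions[task]
        if source in self.visible:
            if kind == "sequential":
                self.visible.discard(source)  # sequential: source view is closed
            self.visible.add(target)          # concurrent: both views stay visible

g = DialogueGraph("Start dialogue")
g.add("create_a_shop", "Start dialogue", "Run", "sequential")
g.add("close_shop", "Start dialogue", "End", "sequential")
g.add("look_for_a_product", "Run", "Search", "concurrent")

g.perform("create_a_shop")       # "Start dialogue" closes, "Run" opens
g.perform("look_for_a_product")  # "Run" and "Search" are both visible
print(g.visible)
```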

Figure 7: Dialogue graph editor.

By selecting a task (e.g. create shop) and a transition type (e.g. sequential), a transition can be specified by drawing a line between two nodes. In this way a task is attached to a transition. If only a task is selected and it is attached to a node by clicking on it, the task is attached to the node, which is represented by a button. In this case the activation of the task is not related to a transition between different views. Clicking on the traffic-light symbol animates the dialogue graph: a new window appears containing all visible views. Figure 8 demonstrates a special situation during the animation.

Figure 8: Animated dialogue graph.

Two views (“Run” and “Search”) are visible. At this moment the view “Search” is active, but only the task “enter_serach_criterion” can be executed. Thus, the animation does not only consider the specification of the dialogue model itself but interprets the temporal relations from the task model as well.
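The dual filtering described here, where a task must be attached to the active view and also enabled by the task model's temporal relations, might be sketched like this. All names are illustrative (the misspelt screenshot label is normalised to `enter_search_criterion`), and the predicate stands in for a full temporal-relation interpreter.

```python
# A task is executable iff it is attached to the active view AND the task
# model's temporal relations enable it.
def executable(active_view, attached, is_enabled):
    return {t for t in attached.get(active_view, set()) if is_enabled(t)}

attached = {"Search": {"enter_search_criterion", "select_product"},
            "Run": {"look_for_a_product"}}

done = set()  # tasks performed so far

def enabled_by_task_model(task):
    # Stand-in for the temporal relations: a search criterion must be
    # entered before a product can be selected.
    if task == "enter_search_criterion":
        return task not in done
    if task == "select_product":
        return "enter_search_criterion" in done
    return True

print(executable("Search", attached, enabled_by_task_model))
```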

If you clicked on the “Run” button, this view would become active and you would get the situation of Figure 9.

Figure 9: Animated dialogue graph editor.

This kind of animation can be considered as an abstract prototype, a term introduced by Larry Constantine in one of his newsletters. Such an abstract prototype can be developed easily with our tool. It strongly supports the communication between users and software developers: first design decisions are immediately understandable for the user, and he or she is able to experiment with a dynamic system. Different dialogue graphs can be explored in parallel. All models can be animated, and forthcoming users can participate in the design of the user interface. It might be recognized that some temporal relations in the task model are wrong. In this case the corresponding model can be updated and loaded again. The tool will ask whether a completely new model is to be loaded or whether just the task model has to be replaced. In the latter case, the task model can be replaced without destroying the design decisions already made with the dialogue graph. Currently there is the restriction that the names of the tasks must not change: if a name is changed, the relation to the corresponding nodes in the dialogue graph is lost. Nevertheless, the rest of the specification can still be used. Additionally, this animation has the potential to evolve into a concrete user interface. We are working on that; our vision is a prototype that specifies the final presentation as well. Readers familiar with CTTE [14] might recognize the similarity of the icons in the example to those provided in CTTE. The dialogue graph editor has an import interface for CTTE models stored in XML format. Such models can be imported, and information related to our methodology of artefacts, tools and roles can be attached. Furthermore, dialogue graphs can be developed as an alternative to the user interface development of the Teresa project. Let us have a look at an example of using a mobile phone. (The corresponding task model comes with the CTTE installation package.)
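The name-based replacement rule can be sketched as a simple re-binding step: attachments survive a task-model reload only if the task name still exists. The helper `rebind` and all names below are hypothetical.

```python
# When a new task model is loaded, dialogue-graph attachments are re-bound
# by task name; attachments to renamed tasks are lost, the rest survive.
def rebind(attachments, new_task_names):
    kept = {view: tasks & new_task_names for view, tasks in attachments.items()}
    lost = {t for tasks in attachments.values() for t in tasks} - new_task_names
    return kept, lost

attachments = {"Run": {"look_for_a_product", "close_shop"}}
new_names = {"look_for_a_product", "close_the_shop"}   # "close_shop" was renamed
kept, lost = rebind(attachments, new_names)
print(lost)   # the renamed task's binding is gone
```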

Figure 10: CTTE task model for a mobile phone.

In CTTE, temporal relations such as >> (sequence) or ||| (concurrency) are described graphically between the nodes of a tree. Additionally, it is distinguished whether a task is performed by a human, interactively, or by a computer. Importing this model into our tool results in the tree which can be seen on the left-hand side of the screenshot in Figure 11.

Figure 11: Dialogue graph for using a mobile phone

Figure 12: Attempt to visualise the dynamic behaviour of the abstract prototype

Figure 12 tries to visualise the dynamic behaviour of the abstract prototype for using a mobile phone. It is nearly impossible to do this with static pictures; it is, however, easy to work interactively with the prototype. During the simulation it is also possible to have a look at the animated task model. In this way the reasons why certain tasks are not activated can be discovered.

5. User-Centred Design and the Role of a Task Model

A task model describes the essential tasks that the user performs with a user interface. Task models are a very convenient specification for describing the way problems can be solved. They are able to specify different problem domains at different levels of abstraction. Our early investigations show that we should distinguish between four kinds of task models:

• General task model for the problem domain
• General task model for software support
• Device-dependent task model
• Environment-dependent task model

The general task model for the problem domain is the result of a very accurate analysis of the problem domain. It describes how a problem can be tackled in general. All relevant activities and their temporal relations are described. Such a model can be considered as the representation of the knowledge of an expert; the state of the art is captured within this model.

The general task model for software support distinguishes activities that have to be executed by humans, whether in principle or for financial reasons, from the activities that have to be executed by a software system or in an interactive way. In some sense this model is the first design decision. Later on, the behaviour of the developed software has to be consistent with the behaviour specified in this task model.

The capabilities of a device are the most important restricting factor in the context of use of a software system. This is the reason why the relation between task models and devices is especially stressed. There are approaches that transform whole applications from one platform to another without looking at the tasks that are to be supported. But sometimes it is wise to look at the tasks first and to decide which tasks can be supported in an optimal way by a specific device. This information is captured in the device-dependent task model.

The environment-dependent task model is the most specific one. It is based on several design decisions made in the previous models and describes computer-supported tasks for a specific device. This model describes the behaviour of a system based on the available tools, resources and the abilities of the user. It can be interpreted statically (influences are fixed at design time) or dynamically (influences are evaluated at run time).

There are different possible scenarios for the use of task models in a client/server environment, which is the case for web applications with multiple user interfaces. Often such models are used only implicitly, and the implementation is done manually. But it is also possible to generate software based on task models.

From our point of view, task models are able to specify how work is performed somewhat more precisely than use cases or sequence diagrams in UML. In this way a task model can be a good documentation of a use case. Sequence diagrams can be used to build the task model, and later on sequence diagrams can be generated from task models. These different views on the models allow users and software engineers to get different perspectives on the software under development.

6. Summary and Future Work

It was outlined how the design process of interactive systems can be structured and how it can be supported by tools in its early phases. The metaphor of tasks, artefacts and tools was used to describe models in the analysis phase. In this way users' perspectives can be incorporated at a very early stage of development, and real user-centred design is supported. Animated models and abstract prototypes support the communication between different stakeholders. Different views of such models can help to understand the problem domain. They can be used to construct object models in the form of class diagrams, which can later be transformed by patterns. Such models can also be the basis for developing dialogues. One such approach is supported by Teresa [13], which computes task sets. Here, an alternative approach was presented, which can be characterized as an interactive design process.

At the moment, object-based transitions are not supported in an optimal way; object information is necessary for this purpose. There already exists an XMI import feature which allows importing specifications from UML CASE tools. In the future, object information will be used for object-based transitions of multiple views. We have the vision of evolving the dialogue graph simulation into an animation of the final user interface; work on tool support is ongoing. Perhaps the results of the Cameleon project [4] can be used for this purpose as well. Additionally, it was discussed how task models can be used as a control structure for nomadic applications. Different devices show different views on the same execution of a task model. In this way users can work cooperatively via different views at different locations. Constraints can help to describe dynamic adaptations of interactive systems; in this way the task model can be dynamically adapted to the context of use. Experiments with XIML were promising, and future applications will have to prove the usefulness of task models for the development of nomadic applications.

References

[1] UIML Tutorial. http://www.harmonia.com

[2] H. Balzert, “From OOA to GUIs: The JANUS System”, Journal of Object-Oriented Programming, February 1996, pp. 43-47.
[3] M. Biere, B. Bomsdorf and G. Szwillus, “The Visual Task Model Builder”, Proceedings of CADUI’99, Kluwer Academic Publishers, 1999, pp. 245-256.
[4] Cameleon and Teresa projects, http://giove.cnuce.cnr.it/cameleon/SIGatCHI.html
[5] A. Dittmar, “More precise descriptions of temporal relations within task models”, DSV-IS 2000, Limerick, June 2000.
[6] A. Dix, J. Finlay, G. Abowd and R. Beale, Human Computer Interaction, Prentice Hall, 1993.
[7] J. Eisenstein, J. Vanderdonckt and A. Puerta, “Adapting to Mobile Contexts with User-Interface Modeling”, Workshop on Mobile Computing Systems and Applications 2000 (Monterey, CA, 7-8 December 2000), IEEE Press.
[8] J.D. Foley, “History, Results and Bibliography of the User Interface Design Environment (UIDE), an Early Model-based System for User Interface Design and Implementation”, Proc. of DSV-IS’94, Carrara, 8-10 June 1994, pp. 3-14.
[9] P. Forbrig and A. Dittmar, “Interfacing Business Object Models and User Models with Action Models”, Proc. HCI International, Vol. 4, pp. 83-92, 2003, Greece.
[10] P. Forbrig and A. Dittmar, “Bridging the Gap between Scenarios and Formal Models”, to appear in Proc. HCI International, Vol. 1, pp. 99-99, 2003, Greece.
[11] E. Gamma, R. Helm, R. Johnson and J. Vlissides, Design Patterns, Addison-Wesley, 1995.
[12] P. Johnson, Human Computer Interaction: Psychology, Task Analysis and Software Engineering, McGraw-Hill, 1992.
[13] G. Mori, F. Paternò and C. Santoro, “Tool Support for Designing Nomadic Applications”, Proceedings of IUI 2003, Miami, Florida, January 12-15, 2003.
[14] F. Paternò, C. Mancini and S. Meniconi, “ConcurTaskTrees: A Diagrammatic Notation for Specifying Task Models”, Proceedings of Interact’97, Sydney, Chapman & Hall, 1997, pp. 362-369.
[15] F. Paternò, Model-Based Design and Evaluation of Interactive Applications, Springer, 2000.
[16] A. Puerta and J. Eisenstein, “XIML: A Common Representation for Interaction Data”, Sixth International Conference on Intelligent User Interfaces, IUI 2002.
[17] Ch. Stary, Interaktive Systeme. Software-Entwicklung und Software-Ergonomie, Vieweg, 1996.
[18] UML: http://www.uml.org
[19] S. Wilson and P. Johnson, “Bridging the generation gap: From work tasks to user interface designs”, in J. Vanderdonckt (Ed.), Proceedings of CADUI’96, Presses Universitaires de Namur, 1996, pp. 77-94.
[20] XIML, http://www.ximl.org
[21] A. Dittmar, P. Forbrig and D. Reichart, “Model-based Development of Nomadic Applications”, Proc. IMC 2003, Rostock.