Interactive Aspects in Switching between User Interface Language and End-User Programming Environment: A Case Study

Simone Barbosa 1, Monica Portocarrero Cara 1,2, José Ricardo Cereja 3, Cecília Kremer Vieira da Cunha 1, and Clarisse Sieckenius de Souza 1

1 Departamento de Informática, PUC-Rio, R. Marquês de S. Vicente 225, Rio de Janeiro, RJ. sim,ceciliak,[email protected]
2 Departamento de Ciência da Computação - UFF, Praça do Valonguinho s/n, Niterói, RJ. [email protected]
3 Departamento de Artes, PUC-Rio, R. Marquês de S. Vicente 225, Rio de Janeiro, RJ. [email protected]

Abstract

This is a case study intended to demonstrate some of the existing problems in popular extensible home computing applications, and how solutions can be sought by following semiotic theories for the analysis and redesign of an integrated user interface language and end-user programming environment. By resorting to very practical examples of graphical user interface and macro language design, we highlight the connections between Human-Computer Interaction and Computer Semiotics in localized aspects of an existing software application, and suggest how the global picture of this approach can be obtained.

Keywords: Computer Semiotics, Human-Computer Interaction, End-User Programming Language Design, Interface Language Design

1. Introduction

This paper evaluates aspects of the integration between the User Interface Language (UIL) and the End-User Programming (EUP) environment provided by Microsoft® Word 2.0 for Windows™. Several features were evaluated according to principles from the areas of User Centered System Design (Norman & Draper 1986) and Computer Semiotics (Nadin 1988; Kammersgaard 1988; de Souza 1993). We first present a brief introduction to Computer Semiotics, so that the terms used in this paper can be better understood. The evaluation itself then follows the steps of a hypothetical user who tries to switch from the editing to the programming environment. At each step we analyze his/her difficulties based on the concepts of the two areas mentioned above. Following the analysis, we suggest some improvements for a partial redesign of the application's interfaces. They illustrate how the principles proposed by Computer Semiotics and User Centered System Design could be used in the search for improved software usability.

2. Introduction to Computer Semiotics

Computer Semiotics (Andersen 1990) is an important theoretical basis for dealing with problems of human-computer interaction, such as user interface analysis and design. In a Semiotic Engineering framework (de Souza 1993), user interfaces are designed to convey a message from the system designer to the system user (see Figure 1), whose meaning is the answer to two fundamental questions: "What kinds of problems is this application prepared to solve?" and "How can these problems be solved?" (de Souza & Barbosa 1996).

Figure 1 – Semiotic Engineering framework: the designer, holding an abstracted conceptual model of the application, conveys the interface message to the user, who builds a conceptual usability model of the application; both models relate to the application's model.

User interface languages are codes (i.e. sets of signs) that system designers use to express their messages to users. A sign (i.e. that which represents something for somebody) is related both to an object (i.e. the thing it stands for) and to an interpretant (i.e. a feeling, an action or another sign triggered by the presence of sign and object), in a triadic schema (Peirce 1931). A sign may be interpreted in indefinitely many layers of meaning, bringing up a variety of other signs and meanings to mind, in a process called unlimited semiosis.

Figure 2 – Interpretants and unlimited semiosis, from (de Souza & Barbosa 1996): the sign home, whose object is a house located somewhere, may trigger interpretants such as {friends}, {family}, and {where I live}.

Signs, and consequently codes, are only effective in communication when the interpretants in the sender's mind (e.g. a system designer's) reach some level of convergence with the interpretants in the receiver's mind (e.g. a user's). When such convergence is missing, people start a process of negotiation of meanings, in dialogue, until convergence is reached. The example below (from de Souza & Barbosa 1996) shows how the word (sign) home triggers different interpretants in two people's minds, and how the negotiation goes.

— Are you going home soon?
— Not a chance... I have my job here and I can't afford to leave it behind and go back to Bakersville.
— Oh, no, sorry. I didn't mean that. I was asking if you were going back to your house in the next couple of hours. I just thought you would like to ride along.

The direct consequence of a Semiotic Engineering framework is the notion that interfaces are one-shot messages, in which no direct negotiation can take place. As long as an equivocal interpretant lives, there lives an interactive trap which can catch users and threaten software usability. Some theoretically-motivated design guidelines may help us choose interface signs that reduce the chances of misinterpretation by users (de Souza 1993; de Souza 1994), but, just as in any other communicative situation among humans, there are no guarantees. The analysis of Word's UIL and EUP environment should illustrate the semiotic concepts above and suggest how semiotic theory can contribute to improving aspects of human-computer interaction.

3. From Text Editing in Word to Programming in WordBasic

A User Interface Language (UIL) and its End-User Programming Language (EUPL) naturally have different codes that serve different purposes. Moreover, the UIL should be able to maximize the application's usability and act like a tutorial for its EUPL (DiGiano 1996), thus reducing the semantic and articulatory distances between the two (Hutchins, Hollan & Norman 1986). In order to see how this can be achieved, let us follow the steps of a hypothetical user in her early attempts to create, save, and execute macros in order to become more efficient.

Figure 3 – The “Record Macro” option located under different menus.

Her first problem is to discover how to activate the EUP mode. Macro-related options appear under different menus, depending on whether or not there is an open document (see Figure 3). Since a sign's location is part of the graphic interface language grammar, this duality emphasizes distinctions over similarities among macro tools. For a novice user, however, the distinctions are likely to be unclear, and the duality may end up stressing the user's ignorance instead of encouraging her to use this valuable resource. The different sign locations are in fact due to an apparently inconsistent resource classification. When no file is open, the "Macro" and "Recorder" tools are presented as FILE menu options. As soon as a file is opened, they are presented as TOOLS menu options. The user may ask herself what this distinction means. Are macros files, tools, or both? This initial confusion may have important consequences, as we shall see later on, when the user tries to close a "Macro" (as a file) or stop the "Recorder" (as a tool) from within the EUP environment.

But now, our user chooses the "Recorder" and enters the recording mode, in which every operation is recorded under a name for later reuse. At this point, there is no feedback that the recording mode has been activated, except for an almost unnoticeable change in the mouse pointer over the document. In semiotic terms, this lack of emphasis on mode signalling can be interpreted as meaning that no big difference is at stake whether one is recording steps or not. However, the mouse is deactivated within the document window, i.e., no editing action can be accomplished using the mouse. On the other hand, selecting and editing text can still be done using the keyboard. Such inconsistencies as we switch from one window (or one device) to another can trigger odd interpretants in the user's mind. Moreover, the user may forget that she was in the recording mode and proceed to record what will become an enormous, undesired and useless macro.

In order to exit the recording mode, the first question is how to do it. After a while, the user will find the option "Stop Recording" under the Tools menu, in the same position where the "Record Macro" option previously was. Notice that the user might have started the recording using the "Record Macro" option under the File menu, and therefore it would be impossible for her to predict the new location of the "Stop Recording" option under the Tools menu. This suggests to us that the duality of sign encoding in the UIL is actually an inconsistency and not an intentional distinction.

Next, our user decides to write her macro from scratch. As soon as she double-clicks on the macro's name (because double-clicking is the standard interaction for opening a file), the macro runs. This disruption at the transition from UIL to EUPL may cause a lot of distress, given that some actions cannot be undone (Shneiderman 1986). An unwanted execution may cause irreparable damage to a user's document, yet no confirmation is asked for before the macro is run.

If the user overcomes these interactive obstacles and finally succeeds in entering the editing mode, the first impact she faces is that the macro toolbar replaces the formatting toolbar (this feature was changed in a later version). This is a problem because Word allows the user to edit multiple documents at a time — texts and macros. She might rightfully suspect that she will no longer be able to use the formatting toolbar on any other document. Ideally, the macro toolbar should only be visible when the focus is on the macro window, while the formatting toolbar should only be available when the focus is on the document window (i.e., on the text being edited, since the macro text cannot be formatted).

Yet another problem is the lack of sign articulation. By this we mean that signs and signals used to communicate meanings in one context should be used to convey the same (or occasionally similar) meanings in all other contexts. Since the interface has signs to show that a tool is deactivated, for the sake of consistency in articulation, deactivated toolbar icons should offer similar visual clues that they are, in fact, deactivated (although they don't).
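Before moving on, it is worth making concrete what the user finds when she finally gets a recorded macro open for editing: WordBasic source text. The sketch below is only an illustration of what a very short recorded macro might look like; the exact statement names generated by Word 2.0 may differ, and the comments are ours, not the recorder's.

    Sub MAIN
        StartOfLine             ' move to the beginning of the current line
        EndOfLine 1             ' extend the selection to the end of the line
        Bold 1                  ' equivalent to turning Bold on via the UIL
    End Sub

Every recorded sequence of interface actions ends up as a statement list of this kind inside Sub MAIN ... End Sub, which the user can later open, read, and edit.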
So far, we can see that local interactive design choices, whether determined by implementation constraints or by inter-version software compatibility, have led the designer through tortuous sign production processes at the interface levels of both editing and programming. His communication with the user is severely threatened by such inconsistencies, and a more global approach to designer-user communication seems to be clearly called for.

An odd situation also happens when formatted text is pasted into the macro text. At first it appears formatted, but when the macro is saved, it is replaced by plain text. False expectations arise if the user wants to use the macro to paste some formatted text: even if it at first appears formatted within the macro window, the macro execution inserts only the corresponding plain text or, worse still, what can be interpreted as differently formatted text, because of the formatting in effect at the current cursor location. This generates a co-referentiality problem (Draper 1986), where formatted text has one meaning within a document and no meaning within a macro. Considering the interface as a designer's message to users, the designer could mean to say that the macro is not editable text, and that is why he hasn't provided the same editing resources in both environments. But, if so, these operations shouldn't be allowed at all. Or else, his intention was to have such elements removed from the portion of the UIL directly referenced in the EUPL, but this goal has not been met. What can be noticed in our example is that inadequate usage of expression resources has obscured the message content, and this is why we have great difficulty interpreting and understanding the designer's intentions.

Back to our example: after editing the macro, the user needs to save it. Nevertheless, there is no apparent sign available for that operation. The first attempt would be to look for an option under the "Tools" menu, through which the macro editing mode was entered. But it isn't there. If a macro is considered a file, the option should consistently be made available under the "File" menu. But there, all the user finds are "Save" (supposedly to save a document) and "Save All" (which would save the macro alright, but also every other open text or macro, possibly causing some distress). Likewise, when trying to close the macro, more problems are found. At times the designer sends a confirmation dialogue message; but if one creates a new macro, makes no changes, and then closes it, the macro is saved without any confirmation dialogue. This is a clear example where the designer has given users no power over the system, not providing any tools for them to exert control over what is happening in the application.

Faced with the macro programming environment, our user finds out that macros can be executed either step by step or at once, as one command. If she wants to execute a macro step by step, she must first open it. Double-clicking on the macro's name causes it to execute at once (the provided procedure for opening a macro is to choose its name and then click on "Edit"). Having opened the macro, the user must now switch to a document window to start executing the macro; otherwise she will get an error message. However, if a document window is active it will hide the macro window, making it harder to trace the macro execution.

Figure 4 – A macro window on top of the document window.

If the user executes the macro program in step-by-step mode, the options "Step" and "Step SUBs" show up (Figure 4). However, if the program does not have subroutines, both buttons remain active although they lose their intrinsic distinction. This hinders the user's learning of what a subroutine really is. Also, in stepwise execution of recorded macros, instead of highlighting the UIL signs related to each macro statement, many steps of the UIL interaction are encapsulated within a single EUPL statement. For instance, the single long macro instruction

    FormatCharacter .Font = "Courier", .Points = "", .Bold = 1, .Italic = 0, \
        .Strikeout = 0, .Hidden = 0, .SmallCaps = 0, .AllCaps = 0, \
        .Underline = 0, .Color = 6, .Position = "0 pt", .Spacing = "0 pt"

may correspond to at least three distinct UIL commands (for instance, setting the font, the bold attribute, and the text colour). These are executed at once and without feedback. Had the designer's main message content been an intention to teach users what EUP is and what it can do, the recorder could be used as a powerful tutorial resource, showing the relation between the UIL and the EUPL signs at every execution step (DiGiano 1996).

4. Telling Users what Programming is via WordBasic

EUPLs are meant as tools to increase the functionality of a specific piece of software. WordBasic offers users a variety of commands to help them achieve their text-editing goals efficiently. In order to help users understand the application's commands, the objects and operators provided by the EUPL should be consistent and continuous (i.e. have contiguous meanings) with those of the UIL. The EUPL should also provide, of course, some basic programming tools that allow for the creation of variables and the use of selection and iteration structures (Myers 1992). By interpreting the EUPL commands and operators, the user should build a mental model of the language and of the machine that processes it. The way these elements are presented in the language, their coherent bindings with UIL objects, and the users' own experience all influence the construction of such a model. The closer the model is to the language semantics, the sooner the user will be writing good macros. In most home computing applications, however, when trying to reach this type of functionality, the user usually faces an arid environment for editing text-only programs, which is by itself discouraging. The way the language is presented and taught to the user is a crucial factor for its persistence in the user's memory and competence.

As stated previously, the programming environment offered by Word is quite arid. The user has three options: to record the routine step by step, to write it manually, or to combine both approaches (e.g. by using the "Record Next" command). One of the main problems of using only the "Record Macro" tool is that it is actually only a means of grouping command language instructions (structured as a regular interactive grammar), without the means of abstraction for data and control typical of programming languages (the sketch at the end of this section illustrates the difference). Thus, real programming requires learning the EUPL, which involves manual editing of textual program code. WordBasic, as its name says, is a modified Basic. It can be learned by resorting to the on-line help, by learning from recorded macros, or by a combination of both (which is clearly the best in semiotic terms, since users can try to make connections between programming language constructs and the application functionalities provided via the UIL).

Word's on-line help follows the MS Windows standard. It starts by listing reserved words in alphabetical order, and then offers options that lead the user to another help environment presenting a classification of functions and commands. WordBasic offers a great variety of commands besides those that refer to actions executed via the UIL. The on-line help is very accessible, but the way it is segmented is not exactly natural. For instance, it apparently tries to introduce two classes of commands: those related to programming in general and those related to the domain. Nevertheless, this classification within the help system differs from the UIL menu structure. In order for an EUPL to be useful at all for an inexperienced user, it must show signs of consistent generalizations and follow systematic patterns of articulation. Even though Word maintains some correspondence between its UIL and its EUPL, it isn't easy to build an adequate mental model that bridges this gap. Even for a programmer acquainted with Visual Basic, the help texts make no further references to the UIL. Thus, the easiest way to learn the macro language is to record macros and then analyze the correspondence between the actions in the UIL and the generated commands in the EUPL. However, this analytical task is obviously too sophisticated for novices. We believe that if designers had done this semiotic analysis in the first place, the outcome could not but represent a major improvement for successful end-user programming in Word.
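As an illustration, the short macro below mixes one recorded-style statement with hand-written variables and a loop, the kind of data and control abstraction the recorder alone cannot produce. It is only a sketch: the statement names used here (InputBox$, Val, ParaDown, FormatCharacter) are typical WordBasic vocabulary given from memory and may not match Word 2.0's exact syntax, and the comments are ours.

    Sub MAIN
        ' A variable holding a value typed by the user: no recorded macro contains this.
        n = Val(InputBox$("How many paragraphs should be reformatted?"))
        ' Iteration: extend the selection one paragraph at a time, n times.
        For i = 1 To n
            ParaDown 1, 1
        Next i
        ' A recorded-style statement, as the Recorder would produce for Format > Character.
        FormatCharacter .Font = "Courier", .Bold = 1
    End Sub

Only the last statement could have come from the Recorder; everything above it has to be learned and typed by hand, which is precisely the discontinuity discussed in this section.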

5. Redesign of Word for Windows 2.0 UIL and EUPL

Our proposal for a partial redesign of Word's UIL and EUPL aims at demonstrating opportunities for improving software usability through more consistent communication between designer and user. Empirical experiments with the proposed interface are difficult to carry out separately from a major redesign of the whole environment. Alternatively, a reconfiguration of the standard interface could conceal those portions of the interface that are not redesigned and that could clearly interfere with the environment under analysis. However, both are beyond the scope of this study at its current stage.

Our redesigned UIL is meant not only to maximize the application's usability, but also to fulfill a tutorial goal regarding the EUPL. We consider a computer icon to be a visual sign that should transmit as clearly as possible a concept related to a specific state or action (e.g. the icon in Figure 5a expresses in a universal way the sense of "doubt" and "questioning"). Such a sign need not be represented by a graphic language only (as may seem ideal): it can be associated with text to achieve greater clarity in message transmission (Figure 5b: association of image and text). However, computer icons (whose definition at this point diverges drastically from the original one proposed by Peirce (Peirce 1931)) may also be arbitrarily associated with meanings that are hardly anything but a personal secret code. Interestingly, such icons are provided by the Word UIL in its customization lexicon (Figure 5c: a happy-face icon that can only mean something to the specific user who decided to associate it with some arbitrary function).

Figure 5 – Examples of icons: (a) an icon expressing doubt or questioning; (b) an icon associated with text; (c) an arbitrarily chosen happy-face icon.

Our redesign exercise with the Word UIL followed these steps:
1. Choice of a group of functions expected to be the most frequently used and the most easily recognizable without the need of memorization.
2. Grouping of icons according to the menu to which they belong.
3. Spatial distribution relating icon groups and menus.

We found that spatial relations between the menu bar and icon groups could be an objective way of communicating the correspondence between textual and graphic readings, thus reinforcing analogical interpretive processes in both codes. Our proposal consists of an icon bar with signs that are meant to resolve some of the problems found in the semiotic analysis. In trying to tie UIL and EUPL together more seamlessly, specific menus and buttons were created to switch from one environment to the other. A button corresponding to the "Help" menu was added, in such a way that the textual (for menus) and graphic (for buttons) languages are kept distinct. Although this button may appear in the Word UIL as a result of user customization, we understand that the original design provided by the manufacturer suggests this feature is superfluous.

Icons and Groups
File: Open, Save, Print
Edit: Cut, Copy, Paste
View: Normal, Layout
Insert: File, Frame, Picture, Object
Tools: Bullets, Numbering, Spelling, Programming
Help

As for the code the user is likely to find when she chooses to press the "Programming" icon button ( ), we propose, in summary, the visual representation of control structures over the data flow, using representative geometric forms (de Souza & Ferreira 1994). We selected for the EUPL the commands most significant to the programming activity. The interpretation of these signs should bring up mental constructs, derived from the user-device dialogue, that give users the (correct) impression that Word is a machine (actually a symbolic machine) they can use as such, or modify if certain components are played with.

Execution: Run, Pause, Stop
Selection: If, Case
Iteration: For, Do-While

Even though the UIL and the EUPL naturally possess different codes, we tried to establish a standardized semiotic framework. Thus, we removed from the UIL the originally ambiguous reference to the entrance into the EUPL environment, and encouraged the use of the EUPL through the insertion of an ostensive icon in the toolbar ( ). This inclusion makes the message we want to transmit to the user (that the EUP environment can bring them as many facilities as cut & paste, and the like) clearer, and shows the relative increase in importance our design assigns to WordBasic as compared to the original design. The macro options and their corresponding dialog box were replaced by a "Programming" option under the Tools menu.

In the extension (programming) environment we propose a new toolbar and a noticeable change in the mouse pointer while in the EUPL environment. We also manage focus when the user switches back and forth between the EUP environment and the editing UI environment, and we have made sure that no signs common to both codes have different meanings. An editing menu specific to programs and the disabling of the mouse on Word documents while within an EUPL window are among the new design features, meant to delimit the scope of the action the user is now undertaking. The active document becomes a working object, or a parameter to the program. Another feature is a conceptual change of focus expressed through the choice of the working window, which enables the user to edit a text and a program simultaneously. As soon as the user enters the extension environment for program editing, she is presented with the screen shown in Figure 6:

Figure 6 – Extension environment for program editing.

In this redesigned extension environment, we tried to maintain a semiotic continuum with the UIL. Our programming paradigm capitalizes on the code generation resulting from macro recording at the UIL. This disclosure function (DiGiano 1996) is used to improve Word's usability and serves as a tutorial for the EUPL. As can be seen in Figure 6, there are two basic types of resources in the EUPL: construction and execution resources. The former consist of elements which, when clicked and dragged, generate program code. The latter consist of elements which, when clicked, accomplish an action. The construction resources can be divided into two categories:
• construction resources which belong to the internal model of the application
• programming resources in general

For example, it would be possible to program a "Save As" in a single step in the following way: (a) the user drags and drops the icon or a menu item into the programming area; (b) she is presented with a dialog box to be filled in with the desired parameters (see Figure 7); and (c) upon clicking "OK", the generated code appears in a macro window, ready to be edited or executed.

Figure 7 – Dialog box for code generation via drag and drop.

This programming paradigm has a strong tutorial nature. In this fashion we manage to transmit the concepts of variable, constant, and default value. At a more global level, we have the option of generating comments which introduce further programming concepts to the user. In our redesigned version of the "WordBasic" language, the pattern used for the commands which access Word's internal model is <Action>.<Object> (e.g. Save.File). This action-oriented style is quite opposed to the object-oriented style of the original macro language, but it has the advantage of being in line with the user's most probable state of mind (i.e. that of specifying specific actions over classes of objects, rather than specifying general actions over specific types of objects). It frees the code from the confusion involving menus and emphasizes the idea that text editing is in fact the performance of actions over objects. Besides that, this modification maintains a homogeneous level of articulation with that of the UIL. Nevertheless, this pattern could only be implemented in Word at the expense of an extensive redesign of Word's internal model. This point strongly reinforces our belief that Semiotics should be playing a bigger role in interface design. Only far-reaching and deeply-understood analyses of desirable scenarios in HCI can pave the road to future upgrades and extensions of software functionality without loss of the investment in previous versions of applications.
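To make the proposal concrete, here is a sketch of what the code generated for the "Save As" example of Figure 7 might look like under the pattern just described, next to its original WordBasic counterpart. Everything in the block is hypothetical: the redesigned language was never implemented, the file name is invented for illustration, and the generated comments merely exemplify the tutorial intent described above.

    Sub MAIN
        ' Original WordBasic form (argument list attached to a menu-named command):
        '     FileSaveAs .Name = "REPORT.DOC", .Format = 0
        ' Proposed <Action>.<Object> form, with automatically generated comments:
        ' FileName$ is a variable; "REPORT.DOC" is the value typed in the dialog box.
        FileName$ = "REPORT.DOC"
        ' .Format = 0 is the default value (a standard Word document).
        Save.File .Name = FileName$, .Format = 0
    End Sub

The comment lines stand for what the comment-generation option would be expected to produce, introducing the notions of variable, constant, and default value in the user's own working context.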

Specific programming functions can be divided into two classes: code editing commands and code execution commands. The former are presented in a specific editing menu, and the latter in a toolbar in the code visualization area. In non-automatic code generation, the program editing resources are presented in a specific menu. Although we have not gone into code execution in detail, some suggestions can be made for the interaction:

• Run Step by Step – shows the line being executed and the variables map. When a function call is reached, the user is asked whether she wants to execute it step by step as well (also called "trace into").
• Run using...
	· Active Document: executes the program on the current document and asks the user whether she wants to resize the windows so that the program and the document on which it is being executed are placed side by side. A message window is shown when the program finishes executing. During execution there must be continuous visual feedback.
	· Several Documents: the user can choose more than one document on which the program shall act. It works in the same way as the previous item, but on several documents. This option has a tutorial function, in the sense that it shows the user what a loop is.

6. Concluding Remarks

With the suggestions made in this paper, we intend to have (a) given a semiotic account of empirically observable flaws in an existing application's interface and extension environment, and (b) demonstrated that, with semiotic explanations for these flaws, designers can have a clearer notion of what problems to attack and why. With semiotic theory as a basis, repeating the same mistakes can be avoided and superfluous changes can be spared. Although we have not proposed an extensive redesign, we have spotted some of the theoretical underpinnings of macro language programming in Word and have striven to reduce the discontinuity between the Word user interface and programming environments by means of:
· menu restructuring
· visual grammaticalization of interface widgets
· insertion of an automatic explanation of the generated code
· insertion of comments in non-automatic code generation
· abstract code visualization

In the HCI literature there is an increasing awareness of the importance of end-user programming as a complementary facility for software in general (Myers 1992; Nardi 1993). In particular, the notion of a bridge from the interface to the programming environment has been proposed before (DiGiano 1996), although a semiotic approach was not taken. We firmly believe that Semiotics adds a qualitative factor to the analysis and design of user interfaces in general, since it can provide explanations and make predictions that are theoretically connected to each other. Without it, Do's and Don'ts are more likely to be used, causing the kind of local solutions we have observed in many of the design options in this case study. Moreover, a global semiotic approach shows that designing software is much more like synthesizing languages (i.e. semiotic systems) than like programming, a perspective that certainly has deep consequences for software education and practice.

Acknowledgements

We would like to thank Guy Perelmuter for his great contributions to the contents of this paper.

References

Andersen, P.B. (1990) A Theory of Computer Semiotics. Cambridge. Cambridge University Press.
de Souza, C.S. & Barbosa, S.D.J. (1996) End-User Programming Environments: The Semiotic Challenges. PUC-RioInf MCC 19/96. Rio de Janeiro, RJ. Jun. 1996.
de Souza, C.S. & Ferreira, D.J. (1994) "Especificações Formais para Linguagens Visuais de Programação". Proceedings of SIBGRAPI'94. Curitiba, PR. SBC and UFPR Press. pp. 181–188.
de Souza, C.S. (1993) "The Semiotic Engineering of User Interface Languages". International Journal of Man-Machine Studies, 39, pp. 753–773.
de Souza, C.S. (1994) "Testing Predictions of Semiotic Engineering in Human-Computer Interaction". Anais do Simpósio Brasileiro de Software. Curitiba, PR. Oct. 1994. pp. 51–62.
de Souza, C.S. (1996) The Semiotic Engineering of Concreteness and Abstractness: From User Interface Languages to End User Programming Languages. Dagstuhl Seminar on Informatics and Semiotics. 1996.
DiGiano, C. (1996) A Vision of Highly-Learnable End-User Programming Languages. Position Statements of Child's Play'96.
Draper, S.A. (1986) "Display Managers as the Basis for User-Machine Communication". In Norman and Draper (eds.) User-Centered System Design. Hillsdale, NJ. Lawrence Erlbaum and Associates.
Hutchins, E.L., Hollan, J.D. & Norman, D.A. (1986) "Direct Manipulation Interfaces". In Norman and Draper (eds.) User-Centered System Design. Hillsdale, NJ. Lawrence Erlbaum and Associates.
Kammersgaard, J. (1988) "Four different perspectives on human-computer interaction". International Journal of Man-Machine Studies, 28, pp. 343–362.
Myers, B. (1992) Languages for Developing User Interfaces. London. Jones and Bartlett Publishers, Inc.
Nadin, M. (1988) "Interface Design and Evaluation — Semiotic Implication". In Hartson and Hix (eds.) Advances in Human-Computer Interaction (Volume 2). Norwood, NJ. Ablex.
Nardi, B. (1993) A Small Matter of Programming. Cambridge, MA. MIT Press.
Norman, D. and Draper, S.A. (1986) (eds.) User-Centered System Design. Hillsdale, NJ. Lawrence Erlbaum and Associates.
Peirce, C.S. (1931) Collected Papers. Cambridge, MA. Harvard University Press.
Shneiderman, B. (1986) Designing the User Interface. Reading, MA. Addison-Wesley.
