Coupling A UI Framework with Automatic Generation of Context-Sensitive Animated Help

Piyawadee "Noi" Sukaviriya and James D. Foley
Dept. of Electrical Engineering and Computer Science
The George Washington University, Washington, DC 20052
E-mail: [email protected], foley@seas.gwu.edu

Abstract

Animated help can assist users in understanding how to use computer application interfaces. An animated help facility integrated into a runtime user interface support tool requires information pertaining to user interfaces, the applications being supported, the relationships between interface and application, and precise detailed information sufficient for accurate illustrations of interface components. This paper presents a knowledge model developed to support such an animated help facility. Continuing our research efforts towards automatic generation of user interfaces from specifications, a framework has been developed to utilize one knowledge model to automatically generate animated help at runtime and to assist the management of user interfaces. Cartoonist is a system implemented based on this framework. Without the help facility, Cartoonist functions as a knowledge-driven user interface. With the help facility added to Cartoonist's user interface architecture, we demonstrate how animation of a user's actions can be simulated by superimposing animation on the actual interface. The animation sequences imitate user actions, and Cartoonist's user interface dialogue controller responds to animation "inputs" exactly as if they were from a user. The user interface runtime information managed by Cartoonist is shared with the help facility to furnish animation scenarios and to vary scenarios to suit the current user context. The Animator and the UI controller are modeled so that the Animator incorporates what is essential to the animation task and the UI controller assumes responsibility for the rest of the interactions, an approach which maintains consistency between help animation and the actual user interface.
Introduction

Animation has been used increasingly in program interfaces. Its attractiveness, obviously, is in the dynamic nature of its graphical illustration, which inherently provides a mapping for a user's visualization of program algorithms and interface dynamics. However, ways to provide help in understanding such dynamics have not kept up with the rapid growth in the use of graphical interfaces. Graphical animation can enhance help's capability to illustrate the "how to" of user interfaces, the kind of help question often asked by users. Graphical animation, or even just graphical illustrations which portray the sense of animation, when used with textual explanations also enhances human performance in following procedural instructions, as indicated by experimental results in [Booher 75] and [Palmiter et al 89]. Creating animation, however, requires representing information concerning conceptual, semantic, syntactic and lexical aspects of an application, and often takes a considerable amount of programming, including the tedious and time-consuming fine-tuning of animation precision.

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that the copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.

© 1990 ACM 089791-410-4/90/0010/0152 $1.50

Taking advantage of the knowledge representation developed in the User Interface Design Environment (UIDE) [Foley et al 88a], [Foley et al 88b], which captures knowledge about an application and its interface, we have extended the representation to fully support runtime animated help as part of the user interface to an interactive application. Our goal is to take the burden off application programmers by making UIDE automatically generate animated help at runtime from application and interface knowledge.

Related Work

Domain-dependent approaches to automatic help by [Fenchel & Estrin 82], [Hecking 87] and [Yu & Robertson 88] propose help systems tightly coupled with their underlying applications. A basic problem with this approach is its lack of a general framework which would allow different applications to benefit from the same kind of "intelligent" help. Other help systems, [Kemke 87] and [Fischer et al 85], use the knowledge-based approach and maintain a certain amount of domain independence. The latter approach is parallel to the way we base user interface design on a domain-independent knowledge model. The motivation is partly the benefit of the reusability of user interface environments for rapid prototyping purposes.

Traditional help systems normally deliver general help information, and users have to relate the information to their current problems at hand. To reduce the gap between help information and the actual user context, we want to make animated help sensitive to user contexts. Help explanations can be tailored to the current application state if they are generated dynamically at runtime, utilizing runtime information about application states. With accessible application and UI specifications, help explanations appropriate to particular user contexts can be generated using current states and backward reasoning, the approach used in AI planning research.

While all the help systems mentioned above use textual explanations in their help presentations, a different set of systems results from the recognition of the need for dynamic graphics in help and is dedicated to the idea of providing animation or pictorial explanations in help. CADHELP [Cullingford et al 82] is slanted towards natural language problem solving but lends its representations to its companion system, GAIL [Neiman 82], to produce animation. However, their animation appears to be runtime context independent, and their representations are not sufficiently generalized and modularized to support a concurrent user interface design process. APEX [Feiner 88] has its focus on synthesizing pictorial explanations and does not support animation per se, but it has a knowledge base structure and an approach to making context-sensitive explanations of procedural tasks similar to ours. A part of APEX's runtime information is structured specifically for the domain it supports.

Our contribution to user interface environment research is twofold. We contribute the initial groundbreaking effort in coordinating automatic generation of animated help with an integrated user interface environment. The extensions to the UIDE knowledge representation also add a finer granularity to the development model for user interfaces and confirm the applicability of our approach. Cartoonist is a system built to illustrate how a user interface framework can be organized to accommodate both runtime UI dialogue control and provision of context-sensitive animated help, utilizing the same UI specification. Cartoonist's help system, referred to in this paper as the Animator, retrieves the design specifications of an application, from its conceptual design to the lexical details of its intended user interface, and uses that knowledge to complete animated help scenarios at runtime. Cartoonist's runtime contextual information is used intensively to instantiate appropriate values for variables in the scenarios and also to vary help scenarios to suit the user's current context at the time help is requested.

The UIDE project focuses on issues in building an integrated environment for user interface design, ranging from acquisition of high-level interface specifications [Murray 89], transformations of designs into different interface paradigms [Foley et al 88a], automatic generation of design layouts [Kim 90], and runtime support of user dialogue control [Foley et al 88b], to ongoing work to integrate help support into the user interface [Senay et al 89]. The present system, Cartoonist, is based on an earlier partial prototype of a context-sensitive animated help system involving a direct manipulation style interface to the Unix directory system [Sukaviriya 88]. Related systems which center around the use of user interface specifications to generate runtime user interfaces are MIKE [Olsen 86], MICKEY [Olsen 89] and Chisel [Singh 89]. None of these systems, however, has explicit interface representations which can be utilized for other interface purposes such as automatic help support, thereby limiting the usefulness of their specifications. UIDE has established a firm knowledge representation base upon which the work in this paper builds. Many animated help design decisions support the global objective of refining and improving the UIDE model, while maintaining a parallelism with knowledge-base research on help systems elsewhere. Hence we hope our research in extending help support presents a synthetic and synergistic effort with respect to help research in general.

This paper will discuss how Cartoonist generates context-sensitive animated help, emphasizing the knowledge representations which best support the provision of the animation. The example application knowledge base used throughout this paper is for a simple digital circuit design program. The system and its representation model are general and can support other applications, given that appropriate knowledge bases and application routines are provided.

The next three sections will discuss:
• Representations and models of application knowledge, interface knowledge, and interaction techniques,
• How this knowledge is used to control and disambiguate user dialogues, and
• How the Animator combines the same knowledge to generate animation scenarios.

The emphasis of this paper is mainly to demonstrate how context-sensitive animated help is generated from this architecture. Though the same knowledge model is shared by the "dialogue controller", we will place less emphasis on the latter. Therefore, the functionality of some components in the architecture not directly related to animated help will not be discussed unless their explanations are necessary to establish context.

Architecture

UIDE employs knowledge of an application to reason about user interface design and to drive user interface functionality at runtime. Cartoonist employs the same general approach with an emphasis on the runtime support. An application interface running on top of Cartoonist is driven from its UI specifications, and changes in the specifications are immediately reflected in the runtime interface. Context-sensitive animated help is driven from the same knowledge base. Consequently, it changes its presentations appropriately without any additional specifications from the application designer.

Figure 1 illustrates Cartoonist's general architecture. The "dialogue controller" often mentioned in this paper refers to the combined work of the UI Coordinator, the Input Controller and the Output Controller. Our specific approach is to superimpose animation on the interface and to let the actual dialogue control provide the feedback and response simulations needed to complete the animation. The architecture is designed such that the user's interactions and the Animator imitating user interactions are indistinguishable to the dialogue controller.

Figure 1: Cartoonist's general architecture.

Knowledge Representations and Model

Application Knowledge

A designer lays out an application program in Cartoonist by specifying objects in the application together with their attributes. The designer also specifies actions which can be invoked by users to manipulate application objects. Each action is defined with parameters required to complete the action, pre-conditions (the conditions which must hold true prior to the invocation of the action), and post-conditions (the conditions which will hold true after the execution of the action). Essentially, pre- and post-conditions represent the causality of actions. Cartoonist employs the semantic information embedded in these conditions to derive help explanations at runtime. Pre- and post-conditions in Cartoonist are represented following the first-order predicate calculus convention. Readers may wish to refer to [Gibbs et al 86],


[Foley et al 88a] and [Foley et al 88b] for further descriptions of the evolution and use of parameter, pre- and post-condition representations in UIDE.

To illustrate how an application is represented in Cartoonist, a simple digital circuit design program is used as an example. Readers may refer to Figure 7 to visualize an interface design for this application.

Figure 2a shows the knowledge scheme of the action representation. Figure 2b illustrates an example of the action intended for creating a NAND gate in the application. Notice that the definition at this point is at a high level and has not yet specified the nature of interactions. Questions such as "how to create a NAND gate" or "how to connect an output of a gate to an input of another gate" in the digital design application can be answered by the Animator using this level of information. Meanwhile, it is the same action representation, with its pre- and post-conditions, which allows the dialogue control to determine the semantics and syntax of user interactions in the application context.

Each parameter of an action also has constraints which limit the values acceptable to the action. The basic constraints used in Cartoonist are borrowed from the planning research by Wilkins [Wilkins 89], whose constraints consist of class specifications for objects, range and limitation specifications for values, and relationships among parameters. Parameter constraints instantiate example values for the Animator, such as which object of a certain class to use or how many degrees of rotation to use. These values are then used in animated help scenarios. Constraints are used straightforwardly by the dialogue control when interacting with the user, to limit the values acceptable as parameters of the action.

To complete an action representation, the application designer specifies an action routine associated with the action. The application designer has to provide the application routine at runtime. The routine is invoked whenever the action and all of its parameters have been either entered by the user or simulated by the Animator, and proved valid by Cartoonist.
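To make the schema concrete, here is a minimal sketch in Python of an action with constrained parameters, pre- and post-conditions, and an action routine. This is purely illustrative: all class and field names are our own, not Cartoonist's actual structures.

```python
# Hypothetical sketch of the action schema of Figure 2a; names and
# structure are our own illustrative choices, not the original system.

class Action:
    def __init__(self, name, parameters, pre, post, routine):
        self.name = name
        self.parameters = parameters  # {param: [constraint predicates]}
        self.pre = pre                # predicates that must hold before invocation
        self.post = post              # effects applied to the state afterwards
        self.routine = routine        # application routine supplied by the designer

    def invoke(self, state, args):
        # The routine fires only once every pre-condition holds and every
        # parameter value satisfies its constraints (cf. "proved valid").
        if not all(p(state) for p in self.pre):
            raise ValueError("pre-condition failed")
        for param, constraints in self.parameters.items():
            if not all(c(args[param]) for c in constraints):
                raise ValueError(f"constraint failed for {param}")
        result = self.routine(**args)
        for effect in self.post:
            effect(state)             # post-conditions describe the new state
        return result

# createNAND in the style of Figure 2b: fanIn must be an integer in [2, 5],
# a graphics window must exist, and afterwards the object exists in the KB.
create_nand = Action(
    "createNAND",
    parameters={"fanIn": [lambda v: isinstance(v, int),
                          lambda v: 2 <= v <= 5]},
    pre=[lambda s: s["CurrentGraphicsWindow"]],
    post=[lambda s: s["objects"].append("NAND")],
    routine=lambda fanIn: f"NAND gate with {fanIn} inputs",
)

state = {"CurrentGraphicsWindow": True, "objects": []}
print(create_nand.invoke(state, {"fanIn": 3}))  # NAND gate with 3 inputs
```

Encoding pre- and post-conditions as executable predicates and effects is one way to capture the "causality of actions" the paper describes; the original representation uses first-order predicate calculus rather than callables.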

action ( parameter : constraints;
         parameter : constraints; ..... )
  pre-conditions: ( predicate; predicate; ... )
  post-conditions: ( predicate; predicate; ... )
  action routine: routine-name ( parameter, parameter, ... )

Figure 2a: A general action representation schema.

createNAND ( object : Class object NAND;
             fanIn  : Class fanIn Integer; Range fanIn 2 5 )
  pre-conditions: ( exist CurrentGraphicsWindow )
  post-conditions: ( existInKB object )
  action-routine: createNAND ( name, location, fanIn )

Figure 2b: An action representation for createNAND.

Interface Knowledge

A user interface can be viewed as a mini-application in itself. Cartoonist maintains the representations of interface objects, such as menu bars, windows, scrollbars, dialogue boxes, etc., and the representations of interface actions, the operations the user can use to manipulate interface objects.

With respect to representation, there is no difference between interface actions and application actions. Both kinds of actions have parameters with constraints, pre- and post-conditions, and action routines; hence, the same representation scheme is used for both. However, separating actions into two categories conceptually helps draw a clean separation between application knowledge to be acquired from the application designer and user interface knowledge. This conceptual separation, as will be shown later in the paper, allows the Animator to retrieve application interface specifications systematically and effectively. The separation also has the benefit of allowing the application designer to design an application at a high level without getting bogged down with interface details. The job of designing the interface can be passed along to a user interface designer, who can use the application conceptual design as a starting point.

selectCommandPulldownMenu ( menu : Class menu UIDEPulldownMenu;
                            menuItem : Class menuItem MenuItem )
  pre-conditions: ( visible menu )
  post-conditions: ( )
  action-routine: nil

Figure 3a: selectCommandPulldownMenu action representation.

selectCommandIcon ( icon : Class icon UIDECommandIcon )
  pre-conditions: ( visible icon )
  post-conditions: ( highlighted icon )
  action-routine: nil

Figure 3b: selectCommandIcon action representation.

Such a separation results in a clear distinction between the pre- and post-conditions used in the two action categories: the ones used in application action specifications describe application semantics, while the ones used in interface specifications describe interface semantics independent of the application using them. Consequently, the Animator, as well as the user interface controller, can be fine-tuned in its presentation of these actions, independent of the application for which an interface with help is being provided.
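Because interface actions reuse the application action scheme, the definitions of Figures 3a and 3b can be written down as plain data whose conditions mention only interface objects. The sketch below is hypothetical (the field names and the check are ours), but it illustrates why such actions are application-independent:

```python
# Interface actions expressed in the same representation scheme as
# application actions; the pre-/post-conditions refer only to interface
# semantics ("visible", "highlighted"), never to application state.

select_command_pulldown_menu = {
    "parameters": {"menu": "UIDEPulldownMenu", "menuItem": "MenuItem"},
    "pre":  [("visible", "menu")],
    "post": [],          # selecting from a menu changes no state by itself
    "action-routine": None,
}

select_command_icon = {
    "parameters": {"icon": "UIDECommandIcon"},
    "pre":  [("visible", "icon")],
    "post": [("highlighted", "icon")],
    "action-routine": None,
}

def is_application_independent(action):
    """True if every condition mentions only the action's own parameters,
    i.e. nothing from any particular application's knowledge base."""
    names = set(action["parameters"])
    conditions = action["pre"] + action["post"]
    return all(arg in names for _, arg in conditions)

print(is_application_independent(select_command_icon))  # True
```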

Figures 3a and 3b show examples of interface actions, selectCommandPulldownMenu and selectCommandIcon, two of the styles used to invoke application actions.

Knowledge about Interaction Techniques

In Cartoonist, an interaction technique representation captures simple lexical information about how the user goes about manipulating input devices to interact with objects on the screen, such as a NAND gate or a pulldown menu. Examples of interaction techniques are mouseClickObject and mousePulldownMenu. In order to capture the lexemes of interaction techniques, Cartoonist represents each interaction technique as a sequential list of lexical steps required to complete the technique. While the Animator only needs to know about the input lexemes in order to animate how a user's action would be performed in an interface, the interaction technique representation also has to meet the requirements of the dialogue controller, which needs to know about both input and output lexemes. Sharing the same technique representation ensures consistency between animation and actual interactions, a consistency critical to effective procedural help presentation. The interaction technique representation presented below has been developed to support the needs of both the Animator and the interface controller.

Every interaction technique has specifications of input tokens. The token specification allows the Animator to simulate user inputs at the token level and to pass its simulated tokens to the dialogue control after animating a particular input step. The token specifications are also used by the dialogue control to check the user's inputs by matching the actual input tokens against the token specifications in technique representations. Currently, a limited number of input, output and intermediary primitives are supported for mouse and keyboard devices. A user interface designer can conveniently create a new interaction technique by combining existing primitives. The technique is then added to Cartoonist's technique library and can be used at once by both the Animator and the interface controller. For example, if a specified key needs to be pressed before mouse movements, the designer can create a new technique which has a press-key step preceding the mouse primitives for moving, with a release-key step as the last step. The Animator will animate pressing the key first, then moving the mouse, followed by releasing the key.

The interaction technique representation scheme provides a declarative form of knowledge about interaction techniques for the dialogue controller. The knowledge is used to capture input events generated from the user's operation of input devices and to define appropriate lexical feedback between input steps. The representation of interaction techniques used by both the Animator and the dialogue controller not only guarantees consistency between help presentations and the actual interactions, but also makes the addition of new interaction techniques possible without internal modification of the dialogue control. Figure 4a shows the general representation scheme of an interaction technique, while Figures 4b and 4c show the representations of the mouseClickObject and mousePulldownMenu techniques. In addition to input primitives, such as "pressMouseButton:" and "releaseMouseButton:", one can represent output or feedback primitives such as "showPulldownMenu: menu at: point" and "menu: menu finalFeedbackAt: point", as well as intermediate primitives such as "verifyPoint: point for: object". Intermediate primitives associate interaction techniques with contextual information by checking the incoming data against the purpose for which the technique is designed; for instance, a mouseClickObject is only a valid technique identification if the user's point falls within an object.

interaction-technique ( parameter, parameter, ... )
  steps: ( primitive primitive .... )
  tokens: ( token specification token specification .... )
  associated with device: ( device device ... )

Figure 4a: A representation scheme for interaction technique.

mouseClickObject ( object )
  steps: ( pressMouseButton: point                  [mouse point buttonPressed]
           verifyPoint: point forObject: object     [null]
           releaseMouseButton: point                [mouse point buttonReleased] )
  associated with device: ( mouse )

Figure 4b: mouseClickObject technique representation.

mousePulldownMenu ( menu, menuItem )
  steps: ( pressMouseButton: point                  [mouse point buttonPressed]
           verifyPoint: point forMenu: menu         [null]
           showPulldownMenu: menu at: point         [null]
           stillPressMouseButton: point             [mouse point buttonStillPressed]
           menu: menu feedbackAt: point             [null]
           releaseMouseButton: point                [mouse point buttonReleased]
           verifyPoint: point forMenuItem: menuItem [null]
           menu: menu finalFeedbackAt: point        [null] )
  associated with device: ( mouse )

Figure 4c: mousePulldownMenu technique representation.

Additional primitives, if required to support a new kind of interaction technique or a more complicated interaction, can be added by programming a corresponding method in the Input Controller, the UI Coordinator or the Output Controller for an input, intermediate or output primitive, respectively. A corresponding animation method will also have to be added to the Animator. Currently, Cartoonist does not support a primitive which reads data from more than one device at a time. Polling inputs from multiple devices can be added as an extension to Cartoonist, which would then recognize and support additional multiple-device primitive method definitions a designer may wish to include.

Relationships between Application Actions, Interface Actions and Interaction Techniques

Information critical to the Animator in performing its task successfully is that of the links between an application and its interface specifications. The modeling of relationships among the knowledge components described above is described in the following paragraphs.

In an application action representation, the action itself and its parameters are information units to be obtained from the user at runtime to invoke its corresponding application routine. Assigning an information unit to an interface action relates that particular unit to a user interface task. The assignment is specified in Cartoonist as a table mapping the left side, containing the action and its parameters, to the interface actions on the right side. Figure 5a shows the createNAND action representation with its mapping. From Figure 5a, the action name createNAND is linked to the selectCommandIcon interface action (whose mapping table is shown in Figure 5b), which has its own parameter, the specific command icon createNANDIcon of createNAND. (The icon is assumed to be predefined as an interface object.) The location parameter is linked to the action selectPoint. Notice that the right side of the table can also have one of two keywords: "createdByAction" and "implicit". Both keywords designate that no interaction is required from the user; the former implies that the value for that parameter is to be generated by the action routine, and the latter implies that the value is to be taken from the current default attribute values. Interface actions use mapping tables as well. Figure 5b shows the mapping table of the interface action selectCommandIcon. The information units in interface actions are linked to either

createNAND ( name     : Class name String;
             location : Class location Point; PointIn location CurrentGraphicsWindow;
             object   : Class object NAND;
             fanIn    : Class fanIn Integer; Range fanIn 2 5 )
  .....
  Mapping Table: ( createNAND : selectCommandIcon createNANDIcon;
                   name       : enterStringInDialogueBox nameBoxWithOK name;
                   location   : selectPoint location;
                   object     : createdByAction;
                   fanIn      : implicit )

Figure 5a: Linking the createNAND action to its interface through its mapping table.

selectCommandIcon ( icon : Class icon UIDECommandIcon )
  ....
  Mapping Table: ( selectCommandIcon icon : mouseClickObject icon )

Figure 5b: Linking the selectCommandIcon action to interaction techniques.

other interface actions or directly to interaction techniques to be employed for their tasks. For simplicity, we try to keep syntactic units of interface actions linked directly to interaction techniques.

Implication of the Knowledge Model for Help

Our knowledge model provides an organized view of the various kinds of information necessary to provide procedural animated help. With only the knowledge of interaction techniques, a limited form of animated help can be delivered, i.e., demonstrating characteristics of different interaction techniques. This is close to having a statically defined animation scene for each interaction technique. Providing animated help based only on this level of information is in itself useful as an aid for the user to visualize interface dynamics.

With the interface actions linked to interaction techniques, a help facility can provide semantic and syntactic information about the behavior of the user interface independent of the application. Help based on interface knowledge and interaction techniques is a sufficient response to a user's questions in many cases, such as showing how to move a window around, browse a menu, or scroll up and down a window. In these situations, changing states of interface objects are not associated with changes in the application state. However, a user's procedural help questions often imply the need to learn the syntax and the semantics of actions on the objects presented on the screen which are directly related to an underlying application. The linking from application actions to interface actions to interaction techniques provides a bridge for the Animator to obtain a complete line of information to furnish animated help scenarios. Though each layer of representation by itself can support certain aspects of what the user needs to know, and can be adopted for general help generation in its own terms, the linking of these layers is the key to providing a full spectrum of help information from the lexical to the conceptual aspects of an application.

Using Knowledge to Control User Dialogues

Cartoonist handles a user interaction in a bottom-up fashion. The job of the dialogue controller starts with recognizing the kind of interaction the user has performed. Once it is determined, the dialogue controller places the interaction within the user interface context by checking against all interface actions linked to that particular technique and identifying the single interface action in which the interaction fits. Similarly, once an interface action is determined, the dialogue controller further places it in the application context by identifying all application actions linked to the interface action and identifying an appropriate application action.

Cartoonist recognizes a user's interaction by matching input tokens against the token specifications of interaction techniques. When a match is found, the matched technique delivers values (declared as its variables) to the UI Coordinator to place it in the user interface context. An example is when the user clicks on the createNAND icon: the mouseClickObject technique passes the object createNAND icon to the UI Coordinator, which then recognizes it as the selectCommandIcon action from the user. While tokens are coming in one at a time for checking, intermediate and output primitives included in the technique's specification are processed to associate the technique with context and to provide appropriate feedback.
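The bottom-up strategy just described, from input tokens to technique to interface action to application action, can be sketched roughly as follows. The lookup tables, token tuples and function names are our own toy illustration of the mapping-table idea, not Cartoonist's data structures:

```python
# Bottom-up dialogue control sketch (illustrative data and names only).

TECHNIQUES = {
    # steps: (primitive, expected input token, or None for [null] primitives)
    "mouseClickObject": [
        ("pressMouseButton: point",              ("mouse", "buttonPressed")),
        ("verifyPoint: point forObject: object", None),
        ("releaseMouseButton: point",            ("mouse", "buttonReleased")),
    ],
}
TECHNIQUE_TO_INTERFACE = {"mouseClickObject": ["selectCommandIcon", "selectPoint"]}
ICON_TO_APPLICATION = {"createNANDIcon": "createNAND"}

def token_specs(technique):
    """Input tokens only: what the Animator simulates and what the
    dialogue control matches against actual user input."""
    return [tok for _, tok in TECHNIQUES[technique] if tok is not None]

def recognize(input_tokens, hit_object, screen):
    # 1. Lexical level: which technique do the incoming tokens match?
    technique = next(t for t in TECHNIQUES if token_specs(t) == input_tokens)
    # 2. Interface level: the single linked interface action the interaction
    #    fits, given what is currently on the screen.
    fits = [a for a in TECHNIQUE_TO_INTERFACE[technique]
            if hit_object in screen.get(a, ())]
    if len(fits) != 1:
        raise LookupError("interaction is ambiguous or unknown")
    # 3. Application level: the icon's mapping names the application action,
    #    e.g. clicking createNANDIcon selects createNAND.
    return technique, fits[0], ICON_TO_APPLICATION[hit_object]

screen = {"selectCommandIcon": {"createNANDIcon", "createNORIcon"}}
tokens = [("mouse", "buttonPressed"), ("mouse", "buttonReleased")]
print(recognize(tokens, "createNANDIcon", screen))
# ('mouseClickObject', 'selectCommandIcon', 'createNAND')
```

The point of the sketch is that each level only consults declarative tables, which is what lets the Animator reuse exactly the same knowledge to simulate inputs.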

The UI Coordinator further places the selectCommandIcon action in the application context. In this case, it recognizes the action as "select" of the application action createNAND. The UI Coordinator then sets up the context for receiving further parameter inputs for the action. The UI Coordinator uses both parameter constraints and the mapping specifications in action representations to identify a user action in both interface and application contexts. The bottom-up matching approach has proven to be efficient as a general dialogue control strategy in responding to user inputs.

Context-Sensitive Animated Help Generation

Help Questions

Animated help is intended for help questions equivalent to the kind beginning with "Show me how to ...". A question posed to Cartoonist can be interpreted in two ways: as a question about one of the actions in the knowledge base, or as a question about an activity or task consisting of more than one action in the knowledge base. Cartoonist does not endeavor to interpret natural language questions.

In the first case, the action representation can be retrieved from the knowledge base with its required parameters. Only parameters which require interactions with users are retrieved. The action itself and its parameters form a script which has only one action and in which no parameter values have been instantiated. In the second case, the script for the task, which contains the list of actions required to perform the task, is assumed to be predefined by the application designer. This predefined script approach is similar to the approach used earlier in our preliminary system developed to explore the idea of context-sensitive animated help [Sukaviriya 88], but the script is defined at a much higher level in Cartoonist. The Animator can then retrieve the associated parameters for all the actions. These actions again form a script to be refined further for animation. The Animator uses the script to sequence animation, the knowledge base to coordinate animation within the actual user context, and animated "characters" to represent the input devices relevant to the task or action being animated.

Currently, we have not yet worked on a robust interface to help; the interface to help is therefore still simplistic. The user clicks on a help icon, which brings up a dialogue box where a command name or a task name can be entered. This certainly assumes the user already knows how to request help. Once a robust interface to help exists, help on how to get help would certainly be a valuable part of that interface. Our plan is to allow the user to indicate the condition which she wants to end up with. The Animator would then decompose the condition into post-conditions, which are then used as the starting point of our reasoning process to generate a corresponding help explanation.
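The two cases of script formation, a single-action question and a designer-predefined task script, might be sketched as follows. The knowledge base, the task name and the helper are our own toy example, with parameters left uninstantiated exactly as the text describes:

```python
# Sketch of help-script formation; all data below is an illustrative stand-in
# for the application knowledge base, not Cartoonist's actual contents.

ACTIONS = {  # user-supplied parameters per application action
    "createNAND":   ["name", "location", "fanIn"],
    "connectGates": ["fromOutput", "toInput"],
}
TASK_SCRIPTS = {  # multi-action tasks predefined by the application designer
    "wire two NAND gates": ["createNAND", "createNAND", "connectGates"],
}

def help_script(question):
    """Case 1: the question names a single action; case 2: it names a
    predefined task. Parameter values start uninstantiated (None) and are
    filled in later from constraints and runtime context."""
    actions = [question] if question in ACTIONS else TASK_SCRIPTS[question]
    return [(a, dict.fromkeys(ACTIONS[a])) for a in actions]

print(help_script("createNAND"))
# [('createNAND', {'name': None, 'location': None, 'fanIn': None})]
print(len(help_script("wire two NAND gates")))  # 3
```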

Animated Characters

When one imagines animation, animated “characters” are naturally the center of attention. A primary purpose of animating what can be done within a user interface is to inform the user of what devices are to be used with which objects on the screen. Cartoonist representscharacters corresponding to input devices as “animated characters”, each of which has

Figure 6a: Three-button mouse animated character.

Figure 6b: Stylus animated character.

Figure 6c: Keyboard animated character.


views reflecting different states of input devices. For example, a one-button mouse would have one view of the mouse with its button pressed and another view with the button not pressed. Demonstration of the user clicking on an object with the mouse can be illustrated on the screen by first displaying the no-button-pressed view of the "mouse" character on top of the object, then switching to the button-pressed view, and then switching back to the no-button-pressed view again. Animated characters defined in Cartoonist are used together with existing objects, both those created by the application and those which are pre-defined interface objects, such as menus, command icons, etc., to perform animation scenarios. The animated characters are used solely in help presentation by the Animator. Figures 6a, 6b and 6c show some of Cartoonist's animated characters and their multiple views.

The Animator should not use an animated character of an input device which is not currently used or present in a particular environment. Correct selection of animated characters is guaranteed by different knowledge sources within Cartoonist. First of all, the designer should indicate the devices which will be used in the targeted environment, and interaction techniques dependent on non-existent devices in the design knowledge base should, as part of the initialization process, be rendered unavailable. If an interaction technique is flexibly designed, however, in such a way that any pointing device which yields the same input tokens can be used, then an initializing procedure will link the appropriate animated character of the pointing device actually available in a particular environment. A future extension of Cartoonist could add such an initializing procedure.

Instantiations of Variables used in Animation

Upon getting an uninstantiated script like the one shown above, the Animator tries to select participants in the animation from the runtime context. For a simple script, the action name part of the script is always linked to an interface action or to an interaction technique, and thus does not require instantiation. Values to be instantiated in a script are either objects, which can be application objects or predefined interface objects, or attribute values. When an object is required, it has to be selected from the runtime context. When an attribute value is required, an appropriate value can be chosen entirely based on its corresponding attribute information in the UI specification.

As mentioned earlier, parameter constraints are used by the Animator to instantiate values to be used in animation scenarios. Each constraint class is implemented so it works in three ways. If a value is passed to the constraint, it checks whether the value falls into its implied restrictions; this mode is used by the dialogue controller to verify user parameter inputs. If no value is passed to the constraint, it works in instantiation mode for the Animator. For example, in the case of object instantiation when the constraint class is Class, the constraint will retrieve and pass back all objects which belong to the specified class. When more than one constraint is specified for one parameter value, a later constraint will take the set of objects passed back by the constraint before it and pass back only those objects which satisfy its restrictions. In the case of instantiating an attribute value, constraints are used to give a value which fits the restrictions. The example script above is shown below after its parameter values are instantiated.

Instantiations of parameter values often depend on runtime context. Selection of objects from the set of currently existing objects in an application is one obvious example. Instantiations of values can also depend on attribute values of an object, for example, when a point has to be entered to indicate a new location of a NAND gate being created. If the point is constrained only to be within the extent of the current graphics window, the Animator first requests information on the current size of the window and then determines, at random, a point which falls within the window's extent. This approach guarantees accurate information even if the window size is occasionally changed at runtime. The approach is more flexible than that used in GAK [Neiman 82], in which the rectangular area constraining a point is statically encoded.

Context-sensitizing the Animation

Once values are instantiated for a script, the preconditions of the first action in the script are checked to make sure the action is ready to be performed in the current context. In this case, there is only one action in the script, and since the object has already been instantiated, the only precondition of the rotateComponent action, which states that the object has to be in the knowledge base, is satisfied. The script is now ready to be refined further for animation.

In case a precondition is not satisfied, the Animator does a backward search in the knowledge base for an action which has a post-condition matching the precondition and places it at the top of the script. If the action shares some values already instantiated in the previous top action, those values are propagated to the new top action. The script then undergoes the instantiation and context-sensitizing process again.

The backward search approach is somewhat similar to classical AI planning searches. Currently, Cartoonist employs only a simple backward search. It does not deal with any of the complicated problems planning research attempts to solve, such as the Sussman Anomaly problem [Nilsson 80]. Simple backward search is, however, sufficient because the search space in Cartoonist's UI specification is reduced tremendously, both by capturing application knowledge at a highly conceptual level and by fixing lower-level application interface design decisions in mapping tables. A more sophisticated backward chaining algorithm, which includes knowledge of individual users using a particular interface, has been proposed [Senay et al 89] to determine an appropriate path selection among multiple possible paths to achieve a goal. Such a backward chaining mechanism is a promising future enhancement which would certainly add user sensitivity to animated help presentation.
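The simple backward search described above can be sketched as follows. Predicates are plain strings here, and the openWindow action and both condition tables are invented for illustration; the real system reasons over its knowledge model rather than hand-written dictionaries.

```python
# Hypothetical pre- and post-condition tables for a few actions.
POST = {"createNAND": {"objectExists"},
        "openWindow": {"windowOpen"}}
PRE = {"rotateComponent": {"objectExists"},
       "createNAND": {"windowOpen"}}

def satisfy(script, context):
    """Prepend actions until every precondition of the top action holds
    in the given context (a set of already-true predicates)."""
    while True:
        unmet = PRE.get(script[0], set()) - context
        if not unmet:
            return script
        goal = unmet.pop()
        # backward search: find an action whose post-condition matches
        provider = next(a for a, post in POST.items() if goal in post)
        script.insert(0, provider)

print(satisfy(["rotateComponent"], context=set()))
```

Each prepended action becomes the new top of the script and is itself checked, which mirrors the repeated instantiation and context-sensitizing pass in the text.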

Traversing down the Knowledge Hierarchy

Once the top action's preconditions are satisfied, the Animator traverses down the knowledge hierarchy to refine the script further. The mapping table of the top action is used to retrieve the interface actions associated with it, which are then placed at the top of the script. The script again undergoes the instantiation and context-sensitizing process. Instantiation work for interface actions is not as intensive as for application actions, because interface objects accessed by interface actions are often already predefined in the mapping table. For instance, the specific menu and menu item, ComponentOperationsMenu and RotateItem, are defined for selecting the command rotateComponent, as shown in the refined script below from the previous example. These interface objects, therefore, can be used immediately by the Animator.

    selectCommandFromPulldownMenu ( ComponentOperationsMenu RotateItem )
    selectObject ( NAND1 )
    selectRadioButton ( angleBoxWithOK radioButtonFor90 90 )

Eventually, in traversing the knowledge hierarchy, the mapping will link to interaction techniques, and an interaction technique will show up as the first item in the script. The script is then ready to be partially animated from the representation of the interaction technique. The script below shows the next step after the above script has been refined further. Cartoonist will then animate whatever it can in the script from the top before it refines the rest of the script further. This way, checking preconditions of the top action is always done within the current context, hence reducing the complication of regression in backward chaining.

    mousePullDownMenu ( ComponentOperationsMenu RotateItem )
    selectObject ( NAND1 )
    selectRadioButton ( angleBoxWithOK radioButtonFor90 90 )

The Actual Animation

To animate an interaction technique, the Animator retrieves the representation of the technique and uses its lexical steps and token specifications in the animation. Only input steps are of interest to the Animator. For each input primitive, the Animator knows which view of which animated character is to be used, and how it should animate the character. The transition-based model of program animation, referred to as the Path-Transition paradigm [Stasko 90] (which defines a path as the magnitude of changes in animation steps and a transition as how a path's magnitude values are to be applied to an animated character), is used to specify the movement, delay and view switching of animated characters. Each input primitive has corresponding animation calls to create appropriate paths and use appropriate transitions on the paths. For example, a pressMouseButton primitive would consist of switching the mouse view to the "button-pressed" view and delaying the view for a period of time. The releaseMouseButton primitive would signal switching the view back to the "no-button-pressed" view and another delay. Moving the cursor by moving the mouse from one position to another is shown by moving the mouse icon along the path obtained from a straight-line interpolation from the start to the end positions.

The Animator at this point has to fill in a few last pieces of information: the coordinates towards which movement of the chosen animated character will occur. For example, before pressing the mouse button at the ComponentOperations menu title, it would show the mouse moving from the current cursor position to the menu. The current cursor position is easily retrieved through a request to the UI Coordinator. The position of an object, or a valid selection point for selecting an object, is not always determinable by the Animator. The Animator has to request a selection point from every object it involves. By default, each object defined in Cartoonist responds to the request for its selection point by passing back its center point. The response can be overridden by the application designer for more sophisticated objects with unique constraints on possible selection points.
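In the spirit of the Path-Transition paradigm, the expansion of input primitives into character-animation steps might be sketched as below. The step names, frame count, and delay values are illustrative; the real system drives character views defined in Cartoonist.

```python
def move_path(start, end, frames=4):
    """Straight-line interpolation between cursor positions, as described."""
    (x0, y0), (x1, y1) = start, end
    return [(x0 + (x1 - x0) * t / frames, y0 + (y1 - y0) * t / frames)
            for t in range(frames + 1)]

def press_mouse_button():
    # switch to the button-pressed view, then hold it briefly
    return [("switch-view", "button-pressed"), ("delay", 0.3)]

def release_mouse_button():
    # switch back to the no-button-pressed view, with another delay
    return [("switch-view", "no-button-pressed"), ("delay", 0.3)]

# Animate "move the mouse character to the menu title, then click":
steps = [("move", p) for p in move_path((0, 0), (40, 20))]
steps += press_mouse_button() + release_mouse_button()
print(len(steps))
```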


The Animator animates an action by animating the interface actions which comprise it, which in turn it animates by animating the interaction techniques at the lowest level. Once the animation of an input primitive is done, the Animator generates an input token from the information used in the animation, such as a point, and deposits the token in the input queue of the input controller. After a series of tokens is simulated, the Animator relies on the dialogue controller to recognize its simulated interactions and to respond appropriately. Figures 8a-i and 9a-f show screen snapshots from Cartoonist animation. Figure 7 shows the whole-screen appearance of the digital circuit design program. Figures 8a through 8i show a sequence of interactions to perform the createNAND action. Figures 9a through 9f show how to rotate a gate.
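The token-simulation idea can be sketched as follows: after animating an input primitive, the Animator deposits an equivalent token in the same queue the dialogue controller reads, so the controller cannot distinguish help "inputs" from real user inputs. The queue and token shapes here are invented for illustration.

```python
from collections import deque

input_queue = deque()  # the single queue the dialogue controller reads

def animate_click(target, point):
    # ... animation of the mouse character over `target` happens here ...
    # then the equivalent input tokens are deposited, as a real click would be
    input_queue.append(("mouse-down", point))
    input_queue.append(("mouse-up", point))

def dialogue_controller_step():
    """Consume one token; simulated and real tokens look identical."""
    token, point = input_queue.popleft()
    return f"{token} at {point}"

animate_click("createNAND-icon", (120, 48))
print(dialogue_controller_step())
```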

Figure 7: A snapshot of the digital circuit design program with some gates already created in the design.

Figure 8a

Figure 8b

Figure 8a-b: The animated character of the mouse is shown trailing the cursor which is being moved to click on the createNAND command icon.


Figure 8c: When the cursor is on top of the createNAND icon, the mouse’s left button is enlarged designating the ‘pressing left mouse button’ act.

Figure 8d: The keyboard character is shown with a key highlighted to designate entering a character from the keyboard.


Figure 8e: A character 'H' is shown being piped from the keyboard to the text area before it is actually displayed in the text area (Figure 8f).


Figure 8f

Figure 8g

Figure 8f-g: Animation of a string input via the keyboard.

Figure 8h

Figure 8i

Figure 8h-i: The keyboard character disappears after the string input is finished. The mouse character is then shown moving to click on the OK button to confirm the string input.


Figure 9a

Figure 9b

Figure 9a-b: Animation of selecting the rotateComponent action from a pulldown menu.


Figure 9c

Figure 9d

Figure 9c-d: Animation of selecting an object as one of the parameters of the rotateComponent action.


Figure 9e

Figure 9f

Figure 9e-f: Animation of selecting a degree of rotation from a dialogue box as the other required parameter of the rotateComponent action.


Context Restoration

Cartoonist is currently implemented in Smalltalk-80. Context restoration is done by copying the objects involved in an animation, using the deepCopy method in Smalltalk, prior to the animation. The copies, which have exactly the same information as the intended objects, are then used in the animation. The Animator also copies the User Interface Context object, which holds values for the UI Coordinator's context. Once the animation is completed, the user has the choice to keep the context or to restore the original context. If the user chooses to keep the context, the original objects are destroyed and replaced by their copies; otherwise the copies are destroyed and the User Interface Context copy is restored. The screen then needs a refresh to restore the original display.
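The copy-animate-then-keep-or-discard scheme has a direct analogue in Python's `copy.deepcopy`, sketched below. This is an illustrative analogue, not the Smalltalk-80 implementation; the context dictionary and `animate` callback are invented.

```python
import copy

def animate_with_restoration(context, animate, keep=False):
    """Animate on a deep copy of the context; keep or discard its effects."""
    snapshot = copy.deepcopy(context)   # the copies used in the animation
    animate(snapshot)
    # keep=True corresponds to replacing the originals with the copies;
    # keep=False corresponds to destroying the copies and restoring.
    return snapshot if keep else context

ui_context = {"gates": ["NAND1"], "cursor": (0, 0)}
after = animate_with_restoration(
    ui_context, lambda ctx: ctx["gates"].append("NAND2"), keep=False)
print(after)   # the original context is untouched
```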

Conclusions and Future Work

A general framework with an embedded model of application and interface knowledge utilizes the embedded knowledge to drive user dialogue control and to automatically generate context-sensitive animated help. Cartoonist, the system implemented to prototype the framework, demonstrates the feasibility of our approach. The knowledge model of the framework structures deliverable procedural help information, while the linking between different layers within the model provides a key connection from application procedural concepts to the user interface components. Cartoonist has been implemented in Smalltalk-80, running on both Macintosh II and Sun workstations, and is still being tested and refined for wider varieties of techniques and interface situations. Although animation can be valuable, merely using animation in help does not deliver a perfect help system. Minimal textual explanations must be presented with the animation to help a user generalize concepts. Our next step is to couple textual explanations with animation, especially when backward search results in additional actions to make the context ready for generating help on a particular action. Conceptual explanations are also required if a help system is to provide a total understanding of an application interface to a user. An integration of the conceptual help framework being developed at GWU [Moran 89] with the animated help framework would result in a more complete automatic help framework.

Cartoonist's animated help is rather simple and straightforward, based on the minimal knowledge necessary to produce animation scenarios. It would be more exciting and useful if different animation styles, such as mini-screen animation, different styles of animation story-telling, animation special effects, etc., could be supported, tested for effectiveness, and left as options for the application designer to choose among. Future analysis will determine the appropriateness of different animation styles and whether varying animation styles does or does not require additional information from the system's point of view.

Acknowledgement

We would like to thank the other members of the UIDE research team, Lucy Moran, Srdjan Kovacevic, and Won Chul Kim, for intellectual discussions which invaluably contributed to the conceptual design of this framework. We also thank Lucy Moran for editing this paper, the GWU CS graphics and user interface research group for recent suggestions on how to effectively present this research, and Weerasak Naveekarn for designing animated characters. John Stasko provided generous help both on implementation details of his Path-Transition paradigm and suggestions to make this paper clearer to non-UIDE readers. Financial support for this research is provided by the National Science Foundation Grant IRI-88-13179, Siemens, and the Software Productivity Consortium. We also would like to thank all three reviewers for their constructive comments, which helped us strengthen and clarify this paper.

References

[Booher 75] Booher, H.R., "Relative Comprehensibility of Pictorial Information and Printed Words in Proceduralized Instructions," Human Factors, 17(3), 1975.

[Cullingford et al 82] Cullingford, R.E., Krueger, M.W., Selfridge, M. and Bienkowski, M.A., "Automated Explanations as a Component of a Computer-Aided Design System," IEEE Transactions on Systems, Man and Cybernetics, March/April 1982.

[Feiner 85] Feiner, Steve, "APEX: An Experiment in the Automated Creation of Pictorial Explanations," IEEE Computer Graphics and Applications, November 1985.

[Fenchel & Estrin 82] Fenchel, R.S. and Estrin, G., "Self-Describing Systems Using Integral Help," IEEE Transactions on Systems, Man and Cybernetics, March/April 1982.

[Fischer et al 85] Fischer, G., Lemke, A. and Schwab, T., "Knowledge-based Help Systems," Proceedings of CHI'85, 1985.

[Foley et al 88a] Foley, J.D., Gibbs, C., Kim, W.C. and Kovacevic, S., "A Knowledge-Based User Interface Management System," Human Factors in Computing Systems, Proceedings of CHI'88, 1988.

[Foley et al 88b] Foley, J.D., Kim, W.C., Kovacevic, S. and Murray, K., "The User Interface Design Environment," Proceedings of Architectures for Intelligent Interfaces: Elements and Prototypes, Monterey, California, 1988.

[Gibbs et al 86] Gibbs, C., Kim, W.C. and Foley, J.D., "Case Studies in the Use of IDL: Interface Definition Language," Report GWU-IIST-86-30, Department of EE & CS, The George Washington University, Washington, DC 20052, 1986.

[Hecking 87] Hecking, Matthias, "How to Use Plan Recognition to Improve the Abilities of the Intelligent Help System SINIX Consultant," Proceedings of INTERACT'87, 2nd IFIP Conference on Human-Computer Interaction, 1987.

[Kemke 87] Kemke, Christel, "Representation of Domain Knowledge in an Intelligent Help System," Proceedings of INTERACT'87, 2nd IFIP Conference on Human-Computer Interaction, 1987.

[Kim & Foley 90] Kim, Won C. and Foley, James D., "DON: User Interface Presentation Design Assistant," submitted to UIST'90, 1990.

[Moran 89] Moran, Lucy, A Curriculum Design Approach to Interactive Knowledge Base Organization, Doctoral Dissertation Proposal, The George Washington University, Washington, DC, 1989.

[Murray 89] Murray, Kevin, The Design Specification Interface Tool in UIDE, Master's Thesis, The George Washington University, Washington, DC, 1989.

[Neiman 82] Neiman, Daniel, "Graphical Animation from Knowledge," Proceedings of AAAI'82, 1982, pp. 373-376.

[Nilsson 80] Nilsson, N.J., Principles of Artificial Intelligence, California: Morgan Kaufmann Publishers, Inc., 1980.

[Olsen 86] Olsen, Dan, "MIKE: The Menu Interaction Kontrol Environment," ACM Transactions on Graphics, October 1986.

[Olsen 89] Olsen, Dan, "A Programming Language Basis for User Interface Management," Human Factors in Computing Systems, Proceedings of CHI'89, 1989.

[Palmiter et al 89] Palmiter, S., Elkerton, J. and Baggett, P., "Animated Demonstrations versus Written Instructions for Learning Procedural Tasks," Technical Report C4E-ONR-2, Center for Ergonomics, Dept. of Industrial and Operations Engineering, University of Michigan, January 1989.

[Senay et al 89] Senay, H., Sukaviriya, P. and Moran, P., "Planning for Automatic Help Generation," Proceedings of the Working Conference on Engineering for Human-Computer Interaction, Napa Valley, California, August 1989.

[Singh 89] Singh, Gurminder, "A High-level User Interface Management System," Human Factors in Computing Systems, Proceedings of CHI'89, 1989.

[Stasko 90] Stasko, John, "TANGO: A Framework and System for Algorithm Animation," in press, IEEE Computer, 1990.

[Sukaviriya 88] Sukaviriya, P., "Dynamic Construction of Animated Help from Application Context," Proceedings of the ACM SIGGRAPH User Interface Software Symposium, Banff, Canada, November 1988.

[Wilkins 89] Wilkins, D.E., Practical Planning: Extending the Classical AI Planning Paradigm, California: Morgan Kaufmann Publishers, Inc., 1988.

[Yu & Robertson 88] Yu, Chiung-Chen and Robertson, S.P., "Plan-based Representations of Pascal and FORTRAN Code," Human Factors in Computing Systems, Proceedings of CHI'88, 1988.
