Reverse Engineering Legacy Interfaces: An Interaction-Driven Approach



E. Stroulia, M. El-Ramly, L. Kong, P. Sorenson
Computing Science Department, University of Alberta
Edmonton, AB T6G 2H1, Canada
{stroulia, mramly, lanyan, sorenson}@cs.ualberta.ca

B. Matichuk
CEL Corporation
9637 45 Avenue, Edmonton, AB T6E 5Z8, Canada
[email protected]

(In the Proceedings of the 6th Working Conference on Reverse Engineering (WCRE'99), October 6-8, 1999, Atlanta, Georgia, USA, pp. 292-302, IEEE Computer Society.)

Abstract

Legacy systems constitute valuable assets to the organizations that own them. However, due to the development of newer and faster hardware platforms and the invention of novel interface styles, there is a great demand for their migration to new platforms. In this paper, we present a method for reverse engineering the system interface that consists of two tasks. Based on traces of the users' interaction with the system, the "interface mapping" task constructs a "map" of the system interface, in terms of the individual system screens and the transitions between them. The subsequent "task and domain modeling" task uses the interface map and task-specific traces to construct an abstract model of a user's task as an information-exchange plan. The task model specifies the screen transition diagram that the user has to traverse in order to accomplish the task in question, and the flow of information that the user exchanges with the system at each screen. This task model is later used as the basis for specifying a new graphical user interface tailored to the task in question.

1 Introduction and Motivation

Legacy systems constitute valuable assets to the organizations that own them. Very often, a legacy system is the sole repository of valuable corporate knowledge collected over a long time and of the logic of the organization's business processes. Unfortunately, there is usually little or no supporting documentation for these systems, which makes their maintenance, evolution and integration with other applications very difficult. In addition, the text-based interfaces of these systems offer very limited ways to organize and present information to the user, and therefore they are usually non-intuitive and difficult to understand.



This fact makes the training of new personnel a challenge. Furthermore, the proprietary nature of these systems prevents the corporation from taking advantage of the opportunities that the WWW presents today, and from offering its partners and customers direct access to its information system. Thus, the problem of wrapping and/or migrating legacy systems [9] has emerged as one of the most interesting and challenging problems in software engineering.

To that end, one might attempt to understand the system design in terms of its data structures and processing control by examining its code [10]. However, legacy systems tend to be rather large and complex. In addition, in all likelihood, the principles of their original design have been compromised by layers of "glue code", that is, code for modifications unrelated and perhaps contradictory to the original design. Even worse, this code has usually been developed by people not involved in the original design process and therefore unaware of the design rationale. Furthermore, in some cases the code is only available in its executable form. For all these reasons, this approach may be prohibitively costly or even impossible.

An alternative approach, which we have been investigating in the CelLEST project, is to try to understand the information and the process logic that a legacy system embodies by reverse engineering the system interface. In general, transaction-based systems developed for mainframes, whose purpose is to support tasks such as data entry, database querying and report generation, do not perform long computations; instead, they interact often with their users to get input or return the output of their transactions. As a result, their interfaces expose to their users a lot of the information stored in their internal repositories, along with its organization. They also correspond quite faithfully to the different tasks that they are currently used to perform. Therefore,

examining how the system is being used, based on how its users interact with it, can potentially provide an alternative approach to understanding the processes that the system currently performs in the organization. Clearly, if the end goal is to extend or modify the system functionality by modifying its current code, then such interaction-based understanding is insufficient, because it provides only a model of the user tasks that the system supports and completely ignores low-level design decisions such as data structures and algorithms. If, on the other hand, the overall purpose is to migrate the system interface to newer platforms, or to wrap it with well-defined APIs to enable its interaction with other systems, then this alternative interaction-based understanding approach is suitable.

In this paper, we present a method for reverse engineering the system interface that consists of two tasks. Based on traces of the users' interaction with the system, the interface mapping task constructs a "map" of the system interface, in terms of the individual system screens and the transitions from one screen to another. A screen is a distinct unit of information presentation, available to the users at a particular point in the process of their interaction with the system. A transition from one screen to another is performed by a sequence of user actions, such as keystrokes and cursor movements, and may be conditioned upon a specific internal system state. The subsequent task and domain modeling task uses the interface map and task-specific traces to construct an abstract model of a user's task as an information-exchange plan. The task model specifies the screen transition diagram that the user has to traverse in order to accomplish the task in question, and the flow of information that the user exchanges with the system at each screen. We also discuss how this task model can later be used as the basis for specifying a new graphical user interface tailored to the task in question.

The rest of the paper is organized as follows: Section 2 describes the overall architecture of the CelLEST environment that we are developing for implementing our interface reverse engineering method. Section 3 describes our method for mapping the system interface, and Section 4 discusses the experiments we have performed to date to evaluate it. Section 5 describes our method for task and domain modeling. Section 6 discusses some of the related research literature and outlines our current directions for further research. Finally, Section 7 summarizes the paper and offers some preliminary conclusions.

2 The CelLEST Architecture

The CelLEST system (see Figure 1) interacts with the legacy system through two different middleware tools, developed by CEL Corporation [16]: the Recorder and the Pilot.

The Recorder is a component emulating the protocol of communication between the mainframe where the software resides and its terminals. Thus it enables a user to interact with the system in the same way that a terminal would, while, at the same time, it records the interaction trace. The Pilot is a component that enables the control of the interface from an external application, by "translating" events in other applications into corresponding control sequences for the legacy interface.

The overall process of the CelLEST system uses as input a set of system-user interaction traces, collected by the Recorder. A trace, as captured by the Recorder, consists of a sequence of "screen snapshots", i.e., copies of the screen buffers that the user has interacted with, and "action records", i.e., records of all the keystrokes that the user has entered into the system while at a particular screen. A collection of traces from the daily usage of the legacy system is the input of the interface-mapping process. The interface-mapping task (T1 in Figure 1) is accomplished by the LeNDI system (LEgacy Navigation Domain Identifier). As the user interacts with the system, the Recorder collects the snapshots of the system screens that the mainframe sends to the user terminal. Then, the interface-mapping process proceeds to recognize identifying features in the screen snapshots of the trace, in order to cluster several screen snapshots together as instances of a single unique system screen. The output of this process is a directed graph, henceforth called the interface graph. The graph nodes correspond to the individual screens of the system, and its edges correspond to the user action sequences that enable the transitions of the system from one screen to another. Each node is characterized by a predicate that evaluates whether a particular screen snapshot is an instance of the screen that corresponds to the node. An edge may be annotated by a condition, if the transition it represents is only possible in a particular system state.

To provide sufficient information to map the system interface, the trace should at least contain screen snapshots of all the different tasks that the system users perform, and each non-deterministic task should be performed at least as many times as the number of its distinct screen navigation paths. To put it more simply, the collected traces should exhibit all the potential behaviors of the interface. For example, in order for the interface graph to reflect the fact that the user can return to the "login screen" from anywhere in the system by pressing "@12", the user should have performed this action, i.e., hit PF12 to return to the login screen, on all screens at least once.

(Footnote: Currently the Recorder is able to emulate a variety of protocols, including 3270, AS400, vt100, etc. To date, however, in the CelLEST project we have focused exclusively on the 3270 protocol [1].)
(Footnote: We use the symbol @ to denote function keys; e.g., "@12" is function key PF12.)



Figure 1: The CelLEST environment

The second reverse-engineering task, task and domain modeling (T2 in Figure 1), is performed by the URGENT system (User interface ReGENeration Tool). URGENT uses as input a collection of task-specific traces mapped onto the interface map. Based on them, it constructs an abstract model for the task in question. URGENT adopts an "information exchange" approach for modeling a user's task. According to this model, the users' interaction with the system allows them to provide (obtain) some piece(s) of information to (from) the system. The users accomplish their information-exchange purposes by performing a sequence of elementary information-exchange actions on individual system screens. We have identified two types of such elementary actions:

- When the users provide information to the system, such as with data entry operations, they are considered to perform tell actions.
- Alternatively, the users may obtain information from the system with an ask action. The system users indicate such actions to URGENT by highlighting the screen area where the interesting information appears.
  - If the information in question always appears in the same area, then this is an ask-standard action, because the user is always interested in the same standard screen area;
  - if, on the other hand, the screen on which the ask action is performed is dynamic and the information of interest may appear in different positions on the screen, then this is an ask-select action, because with every individual screen buffer the user has to "select" which screen area to highlight.

In addition to classifying the users' information-exchange actions according to the direction of the information flow, URGENT also classifies the different pieces of information exchanged according to their scope. So, a piece of information "asked" or "told" by the user can be a System Constant, a User Variable, a Task Constant, or a Problem Variable.

- System Constants are constant strings whose values are independent of the user's task when visiting these screens. For example, in systems with multiple complex subsystems, the user may have to select one among them by entering its name on some particular screen. These possible choices are all system constants.
- User Variables are data items associated with the individual users performing the task. For example, a user's login and password are user variables. Also, in the above example of system constants, if a particular user always works with a single subsystem, then this selection is also a user variable.
- Task Constants are constant strings that need to be entered on a screen when it is visited in service of a specific task. In menu-driven systems, for example, the user, in order to perform a specific task, usually has to make a sequence of selections on the menu tree. All the elements in this sequence are task constants.
- Problem Variables are data items that flow through the screens of a task session; they are either original user input, or intermediate system output used as input in subsequent screens, or system output used as task output.

Based on these two orthogonal classification dimensions, URGENT produces a domain model of the system, in terms of the unique pieces of data that the system exposes to the user, and a task model, in terms of well-defined sequences of information-exchange actions performed on subsets of that data. Finally, in a forward-engineering phase (T3 and T4 in Figure 1), based on the abstract task model and a chosen user profile, URGENT specifies and generates a graphical user interface. This new interface can properly convey all the types of information that need to be exchanged between the user and the system in service of the task at hand. The abstract task model constitutes a bridge between this newly designed GUI and the actual system interface, since the different operations on the GUI are designed to implement the information-exchange actions that the users perform on the original interface for their tasks. At this point, the graphical interface generated by URGENT is integrated with the Pilot, which drives the legacy system by issuing the keystrokes corresponding to the user's actions to the underlying legacy interface.

3 Interface Mapping

As we have already discussed, the first phase of the CelLEST process is the mapping of the system interface, that is, the construction of the interface graph. This process must address two different problems. The first is the identification of the individual system screens, that is, of the interface graph nodes. This is the problem on which we have focused to date, and LeNDI's screen identification method is described in detail in the next subsection. The second problem involves learning the interface graph edges, i.e., the possible transitions between screens and their preconditions. Although we have not yet addressed this problem, some initial ideas are sketched out in subsection 3.2.

3.1 Screen Identification

Legacy screens range from very static to very dynamic. Some screens always appear the same, irrespective of the user's history of interaction with the system. Consider, for example, login screens or screens describing a menu of possible actions. Such screens are generally poor in data, and their purpose is to enable the users to control their subsequent interaction with the system. On the other hand, other screens can be very dynamic. Consider a screen presenting the result of a user query. The purpose of such a screen is to present data and, depending on whether the presented data is well structured or not, the different instances of the screen may share a great or a very small degree of commonality. For example, if the query result is a set of data records, the screen may always appear organized as a table of data, although its actual content will be different. If, on the other hand, the result of the query is free text, then the appearance of the screen will vary more.

3.1.1 Screen Features

To develop a screen-identification process able to cope with this diversity of screen types, we have identified a set of features that belong to three different families. The first family consists of features based on commonly used strings in some special screen areas, such as codes, titles, date and time, and page numbers that appear in the periphery of the screen.

- Screen Code: In some systems, a sequence of alphanumeric characters can be found, usually at one of the screen corners. When such a code exists on a screen, it is usually a very discriminative feature.
- Screen Title: Quite often screens have titles, usually in one of the top two lines. The screen title is a sequence of words, usually describing the purpose of the screen. As with codes, when they exist, titles can potentially uniquely identify a screen.
- Date, Time and Page Location: The format and location of date, time and page information, usually found on the left or right of the first two lines of the screen, can also be a discriminative feature.

The second family of features is intended to recognize geometric patterns in the placement of the information on the screen.

- Field Positions: The 3270 protocol [1] provides a description of the organization of each presentation space pushed to the terminal in terms of "fields". These fields, their number and locations, provide valuable information about the overall screen organization. Especially for static screens, it is usually the case that the field locations are the same for all the instances of a single screen.
- Projection Profiles: Projection profiles, widely used in document analysis [13], are a mapping from a two-dimensional image to a waveform, whose values are the sums of the values of the image pixels along some specified dimension. A projection profile is obtained by determining the number of black pixels that fall onto a projection axis. A deep valley in the profile with a certain predefined width is called a cut. By considering the legacy screen as an image, with each character as a pixel, we have utilized this technique to extract some useful features. For a legacy screen, a projection profile is a histogram of the occurrences of a particular character (or set of characters) on the screen. Depending on the character(s) counted, a variety of different profiles can be collected. For example, horizontal and vertical profiles of all characters reveal table, list or structured-report layouts; a horizontal numbers profile can indicate the enumerated lists that are often used for presenting menus in legacy systems; and the profiles of special separator characters are useful for identifying patterns of separator lines. A small sketch of this computation appears after this feature list.

Finally, a set of application-specific features can be examined.

- Keywords: The existence and locations of particular keywords or sets of keywords can help in screen identification, e.g., Menu, Report, Form, Help, File, Input, etc. Also, keywords like Error, Invalid, no/Not, match/found/valid/available, etc. help in identifying error screens or an error state. Others like Page, nn of nn (where nn is a number), continued, etc. suggest that the screen is an instance of a sequence of the same screen, or one page of a multi-page screen.
- The Cursor Label and Position: Along with every new 3270 presentation space sent to the terminal, the location of the cursor on the screen is also sent. This initial cursor location and the label to its left can be significant in identifying the screen in command-line-driven systems.
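As an illustration of the projection-profile features, here is a minimal sketch, under the simplifying assumption that a screen snapshot is available as a list of strings; the function names and the toy screen are ours, not LeNDI's.

```python
# Projection profiles over a character grid: count, per row or per column,
# the characters that satisfy some predicate (all characters, digits, or a
# particular separator character).
def horizontal_profile(rows, predicate):
    """One count per row: how many characters in that row satisfy predicate."""
    return [sum(1 for ch in row if predicate(ch)) for row in rows]


def vertical_profile(rows, predicate):
    """One count per column of the (padded) character grid."""
    width = max(len(row) for row in rows)
    padded = [row.ljust(width) for row in rows]
    return [sum(1 for row in padded if predicate(row[col]))
            for col in range(width)]


screen = ["  CLAIM MENU              PAGE 1",
          "--------------------------------",
          "  1. SEARCH BY NAME             ",
          "  2. SEARCH BY CLAIM NUMBER     "]

# All-characters profile: deep valleys ("cuts") reveal tables and lists.
print(horizontal_profile(screen, lambda ch: ch != " "))
# Numbers profile: peaks on the left suggest an enumerated menu.
print(horizontal_profile(screen, str.isdigit))
# Separator-character profile: a solid row of '-' marks a separator line.
print(horizontal_profile(screen, lambda ch: ch == "-"))
```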

3.1.2 The Screen Recognition Process

The screen recognition process requires the configuration of a set of recognizers, $R_1, \ldots, R_n$. Each recognizer employs one or more of the features discussed above, in order to evaluate whether two screen snapshots are instances of the same screen. A recognizer compares two snapshots in terms of its features. If the two snapshots share a feature, then this feature votes that the snapshots are instances of the same screen. The recognizer votes that two screen snapshots are instances of a single screen with some probability, computed as the weighted sum of all the votes of the individual features of the recognizer:

$$V_r = \sum_{f} w_{f,r} \, v_f$$

where $V_r$ is the vote of the $r$-th recognizer, $v_f$ is the vote of feature $f$, and $w_{f,r}$ is the weight with which feature $f$ contributes to the overall vote of the $r$-th recognizer.

If a feature is not found on a particular screen, the recognizer may be configured to either ignore it, or to consider its absence as evidence that the two snapshots are instances of a distinct screen that lacks the feature in question.

Let us now discuss how the overall recognition process works, that is, how the list of the interface graph nodes is constructed. The interface graph is initialized to be empty. Subsequently, each screen snapshot in the trace is accessed and compared to all the previously recognized nodes by the configured recognizers. Each recognizer compares the current snapshot against a representative of each previously recognized node, and it computes the probability that they match, i.e., that they are instances of the same screen. The final probability that the new snapshot matches a previously recognized node, that is, that it is an instance of the screen that the node represents, is the weighted and normalized sum of the votes of the different recognizers:

$$P = \frac{\sum_{r} w_r \, V_r}{\sum_{r} w_r}$$

where $V_r$ is the vote of the $r$-th recognizer, and $w_r$ is its weight relative to the other recognizers.

If the most probable match is above a preset threshold, then the current screen snapshot is considered to be an instance of the same screen as the node it best matched. Otherwise, the current screen snapshot is considered to be an instance of a new screen, never encountered before, in which case a new cluster is introduced. At the end, the process has constructed a set of snapshot clusters. Each cluster contains snapshots that are instances of the same screen, and therefore corresponds to a node in the interface graph. The combination of features on which the snapshots of a cluster match constitutes the identifying predicate of the node.
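The following sketch illustrates the voting and clustering scheme just described, under simplifying assumptions that are ours, not LeNDI's: a feature votes 1.0 when two snapshots agree on it and 0.0 otherwise, each recognizer's feature weights sum to 1, and each cluster is represented by its first snapshot.

```python
# Weighted feature voting per recognizer (V_r), normalized voting across
# recognizers (P), and the incremental clustering loop with a threshold.
def recognizer_vote(snap_a, snap_b, feature_weights):
    """V_r: the weighted sum of the votes of this recognizer's features."""
    return sum(w * (1.0 if snap_a.get(f) == snap_b.get(f) else 0.0)
               for f, w in feature_weights.items())


def match_probability(snap_a, snap_b, recognizers):
    """The weighted, normalized sum of the individual recognizer votes."""
    total = sum(w for w, _ in recognizers)
    return sum(w * recognizer_vote(snap_a, snap_b, feats)
               for w, feats in recognizers) / total


def cluster_snapshots(trace, recognizers, threshold):
    """Incrementally group snapshots into clusters, one per unique screen."""
    clusters = []
    for snap in trace:
        best_p, best_cluster = 0.0, None
        for cluster in clusters:
            p = match_probability(snap, cluster[0], recognizers)
            if p > best_p:
                best_p, best_cluster = p, cluster
        if best_cluster is not None and best_p >= threshold:
            best_cluster.append(snap)   # an instance of a known screen
        else:
            clusters.append([snap])     # a screen never seen before
    return clusters


# Two recognizers over toy features, with relative weights 2 and 1:
recognizers = [(2, {"title": 1.0}),
               (1, {"cursor_label": 0.5, "code": 0.5})]
trace = [{"title": "MENU", "code": "M01", "cursor_label": "CMD"},
         {"title": "MENU", "code": "M01", "cursor_label": "CMD"},
         {"title": "LOGIN", "code": "L00", "cursor_label": "USER"}]
print(len(cluster_snapshots(trace, recognizers, threshold=0.5)))  # -> 2
```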

3.1.3 Setup Advice

To support the configuration of the recognizers, LeNDI has a heuristic advisor component. The advisor takes as input a list of the features to be used for the screen recognition process and an estimate of the number of unique screens in the system, that is, the number of interface graph nodes. The advisor performs a complete recognition process on the available traces using every single feature in the input list and every possible OR-combination of two features. Then the advisor considers only the recognizers that resulted in a number of output clusters between 50% and 100% of the number estimated by the user. The advisor then suggests that each feature contained in the selected recognizers be used in a separate recognizer, whose weight is proportional to the number of occurrences of this feature in all the selected recognizers. Finally, the user tunes the advisor's suggested configuration and determines the threshold value.
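A rough sketch of this heuristic follows, reusing cluster_snapshots() from the previous sketch. We approximate an OR-combination of two features by a single recognizer in which both features vote with equal weight; this, like all the names here, is an assumption of ours.

```python
# Try every single feature and every feature pair; keep the configurations
# whose cluster counts fall between 50% and 100% of the user's estimate;
# then weight each feature by how often it occurs among the survivors.
from collections import Counter
from itertools import combinations


def advise(trace, features, estimated_screens, threshold=0.5):
    """Suggest one recognizer per feature, weighted by its occurrences."""
    candidates = [(f,) for f in features] + list(combinations(features, 2))
    selected = []
    for combo in candidates:
        recog = [(1, {f: 1.0 / len(combo) for f in combo})]
        n_clusters = len(cluster_snapshots(trace, recog, threshold))
        # keep configurations yielding 50%-100% of the user's estimate
        if 0.5 * estimated_screens <= n_clusters <= estimated_screens:
            selected.append(combo)
    occurrences = Counter(f for combo in selected for f in combo)
    return [(count, {feat: 1.0}) for feat, count in occurrences.items()]
```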

3.2 Transition Identification

The second problem in the interface mapping task is to learn the interface graph edges, i.e., the possible transitions between screens and their preconditions. Transitions are made possible by user actions on a screen. An action is a sequence of cursor movements and keystrokes that ends with an Attention Identifier (AID [1]) key, such as Enter or a function key. Different types of actions exist in different systems, or even within one system. For example, in some legacy systems, the final function key of the keystroke sequence drives the navigation. In such systems, different function keys correspond to different transactions, much like in a menu, and the user enters the data required on the screen and uses a function key to select the transaction to be performed on this data. Other systems are command-driven, and require the user to type in a command, usually followed by the Enter key. Often, different forms of the command are available; for example, a command string and all its prefixes may have the same effect in some systems. Yet other systems are driven by a string of data items entered at particular location(s) on the screen, followed by the Enter key.

Transition preconditions can also be of various types. Sometimes, they may be observable on the screen itself in the form of a system message. For example, a transition from one data entry screen to another may be possible only if all the fields are entered; in case of incomplete entries, an error message may appear on the screen that prevents all but the error-handling action, i.e., the filling of the empty fields. In other cases, a transition may be conditioned upon the value of a piece of data entered on the same (or some previous) screen(s). For example, only if the value entered

in the “age field” is above 60, can the user access a screen presenting pension information. This variety in action types and their preconditions makes the problem of edge recognition quite challenging. An especially interesting aspect of the problem is that of completeness, that is, how to recognize all the transitions of the system screens. We are currently starting to work on this problem and our approach is to try to formulate hypotheses about the screen behavior based on its layout and/or the examples of its transitions contained in the trace. Then, based on these hypotheses we generate predictions about the types of the expected screen transitions. Such expected screen transitions can guide the understanding of the instances of the transitions that exist in the trace, and can also provide the basis for an exploratory phase where the screen behavior is tested with novel action sequences, not seen in the trace. For example, some screens contain a line – usually at the bottom – listing different function keys followed by a string indicating their functionality on the given screen. If such a line is found on a particular screen, then the expectation is that each individual function key listed corresponds to a transition from this screen to another. If the trace does not contain examples of all these transitions, then the interface can be tested by applying the function keys that were not exercised in the trace to the screen in question. Similarly, a screen may be hypothesized to be a data-entry form, due to the substantial number of user keystrokes in the example transitions of the screen in the trace. Then that screen can be tested with several alternative action sequences, generated with modifications to the action sequences that exist in the trace, to explore its behavior with erroneous or incomplete input. In general, it is not possible to guarantee the completeness of the interface map with respect to the actual system interface. However, we envision a background monitoring process which will recognize new transitions, not specified in the interface map, and will incrementally – and possibly interactively – update the map to more closely reflect the system.
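As an illustration of this kind of hypothesis generation, the sketch below parses a hypothetical function-key legend line into candidate transitions to test against the trace; the regular expression and the line format are assumptions of ours, not a description of LeNDI's actual heuristics.

```python
# If a screen's bottom line lists function keys and their meanings, each
# listed key is hypothesized to trigger a transition from that screen.
import re

FKEY_PATTERN = re.compile(r"P?F(\d{1,2})\s*=\s*(\w+)")


def hypothesize_fkey_transitions(screen_name, bottom_line):
    """Return (source, action, label) triples to test against the trace."""
    return [(screen_name, f"@{num}", label)
            for num, label in FKEY_PATTERN.findall(bottom_line)]


print(hypothesize_fkey_transitions(
    "Main menu", "PF1=Help  PF3=Exit  PF12=Logoff"))
# -> [('Main menu', '@1', 'Help'), ('Main menu', '@3', 'Exit'),
#     ('Main menu', '@12', 'Logoff')]
```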

4 Evaluation

LeNDI's screen identification process was tested with two legacy systems. The first was an insurance system, and the second was the Library of Congress Information System (LOCIS) [18]. The results of these experiments are reported in the following subsections.

4.1 First Experiment

The trace used for the first experiment, with the insurance system, is 371 screen snapshots long. It contains 106 different screens.

Some screens appear once, and some appear as many as 14 times. Some screens, e.g., menus, are very static, and others, e.g., free report forms, are very dynamic. Most of the features described above were extracted. In particular:

1. t1 encodes a classification of the left, middle and right areas of the top two lines of the screen snapshot into screen code, title, date, time, page information, or blank;
2. t2 is the location of the code and title on the screen snapshot, if any;
3. t3 is the text in the middle of the first line;
4. t4 is the text at the right of the bottom line;
5. f1 encodes the locations of the 3270 fields;
6. f2 encodes the number of input fields on the screen snapshot;
7. c1 encodes the cursor's label and initial location;
8. p1 and p2 encode the horizontal and vertical all-characters profiles, respectively;
9. p3 encodes the horizontal numbers profile;
10. p4 encodes the vertical words profile;
11. p5 encodes four vertical profiles for four special characters.

Two experiments were performed with this system: in the first one, a LeNDI user manually configured the recognizers, while in the second one, the advisor proposed the recognizers' configuration.

Manual Setup: In this experiment, a LeNDI user reviewed the trace thoroughly to decide on the most useful, i.e., the most discriminating, features for screen identification, and then manually configured the recognition setup. After performing the identification process, the user reviewed the results for errors, and fine-tuned the setup to avoid these errors. It took ten such recognition/review/reconfiguration rounds to reach the setup shown in Table 1.

Table 1: Experiment 1: Manual recognition setup

Recognizer  Weight  Features Employed  Value       Ignore if Empty
1           1       t4                 100         Y
2           1       t1, t3, t4         10, 45, 45  Y, Y, Y
3           1       p1, t1, t4         10, 45, 45  Y, Y, Y
4           1       p2, t1, t4         10, 45, 45  Y, Y, Y
5           1       p5, t1, t4         10, 45, 45  Y, Y, Y

The final setup included five recognizers. The weight, the features used, the value of each feature, and the significance of this feature if empty are shown in columns 2 to 5 of Table 1. The threshold used was 20%. The output of this final setup was 110 different identified screens. It included no false positive errors and 4 false negative errors, i.e., 1.1%.

A discussion is necessary here on the consequences of the different types of errors. In this problem, false positive matches between snapshots are graver errors than false negatives, because they collapse snapshots with potentially different behaviors into one screen cluster. In comparison, false negative errors differentiate among same-screen snapshots, thus resulting in two nodes in the interface map corresponding to the same screen. The former error type constitutes an actual misrepresentation of the underlying interface. When an external application uses the interface map to plan a screen traversal sequence, such errors in the map can cause incorrect predictions about the result of actions in the erroneously collapsed nodes, and can therefore result in the external application "getting lost" in the legacy interface. The latter type of error, where snapshots of the same screen are erroneously split into more than one cluster, simply constitutes a redundancy in the interface model.

Advisor's Setup: In this experiment, the advisor was initialized to propose recognizers based on features t1, t3, t4, c1, p1, p2, p4 and p5 and their combinations, such that a target number of 100 screens would be recognized from the input trace. The advisor suggested the setup shown in Table 2.

Table 2: Experiment 1: Automatic recognition setup

Recognizer  Weight  Features Employed  Value   Ignore if Empty
1           4       t1, t4             10, 90  Y, Y
2           2       t3                 100     Y
3           6       t4                 100     Y
4           1       c1                 100     N
5           2       p1                 100     Y
6           2       p2                 100     Y
7           3       p4                 100     Y
8           1       p5                 100     N
Threshold: 15%

After reviewing the results, the LeNDI user made some changes to the suggested setup and also suggested a threshold of 35%. The output of this setup was 113 different identified screens. There were no false positive errors and 7 false negative errors, i.e., 1.9% of the total number of screen snapshots were misclassified as unique.

4.2 Second Experiment


In the second experiment, LeNDI was tested with a trace taken from the 3270 on-line connection of LOCIS. The trace represents catalogue browsing and help tasks. It is 200 screen snapshots long and contains 48 different screens. 32 of these screens are static, e.g., help or information screens and menus; the others vary in how dynamic they are. Some screens occur once, and others occur up to 20 times. The features extracted are the same as the ones extracted in experiment one, with the following changes:

1. t3 is the text at the left of the first line;
2. c1 encodes the cursor's label only;
3. p5 encodes two horizontal profiles, for * and -.

Manual Setup: The user of LeNDI used features t1, t3, t4, c1 and p5 only. Eleven recognition/review/reconfiguration rounds were needed to reach the setup shown in Table 3.

Table 3: Experiment 2: Manual recognition setup

Recognizer  Weight  Features Employed  Value           Ignore if Empty
1           1       t1, t4, c1         10, 45, 45      Y, Y, N
2           1       t3, t1, t4, c1     10, 30, 30, 30  Y, Y, Y, N
3           1       t4                 100             Y
4           1       c1, t1, t4         10, 45, 45      N, Y, Y
5           1       p5, t1, t4         10, 45, 45      Y, Y, Y

The threshold used was 20%. The output of the final setup was 49 different identified screens. It included 2 false positive errors (1%) and 6 false negative errors (3%). Five of the false negative errors happened due to the misclassification of instances of a system-message screen: despite having the same behavior, the instances of this screen may look completely different.

Advisor's Setup: In this experiment, the advisor was initialized to propose recognizers based on features t1, t3, t4 and c1, such that a target number of 40 screens would be recognized from the input trace. The advisor suggested the setup shown in Table 4.

Table 4: Experiment 2: Automatic recognition setup

Recognizer  Weight  Features Employed  Value       Ignore if Empty
1           2       t1, t4, c1         20, 40, 40  Y, Y, N
2           2       t3                 100         Y
3           3       t4                 100         Y
4           2       c1, t1, t4         20, 40, 40  N, Y, Y

After reviewing the results, the LeNDI user made some changes to the suggested setup and also suggested a threshold of 20%. The output of this setup was 51 different screens. There were 2 false positive errors (1%) and 7 false negative errors (3.5%).

We find these preliminary results encouraging, and we are in the process of conducting further experiments with traces of different lengths from several different systems. Although not completely automated yet, the screen recognition process is quite effective. We expect that the use of action information will help to further automate the process and increase its precision.


5 Task and Domain Modeling and Interface Migration



After an interface map has been constructed, the task becomes to better understand the tasks that the users perform with the system and the types of information that the system exposes to them. The URGENT process for constructing an abstract task model and, based on it, designing a GUI consists of three steps: Task and Domain Modeling, Abstract User Interface Specification, and Graphical User Interface Generation [5]. These steps are illustrated in the subsequent subsections in terms of a hypothetical task in an insurance information system. Consider a situation where the insurance company computerized its claims department separately from its customer database, and therefore owns two separate subsystems, subsystem1 and subsystem2, containing its customer and claims information respectively. Suppose further that in subsystem2, the users must enter the customer's claim number in order to retrieve the data relevant to generating a report on the customer's accident claim. If they only know the name of the customer, they have to first search for the claim number in subsystem1 by entering the customer name, before they can go into subsystem2 to retrieve the report data.

5.1 Interactive Task and Domain Model Construction

The input to this task of task and domain modeling is a set of traces of users performing the hypothetical task described above, i.e., entering customer names to get their corresponding claim numbers and using these numbers to retrieve the information about their accidents to generate a report.

Table 5: Part of a Recorded Trace and its Analysis

Screen Name      Data items           1st phase       2nd phase          3rd phase
                                      (single trace)  (multiple traces)  (different users)
Signon           "lanyan"@T           Tell var2       Task-specific      User variable
                 "t65j"@E             Tell var3       Task-specific      User variable
Name search      "scott"@E            Tell var6       Problem-specific   Problem variable
Claim retrieval  mouse(3,12)to(3,17)  Ask var7        Task-specific      Ask-standard
                                                                         Problem variable
Menu             "7889"@T             Tell var7       Problem-specific   Problem variable
                 "a1"@E               Tell var8       Task-specific      Task constant

The overall purpose of the task analysis is to identify the sequence of actions that the users perform to accomplish their task, as well as the pieces of information exchanged between the user and the system, and their scope. Task analysis occurs in three phases. The first examines a single trace of the task, in order to create a baseline sub-graph of the overall interface map that the user traverses to accomplish the task in question, and the pieces of information exchanged during the traversal. The second phase examines multiple traces of the same task performed by the same user, in order to categorize the information exchanged between the user and the system into task-specific constants and problem variables. Finally, the third phase examines multiple traces of the same task performed by different users, in order to differentiate between task constants and user variables.

The first two columns of Table 5 describe a part of a recorded session, in terms of the screens visited and the data entered and obtained, for the above reporting task. The variables in quotes are data items appearing on a screen or entered by the user; "mouse(x1,y1)to(x2,y2)" means that the user highlighted an area with starting and ending coordinates (x1,y1) and (x2,y2) respectively. The last three columns of the table show the results of the three phases of the task analysis process. At the end of this analysis, URGENT has identified two user-specific variables (the user's login and password), a task constant ("a1" is the string entered on the "Menu" screen to get the desired information about the claim), and two problem variables (the customer's name and the number of their claim). It has also identified that the claim number can be obtained from a standard position on the "Claim retrieval" screen. In a subsequent interactive phase, the legacy system user is presented with a graphical representation of this information and can edit the results in order to correct possible errors.

The set of the different types of information manipulated in the analyzed task constitutes a small view of the overall domain model of the system. In the example discussed above, in the process of analyzing the "Claim Report Generation" task, URGENT has identified that "var6" and "var7" are two of the different types of information that the system uses. A second interactive phase enables the user to annotate these different pieces of information with a description of their semantics, such as "var6 is-a Customer-Name" and "var7 is-a Claim-Number". In this manner, the user creates a task-specific view of the system's domain model and a corresponding mini-database schema.
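To make the result of this analysis concrete, the sketch below encodes the claim-report task of Table 5 as an explicit information-exchange plan; the type names are illustrative assumptions of ours, not URGENT's internal representation.

```python
# Each step of a task is a tell or ask action on a screen, and each
# exchanged item carries one of the four scopes described in Section 2.
from dataclasses import dataclass
from enum import Enum


class Scope(Enum):
    SYSTEM_CONSTANT = "system constant"
    USER_VARIABLE = "user variable"
    TASK_CONSTANT = "task constant"
    PROBLEM_VARIABLE = "problem variable"


@dataclass
class ExchangeAction:
    screen: str
    kind: str            # "tell", "ask-standard", or "ask-select"
    item: str            # the piece of information being exchanged
    scope: Scope
    area: tuple = None   # (row1, col1, row2, col2) for ask-standard


claim_report_task = [
    ExchangeAction("Signon", "tell", "login", Scope.USER_VARIABLE),
    ExchangeAction("Signon", "tell", "password", Scope.USER_VARIABLE),
    ExchangeAction("Name search", "tell", "customer-name",
                   Scope.PROBLEM_VARIABLE),
    ExchangeAction("Claim retrieval", "ask-standard", "claim-number",
                   Scope.PROBLEM_VARIABLE, area=(3, 12, 3, 17)),
    ExchangeAction("Menu", "tell", "claim-number", Scope.PROBLEM_VARIABLE),
    ExchangeAction("Menu", "tell", "a1", Scope.TASK_CONSTANT),
]
```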

5.2 Task-based GUI Design

Abstract User Interface Specification: After the analysis of a task, URGENT specifies an abstract GUI, which requires the users to input their data items (user variables and user-input problem variables) only once, and buffers them appropriately to deliver them to all the screens that use them. This GUI also retrieves from the legacy interface the "asked" data items and feeds them to the appropriate screens. In this manner, if the user has developed a domain model, the GUI can also perform a "data-warehousing" task in the background of the interaction. So, in the above example, the users will enter their personal data and the customer name only once. The claim number will be automatically retrieved from the standard area of the screen where it appears, and will be fed to the appropriate locations of the subsequent screens that require it. This interaction process is much simpler than the original one, where the users enter their personal data twice (once to each of the two subsystems), write down the claim number on paper and then enter this same number in three different screens

to get different elements of the final report. (Footnote: This problem of interfaces requiring redundant information is not particular to legacy systems; rather, it occurs whenever user interaction processes are distributed over multiple systems, developed independently of each other. For example, WWW users are increasingly faced with similar issues.)

At this point, for all the data items to be manipulated, depending on their type, URGENT identifies a class of graphical interaction objects appropriate for the data-entry action at hand [6]. For example, for a tell action manipulating a date, appropriate graphical objects might be a calendar, a combination of three scrolling lists for year, month and day selection, a simple text-entry box, etc.

Graphical User Interface Generation: The different types of actions imply the need for different graphical objects in the front-end GUI. The final step in the URGENT process is the actual generation of a GUI implementing the user's task. Having identified a class of graphical objects appropriate for each data item, URGENT proceeds to develop a dynamic HTML GUI with graphical objects appropriate for the users for whom this GUI is intended, given a profile of these users and standard interface design guidelines. To date, we have applied this process to simple report tasks, for which a simple GUI generation process can generate sufficient new interfaces. More complex tasks will require more elaborate approaches to the problems of window organization and object layout.
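The buffering behavior described above might be sketched as follows, reusing the ExchangeAction and Scope types from the previous sketch; pilot_send and read_area are hypothetical stand-ins for the Pilot's keystroke-driving and screen-reading facilities, whose real interfaces the paper does not detail.

```python
# Replay a task plan against the legacy interface: user-supplied items are
# captured once and re-entered wherever needed; "asked" items are harvested
# from the screen and buffered for the screens that consume them.
def execute_plan(plan, user_input, pilot_send, read_area):
    """Replay a task plan, entering each buffered data item where needed."""
    buffers = dict(user_input)   # items the user supplied once, up front
    for step in plan:
        if step.kind == "tell":
            # task constants travel with the plan; everything else is buffered
            value = (step.item if step.scope is Scope.TASK_CONSTANT
                     else buffers[step.item])
            pilot_send(step.screen, value)
        elif step.kind == "ask-standard":
            # harvest system output so later screens can reuse it
            buffers[step.item] = read_area(step.screen, step.area)
    return buffers
```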

6 Related and Further Research

The majority of work on reverse engineering has focused on parsing the system code into various types of call graphs [10] and, based on them, on extracting higher-level, abstract structures [4]. While these methods were originally purely syntactic, more recently they have incorporated application-specific information, such as a designer's model of the system design [11] or a domain object model [2]. To our knowledge, there has been no work that uses the traces of the dynamic interaction between the user and the system. Work on interface migration has followed similar strategies, although emphasizing the extraction of a high-level representation of the interface elements [7, 8]. And although developing task-specific interfaces is generally accepted as a desirable practice for developing original interfaces [14, 12, 15, 3], it has not been applied to the migration of interfaces across platforms. The CelLEST method for task modeling and interface migration relies on the concept of task to collect related traces from which to extract the logic and the elements of the information-exchange process between the user and the system. This approach localizes the migration effort to different areas of the overall interface, and also enables the development of intuitive, user-friendly graphical interfaces tailored to simplifying the interaction of the user with the system in service of a specific task.

Our own plans for further research include, first, the problem of transition identification. Associated with that is the problem of generating more semantic characterizations of the recognized system screens. Our current screen-recognition method relies heavily on characteristic features of the screens seen as images; we are now working on defining features rich in semantic content, such as recognizing the application-specific keywords that most often appear on the system screens. Such features will enable the generation of hypotheses about the screen behavior, which, in turn, will support transition recognition. Finally, we are interested in feeding the results of the task and domain modeling phase back to the interface mapping process, by using the task-specific data warehouses as resources for application-specific keywords that can be used in the screen-recognition process.

7 Summary and Conclusions


Reverse engineering is the process of analyzing a subject system to identify the system's components and their interrelationships, and to create representations of the system in another form or at a higher level of abstraction [17]. The work we have described in this paper focuses clearly on the latter goal. The basic idea underlying our work is that, for transaction-based systems, whose interface quite faithfully represents the data model that they adopt and the tasks that they accomplish, examination of the system-user interaction traces can provide sufficient information for understanding the purposes of the system, as they have evolved and as they are currently perceived by its users. In this paper, we described a process for mapping the interface of a legacy system as a directed graph, based on traces of system-user interaction. Then we described a process for using traces of task-specific traversals of a sub-graph of this overall graph to extract a model of the information-exchange process that occurs between the system and the user in service of this specific task. This approach cannot support the maintenance of the system code, but it is useful for tasks such as interface migration and data warehousing. This approach exhibits two advantages over existing code-based approaches:

- Because it is interaction-driven and code-independent, it is geared towards extracting an information-exchange model of the underlying system, which can potentially be closer to the requirements that the system was developed and evolved to deliver, as opposed to low-level, possibly ad hoc, implementation decisions.
- It directly supports the migration of the legacy interface to new task-specific GUIs.

The work reported in this paper is clearly work in progress, but we believe that the results of our initial experimentation with the interface-mapping, task-modeling and interface migration processes are quite promising, and we continue to develop and evaluate this process.

Acknowledgements

The authors would like to thank Roland Penner, Brice Riemenschneider and Satinder Sandhu for their assistance in the implementation. This work was supported by a generous contribution from CEL Corporation and by a Collaborative Research and Development grant from NSERC (21545198).

References

[1] 3270 Information Display System, Data Stream Programmer's Reference, GA23-059-07, 8th Edition, IBM, June 1992.
[2] R. Clayton, S. Rugaber & L. Wills: "Dowsing: A Tool Framework for Domain-Oriented Browsing of Software Artifacts", Automated Software Engineering Conference (ASE-98), October 1998.
[3] M.R. Frank & J.D. Foley: "Model-based user interface design by example and by answering questions", Adjunct Proceedings of INTERCHI, ACM Conference on Human Factors in Computing Systems, pp. 161-162, Amsterdam, The Netherlands, April 24-29, 1993.
[4] L. Wills: Automated Program Recognition by Graph Parsing, MIT-AI-TR 1358, July 1992.
[5] L. Kong, E. Stroulia & B. Matichuk: "Legacy Interface Migration: A Task-Centered Approach", 8th International Conference on Human-Computer Interaction, August 22-27, 1999, Munich, Germany.
[6] C. Lewis & J. Rieman: "Task-Centered User Interface Design", http://www.acm.org/~perlman/uidesign.html, 1993.
[7] E. Merlo, P.Y. Gagné, J.F. Girard, K. Kontogiannis, L.J. Hendren, P. Panangaden & R. De Mori: "Reverse engineering and reengineering of user interfaces", IEEE Software, 12(1), pp. 64-73, 1995.
[8] M. Moore: "Representation Issues for Reengineering Interactive Systems", ACM Computing Surveys, 28(4es), Article 199, 1996.
[9] M. Moore & S. Rugaber: "Issues in User Interface Migration", Proceedings of the Third Software Engineering Research Forum, Orlando, FL, November 10, 1993.
[10] H.A. Mueller, M.A. Orgun, S.R. Tilley & J.S. Uhl: "A reverse engineering approach to subsystem structure identification", Journal of Software Maintenance: Research and Practice, 5(4), pp. 181-204, December 1993.
[11] G.C. Murphy, D. Notkin & K. Sullivan: "Software reflexion models: bridging the gap between source and high-level models", ACM SIGSOFT Software Engineering Notes, 20(4), pp. 18-28, October 1995.
[12] M. Sanz & E.J. Gomez: "Task Model for Graphical User Interface Development", Grupo de Bioingenieria y Telemedicina, Universidad Politecnica de Madrid, Technical Report gbt-hf-95-1, 1995.
[13] S.N. Srihari, S.W. Lam, V. Govindaraju, R.K. Srihari & J.J. Hull: "Document Image Understanding", Technical Report CEDAR-TR-92-1, Center of Excellence for Document Analysis, State University of New York at Buffalo, May 1992.
[14] A. Vanniamparampil, B. Shneiderman, C. Plaisant & A. Rose: "User interface reengineering: A diagnostic approach", University of Maryland, Department of Computer Science, Technical Report CS-TR-767, 1995.
[15] S. Wilson & P. Johnson: "Empowering Users in a Task-Based Approach to Design", Proceedings of DIS'95, Symposium on Designing Interactive Systems, Ann Arbor, Michigan, August 23-25, pp. 25-31, ACM Press, 1995.
[16] http://www.celcorp.com
[17] http://www.tcse.org/revengr/taxonomy.html
[18] LOCIS IP address: locis.loc.gov
