A prototyping environment for dynamic data visualisation

R. Bentley, T. Rodden, P. Sawyer and I. Sommerville
Computing Department, Lancaster University, Lancaster LA1 4YR, UK

Abstract

This paper describes the model underlying a user interface prototyping system which is designed to support the creation of multi-user, interactive database visualisations. The context of the work is the automation of an air traffic control system; we are concerned with the user interface to the in-flight database. The model is based on active display agents which interact in real-time with the database. The model separates the selection of entities for display, the entity representations and the way in which these representations are presented to specific controllers. The advantages of this model are that it allows multiple database views to be updated concurrently, it allows views to be shared by users at different workstations and it allows a high degree of end-user tailorability.

Keyword Codes: H.5.2; H.5.3; H.1.2
Keywords: Information Interfaces and Presentations, User Interfaces; Group and Organization Interfaces; Models and Principles, User/Machine Systems
1. INTRODUCTION

A common model of an interactive computer system is one in which the user interface and application are designed and implemented as separate components. As Hartson [1] points out, this arrangement has many advantages, not least of which are ease of change and maintenance. However, certain classes of system do not incorporate a distinct application component as such; rather, the ‘application’ is concerned with the presentation of information from a database and the users’ interaction with that presentation. In such systems, the principal problem is discovering effective strategies for database visualisation so that users are not overloaded with immense amounts of information. The system should ensure that only relevant database items are presented to the user, that these items are presented in a way which is readily comprehensible and that users can easily distinguish one item from another.

Much of the work on information visualisation has concentrated on static scientific data and the automatic generation of different presentations [2]. Such work has examined the application of graphical techniques (e.g. animation, positioning, orientation) to highlight important aspects of the information [3]. This work has been extended by systems which take into account the task the visualisations must support as well as the data to be visualised [4]. Systems such as Iconographer [5] allow similar visualisation techniques to be applied to dynamic data.

Visualisation systems are complemented by User Interface Management Systems (UIMS). Such tools allow the prototyping of user interfaces for interactive systems, but typically embody a run-time environment which only supports synchronous interaction, where data is updated solely in response to user actions.
Hix identifies the problems of visualising dynamic data objects as an issue being addressed by the latest generation of UIMS [6]. SIRIUS [7] is one such system which has been used for prototyping user interfaces to oceanic air traffic control systems. However, it does not provide high-level tools for experimentation with different data visualisations.

Our research is concerned with techniques for database visualisation where the database is ‘reactive’; that is, it may be updated in real-time by human users or by external sensors. The work described here is being carried out as part of a project investigating requirements for an electronic display system for UK air traffic controllers. The examples in this paper are thus drawn from this domain, but the prototyping system which we have developed is generally useful where reactive database visualisations have to be generated. In this paper, we present an overview of the context of our work, our objectives and the model used for interface construction. We then go on to describe an environment which uses this model to support the construction of database interfaces and experimentation with different representations of the data. Finally, we review the characteristics of this model and show how it meets the project objectives.

2. THE CONTEXT OF OUR RESEARCH

In the current UK air traffic control system, aircraft descriptors are printed from a flightplan database onto paper flight progress strips. Controllers use these strips, in conjunction with a real-time radar display, for in-flight control. Control is organised into sectors, with a team of controllers working on each sector. At any one time, a controller will be directly responsible for about 20 flights. Aircraft descriptors are made available to controllers some 40 minutes before the aircraft enters their sector.
The overflight time for a sector is about 20 minutes, so several controllers of different sectors may have access to flight strip descriptors for an individual aircraft. Control currently involves controllers dealing with a rack of paper strips, subdivided into individual bays (see Figure 5), which allows them to visualise the airspace they control; each bay represents a single reporting point in the sector and flight strips are placed in a bay for each aircraft due to pass over the corresponding reporting point in the next 40 minutes or so. Controllers evolve individual ways of organising strips and re-order them as control decisions are made. Control decisions and instructions given to aircraft are recorded by writing on the strips with a coloured pen; different controllers use different pen colours to distinguish their inputs.

Controllers have evolved complex and subtle ways of working, involving coordinating their decisions with other controllers. In making decisions, they must often anticipate how other controllers will react to particular configurations of aircraft. Furthermore, controllers are justifiably proud of their safety record and have the notion of ‘egoless’ control, where they monitor each other's work and correct each other's mistakes.

Previous attempts to automate this system have not been successful because they have not taken into account the subtleties of the system which are so essential for its effective functioning. Our work is distinct from these previous projects as we are deriving the user requirements for an electronic display system from an ethnographic study being carried out by social scientists [8, 9]. This study involves detailed observation of the controllers at work as well as in-depth interviews with them about their job and their evolution of successful working practices. This work is a development of previous work by Hughes et al.
[10], and is a new field for sociologists whose role is to explicitly participate in the requirements capture process. Air traffic control (ATC) systems are a type of command and control (C2) system and our objective was to develop a model for user interfaces which could be applied in other C2 environments. Many classes of C2 system involve controllers cooperating using a shared information space which reflects the external processes being controlled. Like the in-flight database, the control database is updated by user input and by input from external sensors such as radars, transponders, etc. (figure 1).
[Figure 1 schematic: external sensors (radar and transponder data) and end users update the control database; aircraft entities in the database are mapped to local representations.]
Figure 1. Example interactions in a C2 database

Currently, most C2 systems are hybrid systems in which a computer-maintained database is supplemented by paperwork and a variety of manual procedures. As the complexity of these systems increases, it becomes increasingly urgent to raise the degree of automation in order to reduce the load on individual controllers. This is particularly apparent in the UK ATC system, where projected future traffic levels will exceed the capacity of the existing system.

3. OBJECTIVES FOR THE SYSTEM DEVELOPMENT

Because of the complexities inherent in C2 systems, there are many subtle factors which must be taken into account when designing their user interfaces. A user-centred approach to interface design [11], supported by effective prototyping tools, is essential if a workable interface is to be developed. Our overall objective was, therefore, to construct an environment for the prototyping of user interfaces to reactive databases. This environment should support:

• the definition of different interactive representations of database entities and information displays (representations of collections of database entities);

• the tailoring of these entity representations and information displays by the end-user, to support different working practices and personal preferences;

• the development of prototype interfaces which keep the individual entity representations and information displays consistent with the database in real-time;

• the construction of multi-user, multi-screen interfaces where the same database entity or entity collection may be simultaneously displayed on separate screens and manipulated by different users.
It is important that the environment support the definition of entity representations and their organisation within information displays for two reasons:

1. Many different entity types may be represented in a C2 database and we cannot predict in advance what these will be. We need the facility to create different representations quickly so that a designer and an end-user can collaborate on representation design.
2. The limited screen size of current workstations makes it critical that the representation of a collection of database entities is organised in the most effective way. This can only be discovered by experiment and in collaboration with end-users. A simple way of specifying and generating these representations is therefore necessary.

Our requirement for end-user tailorability is based on the experience of previous attempts to produce automated systems for ATC. A criticism of these attempts was that the systems locked controllers into a particular style of working which often conflicted with the controllers' own views of the tasks they performed. In this sense, we agree with Fischer and Girgensohn [12] when they say: ‘End-user modifiability is not a luxury, but a necessity in cases where systems do not fit a particular task, a particular style of working or a personal sense of aesthetics’ (page 185). However, in C2 environments it is often the case that information displays are shared by a number of end-users. The public availability of information might be adversely affected if end-users were allowed complete freedom in tailoring their user interfaces. The environment must therefore allow interface designers to specify exactly what aspects may be tailored and in what ways.

Controllers using a C2 system typically make several control decisions per minute, so access to up-to-date information is essential. The displays should react to changes in the database and update immediately. We were required to generate prototype systems whose performance was such that immediate database updates could be demonstrated to controllers.

C2 systems often require several users to have access to the same data entities and to be able to update these entities independently. For example, in ATC the sector controller issuing instructions to aircraft requires information on the same aircraft as the planning controller, who is responsible for coordinating aircraft into and out of the sector.
Similarly, military systems may require field and headquarters commanders to monitor the same data, to coordinate their actions and to take over from each other if part of the communication system goes out of action. We therefore needed a mechanism in the prototyping system which allowed the same database entities and entity collections to be displayed simultaneously on several workstations, and which kept representations in step as updates were made. Performance considerations were important here; it had to be possible for multiple displays to exist without causing a significant degradation in the performance of the database update mechanism.

4. USER DISPLAY AGENTS

In order to meet our objective of keeping our database representations consistent in real-time, we developed the idea of an active display which could respond to changes in the state of the database. This asynchronous object (or agent) is responsible for managing a display of database entities on the user's workstation. We refer to these agents as User Display agents (UD agents), and the part of the user's screen managed by an agent is called a User Display (UD). There may be several UD agents associated with each workstation, so that several UDs can be presented, in separate windows, on the same screen. Multiple windows which present the same UD may be opened and are controlled by a single UD agent (figure 2). Each agent is concerned with displaying a collection of database entities, using an appropriate representation and organised in an appropriate way. As the state of database entities changes, these agents must update their UDs, and any windows which present them, in order to maintain interface consistency.
[Figure 2 schematic: external sensor updates flow into the database; UD agents in the user environment maintain User Displays, presented in windows on the display screen.]
Figure 2. Interface management using active agents

The fact that we are dealing with a database which is constantly being updated places a number of requirements on the system model:

• Display focus: As the attributes of an entity change, that entity may have to be added to or removed from an existing UD. For example, as an aircraft moves out of a sector, its descriptor should be removed from the set of descriptors managed by a controller.

• Entity visualisation: As the attributes of an entity change, it may be necessary to change the screen representation of the entity within a UD. For example, a change in heading for an aircraft may require the colour of the aircraft's representation to change, to warn the controller that the new heading places the aircraft in conflict with existing traffic.

• Entity position: As the attributes of an entity change, the position of its representation relative to other representations within a UD may have to be adjusted. For example, the display of aircraft descriptors may be organised according to their time of arrival over a reporting point. A change in airspeed for one aircraft may require this UD to be reordered.
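The three requirements above describe what a UD agent must recompute on every entity update. A minimal sketch follows (hypothetical Python; MEAD itself is implemented in Smalltalk-80, and all names here are our own invention):

```python
class UDAgent:
    """Sketch of an active display agent reacting to one entity update."""

    def __init__(self, selects, choose_view, sort_key):
        self.selects = selects          # display-focus predicate over an entity
        self.choose_view = choose_view  # maps an entity to a View name
        self.sort_key = sort_key        # attribute ordering for entity position
        self.contents = []              # entities currently shown in the UD

    def on_entity_update(self, entity):
        """Recompute focus, representation and position after one update."""
        if self.selects(entity):
            if entity not in self.contents:
                self.contents.append(entity)      # display focus: add entity
        elif entity in self.contents:
            self.contents.remove(entity)          # display focus: remove entity
        self.contents.sort(key=self.sort_key)     # entity position: re-order
        # entity visualisation: re-evaluate the View for every shown entity
        return [(self.choose_view(e), e) for e in self.contents]
```

For instance, an agent selecting aircraft within 40 minutes of a reporting point would drop an aircraft whose time attribute rises above the threshold, and re-sort the remainder on each change.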
4.1. The System Model

These requirements led us to develop a model for a User Display which separates the notions of the entities to be displayed, their representations and the spatial organisation. A UD is therefore defined as a triple:

UD = {S, P, C}

S represents a Selection (the entities to be displayed, or the display focus), P a Presentation (the entity representations) and C a Composition (the spatial organisation). These terms are defined more precisely as follows:

1. A Selection (S) is a set of entities which are chosen from the database according to the Selection criteria. Selection criteria are predicates over entity attributes and are specified by the interface designer. The Selection criteria filter database entities to select those which should be included in the UD. As the state of entities is updated, the Selection is changed accordingly.
2. A Presentation (P) is a set of Views used to represent entities in the Selection. A View is defined as a graphical representation of one or more entity types; it specifies how a represented entity will appear, the layout and representation of entity attributes and the interaction mechanisms used to update the entity's state. Views are selected for each entity according to the Presentation criteria. Presentation criteria are guards associated with each View which could be used to represent an entity. An entity which passes a guard can be represented using the corresponding View. As an entity's state is updated, it may be necessary to change the View used to represent it in the UD.

3. A Composition (C) is a set of positions which represent the spatial arrangement of the Views in the UD. These positions can be absolute or relative to other Views. As an entity's state is updated, the positions of the Views must also be updated, to remain consistent with the arrangement defined in the Composition criteria.

An example of a UD is a flight strip bay where aircraft descriptors are displayed for the controller to manage (Figure 5). Flight strips will appear in the bay for all aircraft due to pass over the corresponding reporting point in the next 40 minutes or so. In the UK ATC system, military aircraft are represented using pink strips, civilian aircraft using white strips. In addition, the format and information on these strips differ depending on, amongst other things, the type of aircraft being represented. Strips are arranged vertically in the bay in the order in which they will arrive at the reporting point, with the strip representing the next aircraft to arrive positioned at the bottom.
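The triple can be rendered as a small data structure, with Selection criteria as a predicate, Presentation criteria as (guard, View) pairs and the Composition as an ordering over an entity attribute. This is a hypothetical Python sketch, not MEAD's Smalltalk implementation; the entity attributes are invented for the flight strip bay example:

```python
class UserDisplay:
    """Sketch of the UD = {S, P, C} triple."""

    def __init__(self, selection, presentation, composition):
        self.selection = selection        # S: predicate, entity -> bool
        self.presentation = presentation  # P: list of (guard, view_name)
        self.composition = composition    # C: key function giving the ordering

    def render(self, database):
        """Return (view_name, entity) pairs in display order."""
        chosen = [e for e in database if self.selection(e)]
        chosen.sort(key=self.composition)
        out = []
        for e in chosen:
            for guard, view in self.presentation:
                if guard(e):              # first passing guard selects the View
                    out.append((view, e))
                    break
        return out

# The flight strip bay example above, for a reporting point 'BTN':
btn_bay = UserDisplay(
    selection=lambda e: "BTN" in e["points"] and e["time_to_btn"] < 40,
    presentation=[
        (lambda e: e["military"], "pink strip"),
        (lambda e: not e["military"], "white strip"),
    ],
    composition=lambda e: e["time_to_btn"],
)
```

Re-running `render` after each database update is the (naive) analogue of the agents' consistency maintenance; the paper's actual update mechanism is described in section 6.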
The definition of a simplified flight strip bay UD for the Barton (BTN) reporting point might be as follows:

Selection criteria: reporting points include BTN and time to BTN < 40 minutes
Presentation criteria: use a pink strip for military aircraft, a white strip for civilian aircraft
Composition criteria: vertically ordered by arrival time at the BTN reporting point

In practice, the definition of a UD is usually more complex than this. It is possible, for example, for the View applied to an entity to be related to dynamic entity attributes such as its position or height. We could therefore signal potential collision courses by defining a collision View and applying this to aircraft which are at the same height and on opposing headings. This model meets our requirements and has the further advantage that, by partitioning the problem of display creation in this way, it supports the construction of libraries of Views and of Selection, Presentation and Composition criteria. As these libraries are populated, the task of the interface designer is simplified; the definition of the Presentation criteria above may be equally applicable to flight strip bay UDs representing other reporting points, for example. Implementing a new User Display can therefore be accomplished by selecting the appropriate components from the library.

5. DEFINING USER DISPLAYS

Based on the model described above, we have developed a prototyping environment called MEAD which allows the rapid construction of different user interfaces by experimentation with different View and User Display definitions. These user interfaces can then be evaluated by analysing their use in a control situation. The MEAD prototyping environment includes two tools:

1. A View definition tool which is used to specify representations of database entity types (Figure 3).

2. A User Display definition tool which is used to specify the Selection, Presentation and Composition criteria components of User Displays (Figure 4).
The View definition tool is a hybrid drawing and database viewing tool which allows a user interface designer to create graphical representations of entity types. Using a set of primitives, the designer constructs the representation, indicating how attribute values are to be positioned, how they are to be represented and how the user can interact with them to change the state of a represented entity. Created Views can be stored in an external View library, allowing them to be re-used across a number of interface designs. Figure 3 shows the definition of the ‘westing flight strip’ representation. This View can be used to represent entities of type ‘Civilian Aircraft’, and is used by the top flight strip in figure 5.
Figure 3. The View definition tool

Our tool for User Display creation captures the definition of the Selection, Presentation and Composition criteria which collectively make up the definition of the UD. The interface designer is presented with three forms with which to specify these criteria; the Selection and Presentation forms consist of a list of entity attributes which the designer can complete in order to define appropriate guards. The Composition form allows the spatial organisation to be defined; this may be absolute, computed using a function of entity attributes, or relative, specified using pre-defined primitives such as vertical or horizontal ordering. As with View representations, the designer can store and retrieve criteria definitions to and from an external library.

Figure 4 shows the User Display definition tool being used to define the Barton flight strip bay UD, as discussed in the previous section. The designer has specified that entities of type ‘Civilian Aircraft’ which will pass over the Barton reporting point during the next 40 minutes are to be selected for representation in the UD. There is a choice of two different Views which can be used to represent entities of this type; a ‘westing flight strip’ is to be used if the aircraft is flying in a generally westerly direction, otherwise an ‘easting flight strip’ should be used. (In figure 5, westing flight strips have darker borders than easting flight strips.) Finally, the designer has specified vertical ordering of the strips depending upon their times of arrival over the Barton reporting point, with the strip representing the next aircraft to arrive positioned at the bottom of the bay.
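The westing/easting choice described above amounts to a pair of Presentation guards evaluated in order. A brief sketch (hypothetical Python; the heading range used for ‘generally westerly’ is our assumption, not something the tool specifies):

```python
def westing(entity):
    # assumed convention: headings from 180 up to 360 degrees count as
    # generally westerly; anything else counts as easterly
    return 180 <= entity["heading"] < 360

# ordered guard list: the first guard an entity passes selects its View
presentation = [(westing, "westing flight strip"),
                (lambda e: True, "easting flight strip")]  # catch-all guard

def view_for(entity):
    """Return the name of the View whose guard the entity passes first."""
    return next(view for guard, view in presentation if guard(entity))
```

The catch-all final guard mirrors the tool's "otherwise" case, guaranteeing that every selected entity receives some representation.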
Figure 4. The User Display definition tool

Figure 5 shows a window presenting the User Display which is defined in figure 4. The aircraft objects represented here are taken from a small example database which has been created using actual aircraft data from the in-flight database.
Figure 5. A window presenting the ‘Barton Flight Strip Bay’, as defined in figure 4
6. DISCUSSION

We have briefly described the architecture of our model based on active User Display agents and a prototyping environment for developing User Displays. We now examine aspects of these in greater detail to illustrate how they meet the system objectives identified earlier.

6.1. Multiple Representation Support

To achieve the required flexibility we have divided the problem into View and User Display definition and provided tools for both, as described in the previous section. These tools are integrated so that the User Display definition tool knows about the representations and their associated entity types, as defined by the View definition tool. Entities can therefore be constrained to map to appropriate representations.

View definition: A View is defined for an entity type by drawing a graphical template, selecting the names of the attributes to be displayed and positioning them within this template. Entities being represented by a View are mapped to directly manipulatable instantiations of its template, with attribute values displayed in the specified positions. The View definition tool can be used to map example attribute values (or a range of such values) on to different attribute representations, which define both the attribute's presentation (output properties) and the end-user's editing capabilities (input properties).

The simple approach which we have taken to View construction means that a user interface designer can interact with an end-user to produce entity Views. The end-user can see, as the View is designed, how it will appear in a User Display. Indeed, any changes to the definition of a View will be reflected immediately in UDs (and hence in windows presenting these UDs) using this View to represent one or more entities in their Selection.
User Display definition: The User Display definition tool addresses the three aspects of Selection, Presentation and Composition directly, in order to partition the problem and provide a high degree of flexibility. A pattern-based mechanism [13] is used to capture both the Selection and Presentation criteria for each type of entity which might be represented in the UD. Patterns are individual forms which can be used to specify conditions in a by-example manner. To meet the Selection criteria, an entity must satisfy all the conditions of at least one pattern template (a template with no conditions implies that all entities satisfy it). Similarly, the Presentation criteria are defined by specifying the conditions which must be met by an entity before a particular View can be used to represent it.

The Composition criteria define the spatial organisation of Views within a screen window. Two approaches to the definition of layout are constraints and pre-defined layout algorithms. Constraint-based systems such as ThingLab [14] and, more recently, C32 [15] allow the user to define the layout of objects by describing the relationships between them. Although such systems can be very effective at capturing the desired layout, they require a complex system to resolve constraints at run-time and can be hard to debug [16]. Systems based on pre-defined layout algorithms [17, 5] are much easier to implement but cannot easily be extended to cater for new layouts. Our approach to capturing layout, based on ‘composition axes’, falls somewhere between the two. For each entity type (or group of types) which can be represented in the UD, axes are defined which are tied to an attribute. The value of this attribute determines the View's position along the axis for entities of this type. This position can be either absolute (as in a radar display, for example) or relative (as in the flight strip UD shown in figure 5).
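The pattern-template and composition-axis mechanisms can be sketched as follows (hypothetical Python; the function and attribute names are ours, and a template is modelled as a mapping from attribute name to condition):

```python
def satisfies(entity, template):
    """An entity satisfies a template if every condition in it holds.
    An empty template imposes no conditions, so every entity passes."""
    return all(cond(entity[attr]) for attr, cond in template.items())

def selected(entity, templates):
    """Selection criteria: satisfy all the conditions of at least one
    pattern template."""
    return any(satisfies(entity, t) for t in templates)

def axis_positions(entities, attribute, absolute=False):
    """Composition axis tied to an attribute: the position is either the
    raw attribute value (absolute, as in a radar display) or the rank
    order along the axis (relative, as in a flight strip bay)."""
    if absolute:
        return {e["cs"]: e[attribute] for e in entities}
    ranked = sorted(entities, key=lambda e: e[attribute])
    return {e["cs"]: rank for rank, e in enumerate(ranked)}
```

The relative case reproduces the flight strip bay's behaviour: changing one entity's attribute value can shift the ranks, and hence the positions, of its neighbours.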
Again, the principal advantage of this approach is that end-users can easily participate in the design process. By using a forms-based representation, the underlying complexity of the Selection, Presentation and Composition model is hidden from the end-user. The effects of changes to the definition of a UD are immediately calculated and screen windows are brought up to date, allowing rapid interface development by the designer in conjunction with end-users.
6.2. End-user Tailorability

An objective for our system development was to provide support for individual end-user preferences in the interfaces created by the prototyping environment. Although the designer of an interface can specify the appearance and organisation of the individual User Displays, the user should be able to make local changes to these UDs to suit their own methods of working. We support end-user tailorability in two different ways:

1. We allow the user to interact with a screen window to change details of Views and aspects of the Presentation and Composition of the represented entities.

2. By separating the notions of Selection, Presentation and Composition, we provide the potential for personalised libraries which may be used to construct User Displays. A user may pick up a UD definition and replace components with their own preferences.

Typical changes which may be made by an end-user to individual Views include changing the size of the representation, font types and sizes, the colour of individual items and particular highlighting conventions. Users define their preferences by selecting the item to be modified and then choosing the modifications required from a menu of allowed options, as specified by the interface designer using the View definition tool. The user can indicate whether changes applied to one View should be localised, applied to all such Views in the window or applied to all such Views on the screen.

It is possible to allow the user to select from a number of alternative View representations for a particular entity. The interface designer can specify the Presentation criteria so that an entity may pass the guards of more than one View. In this case the end-user can select which of the representations they wish to use and change between representations at any time. It may sometimes be necessary for end-users to override or even replace the Composition criteria for a User Display.
For example, our ethnographic analysis of the ATC system showed that different controllers adopted subtly different methods of organising flight strips in the strip bays. Using the User Display definition tool, it is possible to define alternative criteria for the Composition component of a UD. The end-user can then select from a menu which of the defined organisations they wish to adopt for a particular screen window. The user interface designer can also allow the end-user complete freedom to arrange the Views within a screen window by selecting and dragging individual representations. Thus, for the flight strip UD presented in figure 5, the end-user can select one of a number of pre-defined organisations (ordered by arrival time, current height etc.) or can re-order the flight strips manually.

A particular problem with allowing end-users to modify their displays is that non-standard interfaces can result, where only the user who did the tailoring can understand the significance of various aspects of the interface. Because of the collaborative nature of many C2 systems, in some cases displays will be shared and must be understandable ‘at-a-glance’ by more than one end-user. In such situations, the end-user cannot be allowed complete freedom to tailor their information displays. The tools have therefore been designed to give the interface designer control over the tailoring allowed to end-users. For example, the border colours of flight strips convey information about the direction of flight of the aircraft which they represent; a blue border implies generally westbound, a yellow border eastbound (dark and light respectively in figure 5). Flight strip displays are used by various members of the controlling ‘team’ to support a number of different tasks; allowing end-users to change these colours would limit the strips' usefulness as a shared resource.

6.3. Display Consistency and Updating

The model of active User Display agents provides an effective mechanism for detecting changes to the database and immediately reflecting these changes in User Displays. Our initial implementation of this model in the MEAD prototyping system is developed in Smalltalk-80 for Sun workstations. We have found that it meets our requirements, including the performance requirement for real-time updating of displays.
In principle, all UD agents should examine every update to database entities to check whether the UDs which they manage should be updated. Although the number of entities in an air traffic control system is relatively small, this is not necessarily the case for all C2 systems. Thus, a simple database scanning mechanism is likely to be too inefficient for general use. We have therefore designed and implemented a mechanism which monitors the transactions in the database and uses these to identify the database entities which have changed [18]. This mechanism is responsible for informing potentially affected UD agents, which can then compute the effects of the changes on the UDs they manage. The performance of this system is such that the effects of all manual database updates are computed and reflected in less than 1 second. In a production system it is unlikely that this mechanism could handle all updates from external sensors such as radars. However, we anticipate that hardware assistance would be available for these types of display.

6.4. Multi-user Working

A characteristic of many C2 systems is that data is shared; different users using separate screens interact with the same database entities and information displays. The ability to support multiple users on separate workstations was an objective of our system development. Figure 6 shows how the User Display agent model has been extended to provide support for multiple workstations. Each end-user holds a local copy (or Surrogate) of each of the UD agents which manage the information displays they are concerned with. These agents maintain communication links with the Master agent, which computes the effects of database updates on the UD and informs its Surrogates of the new state. Surrogate agents are therefore ‘minimal’ copies of the Master, in that only the Master holds the definition of the UD. This mechanism minimises both computation and communication whilst ensuring that users' displays remain consistent.
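The Master/Surrogate arrangement can be sketched as a simple push protocol: only the Master holds the UD definition and evaluates updates; Surrogates merely receive the resulting state (hypothetical Python; the real architecture, including the transaction-monitoring update handler, is described in [18]):

```python
class MasterUD:
    """Holds the UD definition and pushes computed state to Surrogates."""

    def __init__(self, selection, sort_key):
        self.selection = selection      # Selection criteria (predicate)
        self.sort_key = sort_key        # Composition ordering
        self.surrogates = []

    def attach(self, surrogate):
        self.surrogates.append(surrogate)

    def on_update(self, database):
        # compute the new UD state once, centrally, then push to every copy
        state = sorted((e for e in database if self.selection(e)),
                       key=self.sort_key)
        for s in self.surrogates:
            s.show(state)

class SurrogateUD:
    """A 'minimal' copy: no UD definition, just received state plus any
    locally retained tailoring operations."""

    def __init__(self, tailor=None):
        self.tailor = tailor or (lambda state: state)
        self.visible = []

    def show(self, state):
        self.visible = self.tailor(state)   # tailoring applied locally
```

Because the state is computed once at the Master and each Surrogate applies only its own tailoring, both computation and communication stay proportional to the number of workstations.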
In addition, this Master/Surrogate model supports end-user tailoring of information displays; tailoring operations are retained by the Surrogate agent and are taken into account when displaying changes in the state of the UD. A much fuller description of the architecture which supports these mechanisms can be found in [18].
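The Master/Surrogate arrangement can be sketched as follows. The class names, the sector predicate and the callsigns are all hypothetical; the sketch only illustrates the division of labour described above, in which the Master holds the UD definition and pushes new state to Surrogates, which re-apply their local tailoring.

```python
# Illustrative sketch of the Master/Surrogate arrangement (cf. Figure 6).
# Class and attribute names are hypothetical, not taken from MEAD itself.

class MasterUD:
    """Holds the UD definition and computes the UD state from updates."""
    def __init__(self, definition):
        self.definition = definition    # only the Master holds this
        self.state = {}
        self.surrogates = []

    def attach(self, surrogate):
        self.surrogates.append(surrogate)
        surrogate.receive(self.state)

    def database_update(self, eid, attrs):
        if self.definition(attrs):
            self.state[eid] = dict(attrs)
        else:
            self.state.pop(eid, None)
        for s in self.surrogates:       # push new state to every workstation
            s.receive(self.state)

class SurrogateUD:
    """'Minimal' local copy on a workstation; applies end-user tailoring."""
    def __init__(self, hidden=()):
        self.hidden = set(hidden)       # one simple tailoring operation
        self.view = {}

    def receive(self, state):
        # Re-apply local tailoring whenever the Master announces new state.
        self.view = {e: a for e, a in state.items() if e not in self.hidden}

master = MasterUD(lambda a: a.get("sector") == "S1")
ws1, ws2 = SurrogateUD(), SurrogateUD(hidden={"KL320"})
master.attach(ws1)
master.attach(ws2)

master.database_update("BA180", {"sector": "S1"})
master.database_update("KL320", {"sector": "S1"})
print(sorted(ws1.view), sorted(ws2.view))   # -> ['BA180', 'KL320'] ['BA180']
```

Because tailoring is held only in the Surrogate, the Master computes each update exactly once and every workstation's display remains consistent with the shared state while still reflecting its user's local preferences.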
Figure 6. Master/Surrogate User Display agent arrangement

All Master UD agents are held centrally on the Master Workstation. It is only possible to create and modify UDs using the User Display definition tool on this workstation, though definitions can be changed on Slave workstations through local tailoring operations. Any machine can assume the Master role, so the system is potentially tolerant of the failure of the existing Master. In addition, machines can be added to and removed from the system at any time by telling a local workstation which machine is currently running as the Master.
When the network is lightly used, the system performance with multiple workstations is comparable to that of a single-workstation arrangement. The limiting factor is network performance; some degradation is noticeable when the network is heavily loaded.

7. CONCLUSIONS AND FUTURE WORK

We have developed a prototyping environment for the construction of multi-user interfaces to reactive databases. Interfaces built with this environment use a model of active User Display agents which respond to changes in the state of entities in the database to ensure that windows opened on them remain consistent. Experience in developing this environment has shown how the encapsulation of a User Display in this manner simplifies the problems of distribution and end-user tailorability. The MEAD system has now been developed to the extent that it will allow us to experiment with the particular problems of air traffic control. These include:

• What is the most effective way to utilise limited screen space yet still provide controllers with an overall picture of what is going on in their sector and adjacent sectors?

• What mechanisms should be used to signal changes to shared data so that controllers do not miss important updates, yet do not continually have their attention diverted by screen updates?

• What flexibility should be allowed to individual controllers to tailor their own screen displays?
This system development has occurred in conjunction with an ethnographic analysis of the application domain. We are convinced that new computer systems will only be effective if they take into account existing working practices, which might, for example, include protocols for the updating, and the confirmation of updates to, data shared between users. Analysis of the results of this sociological study will enable us to support the various working practices that are revealed.

8. REFERENCES

[1] Hartson, R., 'User-Interface Management Control and Communication', IEEE Software, 6(1), January 1989, pp 62-70

[2] Mackinlay, J., 'Automating the Design of Graphical Representations of Relational Information', IEEE Computer Graphics and Applications, 3(2), March 1983, pp 11-20

[3] Bertin, J., Semiology of Graphics, W. J. Berg (Trans.), University of Wisconsin Press, Milwaukee, Wis., 1983

[4] Casner, S. M., 'A Task-analytic Approach to the Automated Design of Graphic Presentations', ACM Transactions on Graphics, 10(2), April 1991, pp 111-151

[5] Waite, K. W., Draper, S. W., Gray, P. D., 'Iconographer: a Tool for Creating Interactive Representations', submitted to ACM Trans. on Office Systems, 1992

[6] Hix, D., 'Generations of User-Interface Management Systems', IEEE Software, 7(5), September 1990, pp 77-87

[7] Windsor, P., 'An Object-Oriented Framework for Prototyping User Interfaces', in INTERACT '90, Elsevier Science, 1990, pp 309-314

[8] Sommerville, I., Rodden, T., Sawyer, P., Bentley, R., 'Sociologists can be Surprisingly Useful in Interactive Systems Design', to appear in Proceedings of HCI '92, September 15-18, York, Cambridge University Press, 1992

[9] Bentley, R., Rodden, T., Sawyer, P., Sommerville, I., Hughes, J. A., Randall, D., Shapiro, D., 'Ethnographically-informed systems design for air traffic control', to appear in Proceedings of CSCW '92, November 1-4, Toronto, ACM, 1992

[10] Hughes, J. A., Shapiro, D. Z., Sharrock, W. W., Anderson, R., Harper, R. R., Gibbons, S. C., The Automation of Air Traffic Control, Final Report, SERC/ESRC Grant no. GR/D/86257, Department of Sociology, Lancaster University, May 1988

[11] Norman, D. and Draper, S. (Eds), User Centered System Design, Lawrence Erlbaum Associates, New Jersey, 1986

[12] Fischer, G. and Girgensohn, A., 'End-user Modifiability in Design Environments', in Proceedings of CHI '90, Seattle, Wa., April 1990, ACM, pp 183-191

[13] Larson, J. A., 'A Visual Approach to Browsing in a Database Environment', IEEE Computer, 19(6), June 1986, pp 62-70

[14] Borning, A. and Duisberg, R., 'Constraint Based Tools for Building User Interfaces', ACM Transactions on Graphics, 5(4), October 1986, pp 345-374

[15] Myers, B. A., 'Graphical Techniques in a Spreadsheet for Specifying User Interfaces', in Proceedings of CHI '91, New Orleans, La., April 1991, pp 243-249

[16] Myers, B. A., 'State of the Art in User Interface Software Tools', to be published in Advances in Human-Computer Interaction, Volume 4, R. Hartson and D. Hix (Eds), Ablex, 1992

[17] Haarslev, V. and Moller, R., 'A Framework for Visualising Object-Oriented Systems', in ECOOP/OOPSLA '90 Proceedings, October 1990, pp 237-244

[18] Bentley, R., Rodden, T., Sawyer, P., Sommerville, I., 'An architecture for tailoring cooperative multi-user displays', to appear in Proceedings of CSCW '92, November 1-4, Toronto, ACM, 1992