Should Computers Function as Collaborators?

Anne Bruseberg
Department of Computer Science, University of Bath, Bath, BA2 7AY, UK
[email protected]

Peter Johnson
Department of Computer Science, University of Bath, Bath, BA2 7AY, UK
[email protected]

ABSTRACT
This paper discusses the merits of drawing analogies between human-computer interaction and human-human collaboration in the light of the ever-advancing capability of computer systems. We argue that, given the differences in characteristics between humans and computers, suitable analogies should focus on the more basic, fundamental processes that make collaboration efficient, rather than aiming to equip computers with complex, human-like abilities. We demonstrate this perspective through a number of examples of observed problems with current aircraft systems.

Keywords
Interaction design, collaboration, automation.

INTRODUCTION
Computer systems are becoming capable of an increasingly wide variety of functions, thus creating new types of complexity to be understood and designed for. Equipping computer-based systems with new capabilities is often approached through increased automation, adaptivity, context sensitivity, and artificial intelligence. Such advances, however, create new types of interaction, and therefore new problems, including new types of error, often with more severe consequences. Yet the aim of any such design should be to make human-computer interaction not more, but less complex, and thus less difficult.

Such new types of interaction require new design approaches. One of the new concepts that has emerged is that of designing computer ‘agents’ as collaborators – by drawing analogies between the ways humans collaborate with each other and the ways humans may collaborate with a new generation of computer systems. However, one can take very different perspectives when applying this concept. In this paper we explore the implications of this idea, and discuss how we believe it should be applied. We take the view that human-computer systems will function more effectively and efficiently if we can shape them as collaborative systems, by drawing on the fundamental processes that make collaboration efficient. For example, Hourizi and Johnson showed how human-computer interaction dialogues for autopilot systems can be re-designed to reduce error rates, by drawing on acceptance, repetition and confirmation procedures found in human-human interaction [9].

This perspective is grounded in the idea that the concept of computer ‘users’ is no longer sufficient, since the complex interaction with automated systems, as found in modern glass cockpits, often goes much deeper. Whilst we consider system operators to ‘use’ computer systems, we would not ideally consider human collaborators to ‘use’ each other, but to work with each other. The more capability and responsibility that are given to ‘computational agents’ – thus modifying the roles of the humans interacting with them – the more we may benefit from considering them as ‘collaborators’ in carrying out complex, shared tasks. However, it is essential to draw suitable analogies to human-human collaboration, otherwise the resulting interaction may become even more complex. Based on a review of problems observed with automated aircraft, we explore the issues that lend themselves to being considered from the perspective of designing human-computer collaboration.
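To make this concrete, the following sketch (in Python, purely for illustration; it is not Hourizi and Johnson's design, and the class and message names are invented) shows how an acceptance, repetition and confirmation step could be imposed on an autopilot instruction: the system holds the instruction, reads it back, and only acts once the pilot's repetition matches.

    # Illustrative sketch only: a hypothetical read-back/confirmation step for
    # an autopilot instruction, loosely modelled on acceptance, repetition and
    # confirmation in human-human communication. All names are invented.

    class AltitudeInstruction:
        def __init__(self, target_ft):
            self.target_ft = target_ft
            self.confirmed = False

    class ConfirmingDialogue:
        """Holds a proposed instruction until the pilot confirms the value
        the system has read back."""

        def __init__(self):
            self.pending = None
            self.active = None

        def propose(self, target_ft):
            # Acceptance: the instruction is recorded but not acted upon,
            # and is echoed back for checking.
            self.pending = AltitudeInstruction(target_ft)
            return f"Read-back: set target altitude {target_ft} ft. Confirm?"

        def confirm(self, repeated_ft):
            # Repetition/confirmation: only a matching repetition engages it.
            if self.pending and repeated_ft == self.pending.target_ft:
                self.pending.confirmed = True
                self.active, self.pending = self.pending, None
                return f"Target altitude {self.active.target_ft} ft engaged."
            return "Mismatch: instruction not engaged; please re-enter."

    if __name__ == "__main__":
        dialogue = ConfirmingDialogue()
        print(dialogue.propose(24000))   # system reads the instruction back
        print(dialogue.confirm(24000))   # matching repetition engages it

The point of such a sketch is that the confirmation step is a property of the dialogue itself, not an extra capability requiring any ‘intelligence’ on the part of the system.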
DESCRIBING AND DESIGNING COLLABORATION
In order to determine which characteristics of computer-based agents best support collaboration between humans and computers, we need to understand what collaboration is. A generally accepted definition is that “collaboration is a process in which two or more agents work together to achieve shared goals” (p. 67) [15]. Within the context of describing potential human-computer collaboration, Terveen [15] describes five fundamental collaboration aspects that arise from this definition, by drawing on insights from both Artificial Intelligence (AI) research and Human Computer Interaction (HCI) research:
• Agreeing on the shared goals to be achieved (both prior to, and during, engagement in collaborative activities);
• Planning, allocation of responsibility, and coordination (deciding about, and coordinating, the actions of individual collaborators to achieve goals);
• Shared context (tracking progress towards goals and evaluating the effects of actions);
• Communication (exchange of information and concepts, including observation);
• Adaptation and learning (building knowledge for efficient collaboration through explanations, demonstration etc.).
We consider that designing human-computer collaboration is concerned with two main aspects. Creating tools that function as ‘collaborators’ means, firstly, supporting the inevitable collaborative activities, including (a) information exchange (in terms of its means);
(b) the interpretation processes to create awareness of a shared workspace; (c) the planning and adjustment processes that change what people do in relation to others – i.e. coordination. Related to this, we need to assess and anticipate the ways in which individual agents may approach their tasks, to support these processes effectively. This, in turn, is based on an understanding of the nature of collaborators’ relationships – i.e. how their task objectives relate to each other. The characteristics of collaborative activities are influenced by the degree to which functions (goals) are shared by collaborators, the types of goals shared, and how functions interrelate. This includes aspects such as: (a) the locus of responsibility and control [8]; (b) the reasons why people collaborate; (c) the influence of constraints on the sharing of resources.

Thus, secondly, we need to consider how to divide functions between humans and computers – at both high and low levels – to create suitable collaborators and collaborative practices. We can influence the functional structure, and thus how collaborators’ goals may relate to each other, by making decisions as to which functions should be delegated to the system, and which should remain with human operators but be supported by computer systems. Delegating functions means automating them, thus creating a reliance on their outcomes; this is not the case (or only to a much lesser extent) when assigning support functions. Moreover, we can distinguish between providing active support (e.g. by providing predictions, assessments, plan suggestions) and passive support, based mainly on information presentation. Communication processes can be influenced through the choice of media and of data types and formats (e.g. communication channels). Coordination processes can be pre-determined through the way interaction procedures are designed – through the extent to which, and the way in which, the sequence and purpose of actions are specified (e.g. structuring interaction into selection, ‘arming’, and execution stages, as sketched below). Likewise, providing alternatives for selecting different forms of task delegation to the system creates choices that influence coordination processes. Planning processes, as part of coordination activities, may be supported through specific support functions (e.g. prediction, simulation). Building and maintaining awareness of system ‘collaborators’ needs to be achieved through a variety of support functions (e.g. conveying status, plans, progress).
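As a minimal illustration of how such pre-structured coordination can be made explicit (a sketch under assumed names, not a description of any existing flight system), the following Python fragment enforces the selection, ‘arming’ and execution staging mentioned above: a function cannot be executed until it has first been selected and then armed, so each coordination step remains distinct and observable.

    # Minimal, hypothetical sketch of staged interaction: a function must pass
    # through SELECTED and ARMED before it can be EXECUTED. The stage names
    # and the StagedFunction class are invented for illustration.

    from enum import Enum, auto

    class Stage(Enum):
        IDLE = auto()
        SELECTED = auto()
        ARMED = auto()
        EXECUTING = auto()

    class StagedFunction:
        # Allowed transitions make the coordination structure explicit.
        _transitions = {
            Stage.IDLE: {Stage.SELECTED},
            Stage.SELECTED: {Stage.ARMED, Stage.IDLE},
            Stage.ARMED: {Stage.EXECUTING, Stage.IDLE},
            Stage.EXECUTING: {Stage.IDLE},
        }

        def __init__(self, name):
            self.name = name
            self.stage = Stage.IDLE

        def _move(self, new_stage):
            if new_stage not in self._transitions[self.stage]:
                raise ValueError(
                    f"{self.name}: cannot go from {self.stage.name} to {new_stage.name}")
            self.stage = new_stage
            return f"{self.name}: {new_stage.name}"

        def select(self):
            return self._move(Stage.SELECTED)

        def arm(self):
            return self._move(Stage.ARMED)

        def execute(self):
            return self._move(Stage.EXECUTING)

    if __name__ == "__main__":
        approach = StagedFunction("approach mode")
        print(approach.select())
        print(approach.arm())
        print(approach.execute())   # skipping a stage would raise an error instead

Which stages are appropriate, and who may trigger each transition, are of course design decisions of exactly the kind discussed above.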
AUTOMATION PROBLEMS THAT CAN BE CONSTRUED AS COLLABORATION PROBLEMS

To better understand the requirements for human-computer collaboration, we review problems that have been observed with automated systems within the area of aviation. We can draw on several sources here. One of the most important sources of information about the types of problems that can occur is accident and incident reports. Funk [7] reviewed a large number of accidents and incidents, and showed that automation-related issues (as identified in the text of the accident/incident reports) played an important part in their occurrence.
Besides a range of insights drawn from the literature, we have also carried out interviews with pilots, which illustrate the issues drawn out here through example accounts. Pilot 1 (P1) has experience of flying the Boeing 777 (one of the most advanced automated aircraft in service), among other types. Pilot 2 (P2) has, among other types, flown the Boeing 727 (with only basic automation) and the Learjet 31 (with a range of automated capabilities). Both pilots have been mainly in corporate employment (i.e. not airlines). We review the implications of automation that can be observed in terms of the changes in functional relationships, and, subsequently, for the processes of coordination, communication, and maintaining awareness of ‘collaborators’. Whilst we find it useful to distinguish these categories in terms of their design implications, it is essential to realise the close relationships between communication, coordination, and awareness building – since none of them can exist completely in isolation.

Functional relationships
Automation was perceived as beneficial by the pilots we spoke to, thus reflecting a widely observed opinion:

P2: “The autopilot takes a lot of the workload off the pilot… it also helps you in being more accurate.”

P1: “In all the planes I have flown, the 777 has been the most user-friendly…if you think about how the system would function, it’s generally how it does function, they obviously thought about it in the design.”
However, their responses also reflect the significance of change.

P2: “People say to me ‘what do you do?’… I say I am a systems operator.”

P2: “The Boeing 727 is my favourite plane because you fly – all the systems work is done by the flight engineer.”
Automation significantly changes the roles of pilots – by creating supervisory tasks, and adding new functions. In terms of a collaborative system, the implications are:
• Through decisions as to which functions to automate, and which functions to provide support for, new power distributions are being created – regarding issues such as autonomy, leadership, and final responsibility.
• Computer-based systems take on a ‘mediator’ role by conveying information about aircraft status to pilots, thus creating opportunities for taking away mundane calculation tasks, and ‘preparing’ information in a useful way. Consequently, pilots also become more remote from the actual aircraft systems, since reality is often interpreted for them.
• Computer-based systems take on a ‘co-worker’ role; operators need to understand the activities and plans of this ‘collaborator’; operators often can (and have to) choose between several alternative automated, or semi-manual, functions.
• Instructions to the (computer-based) system can have a much higher impact, since more powerful tasks are being initiated.
These new types of interaction require different cognitive processes and information representation strategies. Accident and incident reports revealed a number of automation-related problems [7] that we believe are closely related to such role changes – classified as ‘automation may lack reasonable functionality’; ‘database may be erroneous or incomplete’; ‘manual skills may be lost’; ‘protections may be lost though pilots continue to rely on them’; ‘pilots have responsibility but may lack authority’. With increased opportunities for new capabilities, we also raise our expectations of the functionality of computer-based systems, for example in terms of their ‘intelligence’, error recovery capabilities, as well as provision of assessments, warnings, and safety back-up functions.

Coordination problems
New roles require additional activities for managing them, due to three main aspects of change:
1. Different pilot roles. Pilots need to switch between the different roles they can take on, including aircraft operator and observer of conditions (e.g. ‘aviate’ and ‘navigate’); collaborator with co-pilots, ATC etc. (e.g. ‘communicate’); and automation supervisor (e.g. ‘manage’). This requires additional attentional processes.
2. Different system roles. Pilots can often choose from a set of alternative system functions (e.g. use descent mode with ‘rate of descent’ or ‘speed’), often at different levels of automation (e.g. manual control vs. use of autopilot). Such task allocation processes require effort.
3. Role switching between pilot and automation.
a. Whilst overall pilot workload has decreased, variability in task complexity has increased due to the effect known as the ‘ironies of automation’ [3]. Pilots may be forced to take over functions from automated systems when conditions have changed, whilst having to diagnose and deal with unusual conditions. Such role switches need to be supported by effective ‘handover’ mechanisms (see the sketch following the pilot accounts below).
b. Autonomous actions of automated systems (e.g. mode reversions) can be problematic – causing automation surprises due to unsuitable relationships to pilots’ tasks, and a lack of clear annunciation and explanation [5, 11, 12].
Such task management issues are reflected in the automation problem categories from accident and incident reports [7], including: ‘automation may demand attention’; ‘pilots may be overconfident in automation’; ‘pilots may over-rely on automation’; ‘pilots may under-rely on automation’; ‘automation may not work well under unusual conditions’. Whilst the pilots we have talked to did not report mode reversions, they did report that deciding which parts of the automated systems to use, and when, is a prominent aspect of their tasks. For example, situational factors often require the use of less sophisticated equipment (e.g. adverse weather conditions, landings, Air Traffic Control (ATC) giving headings), or pilots may need to decide when the automated systems cannot cope:
P1: “But invariably on every flight the route is changed to some degree by ATC…even to a point where you disconnect it from the FMS and fly in another mode.”

P2: “On our descent they changed the runway three times…for some reason we didn’t change the frequencies [the third time]. We came in…looked from a distance…thought ‘this is wrong’…so we knocked off the autopilot.”

P2: “Sometimes you have situations where you know the plane is supposed to turn at 4 miles and if it doesn’t then, because it’s so much easier to use the automated system to fly this departure, you’ll find the pilot will sit there and go like, ‘OK, I’ll give it a little more time’.”

P2: “[in bad weather]…what you do is you take off the height control… A lot of up- and downdrafts…can confuse the sensors… If it bobs up and down, you don’t have the autopilot fighting it. But that’s not a company procedure.”

P2: “…due to traffic etc. they can’t descend you…you now find that the system is telling you ‘top of descent’ but you have to ignore it…suddenly the controller announces to you that you are cleared to descend… Now…you have to close the throttles and pull out the spoilers, the speed brakes, which gives you the right maximum rate of descent, and in most of the times the autopilot cannot comprehend what is going on…you have to knock off the descent mode and descend it yourself at the very high rates, and when you’re closer to your level you put it back on…for the 727 you definitely have to remove the descent mode, in the Learjet you go to speed.”
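The ‘handover’ mechanisms called for above can be sketched as follows (a hypothetical illustration in Python; the class names, modes and messages are invented and do not describe any real autopilot): before the automation gives control back, it packages what it was doing, why it is disengaging and what it expects the pilot to attend to, and the transfer is only completed once the pilot explicitly acknowledges that summary.

    # Hypothetical sketch of a handover from automation to pilot: the
    # automation bundles its current state and reason for disengaging, and
    # control is only transferred once the pilot acknowledges the summary.
    # All names are invented.

    from dataclasses import dataclass, field

    @dataclass
    class HandoverSummary:
        active_mode: str
        reason: str
        watch_items: list = field(default_factory=list)

    class FlightAutomation:
        def __init__(self):
            self.engaged = True
            self.pending_handover = None

        def request_handover(self, active_mode, reason, watch_items):
            # Step 1: announce the handover and summarise the situation,
            # rather than disengaging silently.
            self.pending_handover = HandoverSummary(active_mode, reason, watch_items)
            return (f"HANDOVER REQUEST: {active_mode} disengaging ({reason}). "
                    f"Watch: {', '.join(watch_items)}")

        def acknowledge_handover(self):
            # Step 2: only an explicit pilot acknowledgement completes the transfer.
            if self.pending_handover is None:
                return "No handover pending."
            self.engaged = False
            summary = self.pending_handover
            self.pending_handover = None
            return f"Pilot has control. Last automated mode: {summary.active_mode}."

    if __name__ == "__main__":
        automation = FlightAutomation()
        print(automation.request_handover(
            "descent mode", "clearance altitude changed by ATC",
            ["rate of descent", "speed limit below FL100"]))
        print(automation.acknowledge_handover())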
Communication problems

Accident reports have revealed a number of communication-related issues [7] including: ‘displays (visual and aural) may be poorly designed’; ‘interface may be poorly designed’; ‘mode selection may be incorrect’; ‘controls of automation may be poorly designed’; ‘insufficient information may be displayed’. Different types and purposes of interaction may require different communication channels, media formats and representation types. The following types of interaction need to be designed for:
1. Active dialogues: concerning what the system enables pilots to communicate to the system and how – including (a) instructing the system through a number of pre-defined steps; (b) accessing information (e.g. use of menu structure, diagnosis) and knowledge (e.g. help system).
2. Monitoring of the computer system: concerning what the system communicates to the pilots about its own state and functioning (co-worker role).
3. Monitoring aircraft (system) activities: concerning what information the computer system provides for maximum support, based on what pilots may need to know, including provision of ‘help’ (mediator role).
4. Mediating between (human) collaborators: e.g. co-pilots, ATC, pilots in other aircraft, people on the ground/at airports.
Moreover, the ability to pre-program the FMS to carry out complex automated tasks requires pilots to convey these plans and instructions to the system. Since more complex instructions can be communicated, the communication becomes more complex, too. Rudisill [10] reports that pilots often express problems with entering
instructions through the keypad into the FMS, particularly when under time pressure. Likewise, pilots have mentioned this issue to us as a particular problem:

P1: “…during the high workload phases, operating the FMS, especially the tasks that you don’t do very often…you might forget to put a slash or a stroke, whatever the format should be that you are typing into the scratchpad…that is very distracting, getting the format correct.”
Choosing suitable data formats, tailoring the abstraction level to the pilots’ current activities, enabling consistent data formats, and avoiding format restrictions all facilitate such communication. Likewise, input procedures and dialogue structures have to follow the logic of the pilot’s task. Again, pilots reported problems with programming instructions into the FMS:

P2: “You have to make sure that the departure clearance is linked up, it would just be flashing and say ‘no link’, so you may have to delete it and then it becomes a link.”
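One way of ‘avoiding format restrictions’ is to let the system normalise reasonable variants of an entry rather than rejecting them outright. The sketch below is an assumption about how this could be approached, not a description of any real FMS scratchpad; the accepted waypoint/distance pattern is invented for illustration.

    # Hypothetical sketch: tolerant parsing of a waypoint/distance entry so
    # that minor format slips (missing slash, extra spaces, lower case) do
    # not force the pilot to re-type the whole entry. The accepted pattern is
    # invented and is not the format of any real FMS.

    import re

    ENTRY_PATTERN = re.compile(r"^\s*([A-Za-z]{2,5})\s*[/\- ]?\s*(\d{1,3})\s*$")

    def parse_waypoint_offset(raw):
        """Return (waypoint, distance_nm) or None if the entry is unrecognisable."""
        match = ENTRY_PATTERN.match(raw)
        if not match:
            return None
        waypoint, distance = match.groups()
        return waypoint.upper(), int(distance)

    if __name__ == "__main__":
        for entry in ["ROZO/25", "rozo 25", "ROZO-25", "R0ZO//25"]:
            parsed = parse_waypoint_offset(entry)
            if parsed:
                print(f"{entry!r} -> {parsed[0]}/{parsed[1]}")
            else:
                # Unparseable entries still need clear, task-related feedback.
                print(f"{entry!r} -> not understood; expected WAYPOINT/DISTANCE")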
Likewise, during the sequence of events leading towards the Cali accident [1], the miscommunication of an instruction given by the pilot to fly towards a (wrongly selected) waypoint from the FMS database was a major contributory event. Efficient dialogue procedures to enter instructions and diagnose problems can be achieved by providing task-tailored feedback; providing meaningful assessment of instructions; requiring pilots to acknowledge having seen essential feedback; conveying meaningful and accessible information; using the same reference systems; and avoiding translation efforts.
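What ‘meaningful assessment of instructions’ might involve can be sketched as follows (purely illustrative: the waypoint data are invented and this is not a reconstruction of the actual FMS logic in the Cali accident): when an entered identifier matches more than one stored waypoint, the system lists the candidates with distinguishing information and holds the instruction until one is explicitly chosen.

    # Illustrative sketch only: when an identifier is ambiguous, present the
    # candidates with distinguishing context instead of silently picking one.
    # The waypoint database, distances and bearings are invented example data.

    EXAMPLE_DATABASE = {
        "R": [
            {"name": "ROMEO", "distance_nm": 132, "bearing_deg": 20},
            {"name": "ROZO", "distance_nm": 8, "bearing_deg": 185},
        ],
    }

    def resolve_waypoint(identifier, database):
        """Return a single waypoint, or raise if the identifier is ambiguous."""
        candidates = database.get(identifier.upper(), [])
        if len(candidates) == 1:
            return candidates[0]
        if not candidates:
            raise LookupError(f"No waypoint matches '{identifier}'.")
        listing = "; ".join(
            f"{c['name']} ({c['distance_nm']} nm, {c['bearing_deg']} deg)"
            for c in candidates)
        raise LookupError(
            f"'{identifier}' is ambiguous; confirm one of: {listing}")

    if __name__ == "__main__":
        try:
            resolve_waypoint("R", EXAMPLE_DATABASE)
        except LookupError as err:
            print(err)   # the instruction is held until the pilot disambiguates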
Problems regarding awareness of collaborators

Again, we need to distinguish between supporting pilots’ awareness of the activities of the computer systems themselves in their role as co-workers, and the role that computer systems play in supporting pilots’ awareness of aircraft, environment, and human collaborators, by conveying information (as mediators). The different functions interact closely.

Awareness of aircraft activities (mediator role)
Conveying information about aircraft (system) status needs to support different types of information search. Information needs to be presented differently depending on whether pilots are likely to (a) actively search for particular pieces of information, (b) scan displays for changes or problems, or (c) respond to unexpected situation alerts (e.g. system alarms; instructions from others). Since the level of expectation of seeing specific parameters develop may vary, some information may need to be displayed in an appropriate format, or at appropriate levels of abstraction. For example, if pilots firmly believe that they could not have mis-communicated an instruction to the system, then they will be less likely to pay much attention to whether the system behaves as expected (see the Cali accident [1]). In such cases, changes of key parameters have to be made exceptionally clear. Providing suitable abstractions, calculations, and representations is essential here. Likewise, the level of ‘expectation’ can be improved for dealing with system alarms by providing trend indicators
before critical alarms occur, or by aiding attention, comprehension, and assessment processes. Subsequently, provision of relevant diagnosis mechanisms, prioritisation aids, or solution suggestions can aid fast rectification. By pre-calculating, assessing, abstracting, predicting, or simply visually grouping related information, computer systems can be designed to act as efficient collaborative communicators, which tailor information to what pilots need to know for particular tasks. The level of abstraction displayed is also related to the types of attentional processes pilots may use in order to achieve different levels of awareness, including: being aware that something is happening; paying attention to the content; interpreting the content as relevant; and drawing conclusions regarding actions. Dynamic systems undergo constant change. One of the pilots we talked to described ‘situational awareness’ as:

P1: “Situational awareness is being aware of all the things that are constantly changing, and the things that can change; have they changed and have they changed for the better hopefully; the only thing you need to be aware of are changes.”
However, change as such is often not sufficiently visible; seeing it often relies on comparing different snapshots to establish how things have developed, or will develop. Observing changes does not only involve understanding the development of the system dynamics as such, but also how this relates to plans and expectations. Pilots constantly evaluate rates of change, make predictions to ‘stay ahead of the aircraft’ and ‘in the loop’, diagnose what has happened in the past, and assess whether what is happening is acceptable or whether it has become problematic. This includes developing an understanding of how the activities of automated systems progress.
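Making change itself visible before it becomes critical can be sketched as a simple trend check (an illustration under assumed parameter names and thresholds, not a real alerting algorithm): from recent samples of a monitored value the system estimates its rate of change and the time remaining until a limit is crossed, and raises an advisory well before the hard alarm threshold is reached.

    # Hypothetical sketch of a trend indicator: estimate the rate of change of
    # a monitored parameter from recent samples and warn when the current
    # trend would cross a limit soon. Parameter names and thresholds are
    # invented for illustration.

    def rate_of_change(samples):
        """Average rate of change per time unit over (time, value) samples."""
        (t0, v0), (t1, v1) = samples[0], samples[-1]
        return (v1 - v0) / (t1 - t0)

    def trend_advisory(samples, limit, advisory_window):
        """Return an advisory string if the limit would be crossed within
        advisory_window time units at the current trend, else None."""
        rate = rate_of_change(samples)
        if rate <= 0:
            return None                       # moving away from the limit
        _, current = samples[-1]
        time_to_limit = (limit - current) / rate
        if 0 < time_to_limit <= advisory_window:
            return (f"TREND: limit {limit} reached in ~{time_to_limit:.0f} s "
                    f"at current rate ({rate:+.1f}/s)")
        return None

    if __name__ == "__main__":
        # e.g. a temperature drifting upwards over the last minute
        history = [(0, 96.0), (30, 101.0), (60, 106.0)]
        print(trend_advisory(history, limit=120.0, advisory_window=120))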
Awareness of automation (as co-worker)

Often, pilots have to infer knowledge about the status and activities of the automated systems from the observations they can make of the aircraft’s activities. Awareness issues identified from accident reports [7] that clearly relate to the co-worker role include: ‘pilots may be out of the loop’; ‘behaviour of automation may not be apparent’; ‘mode awareness may be lacking’; ‘understanding of automation may be inadequate’ – including wider-reaching problems for understanding automated systems such as ‘information in manuals may be inadequate’; ‘training may be inadequate’. Other issues are less clear with regard to this functionality: ‘situation awareness may be reduced’; ‘vertical profile visualization may be difficult’.

Collaboration requires the maintenance of (mutual) awareness (e.g. what others are doing, intending, have done, found out). Whilst the above-mentioned implications of role switches between pilot and automation (e.g. ironies of automation, automation surprises) may sometimes be unavoidable, their effects may be mitigated by enabling better awareness of the computer collaborator’s activities. Human collaborators would consider it very bad practice to perform the equivalent of a mode reversion without announcing it clearly, ensuring that it has been noticed, and coordinating its effects in relation to collaborators’ activities and plans. Likewise, the supervisory
role of pilots needs to be supported by providing suitable indications of automation progress, status, and predictions [13]. Pilots need to constantly double-check what the automation is doing, since responsibility remains with them:

P1: “you tend to double-check, although the longer you are on the airplane, the more confident you become. I certainly do that with the descent and the approach, double-check using my little method, a lot of guys I guess do the same sort of thing.”

P2: “Generally no matter what system you are using you have to monitor it all the time…you know in three minutes you are going to be here and the plane is going to turn right.”
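A sketch of what clearer annunciation of autonomous mode changes could involve (one of many possible designs; all class and mode names here are invented) is to log every mode change together with its reason, flag changes that were not commanded by the pilot, and keep them visible until they are acknowledged.

    # Hypothetical sketch: autonomous mode changes are announced with a
    # reason, flagged as uncommanded, and kept pending until acknowledged.
    # All class and mode names are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class ModeChange:
        old_mode: str
        new_mode: str
        reason: str
        commanded_by_pilot: bool
        acknowledged: bool = False

    class ModeAnnunciator:
        def __init__(self, initial_mode):
            self.mode = initial_mode
            self.pending = []

        def change_mode(self, new_mode, reason, commanded_by_pilot):
            change = ModeChange(self.mode, new_mode, reason, commanded_by_pilot)
            self.mode = new_mode
            if not commanded_by_pilot:
                # Uncommanded (autonomous) changes stay pending until acknowledged.
                self.pending.append(change)
            return f"MODE: {change.old_mode} -> {change.new_mode} ({reason})"

        def unacknowledged(self):
            return [c for c in self.pending if not c.acknowledged]

        def acknowledge_all(self):
            for change in self.pending:
                change.acknowledged = True
            self.pending = []

    if __name__ == "__main__":
        annunciator = ModeAnnunciator("DESCENT PATH")
        print(annunciator.change_mode("DESCENT SPEED", "path could not be maintained",
                                      commanded_by_pilot=False))
        print(f"{len(annunciator.unacknowledged())} mode change awaiting acknowledgement")
        annunciator.acknowledge_all()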
Likewise, we need to ensure that there are no mismatches of ‘understanding’ between the pilots and the automated systems – i.e. we need to avoid the two ‘collaborators’ having different perceptions of the current target or plan to be achieved. For example, during the course of the Cali accident [1], the system ‘believed’ it should fly towards the waypoint ‘Romeo’, since it had been instructed to do so. The pilots, however, thought it was following their intended (but mis-communicated) instruction to fly to ‘Rozo’. Moreover, it is essential to be able to develop a good understanding of the characteristics and functions of other collaborators (e.g. what they can do, how they do it; how they are structured; what they cannot do). Whilst the behaviour of the computer-based systems may make perfect sense in terms of the automation logic, it does not necessarily make sense to the pilots:

P1: “It’s understanding the technology, and if you are not doing a particular function often enough you forget…If you can understand it, you would be able to remember it a lot longer.”

P1: “There is one area on the 777…to do the VNAV approach they want you to open the speed window again. So it’s in VNAV and you open the speed window to manually set the speed, as soon as you put VNAV, the thing blanks, the speed bug jumps, it usually goes back to a lower speed…and then you have to open it again and as soon as you open it, it’s back to whatever speed, and you watch the throttles come back. It’s messy.”

P2: “You have situations where…you get close to the airport and the controller now vectors you…the guy says when you’re established give me a call… [when going] from heading back to NAV…all that happens is that the plane swings round and it is going back to where it was going before…you run the chance of being disorientated…before you do that you have to have moved the plane in the system onwards to the next point so you come into position.”

PERSPECTIVES ON COMPUTER COLLABORATORS
Automation on modern aircraft shows a clear trend towards more autonomous and ‘intelligent’ systems. This, however, can be problematic. Terveen [15] describes a wide variety of work from the AI community (on problem solving and learning systems), in which computers are given an impressive array of autonomous capabilities, including the ability to adapt activities to the evolving situation (e.g. control task suspension), to repair misunderstandings, to provide flexible explanation and clarification, and to engage in mixed-initiative dialogues. We have shown how such changes of roles and responsibilities can have a detrimental effect when their implications for pilots’ activities are not fully understood. We
believe that designing interaction as collaboration does not mean engaging computers in complex collaborative processes such as negotiation (e.g. [6]) or shared planning – although this may be useful for certain applications – but learning from the mechanisms and processes that human collaborators engage in, as driven by the goals and constraints of particular collaborative engagements. Thus, intelligent agents and adaptive systems will not function unless basic processes of collaboration are understood and supported.

The field of AI takes a particular focus on active agents – capturing, and reacting to, current operator intentions and states – rather than providing a static design that finds a reasonably optimal solution to deal with a range of variable task requirements. When one design does not fit all situations, then (automatic) adaptability (e.g. change of display and functionality) seems a suitable solution [2], but there are a number of dangers. Besides the problem of consistency [2], the system’s activities can become obscured, and therefore surprising and incomprehensible to operators. Even if the system is able to convey information as to whether and why it changed its behaviour, the operator may have to deal with this as additional information. Moreover, in complex environments, it is usually impossible for designers to anticipate all situational variations, thus rendering a set of complex behavioural rules useless for a certain set of (possibly critical) operating conditions.

AI research often focuses on developing mechanisms that enable computer agents to hold, infer and share plans with human agents. However, acquiring, and establishing confidence in the correctness of, (adaptable) user models remains a difficult task – whether the information is obtained explicitly (by asking for user input) or implicitly (by inferring it from users’ actions) [15]. Having observed the difficulties that computers face in building an understanding of their users’ plans and beliefs through the limited set of cues they receive (e.g. actions, questions, explanations), Suchman [14] concludes that there is a deep asymmetry between humans and machines. She points out that human interaction does not rely only on the construction of meaning by any one participant, but succeeds through building mutual understanding, including the difficult task of detecting and repairing misunderstandings. With this mutuality in mind, the focus of AI faces a particular difficulty. By giving computers extensive responsibility and autonomy in driving the process of interaction, through developing complex models of goals, task status and users, it is often overlooked that users need a useful model of the system, too. Intelligent adaptive systems run the risk of being too complex, too novel, too incomprehensible and too unpredictable. This is especially so in safety-critical environments, where the final responsibility usually remains with the system operators. Moreover, since the computer’s model of users has to be a very complex one to take account of the wide variety of possible human beliefs and goal structures, there are limits to its reliability. Whilst AI is exploring the extent to which computers can be enabled to carry out human functions, HCI
takes a more pragmatic approach in identifying functions that computers are suited to due to their unique processing, communication and storage capabilities [15]. In the field of HCI, however, the resulting interaction is not usually viewed as collaboration. We believe that enabling and supporting the fundamental processes that are unique to collaborative activities may create a better interaction between computers and users. This relies on an appreciation of the characteristics (e.g. abilities, limitations) of the different collaborators. Computer systems should act as collaborators that are designed to be in tune with the users’ objectives and working practices. For example, relevant task information should be made not just available but also observable [4]. Mismatches of ‘understanding’ between humans and computer agents may be avoided and/or repaired by effectively supporting collaborative processes. Our focus is on enabling human operators to understand the activities and targets of computer systems, rather than progressing too soon towards the ambitious aims of AI.

CONCLUSIONS
The progress of automated, intelligent and adaptive systems by itself does not guarantee better human-computer ‘collaboration’. Rather than enabling systems to ‘think’ and act like people, we envisage systems that collaborate by functioning in a way that can be understood by the people interacting with them. In this paper, we have reviewed a wide array of problems that system automation has caused for interaction with flight systems. We have outlined the relevant design issues from the perspective of designing a collaborative system, by relating problematic issues to design options that support the processes of equivalent human-human collaboration – including coordination, communication, and building awareness of other collaborators. By understanding these processes, and minimising the effort they require, we believe we can design seamless interaction. Therefore, we have to develop a better understanding of the more basic, hidden processes underlying human-human collaboration – those that we often label intuition and expert knowledge.

ACKNOWLEDGMENTS
This work is being funded by the EPSRC (grant number GR/R40739/01) and supported by QinetiQ and Westland Helicopters in the UK.

REFERENCES
1. Accident Report (1996) Aeronautica Civil of the Republic of Colombia: AA965 Cali Accident Report, Near Buga, Colombia, Dec 20, 1995. WWW: http://www.rvs.uni-bielefeld.de/publications/Incidents/DOCS/ComAndRep/Cali/calirep.html.
2. Alty, J. L. (2003) Cognitive Workload and Adaptive Systems, In Handbook of Cognitive Task Design (Ed, Hollnagel, E.), Erlbaum, Mahwah, NJ, pp. 129-146.
3. Bainbridge, L. (1987) Ironies of Automation, In New Technology and Human Error (Eds, Rasmussen, J., Duncan, K. and Leplat, J.), John Wiley & Sons Ltd., Chichester, pp. 271-283.
4. Decker, S. (2001) Automation, Cognition and Collaboration. HF Tech Report No 2001/02, Human Factors Group, Linkoeping Institute of Technology. WWW: http://www.ikp.liu.se/hf/Automation.pdf.
5. Degani, A. and Heymann, M. (2000) Pilot-autopilot interaction: A formal perspective, In Proceedings of HCI-Aero 2000 (Eds, Abbott, K., Speyer, J. J. and Boy, G.), Toulouse, France, pp. 157-168.
6. Dillenbourg, P. and Baker, M. (1996) Negotiation spaces in Human-Computer Collaborative Learning, In Proceedings of COOP '96, Juan-Les-Pins, France, June 12-14, 1996, pp. 187-206.
7. Funk, K. and Lyall, B. Flight Deck Automation Issues. Phase 2 - Compilation of Evidence Related to Issues. Dept. of Industrial and Manufacturing Engineering, Oregon State University, Corvallis, OR. WWW: http://www.flightdeckautomation.com.
8. Harrison, M. D., Fields, R. E. and Wright, P. C. (1997) Supporting Concepts of Operator Control in the Design of Functionally Distributed Systems, In Proc. of ALLFN '97: Revisiting the Allocation of Functions Issue (Eds, Fallon, Hogan, Bannon and McCarthy), IEA Press, pp. 215-225.
9. Hourizi, R. and Johnson, P. (2001) Unmasking Mode Errors: A New Application of Task Knowledge Principles to the Knowledge Gaps in Cockpit Design, In Proceedings of Interact 2001 (Ed, Hirose, M.), IOS Press, pp. 255-262.
10. Rudisill, M. (1995) Line pilots' attitudes about and experience with flight deck automation: results of an international survey and proposed guidelines, In Proc. of the 8th International Symposium on Aviation Psychology (Eds, Jensen, R. S. and Rakovan, L. A.), The Ohio State University Press, Columbus, OH, April 24-27, pp. 288-293.
11. Sarter, N. B. and Woods, D. D. (1995) How in the world did we ever get into that mode? Mode error and awareness in supervisory control, Human Factors, 37, pp. 5-19.
12. Sarter, N. B., Woods, D. D. and Billings, C. E. (1997) Automation Surprises, In Handbook of Human Factors and Ergonomics (Ed, Salvendy, G.), Wiley, New York, pp. 1926-1943.
13. Solodilova, I., Lintern, G. and Johnston, N. (2003) The Modern Commercial Cockpit as a Multi-Dimensional, Information-Action Workspace, In Proceedings of the 12th International Symposium on Aviation Psychology (Ed, Jensen, R.), Wright State University, Dayton, OH, April 14-17, pp. 1096-1101.
14. Suchman, L. A. (1987) Plans and Situated Actions: The Problem of Human-Machine Communication, Cambridge University Press, Cambridge, UK.
15. Terveen, L. G. (1995) An Overview of Human-Computer Collaboration, Knowledge-Based Systems, 8(2-3), pp. 67-81.