

Decision Making: A Cognitive Function Approach

Guy A. Boy
European Institute of Cognitive Sciences and Engineering (EURISCO International)
4 avenue Edouard Belin, 31400 Toulouse, France
[email protected]

Proceedings of the Seventh International NDM Conference (Ed. J.M.C. Schraagen), Amsterdam, The Netherlands, June 2005

ABSTRACT

This paper presents a cognitive function approach to decision-making. Traditional approaches are usually based on single-agent models that typically attempt to capture sequential information processing; they are limited to accounting for cognitive functions such as observation, interpretation, planning and action execution. Multi-agent models enable the capture of both sequential and parallel information processing. From this perspective, cognitive functions are extended to interactive supervision, cooperation, coordination, negotiation and delegation. An agent is typically conceived as a society of other agents. Since a natural decision made by an agent results from the combination of mini-decisions made by other agents, whether those are intrinsic or extrinsic to the agent, an extended theory of cognitive functions is required. This is what this paper is about. Examples are provided from the aeronautics domain.

Keywords

Cognitive functions, decision making, multi-agent models.

INTRODUCTION

This invited paper is about human decision-making, and to be honest, it took me quite a long time to decide what I was going to write to show that cognitive functions are useful in this regard. Hard decision! There were lots of possibilities that were not well formulated. I knew the Naturalistic Decision Making community as an outsider, and I did not know whether the best decision was to present a theory, tell a story, or present cases where cognitive functions were used successfully in industrial setups. I was very excited to present my views at the conference, but how? I thought this was a case in itself that deserved some explanation: a case of naturalistic decision-making involving uncertainty about the target and scope, knowledge, past experience, situation awareness, fear of being misunderstood, risk, hope, trust and confidence. The time has come to act. There is no decision without action. I finally chose to take an experience-based approach to convey a theory.

For the last twenty-five years, I have tried to understand how aircrews make decisions in dynamic and safety-critical environments. My presentation is based on this accumulated experience and the adjacent literature on the topic. I believe that deep knowledge in a domain of expertise is mandatory to talk about a topic such as naturalistic decision-making. Of course, we can always generalize and show that a theory born in a specific scientific corner is applicable to analogous domains. Sitting in cockpits, I observed that a major asset of aircraft pilots is their ability to decide extremely fast and on purpose in dynamic and safety-critical environments. They do not stop making decisions, often small decisions that, if not made, may lead to potential incidents or accidents. They are in a closed-loop process. Some decisions need to be automated and handled by either human skills or external automata, because answers need to be fast and accurate.
The task environment is an aircraft cockpit in critical flight phases, in which decisions regarding whether to take specific actions have to be made based on a set of then-available information items. In the cognitive function approach, decision-making is explored in the framework of interaction among various agents. Agents are humans (or machines) who (that) have cognitive functions. Machine cognitive functions are typically implemented in the form of software agents that currently assist human operators in the control of dynamic life-critical systems.

This paper presents a framework for decision-making based on cognitive function allocation. It proposes models for both situated and distributed decision-making. Various cognitive functions that support decision-making are described. An example of accident analysis is presented to highlight how several decisions performed at different levels of behavior can make the overall human-machine system evolve toward an unrecoverable situation. This work is supported by a methodology based on rationalization of experience using the Group Elicitation Method and the cognitive function paradigm. Lessons learned are provided on the use of this methodology. The balance of the paper discusses related work and research perspectives.

REVISITING SINGLE-AGENT MODELS

Jens Rasmussen provided a very useful model that describes human information processing according to three levels of behavior (Rasmussen, 1986). I propose a re-interpretation of Rasmussen's model in terms of available time for human decision-making.

At the perception-action level, decision-making is immediate, i.e., it goes directly from perception to action without using conscious resources. Typically, when a human being receives a stimulus, he or she replies immediately according to his or her training or genetic skills, developing global integrated responses to a specific stimulus.

At the procedural level, decision-making may require a substantial amount of time, i.e., the human being recognizes a situation pattern that is interpreted according to a rule or a procedure that leads to the execution of actions. This is a conscious cognitive process. It depends on whether the rules are easily accessible in long-term memory or already present in short-term memory, and on whether one has access to the appropriate rule at the right time.

At the constructive level, decision-making may require a lot of time. Constructive decision-making is a highly conscious and complex cognitive activity that can be decomposed into three high-level cognitive functions: situation construction; abduction of hypothetical actions, resources and constraints; and planning, i.e., construction of a sequence of actions that satisfies the constraints with respect to the available resources.

People seem to have adapted these behavioral levels according to the urgency of action. The perception-action level is reinforced when urgent responses are mandatory. The procedural level is incrementally augmented and refined, and used to recall, anticipate and check appropriate actions. The constructive level is the last cognitive resource, used when the other two levels do not provide appropriate solutions. In the next sections, the three main high-level cognitive functions for the three behavioral levels are described in the light of experimental results from the aeronautical domain, coming from a long process of experimental knowledge elicitation and cross-fertilized syntheses (Table 1).

Behavioral level | Situation awareness | Inference | Action taking
Perception-action level | Sensing | Reflex inference based on skills | Acting
Procedural level | Situation recognition as a pattern matching activity | Remembering a rule or procedure: If Situation then Algorithm of actions | Executing an algorithm of actions
Constructive level | Situation construction | Formulating hypotheses that involve possible actions, constraints and resources | Planning, i.e., constructing an appropriate algorithm of actions

Table 1. High-level cognitive function classes vs. behavioral levels

Situation awareness
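The rows of Table 1 can be read as a fallback cascade: an agent first tries a skill-based reflex, then a stored rule, and only then constructive planning. The sketch below illustrates that reading; all situation names, skills and rules are hypothetical examples, not data from the paper.

```python
# Sketch of the three behavioral levels (Table 1) as a fallback cascade.
# All situations, skills, and rules here are hypothetical examples.

def decide(situation, skills, rules, plan_actions):
    # Perception-action level: an immediate, pre-compiled response.
    if situation in skills:
        return skills[situation]          # reflex, no conscious resources

    # Procedural level: recognize a situation pattern, recall a rule
    # ("If Situation then Algorithm of actions").
    if situation in rules:
        return rules[situation]

    # Constructive level: build a new plan (slow, highly conscious).
    return plan_actions(situation)


skills = {"stall_warning": ["push_stick_forward"]}
rules = {"engine_fire": ["throttle_idle", "fuel_cutoff", "fire_handle"]}

def plan_actions(situation):
    # Placeholder for situation construction, abduction and planning.
    return ["assess_" + situation, "formulate_hypotheses", "construct_plan"]

print(decide("stall_warning", skills, rules, plan_actions))      # perception-action level
print(decide("engine_fire", skills, rules, plan_actions))        # procedural level
print(decide("unknown_vibration", skills, rules, plan_actions))  # constructive level
```

The ordering of the lookups mirrors the claim made below Table 1: the constructive level is the last cognitive resource, used only when the other two levels provide no appropriate solution.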

Situation awareness in the control of highly automated safety-critical systems is a major issue in industry today. Interfaces between human operators (users) and the complex dynamic systems they have to control are much deeper than before: these systems include complex software that either amplifies users' inputs or interprets system outputs. The type of issues raised in this article is typical of interfaces for technical experts, as opposed to occasional (public) users. Most aircraft pilots need to be trained for a long period of time before becoming operational. They are technical experts in the sense that they know about thrust, lift, laws of physics, and other aviation-relevant parameters. They have learned how to handle aviation complexity; in particular, their expert situation awareness requires technical knowledge about the system they control. In some situations, however, when these users do not technically understand how to use the system, they use common sense and think by analogy. In all cases, expert users want to understand the situation, e.g., what they should and can do, what they have done effectively, and what will happen as a consequence of their actions.

Situation understanding may take different forms. At the perception-action level, people pre-compile automatisms that can be rationalized off-line, i.e., when they are not directly involved in task performance. At the procedural level, people use situation patterns that were previously rationalized. These situation patterns are not necessarily automated; they require conscious retrieval from long-term memory. In all other cases, people construct appropriate situation patterns that provide a meaning to the actual situation. This constructive process might be difficult and time-consuming.
This is why, whenever possible, people pre-construct situation patterns, either pre-compiled as automatisms that can be called survival situation patterns (they may not be totally rationalized), or rationalized situation patterns that should be retrieved appropriately. The more a system involves survival or rationalized situation patterns, the more usable it is.

Flying an airplane is constant attention work. Pilots must supervise systems and software assistants at all times. While the number of parameters to be controlled is finite and acceptable to pilots, the number of possible situations is quite large. Pilots need to acquire, refine and appropriately use a large number of situation patterns acquired during both training and real flight situations. Pilots build categories of situations and organize them into contextual hierarchies (Boy, 1998a). In general, crews are good at figuring out physical situations and knowing when they are overloaded (Billings, 1997). They may have more difficulty understanding the behavior of a software assistant. In highly automated safety-critical systems, this issue is crucial. For this reason, situational rules are incrementally constructed from the appropriation of provided procedures and their refinement into more sophisticated pairs of situation patterns and more efficient actions. Situation patterns may be seen as problem statements that, if triggered, lead to the execution of appropriate actions. The focus of attention is commonly characterized by an appropriate set of situation patterns in working memory. Any interface artifact that would suggest the activation of this set (abduction) would be extremely useful to pilots. The main problem with the use of currently designed procedures is that their explicit situation patterns are extremely simple and almost context-free. The result is that the analytical part of a procedure is usually either too complex, because procedure designers have analyzed and included all the necessary conditions, or too simple, because context was not available at design time (Boy, 1987).
A deeper account of temporal aspects of situation awareness can be provided according to the three behavioral levels defined in Table 1. A GEM session was conducted to improve understanding of how pilots could improve time management and situation awareness in future cockpits, and to provide recommendations for the design of enhanced interfaces. The Group Elicitation Method (GEM) supports experience elicitation from a group of experts. GEM is a brainwriting technique that is used to generate viewpoints on a specific topic, construct concepts (also referred to as the reformulation phase), reach a consensus on these viewpoints and concepts, and finally refine elicited knowledge incrementally and cooperatively (Boy, 1986). This session involved ten participants, including a test pilot, an airline pilot, a private pilot, three aerospace engineers, and three cognitive scientists knowledgeable in aeronautics matters. The main derived concepts were the following:

• Interruption management, which involves two aspects:
  • context preservation, i.e., access to past events (pilots are constantly interrupted; context should be preserved so that activity can be resumed in the right way) and availability of past events (availability of reminders: "am I cleared to land?");
  • interruption avoidance, e.g., pilots should not have to manually enter frequencies received from air traffic control (a potential source of human errors); the data should be sent directly to the radio system. Note that this aspect is also related to temporal aspects of action taking (see section 3.5).
• Affordances and usability of the displays and controls, which improve reaction time in emergency situations at the perception-action level, because pilots can use already constructed schemes instead of having to consciously pattern-match or construct what they should do. Usability attributes are usually learnability, retention, efficiency, errors and recovery strategies, and subjective pleasure (Nielsen, 1993).
• Agent/organizational-environment cooperation, which in the aviation case is represented by air-ground cooperation. Participants in the GEM session insisted on the need for a participatory design of this air-ground cooperation. In particular, more predictive information about the air traffic should be provided, e.g., more information about other aircraft (relative information such as speed, warnings, heading; absolute information such as type of aircraft; only relevant information). Today, pilots use information included in air traffic control messages to other airplanes heard through the radio. This is called party-line information and provides a partial organizational-environment context. Currently designed on-board Cockpit Displays of Traffic Information are aimed at providing a complete organizational-environment context to pilots. This concept leads to the crucial concept of cognitive periphery.
• Cognitive peripheral awareness, which can be improved by providing appropriate multi-modal extensions of both short-term and long-term memories. For example, voice messages are much better perceived when they are in the listener's mother tongue. Consequently, voice synthesis should be provided in an appropriate language with respect to context.
• Time management as workload management. When people better understand each other, they are able to improve their interactions. Making sense of received messages may be very cognitively demanding. Crew resource management courses were set up to facilitate crew interactions in the cockpit. Most pilots are anxious to know what amount of workload they would have if a device fails, i.e., pilots should not depend on automation. Since they have a limited working-memory capacity, multimodal output information should be reduced, except for alarms. Messages and alarms should be prioritized. In high-workload situations, written information would be useful as a backup. Finally, issues related to airmanship, such as anticipation and adaptation of pilots to new technology, are crucial.
• Explicit (re)presentation of time. For example, after a critical event, such as windshear, it would be useful to present "time to collision" information. There is no need to present time if a graphical display of distance and speed information is available. Pilots usually think in terms of distance, except in emergency situations. Time information should be reliable in all cases to be useful. Presentations are meaningful with respect to time ranges, i.e., short-term versus long-term.

Several concepts presented above are related to the first concept, that is, interruption management. Managing interruptions is a way to manage workload. Integration of cognitive peripheral information is likely to help anticipate and manage interruptions. Experience feedback on, and participatory design of, the agent/organizational environment is also likely to help elicit interruption categories that will improve the level of situation awareness, i.e., instead of discovering and constructing situations, situation patterns can be anticipated either by the machine or incrementally learned by people. It is clear at this point that time management for situation awareness and decision-making in dynamic environments is improved when cognition is appropriately distributed (Hutchins, 1995a,b).

Inference

Inference may take different forms according to the behavioral level. At the perception-action level, skills direct inference. There are hard-coded inference mechanisms that provide immediate responses to appropriate sensed information. Skills are either genetically available or come from training. At the procedural level, a set of rules directs inference. This set may be large, but is always closed, i.e., it corresponds to pre-formed and pre-determined insights. At the constructive level, inference is a creative and constructive process based on an open world of actions, constraints and resources.

In any case, inference may take three distinct forms (Peirce, 1958, 1966): abduction, deduction and induction. Charles Peirce considered that hypotheses are true if they are useful, and they are useful if they make the world "a less surprising place". Hypotheses make predictions about the world. Tom Addis considers that abduction is the most general form of inference and involves three kinds of activities (Addis et al., 1993):

• Retroduction. Open retroduction is the creation of a new hypothesis (necessarily at the constructive level in our framework), and closed retroduction is the selection of a hypothesis from a pre-defined set (necessarily at the procedural or perception-action levels). "Closed retroduction is often referred to as 'abduction' in the literature. It depends upon the notion of reverse implication. Thus, if A → B and we know B, then A is a possible cause. However, this inference depends upon the pre-existence of A → B."
• Abstraction. At the procedural and perception-action levels, "closed abstraction is the process of selecting taxonomic concepts from a pre-defined set." At the perception-action level, abstraction is hard-coded. At the procedural level, abstraction is soft-coded. At the constructive level, "open abstraction is the process of creating or observing new taxonomic concepts".
• Heuristic abduction, which is "the insight that creates and selects the process (the heuristic) on how to solve the problem. The heuristic informs the inference process how to continue with deduction. Whereas heuristic knowledge selects a path of reasoning, heuristic abductive inference proposes how such a decision should be made". Again, this "how-to" knowledge is hard-coded at the perception-action level, soft-coded at the procedural level, and totally open at the constructive level, e.g., experts develop hard-coded heuristics at the constructive level.
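Closed retroduction, read as reverse implication over a pre-defined rule set, is easy to make concrete: given rules of the form A → B and an observed B, the candidate causes are the antecedents of the matching rules. A minimal sketch (the rule base is a hypothetical illustration, not from the paper):

```python
# Closed retroduction (abduction): given rules A -> B and an observed B,
# select the pre-existing hypotheses A that could explain B.
# The rule base below is a hypothetical illustration.

rules = [
    ("engine_failure", "loss_of_thrust"),
    ("fuel_starvation", "loss_of_thrust"),
    ("blocked_pitot_tube", "erratic_airspeed"),
]

def abduce(observation, rules):
    # Reverse implication: every A with A -> observation is a possible cause.
    return [cause for cause, effect in rules if effect == observation]

print(abduce("loss_of_thrust", rules))  # ['engine_failure', 'fuel_starvation']
```

Note that the inference is only as good as the pre-existing rule set: a cause absent from the rules can never be abduced, which is exactly why the paper reserves open retroduction (creating a new hypothesis) for the constructive level.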

Induction is the process of validating a hypothesis. Deduction is the process of inferring conclusions that must follow from the premises. These types of inference are not used as much as abduction in dynamic complex environments where time is a crucial issue. When people have time to solve a problem, a diagnosis for example, they use deduction. When they have even more time, they may use induction by gathering information that enables the validation of a hypothesis.

In the already-mentioned GEM session on time management and situation awareness, aviation experts insisted on the way information is provided to facilitate insight. For example, current cockpits only include a numerical description of vertical information, such as vertical speed; experts would like a 3D presentation of the trajectory that would provide more insight, both short-term and long-term. The type of presentation strongly influences the behavioral level of inference. Pilots often ask themselves questions such as "what will happen if I choose this descent mode?" Decision-making is much easier when clues are proposed to answer this type of question. These clues may take the form of heuristics either learned in advance or dynamically proposed by external agents. Inference is also based on trust and self-confidence. Consequently, people need redundant information to either select or construct hypotheses and apply them. Pilots always make a tradeoff between time gain and safety to select or construct a hypothesis. This is why a pilot needs to crosscheck information with the other agents, whether cockpit artificial agents, the other pilot or air traffic control. In addition, good crosschecking is likely to improve time management. The more cockpits are software-intensive, the more pilots need to crosscheck. More generally, cooperation among agents requires coordination activity, and crosschecking in particular.


Action taking

Action taking also depends on the behavioral levels, which can be more or less automated. At the perception-action level, actions are hard-coded to ensure survival. At the procedural level, algorithms of actions are soft-coded and available for conscious execution. At the constructive level, actions belong to a created or selected set and are combined to satisfy appropriate constraints and be allocated to appropriate resources.

In high-risk situations and under high time pressure, people learn to take action at the perception-action level. Checklists and procedures can support them to make sure that they do not forget to do things that will keep them within a safety domain. An external memory that includes such checklists and procedures poses the problem of action timing. In modern cockpits, advanced electronic checklists tend to support this level of action. Prompts such as "you forgot to do this" are crucial in many safety-critical situations. In regular, well-known situations, people tend to follow procedures. However, they may disregard these procedures if they do not understand their rationale (de Brito, 1998).

Action taking involves various issues that emerged from a study that we conducted at EURISCO with airline pilots. It was observed that pilots often attempt to make sense of a safety-critical situation instead of routinely using checklists or dolists at the procedural level. They constantly make cognitive trade-offs to optimize human factors such as workload and performance, according to constraints such as safety and passenger comfort. We have tried to support this claim by using four different methods in real-world aviation environments: a task analysis; questionnaires; observation protocols; and three GEM sessions involving thirty pilots. In the following, percentages are provided to corroborate the qualitative results of the GEM sessions.
These percentages come from answers to a questionnaire completed by airline pilots and reinforced by observation protocols. Ten airlines from North America and Europe were consulted; 600 questionnaires of 35 questions were sent, 207 were returned fully completed, and 8045 answers and 4554 additional free-text comments were processed. An exhaustive statistical analysis was performed to elicit emerging human factors (Karsenty, Bigot & de Brito, 1995). In addition, observation protocols were conducted in full-flight simulators at Airbus Training in Toulouse: 35 crews (70 pilots) were observed for 140 hours of simulation. This investigation captured the real activities of airline pilots involved in procedure following (de Brito, 1998).

The use of an external memory of checklists leads to the following issues in action taking (de Brito, Pinet & Boy, 1998). In principle, checklists should be executed to verify that the right actions have been executed. In many cases, a pilot may use a checklist as an action aid (dolist). Pilots plan the execution of actions using checklists. Checklists usually serve to prepare the next phase of flight. In high-workload situations, pilots select and group the most important checklist items. Almost half of the pilots who participated in the various experiments use their memory instead of reading checklist items when time pressure is high. Pilots repeatedly said that procedures should be better adapted to flight situations. The concept of operational interference implicitly assumes that pilots are supposed to follow procedures literally. When pilots are interrupted by an external event, they may not resume the execution of a procedure or checklist. Interruptions mainly come from air traffic control (78%), failure alarms (12%) or other reasons (10%). Omissions are mainly due to interruptions (68% after an air traffic control interruption).
They may also be due to distraction, the impression of an already-performed action, negligence, lack of experience, or a different order of prescribed action items. Finally, crew resource management (CRM) usually supports crew coordination and role allocation. Even if CRM is an accepted practice, it should be supported by appropriate procedures (Kanki & Palmer, 1993).

The use of an external memory of dolists leads to the following issues in action taking (de Brito, Pinet & Boy, 1998). After the occurrence of an abnormal situation, 92% of the consulted pilots recognized that they focus on immediate safety (48% of pilots analyze the situation before doing anything else); 83% of pilots recognized following required task sharing; 75% of pilots try to diagnose the situation without using available ready-to-use procedures; 52% of pilots had problems selecting the appropriate paper dolist (this may be due to stress, table-of-contents quality, weakness of the indications enabling access, and information dispersion); and 35% of pilots look for the dolist to solve the problem in the case of an abnormal situation. Dolist items are selected according to their usefulness, relevance, or ease of execution. Items may be grouped to optimize available time. Item grouping is a matter of expertise, crew coordination, and physical proximity of actions. Some actions may be anticipated depending on the way pilots understand how systems work or should be used. In addition, when they predict time pressure in the near future, pilots tend to anticipate some actions. In order to control the situation at all times, pilots require a global view of the situation. They have a clear understanding of the difficulties, such as the intrinsic complexity of some procedures, lack of information, availability and relevance of information displays, information organization, and clarity of terms and objects. Some pilots suggested that the rationale and consequences of the prescribed task should be easily accessible.
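Since interruptions are the main cause of omissions, one design response mentioned in this section is the electronic checklist: by tracking which items have been completed, execution can resume at the first open item instead of silently skipping it. A minimal sketch of that idea, with hypothetical item names:

```python
# Sketch: an electronic checklist that survives interruptions by tracking
# completed items, so the crew resumes at the first open item instead of
# omitting it. Item names are hypothetical.

class Checklist:
    def __init__(self, items):
        self.items = items
        self.done = set()

    def complete(self, item):
        self.done.add(item)

    def next_item(self):
        # First item not yet completed, regardless of how many
        # interruptions occurred since the last completed item.
        for item in self.items:
            if item not in self.done:
                return item
        return None  # checklist complete

before_takeoff = Checklist(["flaps_set", "trim_set", "brakes_released"])
before_takeoff.complete("flaps_set")
# ... interruption from air traffic control here ...
print(before_takeoff.next_item())  # 'trim_set' -- nothing is silently skipped
```

This is only the context-preservation half of interruption management; the other half, interruption avoidance, is an organizational and interface-design issue rather than a data-structure one.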
It is interesting to mention that 43% of pilots annotate their paper procedures. Pilots need to know the consequences of the execution of a dolist at the right time (status information). Crew action coordination is also an issue that was greatly discussed during the GEM sessions. Pilots saw advantages in crew coordination, such as using the best competence, enhancing intersubjectivity, optimizing workload distribution, and handling redundancy. They also saw drawbacks, such as postponing the execution of a dolist, interrupting the execution of a dolist temporarily, and introducing additional workload resulting from mutual control. Finally, 61% of pilots would like more electronic procedures because they find that these can be a better context-sensitive information support if the screen is large enough to enable the presentation of the necessary dolist information. The issue of external consistency between paper and electronic procedures was a concern.

These results show that checklists and dolists cannot be considered as action prescriptions (i.e., prescribed tasks from designers and airline management), but as action suggestions for safety assurance (i.e., work aids for the pilots). This article takes the perspective that users adapt to situations whether or not they have prescribed procedures to follow. This adaptation is implemented in such a way that users satisfy cognitive trade-offs, which may include what the procedures prescribe, to keep control of the situation.

WHAT DOES IT MEAN TO DECIDE?

The interrupted take-off of an aircraft is taken as a running example that provides a rich decision-making domain. Before take-off, pilots determine three decision speeds that are computed using the actual aircraft weight, the atmospheric pressure and runway parameters:

• V1, the speed after which the aircraft cannot be stopped safely on the runway; the braking distance is the remaining length of the runway after this speed is reached;
• VR, the rotation speed, which indicates to the pilot when he or she has to pull the stick to actually take off;
• V2, the decision speed in case of engine failure.
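The role of V1 as a go/no-go threshold can be stated as a one-line rule: a problem detected before V1 calls for a rejected take-off, while at or beyond V1 the take-off is continued. A sketch of that rule (the V1 value is illustrative, not from the paper):

```python
# Go/no-go rule around the V1 decision speed. The value is illustrative;
# in practice V1 is computed from aircraft weight, atmospheric pressure
# and runway parameters, as described above.

V1 = 140.0  # knots: last speed at which the aircraft can be stopped safely

def on_problem_detected(current_speed_kt, v1=V1):
    # Before V1 the remaining runway still allows braking to a stop;
    # at or beyond V1 the decision is to continue and take off.
    if current_speed_kt < v1:
        return "reject_takeoff"
    return "continue_takeoff"

print(on_problem_detected(100.0))  # 'reject_takeoff'
print(on_problem_detected(150.0))  # 'continue_takeoff'
```

The simplicity of the rule is deceptive: as the rest of this section shows, applying it in the 20 to 30 seconds between brake release and V1 requires a dense allocation of cognitive functions across several agents.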

When pilots detect a problem on-board before V1, they need to stop the aircraft. Alternatively, they may decide to continue and take off. This flight phase is crucial as far as decision-making is concerned. The sub-phase between brake release and V1 is short (about 20 to 30 seconds) and cognitively dense. The acceleration is about 4 to 6 knots per second. An intentional cognitive function is involved in controlling this acceleration. Pilots are trained to use a verification speed of 100 knots, which defines a rolling time on the runway roughly equivalent to 70% of the time it takes to reach V1. When pilots decide to stop the aircraft at 100 knots, they perform the job without major difficulty.

In aviation, there are two important distinctions to consider. A responsibility-based distinction is made between Captain and First Officer (the Captain's assistant). An operational distinction is usually made between Pilot Flying (PF) and Pilot Not-Flying (PNF). Whether the Captain is PF or PNF, he or she is always in charge of the flight. Let us consider the cognitive functions that are allocated to the various agents during the take-off phase. PF cognitive functions are:

• set up engine power;
• access primary controls, control brakes and front wheel;
• track the aircraft on the runway;
• track take-off rotation at VR speed and the subsequent flight;
• check consistency of the reading calls of the other pilot;
• announce the speed read on the indicator.

PNF cognitive functions are:

• monitor parameters, in particular engine parameters and speed;
• monitor abnormalities and alarms;
• announce critical speeds, i.e., 100 knots, V1 and VR, as well as abnormalities and alarms;
• announce any observed configuration change on flight controls and automated devices;
• announce traffic radio messages.

Captain cognitive functions are:

• after power set-up, keep his or her hand on the throttle levers, ready to reduce power if the decision is to stop the aircraft on the runway;
• after V1, take his or her hand off the throttle levers and avoid touching them before a minimum altitude, generally 120 meters.
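The allocation of take-off cognitive functions described above can be restated as a simple data structure. The dictionary below is a hypothetical restatement of the lists for illustration, not an operational specification.

```python
# Hypothetical restatement of the take-off cognitive function
# allocation described in the text (PF, PNF, Captain).
TAKEOFF_COGNITIVE_FUNCTIONS = {
    "PF": [
        "set up engine power",
        "access primary controls, control brakes and front wheel",
        "track aircraft on the runway",
        "track take-off rotation at VR and subsequent flight",
        "check consistency of the other pilot's reading calls",
        "announce the speed read on the indicator",
    ],
    "PNF": [
        "monitor parameters (engines, speed)",
        "monitor abnormalities and alarms",
        "announce critical speeds (100 knots, V1, VR), abnormalities and alarms",
        "announce configuration changes on flight controls and automated devices",
        "announce traffic radio messages",
    ],
    "Captain": [
        "after power set-up, keep hand on throttle levers, ready to reject",
        "after V1, keep hands off throttle levers until a minimum altitude",
    ],
}

def functions_of(agent: str) -> list:
    """Return the cognitive functions allocated to an agent."""
    return TAKEOFF_COGNITIVE_FUNCTIONS.get(agent, [])
```

Such a table makes the role distinction explicit: the same crew member is bound to different cognitive functions depending on whether he or she acts as PF, PNF or Captain.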

The aircraft has its own cognitive functions that pilots should know. For example, alarms for failures that have no influence on take-off are inhibited from 80 to 100 knots. Just before 100 knots, a non-inhibited alarm is likely to lead to the decision to stop the aircraft if the failure is serious. After 100 knots, only major failures are indicated to the pilots; e.g., the aircraft reports a fire or an engine failure. When the pilots set up engine power before take-off, an alarm may inform them that they forgot to set the correct take-off configuration or to release the brakes.

Intentional and reactive decision-making cognitive functions

An event usually triggers decision-making. People need to be aware of such an event to start a decision process. An event may come from a person's mind (an internal event) or from the organizational environment (an external event). In both cases, it leads to the activation of a cognitive function. Internal events lead to the activation of intentional cognitive functions. External events lead to the activation of reactive cognitive functions. These cognitive functions may be simple or complex according to the complexity of the situation and the level of training and knowledge of the people involved in the decision-making process.

In a flying task, there are four high-level cognitive functions that may be categorized as follows. One cognitive function is reactive, i.e., vigilance about deviations from the prescribed trajectory and the associated risks. One cognitive function is intentional, i.e., the strategic definition of the trajectory. Two cognitive functions are both reactive and intentional, i.e., short-term trajectory guidance and space-time positioning. Let us take the "take-off configuration set-up" cognitive function. It is intentional for the pilots when they plan to do it and consequently do it. It is a reactive cognitive function when the aircraft alarm system informs the pilots that they forgot to do what they should have done. Installing such alarms on aircraft tends to increase the redundancy and equilibrium of the overall human-machine system.

People need redundancy to survive and evolve safely in their environment. They are naturally equipped with sensors that provide them with redundant information. For example, peripheral vision involves reactive cognitive functions that enable them to react to danger. Other cognitive functions enable them to plan appropriate responses. Reactive cognitive functions usually work in the "background" and in parallel. Intentional cognitive functions usually work in sequence or, in some cases, in time-sharing. Alerting pilots that the take-off configuration set-up has not been done is an external cognitive function working in parallel that eventually triggers the pilots' appropriate cognitive functions, which will solve a problem they did not intentionally expect at the time the alarm "rang" (since they had forgotten it in this case).

A model of situated decision-making

Flying is a closed-loop activity. Decision-making is thus included in a continuous series of control activities, all of which are crucial to the safety of the mission. As already described, pilots should be aware of the external situation in order to make appropriate decisions. But "although the human is an exquisite processor of information by almost any measure, all of these means of acquiring information are subject to error... Most systematic treatment of the process of making optimal decisions emphasize the importance of the proper seeking and processing of information prior to the actual decision-making step." (Nagel, 1988). Since the way people perceive the world around them strongly influences their decision-making process, one solution is to compile domain knowledge, through extensive training, into situation patterns that can be reused at the perception-action level. In aviation, for example, emergency situations require highly compiled situational knowledge in the form of situation patterns. These situation patterns enable pilots to react appropriately. Another solution is to document situated procedures that are likely to be followed at the procedural level. Finally, learning deep knowledge and being able to use it appropriately is always a good solution; this usually requires high standards of education or very long experience. Real life involves a mix of these solutions. Studying decision-making in dynamic and safety-critical environments has led to the development of models that take into account both information processing, in the usual cognitivist sense, and reactivity, in the usual control-theory and feedback sense. Many authors (Anderson, 1990; Wickens & Flach, 1988) have proposed models of human information processing that involve three kinds of memory (sensory memory, working memory and long-term memory) and a process of attention.
This article revisits this kind of model by integrating the behavioral levels (perception-action, procedural and constructive). Figure 1 presents a model of situated decision-making that integrates situation awareness, inference and action taking (high-level cognitive functions) at the various behavioral levels (orthogonal dimension). Each high-level cognitive function uses both attention and memory resources. Connections between high-level cognitive functions and resources are not explicitly described in the figure, but there is no reason why some connections should be privileged over others. In addition, it is important to notice that there are many feedback loops on both attention and memory resources. In particular, situation awareness is fed by effects from the other high-level cognitive functions. These micro-regulations are very difficult to take into account when a detailed cognitive function analysis is carried out, but they should always be present in the mind of the analyst. People use high-level cognitive functions in the context of what they are doing. For example, current actions guide the emergence of appropriate (candidate) situation patterns. The cognitive load is lighter when these situation patterns are appropriately organized, for example in a hierarchy of context patterns. The focus of attention is defined by the set of situation patterns stored in working memory at a given instant. Artificial agents that suggest the right situation patterns at the right time can be extremely useful. In addition, artificial cognitive functions that monitor users' actions and detect users' errors would greatly enhance human-machine interaction.
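The idea that the focus of attention is the set of situation patterns held in working memory can be sketched as follows. The class names and the matching rule (a pattern is a candidate when all of its conditions are observed) are illustrative assumptions, not part of the original model.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SituationPattern:
    """A named pattern with the conditions that must hold for it."""
    name: str
    conditions: frozenset  # condition labels that must all be observed

@dataclass
class WorkingMemory:
    """The patterns currently held define the focus of attention."""
    patterns: list = field(default_factory=list)

    def matching(self, observed: set) -> list:
        """Names of patterns fully satisfied by the observed state,
        i.e., candidates for triggering an appropriate reaction."""
        return [p.name for p in self.patterns if p.conditions <= observed]
```

An artificial agent of the kind evoked above would then amount to loading the right patterns into working memory at the right time, so that the matching step has good candidates available.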

Proceedings of the Seventh International NDM Conference (Ed. J.M.C Schraagen), Amsterdam, The Netherlands, June 2005


[Figure 1 depicts the environment feeding three high-level cognitive functions (situation awareness, inference and action taking), which draw on attention resources and memory resources (sensory, long-term and working memories) across the behavioral levels.]

Figure 1. A model of situated decision-making

Situation awareness: redundancy and consistency checking

When someone has to make a decision, he or she needs to base the decision process on reliable, understandable and believable information. Sometimes, information on the world is not so reliable, understandable and believable. In January 1982, an Air Florida Boeing 737 crashed after take-off onto a bridge over the Potomac River in Washington D.C., USA. The pilots made a series of seven unbelievable decisions. Each of these decisions may have been sufficient to cause the accident. The major contextual conditions were bad weather (sticky snow, sub-zero temperature, and visibility that was poor but acceptable for take-off). The aircraft windshield had been de-iced using a water-glycol mix. The dilution of this mix was not known, even though regulations require that the dilution be set as a function of the external temperature. Take-off occurred 25 minutes after the de-icing process, even though the flight manual specifies that take-off should take place within 15 minutes of de-icing. The pilots ignored the regulatory rules for the dilution, or at least did not verify the actual dilution, and ignored the required take-off time after de-icing. Thus they made a decision based on incomplete situation awareness, and possibly a lack of knowledge. An external cognitive function that informed the pilots about the time threshold after de-icing would have warned them, might have suggested different options, and probably would have led to a different decision. A hypothesis could also be that the pilots (implicitly) trusted the people who determined the water-glycol dilution; the resulting cognitive function is a delegation of responsibility to someone typically unknown. While taxiing, the Captain said to the First Officer that they should follow the aircraft in front of them closely in order to de-ice their own aircraft with its exhaust! The flight manual specifies that in icing conditions, airplanes should not be close to each other while rolling on the ground. This is probably the most serious erroneous decision that led to the accident.
It was likely based on a heuristic either observed by the Captain in the past or coming from discussion within the community as a good trick for de-icing. This kind of heuristic was certainly not validated, nor was it consistent with the current regulatory rules. In a very open and complex environment such as aviation, pilots are trained to follow rules and procedures that provide a safety framework, i.e., to work at the procedural level. When they spontaneously create their own heuristics, i.e., work at the constructive level, they are likely to exit this framework. This does not mean that the construction of alternatives is a wrong behavior for pilots. In normal situations, they should follow procedures. When they are under time pressure with no appropriate pre-learned or procedural responses, they may create their own rules and adapt to the situation. After the take-off clearance provided by air traffic control, when the aircraft was lined up for take-off, the Captain adjusted the engine pressure ratio (EPR), i.e., the ratio between the pressures downstream and upstream of the engine, to 2.04, a typically correct value for take-off. Pilots know that this value depends on aircraft weight, external temperature and runway altitude. The Captain was not aware that the throttle lever was inappropriately set. It should normally have been pushed to its forward limit; it was at half travel. The pilots did not notice that other engine indicators were inconsistent. This analysis clearly points out that consistency checking is a critical cognitive function that was not used by the pilots. During the take-off roll, acceleration was abnormally slow (obviously, since the throttle lever was at half travel). The First Officer said twice that engine indications were inconsistent. This consistency-related shallow observation was not followed by a deeper analytical check. In contrast, the Captain replied that the cause was only freezing, and he continued rolling.
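The consistency-checking cognitive function that failed in this accident can be sketched as a cross-check of redundant indicators against expected take-off values. The tolerance, parameter names and expected values below are invented for illustration only.

```python
def consistent(readings: dict, expected: dict, tolerance: float = 0.05) -> bool:
    """Cross-check indicators against expected take-off values.

    Any relative deviation beyond the tolerance flags an
    inconsistency that calls for a deeper analytical check
    rather than a shallow observation.
    """
    for name, value in readings.items():
        reference = expected.get(name)
        if reference and abs(value - reference) / reference > tolerance:
            return False
    return True
```

With the figures from the case above, an indicated EPR of 1.74 against an expected 2.04 deviates by about 15%, well beyond any reasonable tolerance, so the check fails and a rejected take-off becomes the candidate decision.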


VR was reached at the end of the runway and, despite this very clear physical indication, the pilot pushed on the throttle levers. He could not maintain VR and could not reach V2. He then saw the bridge over the Potomac. He pulled on the control column. The aircraft stalled. He pushed on the throttle levers again, but it was too late. The landing gear hit a car on the bridge, and the aircraft crashed into the Potomac River. This last series of decisions was performed purely at the perception-action level. The main cause of the accident was the inappropriate value of the EPR. This ratio between the pressures downstream and upstream of the engine is measured by two gauges. The front gauge was not working because it was completely blocked by a layer of ice. This was caused by the pilot's decision to use the engine blast of the preceding aircraft to de-ice his own aircraft. The blast did not entirely remove the water on the front engine gauge, so the residual water froze and permanently blocked the gauge. Consequently, the EPR no longer reflected a pressure ratio but only the engine output pressure. It was calculated that the real EPR was 1.74 instead of 2.04. This explains the inconsistency among the various indicators. When a system is intrinsically complex, two usability strategies are possible: hide complexity by providing a user-friendly interface, or present schematic views of the complexity that users can rapidly understand. While occasional users like the former type of interface, the latter type is usually for technically knowledgeable experts. The addition of software assistants to the interface tends to hide complexity and remove expertise requirements, even if it often introduces new kinds of complexity and expertise requirements. This human-centered automation problem raises the issue of choosing between creating a new job (usually by delegating old jobs to software assistants), training users to cope with complexity, or redesigning using existing experience feedback.
Since current solutions promote the development of software assistants, these assistants need to be accountable at all times, and to provide summary information about what they have done, are doing and will do next. This does not remove the need to train users to cope with the complexity that results from introducing software assistants.

Decision-making is contextually adapted at the appropriate behavioral level

Flying consists in handling a trajectory by constantly comparing the perceived trajectory parameters provided in the cockpit with either a virtual trajectory stored in the pilots' minds (cognitive function 1), on maps (cognitive function 2), or the outside scenery when it can be perceived through the windows (cognitive function 3). Pilots constantly make decisions by adapting their behavior. When they can see the outside scenery, they use their basic flying skills, which involve the perception-action level. When they cannot perceive the outside scenery well, they need to construct a cognitive representation of the actual situation (cognitive function 4). This representation depends on the kind of automated devices they are using. When they fly manually, the representation concerns short-term trajectory issues; this is mainly handled at the perception-action level (cognitive function 5). When they use the autopilot, the trajectory is a series of segments that pilots should manage (cognitive function 6). Anticipation is a key issue when they need to decide when the current segment will be finished (cognitive function 7) and when the next segment should be started (cognitive function 8); this is mainly handled at the procedural level. When they use the flight management system, trajectory management is entirely delegated to the system (cognitive function 9), and the main decisions that pilots have to make take place during trajectory programming (cognitive function 10), i.e., when they provide the flight plan to the system. This is mainly handled at the constructive level. People adapt to the various behavioral levels according to the distribution of cognitive functions among them and the organizational environment. Flying is not the only function that pilots have to handle.
They also need to handle the airplane configuration, verify that systems work well (e.g., engines, air conditioning, etc.), execute checklists, manage communication with air traffic control, handle meteorological conditions, avoid obstacles, detect any abnormal condition, and handle communication with the rest of the crew on board. Decision-making in such a dynamic environment requires adaptation to handle the priorities allocated to the various functions and tasks. In addition, task execution is often disturbed by interruptions that require even more just-in-time adaptation.

Operational context setting, sharing and incremental construction

A cockpit involves group activities. When one has to make a decision within a group of agents, he or she should take into account the operational context of the group. Operational context is represented by a set of conditions that are persistent either in space, in time or in hypothetical worlds. For example, a flight is usually divided into phases that are defined by the persistence of specific contextual conditions. In the take-off phase, several persistent conditions are set up prior to the phase. For example, the cockpit and systems must have been verified, and pilots assume that this is so during the take-off phase. The engines must also have been powered up. In addition, the contextual conditions of the take-off phase include the following: the number of passengers is given; the fuel tanks are filled; the initial aircraft mass is known; the runway status is known; the weather conditions are known; V1 and VR are computed and set, usually by adjusting speed bugs on the appropriate indicator. Contextual conditions may be incompletely known to some agents. It is critical that the most important contextual conditions are shared among the agents, i.e., pilots, aircraft and, in some cases, air traffic control.
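The persistent contextual conditions of the take-off phase listed above can be sketched as a simple checklist structure. The condition names merely restate the description, and the checking function itself is an illustrative assumption.

```python
# Illustrative list of persistent contextual conditions of the
# take-off phase, restated from the text.
TAKEOFF_CONTEXT = [
    "cockpit and systems verified",
    "engines powered up",
    "number of passengers known",
    "fuel tanks filled",
    "initial aircraft mass known",
    "runway status known",
    "weather conditions known",
    "V1 and VR computed and set",
]

def missing_conditions(established: set) -> list:
    """Conditions not yet established; while this list is
    non-empty, the contextual conditions of the phase are not
    fully shared and the phase should not be entered."""
    return [c for c in TAKEOFF_CONTEXT if c not in established]
```

Sharing a context among agents then amounts to each agent holding the same set of established conditions, which is precisely what the briefings discussed below aim to ensure.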


Pilots hold briefings to make sure that contextual conditions are properly shared. For example, before take-off, pilots brief each other to concentrate on the take-off phase, to review individual tasks and action responsibilities, and to recall what to do in the case of an accident. These briefings are extremely important because they activate a series of situation patterns that will be useful during the next contextual segment. In this case, a situation pattern might be defined by an altitude of 15,000 feet, an airspeed of 200 knots, a heading of 23 degrees and a vertical speed of 800 feet per minute. In addition, a situation pattern may be taken in the context of a strong lateral wind, snow conditions and a faulty air-conditioning system in the passenger cabin. The notion of situation pattern (Boy, 1987, 1998b) is essential in aviation because it underlies the concept of reactive cognitive functions. There are reactive cognitive functions that are naturally activated at any time, such as the one connected to peripheral vision, for instance. However, there are also reactive cognitive functions that are not necessarily activated at all times. People need to make sure that these are activated in order to make the appropriate decisions at the right time. A situation pattern may be seen as a problem statement that is connected to a pre-coded solution. A reactive cognitive function is typically in charge of both detecting the situation and implementing the solution. In aviation, safety requirements have led to the compilation of a large set of such situation patterns, which are usually taught in training schools and have recently been taken into account during design. Experience acquisition can be modeled as a mechanism that transforms typical episodes into situation patterns with attached reactions, or simply actions. Situations are incrementally generalized into contextual conditions.
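A reactive cognitive function, as described above, pairs a situation pattern (the problem statement) with a pre-coded solution, and is in charge of both detecting the situation and implementing the solution. A minimal sketch, with invented class and method names:

```python
class ReactiveCognitiveFunction:
    """A (detect, act) pair: watches for its situation pattern
    and returns the pre-coded solution when the pattern is
    recognized in the current situation."""

    def __init__(self, pattern: frozenset, solution: str):
        self.pattern = pattern      # problem statement (conditions)
        self.solution = solution    # pre-coded response

    def fire(self, situation: set):
        """Return the pre-coded solution if all pattern
        conditions are present in the situation, else None."""
        if self.pattern <= situation:
            return self.solution
        return None
```

A briefing, in these terms, pre-activates a set of such functions so that detection and response are immediate during the next contextual segment.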
The more experience is acquired, articulated and accumulated, the more typical situations are elicited and operational contexts constructed. Group experience acquisition, articulation and accumulation are commonly called experience feedback. Experience feedback is an incremental process that leads to the rationalization of a domain of expertise by separating the domain into the relevant operational contexts of the various situations, decisions and actions. Since several agents perform this process, it must necessarily be coordinated. In current practice, a few key people are responsible for this coordination. Pilots' decisions are safety-critical, and we know that when their world is better rationalized, i.e., well segmented into appropriate operational contexts, they are able to react and plan in better ways. Action should therefore be taken to orchestrate incremental context construction at the top level of the overall organization. This leads to the distinction between individual and global categorization of the operational contexts of decision-making in safety-critical environments. The former is based on the expertise of an individual, the latter on multi-expertise.

Embedding cognition into external resources

Even if the Captain is the supreme decision-maker on board, there are decisions that are delegated to other agents, including the First Officer and the aircraft itself, since information technology enables the aircraft to make many decisions that pilots used to make. This delegation to information technology poses the problem of situation awareness in different terms than in the past. In particular, trust is a regulating factor. On one side, over-trust may lead to excessive confidence in systems and to hypo-vigilance. On the other side, lack of trust may lead to outright avoidance of the systems. In both cases, situation awareness cognitive functions may not be used rationally. Using safety-critical systems becomes an extended social activity where each agent tries to overcome surprises generated by other agents' decisions. Nobody questions the use of the clock today: the role of the clock cognitive function is to provide the time to its user. Its context of validity is determined by several parameters, such as the working autonomy of the internal mechanism or the lifetime of the battery. Its resources include, for instance, a battery and the ability of its user to adjust the time when necessary or to change the battery. Note that the user is also a resource for the clock's cognitive function. Humans have designed tools either to extend their capabilities or to create new capabilities such as flying. If computers are extensions of human brains, the World Wide Web creates new capabilities that were not possible before, such as consulting almost any kind of document located anywhere in the world after a few mouse clicks. It is now possible to involve a group of people distributed all over the world in a meeting. Such a virtual meeting induces new types of interaction that themselves influence the design of supporting technology.
This mutual influence between the emergence of new types of interaction and the evolution of technology is a key factor that drives the evolution of our societies. Tools are either physical, e.g., hammers, forks and bicycles, or conceptual, e.g., methods, theories and know-how. Tools are concrete models of intelligence, but they often require experienced people to be used efficiently. The user-friendliness concept introduced by human factors specialists needs to be used carefully. We should not believe that user-friendly tools guarantee task efficiency and performance. No matter how sophisticated a tool is, the user often needs to be efficient to obtain satisfactory results. I was amazed to see how a 21-year-old carpenter could design and produce a staircase that fit perfectly in my house in only a few hours. The physical tools he used were very basic. I deduced that his conceptual tools were extremely sophisticated, and that he must have learned how to use the appropriate tool for the right job at the right time. His conceptual world matched the real world of staircases! He must have had very precise knowledge of the dimensions of and relations among staircase elements. This knowledge provided him with very clear situation patterns and appropriate action solutions. Decision-making is then much easier. Today, external cognitive functions have become more complex than the clock. People delegate to these external cognitive functions some actions that they used to perform themselves. They have to plan, monitor, negotiate, supervise, communicate, cooperate and coordinate with composite artifacts; e.g., a travel agent needs to work with a composite world-wide travel system (i.e., a composite artifact that embeds many external cognitive functions). This system is a network of a large number of computer systems. The travel agent learns about information traffic jams in the network, local software crashes, and tricks for booking a trip with a cheaper carrier, for example. These are cognitive functions that are relevant to his or her job. They are valid in time-specific contexts such as "during a holiday period" or "the Paris Orly airport is always very busy on Monday mornings." Travel systems have taken this into account for a long time with pricing: cheap flights are usually available during the day, not in the morning or the evening. Today, such systems learn very fast, i.e., both human and external cognitive functions also need to adapt very fast. The travel agent needs to assimilate and accommodate more cognitive functions to handle the increasing number of options.

Distributed human-machine decision-making

As we have already seen in the situated decision-making model presented above, attention and memory resources are key entities that constrain human decision-making. The resulting constraints are crucial in safety-critical environments. This is why specific instruments have been developed to augment the capacity of both attention and memory resources. We do not make decisions in isolation from other people: we ask for advice, we test hypotheses with others, and we listen to other people's experience, for example. The main role of decision-making is acting. This is why the notion of agent is crucial in decision-making. An agent is a person or an artifact who or that acts and communicates with the other agents of a society. An agent produces actions that produce effects. Agents are taken in the sense of Minsky's terminology (Minsky, 1985). An agent is always associated with several cognitive functions. In this perspective, a new-generation aircraft cockpit is a cooperating "society" of artificial and human agents. With the evolution of information technology, artificial agents have become software agents (Bradshaw, 1997). They are computer programs facilitating human-machine interaction, as well as human-human communication. For example, software agents and metaphors can be seen as remembering facilitators. We have built artificial agents, but it is time to better understand how they are used and how they influence our lives, and to model them in order to control them better. A major issue is that artificial agents cannot be studied in isolation from the people who are in charge of them. In contrast with people, artificial agents, even the most sophisticated, are not able to invent (at the constructive level) any strategy, procedure or action that is not programmed in advance. Artificial agents are good at making decisions and performing actions when everything is well determined and the consequences are well assessed in advance.
In safety-critical situations, people need to be aware of anything that matters in the decision-making cognitive functions used by these artificial agents. This is why cognitive function congruence is a key issue. Poor cognitive function congruence among human and machine agents may result in conflicts. A conflict arises when two agents, or more basically two cognitive functions (even if they belong to the same agent), have different roles (and goals) and/or compete for the same resource. Organizational decision-making depends on the way the organization is set up and actually works. The type of interaction among agents depends, in part, on the knowledge that each agent has of the others. An agent interacting with another agent, called a partner, can belong to two classes: (class 1) the agent does not know its partner; (class 2) the agent knows its partner. The second class can be decomposed into two sub-classes: (sub-class 2a) the agent knows its partner indirectly (using shared data or a mediating space, for instance); (sub-class 2b) the agent knows its partner explicitly (using communication primitives clearly understood by the partner). Any of these classes may lead to conflict. Conflict arises from unshared high-level goals or competition for available resources. It is thus necessary to define a set of synchronization rules for avoiding problems of unshared high-level goals or resource allocation between agents. Synchronization rules may be handled following three different models: (A) supervision (class 1); (B) mediation (sub-class 2a); (C) cooperation by mutual understanding (sub-class 2b). In the supervision model, the agent is totally ignorant of, or only vaguely aware of, the cognitive functions of the other agents. Typically, synchronization rules have to be handled by a supervisor (Figure 2).


Figure 2. Supervision: agents need to have a supervisor to assist or manage their activities

The supervisor can be one of the partners or an external agent using an appropriate knowledge base, which may be an operations manual. Typical interaction between a user and a VCR may lead to conflict, for example. When a user attempts to program a VCR, he or she usually requires assistance. In this case, the programming cognitive function includes delegation to a supervisor, who is either someone who knows how to do it or an association of the user and a user-guide manual. In the mediation model, the agent knows that the cognitive functions of its partner exist through the results of (at least) some of the partner's actions. Both of them use an active shared database, called a mediator (Figure 3). Such a shared database can be an agent itself if it actively informs the various agents involved in the environment, or requests new information (self-updating) from these agents. Agents use and update the state of this database. An example would be all agents noting their actions on a blackboard to which the other agents refer before acting. Agents have to cooperate to manage the shared database. This is no longer a problem of resource allocation, but a problem of sharing data, which each agent can use as it is entitled to. This paradigm is called a data-oriented system. Such a system has to control the consistency of the shared data. Cooperative relations between agents do not exclude competitive relations, i.e., shared data are generally supported by resources for which the corresponding agents may be competing. In this case, synchronization rules have to deal with resource allocation conflicts and the corresponding data consistency checking. The direct manipulation interfaces (Shneiderman, 1987) of current personal computers are based on this model.

Figure 3. Mediation: agents manage to communicate through an active database mediating interaction among agents
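The mediation model can be sketched as a minimal blackboard that agents consult before acting: agents never address each other directly, but post and read through the shared mediator. The interface below is an illustrative assumption, not a reference design.

```python
class Blackboard:
    """Minimal mediator: a shared record of agents' actions."""

    def __init__(self):
        self.entries = []  # (agent, action) records, in posting order

    def post(self, agent: str, action: str):
        """An agent notes an action on the blackboard."""
        self.entries.append((agent, action))

    def actions_of_others(self, agent: str) -> list:
        """What an agent consults before acting: every action
        noted by the other agents."""
        return [(a, act) for a, act in self.entries if a != agent]
```

A real mediator would also enforce the data consistency checking discussed above; here the blackboard only captures the indirect, data-oriented style of interaction.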

The previous models do not allow for co-adaptation. In the model of cooperation by mutual understanding, human agents interact directly with the others using a mental model of their organizational environment (Figure 4). Agents share a common goal and a common language expressed by messages, e.g., experts in the same domain cooperating to solve a problem. Agents communicate by incrementally constructing and sharing a common context. Each agent always attempts to construct a meaningful representation of the other agents in order to anticipate their behavior and reactions. Cooperation by mutual understanding involves learning about the other agents. Difficulty in, or absence of, learning may result in switching to either supervision or mediation. In our everyday life, when people try to communicate, they attempt to understand the cognitive model of the other. They incrementally learn the cognitive functions of the others in order to improve direct communication. This natural communication process fails when mutual understanding is no longer possible. A few software agents learn from users and support cooperation by mutual understanding in very specific contexts. However, outside of these contexts, the process of cooperation by mutual understanding is not guaranteed.

Proceedings of the Seventh International NDM Conference (Ed. J.M.C. Schraagen), Amsterdam, The Netherlands, June 2005


Figure 4. Cooperation by mutual understanding: agents incrementally construct a mental model of their organizational environment

These three models can be used to represent a single agent as a society of agents, i.e., cognitive functions, or a multi-agent (human and machine) system. A society of agents works towards reaching a conscious state of mind. As Daniel Dennett put it: “Enjoying a certain renown is not limited to appearing on television, at such and such a time. It means, more broadly speaking, enjoying the power of being able to influence the course of things. Conscious states of mind are, in an analogous way, those that are able to dominate our brain. And I would gladly define the spirit not as a ‘theatre’, but as an ‘arena’ (an arena without spectators, of course) in which different sequences of events, competing with each other, fight for domination. That which we refer to as ‘memory’, for example, would be nothing more than a series of events that, once taken place, have enjoyed a ‘renown’ in our mind superior to that of all the others, or that have held a position of influence for a long time — from which our consciousness is made up.” (Dennett, 1999, page 103).

The main issue in distributed human-machine decision-making is synchronization. Synchronization is supported not only by procedures or rules (at the procedural level), but also at the perception-action level, by incrementally creating and refining situation patterns, and at the constructive level, by using (and synchronizing) affects such as motivation and emotions. Synchronization may be handled by a supervisor, by an active database mediating interaction among agents, or by all agents when they are able to incrementally construct appropriate mental models of their organizational environment.

DECISION-MAKING SUPPORT

Human-centered design of multi-agent systems where decision-making is crucial leads to the development of both training courses and software agents. The former tend to adapt people to machines; the latter tend to adapt machines to people. The conceptual framework presented above and the various issues related to distributed decision-making were investigated and developed to help design such decision-making support, i.e., training courses and software agents. Both products involve the creation of appropriate cognitive functions that are distributed among people and machines.

Cognitive functions that support situation awareness

Objects have affordances, whether they are natural or artificial. Very simple artifacts such as door handles have different shapes. A horizontal flat door handle located in the middle of a door suggests pushing; a vertical cylindrical door handle located on one side of a door suggests pulling. Human beings establish a relationship between door handles and the appropriate action to open doors, i.e., push or pull. Gibson defined this kind of relationship between a human and an artifact as an affordance (Gibson, 1977, 1979). Such a relationship is not necessarily visible, known or desirable (Norman, 1999). It is thus important in some cases, e.g., in safety-critical systems, to identify affordances. The important thing to remember is that even the best-designed operational documentation may not help in using artifacts that have counter-intuitive affordances. Affordances are properties of physical artifacts as well as of cognitive functions. Some cognitive functions are learned or artificially constructed and are sometimes called cognitive artifacts; other cognitive functions are innate. Information technology also has affordances, which need to be found in tools that enable people to:

• generate information, i.e., tools that enable making information explicit to others;
• maintain information awareness, i.e., tools that enable people to be aware that appropriate information exists somewhere;
• access information, i.e., tools that enable people to access appropriate information at the right time in the right format;
• understand information, i.e., tools that enable people to understand information chunks.

At the perception-action level, people need clear, contrasted information in order to perceive it. The use of peripheral cues is usually recommended to trigger attention and direct its focus. Color coding is also a good way to discriminate information. When someone needs to make a decision based on a large amount of data, these data need to be organized into chunks according to the current context, and sometimes integrated into a model. The first recommendation may be realized by simply grouping the data; proximity may be sufficient to simplify and ensure good perception of the data. Data integration into a model involves interpretation: the way data are transformed to provide a meaningful representation to the pilots is crucial.

Cockpit designers are currently facing this kind of issue. The design of new cockpit situation awareness systems involves data on the situation of the aircraft with respect to terrain, aircraft performance, possible terrain conflicts within a specified range, and current regulations. Usually, pilots have to integrate these data "manually" by using outside views when available, or cockpit parameters, maps, aircraft performance parameters, ground proximity alarms, and procedures. Integrated models of these data may improve the perception, and consequently the decision-making, of pilots. For example, three-dimensional representations of the flight, in which safety altitudes, terrain and system information are integrated, can provide meaningful situation awareness to pilots.

In the control of dynamic systems, human operators need to identify abnormal situations. Since parameters constantly evolve, operators need to know which changes are interesting to process, and which changes are "normal". This leads to the development of external cognitive functions that provide appropriate interpretation of interesting changes. Alarms are good examples of such cognitive functions.
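
An alarm as an external cognitive function can be sketched as a change detector with context-dependent notions of "normal". The sketch below is purely illustrative: the parameter (vertical speed), the flight contexts, and the threshold bands are invented placeholders, not operational values.

```python
# Minimal sketch of an alarm as an external cognitive function: it interprets
# raw parameter evolution and reports only changes outside the "normal"
# envelope for the current context. Bands are invented for illustration.

# Context-dependent bands of normal vertical speed (ft/min), illustrative only.
NORMAL_VS = {"cruise": (-500, 500), "approach": (-1200, 0)}

def interesting_changes(samples, context):
    """Return (index, value) pairs for samples outside the normal band."""
    lo, hi = NORMAL_VS[context]
    return [(i, v) for i, v in enumerate(samples) if not (lo <= v <= hi)]

alerts = interesting_changes([-300, -700, -1500, -800], "approach")
```

The point of the sketch is that the same raw change is "normal" in one context and "interesting" in another; the interpretation, not the raw data, is what the external cognitive function provides.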
An example of a user interface that deals with situation awareness in aviation is the display of horizontal and vertical projections. Up to now, only horizontal information has been graphically available on board commercial aircraft. Vertical information, i.e., altitude and vertical speed, is available on the variometer, which provides numerical information. The vertical profile can be displayed dynamically; however, the way this profile is represented is crucial. We observed that displaying the planned trajectory was a good solution. Indeed, pilots should have clear information on possible obstacles and excessive flight-path angle at a glance. Pushing the analysis further, we discovered that vertical and horizontal information should be integrated; this is why 3D displays are good candidates.

Appropriate units also improve safe perception and interpretation. For example, in approach to landing, pilots make decisions using distances and speed, very rarely time. However, in emergency controlled-flight-into-terrain situations, pilots reason in time to anticipate possible impact. The resulting external cognitive functions should thus be adapted to context. At the same time, units should be consistent; changing scales might be disturbing and generate human errors.

Cognitive functions that support inference

Depending on the behavioral level, inference may be supported differently. At the perception-action level, inference is directly related to affordances of objects in the environment. At the procedural level, accessing the right procedure at the right time is crucial. At the constructive level, assistance in hypothesis generation (abduction), constraint identification and hypothesis selection is critical. In all cases, two high-level cognitive functions are at stake: remembering and time management.

Under time pressure, pilots need rapid access to the appropriate knowledge. In well-known situations, procedures are available; the only requirement is thus rapid access to the right procedure, which should be presented to pilots at the right time and in the right format. Recognition is always better than recall. Sometimes, failure to access the right information in the operational documentation is due to inappropriate effectivity, i.e., the documentation is not the right version with respect to the aircraft systems; this depends on the efficiency of the revision process. Flight operations rely on information that is accurate, up to date, and easy to access and use. Sometimes, the consistency of the operational documentation is also a cause of difficulty of access, the required information being distributed across various places in the documentation. This is due to the fact that the structure of the documentation is not appropriate to the context of use, i.e., there are references to sections in the documentation that are difficult to find in context. As aircraft have become more complex, the amounts and types of necessary operating information have grown. The Flight Crew Operating Manual (FCOM) for a commercial aircraft is an essential part of the documentation that must be supplied by an aircraft manufacturer.
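
The "right procedure at the right time" and the effectivity check described above can be sketched together. Everything in this sketch is a hypothetical placeholder (the procedure table, the revision numbers, the procedure name): it only illustrates retrieval by recognized situation plus a version check before presenting the procedure.

```python
# Sketch of procedure access as a cognitive-function support: retrieve the
# procedure matching the recognized situation, and check effectivity (that
# the documentation revision matches the aircraft configuration).
# All data are invented placeholders.
PROCEDURES = {
    ("approach", "gear unsafe"): {"name": "GEAR GRAVITY EXTENSION", "revision": 12},
}
AIRCRAFT_REVISION = 12

def fetch_procedure(phase, condition):
    """Return (procedure, status) for the recognized (phase, condition)."""
    proc = PROCEDURES.get((phase, condition))
    if proc is None:
        # No pre-established procedure: the crew must shift to the
        # constructive level (problem solving).
        return None, "no procedure: fall back to constructive problem solving"
    if proc["revision"] != AIRCRAFT_REVISION:
        return None, "effectivity mismatch: wrong documentation version"
    return proc, "ok"

proc, status = fetch_procedure("approach", "gear unsafe")
```

Keying retrieval on the recognized situation rather than on a table of contents is the "recognition rather than recall" point made above.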
In spite of significant advances in aircraft design, particularly relative to the cockpit, the FCOM format and medium have remained basically unchanged for years. It continues as a “classic” paper document contained in portable binders that can be updated periodically by physically replacing or adding pages. The
quantity of information presented should be limited to what is really necessary, because people have a limited processing capacity. We have observed that, at use time, information related to direct manipulation needs to be accessible first, and deeper knowledge about how systems work second. One area of particular focus for the INFO project (Information Needs for Flight Operations) was the definition of levels of detail for information in an FCOM (Blomberg, Boy & Speyer, 2000). This hierarchy would enable users to separate information considered essential for the operation of the aircraft by the manufacturer from clarification and amplification material that could be used optionally at operator discretion (see section 6 of this article for details).

Workload is certainly one of the major causes of problems in cognitive processing. As already said, workload is usually related to time: when the time required to implement a cognitive function is greater than the corresponding available time, workload is high. For this reason, people manage time to maintain a reasonable amount of workload. There are four interesting cases related to time management: cooperation, forgetting, time estimation and synchronization.

Cooperation among agents in the cockpit is always an important issue in terms of time management. Pilots constantly need to find the right compromise between constructing action plans adapted to the current situation and mutual understanding, and following pre-established coordination procedures. Under time pressure, procedures are not always precisely followed; articulation work helps to manage time. This is where crew resource management training helps, especially at the constructive level. When pilots share the same information, and even better the same knowledge, time is saved in the construction of solutions. At the procedural level, cooperation and related cognitive functions help in finding the right procedure at the right time.
Forgetting is a significant cause of wasted time. During inference, pilots forget either because the capacity of their working memory is limited, or because they have difficulty accessing the right knowledge chunk in their long-term memory or their operations manual (external memory). External cognitive functions may help avoid forgetting by providing both working-memory extensions and situated knowledge chunks.

Time estimation remains a source of worry during inference, especially at the constructive level. Inference involves finding the appropriate resources, constraints and actions with respect to the identified situation; this depends on the level of expertise and experience. When people estimate that finding the possible response elements will take too much time, they usually adopt a sub-optimal strategy that they know well, and implement it. This tendency is reinforced by time pressure, by situation complexity, and when people know that their situation awareness is inappropriate.

Pilots need to synchronize their inference activity to avoid developing conflicting actions. This is where the type of interaction among agents is crucial: either someone supervises the overall inference or does it himself or herself, or agents' inferences are mediated by a third party, or each agent knows about the inference processes of the others and integrates them into his or her own.

Cognitive functions that support action taking

At the constructive level, people plan a set of actions with respect to constraints and available resources. They need to schedule each action, taking into account its required time and the time available. At the procedural level, actions are already ordered, but this order usually requires adaptation to the context of use. Some planned actions may conflict with other actions that are more urgent (and not planned).

Interruption management is crucial in aviation, and in any domain where attention is required to perform conscious control of a safety-critical system. When pilots execute a checklist, for example, they may be interrupted at any time by an alarm or air traffic control. If the interruption requires full attention, they switch to another activity. They may then not remember where they stopped in the checklist, and sometimes not even that they were executing the checklist. External working-memory facilities that provide action history and suggested resuming points would increase the reliability of action taking.

Pilots may postpone the execution of actions when they are overloaded and have high-priority actions to execute. If they know that a less loaded period is coming, they usually postpone some actions. They also anticipate actions in order to be ahead for the next phase. Being ahead is a key issue in aviation, and provides maximum availability to process emergency situations. For example, some checklists are performed in advance in order to avoid an excessive workload later. An external cognitive function that provides an action history would help pilots here as well.

Pilots also execute erroneous actions, i.e., actions that deviate from the required task. External cognitive functions could be implemented to tolerate some of these actions, and to resist others. This article does not develop the human-error aspect, which is already broadly covered by many authors (Reason, 1990; Hollnagel, 1993).
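
The external working-memory facility suggested above can be sketched as a simple checklist tracker. This is an illustrative sketch (checklist items and class names are invented): it records which items have been completed so that, after an interruption, the system can propose a resuming point instead of relying on the pilot's memory.

```python
# Sketch of an external working memory for interruption management: record
# checklist progress and suggest where to resume. Item names are invented.
class ChecklistTracker:
    def __init__(self, items):
        self.items = items          # ordered checklist items
        self.done = []              # action history (external memory)
        self.interrupted = False

    def complete(self, item):
        self.done.append(item)

    def interrupt(self):
        # e.g., an alarm or an air traffic control call takes priority
        self.interrupted = True

    def resume_point(self):
        """Suggest the first item not yet completed."""
        for item in self.items:
            if item not in self.done:
                return item
        return None  # checklist finished

before_landing = ChecklistTracker(["gear down", "flaps full", "cabin ready"])
before_landing.complete("gear down")
before_landing.interrupt()
```

Because the history lives outside the pilot's working memory, the resuming point survives the interruption, which is exactly the reliability gain argued for above.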
Whether people forget, postpone or execute erroneous actions, they learn appropriate types of interaction that improve safety. For example, written procedures are used to support the supervision of the overall cockpit activity, to mediate interactions among new crew members, or to provide shared contextual references that support cooperation.


LESSONS LEARNED

Working in the aerospace domain for more than twenty years, I have learned that experience is a key asset for deepening cognitive processes such as decision-making in dynamic and safety-critical environments. Naive views of a domain such as aerospace can be very limited and generate data that would hardly lead to believable generalizations. In addition, considering decision-making as problem solving, i.e., mechanical problem decomposition, can be very counter-productive. This Cartesian approach is usually suited to academic problems defined in closed-world situations (otherwise mathematics could not be applied). More generally, it belongs to the "cognition as information processing" paradigm that has made cognitive science an axiomatic science looking for a set of phenomena (Dawson, 1998). Unfortunately, world phenomena that emerge today from the use of information technology are difficult to capture using this approach. As Varela et al. put it: "A growing number of researchers in areas of cognitive science have expressed dissatisfaction with varieties of cognitive realism. This dissatisfaction derives from a deeper source than the search for alternatives to symbol processing or even mixed society of mind theories: it is dissatisfaction with the very notion of a representational system. This notion obscures many essential dimensions of cognition not just in human experience but when we try to explain cognition scientifically. These dimensions include the understanding of perception and language, as well as the study of evolution and life itself." (Varela, Thompson & Rosch, 1999). Varela, Thompson & Rosch (1999) link this dissatisfaction with varieties of cognitive realism to what they call "the Cartesian anxiety": "... we have the two extremes, the either-or of the Cartesian anxiety: There is the enchanting land of truth where everything is clear and ultimately grounded.
But beyond that small island there is the wide and stormy ocean of darkness and confusion, the native home of illusion... This feeling of anxiety arises from the craving for an absolute ground. When this craving cannot be satisfied, the only possibility seems to be nihilism and anarchy."

I tried to apply this view to the investigation approach we used to rationalize the various levels of decision-making in aircraft cockpits. This rationalization process was based on experience and organized interactions with domain experts, and I used the cognitive function paradigm and formalism to support it. In this context, the Group Elicitation Method (GEM) proved to be a very efficient ethnomethodological tool.

Like Piaget's schemes, cognitive functions are not innately supplied but constructed over time from experience (Piaget, 1952, 1954). I suggest that elicited cognitive functions be documented, and their descriptions refined, in the same way. This process would lead to an external memory of cognitive function descriptions and references. At the start of the learning process, a cognitive function is assimilated within a limited context of use. This context evolves when the owner of the corresponding cognitive function accommodates it to various situations. In knowledge management systems, cognitive functions are related to information generation, retrieval, understanding or interpretation, for example. As action schemes, they may evolve towards specialization or generalization based on experience in various contexts of use.

One of the main applications of the cognitive function analysis of human-machine systems is human-centered automation, i.e., human-centered cognitive function allocation among humans and machines. Computerized knowledge management involves human-centered automation: some information is generated automatically. User interfaces and new jobs should be better co-designed. Design in this context has become a real challenge because the resulting co-evolution is very fast.
In safety-critical systems, it then becomes crucial to incrementally document this co-adaptation or co-construction of human and machine cognitive functions to ensure traceability (Boy, 1999).

Access to human experience without reference to an established theory or framework leads to methods such as introspection. Husserl, the founder of phenomenology, was interested in describing the structures of human experience. This 20th-century philosophical movement departs from the grounded positivist approach of cognitivism. In the light of phenomenology, I would like to propose that the art of knowledge representation, supported by cognitivism, be revised. I am not suggesting that the knowledge representation approach no longer makes sense; rather, it should be reformulated according to human experience, especially in open domains such as dynamic and safety-critical systems. In particular, concepts such as interactivity, cooperation and the emergence of practices should be taken into account in knowledge representation, especially since this article is targeted at dynamic and safety-critical decision-making. Current trends in artificial intelligence propose to take expertise in context (Feltovich, Ford & Hoffman, 1997). Artificial intelligence is no longer only a science of autonomous-system design, but an attempt to understand and co-design human and machine cognition. The main question is: "are we designing human cognition?" My answer is yes, in two ways. First, we are designing explanations of observed cognitive behaviors; the result of such a design process can be expressed in terms of cognitive functions. Second, we are designing new practices by developing new machines; these practices lead to the definition of new cognitive functions.

Finally, the Art of Memory (Yates, 1966) is coming back to the front line because human experience needs are growing in our information-intensive world.
The Art of Memory is a particular indexing mechanism that enables people to invent loci and images (indices) to help remember things. Emotional events tend to facilitate the formation of such loci and images. Images can be shocking and unusual, beautiful or ugly, funny or rude. Good stories create emotions that are likely to create useful indices that will facilitate remembering. The need for methods that link abstract concepts to
experienced ones, objects to subjects, involves new approaches to human cognition, and to decision-making in particular. This is the case in situated action (Suchman, 1987), distributed cognition (Hutchins, 1995b), and activity theory (Engeström, Miettinen & Punamäki, 1999; Nardi, 1997).

RELATED WORK

Many authors have studied decision-making. Klein and Crandall (1995) presented the role of mental simulation in decision-making; mental simulation is the process of consciously enacting a sequence of events. Their approach is similar to Kahneman and Tversky's (1982), which consists in generating predictions, assessing event probabilities, generating conditional probabilities, assessing causality and generating counterfactual assessments. Beach and Mitchell (1987) took a broader approach based on image theory, which involves images about individual values, the eventual outcome of the current situation, the desired outcome, and the effects of adopting different courses of action. In the same direction, other work can be cited, such as progressive deepening (de Groot, 1965), mental imagery (Shepard & Metzler, 1971), and mental models (Rouse & Morris, 1986; Kieras & Bovair, 1984; DeKleer & Brown, 1983; Forbus, 1983). These approaches consist in generating a hypothetical course of action, evaluating it, interpreting issues and refining it. The approach presented in this article focuses on situated decision-making, where the effort is put on differentiating levels of behavior instead of working on possible sequences of action rules.

Hunt and Joslyn's (2000) work on time-pressured decision-making is very relevant to the work presented here. They carried out a task analysis on public safety (911) dispatching. They used abstract decision-making (ADM), "a computer game in which participants earn points by sorting objects into bins as rapidly as possible." The authors used professional dispatcher performance on a realistic simulation task to verify the results of nonprofessional subjects in the same simulation; in other words, they used professional experience to validate analytical results. In our work, we did not run laboratory experiments: we captured information and knowledge from field studies and GEM sessions.
Our experience-based approach relies on experience feedback, interaction with domain practitioners and incremental rationalization of elicited knowledge.

CONCLUSION AND PERSPECTIVES

Decision-making may be modeled by a network of cognitive functions that may be distributed among many agents. This article has tried to show that group knowledge elicitation and cognitive function analysis can be important conceptual tools for investigating decision-making in dynamic and safety-critical environments. First, a conceptual framework for decision-making was developed and applied in the aviation domain. Second, decision-making was analyzed from a socio-cognitive perspective. Third, against this background, decision-making supports were derived, and an example of application in the aviation domain was provided.

Decision-making is inherently situated. This makes it difficult to provide decision aids, because people need them in context. Context elicitation and categorization is a difficult task because we are not aware of which context elements should be kept. More generally, what is context? This question has been raised for a long time, with very few answers. Making categories of situated decision-making patterns according to the levels of behavior and the high-level cognitive functions described in this article should be a good start. This is what we did in the INFO project (Blomberg, Boy & Speyer, 2000).

In addition, a decision needs to be made within a given period of time. In most scientific fields, time is an independent variable; however, time can be a very important dependent human factor. Perception of time may drastically influence decision-making: time might be perceived to be shorter than it is in reality because workload is excessive. Time perception and action are intimately related. In particular, the time it takes to make a decision to act may vary according to the uncertainty of the repercussions of the action. Action irreversibility is crucial in decision-making, especially in safety-critical environments.
In such environments, time units are incrementally discovered in order to better understand and control these systems. In aviation, for example, a flight may be decomposed into phases; in each flight phase, several parameters are persistent and simplify the way flying is handled by pilots. These time units are domain-dependent. They enable domain human agents to allocate cognitive functions appropriately. Domain human agents include end-users, designers, and other people who have an influence on the life cycle of the artifacts involved in the actual decision-making process. More research should be carried out to better understand time issues in decision-making.

Training is also an important perspective of this work on decision-making. Aerospace research, airlines and the aircraft industry realized in the early eighties that Cockpit Resource Management (CRM) might alleviate psycho-sociological problems in the cockpit. Today, CRM has become common practice in aviation training centers, but it is far from sufficient to solve distributed-cognition problems in the cockpit. In particular, it has become clear that many decisions are already determined during design. A decision may be very simple to make; let us call it a mini-decision. The accumulation of several mini-decisions usually leads to a complex decision. Even if a specific agent makes this decision, each mini-decision that is part of it may belong to a different agent.
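
The point about mini-decisions can be made concrete with a small sketch. Everything here is an invented placeholder (the agents, their votes, and the "any go-around call wins" rule, which stands in for whatever combination rule a real system would use): it only illustrates a complex decision emerging from mini-decisions owned by different agents.

```python
# Sketch of a complex decision as a combination of mini-decisions, each
# possibly belonging to a different agent. Agents, votes and the
# combination rule are invented for illustration.
def combine(mini_decisions):
    """A go-around is commanded if any contributing agent calls for one."""
    return "go around" if "go around" in mini_decisions.values() else "continue"

minis = {
    "captain": "continue",
    "first officer": "go around",   # e.g., an unstable-approach call-out
    "aircraft (alerting system)": "continue",
}
decision = combine(minis)
```

The asymmetric rule reflects the safety-critical setting: a single conservative mini-decision dominates the aggregate, even though it belongs to a different agent than the one who nominally "makes" the decision.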


Another perspective is to investigate decision-making as conflict resolution. Although most conflicts are overcome immediately, a few may lead to difficult situations. The cognitive function paradigm is appropriate for representing local and global conflicts within a community of agents. These conflicts usually result from lack of appropriate knowledge or timely information, lack of training, ignorance, forgetting, imprecise and uncertain information, inability to anticipate future action, role confusion, poor access to necessary information, competition among agents, insufficient resources, personality incompatibility, lack of compliance with procedures, human reliability, inability to delegate, misunderstanding, power, and so on. All these factors are encountered in aircraft cockpits and partially explain why conflict resolution is so important to keep the overall system safe. Conflict resolution may take place at any time during a flight. Conflicts can be overcome by active supervision, by mediation, or by cooperation by mutual understanding. Agents, either by themselves or in a group, have to make the right decision at the right time according to the information they have. The high-level cognitive functions and behavioral levels presented in the above framework will be useful for studying distributed decision-making in terms of conflict resolution.

Finally, we should extend current work on operational procedures. They are designed to support anticipation, dynamic decision-making, real-time communication, collaborative work, action coordination, and other cognitive functions that deal with time. Even if pilots are trained to use operational procedures, they are not only procedure followers; they solve problems based on their own experience. Specific human skills are always crucial and mandatory in safety-critical situations.

ACKNOWLEDGMENTS

Richard Blomberg, Gabrielle de Brito, Bob Faerber, Laurent Moussault, David Novick, Eric Petiot, Jean Pinet, Jean-Jacques Speyer, Helen Wilson, all the pilots, flight instructors and documentation specialists involved in the Airbus INFO project, and the airline pilots involved in the SFACT project on checklists greatly contributed to the overall research effort. Thanks to all. The viewpoints expressed in this paper are the responsibility of the author and do not represent official industrial or regulatory statements.

REFERENCES

1. Addis, T.R., Gooding, D.C. & Townsend, J.J. (1993). Knowledge acquisition with visual functional programming. Proceedings of EKAW'93, Lecture Notes in Computer Science Series, Springer Verlag, Berlin, 379-406.

2. Anderson, J.R. (1983). The Architecture of Cognition. Harvard University Press, Cambridge, MA.

3. Amalberti, R. (1995). La conduite des systèmes à risques. PUF, Paris.

4. Beach, L.R. & Mitchell, T.R. (1978). A contingency model for the selection of decision strategies. Academy of Management Review, 3, 439-449.

5. Berthoz, A. (2003). La décision. Odile Jacob, Paris.

6. Billings, C.E. (1997). Aviation Automation: The Search for a Human-Centered Approach. Lawrence Erlbaum Associates, Mahwah, NJ.

7. Blomberg, R., Boy, G.A. & Speyer, J.J. (2000). Information Needs for Flight Operations: Human-Centered Structuring of Flight Operations Knowledge. Proceedings of HCI-Aero 2000, International Conference organized in cooperation with ACM-SIGCHI, Toulouse, France, September.

8. Boose, J.H. (1984). Personal construct theory and the transfer of human expertise. Proceedings of AAAI-84, American Association for Artificial Intelligence, 27-33.

9. Boy, G.A. (1987). Operator Assistant Systems. International Journal of Man-Machine Studies, 27, 541-554.

10. Boy, G.A. (1996). The Group Elicitation Method: An Introduction. Proceedings of EKAW'96, Lecture Notes in Computer Science Series, Springer Verlag, Berlin.

11. Boy, G.A. (1998a). Cognitive Function Analysis for Human-Centered Automation of Safety-Critical Systems. Proceedings of CHI'98, ACM Press, 265-272.

12. Boy, G.A. (1998b). Cognitive Function Analysis. Ablex, Stamford, CT.

13. Boy, G.A. (1999). Traceability. EURISCO/Airbus Industrie Technical Report T-99-060.

14. Bradshaw, J. (1997). Software Agents. AAAI/MIT Press, Cambridge, MA.

15. Dawson, M.R.W. (1998). Understanding Cognitive Science. Blackwell Publishers, Malden, MA.

16. de Brito, G. (1998). Study of the use of Airbus flight-deck procedures and perspectives for operational documentation. Proceedings of HCI-Aero'98, International Conference organized in cooperation with ACM-SIGCHI, Montreal, Canada, May 1998, 195-201.

Proceedings of the Seventh International NDM Conference (Ed. J.M.C Schraagen), Amsterdam, The Netherlands, June 2005


