EKAW'93 Proceedings. Lecture Notes in AI. Springer Verlag, Berlin
Knowledge Acquisition in Dynamic Systems: How Can Logicism and Situatedness Go Together?

Guy Boy
European Institute of Cognitive Sciences and Engineering (EURISCO), BP 4032, 10 avenue Edouard Belin, 31055 Toulouse Cedex, France
Tel. (33) 62 17 83 11; FAX (33) 62 17 83 38; Email: [email protected]

Abstract. This paper presents an investigation of knowledge acquisition in dynamic systems. The nature of dynamic systems is analyzed and a first ontology of the domain is proposed. Various distinctions are presented, such as the agent perspective, the perception of temporal progression, and the notions of consequences and expertise in dynamic systems. We use Rasmussen's model to characterize ways knowledge can be acquired in dynamic systems. Procedures are shown to be essential knowledge entities in interactions with dynamic systems. An emphasis on logicism and situatedness is presented and discussed around the situation recognition and analytical reasoning model. The knowledge block representation is introduced as a mediating representation for knowledge acquisition in dynamic systems.
1 Introduction

Recent contributions clearly show that the knowledge acquisition (KA) field has grown up to the point that formal methodologies are now available, such as KADS (Wielinga et al., 1992). However, very little has been done in the direction of KA in dynamic systems. Most of the work has been done in static worlds or very slowly moving worlds. The kinds of problems that are of interest in this paper deal with human-machine interaction where time is a critical issue. A tremendous amount of work has been and is being done in automatic control research. However, most of this work is focused on very low level activities, essentially sensory-motor. The models and technology that are used are primarily numerical, e.g., matrix theory, optimization, linear and non-linear control. In contrast to automatic control research, part of computer science has evolved towards symbolic (instead of numerical) computation with the promotion of artificial intelligence (AI) in the 80's. In AI, the notion of feedback has not been developed to the same extent as it has in automatic control research, although a few attempts have been made in reactive planning (Drummond, 1989) and in procedural reasoning (Georgeff & Ingrand, 1989). These attempts took place in engineering domains, and in particular in the space domain. It is also clear that no real effort has been made in the acquisition of knowledge involved in the control and evolution of dynamic systems.

For almost 13 years, our work has been directed towards a better understanding of human-machine interaction in the aerospace domain. Most of the systems involved, such as airplanes, spacecraft, air traffic control, etc., are highly dynamic and require accurate control that guarantees safe and reliable operations. The main concern that aerospace designers have is that it is tremendously difficult to anticipate how end-users will use the tool they are designing, i.e., it is usually impossible to anticipate the situations or contexts that end-users will be facing. Context is a question of focus of attention. If someone has not yet experienced a situation or context, then he/she cannot describe it because he/she
does not own the corresponding patterns allowing a situation recognition process to take place. Acquisition of such situation patterns and appropriate behavior can only be carried out when users are interacting with dynamic systems in real situations. For this reason, such a knowledge acquisition process is intrinsically incremental and situated. Furthermore, the more dynamic the system is, the more the acquisition needs to be on-line.

This paper presents several points of view on knowledge acquisition in dynamic systems. The dynamic systems that we are talking about are controlled by expert users. Humans and machines are viewed as agents that interact with each other. The knowledge to be acquired is the knowledge involved in the interaction between human and machine agents. The knowledge acquisition process is performed using the paradigm of integrated human-machine intelligence (IHMI). The notion of situation is developed, as well as the procedures that people use for controlling dynamic systems. We try to show that the knowledge level cannot be dissociated from the lower levels of human behavior in the control of dynamic systems. This makes KA more difficult. However, this view is challenged by another view supporting the fact that society is changing from an energy-based world to an information-based world. In this view, humans tend to control information-based systems at the knowledge level. This does not mean that expertise is clearly identified at design time. We claim that in information-based worlds, the expertise that is necessary to control dynamic systems is a key issue for the design of user interfaces. Top-down knowledge acquisition is then possible if expert users are clearly identified during the design process. However, when there is no expertise available at design time, knowledge acquisition has to be bottom-up. In the balance of the paper, we develop a model and a mediating representation specific to KA in dynamic domains.
2 Problem Statement

In this paper, domains are restricted to human-machine systems where the machine is a dynamic system, e.g., an airplane. The knowledge we are looking for is of two types:
— it describes how the system works (technical knowledge);
— it describes how the system is or should be used (operational knowledge).
We are mainly concerned with the second type of knowledge, i.e., what the user needs to know to control the system. However, it is obvious that there are links between technical knowledge and operational knowledge.

The main goal of this paper is to propose a methodology, including appropriate formalization tools, to facilitate the construction of intelligent assistant systems (IASs). We have already introduced and described the concept of IAS in previous work (Boy, 1987, 1991). From a human-machine interaction point of view, an IAS mediates interactions between a human operator and the physical system being controlled. The evolution of aircraft cockpit technology, for instance, tends to increase the IAS role to the point that pilots interact almost only with it instead of interacting directly with the aircraft (as a mechanical system). IASs create "illusions" for human operators by providing
information that is not the same as the information processed by the mechanical system. The main problem is to validate these IASs in real-world environments so that such "illusions" become natural and guarantee the safety of the overall system. In this paper, an IAS has three modules: a proposer that displays appropriate information to the human operator; a supervisory observer that captures relevant actions of the human operator; and an analyser that processes and interprets these actions to produce appropriate information for the proposer module. These modules use two knowledge bases: the technical knowledge base and the operational knowledge base.

Over the years, we have developed the paradigm of integrated human-machine intelligence (IHMI) (Boy & Nuss, 1988; Shalin & Boy, 1989; Boy & Gruber, 1991) to give a framework for acquiring knowledge useful for the implementation of an IAS. This paradigm is presented in Figure 1. Arrows represent information flows. This model includes two loops:
— a short term supervisory control loop that represents interactions between the human operator and the IAS;
— a long term evaluation loop that represents the knowledge acquisition process.
[Figure 1. An Integrated Human-Machine Intelligence model, showing a supervisory control loop and an evaluation loop over the following components: proposer, system builder, temporary operation K.B., operator, environment, situation evaluator, supervisory observer, knowledge acquisition observer, analyser, knowledge acquisition K.B., technical domain K.B., operational K.B., and observed results.]
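To make the three-module IAS architecture concrete, the following is a minimal illustrative sketch; the class and method names are ours, not those of any implementation cited above.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three IAS modules over two knowledge bases.
# Names and signatures are illustrative only, not the cited systems.

@dataclass
class KnowledgeBase:
    facts: dict = field(default_factory=dict)

class SupervisoryObserver:
    """Captures relevant actions of the human operator (short term loop)."""
    def capture(self, operator_actions):
        return [a for a in operator_actions if a.get("relevant", True)]

class Analyser:
    """Interprets observed actions using technical and operational knowledge."""
    def __init__(self, technical_kb: KnowledgeBase, operational_kb: KnowledgeBase):
        self.technical_kb = technical_kb
        self.operational_kb = operational_kb
    def interpret(self, actions):
        # Toy interpretation: look each action up in the operational K.B.
        return [self.operational_kb.facts.get(a["name"], "unknown action") for a in actions]

class Proposer:
    """Displays appropriate information to the human operator."""
    def propose(self, interpretations):
        for info in interpretations:
            print("Suggested information:", info)

# One pass of the supervisory control loop (invented content).
operational_kb = KnowledgeBase({"extend_flaps": "check airspeed limit first"})
observer, analyser, proposer = (SupervisoryObserver(),
                                Analyser(KnowledgeBase(), operational_kb),
                                Proposer())
actions = observer.capture([{"name": "extend_flaps"}])
proposer.propose(analyser.interpret(actions))
```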
The problem that we try to solve in this paper can be stated as the following question: what kind of methodology and tools can be developed today to elicit and formalize knowledge that characterizes dynamic systems? This problem can be divided into smaller
subproblems. The first subproblem that comes to mind is to define the concept of dynamics by constructing an ontology of the dynamic systems domain. The second subproblem is to propose an appropriate knowledge representation for the dynamic systems domain. In particular, we defend the view that the concept of agent is very useful to represent dynamic systems.
3 Dynamics attributes

The representation of time has long been a concern in AI (Allen, 1985). This paper tries to develop an ontology that is useful for acquiring knowledge about dynamic systems. The aspects of dynamic systems described in this paper are elicited from real-world task environments.

Dynamics is a difficult concept to define. It deals with the duration of events or actions, involving reasoning about temporal intervals. Relations between events and actions are perceived by people according to their own experience. If these relations persist, then people cognitively (re)construct patterns of them upon reflection. The concept of persistence can be associated with the concept of context, i.e., as long as a fact remains true it can be included in the current context. Periodicity and rhythms are other concepts that can be associated with dynamics. Parallel occurrences of events lead to the concept of choice. Dynamics is also associated with the notion of consequences, for instance when a situation is poorly perceived or memorized.

In this section, we give a first ontology of the concept of dynamics. Dynamics deals with action and re-action. Actions are the main characteristics of agents. Agents can be humans or machines that act. Thus, the concept of an agent is essential in dynamic systems. The perception of temporal progression is often a matter of context construction. The notion of consequences in dynamic systems helps to characterize them. Finally, dynamic system management is a matter for experts.

3.1 Towards a first ontology of dynamics

As we already mentioned in section 2 of this paper, we differentiate between technical domain knowledge and operational knowledge. The same distinction has been made by Mizoguchi, Tijerino and Ikeda (1992) when they represent expertise by domain knowledge and task knowledge. Mizoguchi et al. proposed a domain ontology including first principles, basic theories and a device model, and a task ontology including a model of problem solving, a generic vocabulary (task-dependent verbs and nouns) and generic tasks.

We take the view that a dynamic system evolves with time according to its own internal processes and external inputs/outputs from/to the environment. Table 1 presents a task ontology of dynamics according to Mizoguchi et al.'s distinction. This first ontology includes hierarchical descriptions of an agent, a human, and dynamics-related concepts. The goal of these descriptions is to provide a vocabulary useful in task analysis as well as in the observation and analysis of users' activity. Human factors issues have guided its development.
Table 1. A first ontology of dynamics.

agent
  act
    interact
    control process
    follow procedures
    perform a task
    send a message
    wait (waiting loop)
  sense information
    perceive
    recognize situations
  learn
    adapt
    analogize situations
    generalise context
  use cognitive capabilities
    anticipate
    behave rationally
    control and command
      control continuous parameters
      set parameter values (set points)
    coordinate
    decide
    interrupt
    iterate (iteration loop)
    monitor signals
      maintain and monitor markers
      monitor and react to warning signals and alarms
      monitor trends
    postpone (retardability)
    reason quantitatively and qualitatively
    revise hypotheses
    schedule, plan
    supervise

human
  act
    talk (verbal channel)
    use gesture resources
  is a cognitive agent
    regulate activity
    cope with workload and time pressure, respond to cognitive demand
    supervise (supervisory control)
    switch from automatic mode to manual mode
  sense information
    feel force feedback
    hear (auditory system)
    watch (central vision)
    watch (peripheral vision)
  use cognitive capabilities
    maintain attention
      directed attention
      focus of attention
      attention switching
    maintain vigilance
    prioritize tasks dynamically
    use and improve skills

dynamics-related concepts
  verbs
    acknowledge reception
    delegate
    enter (into), exit, leave
    push, pull
    redo, undo
    release, engage
    start, stop, select
    stay (within), keep
    turn
  nouns
    action, force, energy
    events
      breakdown
      change, move
      feedback
    parallelism, corequisite
    sequentiality, causality, prerequisite, chronology, history
    precision, uncertainty
    reliability, safety, consequences
    situation
      abnormal situation
      actual state or situation
      context
      desired state or situation
      expected or unexpected situation
      perceived situation
      signal
      situation awareness
    time
      instantaneous
      phase
      required time, available time

In order to illustrate these concepts, we will take three short examples. The first one describes dynamics as interaction with a dynamic machine. In the second example, we describe dynamics as coordination between people to solve problems in real-time. Finally, the third example introduces the concept of procedural knowledge as a real-world requirement for safe and reliable human-machine interaction.
Example of knowledge that is involved in driving a car. The kind of knowledge that is used in driving a car includes dynamics concepts. A scenario for using a car could be the following. First you have to enter the car and sit down. Turn the ignition key and start the engine. Then, look around for merging traffic. When the road is safe, you release the handbrake, engage first gear, turn the steering wheel and start rolling. When driving, you have to keep yourself aware of the road situation, e.g., obstacles, road conditions (wet or damaged surface, etc.), merging traffic, etc. Situation awareness is essential. You also have to control your car in order to keep it on the right side of the road. You have to trust the braking system as well as the power system, turn signals and fuel gauge. This continuous monitoring ensures safety and reliability in driving.

Coordination between team members to solve problems in real-time. Cooperative work is becoming a subject of primary importance, as the knowledge of several experts is often needed to solve current complex problems in real-time. People need to interact to solve these problems. In an aircraft cockpit for instance, delegation is a key activity of pilots, who have to trust their partners in the accomplishment of required tasks. The more the cockpit becomes automated, the more artificial agents are available to pilots. As a result, pilots have to delegate tasks to these artificial agents. The more these new agents are used, the more the user's trust develops along with successes and failures. Modern airplanes include large numbers of such agents. The shift from "self delegation" (i.e., the crew member performs the task directly without delegating to anyone else) to delegation to another qualified human being, or to an artificial agent, is not obvious in practice. This is because the agent metaphor, or the magic of the human-machine interface (Tognazzini, 1993), has to hold its promises, i.e., the effects that are produced must match users' expectations. In particular, agents need to acknowledge the accomplishment of their actions. This is true whether the cooperating agent is a human or a machine. Capturing such knowledge is not trivial because it involves experimentation and a great deal of prerequisite domain knowledge.

Procedural knowledge. Procedural knowledge is often used when safety and reliability are issues (e.g., flying an airplane). Very little effort has been devoted to the acquisition of procedural knowledge. Boy (1989, 1991), Mathé (1990, 1991, 1992) and Saito et al. (1991) have developed methods to acquire procedural knowledge. In particular, Boy and Mathé developed a mediating representation called knowledge blocks to help the acquisition of this type of knowledge (see section 6.2 of this paper). Procedural knowledge is always unstable because it is constantly revised to improve the control of dynamic systems with respect to new experimental findings. The more dynamic systems are used, the better people know how to operate them and the better the procedures that can be developed. Operators need to have a lot of confidence in the procedures they are using, otherwise they stop using them. In the best case, they annotate and modify them. Thus, the acquisition of procedural knowledge is necessarily a highly dynamic process itself.

3.2 The agent perspective

In dynamic systems, the issue of procedure management and maintenance is extremely important.
Usually, procedures complement interface instruments and controls. They are used to provide human operators with more or less strict guidelines that help them
conduct these systems safely. Procedures can be more or less complicated according to the transparency of the human-machine interface, and in themselves. Automation separates the operator from the real (usually mechanical) system. As a matter of fact, automation can be seen as a deeper user interface than conventional (surface) interfaces. From this perspective, the more the interface separates the operator from the real system, the more procedures are needed, either to learn how to operate new systems that do not have mechanical feedback (this is to create appropriate cognitive automatisms), or to make sure that operations are executed with respect to specifications. Conversely, when the interface presents the right information, at the right time and in the right format, the operator tends to understand what is going on and acts appropriately.

When some procedures are directly implemented in the system and show up on the human-machine interface, we will talk about artificial agents. Such agents are characters (Laurel, 1991) that have properties and behavior. They usually act on the system being controlled. They serve the operator as an assistant would do. We say that the human operator delegates some tasks to these agents.

Interacting agents. Let us take the example of dynamic knowledge in fault diagnosis. In static domains, diagnosing a fault involves knowledge about the structural relations between components of the faulty system. In dynamic domains, diagnosing a fault is an activity in addition to the ongoing activity of controlling the faulty system. Let us say that an agent takes care of this activity (the diagnostic agent). In an airplane for instance, when a fault occurs, pilots cannot stop the flying task to focus only on the diagnostic task. Let us say that the flying agent manages the flying (control) task. Furthermore, diagnosis and control are not independent activities: the fault disturbs the system being controlled; sometimes the system needs to be disturbed (tested) to find the causes of ambiguous symptoms; some regular control actions on the system modify the course of the diagnosis activity. The two corresponding agents interact. In this case, the acquisition of the operational knowledge is very complex.

There are two ways of implementing this KA process: by construction and by observation. (1) Advocates of model-based reasoning would promote the construction of operational knowledge from technical knowledge about both control and diagnosis. However, this construction will hardly reproduce operators' expertise in dynamic fault management. (2) The solution of observing people diagnosing faults in dynamic environments is also problematic. Indeed, when a fault occurs, operators are usually overloaded (time pressure), they have multiple decisions to make, etc.

3.3 Perception of temporal progression

People perceive dynamics differently according to the feedback they are able to receive. There are rapid systems and slow systems. Rapid systems can be perceived as not "moving" at all if the focus of attention is on slower agent performance. Conversely, slow systems induce problems of vigilance. It is usually better to isolate agents and acquire knowledge separately to avoid such confusions. But once each individual dynamics is understood, it is necessary to better understand how people deal with several agents together that have different dynamicities. Then, the operator's focus of attention is a crucial target for the knowledge engineer.
The perception of dynamics depends on the user's expertise. The more the user is familiar with an agent, the more he/she is able to quickly anticipate its reactions. Experts are always ahead of the dynamic systems they are controlling. Such expertise is compiled for efficiency purposes, and is therefore difficult to elicit. In the HORSES project, we developed a methodology to acquire such expertise by observation (Boy, 1986). Elicitation of the corresponding knowledge can be disturbed by workload problems. Indeed, time pressure and high workload tend to modify the regular activity. Thus, the corresponding factors have to be detected in order to correctly validate or contextualize the knowledge that is acquired.

"Psychologists often think that it is possible, in principle and in practice, to examine cognitive processes without concern with context, i.e. to neutralize the task so that performance reflects "pure process"... Evidence suggests that our ability to control and orchestrate cognitive skills is not an abstract context-free skill which may be easily transferred across widely diverse domains but consists rather of cognitive activity tied specifically to context... This is not to say that cognitive activities are completely specific to the episode in which they were originally learned and applied. In order to function, people must be able to generalize some aspects of knowledge and skills to new situations. Attention to the role of context removes the assumption of broad generality in cognitive activity across contexts and focuses instead on determining how generalization of knowledge and skills occurs. The person's interpretation of the context in any particular activity may be important in facilitating or blocking the application of skills developed in one context to a new one." (Rogoff, 1984).

The notion of context is essential in dynamic systems. The use of dynamic systems provides a very large number of situations. The awareness of temporal progression is context-sensitive. Context is usually related to other entities like situation, behavior, point of view, relationships among agents, discourse, dialogue, etc. Context can be defined in several ways. It can be a dynamic 'window' which shows the state of the environment including the user (e.g., his/her intentions, focus of attention, perceived state of the environment, etc.). In the Computer Integrated Documentation (CID) project, the notion of context has been used to tailor a documentation system to users' information requirements. It allows the system to narrow the domain and the search.

3.4 Dynamics and consequences

Whenever there is a breakdown in human adaptation during dynamic system management, human operators involve different cognitive capabilities according to the type of system they are controlling. Furthermore, the current trend is to go from energy-based systems to information-based systems. This means that whereas humans used to physically control systems, today they easily monitor information systems that mediate the interaction between them and the systems being controlled (there is at least no real cognitive overload in normal situations). The main problem comes from the fact that when a failure occurs, human operators now have to understand it. Thus, even if workload is extremely low during monitoring (sometimes to the point that vigilance is an important issue), it may increase exponentially during fault diagnosis.
This fact is extremely important to consider with respect to knowledge acquisition in dynamic systems. In
order to better master this issue, we now describe instances that represent three classes of dynamic systems.

Coffee maker, car and airplane. You can stop your coffee maker if there is any problem. In this case, you can stop and nothing dramatic will result from this act. If you stop your car for any reason, let us say to avoid a pedestrian crossing the road, you might cause another accident because the car following you did not anticipate this unpredictable stop. In this case, you can stop but... If you are a pilot and you are facing a severe problem on board, you just cannot stop the airplane, otherwise you fall! In this case, you cannot stop at all.

Unconstrained dynamic systems such as the coffee maker can be stopped safely at any time independently of the current evolution of the environment. They evolve with time in an open-loop fashion. In other words, you can anticipate the final conditions before stopping. Loosely-constrained dynamic systems such as the car can be stopped according to conditions set by the environment. They evolve with time in a closed loop with the evolution of their environment. In this case, human operators have to be physically in the loop. In other words, you cannot fully anticipate the final conditions before stopping. Strongly-constrained dynamic systems such as the airplane cannot be stopped at any time when in operation. They evolve in a closed-loop fashion. Besides this closed-loop evolution, physical control is becoming very remote from human operators. According to these distinctions, we claim that the perception of dynamics in one or the other system is quite different.

This classification of dynamic systems is made with respect to the consequences perceived by a human operator of safely stopping them. It is particularly interesting from a KA point of view (a toy sketch of this classification is given at the end of section 3.5). Knowledge involved in failure recovery depends on the type of dynamic system. In the first type, there is no uncertainty. Thus, the dynamic system can be operated without requiring attention. Even if the operator makes an error, the result in the manipulation of the corresponding tool will not cause any dramatic problems. In the second type, human activity requires attention during operation. Human operators have to compromise between choices. Knowledge about such compromises is very difficult to acquire. Activity variation around the task requirements is very difficult to predict off-line. Thus, observation methods must be used, and reporting systems are frequently implemented. In the third type, human activity demands continuous attention. The machine is a reactive agent. Cooperation between the human and the machine governs the entire stability of the overall system. The capture of the cooperation knowledge involved in the control task is extremely difficult for knowledge engineers who are novices in the expertise domain. Both self-training and frequent observation of real experts are always necessary.

3.5 How do experts cope with dynamic systems?

Experts usually improvise according to the situation. They have a sense of the evolving situation. They anticipate what will happen next. They are never out of the loop, except when either their vigilance is too low or their workload is too high. The way this anticipation is performed is very important to acquire. Experts can be "ahead" or "behind" according to several factors including:
— stage in learning (proficiency);
— boredom or motivation;
— complacency;
— workload;
— fatigue; etc.
Operators' adaptation to the task and situation is tremendously important to understand in order to capture knowledge specific to systems belonging to the third class described above.

At this point, it is essential to notice that tasks are the prescribed activities that operators must perform to reach their goals. Operators' activities are generally quite different from the tasks that are demanded. We therefore differentiate between task demand and operator activity in knowledge acquisition for dynamic systems. Humans "apparently" solve complex problems, but they do so by using good enough solutions (Rappaport, 1993). This is very true in dynamic environments. These solutions most of the time fit within safety boundaries.
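As announced in section 3.4, the three-way classification can be rendered as a toy sketch; the enum labels and the attention-demand mapping are our own illustrative reading of the coffee maker / car / airplane distinction, not a formalization.

```python
from enum import Enum

class DynamicSystemClass(Enum):
    # Illustrative labels for the three classes of section 3.4.
    UNCONSTRAINED = "can stop safely at any time (open loop), e.g., coffee maker"
    LOOSELY_CONSTRAINED = "can stop under environmental conditions (closed loop), e.g., car"
    STRONGLY_CONSTRAINED = "cannot stop while in operation (closed loop), e.g., airplane"

def attention_demand(cls: DynamicSystemClass) -> str:
    """Toy mapping from system class to the attention regime described in the text."""
    return {
        DynamicSystemClass.UNCONSTRAINED: "no continuous attention required",
        DynamicSystemClass.LOOSELY_CONSTRAINED: "attention required during operation",
        DynamicSystemClass.STRONGLY_CONSTRAINED: "continuous attention and cooperation required",
    }[cls]

for c in DynamicSystemClass:
    print(c.name, "->", attention_demand(c))
```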
4 Knowledge involved in dynamic systems

4.1 Rasmussen's Model

Knowledge involved in the control of dynamic systems has been translated into human operator behavior (Rasmussen, 1983). Rasmussen's model was developed to represent the performance of an operator in a process control situation. It provides a framework to study human information processing mechanisms. According to our interpretation of Rasmussen's model, human beings work as hierarchical systems including two types of processors:
— a high level processor, subconscious and highly parallel;
— a low level processor, conscious and sequential.
The high level processor corresponds to sensori-motor functions. It is highly dynamic because of its parallelism. The low level processor is limited by the capacity of short term memory. However, this (symbolic) information processing capacity allows the treatment of a large variety of problems with reasonable efficiency.

Rasmussen distinguishes between three levels of behavior of an operator interacting with a dynamic system (Figure 2):
— skill-based behavior;
— rule-based behavior;
— knowledge-based behavior.
The acquisition of both sensory-motor and cognitive skills results from long and intensive training. Skills allow rapid operations such as stimulus-response actions. At the rule-based behavior level, people manipulate specific plans, rules (know-how) or procedures (such as checklists). This level is operative. The knowledge level includes
various mechanisms representing what we usually call intelligence.

Current expert systems are situated at the rule-based level. The main reason is that it is difficult (and often impossible) to elicit compiled expert knowledge from the skill-based level. Knowledge at the middle level is easier to formalize and elicit from expert explanations. As a professor would usually do, the expert must de-compile his/her knowledge to explain the "why" and "how" of his/her own behavior. Results from such a de-compilation can be easily implemented in a declarative fashion. Usually the IF-THEN format is used to represent rules. However, the result of the de-compilation does not necessarily capture the expert's knowledge and behavior at the skill-based level.
[Figure 2. Rasmussen's behavioral levels, linking sensors and effectors to the environment through three levels: skills (situation recognition over situation(s)/task(s)), rules (planning, tasks), and knowledge (identification, decision making, goal(s)).]
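To fix ideas about the rule-based level, here is a minimal sketch of IF-THEN rules and a tiny interpreter; the rule contents are invented for illustration and are not taken from any cited system.

```python
# A minimal IF-THEN production-rule sketch (hypothetical contents).
# Each rule: (condition over the state, action name).
rules = [
    (lambda s: s["pressure"] < 50, "open_backup_valve"),
    (lambda s: s["pressure"] >= 50 and s["trend"] == "decreasing", "monitor_trend"),
    (lambda s: s["trend"] == "stable", "continue_nominal_ops"),
]

def fire(state):
    """Return the actions of all rules whose IF-part matches the current state."""
    return [action for condition, action in rules if condition(state)]

print(fire({"pressure": 47, "trend": "decreasing"}))  # ['open_backup_valve']
```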
4.2 Knowledge acquisition and machine learning in dynamic systems

Rasmussen's model is used here to classify KA and machine learning (ML) methods that would be useful for knowledge acquisition in dynamic systems. From a strict KA point of view, our experience is that the intermediary (rule-based) level is the easiest to elicit. Researchers in ML usually distinguish three types of learning (Kodratoff, 1988):
— skill acquisition and speed-up learning;
— analogy and case-based learning;
— induction and empirical learning.
The first type is itself divided into two main categories: macro-operator construction (Korf, 1985) and explanation-based learning (DeJong, 1981). The second type concerns the generation of new chunks of knowledge from existing chunks (Gentner, 1989;
Hammond, 1989). The third type can be split into three groups of methods: supervised learning of concepts (Mitchell, 1982), similarity-based learning (Quinlan, 1986), discovery (Lenat, 1977; Langley et al., 1987; Falkenhainer, 1990) and nonsupervised clustering (Fisher, 1987). In the next three subsections, we review some previous work illustrating these three types of learning.

Skill acquisition to increase performance. Although interview methods have been reported to be very efficient (LaFrance, 1986), it is extremely difficult to elicit skills using such methods in dynamic systems. Protocol analysis allows us to access this type of knowledge by re-construction. Such knowledge is very difficult to derive from data acquired by observation without an already sufficient account of domain knowledge. Methods that facilitate the observation of the expert (or the user) at work allow the elicitation of situational patterns. We used this method during the HORSES [1] project at NASA Ames to elicit diagnostic knowledge. In all cases, a model is very useful for interpreting results from these methods. Situational knowledge can be constructed from analytical knowledge. This recompilation can be performed using speed-up learning methods. We have developed an algorithm that transforms analytical knowledge into situational knowledge for the SAOTS [2] project with CNES [3]. This work has been reported in (Boy & Delail, 1988).

Analogy and case-based learning. We worked with test pilots to build a first body of knowledge for use in the MESSAGE system [4]. Raw data verbalized by the pilots were very often small stories such as: "I was flying at 10 000 feet, the air traffic control gave me the clearance to prepare landing, my copilot...". Experts remember anecdotes. They tend to express their dynamic knowledge by telling stories. This is Roger Schank's point of view about knowledge expression by people in general. This type of knowledge has to be compared to knowledge that has already been acquired and eventually generalized. In our work, we used analogical methods manually (Boy, 1983). Analogy and case-based learning are essential knowledge elicitation methods in dynamic systems.

Induction and empirical learning. We developed an algorithm for dynamic empirical learning of indices in the Computer Integrated Documentation (CID) project at NASA. Context-sensitive indices are learned as CID is used. The main problem is that the more you learn, the more you generate knowledge that will be more difficult to retrieve. It is then necessary, for real-time reasons, to cluster the resulting knowledge base to improve its accessibility. Cobweb is certainly the best known and most used concept clustering algorithm (Fisher, 1987). We have proposed a similar approach to context clustering for CID (Boy, 1991b). Induction and empirical learning are essential when situated knowledge needs to be reduced by generalization. This is very important in dynamic or evolving systems.
[1] Human Orbital-Refueling-System Expert System.
[2] French acronym for "Système d'Assistance à l'Opérateur en Télémanipulation Spatiale" (Operator Assistant System in Space Telemanipulation).
[3] Centre National d'Etudes Spatiales (French Space Center).
[4] French acronym for "Modèle d'Equipage et des Sous-Systèmes Avion pour la Gestion des Equipements" (Crew and Aircraft Sub-Systems Model for Equipment Management).
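The following is a small hypothetical sketch of frequency-based, context-sensitive index learning in the spirit of the CID description above; the update rule and data layout are our illustration, not the published CID algorithm.

```python
from collections import defaultdict

# Hypothetical sketch: each (index term, context) pair accumulates evidence
# from user acceptances/rejections of retrieved referents.
weights = defaultdict(float)

def update(term: str, context: str, accepted: bool, step: float = 0.1):
    """Reinforce an index in the context where it helped; weaken it otherwise."""
    weights[(term, context)] += step if accepted else -step

def rank(term: str, contexts):
    """Rank candidate contexts for a term by learned relevance."""
    return sorted(contexts, key=lambda c: weights[(term, c)], reverse=True)

update("fuel pressure", "launch configuration", accepted=True)
update("fuel pressure", "cruise", accepted=False)
print(rank("fuel pressure", ["launch configuration", "cruise"]))
```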
5 The task-tool-user triangle

User-centered design (Billings, 1991) should take into account three main components, i.e., task(s), tool(s) and user(s), as well as the interactions between these components. The conventional engineering approach is centered on tool construction; tasks and users are generally considered implicitly. The task is usually represented using task analysis methods and/or by modeling the process that will be controlled by the tool to be designed. This modeling work involves conceptual or physical simulations that are typically performed using software programs. Results of such analyses give a set of requirements for the tool. The user is rarely taken into account explicitly at design time. A user model is incrementally built with respect to the current task, either by analogy with existing models or by specification of a syntax and a semantics. Task-tool interaction provides information requirements (from task to tool) and technological limitations (from tool to task). Task-user interaction can be analyzed through task analyses (from task to user) and user activity analyses (from user to task). Such analyses are incremental because the nature of the task can be modified by the tool. User-tool interaction is mediated through an interface that induces training requirements (from tool to user) and ergonomics modifications (from user to tool). This approach is called the task-tool-user triangle (Figure 3). The main problem is that the three components cannot be isolated; they are interdependent. The more dynamic a system is, the more this interdependency is difficult to tackle and understand. In the design of a dynamic system, such as an airplane, it is important to take human operators into account in the design loop (Boy, 1988).
[Figure 3. The task-tool-user triangle. Task-tool edge: information requirements and technological limitations; task-user edge: task analysis and activity analysis; tool-user edge: training and ergonomics.]
5.1 Situations and ready-to-use procedures

A way to handle this problem is to design an appropriate interface for the tool and operations procedures for the user who has to perform the task. If the user is able to perform the task with the tool without many procedures, this means that the interface is well designed. Conversely (and it is usually the case), if the user is stuck and needs help, then procedures are usually welcome. But this may mean that the interface has been badly designed. In general, user interfaces are designed with a body of procedures that goes along with them. In aeronautics for instance, procedures are designed to improve
safety and reliability (legal issues). The main reason is that in highly dynamic systems, people do not have time (especially when overloaded) to fully reconstruct procedures from scratch; thus they must have already prepared and tested procedures appropriate to the current situation. The acquisition of such procedures is difficult and never completed. Good enough solutions are the best we can do. Procedures are incrementally revised.

5.2 The procedures-interface duality

Each time a procedure is well understood and "fully" tested, it can be integrated into the interface as an agent that will perform it automatically. This integration assumes that the results generated by the application of the procedure are well identified and easily understandable by the user. There is an interesting approach that covers this practice of procedure integration into the interface, known under the name of "programming by demonstration" (Cypher et al., 1993). Once habits have been established over a fair amount of time in manipulating dynamic systems, traces of the procedures being followed can be stored and reused as interface agents. In dynamic systems, the difficulty comes from the fact that interface agent performance and results are extremely important issues. For instance, it is important to know how much time you need to wait for an answer from an interface agent.

In the IHMI paradigm, operational knowledge essentially includes procedures. However, these procedures have to be used in "the" appropriate situation. For this reason, a situation pattern should ideally be attached to each procedure. In practice, it is very difficult to acquire such situation patterns. Previous contributions showed that situation patterns can be constructed by direct experimentation on dynamic systems, in domains such as space fault diagnosis (Boy, 1987) and telerobotics (Boy & Caminel, 1989). Expertise in the control of dynamic systems lies not only in the way procedures are constructed; it lies also, and foremost, in the way procedures are executed at the right time and in the right format. In this sense, context-sensitivity is indexical. The main problem is to index procedures so as to retrieve them in the appropriate situation. We claim that interface agents should provide appropriate clues for users to retrieve the most relevant procedure.
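To illustrate the indexing problem just described, here is a minimal hypothetical sketch in which each procedure is stored with a situation pattern and retrieved by matching the current situation; the matching scheme and the procedure contents are our own simplification.

```python
# Hypothetical sketch: procedures indexed by situation patterns and retrieved
# by degree of match with the current situation.
procedures = [
    {"pattern": {"phase": "fuel transfer", "pressure_trend": "decreasing"},
     "procedure": "stop fuel transfer and isolate the leaking line"},
    {"pattern": {"phase": "fuel transfer", "pressure_trend": "stable"},
     "procedure": "continue transfer and monitor isothermal limit"},
]

def match_score(pattern: dict, situation: dict) -> int:
    """Count how many pattern features the current situation satisfies."""
    return sum(1 for k, v in pattern.items() if situation.get(k) == v)

def retrieve(situation: dict) -> str:
    """Return the procedure whose situation pattern best matches."""
    return max(procedures, key=lambda p: match_score(p["pattern"], situation))["procedure"]

print(retrieve({"phase": "fuel transfer", "pressure_trend": "decreasing"}))
```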
6 Logicism and Situatedness: How can they go together?

Logicism has dominated the artificial intelligence part of computer science for a long time. In this view, knowledge has to be declarative; the procedural part is implemented separately to run the declarative knowledge. The fact that libraries are full of books written in declarative form is not enough to justify this approach. It takes a long time for readers to understand very specialized books unless they are already trained in the corresponding domain. Furthermore, declarative repositories of knowledge about how to use dynamic systems are not useful if users cannot access the right information at the right time and in the right format.

6.1 The Situation Recognition and Analytical Reasoning Model
Using dynamic systems is a situated activity. Other researchers have already provided representations and logical formalisms of time. Figarol (1989) proposes a classification of time management by a human being into two cognitive processes: dynamic diagnosis and planning (or constant re-planning). We proposed a model for dynamic diagnosis after an experiment carried out at NASA in space fault management (i.e., the HORSES experiment reported by Boy, 1986): the situation recognition and analytical reasoning model (SRAR) (Figure 4). This model has been applied to various dynamic situations and problems.
[Figure 4. The situation recognition / analytical reasoning model: beginner chunks pair situation patterns s1...sn with analytical knowledge A1...An; expert chunks pair situation patterns S1...SN with analytical knowledge a1...aN. Beginners have small, static and crisp situation patterns associated with large analytical knowledge chunks. Experts have larger, dynamic and fuzzy situation patterns associated with small analytical knowledge chunks. The number of beginner chunks is much smaller than the number of expert chunks.]
When a situation is recognized, it generally suggests how to solve an associated problem. We assume, and have experimentally confirmed in specific tasks such as fault identification (Boy, 1986, 1987), telerobotics (Boy and Mathé, 1989) and information retrieval (Boy, 1991b), that people use chunks of knowledge. It seems reasonable to envisage that situation patterns (i.e., situational knowledge) are compiled because they are the result of training. We have shown (Boy, 1987), in a particular case of fault diagnosis on a physical system, that the situational knowledge of an expert results mainly from the compilation, over time, of the analytical knowledge he/she relied on as a beginner. This situational knowledge is the essence of expertise. "Decompilation", i.e., explanation of the intrinsic basic knowledge in each situation pattern, is a very difficult task, and is sometimes impossible. Such knowledge can be elicited only by an incremental observation process. Analytical knowledge can be decomposed into two types: procedures or know-how, and theoretical knowledge.

The chunks of knowledge are very different between beginners and experts. The situation patterns of beginners are simple, precise and static, e.g., "The pressure P1 is less than 50 psia". Subsequent analytical reasoning is generally extensive and time-consuming. When a beginner uses an operation manual to make a diagnosis, his behavior is based on the precompiled engineering logic he has previously learned. In contrast, when he tries to solve the problem directly, the approach is very declarative and uses the first principles of the domain. Beginner subjects were observed to develop, with practice, a personal procedural logic (operator logic), either from the precompiled engineering logic or from a direct problem-solving approach. This process is called knowledge compilation. Conversely, the situation patterns of experts are sophisticated, fuzzy and dynamic, e.g., "During fuel transfer, one of the fuel pressures is close to the isothermal limit and this pressure is decreasing". This situation pattern includes many implicit variables defined in another context, e.g., "during fuel transfer" means "in launch configuration, valves V1 and V2 closed, and V3, V4, V7 open". Also, "a fuel pressure" is a more general statement than "the pressure P1". The statement "isothermal limit" includes a dynamic mathematical model, i.e., at each instant, actual values of fuel pressure are compared fuzzily ("close to") to a time-varying limit [P_isoth = f(Quantity, Time)]. Moreover, experts take this situation pattern into account only if "the pressure is decreasing", which is another dynamic and fuzzy pattern. It is obvious that experts have transferred part of analytical reasoning into situation patterns. This part seems to be concerned with dynamic aspects.

Thus, with learning, dynamic models are introduced into situation patterns. It is also clear that experts detect broader sets of situations. First, experts seem to fuzzify and generalize their patterns. Second, they have been observed to build patterns more related to the task than to the functional logic of the system. Third, during the analytical phase, they disturb the system being controlled to get more familiar situation patterns, which are usually static: for example, in the ORS experiment, pilots were observed to stop fuel transfer after recognizing a critical situation.

The following generalizations can be drawn from the HORSES experiments. First, by analyzing the human-machine interactions in the simulated system, it was possible to design a display that presented more polysemic information to the expert (e.g., a monitor showing the relevant isothermal bands). Polysemic displays include several types of related information presented simultaneously and are readily understandable by experts because the presentation is derived from their situation patterns. This improved user and system performance. Second, the HORSES assistant achieved a balance in the sharing of autonomy. The original system designer did not anticipate the way that the operators would use the system, but letting them have indirect control over the assistant allowed them to utilize what they had learned to do well.

SRAR can be used to design interface agents that would help human operators to anticipate the evolution of dynamic systems. It is intended to provide help in finding the best compromise in the design of interfaces and procedures. This model can be compared to other work such as Amalberti's schemas (Amalberti, 1988). Abbott introduces the possibility of multi-hypothesis management in aircraft cockpits (Abbott, 1989).
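As a concrete rendering of such a fuzzy, dynamic situation pattern, the sketch below evaluates "close to the isothermal limit and decreasing" with simple membership functions; the thresholds and the limit function are invented for illustration, not taken from the HORSES experiment.

```python
# Hypothetical sketch of an expert-style fuzzy situation pattern:
# "a fuel pressure is close to the isothermal limit and decreasing".
# Thresholds and the limit function are invented.

def isothermal_limit(quantity: float, time: float) -> float:
    """Stand-in for the time-varying limit P_isoth = f(Quantity, Time)."""
    return 60.0 - 0.1 * time - 0.05 * quantity

def close_to(value: float, reference: float, scale: float = 5.0) -> float:
    """Fuzzy membership in [0, 1]: 1 when value == reference, 0 beyond scale."""
    return max(0.0, 1.0 - abs(value - reference) / scale)

def decreasing(samples: list) -> float:
    """Crude fuzzy 'decreasing' degree from the last two samples."""
    return 1.0 if samples[-1] < samples[-2] else 0.0

def pattern_degree(pressures: list, quantity: float, time: float) -> float:
    limit = isothermal_limit(quantity, time)
    # Conjunction as minimum, over the latest fuel pressure reading.
    return min(close_to(pressures[-1], limit), decreasing(pressures))

print(pattern_degree([55.0, 53.0], quantity=40.0, time=30.0))  # 0.6
```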
6.2 The Block Representation

The SRAR model stressed the need for the development of an appropriate representation of operation procedures. Some of them can be implemented as interface agents
when sufficient situational knowledge has been derived. Procedures can be represented as knowledge blocks (Boy, 1989; Mathé, 1990). Blocks have been used in the modeling of cognitive reactive control in space telemanipulation (Boy & Caminel, 1989; Mathé, 1990). Mathé developed an extended formalism for the block representation in her Doctoral thesis. The inference mechanism associated with the block formalism is independent of the content of the procedure base. A block includes: a name; a hierarchical level; a list of preconditions; a list of actions; a list of goals with their lists of associated blocks; and a list of abnormal conditions with their lists of associated blocks (Mathé & Kedar, 1992). A block is graphically represented as in Figure 5. Figure 6 shows a block as a society of other blocks (context hierarchical level).
[Figure 5. Graphical representation of a knowledge block, showing its context, preconditions, procedure, goal, and abnormal conditions.]
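The block structure enumerated above translates naturally into a record type. The following is a minimal sketch under that reading; the field names and the example contents are our own, not Mathé's actual formalism.

```python
from dataclasses import dataclass, field

# Minimal sketch of the knowledge block structure enumerated above.
# Field names are our own rendering, not Mathé's formalism.
@dataclass
class Block:
    name: str
    level: int                                            # hierarchical (context) level
    preconditions: list = field(default_factory=list)     # conditions over sensory inputs
    actions: list = field(default_factory=list)           # the procedure body
    goals: list = field(default_factory=list)             # (goal, associated blocks) pairs
    abnormal_conditions: list = field(default_factory=list)  # (condition, recovery blocks) pairs

stop_transfer = Block(
    name="stop fuel transfer",
    level=1,
    preconditions=["fuel transfer in progress", "pressure close to isothermal limit"],
    actions=["close valves V3, V4, V7"],
    goals=[("transfer stopped", [])],
    abnormal_conditions=[("pressure still decreasing", [])],
)
```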
The behavior of a society of blocks is based on the inference mechanism associated with the block representation. First, sensory inputs from the environment are matched against the preconditions of the blocks selected in the focus of attention (these are usually called current expectations). Second, the most critical of the best matched expectations is selected. Human selection has been observed to be frequency-based (Reason, 1986); Reason talks about frequency gambling. Depending on the type of the selected expectation, the corresponding block(s) is (are) ready for execution. If there is a contextual hierarchy of blocks, the actions of the terminal blocks are executed according to a given strategy (usually a sequence). A sketch of this selection cycle is given after Figure 6.

The block representation has been successfully used in two very different applications: telerobotics assistance (Boy & Caminel, 1989; Mathé, 1990; Mathé & Kedar, 1992), and computer integrated documentation (Boy, 1989, 1991). In the former, blocks were used to construct procedures to help telemanipulation operators. In the latter, blocks have been implemented as contextual links to acquire users' preferences as they actually use the CID system.
Figure 6. A block as a society of other blocks.
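To make the selection cycle described above concrete, here is a minimal sketch of one inference step; the criticality scores stand in, very crudely, for the combination of criticality and frequency gambling described in the text, and all contents are invented.

```python
# Hypothetical sketch of one cycle of the block inference mechanism:
# match preconditions, then pick the most critical best-matched expectation.

def matched(block, sensory_inputs) -> bool:
    """A block's expectation matches when all its preconditions hold."""
    return all(p in sensory_inputs for p in block["preconditions"])

def select(expectations, sensory_inputs):
    """Among matched expectations, select the most critical one."""
    candidates = [b for b in expectations if matched(b, sensory_inputs)]
    return max(candidates, key=lambda b: b["criticality"], default=None)

focus_of_attention = [
    {"name": "monitor transfer", "preconditions": ["fuel transfer in progress"],
     "criticality": 1, "actions": ["watch isothermal bands"]},
    {"name": "stop fuel transfer",
     "preconditions": ["fuel transfer in progress", "pressure decreasing"],
     "criticality": 3, "actions": ["close valves V3, V4, V7"]},
]

inputs = {"fuel transfer in progress", "pressure decreasing"}
chosen = select(focus_of_attention, inputs)
print(chosen["name"], "->", chosen["actions"])  # stop fuel transfer
```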
7 Conclusion and Perspectives

This paper reports some of our work in the area of modeling and KA in dynamic systems. Researchers such as McDermott (1982) proposed the notion of linearity for past events, and the notion of branching chronicles for possible future events. Dynamics deals with several notions such as causality, sequentiality, possible futures, expectation, anticipation and intervals. Our contribution is not based on an axiomatic approach to the dynamics concept. It is based on experience in the management of dynamic systems such as aerospace systems. We have tried to elicit a first set of features that can be reused in future KA work in such domains.

Dynamics is perceived differently by different people according to their skills and expertise. This has many implications for the design of knowledge-based assistants. In particular, dynamic knowledge would be difficult to acquire from experts, and situation patterns would differ from one expert to another. However, when one understands how these patterns were built, this understanding is a major input to training courses.

Designers of dynamic systems and associated assistant systems, and cognitive scientists, have already started to create workable elicitation techniques ranging from interviews to field observation. Our contribution is in the design of an appropriate mediating representation that helps these people to better acquire dynamic knowledge. One of the main issues is certainly the representation of context. Context deals with time and
hypothetical features. It is often associated with the notion of point of view. Context includes the concept of persistence, i.e., when a parameter stays constant for a period of time it becomes part of the context. A better grasp of the concept of context will improve the development of knowledge acquisition in dynamic systems, because it will allow the simplification of the huge amounts of knowledge that it would otherwise be necessary to acquire. In any case, understanding how experts' patterns were built can provide a major input to training.

There is a need for further development of the ontology of dynamics that we initiated in this paper. Some work should be devoted to testing such an ontology in real-world applications. The real world is dynamic by nature. Its complexity is perceived differently according to the observation tools that we have. The block representation has been very successful to date in representing real-world procedures. The use of this representation in a broader range of applications will certainly contribute to eliciting better concepts about dynamic systems. This would be extremely useful in future designs and operations.
Acknowledgements

Many ideas described here benefited from discussions with Philippa Gander, Nathalie Mathé, Erik Hollnagel, Marc Pelegrin, Jeff Bradshaw and Alain Rappaport. I would like to thank Nathalie Nanard and Helen Wilson for their comments on an early draft of this paper.
References

Abbott, K.H. (1989). Human-centered automation and AI: Ideas, insights, and issues from the Intelligent Cockpit Aids research effort. Proceedings of the IJCAI-89 Workshop Report on Integrated Human-Machine Intelligence in Aerospace Systems, Detroit, Michigan, U.S.A., August.
Allen, J.F. (1985). Maintaining knowledge about temporal intervals. In Brachman, R.J. & Levesque, H.J. (Eds.), Readings in Knowledge Representation. Morgan Kaufmann Publishers.
Amalberti, R. (1988). Savoir-faire de l'opérateur: théorie et pratique [Operator know-how: theory and practice]. XXIVème Congrès de la SELF.
Billings, C.E. (1991). Human-centered aircraft automation philosophy. Technical Memorandum 103885, NASA Ames Research Center, Moffett Field, CA.
Boy, G.A. (1987). Operator assistant systems. International Journal of Man-Machine Studies, 27, pp. 541-554.
Boy, G.A. (1989). The block representation in knowledge acquisition for computer integrated documentation. Proceedings of the Fourth AAAI-Sponsored Knowledge Acquisition for Knowledge-Based Systems Workshop, Banff, Canada, October 1-6.
Boy, G.A. (1991a). Intelligent Assistant Systems. Academic Press, London, U.K.
Boy, G.A. (1991b). Computer Integrated Documentation. NASA Technical Memorandum, NASA Ames Research Center, Moffett Field, CA.
Boy, G.A. & Caminel, T. (1989). Situation pattern acquisition improves the control of complex dynamic systems. Third European Workshop on Knowledge Acquisition for Knowledge-Based Systems, Paris, July.
Boy, G.A. & Delail, M. (1988). Knowledge acquisition by specialization-structuring: A space telemanipulation application. AAAI-88 Workshop on Integration of Knowledge Acquisition and Performance Systems, St Paul, Minnesota, USA.
Boy, G.A. & Gruber, T. (1990). Intelligent assistant systems: Support for integrated human-machine systems. Proceedings of the AAAI Spring Symposium on Knowledge-Based Human Computer Communication, Stanford, March 27-29.
Boy, G.A. & Nuss, N. (1988). Knowledge acquisition by observation: Application to intelligent tutoring systems. Proceedings of the Second European Workshop on Knowledge Acquisition for Knowledge-Based Systems, Bonn, Germany.
Cypher, A. (1993). Watch What I Do: Programming by Demonstration. The MIT Press, Cambridge, MA.
DeJong, G. (1981). Generalization based on explanation. Proceedings of IJCAI, pp. 67-69.
Drummond, M. (1989). Situated control rules. Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning, Morgan Kaufmann, Toronto, May.
Falkenhainer, B.C. (1990). A unified approach to explanation and theory formation. In Shrager, J. & Langley, P. (Eds.), Computational Models of Scientific Discovery and Theory Formation. Morgan Kaufmann, San Mateo.
Figarol, S. (1989). Airline Pilot's Anticipatory Knowledge. Masters thesis, Université Toulouse Le Mirail, France (in French).
Fisher, D. (1987). Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2, pp. 139-172.
Gentner, D. (1989). Mechanisms of analogical learning. In Vosniadou, S. & Ortony, A. (Eds.), Similarity and Analogical Reasoning. Cambridge University Press, London.
Georgeff, M.P. & Ingrand, F.F. (1989). Decision making in an embedded reasoning system. Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, Detroit, MI, pp. 972-978.
Hollnagel, E. (1993). Requirements for dynamic modelling of man-machine interaction. CRI paper, Nuclear Engineering and Design, Denmark.
Hutchins, E. (1991). How a cockpit remembers its speed. Technical report, University of California at San Diego, Distributed Cognition Laboratory.
Korf, R.E. (1985). Learning to Solve Problems by Searching for Macro-Operators. Research Notes in Artificial Intelligence, Pitman, Boston.
LaFrance, M. (1989). The quality of expertise: Implications of expert-novice differences for knowledge acquisition. SIGART Newsletter, April, pp. 8-14.
Langley, P., Simon, H.A. & Bradshaw, G.L. (1987). Heuristics for empirical discovery. In Bolc, L. (Ed.), Computational Models of Learning. Springer-Verlag, Berlin.
Laurel, B. (1991). Computers as Theatre: A Dramatic Theory of Interactive Experience. Addison-Wesley, Reading, Massachusetts.
Lenat, D.B. (1977). The ubiquity of discovery. Artificial Intelligence, 9, pp. 257-285.
Leplat, J. (1985). The elicitation of expert knowledge. NATO Workshop on Intelligent Decision Support in Process Environments, Rome, Italy, September.
Mathé, N. (1990). Intelligent Assistance for Process Control: Application to Space Teleoperation. PhD Dissertation, ENSAE, Toulouse, France.
Mathé, N. & Kedar, S. (1992). Increasingly automated procedure acquisition in dynamic systems. Proceedings of the Knowledge Acquisition Workshop for
Knowledge-Based Systems, Banff, Canada, October. Also as NASA Technical Report FIA-92-23, June.
McDermott, D. (1982). A temporal logic for reasoning about processes and plans. Cognitive Science, 6, pp. 101-155.
Mitchell, T.M. (1982). Generalization as search. Artificial Intelligence, 18, pp. 203-226.
Mizoguchi, R., Tijerino, Y. & Ikeda, M. (1992). Task ontology and its use in a task analysis interview system. Proceedings of the Second Japanese Knowledge Acquisition for Knowledge-Based Systems Workshop, JKAW'92, Kobe, Japan.
Quinlan, J.R. (1986). Induction of decision trees. Machine Learning, 1, 1.
Rappaport, A. (1993). Invariants, context and expertise in the knowledge milieu. Third International Workshop on Human and Machine Cognition, Seaside, Florida, May 13-15.
Rasmussen, J. (1983). Skills, rules, and knowledge: Signals, signs, and symbols, and other distinctions in human performance models. IEEE Transactions on Systems, Man, and Cybernetics, 13, pp. 257-266.
Reason, J. (1986). Decision aids: Prostheses or tools? In Hollnagel, E., Mancini, G. & Woods, D.D. (Eds.), Cognitive Engineering in Complex Worlds, pp. 7-14. Academic Press, London.
Rogoff, B. (1984). Introduction: Thinking and learning in social context. In Rogoff, B. & Lave, J. (Eds.), Everyday Cognition: Its Development in Social Context. Harvard University Press, Cambridge, MA.
Shalin, V. & Boy, G.A. (1989). Integrated human-machine intelligence. IJCAI'89, Detroit, MI.
Shalin, V., Geddes, N., Bertram, D., Szczepkowski & DuBois, D. (1993). Expertise in dynamic, physical task domains. Third International Workshop on Human and Machine Cognition, Seaside, Florida, May 13-15.
Sheridan, T.B. (1984). Supervisory control of remote manipulators, vehicles and dynamic processes: Experiment in command and display aiding. Advances in Man-Machine Systems Research, J.A.I. Press, Vol. 1, pp. 49-137.
Suchman, L.A. (1987). Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press.
Tognazzini, B. (1993). Principles, techniques, and ethics of stage magic and their potential application to human interface design. Proceedings of INTERCHI'93, ACM Press, New York. Conference held in Amsterdam, The Netherlands.
Wielinga, B., Van de Velde, W., Schreiber, G. & Akkermans, H. (1992). The CommonKADS framework for knowledge modelling. Proceedings of the Seventh Knowledge Acquisition for Knowledge-Based Systems AAAI Workshop, Banff, Canada, October.
Woods, D.D. & Hollnagel, E. (1986). Mapping cognitive demands and activities in complex problem solving worlds. Proceedings of the Knowledge Acquisition for Knowledge-Based Systems AAAI Workshop, Banff, Canada, November.