The Use of Executable Cognitive Models in Simulation-based Intelligent Embedded Training

Wayne Zachary, Joan Ryder, James Hicinbothom, and Kevin Bracken
CHI Systems, Inc.
Lower Gwynedd, PA

This paper defines a new role for expert models in intelligent embedded training -- guiding practice. The integration of problem-based practice with focused, automated instruction has long proven elusive in training systems for complex real-world domains. The training strategy of 'guided practice' offers a way to merge the approaches of traditional simulation-based practice and intelligent tutoring's knowledge tracing. The performance of the trainee is dynamically assessed against scenario-specific expectations and performance standards, which are generated during the simulation by embedded models of expert operators. This research developed an executable cognitive model capable of solving realistic simulation scenarios in an expert-level manner, identified and implemented the modifications and extensions to this baseline model needed to generate dynamic and adaptive expectations of future trainee actions, and developed means of providing cognitive state information for use in (separate) diagnostic processes, without resorting to full-scale knowledge tracing methods.

INTRODUCTION

The integration of problem-based practice with focused, automated instruction has long proven elusive in training systems. Traditional large-scale simulation trainers provide little or no automated diagnosis and instruction, relying instead on human instructors to review the simulation results and provide feedback to the trainees. At the other extreme, intelligent tutors such as Anderson's LISP tutor (Anderson et al., 1990) provide focused automated instruction by applying highly detailed models of both domain expertise and student learning to trace the student's use and acquisition of each individual knowledge element in the domain. This microscopic knowledge tracing approach seems infeasible in highly complex work domains such as command and control.

The training strategy of 'guided practice' offers a way to merge the approaches of traditional simulation-based practice and intelligent tutoring's knowledge tracing. In guided practice, trainees work realistic problems using high-fidelity simulations integrated with the operational workstations (i.e., embedded training simulations). The performance of the trainee is dynamically assessed against scenario-specific expectations and performance standards, which are generated during the simulation by embedded models of expert operators. Cognitive and behavioral diagnoses are dynamically performed based on the differences between trainee behavior and model-generated criteria, and are used to provide feedback to the trainee on focused areas of performance. The instructional feedback is focused so as to guide the trainee's practice in the current and future scenarios.

The Advanced Embedded Training System (AETS) is an intelligent embedded training system that has been designed and built to demonstrate the feasibility of conducting intelligent problem-based training using the guided practice strategy. One major goal of AETS is to create a capability to monitor the behavior of operators in Naval command and control environments (specifically the Aegis Combat Information Center, or CIC) during embedded training exercises and, using embedded generative or predictive models of operator performance, to diagnose operator performance. This diagnosis, conducted in real time, is used to provide on-line feedback and coaching to operators, as well as real-time assessment and diagnosis to shipboard instructor/trainers for improved post-exercise debriefs and for the selection or creation of additional training scenarios. Demonstration of the modeling and diagnostic process has focused initially on one operator in the anti-air warfare (AAW) team -- the AAW coordinator (AAWC). This paper describes the research undertaken to design and implement an executable cognitive model and the associated cognitive diagnostic technology needed to support the guided practice paradigm, both in AETS and in other applications.

METHOD

There were three parts to the methodology used in this research:

• developing an executable cognitive model capable of solving realistic simulation scenarios in an expert-level manner;

• identifying and implementing the modifications and extensions to this baseline model that would be needed to generate dynamic and adaptive expectations of future trainee actions; and

• developing means of providing cognitive state information for use in (separate) diagnostic processes, without resorting to full-scale knowledge tracing methods.

Developing the Baseline Executable Model

The AAWC model was built beginning with a previous high-level analytical AAWC model constructed as part of the Navy's Tactical Decision Making Under Stress (TADMUS) program (Zachary et al., 1993; Zachary, Ryder, & Hicinbothom, in press). The existing model had been developed using the COGnition as a NEtwork of Tasks (COGNET) framework (see Zachary et al., 1992), an integrated cognitive/behavioral modeling framework that represents procedural and declarative knowledge as well as observable actions. The executable cognitive model needed for AETS was developed from this existing pencil-and-paper model in two steps. First, the model was extended to provide full keystroke-level representations of all actions and correspondingly fine-grained representations of all perceptual stimuli. The data for this model development included observation of team training sessions, recording of all interactions with the system, videotapes of the AAWC displays, and audio recordings of the radio nets the AAWC uses. Extensive review and analysis of the recorded data was supplemented by sessions with subject matter experts (SMEs), who generated think-aloud protocols while viewing taped performances, provided verbal 'walkthroughs' of additional problems, and answered detailed questions. Second, the model was translated into a more detailed executable representation using a software tool developed for that purpose. This tool, called BATON (Zachary, Le Mentec, & Ryder, 1996), contains a software engine that executes cognitive models expressed in a variant of the COGNET syntax to generate predicted operator actions in a given scenario.

Adapting the Model to the Guided Practice Application

In a standard intelligent tutoring system, the executable model would be used directly to specify the knowledge needed to solve any problem, thus creating a template that diagnostic algorithms would use to assess the student's knowledge acquisition and application. The role of the embedded model in AETS, however, is more subtle. Because command and control problems are long and complex, the model had to be made able to adapt to the actions of the trainee, acting like an over-the-shoulder observer (a role commonly played by human instructors in the existing team-training simulator). The baseline operator model had to be modified to adapt its problem-solving strategy to the trainee's actions, both:

• generating expectations of what the trainee should do at any given point in a simulation, and

• adapting its reasoning to any unexpected (and presumably suboptimal) action that the trainee did take, so that it could predict what the operator should do after that unexpected action.
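The two adaptations above can be sketched as follows. This is a minimal, hypothetical Python illustration, not the BATON implementation; the class, its method names, and the action strings are all invented for this sketch:

```python
class ExpectationModel:
    """Sketch of an expert model that posts expectations instead of acting,
    then reconciles them against what the trainee actually does."""

    def __init__(self):
        self.expected = []    # actions the model itself would have taken
        self.suspended = []   # reasoning threads awaiting those actions

    def intend(self, action, thread):
        # Decoupled intention: record the expectation and suspend the
        # reasoning thread that produced it, rather than acting directly.
        self.expected.append(action)
        self.suspended.append(thread)

    def observe(self, trainee_action):
        if trainee_action in self.expected:
            # Expected action: resume the matching suspended thread.
            i = self.expected.index(trainee_action)
            self.expected.pop(i)
            return "resume " + self.suspended.pop(i)
        # Unexpected (presumably suboptimal) action: drop the suspended
        # reasoning and re-plan from the new situation.
        self.expected.clear()
        self.suspended.clear()
        return "replan"

model = ExpectationModel()
model.intend("issue Level 1 warning to track 7003", thread="warn_track_7003")
outcome = model.observe("issue Level 1 warning to track 7003")
# outcome == "resume warn_track_7003"
```

An unexpected trainee action would instead return "replan", corresponding to the model abandoning its suspended reasoning and re-evaluating the situation.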
Providing Cognitive State Data for Trainee Diagnosis

In the overall AETS architecture, the embedded cognitive model needed to observe trainee performance and generate information that could be used in diagnosing that performance in both behavioral and cognitive terms. This required two kinds of information. The first was a specification of desired trainee behavior given the momentary state of the problem scenario, as discussed above. These data are used to drive a behavioral diagnosis of the trainee, based strictly on a comparison of observed behavior to desired behavior. The second kind of information was required to provide a cognitive diagnosis of the trainee when the trainee's behavior diverged from the recommended path. This second type of information was a trace of the knowledge elements involved in producing each observable behavior desired of the trainee, beginning with the relevant perceptual stimuli. (The diagnostic processes are described in another paper within this symposium.) The main problem in creating this knowledge trace was computational: each perceptual stimulus received by the model could lead to an unknown number of possible future behaviors, and the cognitive architecture had to somehow build and maintain each

of these possible threads in a manageable real-time manner, and in a way that provided a valid trace of the cognitive processes involved. To solve this problem, multiple possible means of recording and tracing the reasoning threads were developed and assessed, both for theoretical reasonableness (in human information-processing terms) and for computational efficiency.

RESULTS

Executable Cognitive Model Development

[Figure 1. COGNET Information Processing Architecture: the ENVIRONMENT (external world or simulation) communicates with a SHELL containing action effectors and perceptual demons, which in turn surrounds the BATON reasoning kernel containing task execution, the blackboard, attention management, and cognitive processing.]

Figure 1 shows the architecture of an executable COGNET model. There are three separate processes, organized into two levels of software: the 'reasoning kernel', which simulates the cognitive processes, and the 'shell', which simulates and physically manages the sensory and motor interactions with the external environment. The perceptual process, which internalizes sensed information, is instantiated as a set of perceptual demons built into the model (Figure 2); these demons execute as perceptual cues from the environment (i.e., model inputs) are received. When a demon 'fires', it posts information on the blackboard, which is equivalent to the simulated operator's mental model of the current problem (see Figure 3). In parallel with the perceptual process, a cognitive process applies the procedural knowledge in the cognitive tasks (Figure 4) to the (blackboard-based) mental model of the problem. In doing so, the simulated operator develops and executes plans for solving the problem and/or pieces of the problem as the operator becomes aware of them. This awareness is the result of specific perceptual cues having been perceived by demons and specific information about each cue having been posted on the mental model blackboard. The cognitive operations produce changes to the mental model, and also activate one or more action processes.
These processes generate specific operator acts in the external world, through the (also parallel) operation of the motor processing system.
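The flow just described can be reduced to a short sketch. This is an illustrative Python simplification, not the BATON engine; all class names and the example demon and task definitions are invented here:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Blackboard:
    """Simulated operator's mental model: hypotheses posted by demons and tasks."""
    hypotheses: list = field(default_factory=list)

    def post(self, hypothesis: dict) -> None:
        self.hypotheses.append(hypothesis)

@dataclass
class PerceptualDemon:
    """Fires when an external cue matches; internalizes it onto the blackboard."""
    name: str
    matches: Callable[[dict], bool]

    def try_fire(self, cue: dict, bb: Blackboard) -> bool:
        if self.matches(cue):
            bb.post({"demon": self.name, **cue})
            return True
        return False

@dataclass
class CognitiveTask:
    """Procedural knowledge: a trigger condition over the blackboard plus a priority."""
    name: str
    priority: int
    triggered: Callable[[Blackboard], bool]

def attention_manager(tasks, bb, current=None):
    """Activate all tasks whose trigger conditions hold; give attention to the
    highest-priority active task (the current task competes as one of them)."""
    active = [t for t in tasks if t.triggered(bb)]
    if current is not None and current not in active:
        active.append(current)
    return max(active, key=lambda t: t.priority) if active else None

# One cycle: a cue arrives, a demon fires, and attention is (re)allocated.
bb = Blackboard()
demons = [PerceptualDemon("New_Track", lambda c: c.get("event") == "new_track")]
tasks = [
    CognitiveTask("Attend_to_New_Tracks", 5,
                  lambda b: any(h["demon"] == "New_Track" for h in b.hypotheses)),
    CognitiveTask("Monitor_Existing_Air_Tracks", 2, lambda b: True),
]
for d in demons:
    d.try_fire({"event": "new_track", "bearing": 270, "range": 80}, bb)
focus = attention_manager(tasks, bb)
# focus.name == "Attend_to_New_Tracks"
```

The sketch shows only the control flow; in the actual architecture the demons, tasks, and blackboard contents are those of Figures 2 through 4.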

Perceptual Cue → Resulting Demon Name: Information Internalized

- new track displayed on basic display unit → New_Track: platform, ntds, name, bearing, range, altitude, speed, x_pos, y_pos, ...
- close control data displayed after track hooked → Individual_Track_Update: platform, ntds, name, bearing, range, altitude, speed, x_pos, y_pos, ...
- every 6 seconds (assumes frequent scanning) → Tactical_Picture_Update_ALL_TRACKS: platform, ntds, bearing, range, altitude, speed, course, ...
- Electronic Warfare (EW) report given → EW_Rpt: platform, ntds, bearing, status, sensor
- Level 1 audio warning message from IDS operator to aircraft (a/c) → IDS_Level1Warn_Rpt: platform, ntds, bearing, range, course, ...
- Level 2 audio warning message from IDS to a/c → IDS_Level2Warn_Rpt: platform, ntds, bearing, range, course, ...
- Level 3 audio warning message from IDS to a/c → IDS_Level3Warn_Rpt: platform, ntds, bearing, range, ...
- voice report from IDS operator on modes 1, 2, & 3 IFF challenge → IDS_IFF_Rpt: platform, ntds, m1, m2, m3
- voice report from AIC operator about combat air patrol → AIC_VID_Rpt: platform, ntds, status, iffmsg, class
- change in display settings (center, scale, or range rings) → Display_Settings_Change: center, range, ring_status
- change in size of range rings → Range_Rings_Change: circ
- every 6 seconds (assumes frequent scanning) → Own-ship_Position_or_Status_Update: name, speed, course, x_pos, y_pos, ...

Figure 2. Perceptual Demons in Executable Model
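The cue-to-demon mapping of Figure 2 amounts to a dispatch table. A hedged sketch follows: the cue-type keys and example values are invented, while the demon and field names come from the figure (field lists abbreviated):

```python
# Dispatch table: which demon handles a cue type, and which fields it
# internalizes onto the blackboard.
DEMON_TABLE = {
    "new_track":        ("New_Track",   ["platform", "ntds", "name", "bearing",
                                         "range", "altitude", "speed"]),
    "ew_report":        ("EW_Rpt",      ["platform", "ntds", "bearing",
                                         "status", "sensor"]),
    "iff_voice_report": ("IDS_IFF_Rpt", ["platform", "ntds", "m1", "m2", "m3"]),
}

def internalize(cue):
    """Turn a raw external cue into a blackboard hypothesis that holds only
    the fields the matching demon is defined to internalize."""
    demon, fields = DEMON_TABLE[cue["type"]]
    return {"demon": demon, **{f: cue[f] for f in fields if f in cue}}

hyp = internalize({"type": "ew_report", "platform": "unknown", "ntds": 7003,
                   "bearing": 45, "status": "active", "sensor": "unknown"})
# hyp["demon"] == "EW_Rpt"; the bookkeeping "type" key is not internalized
```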

Geopolitical_Picture: Political_Situation, Nation_States, Provinces_or_Regions, Identifiable_Groups, Individuals, Boundaries_of_Interest, Areas_of_Interest, Weather_Reports/Forecasts, Topographic/Geographic_Features, Airlines, Air_Traffic_Control_Measures, Airfields, Ports, Cities/Towns, Historic/Religious/Cultural_Sites, Military_Installations

Tactical_Picture: Conflicts, Missions_and_Objectives, Threat_and_Movement_Axes, Groups_and_Patterns, DCA_Stations, Engagements_Underway, Warnings_Issued, Track_Histories, ESM_Reports, IFF_Reports, Other_Track_Data_Reports, Secure_Comms_with_Track, Clear_Voice_Comms_with_Track, Special_Points_on_BDU, Ownship_Launched_Missiles, Console_Display_Settings

Attended_Tracks: Killed_Tracks, Engaged_Tracks, Potentially_Threatening_Tracks, Action_Tracks, Interest_Tracks, Pending_Tracks, Dropped_Tracks

Expectations: Expected_Conflicts, Expected_Threats, Expected_Engagements, Expected_Intercepts, Expected_Tracks, Expected_Indicators

Order_of_Battle: Enemy_Order_of_Battle

Systems_Characteristics: Missiles, Bombs, Guns/Projectiles, Launch_Platforms, Fire_Control_Systems, Sensor_Systems, Countermeasures

Plans: Posture, Strategy, Planned_Coordination, Assignments, Preplanned_Responses, Plans_for_Engagements, Plans_for_Illuminations, Plans_for_Covering_Tracks, Plans_for_Escorting_Tracks, Plans_for_Intercepting_Tracks, Plans_for_Warning_Tracks, Plans_for_Tracking_and_Reporting, Plans_for_Resetting_DCA_Station

Assets: Ownship, Sensor_Systems, Fire_Control_Systems, Weapon_Systems, Countermeasures, Other_Controlled_Assets, Communication_Nets, Computer_Systems

Rules_of_Engagement

Air_Team_Coordination: Roles, Communications, Communications_Networks

Actions_Performed_by_Model: Hook/Unhook_Objects, Action_Sequences_Begun, Actions_Performed

Figure 3. AAWC Mental Model Blackboard
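The panel/level organization of Figure 3 can be modeled as nested namespaces. A minimal sketch, with an invented API and only a few of the figure's panels and levels shown:

```python
from collections import defaultdict

class PanelBlackboard:
    """Sketch of Figure 3's organization: hypotheses are filed under
    (panel, level) pairs rather than in one flat pool."""

    def __init__(self, structure):
        self.structure = structure        # panel name -> list of level names
        self.store = defaultdict(list)    # (panel, level) -> hypotheses

    def post(self, panel, level, hypothesis):
        if level not in self.structure.get(panel, ()):
            raise KeyError(f"no level {level!r} on panel {panel!r}")
        self.store[(panel, level)].append(hypothesis)

    def read(self, panel, level):
        return self.store[(panel, level)]

bb = PanelBlackboard({
    "Attended_Tracks": ["Pending_Tracks", "Action_Tracks", "Interest_Tracks"],
    "Expectations":    ["Expected_Threats", "Expected_Engagements"],
})
bb.post("Attended_Tracks", "Pending_Tracks", {"ntds": 7003, "bearing": 45})
pending = bb.read("Attended_Tracks", "Pending_Tracks")
```

Filing hypotheses by panel and level is what lets tasks write trigger conditions over a specific region of the mental model rather than scanning every hypothesis.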

Protect_Ownship
Engaging_Tracks
Illuminating_Tracks
Covering_Tracks
Escorting_Tracks
Intercepting_Tracks
Warning_Tracks
Track_and_Report
Attend_to_New_Tracks
Monitor_Existing_Air_Tracks
Attend_to_New_ESM_Report
Attend_to_New_Report_of_Warning_Issued
Attend_to_New_IFF_Report
GTASP_AIC_Intercept_with_DCA
Cover_Track
Engage_track
Initiate_engagement
Kill_eval
Engagement_expectation_monitor
Correlate_missile_to_expectation_of_track
Monitor_tracks_for_adherence_to_warnings

Figure 4. AAWC Model Cognitive Tasks
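Internally, each task in Figure 4 is a goal-subgoal-operator hierarchy. The following sketch shows how such a hierarchy might be walked; the structure, conditions, and action strings are invented for illustration, and only the task and subgoal names are taken from the figure:

```python
# A node is (name, condition, children); a callable in the children slot is a
# leaf operator that produces an observable action.
def execute(node, context, performed):
    """Depth-first walk: a goal's children run only when its condition holds
    in the current context; leaf operators append actions to `performed`."""
    name, condition, children = node
    if not condition(context):
        return
    if callable(children):                  # leaf operator: take an action
        performed.append(children(context))
    else:                                   # goal: recurse into ordered subgoals
        for child in children:
            execute(child, context, performed)

engage_track = (
    "Engage_track", lambda c: c["threat"],
    [
        ("Initiate_engagement", lambda c: not c["engaged"],
         lambda c: f"assign missile to track {c['ntds']}"),
        ("Kill_eval", lambda c: c["engaged"],
         lambda c: f"evaluate kill on track {c['ntds']}"),
    ],
)

acts = []
execute(engage_track, {"threat": True, "engaged": False, "ntds": 7003}, acts)
# acts == ["assign missile to track 7003"]
```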

Within BATON, the cognitive process includes a task execution process and an attention management process. One task always has the focus of attention, with the task execution process executing the goal-subgoal-operator hierarchy of that task as appropriate for the situation context. In parallel, the attention manager constantly monitors the blackboard contents to determine which tasks' trigger conditions are satisfied, and 'activates' those that are. Whenever more than one task is active, attention is given to the active task with the highest priority (the current task is considered one of the active tasks). The attention management process thus effectively handles attention conflict resolution.

The model was tested and validated using an existing training simulator (Hodge et al., 1995) as the external environment, so that scenario events would stimulate the model and model actions would affect the environment, which in turn would stimulate the model in other ways. Model behavior was then evaluated by independent experts and refined until it generated realistic expert behavior, demonstrating face and construct validity. In the paper presentation, specific threads of reasoning through this model will be discussed.

Adapting the Model to the Guided Practice Application

[Figure 5. Example Hypothetical Scenario, Display/Control and Operator Model: the AAWC model, together with models of the TAO and IDS operators, exchanges voice reports and, through the shell, receives scenario, system, and display events from the workstation displays and sends its actions to the workstation controls.]

Figure 5 shows how the model executes when performing the task by itself: it receives perceptual cues from the workstation (displays) and from other operators (encoded voice messages), and takes actions by manipulating controls (digitally) at the workstation. In AETS, however, this last step does not exist; the human trainee, not the model, is in control of the workstation. The model may decide that a specific action is appropriate at a certain time, but it cannot implement that action. Instead, it must note its expectation of that action, adapt to whatever action (if any) the trainee performs, and adjust its internal mental model accordingly. This capability was created in the executable model by carefully decoupling, in the various cognitive task procedures, the intention to take an action from the actual implementation of that intention. Thus, the model gets to the point of intending that an action be taken, and then merely posts that intention on its blackboard and suspends that thread of reasoning.

Each action taken at the workstation has either an immediate effect (e.g., display of new information on the screen) or some downstream effect (e.g., a change to some external resource such as an aircraft). Each such effect enters the model's awareness through a perceptual demon. In the original model, a demon could be processed directly, because the model knows what actions it has taken. In the modified form, however, another layer of interpretive processing was required: the model had to determine whether the perceived incoming information was consistent with some action the model expected to be taken. If so, the model assumes the operator took the intended action, and resumes the suspended thread of reasoning. If not, the model adapts its mental model to the unexpected action, terminates the suspended reasoning thread, and tries to determine the most appropriate thing to do in the new situation.

Providing Cognitive State Data for Trainee Diagnosis

The modifications described above resulted in a model that could not only solve the problems as an expert, but could also adapt its reasoning to determine what an expert would do, given any action taken by the trainee that was unexpected and/or suboptimal. As a final step, the model had to be able to report both the desired actions (at any point in time) and the cognitive processing that led to each desired action to a separate diagnostic system, as shown in Figure 6. The reporting of desired actions was straightforward: when an action was recommended, it was simply reported to the diagnostic system rather than applied to the workstation. The system also had to be modified to trace the internal reasoning that led to each recommended action, beginning with any perceptual cues that uniquely led to that action (as opposed to any other). This trace process was built to follow the path through which information proceeds from perception to action. Initially, a perceptual demon internalizes an external cue as an information object (technically termed a hypothesis) on the cognitive model's blackboard. This hypothesis is then processed and transformed by the various cognitive tasks that are activated after it is posted on the blackboard. Eventually, a specific action is recommended as part of the execution of the procedural knowledge in one of the tasks. The underlying cognitive-architecture software was modified to trace each such path as it evolved, from the original demon, through the various tasks, and ultimately to an action recommendation. When an action was recommended, the software would report both the action and the information processing path that led to it. This solution provided sufficient information to drive both the behavioral and cognitive diagnostic processes.

[Figure 6. Relationships Between Cognitive Model, Trainee, and Other Subsystems: the scenario (simulation) sends events and audible and visual cues to both the trainee at the watchstation and the model; other operators exchange verbal cues and verbal actions; the trainee takes manual actions at the watchstation; and the model sends internal state outputs and recommended actions to the diagnostic subsystem.]
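The knowledge-trace mechanism can be sketched as hypotheses that carry their own provenance. This is a hypothetical Python reduction with invented names, not the modified BATON software:

```python
class TracingBlackboard:
    """Every hypothesis carries the information-processing path that produced
    it, so each recommended action can be reported together with that path."""

    def __init__(self):
        self.hypotheses = []

    def post_from_demon(self, demon, data):
        # A perceptual demon internalizes an external cue as a hypothesis.
        hyp = {"data": data, "path": [demon]}
        self.hypotheses.append(hyp)
        return hyp

    def transform(self, hyp, task, new_data):
        # A cognitive task transforms a hypothesis; its path is extended.
        out = {"data": new_data, "path": hyp["path"] + [task]}
        self.hypotheses.append(out)
        return out

    def recommend(self, hyp, action):
        # Report the action together with the path that led to it.
        return {"action": action, "path": hyp["path"] + [action]}

bb = TracingBlackboard()
h0 = bb.post_from_demon("New_Track", {"ntds": 7003, "bearing": 45})
h1 = bb.transform(h0, "Attend_to_New_Tracks", {"ntds": 7003, "threat": True})
report = bb.recommend(h1, "issue Level 1 warning")
# report["path"] == ["New_Track", "Attend_to_New_Tracks", "issue Level 1 warning"]
```

Because each path is extended incrementally as a hypothesis is transformed, the trace is available the moment an action is recommended, without replaying or searching the model's reasoning history.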

DISCUSSION

This paper defines a new role for expert models in intelligent embedded training -- guiding practice. The ability of an embedded executable model to solve problems was demonstrated, but this is only an enabling condition for the effective use of such models in this type of embedded training architecture. Additionally, it was necessary to conceptualize the differences in cognitive process between task performance (the workstation operator) and task observation (an instructor watching the workstation operator). This role of task observer is the new use of cognitive models in instructional contexts.

REFERENCES

Anderson, J.R., Boyle, C.F., Corbett, A.T., & Lewis, M.W. (1990). Cognitive modeling and intelligent tutoring. Artificial Intelligence, 42, 7-49.

Hodge, K.A., Rothrock, L., Kirlik, A.C., Walker, N., Fisk, A.D., Phipps, D.A., & Gay, P.E. Jr. (1995). Training for tactical decision making under stress: Towards automatization of component skills (Technical Report HAPL-9501). Atlanta, GA: Georgia Institute of Technology, Human Attention and Performance Lab.

Zachary, W., Le Mentec, J-C., & Ryder, J. (1996). Interface agents in complex systems. In Ntuen, C., & Park, E.H. (Eds.), Human interaction with complex systems: Conceptual principles and design practice. Norwell, MA: Kluwer Academic Publishers.

Zachary, W.W., Ryder, J.M., & Hicinbothom, J.H. (in press). Cognitive task analysis and modeling of complex decision making. In Cannon-Bowers, J.A., & Salas, E. (Eds.), Decision making under stress: Implications for training and simulation.

Zachary, W.W., Ryder, J.M., Ross, L., & Weiland, M.Z. (1992). Intelligent computer-human interaction in real-time multi-tasking process control and monitoring systems. In M. Helander & M. Nagamachi (Eds.), Design for manufacturability (pp. 377-401). New York: Taylor and Francis.

Zachary, W.W., Zaklad, A.L., Hicinbothom, J.H., Ryder, J.M., & Purcell, J.A. (1993). COGNET representation of tactical decision-making in anti-air warfare. In Proceedings of the 37th Annual Meeting of the Human Factors and Ergonomics Society (pp. 1112-1116). Santa Monica, CA: Human Factors and Ergonomics Society.
