Collaborating Cognitive and Sub-Cognitive Processes for the Simulation of Human Decision Making

Clint Heinze, Ian Lloyd, and Simon Goss
Air Operations Division, Defence Science and Technology Organisation
506 Lorimer Street, FISHERMENS BEND VIC 3207

Adrian Pearce
Curtin University, Kent Street, BENTLEY WA 6102

Keywords: cognitive modelling; agents; architectures

ABSTRACT: One of the difficulties in simulating human decision-making is the requirement to model the wide range of human behaviour from knowledge-based, through rule-based, to skill-based. The beliefs-desires-intentions (BDI) model of rational agency has proved very successful in developing computer generated forces for simulation, but with limited complexity. In this paper a model of human cognition is presented that separates the skill-based and knowledge-based components in an effort to overcome some of these challenges. We use a BDI model to provide the knowledge-based component and machine learning to provide the skill-based component. This split has some cognitive plausibility but is also computationally feasible – a fact demonstrated through the development of a proof-of-concept technology demonstrator. The model shows potential in easing the task of the software engineer and in the creation of psychologically credible models of human behaviour for use in simulation.

1 Introduction

Air Operations Division (AOD) conducts simulation for operations analysis in support of the Royal Australian Air Force (RAAF). Much of this operations research effort is in the development and support of simulations that incorporate models of air force personnel. Modelling human decision-making is a challenging task in an environment that is unpredictable, prone to rapid change, and difficult to analyse. Wide-ranging simulation requirements mean that care must be taken when selecting appropriate technologies for modelling human cognition within this framework. There have been a number of different approaches to modelling human action within large simulations. The task can be performed by actual human operators – removing the need for computer generated forces. Such an implementation is flexible and powerful but is expensive in that it requires large numbers of trained staff to operate. JOUST, in the UK, is a good example of such a system. EPIC is a system that models human multiple-task performance in a symbolic architecture that includes perception and sensory models [1]. The SOAR architecture models higher-level cognitive functions such as learning and problem solving [2]. SOAR has been used for modelling human decision making within military simulations [3]. A good summary of existing cognitive modelling architectures may be found in [1].

For reasons outlined below, intelligent software agents are our preferred technology for modelling human decision making. We propose an architecture for modelling human decision making that extends existing architectures [4] in a way that increases the fidelity of existing models whilst allowing substantial reuse of existing software. In Section 2 we describe the existing architecture and provide details of our previous developments. In Section 3 we propose an extension to the architecture that allows a separation of the cognitive and sub-cognitive aspects of human decision making. In order to demonstrate the computational feasibility of this work we constructed a proof-of-concept demonstrator, described in Sections 4 and 5.

1.1 Background

The use of intelligent agents for the modelling of human decision making has been very successful. The reasons for this success lie almost exclusively in matching the technology to the problem. Agents – specifically BDI agents – have been well suited to the simulation problems that we have so far attempted. Previous use of these models has focussed on operations research. We do not wish the agents to make mistakes, to make bad decisions, or to be overwhelmed by stress. We wish to test tactics and compare hardware in a controlled environment. For these types of systems, there is little requirement for incorporating sub-cognitive aspects of pilot decision making into our pilot models.

Present modelling is at the engagement and mission level (see Fig. 1). Future development will push the technology in several directions. Campaign modelling, with larger scale (temporal, geographic, social) scenarios, lies in one direction. In the other lies the complex world of human-in-loop (HIL) simulation, whose requirements for higher fidelity place very different performance demands upon implemented cognitive models.

[Figure 1 plots engagement models, mission models, campaign models, and HIL simulation against social complexity (number of agents) and reasoning complexity (fidelity of the individual agent), with existing developments and a practical computational limit marked.]

Figure 1 Complexity of Agent Systems in Air Combat Simulation

These two development paths lie in opposite directions on a continuum. Simulations that involve large numbers of entities often have lesser requirements for the complexity of individual behaviour, in part because of the difficulties of analysing, understanding, controlling, and defining such scenarios. At a more practical level, there is also a hard limit imposed by the computational capacity of the available hardware. This paper investigates an extension to current architectures that integrates machine learning with existing intelligent agent models. This addition provides the capacity for incorporating learning, memory, skill-based action and experience – things required for the sub-cognitive detail necessary to extend the capabilities of our existing agent models to the level required for human-in-loop simulation. This paper explores issues related to increasing the fidelity of the individual agents.

2 Existing Agent Design

Intelligent software agents have been used for modelling fighter pilots, controllers, and commanders in several operations research simulators developed by AOD [5]. They have also been proposed for human-in-loop simulation as computer generated forces [6]. Agents come in a variety of incarnations. AOD's developments are examples of the beliefs-desires-intentions (BDI) agents formalised by Rao, Georgeff and Lansky [7],[8], and operationalised in the dMARS language (dMARS® is a trademark and product of the Australian Artificial Intelligence Institute; details may be found at www.aaii.com.au/). The BDI model of rational agency has been a valuable and productive tool for the development of cognitive models. Agents provide benefits of abstraction, modularity and autonomy. The BDI model has allowed easier knowledge acquisition and agent design and creates models of behaviour that are easier to validate, explain, and engineer [4].

Our view of human cognition has been shaped by a commitment to the BDI model of rational agency. This model continues to provide the basis of much of our research and development and is the basic framework for all of the large simulations of air combat that we have reported on [4],[5],[9]. A program supporting the development of these models is extending existing research in several directions. Work by Tidhar, Selvestrel, and Heinze [10] has shown how teams and team tactics are modelled within the BDI paradigm in an air combat environment. Rao and Murray [11] investigated the theoretical application of recognition of intention. Lloyd [12] describes an architecture for incorporating working memory and other human performance aspects into the BDI model.

Our existing model of human cognition is operationalised within the intelligent agent software that accompanies our current simulations. In practice the folk-psychological notions of belief, desire, intention and other mental states associated with the BDI model provide us with language suitable for describing human decision making in terms that can be understood by analysts, software engineers and the experienced decision makers who are the subject of the modelling. Fig. 2 shows the standard agent architecture for our existing implementation.

[Figure 2 shows the existing agent design: data from the environment enters the BDI agent and flows through four processes – (1) Situation Awareness, (2) Situation Assessment, (3) Tactic Selection, and (4) Standard Operating Procedures – producing cognitive (reasoned) action.]

Figure 2 Existing Agent Design

Details vary from model to model but the basic concepts stay the same. Architectural details can be found in [9] and a description of the implementation of teams and team-tactics in [10]. The operation of the agents is defined by the four processes:

1. Situation Awareness – Data entering the agent from the environment is processed and higher-level concepts are derived from the data. This is a translation from data to belief – from raw numeric data to the concepts that real people believe. For example, the aerodynamic model of an aircraft may indicate that 2345.675389 litres of fuel remain. This piece of data may then be modified by the model of the fuel gauge to limit the number of significant figures to which the data is made available to the pilot agent; the actual data sent to the agent may be 2340 litres. Pilots tend not to use such numbers in their tactical deliberations; they convert them into higher-order concepts. Fighter pilots have code words like JOKER and BINGO that describe fuel states, and it is into this type of language that the raw data is converted.

2. Situation Assessment – The situation at a given instant in time is assessed from the symbolic descriptions that emerge from the previous step. This assessment involves the recognition of enemy tactical activity.

3. Tactic Selection – The situational descriptions are used to select a tactic to employ. This stage involves scanning the factors involved in making a decision. Current tactics must be considered, along with the goals of the larger team. Various options may be considered, and then rejected, suspended, or accepted.

4. Standard Operating Procedures – This module implements the tactical repertoire available to the agent. The rigid doctrine of military standard operating procedures means that the detail of the implementation of the tactics is often straightforward.

These models of decision making can be implemented explicitly with high-level agent-oriented languages that support the BDI model, such as dMARS or JACK (JACK® is a product of Agent Oriented Software; details may be found at www.agent-software.com.au/). We provide suites of plans [13] that allow the agents to reason about their environment, their own goals and the goals of other agents, and perform in an attempt to successfully complete their mission.
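To make the data-to-belief translation concrete, here is a minimal sketch (Python; the JOKER and BINGO threshold values are invented for illustration – real values are aircraft- and mission-specific):

```python
def fuel_belief(fuel_litres: float,
                joker_litres: float = 1500.0,   # hypothetical pre-briefed threshold
                bingo_litres: float = 1000.0) -> str:
    """Translate raw fuel data into the symbolic fuel-state concepts
    a pilot agent actually reasons with."""
    # The fuel-gauge model first limits the precision available to the
    # pilot agent, e.g. 2345.675389 -> 2340 litres.
    indicated = int(fuel_litres // 10) * 10
    if indicated <= bingo_litres:
        return "BINGO"    # minimum fuel to break off and recover safely
    if indicated <= joker_litres:
        return "JOKER"    # pre-briefed decision point short of BINGO
    return "FUEL_OK"      # no fuel-related constraint on tactics

print(fuel_belief(2345.675389))   # -> FUEL_OK
```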

3 Proposed Architecture

Our existing BDI agents do not provide all of the required complexity for proposed applications – specifically human-in-loop computer generated forces. Rasmussen [14] differentiates between knowledge, rule and skill levels in the analysis of work. Our existing system accounts for the knowledge-based and rule-based behaviour. The proposed architecture adds skill-based behaviour in the form of: (1) skill-based perception – the recognition of situations based upon previous experience; and (2) skill-based action – the ability to perform complex sequences of behaviour with little cognitive effort.


The BDI model does not capture skill-based behaviour well. It is difficult and time-consuming to integrate it directly into the existing agent design. This is due in part to the whole notion of deliberative reasoning being distinct from (though related to) the skills of an agent. As Bratman [15] states, "…plans will typically be at a level of abstraction appropriate to my habits and skills". The BDI model treats skills as atomic behaviours that are performed without resort to planning – they are performed automatically by the agent once they have been decided upon. A strength of the existing BDI agent modelling is the capacity to abstract the detail of the behavioural functionality and allow the analyst to consider the agent design at the tactical level. To now resort to the level of detail necessary to capture skill-based behaviour – with a model not well suited to it – seems counterproductive. There are other candidate systems for capturing this skill-based behaviour that promise to increase the fidelity of the resulting models whilst simultaneously allowing the reuse of existing software and easing the software engineering task. One such system, CLARET [16], was used to implement the concept demonstrator discussed in Section 5. Lloyd [12] has provided a model for implementing human performance within a BDI agent system. To exploit this to model human factors such as workload, stress, ability, currency, experience, attention, and fatigue requires a separation (or at least a definition) of activities requiring cognitive effort from those that do not. As a first-order assumption we can take skill-based behaviour to be sub-cognitive. This is one step down the path toward creating human performance models within a BDI agent framework. Fig. 3 shows the proposed architecture.

[Figure 3 shows the proposed design: data from the environment feeds both the Rule/Knowledge-Based Module (the BDI agent, with its plans, beliefs, goals and intentions – the cognitive layer, producing cognitive (reasoned) action) and the Skill-Based Module (experience, memory, recognition and learning – the sub-cognitive layer, producing sub-cognitive (automatic) action). Framing flows from the agent to the skill-based module; recognition flows back.]

Figure 3 Proposed Architectural Design

The addition of the skill-based module will provide the perception and action that is required. The Rule/Knowledge-Based Module is our existing BDI agent. Both modules receive data from the environment. The BDI agent functions in much the same way as described in Section 2, with two modifications. First, the BDI agent now also receives data from the skill-based module. This data is in the form of situational descriptions that indicate that the skill-based perception module has recognised an event, situation, or relationship. This recognition can be factored into the reasoning of the agent. Second, the agent provides the skill-based module with information concerning its current mental state. This information allows the skill-based module to limit the search space in which recognition is attempted.

Consider a model of a fighter pilot. The skill-based module is capable of recognising ten different types of manoeuvre that may be performed by an enemy aircraft. Knowing which manoeuvre is being performed is important to the agent when deciding upon a counter-tactic. Deciding exactly which manoeuvre is being performed is not an easy task for the skill-based module, and it has difficulty in selecting between some quite similar manoeuvres. The BDI agent has knowledge of the tactical situation and reasons that it is not possible for the enemy aircraft to perform manoeuvre X or manoeuvre Y. This piece of information is valuable to the skill-based module, as it can remove these from the manoeuvres that it is trying to recognise and concentrate on the others. We refer to this process as framing: the agent provides a frame of reference in which the skill-based perception module will attempt recognition.

Let us now consider the skill-based module in more detail (Fig. 4). Data from the environment enters the system, as does framing data from the agent. The framing data conditions the nature of the pattern recognition undertaken upon the data from the environment. The skill-based perception sub-module examines this data – along with previously acquired data – in a search for patterns. If recognition occurs, this fact is transmitted to the agent. Internally, the perception sub-module also provides data to the reaction sub-module. The reaction sub-module contains a mapping between skill-based perception and skill-based action. We envisage that this module would be relatively simple – though for applications rich in sub-cognitive detail it could be as sophisticated as required. It will contain automatic reactions to events in the world and trigger the skill-based actions. The skill-based action sub-module is a library of learned behaviours. These can be executed independently of the agent and hence involve no cognitive overhead. Exactly how this module is implemented will depend upon the application, but machine learning, and more specifically behavioural cloning [17], seems to offer much of the required functionality. This architectural design description has been necessarily high-level; much of the detail will depend upon the specific implementation. In Section 4, we provide details of a prototypical system developed to explore some of the issues presented here.

[Figure 4 shows the skill-based module: data from the environment and framing from the agent enter Skill-Based Perception (machine learning), which feeds Skill-Based Reaction (a simple mapping), which triggers Skill-Based Action (behavioural cloning); the outputs are sub-cognitive perception, reported to the agent, and sub-cognitive (automatic) action.]

Figure 4 Skill-Based Module Design
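A minimal sketch of the framing interaction described above (Python; the manoeuvre names, confidence scores, and decision margin are all invented for illustration, and the recogniser is a stand-in rather than a real pattern matcher):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

# Hypothetical repertoire of enemy manoeuvres the module can recognise.
MANOEUVRES = ["break_left", "break_right", "high_yoyo", "low_yoyo",
              "split_s", "immelmann", "barrel_roll", "scissors",
              "pitch_back", "extension"]

@dataclass
class SkillBasedModule:
    """Sub-cognitive recogniser whose search space the BDI agent can frame."""
    candidates: List[str] = field(default_factory=lambda: list(MANOEUVRES))

    def frame(self, impossible: List[str]) -> None:
        # Framing: the agent's tactical reasoning rules manoeuvres out,
        # so recognition only has to discriminate among the remainder.
        self.candidates = [m for m in self.candidates if m not in impossible]

    def recognise(self, scores: Dict[str, float],
                  margin: float = 0.1) -> Optional[str]:
        # 'scores' stands in for per-manoeuvre pattern-matching confidence.
        framed = sorted(((s, m) for m, s in scores.items()
                         if m in self.candidates), reverse=True)
        best_score, best = framed[0]
        second_score = framed[1][0] if len(framed) > 1 else 0.0
        # Commit only when one candidate clearly dominates the rest.
        return best if best_score - second_score >= margin else None

module = SkillBasedModule()
scores = {m: 0.10 for m in MANOEUVRES}
scores.update({"high_yoyo": 0.85, "low_yoyo": 0.82})   # two similar manoeuvres
print(module.recognise(scores))         # -> None (too close to call)
module.frame(impossible=["low_yoyo"])   # agent: tactically impossible right now
print(module.recognise(scores))         # -> high_yoyo
```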

4 Example Implementation

When learning to fly, student pilots use a set of rules to determine when to perform certain manoeuvres: "A medium level turn from down-wind leg onto base leg is made when the touchdown point on the runway lies approximately 30 degrees behind … In a strong wind the turn should be commenced earlier to keep the base-leg closer to the aerodrome boundary." [18], page 13-7. In the cockpit a student pilot will frequently look over her shoulder, judging the angle to the runway and guessing the moment to turn. This takes considerable cognitive effort and attention on the part of the pilot. As the pilot becomes more experienced these rules – whilst still valid – tend to be lost in the sub-conscious and the pilot just knows when to commence the turn. She perceives the spatial relationships within her field of view and turns without being aware of the angles between her aircraft and the runway. The rule-based reasoning of the student pilot has been replaced with skill-based perception in the expert. It is exactly this migration of behaviour – from rule-based to skill-based – that will be demonstrated by this experiment. The experiment will attempt to show that skill-based perception can be captured and then incorporated into rule or knowledge-based reasoning within a real-time human-in-the-loop simulation. In this experiment, the sub-cognitive module models experience and the ability to recognise situations that have previously been encountered. We are adding a model of skill-based perception to our rule-based BDI agent.

A proof-of-concept demonstration was constructed to test some of the concepts highlighted in previous sections. The experimental setup integrates three components (see Fig. 5): (1) the FSIM flight simulator, flown by a human pilot with a mouse or joystick; (2) a BDI agent implemented in dMARS that acts as a flying instructor – we refer to this as the Instructor-Agent; and (3) a skill-based module that can be trained to recognise the actions of the pilot through spatio-temporal relationships between variables output from the flight simulator. This module is implemented in CLARET [19]. Component details follow.

[Figure 5 shows the demonstrator configuration: the Instructor-Agent sends instructions to the pilot flying FSIM, and data from the flight simulator flows to the Instructor-Agent and to CLARET situation recognition.]

Figure 5 Technology demonstrator configuration

4.1 The FSIM Flight Simulator

FSIM is a flight simulator under development by Curtin University. FSIM can be flown by a joystick or mouse and implements a flight dynamic model, developed at DSTO, of a current RAAF training aircraft. The flight simulator provides a three-dimensional out-of-cockpit view and a generic set of instruments. Atmospheric effects such as wind and turbulence are modelled. Input files specify geographic features such as airstrips, mountains, buildings and trees.

4.2 CLARET

The FSIM simulator is enhanced to provide descriptions of the out-the-window world view in the data trace as well as dynamic knowledge of the world: the positions of objects, dynamic entities in the three-dimensional flight course relative to the pilot, and pilot motion. The system dynamically binds different manoeuvres as they occur in the trajectories of input time series. The technique used in our simulator is based on statistical pattern matching and learning techniques currently used for on-line handwriting and gesture recognition. A matching system, called CLARET, has been specifically adapted for recognising manoeuvres based on real-time trajectory information [19]. Recognition is applied to low-level instrumentation and aeroplane data to bind these manoeuvres as they occur in traces of pilot behaviour. In the CLARET algorithm, an unknown segmented and labelled trajectory case is presented to the system together with examples of known trajectories, using a simple polygonal approximation technique. For details of the CLARET algorithm see [16].

In descriptive recognition, an expert pilot explains the relationships between event types, specifying a decomposition of high-level manoeuvres – "tell me about these manoeuvres". For example, flying circuits can be decomposed into take-off, crosswind, downwind, base-leg and final-approach manoeuvres. Recognition is then used to bind these manoeuvres to traces of simulator activity.
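For intuition, the sketch below shows a generic polygonal-approximation step of the kind mentioned above (Python; this is a plain Ramer-Douglas-Peucker simplification used purely as an illustrative stand-in – the actual CLARET algorithm is described in [16]):

```python
import math

def _point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def polygonal_approximation(points, tolerance):
    """Keep only the vertices needed to stay within 'tolerance'
    of the original trajectory (Ramer-Douglas-Peucker)."""
    if len(points) < 3:
        return list(points)
    # Find the point farthest from the chord joining the endpoints.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _point_line_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= tolerance:
        return [points[0], points[-1]]
    # Recurse on both halves; drop the duplicated split point when joining.
    left = polygonal_approximation(points[:index + 1], tolerance)
    right = polygonal_approximation(points[index:], tolerance)
    return left[:-1] + right

# A noisy climb-then-level trajectory reduces to a few straight segments.
trajectory = [(t, min(t, 5) + 0.01 * (-1) ** t) for t in range(11)]
print(polygonal_approximation(trajectory, tolerance=0.05))
```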

4.3 Instructor-Agent

A model of a flying instructor, the Instructor-Agent, was implemented in the dMARS language. The knowledge required by the agent to fly a circuit and respond to engine failures and sudden wind gusts [18] was provided in the form of plans. Plans in the dMARS system are graphical representations of the procedural knowledge required to complete a task [13]. These plans rely on external information for execution. These data take two forms: (1) data from the flight simulator – typical examples are altitude, heading, roll angle, pitch angle, and engine revs; these are typical of the data that a pilot would normally acquire from the flight instruments; and (2) data from the machine learning module – typical examples are "finished base-leg", "finished take-off roll", and "engine failure"; these are examples of the data that a pilot infers from examining his environment and the complex relationships between entities. As these plans execute, they follow the progress of the pilot. The Instructor-Agent issues messages to the pilot, guiding him through the circuit and advising about the manoeuvres to employ.
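dMARS plans are graphical, but as a rough textual analogue, a single circuit-following behaviour of the Instructor-Agent might look like the sketch below (Python; the phase names and advisory messages are invented, loosely modelled on the examples above):

```python
from typing import Optional

# Invented circuit phases, in the order the Instructor-Agent expects them.
CIRCUIT = ["take-off roll", "climb", "crosswind", "downwind",
           "base-leg", "final-approach"]

# Invented advice issued when recognition reports a phase complete.
ADVICE = {
    "take-off roll": "Rotate and commence climb to 1500 feet",
    "climb": "Turn left onto crosswind leg",
    "crosswind": "Turn left onto downwind leg",
    "downwind": "Turn left onto base leg - heading 170",
    "base-leg": "Turn left onto final approach",
    "final-approach": "Flare and touch down",
}

class InstructorAgent:
    """Follows the pilot around the circuit, reacting to recognition events."""
    def __init__(self) -> None:
        self.phase = 0   # belief: current position in the circuit

    def on_event(self, event: str) -> Optional[str]:
        # Events arrive from the skill-based module, e.g. "finished base-leg".
        if event == "engine failure":
            return "Engine failure: lower the nose and select a landing field"
        if self.phase < len(CIRCUIT) and event == f"finished {CIRCUIT[self.phase]}":
            advice = ADVICE[CIRCUIT[self.phase]]
            self.phase += 1
            return advice
        return None   # unexpected event: this simple plan ignores it

agent = InstructorAgent()
print(agent.on_event("finished take-off roll"))   # -> Rotate and commence climb...
```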

4.4 Integration

Fig. 5 shows the system architecture. The system is implemented on a high-end Silicon Graphics server running IRIX 6.3. Data transfer between the processes is via a custom protocol. The system is capable of operation in three modes: Free Flight – CLARET and the Instructor-Agent are detached from the flight simulator and the pilot is free to practice flying the aircraft; Training – CLARET is attached and can be trained to recognise phases of flight; and Instructor – the Instructor-Agent and CLARET are attached, and a human pilot flies the simulator under the instruction of the Instructor-Agent. This is the normal mode of operation.


5 Demonstration

In order to demonstrate the correct functioning of the system, a suitable series of tasks was established. CLARET was trained with examples of the relevant flight conditions and the complete system was trialled.

5.1 Task Environment

Circuit flying is a standard part of learning to fly. It involves taking off, flying a roughly rectangular course, and landing (see Fig. 6). The procedures for flying circuits [18] are straightforward and are standard across many types of aircraft and airport locations. The pilot commences on the ground, taxis to the end of the runway, and then flies a standard circuit. The pilot receives instructions from the Instructor-Agent through a dialogue box that appears on screen. A typical instruction may be "Turn left onto base leg – heading 170" or "Commence climb to 1500 feet".

[Figure 6 shows the circuit: (1) take-off, (2) climb, (3) crosswind leg, (4) downwind leg, (5) base leg, (6) final leg, and (7) landing.]

Figure 6 Circuit Flying


5.2 Training

The CLARET module was trained to recognise eight different flight conditions that might arise during circuit flying: take-off, left-turn, right-turn, touch-down, straight-and-level, climb, descent and engine-failure. The training amounted to flying the aircraft and providing examples of each flight condition. In practice the number of examples was limited to five. During training CLARET stores the examples of each of the manoeuvres, building a store of experience that it will use to recognise those flight conditions in the future.
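As a toy illustration of this store-then-recognise pattern (Python; the feature vectors are invented, and a nearest-neighbour store is far simpler than CLARET's relational matching):

```python
import math

class RecognitionStore:
    """Store up to five example feature vectors per flight condition and
    recognise new traces by the nearest stored example."""
    def __init__(self, max_examples=5):
        self.examples = {}            # condition -> list of feature vectors
        self.max_examples = max_examples

    def train(self, condition, features):
        self.examples.setdefault(condition, [])
        if len(self.examples[condition]) < self.max_examples:
            self.examples[condition].append(features)

    def recognise(self, features):
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        # Return the condition whose stored example is closest.
        best = min(((dist(features, ex), cond)
                    for cond, exs in self.examples.items() for ex in exs),
                   default=(float("inf"), None))
        return best[1]

store = RecognitionStore()
store.train("climb",     (12.0,   0.0))   # (pitch deg, roll deg) - invented features
store.train("left-turn", ( 2.0, -30.0))
print(store.recognise((11.0, 1.5)))       # -> climb
```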

5.3 Instructed Flight

During instructed flight the pilot flies the simulator through a circuit under the guidance of the Instructor-Agent. During this flight, conditions will be encountered that map onto the experience of CLARET. If CLARET recognises one of these flight conditions it announces this recognition to the Instructor-Agent. The Instructor-Agent processes this information together with information about its current position in the circuit and advises the pilot about what task to perform next. The pilot continues to follow the Instructor-Agent's instructions until the aircraft has safely landed.

5.4 Results

CLARET was trained to recognise eight flight conditions, with the number of training examples limited to five. Three different pilots were used to train CLARET and the same three flew the aircraft under the control of the Instructor-Agent. The pilot who performed the training was not always the pilot who flew the circuit; indeed, every possible permutation of trainer and pilot was attempted. CLARET successfully identified the eight flight conditions without error during every instance of accurate circuit flying. If the circuit was flown poorly then CLARET was unable to match to the trained examples. The errors necessary to cause CLARET to misidentify flight conditions were such that a conscious effort needed to be made to fly poorly.

The success of CLARET, whilst important to the success of the demonstration, was secondary to the more general aim of testing the computational feasibility of connecting a model of skill-based behaviour and a model of rule/knowledge-based behaviour within the constraints of a real-time system. The flight conditions that CLARET was trained with were sufficiently different that the task of discriminating the phases of flight was not difficult when compared with other applications to which CLARET has been put [20]. The real success of the experiment was in demonstrating the potential of machine learning packages such as CLARET to model skill-based behaviour in a way that can be linked to rule and knowledge-based models in real-time human-in-loop simulations.

6 Discussion

A model of human cognition that separates the cognitive and sub-cognitive into rule/knowledge-based and skill-based behaviour respectively has been proposed. Design requirements for the implementation of this model were that it should reuse existing architectural designs and code, model skill-based behaviour, and operate within the computational constraints of a real-time system. The technology demonstrator showed the potential for such a system to be successfully deployed. There are many further potential advantages of employing machine learning for modelling human decision making in the manner described. Agent systems of increasingly high fidelity pose software-engineering challenges. By capturing skill-based behaviour with machine learning it is possible to better manage the increasing complexity of the system: rather than laboriously defining and coding the required behaviour, it can be demonstrated and captured. BDI agent systems have provided a significant advantage in modelling human decision-making; there has been significant investment in research and development and several large successfully implemented systems. The proposed architecture leverages this existing development by extending current architectures in a modular fashion.

Computer generated forces that must fight against (or with) human participants present a more substantial challenge. Computer opponents are often accused of being too predictable, too single-minded, or simply not realistic. The designer of a system that incorporates human players must produce computer generated forces capable of responding to the complex and unpredictable actions of the human. The increase in fidelity obtained by adding skill-based perception and action to existing models takes a step toward truly credible computer generated forces.

Rao and Murray identified inference of intention as an important part of the decision-making process [11]. The task of inferring the intention of another agent based upon observations of their actions is difficult. The task may be made more complex by limited observational information, by tactical complexity and by the deliberately deceptive action of adversaries. If computer generated forces are to be credible opponents in human-in-loop simulations they must predict the actions of the human – even if that human is actively attempting to deceive the computer. Inference of intention is fundamental to this type of performance. Skill-based perception is a large part of the problem of binding situational representations to inferred mental states. The modelling techniques proposed by this research have the potential to provide a solution.

The most technically challenging aspect of representing human decision making in military simulations is modelling situation awareness and assessment. Human factors are important in situation awareness. In order to model human factors within a BDI agent framework, we need to be able to separate the cognitive from the sub-cognitive; the proposed architecture supports this split. Situation assessment by experienced decision-makers is often skill-based: they recognise situations that have occurred before and can immediately propose a course of action [21]. Machine learning of the type discussed in this paper has the potential to allow expertise of this kind to be captured.

7 Conclusions

The complexities of human reasoning, and the differences between skill, rule and knowledge-based behaviour, mean that selecting a single modelling methodology is difficult. BDI agents have proved successful in modelling at the rule and knowledge-based level but fail to capture the sub-cognitive aspects of human decision making that occur in skill-based behaviour. By selecting a technology well suited to capturing skill-based behaviour and integrating it with our existing models we have been able to demonstrate an architecture for modelling human decision-making that combines cognitive and sub-cognitive models within a real-time system. Further testing should consider more complex recognition in an environment that allows multiple agents acting adversarially or in concert. Extending and advancing this research has the capacity to increase the fidelity of the behaviours exhibited by intelligent agents whilst decreasing the software engineering effort required to construct them. This experiment addressed only skill-based perception; work by Sammut et al. [17],[22] provides insights into methods for capturing skill-based action in a manner suitable for integration into this architecture.

8 Acknowledgments

The authors would like to thank Gil Tidhar for his thoughtful comments regarding the possible future of this work, Mike Skinner for his comments about the cognitive validity of the architecture, and Graeme Murray for his continued support of the research program.

9 References

[1] National Research Council. Modeling Human and Organisational Behaviour – Applications to Military Simulations. R. W. Pew and A. S. Mavor, editors, National Academy Press, Washington D.C., 1998.

[2] A. Newell. Unified Theories of Cognition. Harvard University Press, Cambridge, MA, 1990.

[3] M. Tambe, W. L. Johnson, R. M. Jones, F. Koss, J. E. Laird, P. S. Rosenbloom, and K. B. Schwamb. Intelligent agents for interactive simulation environments. AI Magazine, 16(1), 15-39, 1995.

[4] C. Heinze, B. Smith, and M. Cross. Thinking Quickly: Agents for Modeling Air Warfare. In Proceedings of the Australian Joint Conference on Artificial Intelligence (AI '98), Brisbane, Australia, 1998.

[5] S. Steuart, G. Murray, G. Tidhar, and A. Rao. Air Mission Modelling – An Application for Artificial Intelligence. In Proceedings of the Simulation Technology and Training Conference (SimTecT), 1996.

[6] D. McIlroy, C. Heinze, D. Appla, P. Busetta, G. Tidhar, and A. Rao. Towards Credible Computer-Generated Forces. In Proceedings of the Simulation Technology and Training Conference (SimTecT), 1997.

[7] M. P. Georgeff and A. L. Lansky. Procedural Knowledge. Proceedings of the IEEE, Special Issue on Knowledge Representation, volume 74, pages 1383-1398, 1986.

[8] M. d'Inverno, D. Kinny, M. Luck, and M. Wooldridge. A Formal Specification of dMARS. In Proceedings of the Fourth International Workshop on Agent Theories, Architectures and Languages, M. Wooldridge and A. Rao, editors, 1997.

[9] D. McIlroy and C. Heinze. Air Combat Tactics Implementation in the Smart Whole AiR Mission Model (SWARMM). In Proceedings of the Simulation Technology and Training Conference (SimTecT), 1996.

[10] G. Tidhar, M. Selvestrel, and C. Heinze. Modelling Teams and Team Tactics in Whole Air Mission Modelling. In Proceedings of the Eighth International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, G. Forsyth and M. Ali, editors, pages 373-381, Gordon and Breach Publishers, Melbourne, Australia, 1995.

[11] A. Rao and G. Murray. Multi-agent mental state recognition and its application to air-combat modelling. In Proceedings of the First International SimTecT Conference, Melbourne, Australia, March 1996.

[12] I. Lloyd. Simulating Human Characteristics for Operational Studies. DSTO Research Report DSTO-RR-0098, 1997.

[13] A. S. Rao. A Unified View of Plans as Recipes. In Contemporary Action Theory, G. Holmstrom-Hintikka and R. Tuomela, editors, Kluwer Academic Publishers, Netherlands, 1997.

[14] J. Rasmussen, A. Pejtersen, and L. Goodstein. Cognitive Systems Engineering. John Wiley & Sons, New York, 1995.

[15] M. E. Bratman. Intentions, Plans, and Practical Reason. Harvard University Press, Cambridge, MA, 1987.

[16] A. R. Pearce and T. Caelli. Interactively Matching Hand-Drawings Using Induction. Computer Vision and Image Understanding, 73(2), 1999.

[17] I. Bratko, T. Urbancic, and C. Sammut. Behavioural cloning of control skill. In Machine Learning and Data Mining: Methods and Applications, R. S. Michalski, I. Bratko, and M. Kubat, editors, pages 335-351, Wiley, 1996.

[18] T. Thom. The Flying Training Manual – Pre-flight Briefings and Air Exercises. Aviation Theory Centre Pty. Ltd., Williamstown, Australia, 1993.

[19] A. R. Pearce, T. Caelli, and S. Goss. On Learning Spatio-Temporal Structures in Two Different Domains. Lecture Notes in Artificial Intelligence, 1352-II:551-558, 1998.

[20] A. R. Pearce. Relational Evidence Theory and Spatial Interpretation Procedures. Ph.D. Thesis, School of Computing, Curtin University, Perth, Australia, 1997.

[21] G. Klein. The Recognition Primed Decision Model. In Naturalistic Decision Making, C. Zsambok and G. Klein, editors, pages 285-292, Lawrence Erlbaum Associates, 1997.

[22] G. M. Shiraz and C. Sammut. Learning to Pilot an Aircraft. In Proceedings of the Joint Pacific Asian Conference on Expert Systems / Singapore International Conference on Intelligent Systems, Singapore, 1997.

10 Author Biographies

Clint Heinze graduated from the Royal Melbourne Institute of Technology (RMIT) with an Aerospace Engineering Degree (Hons) in 1990. He joined the Defence Science and Technology Organisation (DSTO) in 1990 and works primarily on methodologies in support of cognitive modelling. Recent work has centred on the development of models of fighter pilots for simulation, research into methodologies for the engineering of multi-agent systems, and research at the boundaries of cognitive science, agency, and software engineering. He is currently completing a Ph.D. in the Department of Computer Science and Software Engineering at the University of Melbourne.

Ian Lloyd is Head of Mission and Campaign Analysis in the Air Operations Division of DSTO. He has extensive experience of military and civil aviation, including some time as an airline pilot. He has also worked on the analysis of missile guidance, and developed a manned simulator for rapid prototyping of cockpit displays and a terrain-referenced navigation system. For the last five years, he has been conducting operational analysis in support of the Airborne Early Warning and Control acquisition. His current research interest is the incorporation of valid human cognitive performance models in intelligent agents.

Dr Simon Goss is a Senior Research Scientist at the Air Operations Division, Aeronautical and Maritime Research Laboratories, DSTO. He leads the enabling research in AOD in agents and modelling. He has chaired international workshops on situation awareness and plan recognition, and the national cognitive science conference. His interests include knowledge acquisition, knowledge modelling, agent-oriented simulation, expert systems and operational simulation. He is a member of the ACM, ACS, AAAI, IEEE and AORS.

Dr Adrian Pearce is a lecturer in the School of Computing at Curtin University in Western Australia. He completed a Postdoctoral Research Fellowship at Curtin in 1997, funded by the Air Operations Division, Aeronautical and Maritime Research Laboratories, DSTO, using machine learning in operational flight simulation. He is currently engaged in a three-year collaborative research contract with the Maritime Operations Division, AMRL, DSTO Stirling, Western Australia. Dr Pearce's interests lie in the areas of machine learning, interpreting real-time behaviour and agent-oriented simulation.
