Recognising User Intentions in a Virtual Environment
Simon Goss and Clint Heinze, Air Operations Division, Defence Science and Technology Organisation, 506 Lorimer Street, FISHERMENS BEND VIC 3207
Adrian Pearce, Curtin University, Kent Street, BENTLEY WA 6102
Keywords: Agents, Virtual Environment, user interface, user intention, simulation, plan recognition
ABSTRACT: Human-centred design in the adaptive virtual interface is about supporting the intentionality of the user. An operator has a purposeful motivation in undertaking activity in the virtual environment. This can range from task-focused activity in a vocational sense, such as telepresence surgery or rehearsing mission tactics in a flight simulator, to the purely recreational exploration of a cyberspace. In each case, the adaptive virtual interface needs to monitor the user and compare user behaviours to models of possible actions. A method is demonstrated for constructing procedures from the spatio-temporal data that describe action plans of agent/entities in a virtual environment. These are required for testing candidate operator intentions against operator action history. This is explored experimentally in the context of flight simulation, and we present a method for learning action plans in real time. Three components are required: an appropriate ontology (model of operator task performance), an appropriate virtual environment architecture (accessibility of data and image generation databases) and a learning procedure (which relates the data stream to the domain ontology). Relational machine learning methods are applied to traces of pilot behaviour in flight tasks. The simulator is enhanced to provide descriptions of the out-the-window world view in the data trace as well as operator control actions and simulation internal state variables. Control actions are relatable to the achievement of world goal states. Further work has generalised this from user intention modelling to the modelling of other simulated entities.
1 Introduction
The operator has a purposeful motivation in undertaking activity in the virtual environment. This can range from task-focused activities, such as mission rehearsal in a simulator where the intentions relate to the achievement of mission goals, to the purely recreational exploration of a cyberspace. All of these use the computer as a transformative technology for immersion. The cockpit and computer screens become the experience of flying; the gloves and the screen put the surgeon's hand in the digital patient of an anatomy trainer. Furthermore, the same interface may provide an augmented reality in the actual environment, such as the surgeon performing telepresence surgery, or the electronic cockpit where alternate sensor views are superposed in a fused information device to assist with traversal of physical space and the avoidance of real physical threats. Particular information is required at particular times in response to particular contingencies, and the display device filters and shapes the information appropriately for the context. The context includes the pilot's intentions. In our view, a virtual interface requires an explicit representation of intentionality in the internal representation of its agent implementation. We subscribe to the folk-psychological belief, desire and intention notion of agency in the construction of an interface
agent, and in its interpretation of the actions of entities in the virtual environment. Here, an agent emulates rational behaviour in that it has intentions which it forms according to its beliefs and goals. An agent uses pre-defined plans, applicable to the situation, to fulfil its intentions (long-term persistent goals). Such an agent is differentiated from an object-oriented entity in that it is reflective rather than immediately reactive to environmental sensor input. For a description of the agent formalism see (Georgeff & Lansky, 1986; Rao & Georgeff, 1991). An intention-facilitating virtual interface recognises the plans of the user. The level of delegation and authority given to the interface agent (assistant, associate or even supervisor) in taking actions, once it has recognised the plans of the system user from observation of the user's actions and the sensed environment, is a current issue in system construction. For example, consider the degree of delegation given to the virtual personal assistant in communication space, or the amount of autonomous authority given to an electronic crew member embedded in the avionics of the cockpit of the future (Miller & Goldman, 1997). The interface agent needs to recognise the situation and service the user's intentions; all of these roles require recognition of the intention of the user. Rao and Murray, working in the domain of pilot agents in an operations research simulation system, indicated that one way to implement recognition is to introspect upon one's own behavioural repertoire (the
plans one knows about) and ascribe these to other agents (Rao & Murray, 1994). Intention recognition becomes a search through plan space for plans which match the observed actions of the other entity. This has been demonstrated in a limited capacity in a prototype system (Tidhar & Busetta, 1996) that shows a dramatic change of outcome when agents reason about what other agents might be doing. In military simulations where agents provide artificial players, problems of coordination (Tambe et al., 1995) have been found to be due to failure to recognise intentional situations in teams (Kaminka & Tambe, 1997; Tambe, 1997). The (non-trivial) issue confronting model-based planning systems as interface agents is the recognition of the plans of the user while they are in execution. The problem is harder than identifying an action upon its completion. To be of practical assistance an interface agent needs to know what is happening before that event is over. We explore this in the context of flight simulation, and present a method for learning action plans from spatio-temporal data which describe action plans of agent/entities in a virtual environment. These are required for testing candidate operator intentions against operator action history, and are interpretable as partial instantiations of (operator/agent) intentionality. Our method of constructing procedures requires three components: (a) an appropriate ontology (model of operator task performance), (b) an appropriate virtual environment architecture (accessibility of data and image generation databases), and (c) a learning procedure (which relates the data stream to the domain ontology). In simple terms, we are looking at the domain of circuit flight. We have a task analysis for circuit flight. The flight simulator has an authentic flight model for a PC9 aircraft, and a cockpit with generic throttle and stick controls. It also has a particular software architecture conferring special data recording properties. A relational learning technique is used to relate the data from the flight simulator to the task analysis. We build relations which describe generalised flight plan segments. In practice these run in real time and announce attributed plan segments while the pilot is executing them. This is a compelling demonstration of the feasibility of real-time recognition of intention in a user interface to an immersive virtual environment task. We assert that our results have wider significance and may form part of the foundation for the construction of agent-oriented simulations and, more broadly, virtual environments.
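To make this search concrete, the following minimal Python sketch illustrates ascribing one's own plan library to another entity. The plan names and the matching criterion are our own illustrative assumptions, not the authors' implementation: a plan remains a candidate intention while the observed actions embed, in order, into its expected steps.

```python
# Illustrative sketch only: intention recognition as a search through the
# agent's own plan library for plans consistent with observed actions.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str      # the intention this plan serves
    steps: list    # ordered actions expected if the plan is executing

PLAN_LIBRARY = [
    Plan("TURN-TO-CROSSWIND-LEG", ["LIFT-OFF", "CLIMB", "LEVEL-LEFT-TURN"]),
    Plan("JOIN-DOWNWIND-LEG",     ["LEVEL-LEFT-TURN", "STRAIGHT-AND-LEVEL"]),
    Plan("FINAL-APPROACH",        ["DESCENDING-TURN", "DESCEND-FULL-FLAPS"]),
]

def candidate_intentions(observed, library=PLAN_LIBRARY):
    """Return plans whose step sequence contains the observed actions in order."""
    def embeds(obs, steps):
        it = iter(steps)
        return all(any(o == s for s in it) for o in obs)
    return [p.name for p in library if embeds(observed, p.steps)]

if __name__ == "__main__":
    # Partial observation of the pilot: take-off followed by a left turn.
    print(candidate_intentions(["LIFT-OFF", "LEVEL-LEFT-TURN"]))
    # -> ['TURN-TO-CROSSWIND-LEG']
```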
2 Ontology Acquisition
In order to recognise the intentions of the user across the virtual interface we need to understand the activities the user is undertaking. To relate plan-level descriptions of pilot activities to the detailed data observable in the world of the flight simulator requires explicit representation of the activity goal structure. An ontology is a set of terms, their definitions, and axioms
relating them; terms are normally organised in a hierarchy (Noy & Hafner, 1997). Ontology acquisition involves task analysis and knowledge engineering.

2.1 The Flight Domain
Pilot skills are comprised of hierarchic competencies. Some of these involve the ability to recover from abnormal situations. Our design choice of an agent-oriented implementation of the interface agent views these as a nested hierarchy of goals. The training method involves acquisition of concepts, then application of the skills these describe to build, first, a set of part-task skills, then more complex combinations and refinements of these. This process is paralleled in the machine learning method here, where a domain ontology is acquired, then procedural ontological elements are acquired. Pilots first learn the effects of the controls, taxying, straight and level flight, climbing, descent and turning (level, ascending, descending and to selected headings). Stalls are practised mainly as a preventative measure. Competency must be demonstrated in recognition and recovery from stalls and incipient spins. These are combined in the complex exercise of circuit flight. A circuit consists of four legs in a box shape, with the runway in the centre of one of the long legs. Take-off and landing are into the wind. After take-off an aircraft climbs to a height of 500 ft above the aerodrome level. On the crosswind leg the aircraft climbs to 1000 ft above the aerodrome. Height is maintained at this level on the downwind leg, which is parallel to the runway. The aircraft descends on the base leg to a height of 500-600 ft above aerodrome level and turns to fly the final leg directly along the line of the runway. Competency must be demonstrated in a range of sequences. For example, on the crosswind leg, although the wind may be blowing straight down the runway during the take-off, winds can change without notice. Other tasks are learned once the pilot has achieved first solo, prior to graduating as a pilot: steep turns (which require a different control strategy), recovery from unusual attitudes, low-level flight, forced landings without power, and formation flight (keeping station, changing between formation patterns and flying formation circuits).
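The circuit geometry just described lends itself to a simple machine-readable form. The Python sketch below encodes the published heights; the leg bands and the classification function are illustrative assumptions rather than part of the original system.

```python
# A minimal encoding of the circuit description above, assuming heights are
# measured in feet above aerodrome level (AAL). Leg names follow the text;
# the exact bands are illustrative.
CIRCUIT_LEGS = [
    # (leg, height band AAL in ft, vertical profile)
    ("upwind/take-off", (0, 500),     "climbing"),
    ("crosswind",       (500, 1000),  "climbing"),
    ("downwind",        (1000, 1000), "level"),       # nominal 1000 ft held on downwind
    ("base",            (600, 1000),  "descending"),  # descend towards 500-600 ft
    ("final",           (0, 600),     "descending"),
]

def plausible_legs(height_aal_ft, vertical):
    """Return the circuit legs consistent with the current height and profile."""
    return [leg for leg, (lo, hi), prof in CIRCUIT_LEGS
            if lo <= height_aal_ft <= hi and prof == vertical]

if __name__ == "__main__":
    print(plausible_legs(1000, "level"))       # ['downwind']
    print(plausible_legs(800, "descending"))   # ['base']
```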
2.2 The Ontology
Our ontology is small and incomplete. We present it as a task hierarchy in Figure 3. Ontology is an arena in which psychology and computer science "interface". Some elements are to be found in the architecture of the simulator software and interface, some in the descriptions of activities in the simulator. For instance, the ontology of circuit flight involves control actions to achieve navigation and flight control goals. These goals are part of the task hierarchy; the controls are part of the interface.
Figure 3: Task hierarchy for circuit flight. Circuit Flight decomposes into Take-off (Flaps & Throttle, Lift Off, Climb (Flaps)), Crosswind Leg (Climbing Turn, Climb, Medium Level Turn), Downwind Leg (Straight and Level), Base Leg (Medium Level Turn, Descend (No Flaps), Descend (Flaps)), Final Leg (Medium Descending Turn, Descend (Full Flaps)) and Landing (Round Out, Touch Down).
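One simple in-memory form of this hierarchy, useful for navigating the knowledge structure or organising a training session, is sketched below. The node names follow Figure 3, but the nesting as a dictionary of legs to sub-tasks is our own illustration, not the authors' data structure.

```python
# Possible in-memory form of the Figure 3 task hierarchy (illustrative only).
# Leaves are elementary flight sub-tasks beneath each leg of the circuit.
CIRCUIT_FLIGHT = {
    "Take-off":      ["Flaps & Throttle", "Lift Off", "Climb (Flaps)"],
    "Crosswind Leg": ["Climbing Turn", "Climb", "Medium Level Turn"],
    "Downwind Leg":  ["Straight and Level"],
    "Base Leg":      ["Medium Level Turn", "Descend (No Flaps)", "Descend (Flaps)"],
    "Final Leg":     ["Medium Descending Turn", "Descend (Full Flaps)"],
    "Landing":       ["Round Out", "Touch Down"],
}

def walk(hierarchy, indent=0):
    """Print the goal structure, e.g. when planning a simulator session."""
    for goal, subtasks in hierarchy.items():
        print("  " * indent + goal)
        for sub in subtasks:
            print("  " * (indent + 1) + sub)

if __name__ == "__main__":
    walk(CIRCUIT_FLIGHT)
```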
There is the complication that contingency plans are executed in parallel, and that plans at several levels can be concurrently under execution and interleaved. For example, the goals of safety are concurrent with goals of navigation and communication. However, the hierarchical representation of goals is useful for navigating the knowledge structure and organising training sessions in the simulator.
3 The Virtual Environment Architecture
The architecture of the virtual environment constrains the interactional possibilities. Paraphrasing Boden, it is not how the world is, but how it is represented as being, that is crucial with regard to the truth of intentional statements (Boden, 1978). In order to make sense of the actions of the user, and of other virtual agents in the virtual environment, in intentional terms, we need to be able to ask and answer questions of the environment. The architecture we describe arose from attempts to use flight simulators as knowledge acquisition tools to elicit rules of pilot performance, with the eventual aim of creating agent rule bases to provide artificial agents as opponents and allies in human-in-the-loop simulation, and to represent crew behaviour in the operations research models of air engagements. The insight driving this was that we have access to the image generator object database and can include in it labels for objects as well as rendering information for visual displays. We can then create a data record, on a frame-by-frame basis if required, which relates the user actions to symbol-level descriptions of the virtual world presented to the user as imagery. This is the raw material of descriptions and perception of intentional acts in a virtual environment.
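As a sketch of this idea (the field and object names below are our own assumptions, not the system's schema), each entry in the image generator's object database can carry a symbolic label alongside its rendering data, and a per-frame record can then relate control actions and instrument state to the labelled objects currently in view.

```python
# Sketch only: a labelled object database entry and a per-frame data record
# relating user actions to symbol-level descriptions of the presented scene.
from dataclasses import dataclass, field

@dataclass
class WorldObject:
    label: str         # symbolic name, e.g. "runway-27"
    position: tuple    # world coordinates (x, y, z)
    mesh_file: str     # rendering information used by the image generator

@dataclass
class FrameRecord:
    time_s: float
    controls: dict     # e.g. {"stick_pitch": -0.05, "throttle": 0.75}
    instruments: dict  # e.g. {"airspeed_kt": 115, "altitude_ft": 950}
    visible_labels: list = field(default_factory=list)  # objects in the out-the-window view

record = FrameRecord(
    time_s=42.3,
    controls={"stick_pitch": -0.05, "stick_roll": 0.2, "throttle": 0.75},
    instruments={"airspeed_kt": 115, "altitude_ft": 950, "heading_deg": 240},
    visible_labels=["runway-27", "hangar-1", "mount-north-peak"],
)
print(record.visible_labels)
```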
3.1 Our Virtual Environment
In addition to the status of navigation instruments and past actions, pilots use knowledge of the world, both in own-ship (egocentric or view-dependent) and map-view (exocentric or view-independent) representations. Such information is critical in control and trajectory planning in the visual flight regime. We refined a workstation-based flight simulator used in machine learning of control strategies (Sammut, 1992) by rewriting it to provide the world view and by including an authentic flight interface and model. Our flight simulator differs significantly in veracity, interface and data architecture. It has the ability to record not only the actions of pilots and instrument and simulation internal status variables, but also dynamic knowledge of the world: the positions of objects and dynamic entities in the three-dimensional flight course relative to the pilot, and pilot motion (Goss, 1993; Goss, Dillon, & Caelli, 1996). The flight controls for the simulator include a control column, throttle, brakes, rudder and flaps, in a generic single-seat cockpit with rudder pedals, stick, throttle and stick-mounted switches. The switches are used for viewpoint controls, the autopilot, and mode switching and cuing of the flight simulator software. The cockpit is trolley mounted with a seat. A monitor in the trolley is used for instruments. The out-the-window view is projected onto a wall in a darkened booth with a video projector. The simulator runs using (possibly many) video projectors in a projection room. (We are in fact using the work-up and part-task psychometric simulator facilities of the DSTO Air Operations Simulation Centre, which has a variety of wheel-in cockpits for fixed and rotary-wing aircraft, and a variety of visual displays including helmet-mounted displays and a 200-degree by 100-degree partial dome.) For remote demonstration purposes we have implemented a desktop flight simulator. The workstation monitor provides the out-the-window view. A FlyBox (described at http://www.bgsystems.com/products/FlyBox.html) provides the throttle, stick and switches. A general view in the simulation community is that this level of interface is sufficient for ancillary players whose purpose is to provide agency for other players in the virtual environment, such as wingmen or incoming targets for experimental subjects. The simulation centre facility was the most acceptable to pilots. There is also a mouse and keyboard interface. In each case we can monitor the control actions of the pilot. The displayed instruments include airspeed, direction, an artificial horizon, rate of climb, throttle position, and flaps position indicators. Movement through the virtual environment is based on a six-degrees-of-freedom flight model, which uses a database validated from wind-tunnel experiments. The flight model is authentic to a particular class of aircraft, in our case the PC9, a high-performance single-engine propeller-driven aeroplane. We record the virtual world in our data structures and are able to relate operator activity to goals in the outside
world. The main additional requirements beyond a typical flight simulator are for recording the visual and geographic positions of objects and for determining their visibility. We accomplish this with a Symbolic Description Generator (SDG), which is analogous in operation to the image generator in a virtual environment. Its function, however, is to render a description of the components of the scenery and their mutual relations rather than to render the pixel image from the object database. The SDG consists of two levels. The first generates raw data which describe the positions and visibility of target points on each object in the simulation. The second, and most important, level converts these data into a rich symbolic description of the visual scene. The simulated world is a large area containing natural features such as mountains and rivers, and cultural features like buildings and runways. Objects can be static or dynamic, such as moving vehicles. The world is described through a number of object databases which are loaded via a command file. For recording descriptions of imagery, issues include the frame update rate and the rate of update of description. There are many permutations of imagery that would correspond to a single symbolic description. The scene description is insensitive to small changes in scale and ranging. The data recording rate is variable, depending on the underlying hardware and the current scene complexity. Time is calculated in absolute terms, so a varying visual update rate does not affect the subsequent symbolic processing. Side, rear and map views are available, and viewing mode changes are also logged. The object database contains a set of objects, each of which may have a number of trackable or target points on it. These points are used to give quantitative relative relationships when referencing objects. For example, when referencing a mountain, it is useful to reference the peak of the mountain, a set of points around the base, and possibly the volumetric centroid of the mountain. Multiple target points on an object also allow the observation of higher-order properties such as relative rotation of an object. The output is in the form of time-series relational statements, which can be illustrated with periodic in-place images as required. An example is shown in Figure 4. The symbolic statements refer to the visibility of objects, their spatial relations, their relationship to the centre of visual flow (COVF), the pilot controls, and the absolute position of objects and the simulated aircraft. Each variable or variable pair can be controlled to different levels of quantisation, or different thresholds for noticing change. The output can be in a linguistic variable form, or in a numeric form.
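A minimal sketch of this second level is given below, assuming our own predicate names, bearing quantisation and range bands (the actual SDG vocabulary and thresholds are not reproduced here). It turns relative geometry into linguistic relational statements of the kind logged in the time-series trace.

```python
# Illustrative second-level SDG step: quantise relative geometry into
# linguistic relational statements about a visible object.
import math

def bearing_word(dx, dy):
    """Quantise the bearing of an object relative to the centre of visual flow."""
    angle = math.degrees(math.atan2(dx, dy))   # 0 deg = straight ahead
    if abs(angle) < 15:
        return "ahead"
    return "right-of-covf" if angle > 0 else "left-of-covf"

def range_word(distance_m):
    """Quantise range into coarse linguistic bands (thresholds are assumptions)."""
    if distance_m < 500:
        return "near"
    return "medium" if distance_m < 3000 else "far"

def describe(t, label, dx, dy):
    """Emit one relational statement for a visible object at time t."""
    dist = math.hypot(dx, dy)
    return f"t={t:.1f}s visible({label}) {bearing_word(dx, dy)}({label}) range({label},{range_word(dist)})"

print(describe(42.3, "runway-27", dx=-900.0, dy=1500.0))
# -> "t=42.3s visible(runway-27) left-of-covf(runway-27) range(runway-27,medium)"
```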
4 Learning Plans
In our flight simulator work we build on the work of Sammut and co-workers (Sammut, 1996; Sammut, 1992). Their work in machine learning concerned the construction of behavioural clones from traces of operator action on a workstation running a flight simulator program. A consensus view (a generalisation across a number of operator traces) is constructed as an autopilot. This autopilot can then fly the simulator. As it encompasses general tendencies rather than recording a particular episode, it captures the underlying strategy and reduces the effect of episodic variation. The behavioural clone is a characterisation of the control strategy of the operator at the task. This work represented a significant departure for machine learning from dealing with static data and classifier tasks. From our point of view, the ability to reproduce behaviour is not the same as being able to recognise it. We use relational learning techniques to relate task descriptions to the control actions and the available information from the displays (instruments and external world view). Relational representation is a powerful technique for interpreting visual and temporal information (Bischof & Caelli, 1994; Bischof & Caelli, 1997; Pearce, Caelli, & Bischof, 1996), including the on-line interpretation of traffic intersections for legal turns and right of way (Dance & Caelli, 1993). CLARET (Consolidated Learning Algorithm based on Relational Evidence Theory) utilises the constraints present in time-series data: those of states and their continuous-valued, attributed relationships (the scenery), and actions or designs (the scenario). Relational rules are generated which explicitly depict actions and relationships between states, of the general form shown below in Figure 2. Here "While" is a bounded temporal condition upon "if". Learning While structures is a significant step in machine learning in dynamic domains. This is the key to learning intentional structures in our method, and the heart of the symbol binding relating explicitly coded rules for an interface agent to parametric bounds over the multiscaled feature space of the user interface. The data structures of the flight simulator permit the logging of both pilot actions and object positions over time, over a large number of time intervals for different flight scenarios. For coding spatio-temporal sequences, representational hierarchies must integrate the notion of what a state is together with its location relative to other states.
The General Form:
WHILE interpreting (or intending to achieve) goal_i
IF this state and that state have these relationships in space
AND this action and that action have those relationships in time
AND ...
THEN describe (or prescribe) sub-goal_j at time t.
An instance:
WHILE in context of FLYING-CIRCUIT
IF LEVEL-LEFT-TURN before CRUISING-ALTITUDE
AND LEVEL-LEFT-TURN overlaps WIND-AT-RIGHT-ANGLES
...
THEN intention is TURN-TO-CROSSWIND-LEG
Figure 2: CLARET Rules, the General Form and an Autopilot Rule
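The instance above can be read as temporal-interval conditions over recognised states. The sketch below evaluates such a rule against intervals taken from a hypothetical trace; the interval encoding and the two Allen-style tests are our assumptions about how the rule could be checked, not the CLARET machinery itself.

```python
# Illustrative evaluation of a Figure 2 style rule over state intervals.
def before(a, b):
    """Interval a ends before interval b starts."""
    return a[1] < b[0]

def overlaps(a, b):
    """Intervals a and b share some time."""
    return a[0] < b[1] and b[0] < a[1]

def turn_to_crosswind(intervals):
    """Fire the example rule if its spatio-temporal conditions hold."""
    turn   = intervals.get("LEVEL-LEFT-TURN")
    cruise = intervals.get("CRUISING-ALTITUDE")
    wind   = intervals.get("WIND-AT-RIGHT-ANGLES")
    if turn and cruise and wind and before(turn, cruise) and overlaps(turn, wind):
        return "TURN-TO-CROSSWIND-LEG"
    return None

# Intervals (start_s, end_s) extracted from a hypothetical flight trace.
trace = {
    "LEVEL-LEFT-TURN":      (30.0, 45.0),
    "CRUISING-ALTITUDE":    (60.0, 180.0),
    "WIND-AT-RIGHT-ANGLES": (20.0, 200.0),
}
print(turn_to_crosswind(trace))   # -> TURN-TO-CROSSWIND-LEG
```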
4.1 Recognising Learned Plans
In supporting intentionality in a virtual environment interface agent we anticipate user action by recognising intentions instantiated as plans under execution. In the flight simulator the aim is to predict future pilot actions, in a given time interval, from pilot actions, instrument settings, object co-ordinates and near-object characteristics at previous time intervals. What makes this a problem for machine learning is that more traditional time-series methods cannot be applied. This is because of the complexity of the data and the fact that very different variable states can require similar pilot actions over a flight path. CLARET uses a relational evidence network to deterministically order the search, resulting in an admissible strategy. Real-time performance in the flight simulator is made tractable by reducing the cardinality of the search space, using dynamic programming principles and relational evidence measures to prune the computation involved. Queries differ from indexing operations such as attribute-based lookup in that the data are relational. This is reflected in the form of the input data used, whether it be lists of attributes or relations which encode specific instantiations (combinations) of states. For example, in describing flight, an approach-to-land manoeuvre is defined by the subsequence of different roll-pitch-yaw states of the aeroplane and different actions on the control yoke. The system can now be used to interactively query partially enacted sequences in predictive mode, or to describe sequences in descriptive mode. This opens the possibility of adaptive response according to ascription of user intention by observation of system state and intentional interpretation of user behaviour. For example, the system can either advise
the pilot during the mission, or adapt the visuals and the activity of other entities in the scene accordingly. In the case of the wind changing direction, other aeroplanes may be re-routed, ground vehicles may be redeployed and the ground lighting on the runway adjusted accordingly. A parser converts the spatio-temporal rules obtained by the learning procedure into descriptions that are consistent with the syntax of the target BDI agent formalism. The output is also human-readable, allowing for interactive validation. A human-readable summary of activity which occurred during the simulation is also available for analysis by pilots and trainers.
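The following sketch illustrates predictive-mode querying of a partially enacted sequence. The plan definitions and the simple pruning rule (drop any plan inconsistent with the observed prefix and announce whatever remains) are illustrative assumptions; they stand in for, and do not reproduce, CLARET's relational evidence measures.

```python
# Sketch of predictive-mode plan recognition over a growing observation prefix.
PLANS = {
    "TURN-TO-CROSSWIND-LEG": ["LIFT-OFF", "CLIMB", "LEVEL-LEFT-TURN"],
    "FINAL-APPROACH":        ["MEDIUM-DESCENDING-TURN", "DESCEND-FULL-FLAPS", "ROUND-OUT"],
}

class IncrementalRecogniser:
    """Keeps the set of plans still consistent with what has been seen so far."""

    def __init__(self, plans=PLANS):
        self.candidates = {name: list(steps) for name, steps in plans.items()}

    def observe(self, state):
        """Consume one recognised state, prune inconsistent plans, report candidates."""
        still_alive = {}
        for name, remaining in self.candidates.items():
            if state in remaining:
                idx = remaining.index(state)
                still_alive[name] = remaining[idx + 1:]   # advance past the matched step
        self.candidates = still_alive
        return sorted(self.candidates)   # current candidate intentions

recogniser = IncrementalRecogniser()
print(recogniser.observe("LIFT-OFF"))          # ['TURN-TO-CROSSWIND-LEG']
print(recogniser.observe("LEVEL-LEFT-TURN"))   # announced before the manoeuvre completes
```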
5 Discussion
The possibility of interface agents recognising the plans of system users has been demonstrated. The construction of plan recognisors from labelled spatio-temporal sequences of previous user behaviour has been described. This section addresses issues of the virtual environment interface.
5.1 Task Validity and Fidelity Issues
Fidelity and face validity of the task are significant issues in the provision of synthetic experiences for experts at the real-world activity. The fidelity of the flight model (to aircraft class) was at least as important to pilots as the fidelity of the cockpit. Providing synthetic experience is simpler in areas such as vehicle movement through space, where some interposed means between the user and the environment is the norm in the natural world, and actions are effected through controls such as throttles, rudder pedals and control yokes. If these are electronic rather than mechanical in nature, as in the fly-by-wire cockpit, then so much the better. Simulators that are adequate for computer science research into modelling
spatio-temporal expertise are not necessarily adequate for aviation-related virtual experiences. Former general aviation and airline crew did not like early versions of our system in which the flight model was unrealistic. For them the environment was not immersive. It was adequate, even with a low-fidelity flight model, to simulation technologists and aeronautical engineers with some flight training, particularly with the increasing sophistication of the cockpit and controls, owing to their familiarity with a range of fidelity in simulators and the difference in their expectations of a "flight" simulator. In work by others it has been found that task performance in maintaining flight profiles was degraded in a simulator, compared with the natural world, until the pitch changes in engine noise were rendered authentically (Oldfield, Meehan, & Goss, 1995).
5.2 Architecture Issues
With an appropriate architecture and labelling we have an environment in which computer processes can act upon "high-level percepts". If object, action and relation labels can be read from a database, it is not necessary to execute low-level perceptive processes. The same formalism that is used for the description of human operator behaviour at the cognitive goal level can be used for the implementation of computational processes in an agent-oriented system. The intentional structure gives us a description of what the user does as a template for the model of the virtual interface agents. A benefit of agent-oriented design is that new plans may be incrementally entered without interruption to the system.
5.3 Ontology Revisited
The purpose of the ontological engineering endeavour is to identify the range of possibility of user interaction. The interface agent and system architecture should support that range. The user may have experiences outside the design of the system builder, but the system will only sense that which is built into its model of the world. There is not a cogent theory of the construction of these systems on an application scale. Ad-hoc programming idioms are used. The engineering development of the internal mechanisms of agent-oriented simulation environments is in advance of theory. In knowledge-based systems such as KADS (Schreiber et al., 1993), structures for the organisational context are in the conceptual schema, but are employed as "fix-its" rather than as the cornerstones of systems design. The least well developed aspect of the work-in-progress reported in this chapter lies in the ontology acquisition. The task analysis methods of traditional HCI are lacking. Most work to date concerns a single user interacting across a single interface. Teamwork and co-ordination are the metaphors needed in creating virtual interface agents. The intentional structure required for the provision of a virtual team member to work with a user, such as an artificial wingman working in concert with a pilot against teams of antagonists, needs to accommodate shared intentions and dynamic co-ordination and role allocation. Agents need to model the intentional structure of other agents in order to achieve team behaviours. Modelling of
deception and competitive behaviour is even more complex, involving recursive "I believe that he believes that I believe ..." situations. The same structure is needed for the digital personal assistant in the commercial cybersphere. The statement of hierarchic plans has some similarity to those in Air-Soar (Pearson, Huffman, Willis, Laird, & Jones, 1993). Here the Soar architecture is used to control a workstation-based flight simulator through the successive application of operators within a series of sub-goals and problem spaces. These are hierarchic in nature, ranging from homeostatic goal achievement (maintain altitude) to heterostatic goal achievement (ascend, descend). Air-Soar has two views: one is centred on aircraft parameters (roll, pitch, yaw, and control settings of ailerons, elevator and throttle); the world-centred problem space has operators which achieve positioning and orientation goals (in Cartesian space). These operators give changes in rates of climb, turn and velocity. As the domain is the same and Soar is an agent architecture, this is not surprising. Our work, however, has some significant differences in the fidelity of simulation and task, and in the architecture of the simulator. The agents we construct can not only undertake activities, but can also report on the actions of other agents while those activities are under way, before completion. This is the important aspect for the design of interface agents.
6 Conclusions
The virtual environment comprises a logical map, the database, the human interface, and the stimulation, principally optical and auditory rendering channels. As designers we control the image databases and can label objects, entities and even behaviours. Our results are preliminary and do not yet fully achieve our project vision. They do, however, offer some useful insights into the recognition of plans and into the construction of plan recognisors. The process of constructing a system has alerted us to deficiencies in current theory in task analysis and in the design and construction of simulators. A focus on the data record, and on techniques for its creation, analysis and manipulation as part of the design of the technologies of sensate immersion, opens up simulator systems and virtual environments as a new observational environment for human factors and psychology. The results we have achieved are a significant step toward the use of the virtual environment for the capture of procedural expertise, and affirm our belief in the utility of (computational) cognitive folk psychology in constructing systems. They demonstrate a method of recognising intentional actions whilst in the process of execution.
7 References
1. Bischof, W., & Caelli, T. (1994). Learning structural descriptions of patterns: A new technique for conditional clustering and rule generation. Pattern Recognition, 27, 689-698.
2. Bischof, W. F., & Caelli, T. (1997). Visual Learning of Patterns and Objects. IEEE Transactions on Systems, Man and Cybernetics, Part B, 27(6), 907-918.
3. Boden, M. (1978). Purposive Explanation in Psychology. The Harvester Press.
4. Georgeff, M. P., & Lansky, A. L. (1986). Procedural Knowledge. Proceedings of the IEEE (Special Issue on Knowledge Representation), 74, 1383-1398.
5. Goss, S. (1993). An Environment for Studying Human Performance and Machine Learning. Paper presented at the Second Australasian Cognitive Science Conference (Cog Sci-93), Melbourne.
6. Goss, S., Dillon, C., & Caelli, T. (1996). Producing Symbolic Descriptions of Generated Imagery. Paper presented at the First International Meeting in Simulation Technology and Training (SimTecT), Melbourne.
7. Kaminka, G. A., & Tambe, M. (1997). Towards Social Comparison for Failure Detection. Paper presented at the Fall Symposium on Socially Intelligent Agents, MIT.
8. Miller, C., & Goldman, R. (1997). "Tasking" Interfaces: Associates that Know Who's the Boss. Paper presented at the 4th Joint GAF/RAF/USAF Workshop on Human Computer Teamwork, Hotel zur Post, Kreuth, Germany, 23-26 September 1997.
9. Noy, N. F., & Hafner, C. D. (1997). The State of the Art in Ontology Design. AI Magazine, 18(3), 53-74.
10. Oldfield, S., Meehan, J., & Goss, S. (1995). Manned Simulation: Research Challenges and Issues for Aviation Psychologists. Paper presented at the 3rd Australian Aviation Psychology Symposium, Sydney, Australia.
11. Pearce, A., Caelli, T., & Bischof, W. (1996). CLARET: A New Relational Learning Algorithm for Interpretation in Spatial Domains. Paper presented at the Fourth International Conference on Control, Automation, Robotics and Vision (ICARV'96), Singapore.
12. Pearson, J., Huffman, S. B., Willis, M. B., Laird, J. E., & Jones, R. M. (1993). A symbolic solution to intelligent real-time control. IEEE Robotics and Autonomous Systems, 11(3-4), 279-291.
13. Rao, A., & Murray, G. (1994). Multi-agent mental-state recognition and its application to air-combat modelling (50). Melbourne: Australian Artificial Intelligence Institute.
14. Rao, A. S., & Georgeff, M. P. (1991). Modeling rational agents with a BDI-architecture. Paper presented at the Second International Conference on Principles of Knowledge Representation and Reasoning.
15. Sammut, C. (1996). Automatic construction of reactive control systems using symbolic machine learning. The Knowledge Engineering Review, 11(1), 27-42.
16. Sammut, C., Hurst, S., Kedzier, D., & Michie, D. (1992). Learning To Fly. Paper presented at the Ninth International Conference on Machine Learning, Aberdeen.
17. Schreiber, G., Wielinga, B., & Breuker, J. (Eds.) (1993). KADS: A Principled Approach to Knowledge-Based System Development. Academic Press.
18. Tambe, M. (1997). Towards Flexible Teamwork. Paper presented at the Fall Symposium on Socially Intelligent Agents, MIT.
19. Tambe, M., Johnson, W. L., Jones, R. M., Koss, F., Laird, J. E., Rosenbloom, P. S., & Schwamb, K. (1995). Intelligent Agents for Interactive Simulation Environments. AI Magazine, 16(Spring).
20. Tidhar, G., & Busetta, P. (1996). Mental State Recognition for Air Mission Modeling. Paper presented at the TTCP X-4 Workshop in Situation Assessment, Naval Research Laboratory, Washington DC.
8 Author Biographies
Dr Simon Goss is a Senior Research Scientist at the Air Operations Division, Aeronautical and Maritime Research Laboratories, DSTO. He leads the enabling research in AOD in agents and modelling. He has chaired international workshops on situation awareness and plan recognition, and the national cognitive science conference. His interests include Knowledge Acquisition, Knowledge Modelling, Agent-Oriented Simulation, Expert Systems and Operational Simulation. He is a member of the ACM, ACS, AAAI, IEEE and AORS. Dr Adrian Pearce is a lecturer in the School of Computing at Curtin University in Western Australia. He completed a Postdoctoral Research Fellowship at Curtin in 1997, funded by the Air Operations Division, Aeronautical and Maritime Research Laboratories, DSTO, using machine learning in operational flight simulation. He is currently engaged in a three-year collaborative research contract with the Maritime Operations Division, AMRL, DSTO Stirling, Western Australia. Dr Pearce's interests lie in the areas of machine learning, interpreting real-time behaviour and agent-oriented simulation. Clint Heinze graduated from the Royal Melbourne Institute of Technology (RMIT) with an Aerospace Engineering degree (Hons) in 1990. He joined the Defence Science and Technology Organisation (DSTO) in 1990 and works primarily on methodologies in support of cognitive modelling. Recent work has centred on the development of models of fighter pilots for simulation, research into methodologies for the engineering of multi-agent systems, and research at the boundaries of cognitive science, agency, and software engineering. He is currently completing a Ph.D. in the Department of Computer Science and Software Engineering at the University of Melbourne.