In Proceedings of the 11th Conference on Computer Generated Forces and Behavioral Representation, pp 157-167. Institute for Simulation and Training, University of Central Florida, Orlando, Florida. 2002.
VISTA: A Generic Toolkit for Visualizing Agent Behavior

Glenn Taylor, Randolph M. Jones, Michael Goldstein, Richard Frederiksen, Robert E. Wray, III
Soar Technology, Inc.
3600 Green Court, Suite 600
Ann Arbor, MI 48105
734-327-8000
{glenn, rjones, mgoldst, rdf, wray}@soartech.com

Keywords: CGFs, Agents, Visualization, Behavior Explanation.

ABSTRACT: In order for agent-based technologies to play an effective role in simulation, training, and synthetic force applications, they must generate realistic and believable behaviors that are virtually indistinguishable from what a human actor might do under the same circumstances. However, systems that are capable of such complex behavior necessarily include complex internal representations and complex interactions with their environments. These complexities make it difficult to examine and evaluate an agent's internal reasoning processes without extensive knowledge of the technical details of the agent's implementation. The difficulty of conveying an agent's internal reasoning processes to non-technical users, and consequently of demonstrating the accuracy of the resulting behaviors, is a large hurdle in the acceptance of agent-based systems. To address this challenge, we have developed the Visualization Toolkit for Agents (VISTA), which can be used to build visualization tools that provide insight into an agent's internal reasoning processes. Such tools allow agent developers, subject-matter experts, and training supervisors to verify the correctness of an agent's behavior without delving into the technical details of its implementation. VISTA is a generic infrastructure that makes as few commitments as possible to a particular problem domain or agent architecture, and it aims to make it easy for agent developers to construct visualization tools for their particular agent technology. In this paper we describe VISTA and illustrate its usefulness by presenting the Situation Awareness Panel (SAP), a particular instantiation of the VISTA framework that we used to examine the behavior of Soar agents operating in the tactical air combat domain.

1. Introduction

This paper describes a toolkit, called the Visualization Toolkit for Agents (VISTA), for building tools to visualize the internal reasoning processes of intelligent agents, and a prototype visualization tool built from this toolkit. A key goal of this research is to develop a visualization framework that supports, with minimal effort, a variety of intelligent agent architectures, knowledge representations, and task domains. For this reason, this project focuses on identifying representational structures that are common to most or all agent architectures. On that basis, we have constructed a component-oriented framework for run-time (and post-run-time) visualization of the internal processing of generic agents in interactive, multi-agent applications. Understanding that we cannot hope to represent all the nuances of each agent type and application area, we have also emphasized creating a toolkit that is easily extensible. This paper presents the motivation for creating a generic toolkit for agent visualization, describes the framework we have developed, and illustrates the toolkit with a prototype that works with a fairly complex intelligent agent system.

2. Background
We entered into this project with significant experience with a particular type of intelligent agent, a particular agent architecture, and a particular domain. Our primary agent system is TacAir-Soar [13], based on the Soar architecture [16], which serves the dual purposes of a constrained cognitive architecture and a high-performance, real-time expert system. A TacAir-Soar agent operates as a human-like "synthetic pilot" in the Joint Semi-Automated Forces (JSAF) simulation environment [8], a system that provides very realistic simulation of vehicles, weapons, and sensors, and that supports large-scale distributed training exercises and experiments. A typical application includes perhaps dozens to hundreds of instances of TacAir-Soar agents, each performing a different mission in a virtual
environment that includes multiple interacting agents, as well as human participants. Any human participant in the simulation is not supposed to know whether the agents they are interacting with are real people in real aircraft, real people in flight simulators across the country, or autonomous intelligent agents. If the intelligent agent behavior is not believable, the results will be judged invalid. Because it is incumbent on TacAir-Soar to demonstrate human-like behavior, these are not "light" agents [5][17]. Each individual agent includes a large amount of knowledge, and its decision-making at any point in time can be complex. Indeed, TacAir-Soar agents can fly some 30 different types of simulated aircraft performing a variety of missions including air-to-air, air-to-ground, refueling and tactical controller missions. This domain knowledge is encoded in over 7500 Soar production rules. A key part of expert behavior for human or synthetic combat pilots is to build situational awareness [9]. The need for situational awareness arises from the fact that it is impossible to sense the environment perfectly. An agent cannot make the most intelligent decision for a situation, unless it understands, as accurately as possible, its current situation. This understanding requires not just sensing, but coordination and correlation of sensing over time to form internal assessments of situations. Thus, an important part of this task domain involves being able to build and maintain as accurate an internal representation of the environment as possible. The task of maintaining situational awareness is made more difficult by the fact that air combat is a dynamic task involving a large number of interacting participants. A large part of building situational awareness involves reasoning about the many other agents within the environment. In the tactical air combat domain, an agent's situational awareness includes information such as waypoints and routes, the position of the aircraft's radar, the orientation of the aircraft, current weapons available, and visual and radar contacts, as well as contacts the agent has only been told about. TacAir-Soar agents also have intentions they are pursuing that represent different stages of their planned mission, as well as reactions to pop-up threats. Another important aspect of flying aircraft in the tactical air combat domain is keeping a mental history of milestones along the course of the mission, such as takeoff, landing, identification of contacts, missile firings, etc.

Figure 1: An early prototype of the Situational Awareness Panel showing the agent's goals, important milestones, vehicle information, and other agents in the environment.

One of the worst things that can happen is for trainees to receive negative training due to unbelievable behavior on the part of the agents. To gain acceptance
in these situations, the agents' behavior must be transparent enough for domain experts to understand what the agents are doing. This requirement, in large part, drove our interest in visualization tools. When an agent generates behavior that seems "wrong" or "unbelievable", we need to determine whether the behavior reflects a bug in the system. If we determine the behavior does not represent a bug, we need to convince the domain experts that the agent is behaving appropriately. A typical example involves an agent flying to intercept an enemy contact. It may happen that the agent gets within weapons range, but does not fire its weapons. An observer may complain that the behavior is incorrect. Without visualization tools, we must use debugging techniques and source code to analyze the agent behavior and figure out what is going wrong. With visualization tools, the cause of seemingly aberrant behavior can be easily diagnosed. In the above case, we may discover that the agent simply has not been granted permission to fire, signified by a milestone appearing on the visualization display, and a graphical annotation of the target. In this case, we can easily convince the users and customers that the agents are actually behaving appropriately. The issues of situational awareness, multiple agents, and human-like competence and interaction raise a number of challenges for building successful intelligent agents. TacAir-Soar represents our answer to these challenges. However, creating and fielding such intelligent agents also raises challenges beyond those of knowledge representation and behavior generation: How can an agent developer diagnose faulty agent behavior unless the developer has access to the agent's current internal model of situational awareness? How can an operator judge, at any particular moment, whether an agent is performing the correct actions for the correct reasons? How can a potential customer judge whether the agent model generates realistic behavior, and will not lead to negative training? Our initial attempt to answer these questions led to a proof of concept version of an agent visualization tool, called the Situational Awareness Panel (SAP), tailored to visualize the reasoning processes of TacAir-Soar agents [12]. The visualization contains much of the situational awareness information mentioned above in various forms. A snapshot of the early prototype appears in Figure 1. The SAP indicates the currently active agent's goals in the upper-left, and time-stamped milestones such as takeoff and the detection of contacts in the lower-left. The display on the right shows current flight indicators such as airspeed. The large panel on
the right displays all the other agents the agent knows about, including representation of their direction, speed, and whether they are friendly or hostile. The prototype SAP demonstrated in a very tangible form the effectiveness of such a visualization approach. However, its initial implementation was very much wedded to TacAir-Soar. In particular, we discovered that building an SAP for agents in other domains would leverage little of the actual experience and software we had developed for the TacAir-Soar SAP. Therefore, we began to concentrate on a general framework for supporting visualization across domains and architectures. The following section details some specific goals for a general framework to support such visualization.
3. Project Goals

First and foremost, the goal of this project is to develop a framework for visualizing the internal state and representations of an intelligent agent. As mentioned previously, we expect visualization tools to be helpful to a variety of technical communities, including developers, users, and customers. To users of agent-based systems, intelligent agents are often black boxes. The acceptance of agent-based systems is often based on the ability of a non-technical user to understand the behaviors of the agents. In our experience in the military modeling and simulation community, agents that operate as participants in training exercises are more likely to be used and accepted if they can be shown to be performing the right behaviors for the right reasons. Similarly, if someone is being trained in a simulation environment that includes intelligent agents, that trainee is more likely to receive positive training if he or she is confident in the abilities and correctness of the participating agents. Visualizing an agent's external behavior in an environment is necessary (and compelling), but not sufficient for users to understand and accept that behavior as correct. Allowing a user to interact with the agent, to ask it questions about its purposes, what it knows and how it knows it, is also an important aspect of a visualization tool. To these ends, a visualization tool ought to answer the following types of questions: Why is the agent doing X? Why isn't the agent doing Y? Why is X the right thing to do? Depending on the task domain, the agent architecture, and the particular questions being asked, there will be different ways to best provide the answers. The agent visualization framework should support as many
potential paradigms as possible, in order to anticipate these different types of answers. This requirement entails a number of more specific goals for the visualization toolkit.

3.1 Architecture Independence.

The visualization framework should be generic enough to interoperate with most existing agent systems, each with its own different assumptions about what an agent is and does. We cannot generalize over all possible agent architectures. We have instead attempted to identify a level of abstraction common to a broad range of agent architectures, but also left room for developers to use the core framework and add their own representations as they see fit. Thus, although we describe a particular visualization tool in a later section, we are primarily developing a generic toolkit from which a family of visualization tools can be developed. The toolkit, VISTA, provides agent developers with a set of components and widgets that correspond to a core set of representations that should be applicable to a wide variety of agents. The framework also provides a measure of extensibility to unique or special-purpose styles of agent architecture. VISTA therefore provides an Application
Programmer's Interface (API) for connecting to various types of agents, as well as a clear set of constraints for which components of the toolkit must be adapted to accommodate new agent architectures. A key part of architecture independence is representation independence. Most agent architectures will include a common set of constructs, such as beliefs, goals, intentions, perceptual-motor systems, and internal representations of other agents. However, we also recognize that each architecture may give these constructs different names, or implement them in very different ways. Therefore, the job of the toolkit is to provide a clearly defined abstract knowledge representation (AKR), to which particular agent architectures can map their own representational constructs. Additionally, some architectures may include unique or special-purpose representational constructs, so the AKR must be extensible. To determine an initial set of components for the AKR, we analyzed a number of different agent architectures, including BDI-style architectures [18], Soar [16], ACT-R [2], COGNET [21], GOMS-style architectures [7], and finite-state machines [6]. Despite some important differences in each of these performance architectures, there is a great degree of overlap in the types of knowledge representations these architectures use. Based on our analysis, we have been able to construct a core library of abstract knowledge components, which provide the basis for visualization in the prototype tool that we describe later. These knowledge components include goals, milestones, self-knowledge, and other agents, among others.

Figure 2: A schematic of the generic visualization toolkit (VISTA), showing the agent architecture with its task knowledge and amplified task knowledge, the interface to the AKR, the Abstract Knowledge Representation, the knowledge view interface, data loggers, graphical viewers, and domain-specific view components, all built on the VISTA infrastructure.
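Although the paper does not define VISTA's internal class structure, the following minimal Java sketch suggests what an abstract knowledge component with a handful of attributes and attached view components might look like. The class and method names (AkrComponent, AkrView) are illustrative assumptions, not VISTA's actual API.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical view interface: a view component is notified whenever an
    // attribute of the knowledge component it observes changes.
    interface AkrView {
        void attributeChanged(AkrComponent component, String attribute, Object value);
    }

    // Hypothetical abstract knowledge component: a typed repository of
    // "activated" data the agent is currently reasoning about.
    class AkrComponent {
        private final String id;    // e.g., "R1" for a radar contact
        private final String type;  // e.g., "agent", "goal", "milestone"
        private final Map<String, Object> attributes = new HashMap<>();
        private final List<AkrView> views = new ArrayList<>();

        AkrComponent(String id, String type) {
            this.id = id;
            this.type = type;
        }

        void addView(AkrView view) {
            views.add(view);
        }

        // Setting an attribute corresponds to a command in the knowledge-
        // instantiation language described in Section 4 (e.g., "R1 setRange 30000").
        void setAttribute(String attribute, Object value) {
            attributes.put(attribute, value);
            for (AkrView view : views) {
                view.attributeChanged(this, attribute, value);
            }
        }

        Object getAttribute(String attribute) { return attributes.get(attribute); }
        String getId() { return id; }
        String getType() { return type; }
    }

In this sketch the model (AkrComponent) knows nothing about how it is drawn; any number of view components can attach to it, which is the model-view separation the next section builds on.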
3.2 Domain Independence.

Another goal of VISTA is to support agents that operate in a wide variety of application areas and task domains. It seems clear that the best visualization techniques for some domains will not be the best for other domains. Therefore, the toolkit must provide methods to view components of the AKR in domain-
dependent ways. Our approach to this requirement is to impose a model-view-controller [15] architecture on the AKR. AKR components are abstract entities that encapsulate an agent's internal representations. Each AKR component may have one or more associated view components that allow visualization in the most useful ways. As the toolkit is more widely used, we anticipate collecting a library of view components (as well as AKR components) that should foster reuse, and reduce overhead in new applications of the toolkit.

3.3 Traceability.

In order to provide full understanding of the behavior of an agent system, a user or developer needs to know not only what the agent is thinking, but also why the agent is thinking it. This means that VISTA must support the ability to trace active internal representations to the processes or knowledge that created them. For a fully documented system, it would also be desirable to trace the actions of the agent back to some sort of source documentation or requirements. This is not to say that every agent system must provide such traceability information. Rather, we simply recognize that VISTA ought to support such functionality. The need for traceability imposes a requirement for the AKR to include components representing reasons, alternatives, and source material. The specific content in instances of these components will be dependent on the task domain and agent model. However, any visualization tool that wishes to provide traceability information will need something like these constructs, so we make them a fixed part of the toolkit's AKR. As with all AKR components, the traceability components can be associated with a variety of different types of view components, to provide various modes of interactivity and question answering. There is a question of whether the toolkit ought also to provide automatic methods for generating explanations from knowledge bases (e.g., [11]). The methods for explaining agent behavior that have been investigated so far seem rather closely tied to particular styles of representation and processing, and we are not ready to limit the applicability of the toolkit by committing so strongly to any particular agent paradigm. Certainly, particular visualization tools developed within our framework may include automated methods for generating explanations and links to source documentation. They merely need to create appropriate instances of the reason, alternative, and source components at the right times.
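As a concrete illustration, and again using the hypothetical class names from the sketch in Section 3.1 rather than VISTA's actual API, a traceability component might simply be a specialized knowledge component whose attributes point back at what it explains:

    // Hypothetical traceability component: records why another component (for
    // example, a goal) is active, and where that justification comes from.
    class ReasonComponent extends AkrComponent {
        ReasonComponent(String id, String explainsId, String explanation, String source) {
            super(id, "reason");
            setAttribute("explains", explainsId);      // id of the goal or action being explained
            setAttribute("explanation", explanation);  // human-readable justification
            setAttribute("source", source);            // pointer to doctrine, requirements, etc.
        }
    }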
3.4 Log and Replay.

The final specific goal for the visualization toolkit is to provide the facility to visualize agent behavior in both on-line and off-line modes. It is useful for users to interact with agents while they are running. However, it is also useful to be able to log an agent's reasoning and behavior in order to replay and analyze the reasoning later. Thus, VISTA offers a log-and-replay facility, which provides a variety of control components for navigating the trace of an agent's past performance, including the ability to replay portions of an agent trace even while the agent continues to act and log its actions.
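The paper does not describe how the trace is stored; one simple possibility, sketched below with hypothetical names, is to timestamp each incoming knowledge-instantiation command and append it to a trace file that a playback controller can later step through, or follow while the agent is still running.

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Hypothetical data logger for log-and-replay: every command received from
    // the agent is recorded with a timestamp so it can be re-issued later at
    // (or faster than) its original pace.
    class AgentTraceLogger {
        private final PrintWriter out;

        AgentTraceLogger(String traceFile) throws IOException {
            this.out = new PrintWriter(Files.newBufferedWriter(Paths.get(traceFile)));
        }

        // Record a raw command string (the command language is described in Section 4).
        void log(String command) {
            out.printf("%d %s%n", System.currentTimeMillis(), command);
            out.flush();  // flush so a replay viewer can follow the live trace
        }

        void close() {
            out.close();
        }
    }

Replay then amounts to reading the trace back and feeding each command to the same command interpreter that handles live input, pausing between commands according to the recorded timestamps.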
4. Toolkit Structure

Figure 2 shows a schematic of the various pieces of the generic visualization toolkit. It begins with a placeholder for the intelligent agent architecture. An agent model will include some task-dependent knowledge that the architecture will execute to produce intelligent action. These two pieces, architecture and agent knowledge, are independent of the toolkit itself, but they have an impact on any application built with the toolkit. The next component includes amplified task knowledge. This includes additional agent knowledge not strictly necessary for the agent to perform its task, but that enables the agent to give extra insight into its decision-making. It is entirely up to the agent developer to decide what (if any) amplified task knowledge an agent may require. However, the goal of visualizing agent behavior in detail usually requires the engineering of some additional supporting knowledge. Our decision to use agent task knowledge to provide status feedback is in contrast to systems, such as OVERSEER [14], that attempt to deduce agent state from observations of behavior. OVERSEER provides a "monitor agent" that observes agent communication events to recognize a team member's current plan. Our approach requires no inference from observation to determine an agent's internal state; in effect, agents use the visualization tools to provide direct inspection of their goals, plans, and state. However, unlike OVERSEER, our approach does require additional agent knowledge and additional communication overhead. The AKR interfaces a particular agent model to a visualization tool built in VISTA. Every agent that wishes to interface with the toolkit will require an interface layer that translates internal knowledge representations into the abstract knowledge representation defined by VISTA. To maximize the ease of interfacing with the toolkit, we have developed a relatively simple network command language for creating, manipulating, and destroying abstract knowledge components inside the visualization
tool. This interface requires that the agent architecture include minimal capabilities for networking and adhere to an object-oriented knowledge-instantiation language that we have created. The command language for instantiating and activating abstract knowledge components is similar to Java's Remote Method Invocation (RMI) package [10]. Although VISTA is written in Java, we chose not to use Java RMI because that would require integration of the Java Virtual Machine into any agent architecture that wished to interface with the toolkit. To maximize ease of interfacing with the toolkit, we chose an option that is perhaps less standardized, but imposes less of a burden on agent developers. The knowledge-instantiation interface accepts strings of text that represent commands for creating and configuring AKR components. As an example, suppose that an air-combat intelligent agent is currently reasoning about another agent in its environment. Based on sensed information, the system decides that the other agent is an enemy (red) MiG-29 aircraft at a range of 30000 meters, heading 30 degrees, bearing 170, altitude 5000 meters, flying at 500 meters/sec. To enable this information to be visualized, the agent model must translate its internal representation into a series of abstract knowledge commands similar to the following:

    root create agent R1
    R1 setRange 30000
    R1 setHeading 30.3
    R1 setBearing 170
    R1 setSpeed 500
    R1 setAltitude 5000
    R1 setType fwa
    R1 setSubtype mig-29
    R1 setForce red

This example illustrates the relative simplicity of the AKR. Every abstract knowledge structure is represented by a single component or object. To activate a structure, the agent merely needs to send a command to create an instance of that component, and then configure that instance by setting values for its attributes. The example begins by creating a knowledge instance of type agent, and then sets the particular known properties of that agent. VISTA's view components (described later) determine how this information ultimately is presented to a user viewing the agent. The Abstract Knowledge Representation (AKR) itself consists of a library of typed components or objects, each with a well-defined collection of attributes. These components are in many ways merely repositories of "activated" data, which the agent is currently reasoning about. The toolkit comes with a small set of very
general knowledge components, such as goals and agents. Clearly, the attributes of these components are likely to be dependent on the agent architecture and the application domain. Similarly, different application domains will include different types of knowledge constructs that are worthy of display. For example, in the air combat domain it is useful to be able to view what the agent believes to be true about its cockpit radar. This may entail the creation of AKR components for radar and radar-blip. Because each of these components is a self-encapsulated collection of data, independent of any view or control components, adding any necessary new knowledge components is a relatively trivial matter. As the toolkit is used for a wider variety of agent types and domains, the reusable library of AKR components will grow until it satisfies many needs. Every AKR component also defines an interface to a set of view components, which determine the various ways that a user can visualize that particular type of knowledge unit. For many applications, there may be only one viewer for each type of knowledge component; others may have multiple viewers. As a simple example, in the air combat domain it is sometimes useful to view the agent's environment from an "aircraft-centric" view, where everything is drawn relative to the nose of the aircraft. In other cases, it is useful to use an "absolute view", where the top of the screen always represents "north". Similarly, for multi-agent applications, it is often useful to view the environment from the perspective of different agents at different times (or together, at the same time, using multiple simultaneous views). External agents should be drawn differently, depending on what types of agents they are (e.g., aircraft, tanks, soldiers, ships, etc.). The object-oriented nature of the toolkit's knowledge and view representations easily supports all of these possibilities. Using the interfaces defined by the AKR components, a developer is therefore free to create whichever view components are most useful. In addition, not all "view" components need to draw things on the screen. It is also possible to create viewers that generate sounds, or that simply log data as it changes. Underlying all of these configurable components is the basic infrastructure provided by the visualization toolkit. This infrastructure consists of a set of Java classes that handle various features that do not need to be configured for new agents and task domains. These features include:
- The network interface for accepting knowledge instantiation and maintenance commands
- The command interpreter for creating and instantiating AKR components (sketched below)
- A fixed set of button and menu widgets for controlling an instantiation of the toolkit
- The log and replay facility (including data-logger components and playback control components)
- A debugging console for viewing objects internal to the toolkit
- A set of data objects and views that are generic enough to provide a basic level of functionality across a wide range of domains
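To make the command interpreter concrete, here is a minimal Java sketch of how the text protocol illustrated above might be parsed into the hypothetical AkrComponent objects from Section 3.1. The class name AkrCommandInterpreter and the handling of only two command forms are assumptions for illustration, not VISTA's actual implementation.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical command interpreter: parses lines of the knowledge-instantiation
    // language (e.g., "root create agent R1", "R1 setRange 30000") and maintains a
    // registry of the resulting knowledge components.
    class AkrCommandInterpreter {
        private final Map<String, AkrComponent> registry = new HashMap<>();

        void interpret(String line) {
            String[] tokens = line.trim().split("\\s+");
            if (tokens.length == 4 && tokens[0].equals("root") && tokens[1].equals("create")) {
                // "root create <type> <id>" instantiates a new knowledge component.
                registry.put(tokens[3], new AkrComponent(tokens[3], tokens[2]));
            } else if (tokens.length == 3 && tokens[1].startsWith("set")) {
                // "<id> set<Attribute> <value>" configures an existing component.
                AkrComponent component = registry.get(tokens[0]);
                if (component != null) {
                    String attribute = tokens[1].substring(3).toLowerCase();
                    component.setAttribute(attribute, tokens[2]);
                }
            }
            // A full interpreter would also handle commands for destroying
            // components and would report malformed input to the debugging console.
        }

        AkrComponent lookup(String id) {
            return registry.get(id);
        }
    }

In the real toolkit these lines arrive over the network interface; feeding the R1 example above through such an interpreter would leave a single agent-typed component in the registry, with its range, heading, bearing, speed, altitude, type, subtype, and force attributes set and ready for any attached view components to render.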
5. Visualization Case Study

The previous sections lay out the structure of VISTA. This section describes a particular visualization tool that we have developed using the toolkit. This tool is, in fact, a new iteration of the Situational Awareness Panel (SAP) prototype described previously. The role of this new prototype is two-fold:
- Provide a useful visualization tool that TacAir-Soar users can use to better understand the agents
- Provide an extensive example for agent developers who wish to adapt our visualization toolkit to their own agents, either directly or with some tailoring to their particular agent architecture or application domain

We designed the initial prototype based on early experience with JSAF users and domain experts interacting with the TacAir-Soar agents. To maximize the usefulness of the new SAP, as well as to flesh out the generality of the generic visualization toolkit, we wished to ensure that this version displays the right kinds of information from the perspective of a user, and presents that information in the most effective way. To determine the breadth of information users might want to find out about agents in the tactical air domain, we gathered requirements from those users who had experience using the first SAP prototype, as well as other general JSAF and TacAir-Soar users. Using the initial prototype, we were able to run a number of usability studies to help gather feedback. Frank Ritter's Usability Laboratory at The Pennsylvania State University supported us in this effort. Dr. Ritter's group performed formal studies using the SAP to ensure that it displays the chosen information in usable ways [19]. The studies involved experiments with users of various skill levels and familiarity with aircraft, such as a former commercial airlines pilot, domain experts such as former military pilots and controllers, and users of TacAir-Soar. For additional feedback, we were able to field the initial SAP prototype together with TacAir-Soar in a number of DoD simulation experiments.

Figure 3: The new SAP prototype is a full instantiation of the generic visualization toolkit.
The feedback from these studies was immensely helpful, both in identifying valuable improvements to the new SAP and in maximizing the usefulness and adaptability of the generic visualization toolkit. The new SAP represents a substantial use of VISTA that provides all the features of the original SAP, with a broader coverage of situational awareness information (including weapons information, more detail on other agents, and waypoint and route information), a log-and-replay capability, and traceability of agent behavior. This SAP example can be easily extended to other behavior architectures and task domains. A snapshot of the new SAP prototype working with TacAir-Soar appears in Figure 3. To demonstrate its generality, an earlier version of VISTA was used to build a visualization tool for Finite State Machine-based entities in the JSAF simulation [6]. These entities differ from TacAir-Soar agents in their internal representations about the world and even about their own reasoning. However, the domain was the same tactical air domain as for the TacAir-Soar agents, so many of the perceptions and actions were the same, and so we were able to reuse many of the visualization aspects provided by the TacAir-Soar prototype. The lion's share of the work involved mapping concepts in the FSM representations into those offered by VISTA.
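The paper does not give details of that mapping; purely as an illustration of its flavor, an adapter on the FSM side might publish the entity's current state as an abstract goal component using the command language from Section 4. The class and attribute names below are hypothetical.

    // Hypothetical adapter: each time the finite-state machine enters a new state,
    // the state is published to the visualization tool as the currently active goal.
    class FsmStateAdapter {
        private final AkrCommandInterpreter sap;  // stands in for the network connection to the SAP
        private int goalCounter = 0;

        FsmStateAdapter(AkrCommandInterpreter sap) {
            this.sap = sap;
        }

        void onStateEntered(String stateName) {
            String id = "G" + (++goalCounter);
            sap.interpret("root create goal " + id);
            sap.interpret(id + " setName " + stateName);
            sap.interpret(id + " setStatus active");
        }
    }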
6. Related Work

Agent testbeds often include visualization components as well as tools for instrumenting and evaluating agent performance. Gamebots, a testbed based on a multiplayer computer game, includes many visualization tools for the domain [1]. The information presented by the Gamebots visualization tools is similar to that provided by the visualization toolkit. However, unlike VISTA, their goal has not been to provide any kind of domain generality for the Gamebots visualization tools. Such specificity of tools to domains seems to be most often the case for domain-specific testbed systems. An exception is the Sensible Agent infrastructure, which includes tools for visualizing the internal state of Sensible Agents (the Sensible Agent Viewer) and a VRML simulation interface (SARTE) that allows an interactive view of the progression of the overall system [3]. VISTA allows a developer to configure views of an agent that integrate the communication of internal state information (e.g., currently active goals) together with external information (e.g., the location of other agents). In addition, our toolkit is not meant to provide a simulation environment interface (as SARTE does), but rather to provide an agent-centric view/interpretation of the events and situations as they
develop for any agent implemented in a range of architectures. Another agent system that has a visualization component is dMARS [20], which employs a graphical plan approach both for specifying agent behavior and for visualizing an agent's progress through its plans. Aside from the fact that VISTA makes no attempt at agent specification, our effort differs from that approach in a few significant ways. First, VISTA provides a level of abstraction above the details of the internal representation of the underlying agent system, and in doing so attempts to move away from the particulars of agent programming languages. Additionally, in developing the Situation Awareness Panel example in particular, we have focused on displaying information in terms familiar to the expected users (people familiar with the military domain). However, given the extensibility of VISTA, developing displays for showing plan execution would be straightforward, so long as the agent itself had such knowledge.
7. Future Efforts

Although we feel we have developed a strong framework for visualization of intelligent agent reasoning in the form of VISTA, there is certainly room for improvement and future work. Here we touch on some outstanding issues. One particular problem with this type of visualization is the possibility that it can introduce extra overhead to agent processing. For example, in developing the interface between TacAir-Soar and the SAP prototype, we had to determine which types of knowledge instantiation commands to send, and how often to send them. Too much information sent too often can certainly be a bottleneck in system performance. We have accepted that the use of an SAP will have some impact on the performance of the agent: the agent must take the time to send information to the SAP. However, the level of acceptable impact is dependent upon the domain and the agent in question. We have attempted to mitigate this problem, in part, by introducing the networked interface between the agent architecture and the toolkit. Aside from providing a desirable separation of components that fosters modularity, the networked interface also allows the SAP to run independently of the agent application and even on a separate machine if necessary. However, it still takes time for the agent to communicate the knowledge that needs to be visualized, and we must investigate further methods for keeping the system usable even in highly dynamic task domains. As mentioned previously, we are also interested in investigating potential methods for automating the generation of explanations for particular behaviors.
Again, the primary concern here is that such automated methods will be highly dependent on the type of agent architecture, which could defeat our goal of maintaining an architecture-independent framework. However, a possible compromise would be to create a library of tools or even design patterns for automating traceability and explanation in knowledge bases. Among our other activities, we intend to build more prototype visualization tools with VISTA. We hope to provide a representative cross-section of agent architectures and application domains, both to demonstrate the versatility of the framework, and to create a library of examples and documentation to ease the efforts of future developers. Finally, we wish to explore further the ability to create abstract representations of knowledge and procedures within agent architectures. The Abstract Knowledge Representation within our toolkit is an initial attempt to unify the wide variety of agent and cognitive architectures within a common frame of reference. We are committed to the notion that, although these architectures differ in significant and important ways, there are many representations and behaviors that agents must share simply because they are agents, and that this commonality in itself entails a number of firm consequences. We hope to investigate further agent-oriented tools that abstract across agent architectures, by pursuing a common set of constraints and representations. This interest is in the same spirit as the United States Defense Modeling and Simulation Office's effort to create a Common Human Representation Interface Standard [4].
8. Conclusions

We have presented VISTA, a generic toolkit that enables the visualization of internal reasoning and knowledge in intelligent agents. The intent of the toolkit is to be applicable across a variety of agent architectures and application domains, and to be easily extensible. To this end, the framework consists of a modular collection of components and interfaces. As a proof of principle, as well as an illustrative example, we have developed an instantiation of the toolkit, the Situational Awareness Panel, a visualization tool for TacAir-Soar intelligent agents. Experiments and field studies with this prototype demonstrate the usefulness of visualization for interactive intelligent agents. In addition, we have explored extensions of the prototype to alternative task domains and agent architectures, to verify the visualization toolkit's versatility.
9. Acknowledgements

This research effort was supported in part by contract N61339-99-C-0109 from the Naval Air Warfare
Center, Training Systems Division. We gratefully acknowledge the interest and encouragement from Dr. Denise Lyons and Dr. Harold Hawkins. Thanks to the many air combat domain experts who have helped us in the development and testing of TacAir-Soar and the visualization toolkit, especially Steve Bixler, Mark Checchio, and Don Smoot. Many thanks to Dr. Frank Ritter for his usability studies on multiple SAP prototypes. This work could not have been completed without the parallel efforts of the entire TacAir-Soar team and Soar Technology, Inc.
10. References

[1] Adobbati, R., Marshall, A. N., Schoeler, A., Tejada, S., Kaminka, G., Schaffer, S., and Sollitto, C. 2001. Gamebots: A 3D virtual world testbed for multi-agent research. In Second Workshop on Infrastructure for Agents, Multi-agent Systems, and Scaleable MAS at the 5th International Conference on Autonomous Agents, 47-52. ACM Press.
[2] Anderson, J. R., and Lebiere, C. 1998. Atomic Components of Thought. Hillsdale, NJ: Lawrence Erlbaum.
[3] Barber, K. S., McKay, R., MacMahon, M., Martin, C. E., Lam, D. N., Goel, A., Han, D. C., and Kim, J. 2001. Sensible Agents: An implemented multi-agent system and testbed. In Proceedings of the 5th International Conference on Autonomous Agents, 92-99. ACM Press.
[4] Bjorkman, E. A., and Blemberg, P. 2001. Review of the Defense Modeling and Simulation Office Human Behavior Program. Spring Simulation Interoperability Workshop.
[5] Busetta, P., Roennquist, R., Hodgson, A., and Lucas, A. 1999. Light Weight Intelligent Software Agents in Simulation. Technical Report no. 99-03. Agent Oriented Software, Ltd., Cambridge, UK.
[6] Calder, R., Smith, J., Courtemanche, A., Mar, J., and Ceranowicz, A. 1994. ModSAF Behavioral Simulation and Control. In Proceedings of the 3rd Conference on Computer Generated Forces and Behavioral Representation, 347-356. Orlando, FL: University of Central Florida.
[7] Card, S. K., Moran, T. P., and Newell, A. 1983. The Psychology of Human-Computer Interaction. Hillsdale, NJ: Lawrence Erlbaum.
[8] Ceranowicz, A., Nielsen, P. E., and Koss, F. V. 2001. Behavioral Representation in JSAF. In Proceedings of the 9th Conference on Computer Generated Forces and Behavioral Representation, 501-512. Orlando, FL: University of Central Florida.
[9] Endsley, M. R. 1988. Design and evaluation for situation awareness enhancement. In Proceedings of the Human Factors Society 32nd Annual Meeting, 97-101. Santa Monica, CA: Human Factors Society.
[10] Grosso, W. 2001. Java RMI. Cambridge, MA: O'Reilly.
[11] Johnson, W. L. 1994. Agents that learn to explain themselves. In Proceedings of the Twelfth National Conference on Artificial Intelligence.
[12] Jones, R. M. 1999. Graphical Visualization of Situational Awareness and Mental State for Intelligent Computer-Generated Forces. In Proceedings of the Eighth Conference on Computer Generated Forces and Behavioral Representation, 219-222.
[13] Jones, R. M., Laird, J. E., Nielsen, P. E., Coulter, K. J., Kenny, P., and Koss, F. V. 1999. Automated intelligent pilots for flight simulation. AI Magazine 20(1): 27-41.
[14] Kaminka, G. A., Pynadath, D. V., and Tambe, M. 2001. Monitoring deployed agent teams. In Proceedings of the Fifth International Conference on Autonomous Agents, 308-315.
[15] Krasner, G. E., and Pope, S. T. 1988. A cookbook for using the model-view-controller user interface paradigm in Smalltalk-80. Journal of Object-Oriented Programming 1(3): 26-49.
[16] Laird, J. E., Newell, A., and Rosenbloom, P. 1987. Soar: An architecture for general intelligence. Artificial Intelligence 33: 1-64.
[17] Lange, D. B., and Oshima, M. 1998. Programming and Deploying Java Mobile Agents with Aglets. Boston, MA: Addison-Wesley.
[18] Rao, A. S., and Georgeff, M. P. 1991. Modeling rational agents within a BDI-architecture. In Proceedings of the 2nd International Conference on Principles of Knowledge Representation and Reasoning, Allen, J., Fikes, R., and Sandewall, E., eds. San Mateo, CA: Morgan Kaufmann.
[19] Avraamides, M., and Ritter, F. E. 2002. Using multidisciplinary expert evaluations to test and improve cognitive model interfaces. In Proceedings of the 11th Conference on Computer Generated Forces and Behavioral Representation. Orlando, FL: University of Central Florida.
[20] Wooldridge, M. 2000. Reasoning about Rational Agents. Cambridge, MA: MIT Press.
[21] Zachary, W., Ryder, J., Ross, L., and Weiland, M. Z. 1992. Intelligent computer-human interaction in real-time, multi-tasking process control and monitoring systems. In Human Factors in Design for Manufacturability, M. Helander and M. Nagamachi, eds. New York, NY: Taylor and Francis.
Author Biographies

GLENN TAYLOR is a Research Engineer at Soar Technology, Inc. He is involved with continuing development on TacAir-Soar and several other Soar-based agent systems. He received his BS in computer science in 1994, and his MS in computer science and engineering in 1996, both from the University of Michigan.

RANDOLPH M. JONES, PhD, is a Senior Scientist and Vice President for Soar Technology, Inc., and an Assistant Professor of Computer Science at Colby College. He has previously held research positions at the University of Michigan, the University of Pittsburgh, and Carnegie Mellon University. His general areas of research include computational models of human learning and problem solving, executable psychological models, automated intelligent actors, and improved usability and visualization in intelligent information systems. He earned a B.S. (1984) in Mathematics and Computer Science at UCLA, and M.S. (1987) and Ph.D. (1989) degrees from the Department of Information and Computer Science at the University of California, Irvine.

MICHAEL GOLDSTEIN is a Software Engineer at Soar Technology, Inc. Since joining Soar Technology, he has been developing Java-based applications to support the company's agent-based systems. He earned a B.A. from Williams College in Computer Science and Chemistry in 1999 and an M.S. in Computer Science and Engineering from the University of Michigan in 2000.

RICHARD FREDERIKSEN, PhD, is a Software Engineer at Soar Technology, Inc. His work at Soar Technology, Inc., has centered on the application of game technology to various advanced simulations, and development of new agent management tools. He earned his Ph.D. in Aerospace Engineering in 1996 from the University of Michigan.

ROBERT E. WRAY, III, PhD, is a Senior Scientist at Soar Technology. Dr. Wray received a Ph.D. in computer science and engineering from the University of Michigan in 1998. His research and development experience includes work in agent-based systems, machine learning, knowledge representation, and ontology. Prior to joining Soar Technology, he held the position of assistant professor of computer science at Middle Tennessee State University.