Enhancing Teamwork Through Team-Level Intent Inference

Jerry Franke, Lockheed Martin Advanced Technology Laboratories, Camden, NJ, U.S.A.

Scott Brown, Air Force Research Labs, Dayton, OH, U.S.A.

Benjamin Bell, Lockheed Martin Advanced Technology Laboratories, Camden, NJ, U.S.A.

Henry Mendenhall, Lockheed Martin Advanced Technology Laboratories, Camden, NJ, U.S.A.

Abstract

Applications that have access to user intent and task context can support better, faster decision-making on the part of the user. In this paper, we present AUTOS, an approach to the implementation of individual and team intent inference. AUTOS uses observable contextual clues to infer current operator task state and predict future task state. Guided by the concepts of activity theory, AUTOS task models can be hierarchically organized to infer team intent. To illustrate some of the properties of AUTOS technology, we describe our ongoing work in team intent inference with a prototype system embedded in a demonstration application for a military domain.

Keywords: Intent Inference; Intelligent Interfaces; Agent Systems; Military Applications; Task Modeling

1 Introduction

Lockheed Martin Advanced Technology Laboratories (ATL), in conjunction with academic and military research partners, has developed technology for automated task context management to support a new information-on-need paradigm in command and control operations. This technology follows human progress through work tasks by monitoring human-system and human-human interaction within a command center. By tracking task progress, such systems can offer multiple benefits, including information pre-fetch, workload monitoring and balancing, and automated team coordination. Our task context management approach yields significant improvements in operator and team performance and decision-making.

2 Problem

The need for more rapid deployment of military assets and for more responsive and efficient operations has led to renewed interest in reducing the time required to execute typical Observe-Orient-Decide-Act (OODA) sequences [1]. Within Command and Control (C2) operations, the requirement for increased performance highlights the need for highly efficient information gathering and decision-making in an environment that must simultaneously contend with manning limits and an ever-expanding set of information sources. The command center of the future needs to support such "better, faster" decision-making by facilitating improved communication and coordination within the command center and by allowing rapid access to critical information. These requirements will shift the command operations paradigm from information-on-demand to information-on-need, relying less on reactive operator requests for new information and more on proactive, system-initiated provision of information. This shift will require the development of new methods to automate information collection and dissemination within the command center.

One problem that must be addressed within command centers is crew coordination. A command center is composed of a group of individuals who work as a team. Teams are vulnerable to coordination failures, especially given the coordination required to support fast, reliable, high-quality decisions, and such failures can lead to immediate disaster. Any information-on-need automation must therefore take into account team-level priorities in addition to individual user priorities. Our team has demonstrated technologies designed to address these needs and has used them to produce a proof of concept in the Theater Air and Missile Defense (TAMD) domain.

3 Team Intent Inference

Intent inference involves the analysis of an operator's actions to identify the operator's possible goals [2]. Automated intent inference offers three important advantages. First, given knowledge of an operator's current task, that operator's workload can be assessed and the focus of the operator's attention identified. Second, if a system can anticipate the operator's future actions, it can proactively support those actions, automatically preparing to handle the corresponding information and tool needs so that the appropriate information resources are at hand when needed. Third, when observed and predicted task state do not match, subsequent diagnosis can result in model learning and adaptation or in an alert of user error. We have demonstrated automatic tasking of mobile intelligent agents through intent inference as a means to significantly reduce time-to-completion for information-gathering tasks [3]. This work is described in Section 4.

Intent inference can also be used in cases involving teams, where one operator's activity influences the work of others. Team intent inference, the analysis of operators' actions with respect to the goals of the overall group, supports two major goals. First, since in most cases individuals in a command center will be working to further the goals of the team, understanding the intent of the team can aid in understanding the intent of individual members of the team. Individual and team intent are mutually reinforcing: observations of individual operators' actions can be used to infer the goals of the entire team, and knowledge of the entire team's goals can then be used to predict the actions and information needs of other individuals in the team. Second, knowledge of team intent can be used to facilitate stronger coordination among command center personnel and support improved workload balancing among team members. Individuals who appear to work toward goals that are counter to the goals of the team can be identified and alerted. Individual operators' actions and upcoming information needs can be communicated automatically among team members.

Our implementation strategy for team intent inference was guided by concepts derived from Activity Theory (AT) [4,5]. The basic unit studied by AT is the activity, a goal-directed endeavor by one or more actors aimed at a particular objective. Team intent inference involves collaborative activities, in which multiple actors take part in the attempt to reach the objective. Each actor performs one or more actions, intentional acts that contribute to approaching the activity's objective. These actions are realized through a set of operations, procedural components that, taken alone and without context, provide no intentional information.
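To make the individual-team feedback loop described above concrete, the following is a minimal sketch of combining individual goal estimates into a team-level estimate and then using that estimate to anticipate another operator's information needs. It is our own illustration, not the AUTOS implementation; the goal labels, probabilities, averaging rule, and needs table are all invented for the example.

```python
# Hypothetical sketch of the individual <-> team intent feedback loop.
# Goal names, probabilities, and the prediction table are illustrative only.

from collections import defaultdict

def infer_team_goal(individual_goal_beliefs):
    """Combine each operator's goal beliefs (dicts of goal -> probability)
    into a single team-level belief by simple averaging."""
    team_belief = defaultdict(float)
    for beliefs in individual_goal_beliefs.values():
        for goal, p in beliefs.items():
            team_belief[goal] += p / len(individual_goal_beliefs)
    return dict(team_belief)

def predict_member_needs(team_belief, role_needs):
    """Use the most likely team goal to predict what information each role
    is likely to need next (the feedback direction: team -> individual)."""
    team_goal = max(team_belief, key=team_belief.get)
    return team_goal, role_needs.get(team_goal, {})

# Illustrative observations from two operators in a TAMD-like cell.
beliefs = {
    "surveillance_op": {"assess_threat": 0.7, "plan_engagement": 0.3},
    "weapons_op":      {"assess_threat": 0.4, "plan_engagement": 0.6},
}
role_needs = {
    "assess_threat":   {"weapons_op": "current asset readiness"},
    "plan_engagement": {"surveillance_op": "updated track classification"},
}

team_goal, needs = predict_member_needs(infer_team_goal(beliefs), role_needs)
print(team_goal, needs)  # assess_threat {'weapons_op': 'current asset readiness'}
```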

Figure 1 - Composition and organization of collaborative activities (operations, given context, compose actions; collaborative activities comprise coordinated activities, whose goals are inferred from individual actions, and cooperative activities, whose goals are inferred from collective actions)

Collaborative activities can be classified into a three-tiered hierarchy. In the first tier, coordinated activities occur when individuals acting independently to achieve individual goals are not directly influenced by other individuals in the team; the objective of the activity, as perceived by the individual, is not affected by the perceived objectives of others. In the second tier, cooperative activities include actions taken by individuals who maintain a consideration of the actions of others with respect to the group objectives; the goals of an individual are necessarily defined with respect to the goals of the group as a whole. The final tier contains co-constructive activities, in which team members reassess their organization and interrelationships, and possibly redefine their goals. Because the command center operations that we wish to model are largely coordinated or cooperative in nature, our current work involves the first two layers of the hierarchy, and our intent inference architecture does not yet address co-constructive activities. See Figure 1 for an illustration of the composition and categorization of collaborative activities.
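As a minimal illustration of the activity theory vocabulary used above (activities composed of actions, actions realized by operations, and the coordinated/cooperative distinction), the sketch below encodes these concepts as simple data structures. The class and field names are our own hypothetical choices, not the AUTOS data model.

```python
# Illustrative encoding of the activity theory concepts described above.
# Names and fields are hypothetical; they are not the AUTOS data model.

from dataclasses import dataclass, field
from enum import Enum
from typing import List

class ActivityKind(Enum):
    COORDINATED = "coordinated"          # goals pursued independently by individuals
    COOPERATIVE = "cooperative"          # goals defined with respect to the group
    CO_CONSTRUCTIVE = "co-constructive"  # team redefines its organization (not yet modeled)

@dataclass
class Operation:
    """Procedural component (e.g., a keystroke); carries no intent by itself."""
    name: str

@dataclass
class Action:
    """Intentional act by one actor, realized through operations."""
    actor: str
    goal: str
    operations: List[Operation] = field(default_factory=list)

@dataclass
class Activity:
    """Goal-directed endeavor by one or more actors toward an objective."""
    objective: str
    kind: ActivityKind
    actions: List[Action] = field(default_factory=list)

# Example: a cooperative activity built from two operators' actions.
engage = Activity(
    objective="engage inbound threat",
    kind=ActivityKind.COOPERATIVE,
    actions=[
        Action("surveillance_op", "classify track", [Operation("select_track")]),
        Action("weapons_op", "assign weapon", [Operation("open_asset_list")]),
    ],
)
```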

4 AUTOS

ATL is developing Automated Understanding of Task and Operator State (AUTOS), a technology to support the information-on-need paradigm. AUTOS tracks progress through tasks in a command center setting and acts to facilitate those tasks, aiding operators without interrupting their work. The AUTOS system collects a number of observations about operator activity in real time, including keystrokes, content of information queries, and spoken dialogue among operators. These observations are coordinated with operator task models to help the system generate inferences about the operator's current tasks and goals, as well as to predict the operator's next actions. These inferences and predictions can then be used to help the operator in multiple ways, including making proactive information queries, focusing attention by informing the operator of upcoming tasks, communicating information needs to other operators, diagnosing operator errors, and reducing operator workload by balancing tasks among the crew. This streamlining of C2 tasks can support the military's need for manning reduction while improving execution in the kinds of critical situations characteristic of C2 operations.
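The following sketch shows one way a predicted next task could be turned into proactive information queries, in the spirit of the capabilities listed above. The task names, the information-needs table, and the run_query stand-in are hypothetical; they are not the AUTOS interfaces.

```python
# Hypothetical mapping from a predicted next task to proactive information queries.
# Task names, the needs table, and run_query() are illustrative assumptions.

INFORMATION_NEEDS = {
    "assess_threat":   ["track kinematics", "threat database entry"],
    "plan_engagement": ["asset readiness", "rules of engagement"],
}

def run_query(topic):
    # Stand-in for dispatching a mobile-agent or database query.
    print(f"prefetching: {topic}")
    return {"topic": topic, "status": "pending"}

def prefetch_for(predicted_task):
    """Launch queries for the predicted task's information needs so results
    are available before the operator asks for them."""
    return [run_query(topic) for topic in INFORMATION_NEEDS.get(predicted_task, [])]

pending = prefetch_for("plan_engagement")
```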

Figure 2 - Generic AUTOS architecture (components: Application, Text Information Analysis, Speech Analysis, and Task Model; labeled flows include direct observables and information queries, information interaction observables, speech observables, task state information, proactive queries, and task updates and system messages)

ATL has successfully demonstrated AUTOS technology in a prototype system, Observer, situated within a console application for a simulated TAMD cell environment. Observer applies direct observation of its host application and task inference to generate proactive information queries and to provide early task signaling. Preliminary testing showed a 20% performance improvement from the beginning of a test scenario to decision generation, demonstrating the benefits of this approach to C2 applications [3]. AUTOS technology is also currently being implemented for demonstration with the workload management component of the Navy's Multi-Modal Watch Station (MMWS) project for the SC-21 (Surface Combatant for the 21st Century) program.

5 Technologies employed in AUTOS

AUTOS technology consists of three components: the task model, direct observation mechanisms, and indirect observation mechanisms. See Figure 2 for an illustration of the AUTOS top-level structure. The task model characterizes the discrete activities that are expected to occur while performing a given task. By progressing through the model as activity is observed, a system with AUTOS capabilities can optimize information flow to best serve the operator's intent. In the Observer demonstration system, Lockheed Martin ATL employed Argus [6], a finite state machine (FSM) task-tracking tool (Argus, Watson, and Jabberwocky are experimental systems developed at Northwestern University's InfoLab). The FSM model allows for easy reading of task progress while still providing flexibility in task flow. An alternative task modeling mechanism, the Core Interface Agent (CIA) architecture, a decision-theoretic, multi-agent architecture [7,8], is being used for the MMWS demonstration. We are adopting the CIA architecture in part because its Bayesian approach offers the ability to use uncertainty reasoning to make task tracking more robust in ambiguous situations. The purpose of the CIA architecture is to provide assistance to an operator by maintaining an accurate model of the operator's interaction with AUTOS. The operator's interaction with AUTOS is reported to the CIA architecture as observations, which are used to infer what a user is doing within AUTOS. Based on knowledge of the overall environment and the current interactions with the system, CIA determines the operator's goal with the highest expected utility and offers a suggestion to the user via the AUTOS interface.
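To make the task-model role concrete, here is a minimal finite state machine task tracker in the spirit of the FSM approach described above. It is not Argus; the states, events, and transitions are illustrative only.

```python
# Minimal FSM task tracker, illustrative of the task-model role described above.
# This is not Argus; states, events, and transitions are hypothetical.

class TaskModel:
    def __init__(self, transitions, start):
        # transitions: {(state, observed_event): next_state}
        self.transitions = transitions
        self.state = start

    def observe(self, event):
        """Advance the task state if the observed event matches a transition."""
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

    def predicted_next_states(self):
        """States reachable in one step; candidates for proactive support."""
        return {nxt for (state, _), nxt in self.transitions.items() if state == self.state}

# Illustrative TAMD-flavored task flow.
transitions = {
    ("monitor", "new_track_selected"): "assess_threat",
    ("assess_threat", "threat_confirmed"): "plan_engagement",
    ("plan_engagement", "assets_assigned"): "monitor",
}
model = TaskModel(transitions, start="monitor")
model.observe("new_track_selected")    # -> "assess_threat"
print(model.predicted_next_states())   # {"plan_engagement"}
```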

The user model (which serves as the task model of the generic AUTOS architecture; see Figure 2) consists of three components: a user profile, a Bayesian network model, and a utility model. The user profile stores static knowledge about the user, including demographic data and skills. The Bayesian network model captures the uncertain, causal relationships between goals and actions. While the Bayesian network model can capture the uncertainty in the operator's actions and goals, the utility model is needed to capture the utility of having the architecture perform an action autonomously to achieve a goal. The additional information captured in the utility model allows the architecture to determine not only which goals are most probable (via the Bayesian network user model) but also with which goals the CIA should assist the operator.

Each level of the activity theory hierarchy is implemented with a separate CIA. The coordinated level contains those goals an operator can carry out without direct intervention by other operators (e.g., a cell analyst's request for strike assets). The "coordination" between one operator and another occurs via external observations (i.e., contextual observables in AUTOS terminology) and is represented as preconditions to other operators' CIAs. The cooperative level contains those goals shared by more than one crew member; that is, a goal that relies on the explicit, timely actions of more than one crew member and cannot be achieved by simple coordination of action. A CIA is implemented for team goals, where actions that satisfy the team goals are preconditions to individual operators' CIAs. See Figure 3 for an illustration of this hierarchical integration.
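A toy sketch of the decision-theoretic selection described above follows: goal probabilities (standing in for the output of the Bayesian network user model) are combined with an assistance utility so the agent helps with the goal of highest expected utility rather than simply the most probable one. The goals, probabilities, and utilities are invented; this is not the CIA implementation.

```python
# Toy expected-utility goal selection, in the spirit of the CIA description above.
# Probabilities stand in for Bayesian network output; utilities and goals are invented.

def goal_posteriors(observations):
    """Stand-in for Bayesian network inference over goals given observations."""
    if "opened_asset_list" in observations:
        return {"plan_engagement": 0.6, "assess_threat": 0.3, "report_status": 0.1}
    return {"assess_threat": 0.5, "plan_engagement": 0.3, "report_status": 0.2}

# Utility of the agent autonomously assisting with each goal
# (e.g., high payoff for prefetching engagement data, low for routine reports).
ASSIST_UTILITY = {"plan_engagement": 10.0, "assess_threat": 6.0, "report_status": 1.0}

def select_assisted_goal(observations):
    """Pick the goal with the highest expected utility of assistance,
    not merely the most probable goal."""
    posteriors = goal_posteriors(observations)
    return max(posteriors, key=lambda g: posteriors[g] * ASSIST_UTILITY.get(g, 0.0))

print(select_assisted_goal({"opened_asset_list"}))  # plan_engagement
```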

Direct observations are gathered by examining the actions the operator is taking: the various button clicks, mouse movements, and keystrokes that occur as the operator performs the task. AUTOS uses task context to resolve the semantics of these operations, each of which has meaning that can vary from task to task. For example, in the context of reading an e-mail message, CTRL-D might signal the operator's intent to delete the message, while in the context of browsing a web page with the same application, CTRL-D might signal the operator's intent to create a bookmark. An AUTOS system uses this information to help recognize task progress.

Indirect observations arise from an examination of the semantic content resulting from the operator's actions, including analyses of information that the operator displays and analyses of the verbal communication among operators within the command center. Lockheed Martin ATL has leveraged Northwestern University's Watson [9] and Jabberwocky [10] systems for context-based text and speech analysis, respectively. Watson and Jabberwocky use current task context in their analyses, and both provide input to the task model component for task state inference.

Implementing crew intent inference using the CIA architecture involves a development paradigm shift. CIA has historically been used to model an individual user interacting with a system; the concept of teams was not originally envisioned. However, we have begun work to implement team context management using the CIA architecture, using activity theory as a basis for modeling the behavior of the command center crew, as described above.
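The CTRL-D example above can be expressed as a simple context-keyed lookup. The table and function below are an illustrative sketch of how task context disambiguates a direct observable, not the AUTOS mechanism.

```python
# Illustrative disambiguation of a direct observable by task context,
# following the CTRL-D example above. The table is hypothetical.

OPERATION_SEMANTICS = {
    ("reading_email", "CTRL-D"): "delete_message",
    ("browsing_web",  "CTRL-D"): "create_bookmark",
}

def interpret(context, keystroke):
    """Map a raw operation to an intentional action, given the current task context."""
    return OPERATION_SEMANTICS.get((context, keystroke), "unknown")

print(interpret("reading_email", "CTRL-D"))  # delete_message
print(interpret("browsing_web", "CTRL-D"))   # create_bookmark
```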

Figure 3 - AUTOS intent inference hierarchy (for each application, an observation layer captures actions such as UI interaction and information flow; a per-operator intent analysis layer infers individual/coordinated goals; a shared "crew intent" analysis layer infers cooperative goals across operators)

6 Benefits

The benefits of AUTOS technology are illustrated by the 20% task speedup measured in our prototype TAMD scenario demonstration [11]. By tracking task progress and predicting upcoming task steps, proactive information searches can be launched so that critical data are available immediately when needed. For example, in our TAMD scenario, three of four mobile agent searches were launched automatically by the system just prior to the operator noticing a need for the resulting information. Our approach also allows the operator to start new tasks earlier than would be possible without AUTOS, because the system makes the information needed to start the new task available immediately. Similarly, by observing and tracking task progress across all operators within the command center, greater teamwork and coordination can be achieved, because AUTOS can recognize both team task goals and individual operators' roles in those tasks. For example, in the TAMD scenario, one operator's actions cause the system to notify another analyst of an upcoming task shift and to launch agent searches in support of the upcoming task. Such capabilities result in more rapid task completion and decision-making, as well as in improved coordination among team members.

Acknowledgments

The authors would like to thank the members of Lockheed Martin ATL's Task Context Management project: Jim Allen, Tom Geigel, Dan McFarlane, Brian Satterfield, and Frank Vetesi. We would also like to thank the past and present members of Northwestern's InfoLab who have helped us in these efforts, including Kristian Hammond, Larry Birnbaum, Jay Budzik, David Franklin, and Chris Johnson. Finally, we would like to thank Dr. Gene Santos, Jr. and Hien Nguyen of the University of Connecticut for assisting in the integration of CIA 2.0 into AUTOS.

References

[1] J.R. Boyd. A discourse on winning and losing (Report No. MU43947). Air University Library, Maxwell AFB, AL, 1987. Unpublished briefing.

[2] N.D. Geddes. The use of individual differences in inferring human operator intentions. In Proceedings of the Second Annual Aerospace Applications of Artificial Intelligence Conference, Dayton, OH, 1986.

[3] J.L. Franke, H. Mendenhall, and B.L. Bell. Enhancing teamwork through team-level intent inference. Paper presented at the IUI 2000 Workshop on Using Plans in Intelligent User Interfaces, New Orleans, LA, Jan. 2000.

[4] J.E. Bardram. Collaboration, Coordination, and Computer Support: An Activity Theoretical Approach to the Design of Computer Supported Cooperative Work. Ph.D. Dissertation, Computer Science Department, Aarhus University, 1998.

[5] J.E. Bardram. Designing for the Dynamics of Cooperative Work Activities. In Proceedings of Computer Supported Cooperative Work 1998, Seattle, WA, Nov. 1998.

[6] C. Johnson, L. Birnbaum, R. Bareiss, and T. Hinrichs. Integrating Organizational Memory and Performance Support. In Proceedings of Intelligent User Interfaces 1999, Redondo Beach, CA, Jan. 1999.

[7] S.M. Brown. Decision Theoretic Approach for Interface Agent Development. Ph.D. Dissertation, Department of Electrical and Computer Engineering, Air Force Institute of Technology, 1998.

[8] S.M. Brown, E. Santos, Jr., and S.B. Banks. Active User Interfaces for Building Decision-Theoretic Systems. In Proceedings of the 1st Asia-Pacific Conference on Intelligent Agent Technology, Hong Kong, 1999.

[9] J. Budzik and K.J. Hammond. User Interactions with Everyday Applications as Context for Just-in-time Information Access. In Proceedings of Intelligent User Interfaces 2000, pp. 44–51, New Orleans, LA, Jan. 2000. ACM Press.

[10] D. Franklin, S. Bradshaw, and K.J. Hammond. Jabberwocky: You don't have to be a rocket scientist to change slides for a hydrogen combustion lecture. In Proceedings of Intelligent User Interfaces 2000, pp. 98–105, New Orleans, LA, Jan. 2000. ACM Press.

[11] B.L. Bell, J.L. Franke, and H. Mendenhall. Leveraging Task Models for Team Intent Inference. In Proceedings of the 2000 International Conference on Artificial Intelligence, Las Vegas, NV, June 2000.