Proceedings of the 38th Hawaii International Conference on System Sciences - 2005
Integrating Field Data with Laboratory Training Research to Improve the Understanding of Expert Human-Agent Teamwork

Stephen M. Fiore, University of Central Florida
Florian Jentsch, University of Central Florida
Eduardo Salas, University of Central Florida
Irma Becerra-Fernandez, Florida International University
Neal Finkelstein, U.S. Army Simulation & Training Technology Center
Abstract

Because the execution of many complex tasks increasingly relies on human-agent teams, it is critical that we understand the processes arising from such interaction and the specific conditions affecting them. Despite findings surrounding effective interaction and coordination for expert teams in general, little is known about what is important in expert human-agent teams. Paramount to the effective utilization of human-agent teams is the appropriate blend of research to investigate the boundary conditions within which training must be tailored and delivered. In this paper we describe a representative framework through which the research community can investigate human-agent teams. Our framework blends laboratory and field research methods with training research. We describe how coordination demand analysis, in conjunction with lessons learned systems, can be used to capture critical incidents and data from expert human-agent teams performing in context, and how this information can form the foundation for effective human-agent team training.
1. Introduction

A long line of research in team performance suggests that expert teams develop a shared understanding, or shared mental model, utilized to coordinate behaviors by anticipating and predicting each other's needs and adapting to task demands [10, 12, 45]. Further, for expert teams, both tacit and explicit coordination strategies are important in facilitating teamwork processes. Explicit coordination occurs through externalized verbal and nonverbal communications, whereas tacit coordination is thought to occur through the metacognitive activities of team members who have shared mental models of what should be done, when, and by whom [21]. A team's shared mental models thus allow the team members to coordinate their behavior depending upon situational demands [49] and to effectively monitor teammate behaviors during time-stressed events. Despite findings surrounding effective interaction and coordination for expert teams in general, little is known
about what is important in effective human-agent team coordination. For this paper, agents shall be defined as ranging from computer-based intelligent decision-support systems with no or minimal anthropomorphism, to highly anthropomorphic machines, such as android robots, robotic animals, and robotic swarms or packs that display group behaviors. Our argument is that it is unclear what information human-agent teams use in order to effectively perform a given task, and how the presence of non-human team members alters what are traditionally considered to be requirements of effective teams. It may be that human-agent teamwork critically affects the development and utilization of shared mental models. Further, coordination efforts may be stifled because of the difficulty associated with conveying and/or processing information critical to performance, due, for example, to a lack of understanding of agent team members. Specifically, given the attenuation of task and team artifacts in human-agent teams, such teams may vary their interaction such that coordination behaviors become more explicit, and these modifications may decrease performance under high workload conditions [9].

Because the execution of many complex tasks increasingly relies on human-agent teams (e.g., law enforcement, emergency response, military operations), it is critical that we understand the cognitive and coordinative processes arising from such interaction and the specific conditions affecting them. Further, while this important new capability is being developed, it is imperative that our understanding of its utility not be overshadowed by the rapid rate of technological development. Thus, we need to fully understand the degree to which theory and methods applicable to human teams are appropriate to human-agent team environments. Although the utility of human-agent teams seems promising, such systems have been applied at a rate that vastly outpaces the research necessary to fully capitalize on their strengths while limiting their weaknesses. Because of this, the overarching problem facing the training and operational communities is that much ambiguity exists about how and when to best implement such teams. Paramount to the effective utilization of human-agent teams is the appropriate blend of research to investigate
the boundary conditions within which training must be tailored and delivered. As such, research must investigate questions such as the form of the information shared within such teams, as well as the manner in which it is shared. Only then can we better understand how such information management within human-agent teams has a cascading effect on expert team performance. These effects can range from attenuation of workload and metacognition to more global changes in performance. Given this pressing need to understand the issues surrounding human-agent teams, and because these environments allow for investigations of both individual and team cognition [45], research must investigate the intra- and inter-individual factors involved in human-agent collaboration and performance [48]. In this way the research community can better guide the development of technologies, procedures, and training for successful human-agent teams.
1.1. Purpose of this Paper

In this paper, we describe a framework through which the research community can investigate the design, performance, and evaluation of human-agent teams. Our framework involves a blending of laboratory and field research methods. Specifically, because understanding learning in the context of expert teams adds a level of complexity over and above that of learning with novices, we propose a framework of field-laboratory integration. This framework allows researchers to examine multiple levels of learning in complex training environments via simultaneous consideration of the attitudes, behaviors, and cognitions [32] of human-agent teams.

In sum, despite the recognized importance of understanding human-agent teams, there is a surprising paucity of research addressing how such teams interact. What the community knows about human-agent teamwork has been extracted from the study of teams in general or is based only upon laboratory studies that used ad-hoc groups. Although we acknowledge the important role laboratory experiments play in theory development, the processes created within such environments are often at a novice level, whereas team processes in operational settings require the development and utilization of a well-developed form of expertise. Capturing the expert nature of experienced human-agent teams performing in operational contexts is essential because such teams may rely upon decision and/or behavioral strategies that differ substantially from those of novices [10, 29, 30, 38, 44]. Further, the performance metrics necessary to evaluate this form of expert team need to be not only sensitive to the subtle differences emerging from these complex environments, but also broad enough to capture the range of situations faced by such teams. Thus, there is much to be learned about how human-agent teams engage in successful teamwork in complex operational environments.

We suggest that, in order to fully capture an understanding of human-agent teams, qualitative and quantitative analyses of such teams in operational contexts need to be conducted. Research in the field is essential because human-agent teamwork in context differs significantly from what is witnessed in the laboratory. Specifically, human-agent teamwork in operational settings occurs in rapidly evolving and ambiguous situations, with much time pressure and the potential for information overload [10]. By including such analyses in our framework, our goal is to converge on a better understanding of the core characteristics of human-agent teamwork.

In this paper we first briefly describe our meta-scientific perspective and how we pursue a science of learning and training having both theoretical and practical value with respect to understanding teamwork in human-agent teams. We then describe the methods within our framework that allow us to integrate field and laboratory research so as to better understand human-agent teamwork and develop a foundation of knowledge necessary for the training of such teams.
1.2. A Meta-scientific Perspective

Our overall framework is devised to facilitate the interaction of training and operational researchers so as to better integrate field and laboratory research. We suggest that, through appropriate partnerships and collaborations, findings in field settings can be more rapidly brought into the laboratory for controlled testing, and findings from laboratory studies can be more rapidly tested in the field. This overall notion is predicated on the belief that fundamental gains in a science of learning and performance associated with human-agent teams can be made by leveraging the methodologies used in the study of expert teams.

In particular, studies of experts and expert teams have effectively utilized both laboratory and field studies to examine exceptional performance. Laboratory studies rely on tasks that can repeatedly reproduce the superior performance of experts under standardized conditions. Such tasks require controlled conditions that are nonetheless representative of the contexts in which experts usually perform, so that their superior performance can be consistently demonstrated. Expertise researchers [19] suggest that the challenge is to "develop a collection of standardized laboratory tasks that capture the essential aspects of a particular type of expert performance" (p. 281). Nonetheless, researchers have also argued that "there is no sense in which we can study cognition meaningfully divorced from the task contexts in which it finds itself in the world... the experiment is an essential tool, but it must answer questions raised by
nature, and its answers must be tested against nature" [34] (pp. 19-20).

Using this overarching meta-scientific perspective, in this paper we describe one instantiation of how this can be realized within the context of expert teams consisting of humans collaborating with intelligent agents. We integrate scientific methodologies to enhance the necessary complementarities of laboratory and field research. Following Hoffman and Deffenbacher [27], we argue that "research must be conducted in various settings, ranging from the artificial laboratory, through the naturalistic laboratory, to the natural environment itself" (p. 343). This involves collaboration between scientists with differing backgrounds and specializations, with the goal of fostering a synergistic combination of approaches. From the scientific standpoint, the benefits of these approaches can be leveraged so as to improve upon hypothesis generation and testing, along with theory development in a broader context.

Figure 1 [adapted from 22] illustrates our framework, showing, at its foundation, the two distinct field methods that are proposed to capture the knowledge of expert human-agent teams. The first is an observational technique known as coordination demand analysis (CDA), developed in the training sciences. This method is similar to cognitive task analysis, but is devised to capture a broader spectrum of team information. CDA can elicit both the cognitions and behaviors of expert teams through the appropriate observation and interview of team members engaged in dynamic interaction. The second is a method devised to more specifically capture particular episodes or events critical to effective and ineffective human-agent teamwork. This aspect of our framework leverages the knowledge management techniques coming out of the information and organizational sciences [2, 5, 52, 53]. Specifically, we suggest that lessons learned systems are a viable means with which to collect the incidents expert teams experience in their dynamic environments. The knowledge gained from these methods supports a multi-process training space in which human-agent team research can proceed.

In sum, our framework leverages the skills of researchers having differing disciplinary backgrounds and therefore differing expertise in various scientific methodologies. This approach not only leverages the transfer from theory to applications, but also provides an environment in which theoretical advancements can be tested and applied in complex domains. Thus, conceptual and pragmatic advancement can proceed more quickly through the interaction of scientists and the integration of methods. Only in this way can a science of learning and training associated with human-agent teams have broad power and scope.
[Figure 1 appears here. It depicts human-agent team KSAs at the center of pre-process, in-process, and post-process training, supported at the foundation by information management, coordination demand analysis, and human-agent team lessons learned capture.]

Figure 1. Proposed research framework for human-agent teams.
2. Field Data Collection

In this section we first describe how coordination demand analysis can be used in the context of human-agent teams. We follow this with a description of knowledge management and how lessons learned systems can be used to facilitate our understanding of expert human-agent teams. In the final section of our paper we describe how these methods can be integrated within a training space [22] where the knowledge gained from our field methods can be tested in more controlled settings.
2.1. Coordination Demand Analysis

Coordination demand analysis is a common method for the identification of the coordination needs in teams [7, 8]. For example, an assessment of coordination and workload demands experienced by expert pilots using CDA [8] revealed that the highest demands for coordination were associated with tasks that imposed the highest amount of mental workload.

One recent framework that appears to hold some promise for understanding human-agent teams in this regard is the construct of team competencies [13]. Team competencies are a constellation of knowledge, skills, and attitudes (KSAs) that foster effective team interaction behaviors and performance. Furthermore, Cannon-Bowers and colleagues [13] suggest that there are different types of team competencies. Some competencies are required in every team situation. Regardless of mission or organization, these team-generic competencies are required by individual team members. They include skills such as communication and leadership. Conversely, it is suggested that other competencies are team-specific. These skills are only meaningful in specific team
situations. They include issues such as knowledge of other team members' abilities and team cohesion, and they vary from team to team. Cannon-Bowers and colleagues also suggest that team competencies are influenced by task characteristics. Some competencies are task-generic; that is, they are required regardless of the specific task being performed. Other competencies are task-specific. For example, certain competencies that are required in a search-and-rescue task may not be relevant to a logistics transport task.

Consequently, one can posit that a CDA of human-agent team tasks using the team competency framework represents a viable means with which to understand expert human-agent teamwork. It may allow the identification of competencies that are generic, and therefore required in every training situation. However, it also allows the flexibility to identify specific skills with which to tailor training to the specific needs of various organizations, platforms, and, indeed, specific teams. Thus far, however, the competency framework has been used only to describe a relatively general universe of skills. To be truly useful, the framework must be utilized in specific training contexts and yield guidance for team training that would not have emerged using traditional approaches.

In the present context, we are specifically looking at the coordination demands in human-agent team interactions. Human-agent team interaction introduces a number of issues with respect to both team and individual cognition that are distinct from traditional team interactions. For example, individuals operating in human-agent teams may have to deal with an increased level of abstraction that may place unique demands on their information processing. At the team level, interaction behaviors may become similarly opaque. As we indicated above, team coordination traditionally involves the use of explicit and tacit cues interpreted through a shared mental model possessed by the team [10, 18]. In human-agent teams, it is unclear how team coordination proceeds, given that differing communication patterns may be necessary to share cues (e.g., in traditional teams gestures may be used, but in human-agent teams explicit idiosyncratic communication may be required).

Within the aforementioned context, the goal of CDA is to garner a descriptive understanding of the ways in which human-agent teams accomplish both interdependent cognitive processes (e.g., team coordination, communication) and independent cognitive processes (e.g., problem recognition, attention allocation). To be effective, CDA would need to focus on expert teams confronted with decisions requiring input from human and non-human team members under varying task conditions. This would involve the study of the performance of real operators in situations emphasizing not only decision-making, but also the team interaction processes utilized by the experts when interacting in human-agent teams.

Nonetheless, what is additionally required are tools that enable the collection of a wide variety of critical incidents from human-agent teams with extensive operational experience. In the next section we briefly describe one such technique, which relies on knowledge capture technologies that allow for the elicitation of critical lessons learned from experienced human-agent teams. We discuss how lessons learned systems, originating in the organizational and information sciences, can be used to capture the critical incidents that are characteristic of effective and ineffective human-agent teamwork.
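Before turning to lessons learned capture, a minimal sketch may make the competency lens concrete. The following Python fragment illustrates one way CDA observations of human-agent team interaction might be coded against the team/task competency taxonomy of [13]; the class names, rating scales, and selection threshold are our illustrative assumptions, not part of the CDA method itself.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class TeamDependence(Enum):
    TEAM_GENERIC = "team-generic"    # required in any team (e.g., communication)
    TEAM_SPECIFIC = "team-specific"  # tied to knowledge of particular teammates

class TaskDependence(Enum):
    TASK_GENERIC = "task-generic"    # required regardless of the task
    TASK_SPECIFIC = "task-specific"  # e.g., search-and-rescue procedures

@dataclass
class CodedObservation:
    """One CDA observation of human-agent team interaction (illustrative schema)."""
    behavior: str                    # what the observer saw or elicited in interview
    competency: str                  # KSA label the behavior is evidence for
    team_dim: TeamDependence
    task_dim: TaskDependence
    coordination_demand: int         # rated coordination demand, e.g., 1 (low) to 5 (high)
    workload: int                    # concurrent mental workload rating on the same scale

def high_demand_generic(observations: List[CodedObservation]) -> List[CodedObservation]:
    """Select team- and task-generic competencies observed under high coordination
    demand: candidates for training content that should transfer across missions."""
    return [
        o for o in observations
        if o.team_dim is TeamDependence.TEAM_GENERIC
        and o.task_dim is TaskDependence.TASK_GENERIC
        and o.coordination_demand >= 4
    ]
```

Sorting coded observations this way would let the team-specific and task-specific cells drive tailored training for particular organizations, platforms, or teams, while the generic cell feeds common curricula.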
2.2. Knowledge Management - Capturing Lessons Learned
Learning from the past to ensure future success is a principle long espoused in military and industrial settings. Indeed, this idea gets at the core of the field of knowledge management in general and lessons learned in particular. Knowledge management (KM) is described as performing the activities involved in discovering, capturing, sharing, and applying knowledge in terms of resources, documents, and people skills, so as to enhance, in a cost-effective fashion, the impact of knowledge on the unit's goal achievement [6]. Knowledge capture systems support the process of eliciting either explicit or tacit knowledge that may reside in people, artifacts, or organizational entities [6]. Even though this was the subject of early artificial intelligence initiatives, computer and cognitive scientists are still developing mechanisms and technologies to effectively capture knowledge so it can be shared with others (see Chapter 14 in [6]).

Perhaps the earliest mechanism for knowledge capture dates back to the anthropological use of stories, the earliest form of art, education, and entertainment. Storytelling is the mechanism by which early civilizations passed on their values and their wisdom from one generation to the next. Organizations such as IBM and the World Bank have made substantial investments in support of organizational storytelling [17, 47].

Increasingly, organizations have turned to the use of lessons learned systems (LLS) as a means to capture and reuse lessons that can benefit employees facing situations that closely resemble past situations. LLS are devised to support the gathering, storage, and use of what has been described as "experiential working knowledge" [52]. For example, NASA has created a "Lessons Learned Information System" to gather and disseminate expert knowledge associated with the space program [3]. Lessons learned systems are designed to preserve knowledge that may be lost when experts leave or retire from an organization. From the organizational standpoint,
the "goal of LL systems is to capture and provide lessons that can benefit employees who encounter situations that closely resemble a previous experience in a similar situation" [52]. More specifically, a currently adopted definition [40] states that:

A lesson learned is knowledge or understanding gained by experience. The experience may be positive, as in a successful test or mission, or negative, as in a mishap or failure... A lesson must be significant in that it has a real or assumed impact on operations; valid in that it is factually and technically correct; and applicable in that it identifies a specific design, process, or decision that reduces or eliminates the potential for failures and mishaps, or reinforces a positive result.

Although KM and LLS are increasingly being recognized as important organizational tools, a persistent criticism is that the information gathered is not used as frequently or as effectively as it could be in helping meet organizational needs [1]. Specifically, reviews of KM programs often note that "lessons are not routinely identified, collected, or shared by programs and project managers" (p. 3) [24]. Notably, recent recommendations from such reviews suggest that one way to improve upon such shortcomings is to better communicate the lessons learned and to enhance the means by which these lessons are linked, as well as to ensure "appropriate training for employees in order to maximize lessons learned" (emphasis added, p. 7) [24]. Following these recommendations, we next discuss how incorporating recent advances in the science of training can bring forth increased understanding and use of LLS beyond their traditional applications.
2.2.1. Capturing Lessons Learned in Human-agent Teams

We suggest that lessons learned systems, and their associated technologies and methods, have heretofore been underutilized due to their lack of integration with training research. Specifically, from the standpoint of training, KM and LLS have not been explicitly considered within the training research community. By utilizing Intelligent Lessons Learned Systems (ILLS) in conjunction with training system design, we argue that significant gains in learning efficacy can be achieved. Conceptually, such a system could be described as a database designed to allow for input by members of the operational community. ILLS and associated knowledge capture techniques [1] may be an effective means with which to compile training content that can populate event-based approaches to training.

Such a system would integrate the capability for recently deployed human-agent teams to share their pertinent experiences. For example, the human members of human-agent teams could interact with a Web-based data capturing system in order to provide descriptions of incidents containing critical content. This, in turn, would form the foundation for later training research. The goal would be to develop a means by which the expert's operational experiences that supported, and did not support, effective human-agent teamwork can be more easily described, categorized, and made available for research and training. The accumulated experiences would be collected in a repository of lessons learned that can be called upon to guide developmental activities. ILLS can facilitate the transition to digital approaches for training as well as take advantage of emerging technologies that advance our capabilities in supporting human-agent teamwork.

Following recent advances in resource-based learning environments [25], one could devise a methodological framework around which situationally relevant human-agent team incidents can be coded for use in training scenarios [1]. Resource-based approaches to learning attempt to integrate the content that supports understanding in a given domain so that targeted concepts can be better trained. Using Web-based technologies, this content could be classified such that reuse for later training is facilitated [25]. We next describe how these methods can be used to support research into understanding human-agent teamwork and the type of training necessary to support such teams. Specifically, information gathered from the ILLS and the CDA can support research by increasing the speed at which critical issues can be investigated through systematic elicitation and categorization of lessons learned in the field.
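To suggest what such a repository record might contain, the sketch below captures one lesson against the three criteria of the definition in [40] (significant, valid, applicable), with tags supporting the resource-based reuse described above. All field names are our illustrative assumptions; an operational ILLS would need a far richer schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HumanAgentLesson:
    """One lesson captured from a deployed human-agent team (illustrative schema)."""
    summary: str                   # free-text description of the incident
    outcome: str                   # "positive" (successful mission) or "negative" (mishap/failure)
    team_composition: List[str]    # e.g., ["operator", "UAV agent", "route-planning agent"]
    task_context: str              # e.g., "urban search-and-rescue"
    teamwork_processes: List[str]  # tags, e.g., ["coordination", "information sharing"]
    significant: bool              # real or assumed impact on operations [40]
    valid: bool                    # factually and technically correct [40]
    applicable: bool               # identifies a reusable design, process, or decision [40]
    reuse_tags: List[str] = field(default_factory=list)  # indexing for later training reuse

def training_candidates(lessons: List[HumanAgentLesson]) -> List[HumanAgentLesson]:
    """Filter entries that satisfy all three criteria of a 'lesson learned' and can
    therefore populate event-based training scenarios."""
    return [l for l in lessons if l.significant and l.valid and l.applicable]
```

Records failing one of the three criteria would remain in the repository as raw incidents, available for later validation rather than immediate training use.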
3. Training Spaces and Human-agent Teams
In this final section, we illustrate how the knowledge gained from the aforementioned methods can be used in investigations of human-agent teamwork. As illustrated in the upper portion of Figure 1, two recent developments in the behavioral and information sciences are used to facilitate our understanding of training research and practice for human-agent teams (for a full discussion see [22]). The first involves a global assessment of how variables within a given "space" interact and are interacted with by an individual or team. The second involves inclusion of the often overlooked dimension of "time" as a consideration in training theory.

First, in order to aid human-agent team training, we propose to conceptualize training requirements for complex environments as a mastery of elements in a training space. A training space can be considered the analog to a "problem space" [41, 50]. Trainees move through this space as they interact with the numerous variables associated with a complex system. Only a subset of these
elements is relevant to a given individual or a particular team within a training space. Nonetheless, considerable information is often available to the trainees as they operate within this space. The question is how information within this space can be systematically managed and conveyed so that learning is facilitated.

Second, to fully capture the factors associated with learning complex tasks, we adopt recent theoretical work that recognizes the need to view training across time. Such approaches consider not a series of discrete learning events, but interconnected events where pre-, in-, and post-process factors are considered. For example, in a review of methodologies to facilitate learning and transfer, researchers have argued that "pre-practice" conditions (e.g., advance organizers) can be tailored and applied to differing learning environments [11]. More recently, researchers have argued that, to understand team performance, interaction must be conceptualized within a coordination space where pre-, in-, and post-process coordination is systematically integrated and managed [22]. We use these conceptualizations of time and space to argue that events within the training space need to be woven together such that the learner is able to experientially and cognitively link learning concepts.

The findings from the aforementioned fieldwork can form the foundational content for controlled research efforts into the development of human-agent team training. The critical incidents generated could be used to determine crucial task and interaction components for use in more controlled testing environments. Further, these would provide a rich set of data from which researchers could generate hypotheses regarding training interventions to be implemented in later experimentation. Specifically, the task and team components identified through the CDA, as well as the incidents from the ILLS, could be integrated into simulations that mimic the complexities associated with human-agent team operational environments.

In sum, the field methods can aid in our understanding of which aspects of individual and team cognition are most critically affected in human-agent teams. In this way we are better able to describe the requirements needed to accurately make decisions when team members are both human and agent, and we can identify the critical interaction behaviors of expert human-agent teams. Such information can enable the research community to outline key training issues for later study in controlled environments. Thus, using the overall training space framework, such research would utilize event-based scenarios based upon both the CDA and the ILLS, as sketched below.
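One hypothetical way to weave CDA-identified competencies and ILLS incidents into an event-based scenario is sketched next. The structure (trigger event, source incident, target competency, expected behavior, phase) is our assumption about what such a scenario record might minimally need; it is not a prescribed format from the event-based training literature.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ScenarioEvent:
    """One trigger event in an event-based training scenario (illustrative)."""
    trigger: str             # e.g., "agent reports degraded sensor confidence"
    source_incident: str     # identifier of the ILLS lesson the event was derived from
    target_competency: str   # CDA-identified KSA the event is meant to exercise
    expected_behavior: str   # observable response scored by the measurement system
    phase: str               # "pre-process", "in-process", or "post-process"

@dataclass
class TrainingScenario:
    mission: str
    events: List[ScenarioEvent]

    def events_for_phase(self, phase: str) -> List[ScenarioEvent]:
        """Trainees move through the training space phase by phase; expose only
        the events relevant to the current phase of team interaction."""
        return [e for e in self.events if e.phase == phase]
```

Because each event carries its source incident and target competency, performance on a scenario can be traced back to both the field data that motivated it and the KSA it was designed to assess.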
3.1. Information Management in the Human-agent Team Training Space

As discussed, recent research has described how teamwork in complex environments can be conceptualized within a distributed coordination space where teams interact, evolve, and mature over time and space [22]. In particular, as illustrated in the upper portion of Figure 1, the activities important to the development of expert teams occur not only during in-process interaction, but also during pre- and post-process interaction. Whereas in-process interaction occurs during actual task execution, pre-process interactions involve preparatory pre-task behaviors (e.g., a planning session) where initial shared expectations are created in anticipation of team interaction (see also [21]). Post-process interactions (e.g., an after-action review) include post-task reflection on performance [46]. These antecedent and/or consequent behaviors are critical to expert team development, and we next briefly describe how training interventions for human-agent teamwork can be viewed within this approach.

Based upon the field methods and the successful and unsuccessful human-agent teamwork behaviors and processes identified, we can begin to conceptualize a dynamic human-agent team training space. Through this, we can begin to understand how teams best navigate that space by investigating how navigation can be facilitated. Essentially, in human-agent teams, the technological components may alter and limit the interaction processes, creating the need for training interventions.

Within this context, one productive line of investigation would be research with human-agent teams examining how the overall delivery of information can best be managed. Research can examine the degree to which information management impacts this process and, by varying the amount of information managed by the agent and human team members, we can examine methods that may impact human workload. In particular, the utility of computer-mediated interaction is most notable in the fact that idiosyncratic data can be delivered depending upon the user of a particular system without affecting other team members. Research must address the appropriate management of data by determining who should receive what content in order to maximize in-process interaction. This could also be examined during pre-, in-, and post-process interaction. Specifically, the management of information can occur not only during task execution, but also at differing stages of team interaction. The objective of such research would be to develop an understanding of how to best utilize computer-mediated data flow such that information can be managed with agent technology. As an illustration, in dynamic team environments (e.g., emergency response teams), research
could address how information can be parsed to maximize adaptability [31]. These forms of information management differ due to distribution across space and team members. For example, from the standpoint of pre-process information management, described as information parsing prior to actual interaction, research could examine how pre-briefs varying the amount of description of agent tasks would alter later coordination.

When considering in-process information management, described as parsing information flow during task execution, one could investigate offloading the role of the "knowledge manager" to agent technology. The knowledge manager is a team member designated to appropriately distribute information during the operational task [26]. To the degree an agent is able to monitor team process, it may be able to effectively and accurately disseminate information in a timely fashion. Research within the training space could examine a type of "just-in-time" information management by agent team members to determine its impact on coordination and overall performance.

Last, post-process information management, described as information disseminated following interaction, could be examined through partitioned feedback and how it alters comprehension of training performance. Partitioned feedback disentangles how information is parsed for review after the fact, and a line of inquiry could investigate whether human-agent teams should receive full feedback (i.e., knowledge of all team members' performance) or only partial feedback (i.e., knowledge of only their own results).

This research would facilitate an understanding of how information management in general helps or hinders the communication process within human-agent teams. For example, from the communication and comprehension standpoint, for effective human-agent interaction to occur, both the agent and the human must be able to gauge the intent of their teammate [48]. This is a behavioral component in that verbal and non-verbal communication is used to assess intent, and the appropriate level of knowledge sharing must be determined for the correct intent to be conveyed. Through techniques such as the CDA, we can gain an understanding of how human-human communication uses standard terminology and gestures to facilitate comprehension, as well as how this may proceed with non-human team members. Without an understanding of the appropriate common ground, that is, "a common language for describing tasks" [35] (p. 386), referential communication will not be feasible [14, 23], and expert human-agent teams are likely to adopt alternative approaches to information exchange. Additionally, current technologies provide fairly broad views of operational environments (e.g., a "god's-eye view"). An important issue with respect to information management is how such information should be apportioned within a human-agent team.

Post-process information management also needs to be specified in order to maximize, for example, the influence of feedback data. Researchers note [11] that post-process factors can be utilized to reinforce and sustain the learning that occurs as a result of in-process efforts. Within this context, post-process factors during, for example, structured debriefs or after-action reviews would address feedback timing and quantity in addition to exploring who receives what feedback [16]. In short, integrating pre- and post-process elements into human-agent team training would increase an understanding of how agents can be more effectively used. In order to fully understand how systems, data, agents, and individuals interact, research must investigate how information can be effectively managed within human-agent teams.
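As a concrete, if simplified, illustration of two of the manipulations above, the sketch below shows an agent acting as in-process "knowledge manager" (routing a message only to members for whom it is relevant and who have spare capacity) and a post-process feedback partitioner. The relevance scores, workload estimates, and thresholds are hypothetical stand-ins for whatever team-monitoring measures an actual agent would use.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Message:
    content: str
    relevance: Dict[str, float]  # estimated relevance per team member, 0.0 to 1.0

@dataclass
class TeamMember:
    name: str
    workload: float  # current workload estimate, 0.0 (idle) to 1.0 (saturated)

def route_in_process(msg: Message, team: List[TeamMember],
                     workload_ceiling: float = 0.8) -> List[str]:
    """Agent 'knowledge manager': deliver a message just-in-time only to members
    for whom it is relevant and who are not already overloaded."""
    return [m.name for m in team
            if msg.relevance.get(m.name, 0.0) > 0.5 and m.workload < workload_ceiling]

def partition_feedback(results: Dict[str, str], member: str,
                       full: bool) -> Dict[str, str]:
    """Post-process partitioned feedback: full feedback exposes every member's
    results; partial feedback restricts a member to their own results only."""
    return dict(results) if full else {member: results[member]}
```

Varying `workload_ceiling` and the relevance threshold in simulation would be one way to operationalize the information-load manipulations proposed above and observe their effect on coordination.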
3.2. Measurement within the Human-agent Team Training Space

In order to better understand the many ways agents impact teamwork, the dependent measures used to capture realistic human-agent interactions and performance would have to be taken at two levels. The first involves traditional metrics addressing measures of performance and measures of effectiveness. Nonetheless, in order to improve diagnosticity so as to better capture the superior nature of expert teams in such environments, these techniques could be extended using additional measures. Thus, for the second level, our framework augments objective performance measures with subjective assessments of performance and workload derived directly from the human operators. Such techniques have been used to analyze the interaction between cognitive processes and cognitive products in order to examine human and system interaction in a more diagnostic manner [15].

Instructional efficiency is one such measure, used to describe the relationship between a trainee's subjective assessment of workload and overall task performance [43]. Within this context, we argue that well-designed human-agent teams should reduce the cognitive load on working memory and attention. In turn, a better comprehension of the operational content should result, freeing up resources for other task demands. This, then, should produce superior performance. By viewing manipulations of information within human-agent teams from the framework of easing extraneous cognitive load (see [28]), we can illustrate how information management may alter workload and performance, and thus be measurable.

Theoretical approaches relying on measures of instructional efficiency have been used to illustrate how training system design can alter cognitive processes and
the resulting learning products. This has involved, for example, instructional design manipulations and measures of cognitive efficiency involving subjective assessments of task workload in combination with objective measures of performance [15, 28, 37]. Overall, one could argue that the result of this is a more efficient system from the standpoint of the operator's information processing [39].

To analyze such interventions, the "instructional efficiency score" [42] is calculated using standardized scores of workload (subjective assessment of mental effort) and performance. Researchers have utilized and adapted this approach [28] to show how these scores can be represented as the perpendicular distance from a line representing zero efficiency, using the formula

E = (z_Performance - z_Workload) / sqrt(2).

A positive value is a point above the line and is viewed as efficient (higher performance with lower workload), whereas a negative value is a point below the line and is viewed as inefficient (lower performance with higher workload). We suggest that such approaches can add a level of diagnosticity by allowing training designers to consider the relative efficiency of human-agent team manipulations.

Lastly, and along these lines, assessment of metacognitive processes during task performance would be needed to ascertain the degree to which higher-level cognition may be differentially taxed while interacting within human-agent teams. This would involve subjective assessment of operator performance in combination with actual performance. In this way, measures of metacognitive bias can be determined, allowing researchers to determine the degree to which operators are able to accurately monitor their performance while engaged in complex tasks. By standardizing these values and calculating the difference between predicted and actual performance, a positive score would suggest overconfidence and a negative score would suggest underconfidence.

In sum, approaches such as these are taken from the instructional sciences to analyze differing training formats from the standpoint of their effectiveness (i.e., learning gains) as well as their efficiency (i.e., learning in proportion to mental effort exerted). Following such theorizing within human-agent teams may be particularly beneficial in the context of training, because the environments in which such teams must operate produce situations that require full cognitive resources. For example, operational environments (e.g., search and rescue) increase workload due to the amount of stress present in such situations. Any system augmentation that produces more efficient performance (i.e., lower workload associated with higher performance) or more accurate metacognitive processes could be expected to benefit operational effectiveness. We suggest that this approach is useful to consider not only with instructional systems but also in human-agent team training, as it allows researchers to better diagnose manipulations involving agent technology.
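The two scores above are straightforward to compute, as the following sketch shows: the efficiency score E from [42], as adapted in [28], and the metacognitive bias score just described. The sample data at the bottom are purely hypothetical, included only to make the sketch runnable.

```python
import math
from statistics import mean, stdev
from typing import List

def z_scores(xs: List[float]) -> List[float]:
    """Standardize a sample of scores (zero mean, unit standard deviation)."""
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

def instructional_efficiency(performance: List[float],
                             workload: List[float]) -> List[float]:
    """E = (z_Performance - z_Workload) / sqrt(2): the perpendicular distance from
    the zero-efficiency line. Positive = efficient (high performance, low workload)."""
    return [(zp - zw) / math.sqrt(2)
            for zp, zw in zip(z_scores(performance), z_scores(workload))]

def metacognitive_bias(predicted: List[float],
                       actual: List[float]) -> List[float]:
    """Standardized self-assessment minus standardized actual performance.
    Positive = overconfidence, negative = underconfidence."""
    return [zp - za for zp, za in zip(z_scores(predicted), z_scores(actual))]

# Hypothetical data: three trainees' task scores and subjective mental-effort ratings.
perf = [85.0, 70.0, 92.0]
effort = [3.0, 6.0, 4.0]   # e.g., ratings on a 9-point mental effort scale
print(instructional_efficiency(perf, effort))
```

Because both measures are computed from standardized scores, they are only interpretable relative to the sample (e.g., the conditions of one experiment), which suits the within-study comparisons of agent manipulations proposed here.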
4. Concluding Remarks

As the availability of sophisticated agent technologies continues to grow, human-agent teams will become ubiquitous. As such, they represent not just an important subcategory of teams, but what may become a heavily utilized form of teamwork. Therefore, as their prevalence grows, it becomes critical that we better understand what facilitates or hinders human-agent teamwork. In this paper we have begun to outline a sample of ways in which field and laboratory research can be integrated to begin the investigation of the factors pertinent to interacting in human-agent teams. The more we understand currently successful and unsuccessful human-agent teamwork through methods and tools such as CDA and ILLS, the better able we are to inform the training research community. For example, the more we understand how information sharing and communication take place in human-agent teams, the more systematic our training research for managing information using agent technology can become.

Our goal in integrating the field methods of CDA and ILLS with training research in human-agent teams was to illustrate how differing disciplines can be utilized to improve understanding and use within a given domain. These techniques can be effectively utilized to help researchers not only broadly conceptualize the nature of such teamwork, but also identify the particular problems and the successful interactions of such teams. To conclude, the more effective the field research, the deeper the thinking will be, and the better specified the training research. In this way the training community will be more likely to develop accurate principles and guidelines as well as effective tools and interventions that facilitate interaction within human-agent teams. In sum, we provide a prototype for integrating field and laboratory research in human-agent teams. We hope this motivates others not only to consider human-agent teamwork, but also to begin the requisite empirical examination of the manner in which such teams interact.
5. References

[1] D.W. Aha, and R. Weber (Eds.), "Intelligent Lessons Learned Systems: Papers from the AAAI 2000 Workshop", Technical Report WS-00-008, AAAI Press, Menlo Park, CA, 2000, pp. 63-67.

[2] I. Becerra-Fernandez, "The Role of Artificial Intelligence Technologies in the Implementation of People-Finder Knowledge Management Systems", Knowledge-Based Systems, 13(5), 2000, pp. 315-320.
[3] I. Becerra-Fernandez, and D.W. Aha, "Case-Based Problem Solving for Knowledge Management Systems", Proceedings of the Twelfth International Conference of the Florida Artificial Intelligence Research Society, Orlando, FL, AAAI Press, 1999, pp. 219-223.

[4] I. Becerra-Fernandez, and R. Sabherwal, "Organizational Knowledge Management Processes: A Contingency Perspective", Journal of Management Information Systems, 18(1), 2001, pp. 23-55.

[5] I. Becerra-Fernandez, and J.M. Stevenson, "Knowledge Management Systems and Solutions for the School Principal as Chief Learning Officer", Education, 121(3), 2001, pp. 508-518.

[6] I. Becerra-Fernandez, A. Gonzalez, and R. Sabherwal, Knowledge Management: Challenges, Solutions and Technologies, Prentice Hall, Upper Saddle River, NJ, 2004.

[7] C.A. Bowers, D.P. Baker, and E. Salas, "Measuring the Importance of Teamwork: The Reliability and Validity of Job/Task Analysis Indices for Team-Training Design", Military Psychology, 6, 1994, pp. 205-214.

[8] C.A. Bowers, B.B. Morgan, E. Salas, and C. Prince, "Assessment of Coordination Demand for Aircrew Coordination Training", Military Psychology, 5, 1993, pp. 95-112.

[9] C. Bowers, C. Thornton, C. Braun, B.B. Morgan, and E. Salas, "Automation, Task Difficulty, and Aircrew Performance", Military Psychology, 10, 1998, pp. 259-274.

[10] J.A. Cannon-Bowers, and E. Salas (Eds.), Making Decisions under Stress: Implications for Individual and Team Training, American Psychological Association, Washington, DC, 1998.

[11] J.A. Cannon-Bowers, L. Rhodenizer, E. Salas, and C.A. Bowers, "A Framework for Understanding Pre-practice Conditions and their Impact on Learning", Personnel Psychology, 51, 1998, pp. 291-320.

[12] J.A. Cannon-Bowers, E. Salas, and S.A. Converse, "Shared Mental Models in Expert Team Decision Making", In N.J. Castellan, Jr. (Ed.), Current Issues in Individual and Group Decision Making, Lawrence Erlbaum Associates, Hillsdale, NJ, 1993, pp. 221-246.

[13] J. Cannon-Bowers, S.I. Tannenbaum, E. Salas, and C.E. Volpe, "Defining Competencies and Establishing Team Training Requirements", In R.A. Guzzo and E. Salas (Eds.), Team Effectiveness and Decision Making in Organizations, Jossey-Bass, San Francisco, CA, 1995, pp. 333-380.

[14] H.H. Clark, and D. Wilkes-Gibbs, "Referring as a Collaborative Process", Cognition, 22, 1986, pp. 1-39.

[15] H.M. Cuevas, S.M. Fiore, and R.L. Oser, "Scaffolding Cognitive and Metacognitive Processes: Use of Diagrams in Computer-Based Training Environments", Instructional Science, 30, 2002, pp. 433-464.

[16] H.M. Cuevas, S.M. Fiore, E. Salas, and C.A. Bowers, "Virtual Teams as Sociotechnical Systems", In S.H. Godar and S.P. Ferris (Eds.), Virtual and Collaborative Teams: Process, Technologies, and Practice, Idea Group Publishing, Hershey, PA, 2004, pp. 1-19.
[17] S. Denning, The Springboard: How Storytelling Ignites Action in Knowledge-Era Organizations, Butterworth-Heinemann, Boston, MA, 2000.

[18] E. Entin, and D. Serfaty, "Adaptive Team Coordination", Human Factors, 41, 1999, pp. 312-325.

[19] K.A. Ericsson, and A.C. Lehmann, "Expert and Exceptional Performance: Evidence on Maximal Adaptations on Task Constraints", Annual Review of Psychology, 47, 1996, pp. 273-305.

[20] S.M. Fiore, H.M. Cuevas, and R.L. Oser, "A Picture is Worth a Thousand Connections: The Facilitative Effects of Diagrams on Task Performance and Mental Model Development", Computers in Human Behavior, 19, 2003, pp. 185-199.

[21] S.M. Fiore, E. Salas, and J.A. Cannon-Bowers, "Group Dynamics and Shared Mental Model Development", In M. London (Ed.), How People Evaluate Others in Organizations: Person Perception and Judgment in Industrial/Organizational Psychology, Lawrence Erlbaum, Mahwah, NJ, 2001, pp. 309-336.

[22] S.M. Fiore, E. Salas, H.M. Cuevas, and C.A. Bowers, "Distributed Coordination Space: Toward a Theory of Distributed Team Process and Performance", Theoretical Issues in Ergonomics Science, 4(3-4), 2003, pp. 340-363.

[23] S. Fussell, and R. Krauss, "The Effects of Intended Audience on Message Production and Comprehension: Reference in a Common Ground Framework", Journal of Experimental Social Psychology, 25, 1989, pp. 203-219.

[24] GAO, "NASA: Major Management Challenges and Program Risks", Report Number GAO-03-849T, Government Accounting Office, Washington, DC, 2002.

[25] M.J. Hannafin, J. Hill, and J. McCarthy, "Designing Resource-Based Learning and Performance Support Systems", In D. Wiley (Ed.), The Instructional Use of Learning Objects, Association for Educational Communications & Technology, Bloomington, IN, 2002, pp. 99-129.

[26] K.P. Hess, E.E. Entin, S.M. Hess, S.G. Hutchins, W.G. Kemple, D.L. Kleinman, S.P. Hocevar, and D. Serfaty, "Building Adaptive Organizations: A Bridge from Basic Research to Operational Exercises", Proceedings of the 2000 Command and Control Research and Technology Symposium, June 26-28, Naval Postgraduate School, Monterey, CA, 2000.

[27] R.R. Hoffman, and K.A. Deffenbacher, "An Analysis of the Relations of Basic and Applied Science", Ecological Psychology, 5, 1993, pp. 315-352.

[28] S. Kalyuga, P. Chandler, and J. Sweller, "Managing Split-Attention and Redundancy in Multimedia Instruction", Applied Cognitive Psychology, 13, 1999, pp. 351-371.

[29] G. Klein, "A Recognition Primed Decision (RPD) Model of Rapid Decision Making", In G. Klein, J. Orasanu, R. Calderwood, and C. Zsambok (Eds.), Decision Making in Action, Ablex, Norwood, NJ, 1993, pp. 138-147.

[30] G. Klein, "The Recognition-Primed Decision (RPD) Model: Looking Back, Looking Forward", In C.E. Zsambok and G. Klein (Eds.), Naturalistic Decision Making, Lawrence Erlbaum Associates, Mahwah, NJ, 1997, pp. 285-292.
[31] G. Klein, and T.E. Miller, "Distributed Planning Teams", International Journal of Cognitive Ergonomics, 3, 1999, pp. 203-222.

[32] K. Kraiger, J.K. Ford, and E. Salas, "Application of Cognitive, Skill-Based, and Affective Theories of Learning Outcomes to New Methods of Training Evaluation", Journal of Applied Psychology, 78, 1993, pp. 311-328.

[33] R.M. Krauss, and S.R. Fussell, "Constructing Shared Communicative Environments", In L.B. Resnick, J.M. Levine, and S.D. Teasley (Eds.), Perspectives on Socially Shared Cognition, American Psychological Association, Washington, DC, 1991, pp. 172-200.

[34] T.K. Landauer, "Relations Between Cognitive Psychology and Computer System Design", In J.M. Carroll (Ed.), Interfacing Thought: Cognitive Aspects of Human-Computer Interaction, MIT Press, Cambridge, MA, 1987, pp. 1-25.

[35] D. Liang, R. Moreland, and L. Argote, "Group Versus Individual Training and Group Performance: The Mediating Factor of Transactive Memory", Personality & Social Psychology Bulletin, 21, 1995, pp. 384-393.

[36] N. Marcus, M. Cooper, and J. Sweller, "Understanding Instructions", Journal of Educational Psychology, 88, 1996, pp. 49-63.

[37] R.E. Mayer, and R. Moreno, "A Split-Attention Effect in Multimedia Learning: Evidence for Dual Processing Systems in Working Memory", Journal of Educational Psychology, 90, 1998, pp. 312-320.

[38] M. McNeese, E. Salas, and M. Endsley (Eds.), New Trends in Collaborative Activities: Understanding System Dynamics in Complex Environments, Human Factors and Ergonomics Society, Santa Monica, CA, 2001.

[39] S. Mousavi, R. Low, and J. Sweller, "Reducing Cognitive Load by Mixing Auditory and Visual Presentation Modes", Journal of Educational Psychology, 87, 1995, pp. 319-334.

[40] National Aeronautics and Space Administration, 1997, envnet.gsfc.nasa.gov/ll/definition.html.

[41] A. Newell, and H.A. Simon, Human Problem Solving, Prentice Hall, New Jersey, 1972.

[42] F. Paas, and J. Van Merriënboer, "The Efficiency of Instructional Conditions: An Approach to Combine Mental Effort and Performance Measures", Human Factors, 35, 1993, pp. 737-743.

[43] F. Paas, J.J.G. Van Merriënboer, and J.J. Adam, "Measurement of Cognitive Load in Instructional Research", Perceptual and Motor Skills, 79, 1994, pp. 419-430.

[44] E. Salas, and G. Klein (Eds.), Linking Expertise and Naturalistic Decision Making, Lawrence Erlbaum Associates, Hillsdale, NJ, 1994.

[45] E. Salas, and S.M. Fiore (Eds.), Team Cognition: Understanding the Factors that Drive Process and Performance, American Psychological Association, Washington, DC, 2004.
[46] K.A. Smith-Jentsch, R.L. Zeisig, B. Acton, and J.A. McPherson, "Team Dimensional Training: A Strategy for Guided Team Self-Correction", In J.A. Cannon-Bowers and E. Salas (Eds.), Making Decisions under Stress: Implications for Individual and Team Training, American Psychological Association, Washington, DC, 1998, pp. 271-297.

[47] D. Snowden, "Three Metaphors, Two Stories and a Picture - How to Build Common Understanding in Knowledge Management Programmes", Knowledge Management Review, Melcrum Publishing, Washington, DC, March/April, 1999.

[48] K. Sycara, and M. Lewis, "Integrating Intelligent Agents into Human Teams", In E. Salas and S.M. Fiore (Eds.), Team Cognition: Understanding the Factors that Drive Process and Performance, American Psychological Association, Washington, DC, 2004, pp. 203-231.

[49] J.M. Urban, J.L. Weaver, C.A. Bowers, and L. Rhodenizer, "Effects of Workload and Structure on Team Processes and Performance: Implications for Complex Team Decision Making", Human Factors, 38, 1996, pp. 300-310.

[50] J.F. Voss, and T.A. Post, "On the Solving of Ill-Structured Problems", In M. Chi, R. Glaser, and M. Farr (Eds.), The Nature of Expertise, Lawrence Erlbaum, Hillsdale, NJ, 1988, pp. 261-285.

[51] R. Weber, D.W. Aha, and I. Becerra-Fernandez, "Intelligent Lessons Learned Systems", International Journal of Expert Systems Research & Applications, 20(1), 2001, pp. 17-34.

[52] B. Welty, and I. Becerra-Fernandez, "Managing Trust and Commitment in Collaborative Supply Chain Relationships", Communications of the ACM, 44(6), 2001, pp. 67-73.
6. Acknowledgements

Writing this paper was partially supported by Grant Number SBE0350345 from the National Science Foundation and by contract number N61339-04-C-0034 from the United States Army Research, Development, and Engineering Command (Dr. Neal Finkelstein, Contract Manager), to the University of Central Florida as part of the Collaboration for Advanced Research on Agents and Teams. The opinions expressed in this paper are those of the authors only and do not necessarily represent the official position of the U.S. Army, the U.S. Department of Defense, the University of Central Florida, or Florida International University. All correspondence regarding this paper should be sent to Dr. Stephen Fiore, via email at [email protected], or via regular mail at the Institute for Simulation and Training, 3280 Progress Drive, Orlando, FL 32826.