Evaluating Intelligent Systems for Complex Socio-technical Problems: Seeking Wicked Methods

Michael P. Linegang
Aptima, Inc., 1726 M Street NW, Suite 900, Washington, DC, USA
[email protected]

Jared T. Freeman
Aptima, Inc., 1726 M Street NW, Suite 900, Washington, DC, USA

[email protected]

Abstract— Many of today's most advanced government-sponsored systems engineering efforts are developing technology to address complex socio-technical problems. This paper describes some of the challenges associated with traditional field-based evaluation methods for supporting these types of systems engineering efforts, and offers modeling and simulation as a tool to supplement field evaluation. One current program, the DARPA ASSIST program, is examined as a case study of the challenges associated with field evaluation for "wicked" socio-technical problems. We define requirements for a simulation environment to supplement field testing for this program, and describe how cognitive models and information visualization could be used to meet these requirements.

Keywords: Wicked Problems, Socio-technical Systems, Modeling and Simulation, Cognitive Models, Cultural Models, Visualization.

I. INTRODUCTION

Traditional systems engineering approaches attempt to define a fixed set of requirements early in the design of new technology. These requirements drive development and are the standard to which products are tested during evaluation. The assumption underlying this approach is that the problem will be solved if the requirements are met. But many of today's most advanced government initiatives are grappling with the creation of intelligent systems to function in complex social settings (e.g., see [1]). In these situations, the problem is ill-defined and the requirements typically evolve with the product; the requirements thus cease to function as control measures. The challenge in these circumstances is to explore the problem space and discover requirements efficiently, rapidly, and early, and to test product concepts in the problem space before they are built, or even prototyped. This paper proposes ways to accomplish this using simulations and models of complex socio-technical systems. These techniques bolster the standard control mechanisms for these systems engineering problems.

A. Wicked Problems

Complex socio-technical systems often pose wicked problems, that is, problems that cannot be understood until they are solved, problems for which there is no one right solution, and problems that change fundamentally with the imposition of any solution [2]. In these ill-structured settings, the solutions never truly solve the problem, but merely achieve some degree of balance in a complex system of competing forces.

Intelligent systems are being developed to address complex socio-technical problems. This paper references one program that is taking precisely this approach: the Defense Advanced Research Projects Agency (DARPA) Advanced Soldier Sensor Information Systems and Technology (ASSIST) program [1]. Its objective is to exploit soldier-worn sensors to augment a soldier's recall and reporting capability and enhance situation understanding in military operations in urban terrain (MOUT) environments.

Figure 1 outlines a traditional systems engineering approach applied to a hypothetical wicked problem (shown in boxes with a white background). It also illustrates a new control mechanism, a "simulated wicked problem space", as a mechanism to address some of the uncertainties associated with defining technology requirements for wicked problems (shown in the box with the gray background). The hypothesis of this paper is that a simulated world can be used to represent our assumptions about wicked problems, and thus make explicit our understanding of them and of the requirements for addressing them. A simulated world can be a valuable control mechanism in systems engineering programs conducted to handle wicked problems. Specific benefits of this simulation-based assessment as a supplement to field testing would include the ability to assess the complex interactions between new technologies and forces in the problem space that are difficult to represent in a field test environment, to do so across a great range of scenarios, and to develop a deeper understanding of the fundamental dynamics of the problem space.

The remainder of this paper describes this "simulated wicked problem space" concept as a supplemental control mechanism for an intelligent systems development effort, using the DARPA ASSIST program as a hypothetical backdrop.


Fig. 1. Traditional systems engineering methods (white boxes) applied to a wicked problem (blue boxes), supplemented by a “simulated wicked problem space” (gray box)

II. DARPA ASSIST PROGRAM

The ASSIST program is a DARPA advanced technology research and development program. The objective of the ASSIST program is to enhance post-mission reports of sightings and events in MOUT operations by using data from soldier-worn sensors to supplement a soldier's recall. The program is developing two classes of technology:
• The Baseline System: prototype wearable capture units and the supporting operational software for processing, logging, and retrieving data. These typically require soldiers to take an active role in recording or capturing relevant data.
• Advanced Technology: passive wearable capture devices, algorithms, software, and tools that automatically collect data and recognize activities, objects, and/or events.
[3] provides additional details about the ASSIST program's objectives and technologies.

A. ASSIST's Wicked Problem Space

Enhancing situation understanding of an urban environment is a wicked problem. The urban environment contains a large number, many types, and a varying density of entities with many ill-defined but critical relationships. A MOUT team may patrol a city with tens or hundreds of thousands of inhabitants, each allied with family, tribal, religious, political, and economic interests that drive them into cooperation or conflict with American troops. The city streets may be crowded with cars and trucks, most on routine errands, but some supporting insurgent activities or, worse, loaded with explosives. And the buildings house businesses, families, civilian organizations, international support organizations, and sometimes insurgent gangs planning attacks or fleeing patrols.

Figure 2 represents this space abstractly. It illustrates three types of entities: 1) people, represented as circles, 2) vehicles, represented as white squares, and 3) buildings, represented as black squares. Many human, social, and cultural forces are at play in this environment, some externally visible, but many internally held or only manifested through complex behaviors. Figure 2 suggests some form of social or cultural complexity by representing three groups of individuals: Group 1, represented by shades of vertical green stripes, Group 2, represented by shades of red cross-hatching, and Group 3, represented by shades of solid blue. These groups might represent three different political affiliations (and the color saturation in each circle suggests the strength of each individual's affiliation with their group). The environment results in frequent interactions between entities, with some interactions being highly overt (e.g., speaking to someone or interacting with an object) and others covert (e.g., passively listening to someone or observing an object). Figure 2 represents a range of interactions in the form of lines linking entities together, with dark, bold lines representing highly overt interactions between entities and lighter lines representing passive or covert interactions.

Fig. 2. Representation of entities and relationships in a MOUT environment

This environment evolves as entities come and go, individual affiliations shift, and groups form and disband. Thus, our understanding of the environment must constantly be updated. Identifying and categorizing the multiplicity of people, objects, relationships, interactions, events, and affiliations in a complex environment is a daunting task; monitoring the changes in those variables over time is even more challenging. Further, the act of gathering the information necessary for gaining an understanding of this complex environment is likely to spawn unpredictable changes in the environment. The presence of American troops on the streets influences who hides, the attitudes of those who remain, and their behaviors. Observing the environment may actually reduce one's "understanding" of it if the processes used in gathering information significantly perturb the environment. The complexity of the MOUT environment, its dynamism, and the unpredictable effects of observing it make understanding it a wicked problem.

B. Evaluating ASSIST Technologies: Traditional Techniques

There are many potential technological "solutions" to this problem: some better, some worse, none perfect. But the complexity of the environment and the task of understanding it makes quantifying, or even qualifying, the goodness or badness of each solution a difficult challenge. The ASSIST program has completed two spirals through the traditional systems engineering approach (illustrated in the white nodes of Figure 1). We describe it here as a case study of traditional evaluation techniques. An initial understanding of MOUT operations and the need for better understanding of the environment led to the specification of initial requirements for intelligent systems able to recognize specific classes of entities in the urban environment [1]. These requirements were interpreted by several engineering teams, and several prototype intelligent systems were developed [3]. The independent evaluation team (IET) for the ASSIST program developed and applied a two-pronged evaluation approach, effectively addressing some of the wickedness in this problem space [3]. One evaluation effort, called elemental tests, assessed levels of achievement for technology performance objectives in controlled environments [4]. The second assessed the potential impact of technologies in less controlled environments, specifically, in vignettes involving soldiers interacting with actors on MOUT sets [5].

This two-part approach provided clear, quantitative assessments of the prototype systems' capabilities relative to the stated requirements (i.e., the elemental tests assessed the intelligent systems' ability to recognize and classify portions of an urban environment at a given point in time). The approach also provided significant qualitative (and some quantitative) assessment of the potential interactions of the new intelligent systems with a limited set of the other forces at work in the problem space (i.e., the vignette tests assessed the effect of incorporating the new technology into a semi-realistic mission scenario).

However, field test environments impose constraints on an evaluation that significantly limit the evaluator's ability to examine effects of new technologies on complex socio-technical interactions: issues that are critical in addressing wicked problems. Field assessment costs and other constraints force evaluators to attempt to "tame" a wicked problem, limiting the level of complexity and viewing it within a controlled setting (relative to the complexity of the actual environment). Even the "more wicked" vignette tests (involving soldiers interacting with actors on sets) require evaluators to manufacture as much "realistic complexity" as possible within the constraints of a budget and an unrealistic field test setting. And budget and timeline constraints typically limit the evaluation to a very small number of mission scenarios, with minimal ability to observe any secondary or tertiary effects of the technologies that might be manifested over longer periods of time. For example, four ~30-40 minute ASSIST vignettes were performed over a period of twelve months (2 vignettes at a 6-month test, 2 vignettes at a 12-month test), providing less than 4 hours of system-collected data from ~9 soldiers.

The ASSIST system concept calls for every operational soldier to use the system on every patrol, meaning one 18-soldier patrol in a real-world mission would produce double the dataset available for system evaluation purposes after 12 months of testing. This is a very limited sample from which evaluators must extrapolate conclusions about how the system and the data it produces will affect soldier performance and understanding of the environment. In spite of evaluators' best efforts to create semi-realistic complexity and determine the realistic effects of technology, field evaluations force evaluators to deal with reduced complexity as a fundamental constraint of the test. This lack of complexity likely conceals critical interaction effects of the new technology with the wicked problem space. [6] cautions that seemingly effective solutions for "tame" versions of a wicked problem can be deceptive, because "the wicked problem simply reasserts itself, perhaps in a different guise … or, worse, sometimes the tame solution exacerbates the problem." The following sections consider the challenges associated with addressing complexity in field assessment.

III. THE FIELD EVALUATION "LEAP OF FAITH"

Referring to Figure 1, the primary limitation in field


evaluation methods is the limited nature of the feedback about the integration of new technology into its environment. Field evaluations can provide valuable and compelling insights into a limited set of interactions for a new technology. But it is unclear how stable the results will be when the new technology is introduced into the natural, more complex setting, or how those results might evolve over a large number of scenarios. Field evaluation results thus require investors to make a leap of faith: to extrapolate test results from a simple environment to a more complex one. Below, we use the ASSIST program to illustrate the limitations of field testing, and suggest how simulation might enhance and extend the results from ASSIST field testing.

The IET determined that ASSIST technologies displayed significant improvements over the course of multiple tests in their ability to recognize and categorize a variety of entities in the test environment (based on elemental tests that assessed automated recognition capabilities in controlled settings) [4]. The IET also evaluated the potential utility of ASSIST technologies for improving an intelligence officer's ability to understand the environment. These vignette-based tests found that, while intelligence officers needed human soldiers to help them navigate ASSIST data records to find key events, once found, those data provided a greater level of detail than soldier recall alone. ASSIST allowed intelligence officers to assess situations from an objective viewpoint, rather than rely on the soldier's interpretation of sightings [5]. This represents the results from two cycles through the traditional sequence of requirements definition, prototype development, and evaluation feedback.

These results suggest that ASSIST technologies are effective in classifying a MOUT environment, and that the introduction of ASSIST technologies as a supplement to individual soldier reports could allow an intelligence officer to gain a richer understanding of the environment. But the test environment required a great many assumptions and simplifications to achieve these results. It is unclear what effects from the use of new technology might arise as complexity is reintroduced. The following is a partial list of simplifications imposed by field evaluation constraints in the ASSIST tests:
• Sparse sampling of potential missions: ASSIST technologies were evaluated on a small subset of missions. How might their performance or their impact on soldier performance change given other mission objectives, procedures, and op-tempos?
• Short mission length: ASSIST was tested using less than four hours of vignettes. How might the intelligence officer's understanding of mission data evolve given longer, more numerous, or more diverse missions? How would the technologies contribute to the intelligence officer's understanding of the entities, relationships, and their affiliations as they evolve in the environment? How might soldier performance change as they develop new tactics, techniques, and procedures for using the ASSIST technologies? How might the dramatic increase in ASSIST system data available after multiple missions impact the intelligence officer's utilization of the system?
• Unrealistic environments: ASSIST technologies were evaluated in missions conducted by soldiers working with relatively low levels of stress. The soldiers were not deprived of sleep or subjected to threats, for example. They were operating in a familiar setting (a military testing facility) during normal working hours. Stressors are expected to change the level of detail of soldier reports to the intelligence officer, the accuracy of those reports, and the level of conflict within and between reports. Will these effects occur in realistic settings? How will intelligence officers respond to them?
• Absence of counter-measures: ASSIST technology was operated without the degrading influence of enemy counter-measures. Is ASSIST technology vulnerable to hostile counter-measures? Can intelligence officers discern those effects when they are present?

Vignettes and field tests offer no easy answers for these challenges. An investor must choose between 1) a "leap of faith" that greater and different complexities will not significantly change the results, or 2) a costly investment in additional vignettes and field tests that will evaluate the technology in an environment that includes some of these additional complexities (but still a limited set). Option 1 is a relatively high-risk approach since it offers no answers to these questions, but Option 2 is costly and unlikely to reduce the risk by a significant margin since it only adds a few new data points to the investor's portfolio. An alternative approach is needed: one that can evaluate the technology in a much more complex setting, providing a more complete picture of expected interactions between the new technology and other aspects of the problem space. We now describe a concept for using modeling and simulation to address some of this challenge.

IV. SIMULATION AS A SUPPLEMENT TO ASSIST FIELD TESTS

Modeling and constructive simulation have been used extensively to supplement experimental testing in domains such as team and organizational design (e.g., [7], [8], [9], [10], [11], [12]). Modeling and simulation have also been applied as a supplement to field evaluation methods early in the systems engineering life-cycle (e.g., [13]). Early life-cycle simulation offered a technique for assessing new tactics, techniques, and procedures (TTPs) and new command and control (C2) technologies at a point in the systems engineering life-cycle when field testing was not possible or practical. The following sections describe a simulation environment that could provide a similar evaluation capability to assess aspects of the ASSIST problem that cannot easily be addressed through field testing. To address the challenges of the ASSIST wicked problem, a simulated wicked problem space could be created, based on several types of models:
• Models of individuals, groups, and artifacts (buildings, vehicles) to observe using ASSIST technologies. These


models must represent behaviors that are directly observable, and characteristics that drive the evolution of behavior indirectly, including social and cultural models of affiliation-related behaviors and the propensity for change of those behaviors.
• Models of individual observers. These models must represent the cognitive capabilities and resulting products of soldiers as they observe, recall, and interpret information. They must also represent the tactics, techniques, and procedures (the behaviors) required to use technologies to observe.
• Models of new technologies. These models must represent the capabilities of an intelligent system for capturing, organizing, and presenting data and descriptions of the environment.

The simulated wicked problem space would also require:
• A simulation engine. This engine must generate and measure interactions between these models, across a variety of model parameterizations and use scenarios.
• Measures. The simulation requires some way to present the evaluator with a quantification of the key concept driving this problem: understanding of the environment. Any single measure is likely to oversimplify the issue, but the simulation must offer some methods for the user to gauge the "level of understanding" achieved by different observers in the environment.

As a tool to support continuous evolution in understanding of the problem, this simulation environment must support frequent modification, updating, and reparameterization of models and measures, and the addition of new models and measures, to enhance the representation of complexity in the wicked problem space. In a sense, the simulated wicked problem space must be a rapid prototyping tool, but instead of creating prototypes of the solution, it allows the user to create rapid prototypes of the problem (by gradually adding and adjusting the complexity of, and interactions between, the simulation models). For example, an ASSIST simulated wicked problem space would need to allow evaluators to rapidly add new complexities (e.g., simulate soldier behavior under stress, simulate degradations in system performance due to enemy counter-measures, …).

Two of the five simulation components above are commonplace. Simulation engines are readily available in flavors that include agent-based systems, blackboards, discrete event simulations, and others. Models of technology are routinely used in the design of new systems, such as those tested in ASSIST. (We note, though, that models are much less sophisticated when it comes to representing the complexity of the world in which these artifacts must operate.) However, modeling of observers and the observed presents significant challenges, as does representing the "level of understanding". In the following sections, we articulate one approach to defining these models based on cognitive and cultural theory, and describe how an abstract visualization might be used as a method for representing the level of understanding achieved by observers in the simulation.
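As a sketch of how such an engine might generate and measure interactions between pluggable models, consider the following toy loop. All class names, interfaces, and parameters here are our own illustrations, not part of any ASSIST design; the point is only that each model type is an interchangeable component.

```python
import random

class Civilian:
    """Toy 'observed' model: affiliation strength drifts each tick."""
    def __init__(self, name, group, strength=0.5):
        self.name, self.group, self.strength = name, group, strength
    def step(self, rng):
        self.strength = min(1.0, max(0.0, self.strength + rng.uniform(-0.1, 0.1)))

class SensorModel:
    """Toy technology model: detects each entity with a fixed probability."""
    def __init__(self, detect_prob=0.8):
        self.detect_prob = detect_prob
    def capture(self, entities, rng):
        return [e.name for e in entities if rng.random() < self.detect_prob]

class SoldierModel:
    """Toy observer model: recalls only the few most salient entities
    (rng is unused here; it is kept for a uniform model interface)."""
    def __init__(self, attention_span=3):
        self.attention_span = attention_span
    def observe(self, entities, rng):
        ranked = sorted(entities, key=lambda e: e.strength, reverse=True)
        return [e.name for e in ranked[:self.attention_span]]

def run_simulation(entities, sensor, soldier, ticks, seed=0):
    """Engine loop: evolve the world, then capture and observe it each tick."""
    rng = random.Random(seed)
    log = []
    for t in range(ticks):
        for e in entities:
            e.step(rng)
        log.append({"tick": t,
                    "sensor": sensor.capture(entities, rng),
                    "report": soldier.observe(entities, rng)})
    return log

entities = [Civilian("p%d" % i, "group1") for i in range(6)]
log = run_simulation(entities, SensorModel(), SoldierModel(), ticks=10)
print(len(log), len(log[0]["report"]))  # 10 ticks, 3 names per report
```

Swapping in a richer model (e.g., a stress-sensitive observer, or a sensor degraded by counter-measures) then amounts to replacing one class, which is what would make such an environment a rapid prototyping tool for the problem rather than for the solution.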

A. Modeling the "Observed"

To model the people whom MOUT soldiers observe and with whom they interact, we draw on the concept that every person has a "repertoire" of identities [14]. The identity profile for an individual may include affiliations of family, tribe, religion, profession, politics, and other types. Each identity is associated with a belief system, that is, attitudes (e.g., hatred of American troops, respect for our troops) and corresponding propensities toward behaviors of interest (e.g., attacking troops, assisting troops). The question of which identity is dominant at a given moment is a function of 1) the propensity of the individual to choose an extreme vs. neutral identity in response to events and 2) the degree to which the context of an event is tied to an identity. For example, an individual prone to extreme responses, and witness to an American assault on insurgents near a tribal neighborhood school, may interpret the events through a tribal identity and respond defensively, even violently, to the American actions.

As any anthropologist will tell you, culture is not static. Thus, models of the observed must allow an individual to change the distribution of cultural affiliations over time. When many individuals shift away from a given identity (e.g., with a radical or insurgent group), the belief system of the remaining adherents tends either to soften by shifting towards the center (thus increasing the number of adherents), or to harden by shifting towards extremes (thus reducing the number of adherents to the most devoted). Models of multiple cultural identities are a promising area of research. We have developed such models using software agents, to explore methods of stabilizing failing states.

B. Modeling the "Observer"

To model the observers, our soldiers, we draw on the rich literature from cognitive psychology.
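Before turning to the observer models, the identity-repertoire concept just described can be sketched roughly as follows. The activation rule and every parameter name are our own assumptions for illustration, not drawn from [14] or from any fielded cultural model.

```python
import random

class Individual:
    """Sketch of a 'repertoire of identities': each identity has an
    attitude (-1 hostile .. +1 friendly) and a weight (strength of
    affiliation). Which identity dominates depends on how strongly the
    current event cues each identity, amplified by the individual's
    propensity for extreme responses."""

    def __init__(self, identities, extremity=0.5):
        # identities: {name: {"attitude": float, "weight": float}}
        self.identities = identities
        self.extremity = extremity  # 0 = always neutral, 1 = most extreme

    def dominant_identity(self, context_cues):
        # context_cues: {identity_name: salience of this event to that identity}
        def activation(name):
            ident = self.identities[name]
            cue = context_cues.get(name, 0.0)
            # extremity amplifies strongly-cued, strongly-held identities
            return ident["weight"] * cue * (1 + self.extremity * abs(ident["attitude"]))
        return max(self.identities, key=activation)

    def drift(self, rng, rate=0.05):
        # culture is not static: affiliation weights shift slightly over time
        for ident in self.identities.values():
            ident["weight"] = min(1.0, max(0.0, ident["weight"] + rng.uniform(-rate, rate)))

villager = Individual(
    {"tribal":   {"attitude": -0.8, "weight": 0.7},
     "merchant": {"attitude":  0.3, "weight": 0.5}},
    extremity=0.9)
# An assault near a tribal neighborhood cues the tribal identity strongly:
print(villager.dominant_identity({"tribal": 0.9, "merchant": 0.1}))  # tribal
```

A population of such agents, drifting over time, would let the softening and hardening dynamics described above emerge from many individual weight shifts rather than being scripted directly.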
In particular, we model the capabilities and constraints of people under stress to observe, recall (or report), and interpret what they recall. Observation must be modeled as an activity driven both by goals ("On this mission, look for suspicious activity at the market.") and knowledge of the patterns that meet the goals (e.g., the absence of women in the market is suspicious). [15]'s seminal studies of perception justify the effort to model search for patterns as a goal-driven activity. Decades of research concerning human pattern learning and pattern recognition argue for modeling pattern knowledge. The combined effect of modeling these phenomena is a bias towards observing what one is told to see, and what one knows. Unexpected and unfamiliar entities and events will tend not to be encoded (written to memory) during observation, even if they are in fact critically important. In addition, observation capability is sensitive to stress; people tend to narrow the focus of their attention under threat [16], for example, making peripheral events effectively unobservable. Thus, both stressors and sensitivity to them must be modeled. Recall is dependent (logically) on observing events in the


first place and biased (psychologically) towards highly salient events [17]. Thus, it may be difficult for people to recall events that were not highly charged in some way. Further, recall is biased towards erroneously inferring things that are expected in a given context but were in fact absent [18], [19]. Finally, decisions about (i.e., the interpretation of) recalled events are subject to a range of well-documented biases [20]. Their net effect is to cause observers to discount, and thus underreport, evidence that conflicts with their prior beliefs (e.g., about what they expected to see), and to over-rely on, and thus over-report, evidence that confirms their beliefs. Biases of these sorts must be modeled. A variety of methods exist for modeling these cognitive characteristics of individual observers, ranging from ACT-R models, for highly detailed, temporally accurate representations of behavior, to simpler, rougher rule-based models, for which JESS (the Java Expert System Shell), PROLOG, and other languages may be adequate.
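A simple rule-based observer of the rough kind just mentioned might combine these biases as follows. The probabilities, the attention-narrowing factor, and the event fields are illustrative assumptions, not calibrated values from the literature cited above.

```python
import random

def observe(events, goals, stress, rng, base_capacity=8):
    """Rule-based sketch of the observation and recall biases discussed
    above. Events are dicts with 'kind', 'salience' (0..1), and
    'peripheral' (bool). Stress (0..1) narrows attention; goal-relevant
    patterns are preferentially encoded; recall favors salient events."""
    # stress narrows attention: fewer events survive to the report
    capacity = max(1, int(base_capacity * (1 - 0.6 * stress)))
    encoded = []
    for ev in events:
        if stress > 0.5 and ev["peripheral"]:
            continue  # peripheral events effectively unobservable under threat
        # bias toward what one is told to see (goals) and what one knows
        p_encode = 0.9 if ev["kind"] in goals else 0.3
        if rng.random() < p_encode:
            encoded.append(ev)
    # recall is biased toward highly salient events
    encoded.sort(key=lambda ev: ev["salience"], reverse=True)
    return encoded[:capacity]

rng = random.Random(1)
events = [{"kind": "suspicious_activity", "salience": 0.9, "peripheral": False},
          {"kind": "routine_errand",      "salience": 0.2, "peripheral": False},
          {"kind": "unfamiliar_event",    "salience": 0.8, "peripheral": True}]
report = observe(events, goals={"suspicious_activity"}, stress=0.8, rng=rng)
print([ev["kind"] for ev in report])
```

Note that in this sketch the unfamiliar peripheral event is dropped deterministically under high stress, regardless of its importance, which is exactly the failure mode the text warns about.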

Fig. 4. Visualization of a simulated “system model’s understanding” of the MOUT environment

C. Representing "Level of Understanding"

Figure 2 provided an abstract representation of the dynamics found in ASSIST's wicked problem space. Here we suggest that this type of abstraction of the problem might offer a useful means for visually qualifying, and perhaps even quantifying, the level of understanding achieved by different "observers" working in the ASSIST simulated wicked problem space. To aid in this description, Figure 2 is repeated here as Figure 3. From a simulation perspective, Figure 3 might represent "ground truth" in the simulated world at any one point in time.
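One way to make such a ground-truth snapshot concrete is a small data model like the following. The three entity kinds and the strength/overtness fields mirror Figure 2's drawing conventions (color saturation, line weight); everything else is our own illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Entity:
    ident: str
    kind: str                    # "person", "vehicle", or "building"
    group: Optional[str] = None  # e.g. "group1".."group3"; None for artifacts
    strength: float = 0.0        # 0..1, drawn as color saturation in the figures

@dataclass
class Interaction:
    a: str            # entity ids
    b: str
    overtness: float  # 1.0 = highly overt (bold line), 0.1 = covert (light line)

@dataclass
class GroundTruth:
    entities: Dict[str, Entity] = field(default_factory=dict)
    interactions: List[Interaction] = field(default_factory=list)

    def snapshot(self):
        """One time-slice in a plotting-friendly form (a Figure 3 frame)."""
        return {"nodes": [(e.ident, e.kind, e.group, e.strength)
                          for e in self.entities.values()],
                "edges": [(i.a, i.b, i.overtness) for i in self.interactions]}

gt = GroundTruth()
gt.entities["p1"] = Entity("p1", "person", "group1", 0.8)
gt.entities["b1"] = Entity("b1", "building")
gt.interactions.append(Interaction("p1", "b1", 0.2))  # covert observation
print(gt.snapshot())
```

Because each observer's picture of the world can be held in the same structure, ground truth and the various "understandings" of it become directly comparable frames.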

Fig. 5. Visualization of a simulated “human observer model’s understanding” of the MOUT environment

Fig. 6. Visualization of a simulated “high-level observer model’s understanding” of the MOUT environment

Fig. 3. Visualization of “ground truth” in a simulated MOUT environment

This figure shows the present state of the cultural affiliations and relationships for the "observed" models in the simulation. The state of this figure would evolve over time according to the dynamics of the cultural models driving the observed entities.

Figure 4 provides a visualization of a simulated ASSIST system's "understanding" of this environment. This figure could be constructed based on a model of the program's stated requirements for the system and refined based on results from field evaluations. This example represents results from an ASSIST technology model that is capable of correctly identifying 80% of the entities in the environment from Figure 3, with a 0% false alarm rate. Taken by itself, Figure 4 offers insights into the system's contribution to the level of understanding achieved in the environment. Specifically, this technology presents a relatively accurate representation of entities in the environment, but lacks crucial data attributes (e.g., the existence and intensity of relationships and interactions) that might lead to a higher-level understanding of the environment.

Figure 5 is a visualization representing a report produced by the model of a human "observer soldier" for the environment from Figure 3. This figure could be constructed based on cognitive models of human observation and recall skills, and refined based on results from vignette testing. This example is indicative of a model that provides much sparser


recall of entities (and in some instances identifies entities incorrectly), but provides a much richer recognition of critical relationships and interactions in the environment, and offers speculations about affiliations for entities. Figure 5 might represent one squad report's contribution to the level of understanding achieved in the environment.

In a simulation environment, multiple squad reports (variations of Figure 5) and multiple ASSIST system datasets (variations of Figure 4) could be constructed, based on interactions between different observer, observed, and system models over the course of multiple simulated missions. The net result would be a large database of "inputs" to a simulated intelligence officer "observer" model. This intelligence officer's level of understanding would be represented in a form like Figure 6. This figure is a visualization representing a "human observer model" that attempts to achieve an overall understanding of the environment (e.g., an intelligence officer) by combining soldier squad reports (e.g., Figure 5) with ASSIST system reports (e.g., Figure 4). This simulation model of the intelligence officer could be based on cognitive models for baseline decision-making, and tuned to reflect findings from vignette testing. For example, Figure 6 reflects a hypothetical result where the intelligence officer reviews ASSIST data, using it to change the soldier reports' assessments of individual entity affiliations.

Overall, these visualizations would provide a way to make qualitative assessments about the level of understanding different "observers" are able to achieve (i.e., Figures 4, 5, and 6) relative to an evolving ground truth situation (i.e., Figure 3). In this example, the observer represented by Figure 6 has successfully identified all the solid dark blue entities (which might represent clerics from one religious group), but has incorrectly identified a cluster of deep red hash-marked entities in the upper left corner (which might represent the misidentification of a commingled group of students with different affiliations as a unified group with the same affiliation). In addition to qualitative comparison, this method could support quantification of levels of understanding by developing a scoring scheme based on the accuracy and completeness of one observer's visualization relative to the ground truth visualization. These results could be useful for evaluating ASSIST technology interactions with other complexities in the environment, but perhaps more importantly, the simulation environment would support dialogue about the assumptions made in constructing the models, and allow project stakeholders to quickly modify those assumptions to determine their level of influence on the apparent results. As such, the simulated wicked problem space would provide systems engineering decision-makers with a new capability that would supplement and expand upon the feedback provided by field evaluation results.
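One minimal version of such a scoring scheme, assuming each observer's picture reduces to a map from entity id to assessed affiliation, might look like this. The product combination of completeness and accuracy is our own choice for illustration, not an established ASSIST metric.

```python
def understanding_score(ground_truth, observer_view):
    """Illustrative scoring of one observer's picture against ground truth:
    completeness = fraction of true entities the observer represents;
    accuracy = fraction of represented entities whose affiliation is right.
    Both arguments are {entity_id: affiliation} maps."""
    if not ground_truth:
        return 0.0
    found = [eid for eid in observer_view if eid in ground_truth]
    completeness = len(found) / len(ground_truth)
    if not found:
        return 0.0
    correct = sum(1 for eid in found if observer_view[eid] == ground_truth[eid])
    accuracy = correct / len(found)
    return completeness * accuracy  # simple product; other combinations possible

truth = {"p1": "group1", "p2": "group2", "p3": "group3", "p4": "group1"}
view = {"p1": "group1", "p3": "group1"}  # p3 misidentified; p2 and p4 missed
print(understanding_score(truth, view))  # 0.5 completeness * 0.5 accuracy = 0.25
```

Scoring each observer's frame against the evolving ground truth over a simulated mission would turn the qualitative comparison of Figures 3 through 6 into a trend that stakeholders could track as model assumptions are varied.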

V. SUMMARY

A socio-technical system is a complex web of interactions among individual humans and social, cultural, and technological forces. Systems engineers must, increasingly, develop technologies that function within these systems. However, traditional field evaluation methods do not provide timely, decisive feedback concerning the adequacy of technology solutions. At best, they advance our knowledge of the environment, lengthen the list of requirements that technology must fulfill, and deepen our appreciation of the unpredicted effects of injecting one technology solution into a complex system. This is the very definition of a wicked problem. To respond, we need wicked methods. This paper offers simulation as an approach that can supplement traditional field evaluation methods by modeling the environment, technology capabilities, and technology effects early and iteratively, accelerating and improving the feedback available in systems engineering efforts that address socio-technical problems.

ACKNOWLEDGEMENTS

This work is based on experience gained working as part of the ASSIST IET, led by a team from the National Institute of Standards and Technology (NIST). The work was supported by the Defense Advanced Research Projects Agency (DARPA) ASSIST program (POC: Mari Maeda). Any opinions, findings, conclusions, and/or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NIST or DARPA.

REFERENCES

[1] DARPA, "Advanced Soldier Sensor Information System and Technology (ASSIST) Proposer Information Pamphlet," http://www.darpa.mil/ipto/solicitations/open/04-38_PIP.htm, 2006.
[2] H. Rittel and M. Webber, "Dilemmas in a General Theory of Planning," Policy Sciences, vol. 4, pp. 155-169, 1973.
[3] C. Schlenoff, B. Weiss, M. Steves, A. Virts, M. Shneier, and M. Linegang, "Overview of the First Advanced Technology Evaluations for ASSIST," in Proceedings of the Performance Metrics for Intelligent Systems Workshop, 2006.
[4] B. Weiss, C. Schlenoff, M. Shneier, and A. Virts, "Technology Evaluations and Performance Metrics for Soldier-Worn Sensor Systems for ASSIST," in Proceedings of the Performance Metrics for Intelligent Systems Workshop, 2006.
[5] M. Steves, "Utility Assessments of Soldier-Worn Sensor Systems for ASSIST," in Proceedings of the Performance Metrics for Intelligent Systems Workshop, 2006.
[6] J. Conklin, Dialogue Mapping: Building Shared Understanding of Wicked Problems. Chichester: Wiley, 2005, ch. 1, pp. 1-25.
[7] F. J. Diedrich, E. E. Entin, S. G. Hutchins, S. P. Hocevar, B. Rubineau, and J. MacMillan, "When do organizations need to change (Part I)? Coping with incongruence," in Proceedings of the Command and Control Research and Technology Symposium, 2003.
[8] E. E. Entin, F. J. Diedrich, D. L. Kleinman, W. G. Kemple, S. G. Hocevar, B. Rubineau, and D. Serfaty, "When do organizations need to change (Part II)? Incongruence in action," in Proceedings of the Command and Control Research and Technology Symposium, 2003.
[9] E. E. Entin, S. A. Weil, D. L. Kleinman, S. G. Hutchins, S. P. Hocevar, W. G. Kemple, et al., "Inducing Adaptation in Organizations: Concept and Experiment Design," in Proceedings of the Command and Control Research and Technology Symposium, 2004.
[10] S. A. Weil, G. Levchuk, S. Downes-Martin, F. J. Diedrich, E. E. Entin, K. See, et al., "Supporting Organizational Change in Command and Control: Approaches and Metrics," in Proceedings of the International Command and Control Research and Technology Symposium, 2005.
[11] S. A. Weil, W. G. Kemple, R. Grier, S. G. Hutchins, D. Kleinman, and D. Serfaty, "Empirically-driven Analysis for Model-driven Experimentation: From Lab to Sea and Back Again (Part 1)," in Proceedings of the Command and Control Research and Technology Symposium, 2006.
[12] Y. Levchuk, G. Levchuk, R. Grier, S. A. Weil, and D. Serfaty, "Empirically-driven Analysis for Model-driven Experimentation: From Lab to Sea and Back Again (Part 2)," in Proceedings of the Command and Control Research and Technology Symposium, 2006.
[13] Y. Levchuk, G. Levchuk, R. Grier, S. A. Weil, and D. Serfaty, "Empirically-driven Analysis for Model-driven Experimentation: From Lab to Sea and Back Again (Part 2)," in Proceedings of the Command and Control Research and Technology Symposium, 2006.
[14] S. Lovell, G. Levchuk, and M. Linegang, "An Agent-based Approach to Evaluating the Impact of Technologies on C2," in Proceedings of the Command and Control Research and Technology Symposium, 2006.
[15] U. Neisser, Cognition and Reality. San Francisco: Freeman, 1976.
[16] P. Hancock, "Human Factors in Homeland Security," in Proceedings of Human Factors Research and Homeland Security: Current and Future Applications, 2005.
[17] D. Redelmeier and D. Kahneman, "Patients' memories of painful medical treatments: Real-time and retrospective evaluations of two minimally invasive procedures," Pain, vol. 66, pp. 3-8, 1996.
[18] J. R. Anderson and G. H. Bower, Human Associative Memory. Washington: Winston and Sons, 1973.
[19] G. H. Bower, J. B. Black, and T. J. Turner, "Scripts in memory for text," Cognitive Psychology, vol. 11, pp. 177-220, 1979.
[20] D. Kahneman, P. Slovic, and A. Tversky, Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press, 1982.