Towards a Synthetic Environment Design Methodology

Anne-Marie Grisogono and Eyoel Teffera
Land Operations Division
Defence Science and Technology Organisation, Salisbury
PO Box 1500, Salisbury, South Australia 5108

Keywords: methodology, synthetic environment, capability development, simulation

ABSTRACT: The Synthetic Environment Research Facility (SERF) team within Land Operations Division (LOD) is researching the application of Synthetic Environments (SEs) in support of military issues investigations. One of the key areas identified for further research is a more rigorous and comprehensive approach to SE design methodology for specific capability development issues. This includes the requirements definition phase; the design, construction and management of the SE; and the generation of useful information and outputs for the capability issues being addressed. In this paper we describe our approach to developing an SE design methodology. We start by addressing the benefits of using SEs versus other analysis techniques, and their respective roles in the 'Battlelab' process, which aims to implement the scientific method for defence analysis. At the next level, we aim to develop processes and methodologies for capturing requirements, and for defining the inputs, component systems, processes and outputs of the SEs, including fidelity levels. Key aspects include sound experimental design, the ability to perform sensitivity and confidence analyses on the proposed SE application, and the provision of a traceable process.

1. Introduction

The potential role of SEs¹ in supporting the development and assessment of complex defence systems and systems-of-systems (SoS) in a more integrated and holistic way has been discussed in earlier publications [1,2]. Recent SE experiments performed in the SERF have been reported in a number of papers, e.g. [3,4]. Although these experiments have led to many useful and sometimes valuable outcomes, we have been very conscious of the immaturity of this new technology, and of the sometimes ad hoc way that SE experiment design decisions have had to be made. While this may be quite acceptable (and indeed productive) for initial exploratory studies, it is important that lessons learned from these studies are captured and generalised, and that the whole field of SE experimentation in support of defence capability development be placed on a much more rigorous and scientifically defensible footing.

Moreover, although the earliest SE experiments and demonstrations have quite explicitly been qualitative rather than quantitative, the very nature of computer-driven simulation is to produce vast amounts of data, much of it numeric, and in the absence of a sound analytic methodology there is a risk that unwarranted confidence is placed in quantitative SE data. Thus while we do recognise the value of expanding the domain of SE technology to include quantitative analytical measures (and indeed are actively doing so in our C4ISR and Army aviation SEs), we are also placing a high priority on furthering our skills in experimental design, sensitivity analyses, error propagation and uncertainty estimation to ensure that quantitative data are used appropriately.

¹ A Synthetic Environment (SE) is defined here as a composite environment which includes a number of interacting systems and users, each represented with the degree of fidelity (from real systems and people, to virtual or constructive simulations and agents, to simple models) appropriate to the purpose of the SE.

If the enormous power of SE technology is to be harnessed effectively to help defence planners understand, analyse and develop complex SoSs, and make significant investment decisions wisely, then a number of challenges must be met in developing a methodological framework for the design and exploitation of SEs. Such a framework must supply not only the answers to the questions posed, but also guidance in posing the right questions and in interpreting the answers. In other words, the challenges we see fall into three distinct phases:
• defining the problems or questions,
• answering the questions, or solving the problems posed, and
• delivering the outcomes to the clients in a way that minimises risks and maximises the value of the outcomes to the clients.
Since we will draw heavily on the successes of the scientific method in meeting analogous challenges for the worldwide scientific community over the last couple of hundred years, we begin in the following section with a discussion of its key elements and how these may be applied to defence problems. We then discuss each of the three phases in turn in the next three sections, and outline those tools and approaches which are already in use or development, or which we believe will be useful. Finally, we conclude with a summary discussion of our planned way forward.

2. SEs and the Scientific Method

2.1. The Scientific Method

The simple central concept of the scientific method is putting ideas to the test. The subtleties and the difficulties are in devising the tests and in evaluating the results. Nevertheless, the scientific method is an extremely powerful engine for generating understanding and knowledge – witness the sheer scope and depth of today's science and technology.

Figure 1: Schematic view of the Scientific Method (a cycle linking: problem or questions → make/refine model → make prediction → test in experiment → evaluate results → new knowledge)

In the schematic view of figure 1 above, the process begins with a problem to be solved, a question about how the world is, or indeed an ill-defined area of uncertainty. Defining the problem or posing the questions may not be entirely straightforward for complex problem areas, but being a cyclic process, the scientific method does allow for iterative refinement.

The next step is to create a conceptual model from which predictions can be made. Issues at this stage are determining the scope of the model, including all the relevant factors, and knowing or guessing at the relationships between them. This may be the most difficult part for complex systems. Again, iterative refinement is an essential part of the process.

The resulting model is then used in the next stage to make testable predictions or hypotheses. Since the purpose here is to put the model through its paces and uncover weaknesses, misconceptions or omissions, the challenge is to ensure that the hypotheses do explore the full range of parameters and conditions relevant to the questions posed. For complex non-linear systems with many degrees of freedom this will generally not be possible, due to the magnitude of the entire parameter space, so the issue then becomes one of determining a subset domain which is both testable and relevant to the questions at issue.

Having a set of predictions, the next stage is to test them experimentally. This requires setting up the experiments to reproduce precisely the assumptions and conditions applied in the model to generate the predictions, otherwise there is no valid basis for evaluating the model. 'Precision' here needs clarification, since there are always limits to the precision with which continuous parameters can be determined. In brief, the precision required should match that of the predictions. This stage will often require statistical methods and sophisticated experimental design techniques.

The results of the experiment are then evaluated against predictions. Outcomes which are unexpected or deviate from predictions can be the most useful, since these can suggest factors that have been overlooked in generating the model, or raise new aspects of the problem that need to be understood. In either case, the new information is applied to refining the model and/or the questions, and the cycle is repeated until the required knowledge is obtained.

A result which agrees with predictions does not, contrary to popular opinion, confirm that the model is an accurate reflection of that portion of the real world which it purports to represent. All it does is confirm that the model predicts the real world outcome under the specific conditions of the particular experiment performed. This may be quite useful information, but it is important to be extremely careful when extrapolating such predictions outside the established domain of validity – the tested parameter ranges. In particular, a model which makes some predictions that are verified experimentally may still be based on false concepts of causal relationships or mechanisms. The experimental agreement may be the result of fortuitous cancellation of errors or effects, or simply due to the insensitivity of the predictions tested to the veracity of those elements of the model, or it may be that the false concepts are nevertheless good approximations to the real mechanisms over the tested domain.

The overall outputs of the scientific method applied to a problem area are firstly a conceptual model which can be used with a known degree of confidence to make relevant quantitative and qualitative predictions, and secondly, experimentally confirmed knowledge under the selected test conditions.
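To make the cycle concrete, the following minimal sketch (our illustration only, not part of any study) expresses figure 1 as an iterative loop in Python; the stage functions make_model, predict, run_experiment and evaluate are hypothetical stand-ins that a particular study would have to supply.

```python
def scientific_method(problem, make_model, predict, run_experiment, evaluate,
                      max_iterations=10):
    """Iterate the model-predict-test-evaluate cycle of figure 1.

    The four stage functions are supplied by the particular study; `evaluate`
    returns (agreement, surprises) for the tested hypotheses.
    """
    model = make_model(problem, evidence=None)
    knowledge = []
    for _ in range(max_iterations):
        hypotheses = predict(model, problem)        # testable predictions
        results = run_experiment(hypotheses)        # under the same assumptions/conditions
        agreement, surprises = evaluate(hypotheses, results)
        knowledge.append((hypotheses, results, agreement))
        if agreement and not surprises:
            # Agreement only confirms the model within the tested parameter
            # ranges; it says nothing about validity outside that domain.
            break
        # Unexpected outcomes drive refinement of the model and/or the questions.
        model = make_model(problem, evidence=knowledge)
    return model, knowledge
```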

2.2. Applying the Scientific Method to Defence Problems

The scientific method described here has obvious applicability to the phases outlined in section 1. The first set of iterations around the cycle in figure 1 could address the problem definition phase. At this stage, the 'models' and 'tests' could be conceptual and first approximations, since the aim here is simply to get to the right questions. Given that the problem may be posed by a client in rather broad terms (e.g. 'should we upgrade A or invest in B?'), while the analysis may be provided by specialist teams, part of the process required here is elucidating critical issues, assumptions, constraints and a common understanding of terminology, as well as decomposing high-level issues into concrete detailed ones. This latter process includes defining the measures and data that need to be collected, and the experimental conditions required.

The next phase, now armed with clearly defined questions, still requires two steps. Firstly, the model (representation of the problem space) needs to be adapted to support observation of the measures that will answer the questions. This process also requires several iterations of the cycle until the analysts are confident that all the necessary factors and relationships are included and the model is making sound predictions in the domain of interest. This begs the question of how the predictions are to be evaluated, since real world experimenting is not generally an option. One possibility is to exploit historical data where it exists for relevant previous conflicts. Another approach is to rely on Subject Matter Advisers to assess the model's predictions, based on their experience. The second step, once the model is judged to be adequate, is to apply it to generate the measures and information required.

The final phase is delivery of outputs. In the 'pure' scientific world this is generally in the form of publications, with enough information given to allow other researchers to reproduce the results presented for independent confirmation, and to check their consistency with known results in other related areas. The information required includes assumptions, approximations, boundary conditions, parameter domains and error analyses. In a more applied domain such as defence, the outputs must be targeted to clients' requirements, such as advice upon which decisions can be made. However, the potential impact of those decisions is such that the need for reproducibility, independence, and clear analysis of errors and domain of validity is stronger than ever. Thus we expect that the techniques used in scientific research to perform such analyses will find direct applicability here too.

Traceability is another requirement for analysis of defence problems, not only because of public accountability for large investment decisions based on those analyses, but also as sound project management practice. The methodology developed should therefore guarantee a traceable process, with checkpoints and provisions for post-exercise evaluation, and a controlled experimental environment to allow tracing of the many hardware and software changes that are made during the various stages of the study.
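As an indication of what such traceability support might look like in practice, the following Python sketch (an assumption of ours, not an existing DSTO tool) records configuration changes and checkpoints with timestamps so that results can later be tied to the exact experimental environment that produced them; all class, field and file names are illustrative.

```python
import json
from datetime import datetime, timezone

class ExperimentLog:
    """Append-only record of configuration changes and checkpoints for one study."""

    def __init__(self, study_name):
        self.study_name = study_name
        self.entries = []

    def _record(self, kind, **details):
        self.entries.append({"time": datetime.now(timezone.utc).isoformat(),
                             "kind": kind, **details})

    def configuration_change(self, component, old_version, new_version, reason):
        self._record("config_change", component=component,
                     old=old_version, new=new_version, reason=reason)

    def checkpoint(self, label, assumptions, parameter_domain):
        self._record("checkpoint", label=label,
                     assumptions=assumptions, parameter_domain=parameter_domain)

    def save(self, path):
        with open(path, "w") as f:
            json.dump({"study": self.study_name, "entries": self.entries}, f, indent=2)

# Illustrative use:
log = ExperimentLog("example SE study")
log.configuration_change("terrain database", "v1.2", "v1.3",
                         "higher-resolution urban tiles")
log.checkpoint("pre-trial baseline",
               assumptions=["blue force structure as per study brief"],
               parameter_domain={"sensor_range_km": [5, 20]})
log.save("experiment_log.json")
```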

2.3. The BattleLab Process

The so-called BattleLab Process, commonly described as model-test-model, is based on elements of this scientific method. It has already been used for some time by operations analysts to address defence problems [5], with test techniques such as closed simulation, constructive or seminar wargaming, and field trials. Most recently, it has been embraced by the Australian Army as the cornerstone of the Army Experimental Framework (AEF), which will be applied to Army's management of capability development.

There are two particular difficulties facing analysts in attempting to apply the scientific process to defence problems. One is the fact that these test techniques can never really reproduce the variability and unpredictability of the real environments for which defence systems are required. This creates a greater degree of uncertainty in applying the conclusions that can be drawn under test conditions to the real world. The second also relates to complexity. Defence SoSs are open systems with an enormous number of interacting parameters, many of those interactions being highly non-linear and too complex to model. The parameter space is so large that one can only ever expect to explore a tiny fraction of it. Thus we have the double challenge of choosing an appropriate parameter subset domain which not only allows models to be adequately tested therein, but which can also inform the defence problem posed.

Nevertheless, defence analysts have not been deterred by these challenges and have already scored significant successes in applying the BattleLab Process to problems such as the Restructuring of The Army (RTA) in Australia. These studies are therefore taken as a baseline for evaluating new SE-based concepts.
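To illustrate the parameter-subset challenge noted above, the sketch below (not drawn from the RTA studies) uses a simple Latin hypercube design to draw a small, structured sample over a chosen subset domain; the parameter names and ranges are invented for the example.

```python
import random

def latin_hypercube(param_ranges, n_samples, seed=0):
    """param_ranges: dict of name -> (low, high). Returns n_samples design points."""
    rng = random.Random(seed)
    columns = {}
    for name, (low, high) in param_ranges.items():
        # One value from each of n_samples equal-width strata, in shuffled order.
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        columns[name] = [low + s * (high - low) for s in strata]
    return [{name: columns[name][i] for name in param_ranges}
            for i in range(n_samples)]

# Hypothetical subset domain for an SE study:
design = latin_hypercube({"sensor_range_km": (5.0, 20.0),
                          "comms_latency_s": (0.1, 2.0),
                          "threat_density": (0.2, 1.0)},
                         n_samples=8)
for point in design:
    print(point)
```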

2.4. The Role of SEs

How can SEs help? And what are the special issues to be faced in making better use of them? At the prima facie level, SEs appear to offer just another way of modelling the problem space, or another test environment for a BattleLab Process, and have indeed already met with some successes in such roles. However, we propose that SEs have a role to play at all stages, from problem definition, through model development and experimentation, to processing SE data so as to generate information useful to the client and, finally, to presenting the outcomes to stakeholders. (Note that we are not proposing that SEs replace other approaches – rather that they will complement them in important ways.)

Our assertion is based on three particular features of SEs which we believe will offer significant advantages:
• Complexity – SEs are particularly good at handling complexity. Since they consist of a number of dynamically interacting systems, possibly including humans, they recreate (rather than attempt to model) the complex interactivity of SoSs in a natural test environment;
• Flexibility – for each of the component systems there are many options for the level of representation; thus rather than a 'one SE fits all' approach, design choices can be made to focus on the issues being addressed;
• Immersion – SEs have proven their ability to engage a wide range of stakeholders in a shared problem environment and to facilitate communication between them.
We will refer back to these features in the ensuing discussion. We now turn back to the three phases outlined in section 1.

3. From capturing a baseline to a full-fledged SE methodology

Taking the BattleLab Process as instantiated in recent DSTO studies as a baseline, we have attempted to capture the processes employed, to map them to the scientific method described in section 2.1, and thus to identify those areas where gaps or weaknesses may exist, particularly where SEs may offer remedies. It is our expectation that an SE-based methodology solidly grounded in the scientific method will prove to be a powerful and robust addition to the analyst's armoury.

In order to facilitate the adoption of new SE-based technology by other defence analysis agencies, we also intend the process model to evolve eventually into a framework which can be used as a guide to designing and executing future SE capability studies. We envisage the framework providing an interface between an SE-based study environment and a study team consisting of various stakeholders and specialists, guiding them through each phase and set of cycles, from initial problem definition to final delivery of outcomes, with defined inputs, outputs, processes and decision criteria at each point. However, this work is still at a very early stage.

3.1. Process Representation Tools

In order to capture the processes in a form that can present a clear picture of the baseline methodology, we need an appropriate process representation tool. Ideally, the tool would also facilitate the subsequent stages – analysing and optimising the evolving SE methodology, and ultimately, generating the workflow management or wizard-type tools to guide users through the resulting processes. A number of representation tools have been examined, such as IDEF0², IDEF3³, iGrafx⁴ (formerly ABC FlowCharter), SEDRIS⁵, SynXpert⁶ and Rumbaugh⁷, against their abilities to:
• display processes,
• display required control inputs for the process,
• display data requirements for the process,
• display resources required for the process,
• simulate the process, and
• clearly present process inter-relationships.
Our initial assessment of the suitability of the different tools for our process representation requirements indicates that SEDRIS and Rumbaugh are really aimed at representing synthetic environment data rather than processes. SynXpert is more suited to manufacturing applications where there are operations, inspections, decisions and storage steps; however, it may be beneficial in identifying non-value-adding functions. IDEF0 can provide facilities to document processes, control functions, resources, outputs and some inputs, but lacks the ability to execute a simulation of those processes. IDEF3 does have an executable capability; however, it lacks IDEF0's ability to display control data and resources. Neither product readily conveys the total picture with all the processes and inter-relationships, since these decompose over a number of sheets. IDEF3 and iGrafx were selected for trialling in the capture and presentation of the baseline RTA process, with iGrafx proving to be the most useful at this stage, mainly due to its ability to facilitate visualisation of complex process maps.

² IDEF0: Steven C. Hill and Lee Robinson, A Concise Guide to the IDEF0 Technique, 1995.
³ IDEF3: www.idef.com
⁴ iGrafx: Micrografx Australia/New Zealand, iGrafx Professional/Process Trial CD, 1999.
⁵ SEDRIS (Synthetic Environment Data Representation Interchange Specification): see www.sedris.org
⁶ SynXpert: The MIRUS Group, SynXpert Process Mapper, 1997.
⁷ Rumbaugh, James, Object Oriented Modeling and Design, 1991.

3.2. Structure of the Baseline

The BattleLab Process as applied by LOD to RTA is described in terms of five major stages: define, develop, analyse, advise and review. Clearly, the first is the same as our problem definition phase, while the second and third represent what we have described as the model development, experimentation and evaluation phases. The last two represent the alternative branches at the bottom of figure 1 – generating new knowledge which is delivered to clients as advice, and using the lessons learnt to go back to review and refine the questions and the models. Although the list above seems sequential as presented, the RTA implementation has in fact been iterative, with overlaps and cycles, similar to our discussion of the scientific method.

Each major stage is then further decomposed at the next level of the BattleLab Process; e.g. 'define' and 'develop' become:
• problem definition,
• determining evaluation tasks,
• preparing the evaluation framework, and
• decision processes.
These are again decomposed into further subprocess levels. In each process, there are particular inputs and outputs generated, which become the inputs of the next stage or process. So far in the RTA work, the define and develop stages have been completed, with current work being at the analysis stage. Therefore, our initial baseline work is focusing on these completed stages, discussed in the next section.

3.3. Phase 1: Problem Definition

We now look in more detail at the implementation of the selected baseline: the RTA 'define' and 'develop' processes described above. The RTA study identified the inputs to the problem definition stage as including strategic guidance, the clients' aims and objectives, and current Army doctrine. The processes employed at this stage included seminars, wargames, Tactical Exercises Without Troops (TEWTs) and workshops. The output was an agreed concept of operations within which the questions would be posed and examined. This was in turn broken down into scenarios, task elements within those scenarios (formally called the Mission Essential Task List, METL) and critical areas. Finally, the questions and problems emerged as hypotheses to be tested and critical issues to be explored. The outputs from this stage then became inputs for the next, 'determine evaluation tasks', and so on. The iGrafx process map succinctly portrays all these process elements and relationships on a very full A3 sheet, too large to reproduce here.

What we have discussed so far is no more than a description of actual processes used in a particular study, but what we are seeking is an abstraction of such processes which is generic enough to be captured in a guidance framework, and then applied to completely different problem areas. Generalisation of individual processes is not too difficult once the objectives of each step have been elucidated, but what we are still lacking is a clear formulation of the decision criteria to be applied at each choice point. For example, in a problem definition sequence such as the above, how does one select the appropriate process to transform the known inputs into the desired outputs? In the context of examining the utility of SEs to support such studies, the issue is really how to arrive at the optimal set of questions which will meet the clients' requirements, and in particular, how to define the measures required and the tolerances with which they should be determined. Resolving these questions will naturally lead into determining the design specification of the SE to be used in the following stages, and thus justify the fidelity level required of each SE component and interaction. Getting this step right is therefore an extremely important part of the whole process, as it can make the difference between needing to make large investments in upgrading fidelities or not, as the case may be.

While we do not yet have a complete scheme, we do have some indication of the direction from which the answers may come. To begin with, we can learn from the baseline process described above. The steps adopted there can be generalised as successive refinements of the statement of the 'problem' – in early stages to determine the scope (what is the context? what are the essential tasks? what are the critical areas?), and in later stages to pose and prioritise clear questions to answer and hypotheses to test. The refinement processes have made use of domain experts interacting in a representation of the problem space (through seminars, wargames, TEWTs etc). It seems that the key aspect has been enacting the problem area in this low resolution mode to 'experimentally' determine what matters most in scope and issues. All three of the particular features of SEs described in section 2.4 lend themselves to supporting such a problem definition phase: the ability to recreate complexity means that more of the potentially relevant factors could be incorporated; the flexibility could be exercised in selecting low resolution representations for most of the elements to allow a quick and rough exploration; while the engaging nature of the SE would rapidly immerse stakeholders from different backgrounds into the shared problem space where their various special fields of expertise could interact fruitfully. Thus an SE design methodology grounded in the scientific method could in fact make use of low resolution SEs to support the first stage of SE design.

There is another issue worth raising at this point, since the baseline methodology seems grounded in the use of scenarios, and since SEs also naturally lend themselves to playing out scenarios. The issue is that there is a particular problem with scenario-based analysis in that the specificity of the outcomes lays them open to attack as irrelevant or unrepresentative of the 'real' issues. Thus, if the outcomes do not match preconceived preferences, it is relatively easy to discredit them. Since our aim is to develop objective analytic tools and techniques, this is a serious potential issue. One possible approach to circumvent scenario dependence is to focus instead on 'atomic' scenario elements that would be common to many different scenarios. In fact, such a concept already exists in military circles – the METL mentioned above. We suggest that alternative ways of composing SE experiments, based on various combinations of METLs rather than detailed scenarios, should be explored.
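As a sketch of how such METL-based composition might work (purely illustrative; the task elements and measures shown are invented), experiment vignettes could be assembled from combinations of atomic task elements, each carrying its own measures of performance:

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class TaskElement:
    """An 'atomic' METL entry and the measures of performance it exposes."""
    name: str
    measures: tuple

# Invented task elements for illustration only.
METL = [
    TaskElement("conduct route reconnaissance", ("time_to_detect", "route_coverage")),
    TaskElement("establish communications", ("net_availability",)),
    TaskElement("coordinate indirect fire", ("time_to_engage", "accuracy")),
    TaskElement("conduct casualty evacuation", ("response_time",)),
]

def compose_vignettes(metl, size):
    """Yield every vignette of `size` task elements with its combined measure set."""
    for combo in combinations(metl, size):
        measures = sorted({m for task in combo for m in task.measures})
        yield [task.name for task in combo], measures

for tasks, measures in compose_vignettes(METL, 2):
    print(tasks, "->", measures)
```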

The remaining steps of determining what has to be evaluated, and preparing the evaluation framework and decision processes, flow naturally from the hypotheses posed, and correspond to the design of the experiments to test them. We have not yet examined the baseline implementation of these stages in detail. However, we can speculate that in an SE-based methodology, determining the hypotheses to be tested or questions to be answered will in turn lead to determining the measures to be observed and the accuracies required. Using these to derive the SE fidelity specifications for the next stage of experimentation is basically an exercise in estimation of uncertainties and their propagation through interactions in the SE to the final observed measures. Similar exercises are routinely performed in scientific experimental design and analysis. Where we expect it will get a little more difficult is in the area of not readily quantifiable aspects of fidelity. An example of what is required is performing sensitivity analyses to see the effect on the measures to be observed of varying the values or probability distributions of input parameters, or of changing the fidelity of a component or of a class of interactions.
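A minimal Monte Carlo sketch of this kind of sensitivity analysis is given below (our illustration; the stand-in model, distributions and parameter names are assumptions rather than anything used in our SEs): uncertain inputs are sampled from assumed distributions, propagated through a surrogate of the SE, and the resulting spread in an observed measure is reported.

```python
import random
import statistics

def observed_measure(sensor_range_km, comms_latency_s):
    # Stand-in for a single SE run; a real study would execute the SE itself
    # and extract the measure from its recorded data.
    return 0.9 * min(sensor_range_km / 15.0, 1.0) - 0.1 * comms_latency_s

def monte_carlo(n_runs=1000, seed=1):
    rng = random.Random(seed)
    samples = [observed_measure(rng.gauss(12.0, 2.0),    # assumed sensor-range uncertainty
                                rng.uniform(0.2, 1.0))   # assumed latency spread
               for _ in range(n_runs)]
    return statistics.mean(samples), statistics.stdev(samples)

mean, spread = monte_carlo()
print(f"measure of effectiveness: {mean:.3f} +/- {spread:.3f}")
```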

3.4. Phase 2: Problem Solution

We will have less to say in this section, since the baseline study has not been performed yet, and since the subject of this paper is only the design methodology. However, for completeness we report on some preliminary work. This is the central phase of actually implementing and utilising the SEs for the studies planned in phase 1. If the earlier stage has been successfully completed, this stage should be a more straightforward one of putting those plans into action. Therefore, project management tools are likely to be helpful here.

We have already examined some open source process model representation schemes such as the FEDEP (Federation Development and Execution Process) [6] tools and the Conceptual Models of the Mission Space (CMMS) [7], as they are particularly focused on simulation studies. The FEDEP model was developed by the US Defense Modeling and Simulation Office (DMSO) and is used for implementing the High Level Architecture (HLA) system, particularly migration from DIS to HLA. FEDEP comprises six major steps, each with two to three substeps, which divide the process into logical and manageable pieces: defining objectives, developing federation conceptual models, designing the federation, developing it, integrating and testing it, and finally executing the federation and analysing results. Although the FEDEP tools are HLA-specific, the processes closely parallel some of the processes we have been discussing. Comparing the process maps of the FEDEP and the evolving SE methodology should therefore yield further insights, and some useful processes.

The CMMS is another DMSO tool, providing simulation- and implementation-independent functional descriptions of the real world processes, entities and environment associated with a particular set of missions. It supports the FEDEP approach with an overall development process illustrated schematically in figure 2 below. Again, we expect to learn from a closer study and mapping of the process.

Figure 2: The CMMS process map (elements include the real world, system analysis model, user space model, mission space model, logical design model, software model, exercise model, synthetic representation model and the simulated world)

3.5. Phase 3: Delivery of Outcomes

From the client's point of view, the delivery of outputs is always the most important part of a study. A traditional approach might be to work with the client to establish a common understanding of what is required, but then to go away for some time while carrying out the study, and to only reappear at the end to 'tell them the answers'. Historically, the outcome of this approach has often been less than successful – not only because of the length of time between posing the problem and getting the answer (it may be too late, and many factors may have changed in the meantime), but also because it has often proved very difficult to convey lessons learned to the client if those lessons challenge accepted paradigms and if the client has not been involved in arriving at the new understandings.

This is another area where the particular features of SEs described in section 2.4 come into their own. Our early experiments have already demonstrated the power of engaging stakeholders in the SE, to participate in the definition and design phases as well as the execution and analysis. Of course, not all stakeholders will have the time or inclination to get involved to that degree. However, if at least some user or client representatives are involved, there are benefits to both sides. The effectiveness of the study can gain from more interactive guidance and focusing, and the clients are much more likely to get value out of the study.

Another aspect of SEs which we have recently started to develop [8] is the concept of an analytical layer residing over the SE, which generates real-time measures computed from the dynamic data produced during execution. The aim is to eliminate slow post-processing of data, and to support real-time interaction in the SE between planners, controllers, clients and participants. We believe that this will greatly enhance the utility of SE-based capability studies.
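To indicate the flavour of such an analytical layer (this sketch is ours and is not the implementation reported in [8]), running measures can be maintained incrementally from events streamed out of the SE during execution, so that a current value is available at any moment without post-processing; the event fields and measure shown are illustrative.

```python
class RunningMeasure:
    """Incrementally updated mean, available at any moment during execution."""

    def __init__(self):
        self.count = 0
        self.total = 0.0

    def update(self, value):
        self.count += 1
        self.total += value

    @property
    def value(self):
        return self.total / self.count if self.count else None

def analytical_layer(event_stream):
    """Consume events pushed by the running SE and yield updated measures."""
    time_to_detect = RunningMeasure()
    for event in event_stream:
        if event["type"] == "detection":
            time_to_detect.update(event["t_detect"] - event["t_appear"])
            yield {"mean_time_to_detect": time_to_detect.value}

# Illustrative events (field names are invented):
events = [{"type": "detection", "t_appear": 10.0, "t_detect": 14.5},
          {"type": "detection", "t_appear": 30.0, "t_detect": 33.0}]
for snapshot in analytical_layer(events):
    print(snapshot)
```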

4. Future Development

Although the work reported here is still at a very early stage, we aim to develop a mature and robust process to be implemented for defence planners. The next stage will include completion of the baseline study, and the development of SE techniques for sensitivity and error analyses. Following that, we will focus on the design of experiments, and on the design of the guidance framework and its interface.

References

1. Grisogono, A.M., Synthetic Environments in Support of Capability Development, Proceedings of the International Symposium on Synthetic Environments II, RMC Shrivenham, UK, 27 October 1999.
2. Grisogono, A.M. and Seymour, R., Architectural Overview of a Synthetic Environment for Land C4ISR, Proc. SimTecT 2000.
3. Grisogono, A.M., Application of Synthetic Environments to Armed Reconnaissance Helicopter Mission Concept Development, Proc. HeliEurope 99, Stockholm, 4 November 1999.
4. Menadue, W.I. et al., Development and Operation of Virtual Helicopter Simulators as a Data Acquisition Tool for Operations Research, Proc. SimTecT 2000.
5. Curtis, N.J. and Bowley, D.K., Hierarchical Systems of Enquiry for Analysis of the Land Force, Proc. Defence Operations Research Conference, March 1999, Salisbury, South Australia.
6. http://hla.dmso.mil/federation/fedep/
7. http://www.dmso.mil/cmms/
8. Ashton, K., Principe, F. and Robertson, S., Interfacing with ModSAF for Data Collection and Analysis, Proc. SimTecT 2000.

Acknowledgments

The authors would like to thank Dean Bowley, Phillip James, and Alan Burgess for useful discussions.

Author Biographies

Anne-Marie Grisogono gained her PhD in Mathematical Physics at the University of Adelaide and spent several years in academia at various overseas institutions, at Flinders University and, most recently, in the Optics Group at the University of Adelaide. With DSTO since 1995, she currently heads the Simulation Discipline in Land Operations Division, where she has led the successful development of helicopter mission simulators and of the conceptual framework for applying Synthetic Environments to capability development.

Eyoel Teffera graduated from Monash University in 1993 with a BE (Industrial and Computing). He then worked in the wine and plastics manufacturing industries as an industrial engineer on process improvement, layout planning, manufacturing resource planning, process control and quality systems projects. He commenced work at DSTO Land Operations Division in May 1999 and is currently working in the Synthetic Environments Research Facility.