A Methodology for the Probabilistic Assessment of System Effectiveness as Applied to Aircraft Survivability and Susceptibility
A Thesis
Submitted to the Academic Faculty In Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in Aerospace Engineering
By
Danielle Suzanne Soban
Georgia Institute of Technology
Atlanta, GA
November 1, 2001
Copyright 2001 by Danielle S. Soban
ACKNOWLEDGEMENTS

It is said that a journey of a thousand miles begins with a single step. That single step took place many years ago, and I have been so fortunate to have had along the way the best of companions. The people who have walked beside me on this journey have provided an unending supply of love, support, professional and technical experience, and unfailing encouragement. This work is theirs as much as it is mine.

To my advisor, Dr. Dimitri Mavris, thank you for believing in me and not taking no for an answer. You have opened a door on a world of professional experiences and opportunities, and I am grateful for your guidance. To Dr. Schrage and the rest of my thesis committee, Mr. Gene Fleeman, Dr. James Craig, and Dr. Bill Bell, thank you for sharing with me your wisdom, your experience, and your time.

To the ASDL family, Michelle, Elena, Andy, and all of the rest: we’ve laughed together and we’ve cried together, we’ve moved mountains and we’ve played in the sand. Thanks for making this crazy place so interesting to come to every morning. And to Dr. Dan Biezad: you have taught by example what it means to be a teacher, and I can only hope to follow in your substantial footsteps.

Todd, my love, my partner, you have filled the journey with joy and adventure. You pushed when I needed pushing, you lent your shoulder when I needed a place to lay my head. I wake each day amazed that you walked into my life and then decided to stay. This journey may be over, but I look forward to sharing with you all of the new adventures that await us.
To my huge and crazy family, thanks for showing me it’s okay to be a little nuts, and that a warm hug, even by email, is one of the most wonderful things there is. Your love has filled my sails and your support has kept me afloat. And finally, to my mother, Suzanne. You have always held the beacon that illuminates my way, and I have never been lost. You have been my inspiration, my staunchest supporter, and my dearest friend. I will never stop learning from your wisdom and your courage. Thanks, Mom.
Dani
November, 2001 Atlanta, GA
TABLE OF CONTENTS ACKNOWLEDGEMENTS ......................................................................................................... iii
TABLE OF CONTENTS ............................................................................................................... v
LIST OF FIGURES....................................................................................................................... xi
LIST OF TABLES...................................................................................................................... xvi
NOMENCLATURE .................................................................................................................. xvii
SUMMARY................................................................................................................................ xxii
INTRODUCTION .......................................................................................................................... 1 MOTIVATION ................................................................................................................................ 1 NEED FOR A MILITARY SYSTEM EFFECTIVENESS FRAMEWORK.................................................... 3 The Changing Economy and its Effect on Decision Making................................................... 3 The Link to System Effectiveness ............................................................................................ 5 Measures of Effectiveness .................................................................................................................6 Use of System Effectiveness Metrics.................................................................................................6 Resource Allocation .....................................................................................................................7 Requirements Definition...............................................................................................................7 Trade Studies Between System Components................................................................................8
Lack of Overall System Effectiveness Methodology................................................................ 9 Semantics and a Surplus of Synonyms ..............................................................................................9 Difficulty Accessing Government and Classified Material..............................................................14
SYSTEM OF SYSTEMS APPROACH ............................................................................................... 14 A Shifting Paradigm.............................................................................................................. 15
The Theater as the System..................................................................................................... 17 Mathematical Modeling ........................................................................................................ 21 Use of Probability Theory................................................................................................................22
RESEARCH QUESTIONS ............................................................................................................... 24 INCORPORATING UNCERTAINTY: THE RESPONSE SURFACE METHOD AND MONTE CARLO ANALYSIS................................................................................................................... 26 SIZING AND SYNTHESIS .............................................................................................................. 26 METAMODELS ............................................................................................................................ 27 RESPONSE SURFACE METHODOLOGY ......................................................................................... 29 Response Surface Equations ................................................................................................. 30 Design of Experiments .......................................................................................................... 31 USING RESPONSE SURFACE METHODOLOGY................................................................................ 33 Setting up the Problem.......................................................................................................... 33 Screening and Pareto Plots .................................................................................................. 34 Prediction Profiles ................................................................................................................ 36 Technology Impact Forecasting ........................................................................................... 38 Overall TIF Environment.................................................................................................................39 Technology Mappings and k-factors................................................................................................40 Monte Carlo Simulation...................................................................................................................43
Analysis: the Probability Graphs.......................................................................................... 44 SUMMARY OF INCORPORATING UNCERTAINTY AND RESPONSE SURFACE METHODOLOGY ........... 46
THE NATURE OF MILITARY MODELING .......................................................................... 49 MODELS AND MODELING ........................................................................................................... 49 Types of Models .................................................................................................................... 50 Conceptual Models vs. Computer Models ............................................................................ 53 How Models are Used........................................................................................................... 55
Transparency ........................................................................................................................ 57 CLASSIFICATION OF MILITARY MODELS .................................................................................... 58 SIMTAX................................................................................................................................. 59 Other Taxonomies................................................................................................................. 61 Hierarchical Modeling.......................................................................................................... 62 Decomposition Levels .....................................................................................................................65 Engineering Models....................................................................................................................65 Mission Models ..........................................................................................................................67 Campaign Models.......................................................................................................................68
The Military Code Continuum .............................................................................................. 71 SUMMARY OF MILITARY MODELING ............................................................................................ 74
PRELIMINARY INVESTIGATION: APPLYING CURRENT METHODS TO CAMPAIGN LEVEL ................................................................................................................................. 76 CAMPAIGN CODE SELECTION ..................................................................................................... 76 ITEM ..................................................................................................................................... 77 CASE I: THEATER LEVEL CASE .................................................................................................. 78 Scenario for Case I ............................................................................................................... 79 Inputs and Outputs................................................................................................................ 81 Results from Case I ............................................................................................................... 82 CASE II: SURVIVABILITY TEST CASE.......................................................................................... 85 Scenario for Case II .............................................................................................................. 85 Inputs and Outputs................................................................................................................ 88 Results from Case II.............................................................................................................. 90 SUMMARY OF PRELIMINARY INVESTIGATION ............................................................................. 92 RESULTING ISSUES AND SOLUTIONS TO PRELIMINARY INVESTIGATION .......... 93 IDENTIFICATION OF THREE PRIMARY ISSUES ............................................................................... 93
Level of Detail....................................................................................................................... 93 Model Integration ............................................................................................................................94 Zooming .....................................................................................................................................95 Model Abstraction ...........................................................................................................................96
Human in the Loop Dilemma ................................................................................................ 97 Scenario Significance.......................................................................................................... 100 PROPOSED SOLUTIONS ............................................................................................................. 100 “Abstragration”: A Linked Analysis Environment ............................................................. 101 Creation of the Conceptual Analysis Model ..................................................................................102 Applying “Abstragration”..............................................................................................................103
Full Probabilistic Environment .......................................................................................... 104 Tree Diagrams/Decision Trees.......................................................................................................105 Impact Dials...................................................................................................................................108
Summary of Solutions ......................................................................................................... 108 PROPOSAL OF NEW METHOD-POSSEM .......................................................................... 110 SUMMARY OF RESEARCH ......................................................................................................... 110 THE POSSEM FLOWCHART ..................................................................................................... 112 Difference between Analyst and Analysis Tool ................................................................... 112 CREATE THE CONCEPTUAL MODEL .......................................................................................... 114 IDENTIFY KEY DECISION NODES .............................................................................................. 116 CREATE LINKED ANALYSIS ENVIRONMENT ............................................................................. 117 CREATE FULL PROBABILISTIC ENVIRONMENT.......................................................................... 118 ANALYSIS ................................................................................................................................. 119 POSSEM PROOF OF CONCEPT............................................................................................ 122 SURVIVABILITY CONCEPTS....................................................................................................... 122 The Need to Bring Survivability into Preliminary Design Process..................................... 124
The Paradigm Shift and Survivability............................................................................................127 The Link to System Effectiveness..................................................................................................130
EXAMPLE: CREATE THE CONCEPTUAL MODEL......................................................................... 130 Answers to Key Questions................................................................................................... 131 What problem are we trying to solve? ...........................................................................................131 What level of detail do we need? ...................................................................................................132 What tools are needed and available? ............................................................................................134 FLOPS ......................................................................................................................................134 ITEM ........................................................................................................................................136 Mission Level Code..................................................................................................................147
Baselines ............................................................................................................................. 148 Aircraft Baseline: F/A-18C............................................................................................................149 Design Mission.........................................................................................................................151 Validation .................................................................................................................................152
System Inputs and Outputs.................................................................................................. 155 Inputs: Engineering Level..............................................................................................................157 Outputs: Campaign Level ..............................................................................................................159
Scenario .............................................................................................................................. 160 Summary of Conceptual Model........................................................................................... 164 EXAMPLE: IDENTIFY KEY DECISION NODES............................................................................. 165 EXAMPLE: CREATE LINKED ANALYSIS ENVIRONMENT ............................................................ 167 Engineering Level: FLOPS................................................................................................. 169 Radar Cross Section Mapping........................................................................................................170
Campaign Level: ITEM....................................................................................................... 172 Mission Level Mapping....................................................................................................... 175 Detectability...................................................................................................................................176 Maneuverability .............................................................................................................................183
Linking the Codes Together ................................................................................................ 186
EXAMPLE: CREATE FULL PROBABILISTIC ENVIRONMENT ........................................................ 188 Creation of the Metamodels................................................................................................ 189 The Complete Probabilistic Analysis Environment ............................................................ 192 Adding the Probabilistics...............................................................................................................193
EXAMPLE: ANALYSIS ............................................................................................................... 195 Engineering Level Results................................................................................................... 196 Screening Test ...............................................................................................................................197 Prediction Profile ...........................................................................................................................200
Mission Level Results.......................................................................................................... 206 Prediction Profile ...........................................................................................................................206
Campaign Level Results...................................................................................................... 207 Prediction Profiles .........................................................................................................................208
Complete Environment Results ........................................................................................... 214 Effect of Varying Design Variables...............................................................................................215 Effect of Varying Threat Variables................................................................................................219 Effect of Fully Probabilistic Environment .....................................................................................224
CONCLUDING REMARKS ..................................................................................................... 232 RESEARCH QUESTIONS AND ANSWERS ..................................................................................... 238 RECOMMENDATIONS ................................................................................................................ 240 REFERENCES ........................................................................................................................... 245 VITA............................................................................................................................................ 253
LIST OF FIGURES FIGURE 1 – WEAPONS SYSTEM EFFECTIVENESS EXAMPLE............................................................................ 12 FIGURE 2 – PARADIGM SHIFT: BRINGING KNOWLEDGE FORWARD IN DESIGN PROCESS ............................... 15 FIGURE 3 – PARADIGM SHIFT: FROM PERFORMANCE BASED QUALITY TO EFFECTIVENESS BASED QUALITY ............................................................................................................................................................. 17 FIGURE 4 – THE AIRCRAFT AS THE SYSTEM .................................................................................................. 17 FIGURE 5 – SYSTEM OF SYSTEMS FORMULATION .......................................................................................... 20 FIGURE 6 – EXAMPLE OF A PARETO PLOT ..................................................................................................... 36 FIGURE 7 – EXAMPLE OF A PREDICTION PROFILE .......................................................................................... 37 FIGURE 8 – PROCESS TO CREATE TIF ENVIRONMENT AND ASSESS TECHNOLOGY SCENARIOS ..................... 39 FIGURE 9 – EXAMPLE K-FACTOR NOTIONAL SHAPE FUNCTION FOR A WEIGHT REDUCTION TECHNOLOGY DIAL .................................................................................................................................................... 41 FIGURE 10 – EXAMPLES OF A PROBABILITY DENSITY FUNCTION AND A CUMULATIVE PROBABILITY FUNCTION ............................................................................................................................................ 45 FIGURE 11 – MODEL CATEGORIZATION ........................................................................................................ 52 FIGURE 12 – CHARACTERISTICS OF A CONCEPTUAL MODEL ......................................................................... 55 FIGURE 13 – SIMTAX CATEGORIZATION OF WARFARE SIMULATION .......................................................... 60 FIGURE 14 – TRADITIONAL PYRAMID OF MILITARY MODELS ....................................................................... 64 FIGURE 15 – ALTERNATE HIERARCHICAL STRUCTURE OF MILITARY MODELS I........................................... 64 FIGURE 16 – ALTERNATE HIERARCHICAL STRUCTURE OF MILITARY MODELS II.......................................... 65 FIGURE 17 – ENGINEERING LEVEL MODEL FEATURES .................................................................................. 67 FIGURE 18 – MISSION LEVEL MODEL FEATURES .......................................................................................... 68 FIGURE 19 – COMMON CAMPAIGN CODES IN USE TODAY ............................................................................ 69 FIGURE 20 – CAMPAIGN LEVEL MODEL FEATURES....................................................................................... 70 FIGURE 21 – THE MILITARY CODE CONTINUUM ........................................................................................... 74 FIGURE 22 – THE ITEM GRAPHICAL INTERFACE AND ENVIRONMENT .......................................................... 77
FIGURE 23 – APPLYING CURRENT METHODOLOGY USING ITEM ................................................................. 78 FIGURE 24 – FLORIDA SCENARIO WITH AIR SUPERIORITY OPERATIONS SITUATION ..................................... 79 FIGURE 25 – BLUE AIRCRAFT MODELED FOR SCENARIO .............................................................................. 80 FIGURE 26 – RED SAM SITE DETAIL ............................................................................................................ 80 FIGURE 27 – PARETO PLOTS FOR THEATER SCENARIO SCREENING TEST ...................................................... 84 FIGURE 28 – PREDICTION PROFILE FOR THEATER SCENARIO SCREENING TEST ............................................ 85 FIGURE 29 – FLORIDA SCENARIO USED IN THEATER LEVEL SURVIVABILITY STUDY .................................... 87 FIGURE 30 – SAM SITE WEAPON COMPARISON FOR THEATER LEVEL SURVIVABILITY STUDY .................... 87 FIGURE 31 – PREDICTION PROFILE FOR THEATER LEVEL SURVIVABILITY STUDY ........................................ 92 FIGURE 32 – A ZOOMING APPROACH: BREAKDOWN OF RDTE..................................................................... 96 FIGURE 33 – FLOWCHART FOR DECISION-MAKING FOR ITEM...................................................................... 98 FIGURE 34 – NOTIONAL EXAMPLE OF DECISION TREE DIAGRAM ............................................................... 106 FIGURE 35 – PROPOSED FULL PROBABILISTIC ENVIRONMENT .................................................................... 107 FIGURE 36 – SUMMARY OF ISSUES AND PROPOSED SOLUTIONS .................................................................. 109 FIGURE 37 – THE POSSEM FLOWCHART ................................................................................................... 113 FIGURE 38 – CREATE CONCEPTUAL MODEL STEP OF POSSEM.................................................................. 116 FIGURE 39 – IDENTIFY KEY DECISION NODES STEP OF POSSEM............................................................... 117 FIGURE 40 – CREATE LINKED ANALYSIS ENVIRONMENT STEP OF POSSEM .............................................. 118 FIGURE 41 – CREATE FULL PROBABILISTIC ENVIRONMENT STEP OF POSSEM .......................................... 119 FIGURE 42 – ANALYSIS AND FINAL STEP OF POSSEM ............................................................................... 120 FIGURE 43 – VOLPE’S SURVIVABILITY 3-D APPROACH .............................................................................. 123 FIGURE 44 – RELATIONSHIP BETWEEN SURVIVABILITY AND LIFE CYCLE COST .......................................... 125 FIGURE 45 – EFFECT OF SURVIVABILITY ON FORCE SIZE ............................................................................ 126 FIGURE 46 – EFFECT OF SURVIVABILITY ON FORCE EFFECTIVENESS .......................................................... 127 FIGURE 47 – CREATE CONCEPTUAL MODEL STEP OF POSSEM.................................................................. 131 FIGURE 48 – FLOPS ANALYSIS FLOWCHART ............................................................................................. 136 FIGURE 49 – HIERARCHICAL STRUCTURE OF ITEM DATABASE .................................................................. 139
FIGURE 50 – SAM ENGAGEMENT FLOW CHART ........................................................................................ 142 FIGURE 51 – SURFACE TO AIR MISSILE ENGAGEMENT PROCESS ................................................................. 145 FIGURE 52 – SAM INTERCEPT OPPORTUNITIES .......................................................................................... 146 FIGURE 53 – THREE VIEW OF BOEING F/A-18C .......................................................................................... 150 FIGURE 54 – DESIGN MISSION USED FOR SIZING AND MATCHING THE F/A-18C ....................................... 151 FIGURE 55 – F404-GE-402 ENGINE USED ON THE F/A-18C MODEL .......................................................... 153 FIGURE 56 – MATCHED DRAG POLARS FOR F/A-18C ................................................................................. 155 FIGURE 57 – DETERMINING SYSTEM LEVEL INPUTS AND OUTPUTS ............................................................ 156 FIGURE 58 – SCENARIO GEOGRAPHY FOR POSSEM EXAMPLE................................................................... 161 FIGURE 59 – SCENARIO WEAPONS .............................................................................................................. 162 FIGURE 60 – SCENARIO SAM SITE.............................................................................................................. 162 FIGURE 61 – SCENARIO VEHICLES .............................................................................................................. 163 FIGURE 62 – SCENARIO AIRBASES .............................................................................................................. 163 FIGURE 63 – IDENTIFY KEY DECISION NODES STEP IN POSSEM ............................................................... 165 FIGURE 64 – DECISION NODES FOR POSSEM SURVIVABILITY EXAMPLE ................................................... 167 FIGURE 65 – CREATE LINKED ANALYSIS ENVIRONMENT STEP IN POSSEM............................................... 168 FIGURE 66 – USE OF PRE-EXISTING RESPONSE SURFACE EQUATION FOR RADAR CROSS SECTION ............. 171 FIGURE 67 – ENGINEERING LEVEL MAPPING .............................................................................................. 171 FIGURE 68 – AIRCRAFT DATA WINDOW IN ITEM ...................................................................................... 173 FIGURE 69 – ENGINEERING LEVEL MAPPINGS AND CAMPAIGN LEVEL MAPPINGS ...................................... 175 FIGURE 70 – SECONDARY MISSION USING VARIABLE ALTITUDE................................................................ 183 FIGURE 71 – COMPLETE LINKED ANALYSIS ENVIRONMENT CREATED FOR SURVIVABILITY EXAMPLE....... 188 FIGURE 72 – CREATE FULL PROBABILISTIC ENVIRONMENT STEP IN POSSEM........................................... 189 FIGURE 73 – COMPLETE PROBABILISTIC ANALYSIS ENVIRONMENT ............................................................ 193 FIGURE 74 – ANALYSIS STEP IN POSSEM .................................................................................................. 196 FIGURE 75 – ANALYSIS AND RESULTS OVERVIEW ...................................................................................... 196 FIGURE 76 – PARETO PLOTS FOR ENGINEERING LEVEL SCREENING TEST 1 ................................................ 199
FIGURE 77 – PARETO PLOTS FOR ENGINEERING LEVEL SCREENING TEST 2 ................................................ 199 FIGURE 78 – PREDICTION PROFILE FOR ENGINEERING LEVEL ..................................................................... 205 FIGURE 79 – PREDICTION PROFILE FOR MISSION LEVEL ............................................................................. 207 FIGURE 80 – PREDICTION PROFILE FOR CAMPAIGN LEVEL, CASE 1 ............................................................ 212 FIGURE 81 – PREDICTION PROFILE FOR CAMPAIGN LEVEL, CASE 4 ............................................................ 214 FIGURE 82 – PROBABILITY DENSITY FUNCTIONS FOR SURVIVABILITY OF BLUE-1 AIRCRAFT WITH PROBABILISTIC DESIGN ENVIRONMENT ............................................................................................. 217 FIGURE 83 – PROBABILITY DENSITY FUNCTIONS FOR SURVIVABILITY OF BLUE-2 AIRCRAFT WITH PROBABILISTIC DESIGN ENVIRONMENT ............................................................................................. 218 FIGURE 84 – OVERLAY CHARTS FOR AIRCRAFT SURVIVABILITY WITH PROBABILISTIC DESIGN ENVIRONMENT ................................................................................................................................... 219 FIGURE 85 – PROBABILITY DENSITY FUNCTIONS FOR SURVIVABILITY OF BLUE-1 AIRCRAFT WITH PROBABILISTIC THREAT ENVIRONMENT ............................................................................................ 221 FIGURE 86 – PROBABILITY DENSITY FUNCTIONS FOR SURVIVABILITY OF BLUE-2 AIRCRAFT WITH PROBABILISTIC THREAT ENVIRONMENT ............................................................................................ 222 FIGURE 87 – COMPARISON OF PDF AND CDF FOR AIRCRAFT SURVIVABILITY, COMPOSITE CAMPAIGN .... 223 FIGURE 88 – OVERLAY CHARTS FOR AIRCRAFT SURVIVABILITY WITH PROBABILISTIC THREAT ENVIRONMENT ................................................................................................................................... 224 FIGURE 89 – PROBABILITY DENSITY FUNCTIONS FOR SURVIVABILITY OF BLUE-1 AIRCRAFT WITH FULLY PROBABILISTIC ENVIRONMENT .......................................................................................................... 226 FIGURE 90 – PROBABILITY DENSITY FUNCTIONS FOR SURVIVABILITY OF BLUE-2 AIRCRAFT WITH FULLY PROBABILISTIC ENVIRONMENT .......................................................................................................... 227 FIGURE 91 – OVERLAY CHARTS FOR AIRCRAFT SURVIVABILITY WITH FULLY PROBABILISTIC ENVIRONMENT ........................................................................................................................................................... 228 FIGURE 92 – COMPARISON OF PDFS AND CDFS FOR BLUE-1 SURVIVABILITY, COMPOSITE CAMPAIGN ..... 229 FIGURE 93 – OVERLAY CHART COMPARING CONTRIBUTIONS OF DESIGN AND THREAT ENVIRONMENTS TO FULLY PROBABILISTIC ENVIRONMENT FOR BLUE-1 SURVIVABILITY ................................................. 229
FIGURE 94 – CUMULATIVE EFFECTS OF VARYING SETS OF INPUT DATA FOR SURVIVABILITY RATE OF BLUE-1 .............................................................................................................................................. 231
LIST OF TABLES TABLE 1– EXAMPLE DESIGN OF EXPERIMENTS TABLE.................................................................................. 32 TABLE 2– SEVERAL DOES AND REQUIRED EXPERIMENTAL CASES ............................................................... 33 TABLE 3 – INPUTS FOR THEATER LEVEL SCENARIO ...................................................................................... 81 TABLE 4 – OUTPUTS FROM THEATER SCENARIO ........................................................................................... 82 TABLE 5 – INPUT VARIABLES FOR THEATER LEVEL SURVIVABILITY STUDY ................................................ 89 TABLE 6 – OUTPUT VARIABLES FOR THEATER LEVEL SURVIVABILITY STUDY ............................................. 90 TABLE 7 – SURVIVABILITY ENHANCEMENT CONCEPTS ............................................................................... 124 TABLE 8 – DESIRED CAPABILITIES AND RESULTING MODEL FEATURES IN ITEM....................................... 138 TABLE 9 – SELECTED CHARACTERISTIC DATA OF BOEING F/A-18C........................................................... 150 TABLE 10 – GENERAL ENGINE SPECIFICATIONS FOR F404-GE-402............................................................ 152 TABLE 11 – WEIGHTS MATCHING FOR THE F/A-18C IN FLOPS ................................................................. 154 TABLE 12 – SYSTEM LEVEL INPUTS ............................................................................................................ 158 TABLE 13 – SYSTEM LEVEL OUTPUTS ......................................................................................................... 159 TABLE 14 – OUTPUT PARAMETERS TRACKED FROM FLOPS....................................................................... 169
NOMENCLATURE

ε: Error term in a response surface equation
λ: Wavelength of the radar
σ: Radar cross section
ANOVA: Analysis of Variance
b0: Intercept term in a response surface equation
bi: Regression coefficients for the first order terms in an RSE
bii: Coefficients for the pure quadratic terms in an RSE
bij: Coefficients for the cross-product terms in an RSE
CCD: Central Composite Design
CDF: Cumulative Distribution Function
CINCPAC: Commander in Chief Pacific
COFA: Concept Feasibility Assessment
dB: Decibels
DoD: Department of Defense
DoE: Design of Experiments
Ek: Estimated Kill
FedEx: Federal Express
FLOPS: Flight Optimization System (computer code)
Fmaneuver_elem: Maneuverability factor for aircraft in the raid element
Freldet,r,e: Relative detectability of raid element
G: Gain or electronic amplification of the radar
GUI: Graphical User Interface
Hs: Height of the radar for the SAM site
Hr: Altitude of the raid
IPPD: Integrated Product and Process Design
ITEM: Integrated Theater Engagement Model (computer code)
JHAPL: Johns Hopkins Applied Physics Laboratory
JTCG/AS: Joint Technical Coordinating Group on Aircraft Survivability
k: Total number of variables considered in an RSE
k: Boltzmann’s constant (1.38 × 10⁻²³ Watt-second/°K)
k1: Weighting factor for excess energy
k2: Weighting factor for turning rate
k3: Weighting factor for turning radius
k-factor: Multiplicative factor to a variable
krcs/pw: Multiplicative factor on RCS for parasitic weight
L: Loss factor accounting for atmospheric attenuation of radar energy
L/D: Lift to Drag ratio
MoE: Measures of Effectiveness
MORS: Military Operations Research Society
Nac_damage,e: Number of aircraft damaged
Nac_elem: Number of aircraft or weapons in the raid element
NASA: National Aeronautics and Space Administration
MoP: Measures of Performance
Ndetected: Total expected targets
Ndet: Number of raid elements detectable at minimum distance
Nsalvos_elem: Number of salvos launched at each raid element
Nsim_engage: Number of simultaneous engagements for the site
Nweps_avail: Number of weapons available
Nr,e: Number of original raid elements
P: Value to be converted into decibels
P(dB): Value in decibels
PDF: Probability Density Function
Ph: Probability of Hit
Pk: Probability of Kill
Pk/h: Probability of Kill given Hit
Pksalvo_elem: Probability of kill of the salvo against the raid element
Pksam,elem: Probability of kill of the SAM against the raid element
Pmin: Minimum receiver signal threshold
P0: Reference value when converting to decibels
Pop_fc,s: Probability the site's fire control is operational
Pop_radar,s: Probability that SAM radar is operational
POSSEM: PrObabilistic System of Systems Effectiveness Methodology
Ps: Probability of Survival
Ps,sizing: Excess energy from aircraft sizing
Ps,alternate: Excess energy from the alternate mission
Pt: Radar transmitter power
R: Desired response term in a response surface equation
Rcpa: Closest point of approach of raid element to SAM site
RCS: Radar Cross Section
RCSold: RCS value from existing RSE
RCSnew: RCS value after multiplying by parasitic weight factor, krcs/pw
Rdet_unlim,e: Non-horizon-limited radar range for SAM site
Rdetect,s: Site's maximum detection and tracking range against the raid element with the largest relative detectability
RDTE: Research, Development, Test, Evaluation
Rh: Horizon-limited radar range for SAM site
RSE: Response Surface Equation
Rsam: SAM range
RSM: Response Surface Methodology
Rtrk,s: Tracking range of the SAM site
Ssam: SAM speed
Sraid: Raid speed
Sratio: Speed ratio = Sraid / Ssam
Ssalvo: SAM salvo size
SAM: Surface to Air Missile
TIES: Technology Identification/Evaluation/Selection (methodology)
TIF: Technology Impact Forecasting (methodology)
TNR: Threshold to Noise Ratio
Trad,s: Turning radius from the aircraft sizing
Trad,alternate: Turning radius from the alternate mission
Trate,s: Turning rate from the aircraft sizing
Trate,alternate: Turning rate from the alternate mission
Treact: Reaction time of SAM fire
Ts: System noise temperature, including internal radar and electronics and external random noise effects
UPS: United Parcel Service
WWII: World War II
xi, xj: Independent variables in a response surface equation
SUMMARY

Significant advances have been made recently in applying probabilistic methods to aerospace vehicle concepts. Given the explosive changes in today’s political, social, and technological climate, it makes practical sense to extrapolate these methods to the campaign analysis level. This would allow the assessment of rapidly changing threat environments as well as technological advancements, aiding today’s decision makers. These decision makers use this information in three primary ways: resource allocation, requirements definition, and trade studies between system components. In effect, these decision makers are looking for a way to quantify system effectiveness. Using traditional definitions, one can categorize an aerospace concept, such as an aircraft, as the system. Design and analysis conducted on the aircraft will result in system level Measures of Effectiveness. System effectiveness, therefore, becomes a function of only that aircraft’s design variables and parameters. While this method of analysis can result in the design of a vehicle that is optimized to its own mission and performance requirements, the vehicle remains independent of the role for which it was created: the warfighting environment. It is therefore proposed that the system be redefined as the warfighting environment (campaign analysis) and the problem be considered to have a system of systems formulation. A methodology for the assessment of military system effectiveness is proposed. Called POSSEM (PrObabilistic System of Systems Effectiveness Methodology), the methodology describes the creation of an analysis pathway that links engineering level changes to campaign level measures of effectiveness. The methodology includes probabilistic analysis techniques in order to manage the inherent uncertainties in the problem, which are functions of human decision making, rapidly changing threats, and the incorporation of new technologies. An example problem is presented, in which aircraft survivability enhancements are added to a baseline aircraft, and the effects of these additions are propagated to the campaign analysis level.
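The analysis pathway just described can be pictured, in miniature, as pushing Monte Carlo samples of uncertain inputs through fast metamodels of the linked codes. The sketch below (Python) is only a schematic of that idea: the quadratic response surface, its coefficients, the two input variables, and their distributions are illustrative assumptions, not values from this work.

import random

# Notional second-order response surface: MoE = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
# All coefficients are placeholders for illustration, not values fitted in this work.
B0 = 0.70
B1, B2 = 0.05, -0.03
B11, B22 = -0.01, 0.02
B12 = 0.005

def moe_rse(x1, x2):
    # Evaluate the assumed response surface metamodel
    return B0 + B1*x1 + B2*x2 + B11*x1**2 + B22*x2**2 + B12*x1*x2

# Monte Carlo: sample the normalized input variables from assumed distributions
samples = []
for _ in range(10000):
    x1 = random.uniform(-1.0, 1.0)   # e.g., a design variable (k-factor) varied over its design range
    x2 = random.gauss(0.0, 0.3)      # e.g., an uncertain threat characteristic
    samples.append(moe_rse(x1, x2))

samples.sort()
print("median MoE:", samples[len(samples) // 2])
print("90th-percentile MoE:", samples[int(0.9 * len(samples))])

In the methodology itself, such response surfaces are regressed from designed experiments run on the engineering, mission, and campaign codes, and the resulting samples are summarized as probability density and cumulative distribution functions of the campaign level measures of effectiveness, as outlined in the chapters that follow.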
CHAPTER I
INTRODUCTION
Motivation

Assessing the success and effectiveness of today’s complex systems is an increasingly challenging problem. Demands for increased performance, lower system life cycle costs, longer operating capacities, and improved productivity and efficiency must be balanced against limited resources, scant and sometimes unknown data, the identification and resolution of conflicts and problems, and resource allocation [1]. Consideration of these tradeoffs dictates the need for an integrated and systematic methodology that can identify potential problem areas and assess system effectiveness during all phases of the system’s life cycle. This analytical framework must also support decision-making between alternatives and options while assessing the consequences of such decisions.

In the current world military environment, system effectiveness takes on a new meaning. In the past, military aircraft design has been characterized by an emphasis on designing for optimum performance. Aircraft success was defined in terms of the aircraft’s ability to perform at least as well as the requirements to which it was designed, effectively ignoring adaptability to rapidly changing threat environments. Performance was characterized by such attributes as speed, payload capacity, etc. Recent imperatives,
however, have shifted the emphasis from performance to overall system effectiveness as a key measure of merit for the aircraft. Today, system effectiveness must not focus only on the aircraft’s performance, but instead on its ability to satisfactorily complete its mission, against a wide variety of threats and situations, at an affordable life cycle cost. Aircraft survivability is a key metric that contributes to the overall system effectiveness of military aircraft as well as to a lower life cycle cost. As shown by Volpe [2], linear changes in survivability produce exponential changes in force effectiveness. In addition, a more survivable aircraft is both safer and more reliable (in peacetime operations) and thus reduces life cycle cost. Survivability, defined as the capability of an aircraft to avoid and/or withstand a man-made hostile environment, is a function of both susceptibility (denying the target and degrading the threat) and vulnerability (the inability of the aircraft to withstand the damage caused by the hostile environment) [3]. Susceptibility and vulnerability, in turn, are functions of characteristics such as shape, performance, agility, stealth, mission planning, mission requirements, and threat environment. Because these characteristics are themselves functions of basic design parameters and requirements, as are the other more traditional design disciplines, it becomes both necessary and cost-effective to consider survivability as a design discipline in its own right, allowing tradeoffs to occur at the preliminary design stage, thus optimizing the aircraft to an overall system effectiveness metric with a resulting reduction in design cycle time. The aircraft designer, therefore, must have a complete and thorough understanding of the interrelationships between the components of survivability and the
other traditional disciplines as well as how they affect the overall life cycle cost of the aircraft. If this understanding occurs, the designer can then evaluate which components and technologies will create the most robust aircraft system with the best system effectiveness at the lowest cost. Thus, there exists a need for an integrated and efficient framework that can rapidly assess system effectiveness and technology tradeoffs for today’s complex systems.
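In the probabilistic notation defined in the nomenclature, the susceptibility/vulnerability decomposition described above is commonly written (this is the standard formulation of the survivability literature, e.g. [3], rather than an equation unique to this work) as

P_s = 1 - P_k = 1 - P_h \cdot P_{k/h}

where P_h, the probability that the aircraft is hit, is driven by its susceptibility, and P_{k/h}, the probability of kill given a hit, is driven by its vulnerability.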
Need for a Military System Effectiveness Framework
The Changing Economy and its Effect on Decision Making

In recent years, the world has been changing at a remarkable pace. A revolutionary new economy has arisen, one based on knowledge rather than conventional raw materials and physical labor [4]. With this new economy comes new emphasis on technology and its impact, especially in the warfighting environment. Almost all of the world’s countries spend a significant amount of their budgets on the research, development, and procurement of increasingly sophisticated weapons and warfare technologies [5]. This is necessary because countries need to maintain or enhance their military capabilities in order to preserve their supremacy over their adversaries. In addition, strong and capable military capabilities serve as a deterrent to other countries that might otherwise turn aggressive. However, the high cost of maintaining these capabilities must be balanced against limited resources. Former U.S. Secretary of Defense Dick Cheney is credited with the statement “budget drives strategy,
strategy doesn’t drive budget” [4]. Military decision makers need to understand and assess the benefits and consequences of their decisions in order to make cost-efficient, timely, and successful choices. Along with changes in the world’s economy come changes in the way war is fought.
Substantial progress has been made in both weapon lethality and military
technology. In addition, the battlefield of today has become increasingly complex, with interactions and their consequences becoming more and more difficult to isolate and understand. Because of the rapid advance of these developments, decision makers are often left with ambiguous information and relatively short time spans in which to conduct analysis. Often, these changes occur so rapidly that previous analysis is rendered obsolete. For example, an aircraft designed around a particular avionics suite will often find that those avionics are obsolete by the time the aircraft enters production. The inherent uncertainty in this information makes definitive analysis difficult and implies that the use of scenario-based probabilistic methods to understand and interpret this information is most appropriate. Overall, military decision makers need to be able to rapidly and efficiently answer questions such as those raised by Jaiswal [5]:

- What is the effectiveness of a weapon system or tactical plan in a plausible combat scenario?
- If the various factors influencing the performance of a system can be expressed qualitatively, can the performance be quantified?
- What force mix should be deployed for a specified mission?
- How many types of weapons should be deployed on various sites to provide cost-effective defense?
- How should weapons be assigned to targets to achieve a specified objective?
- Who is likely to win?

These questions all point to the need for a military analysis capability that takes place at the theater level. Decision makers must be able to take rapidly changing information and technologies and combine them with projected situations in order to make decisions and understand their consequences.
The Link to System Effectiveness

What these decision makers are looking for is a quantification of system effectiveness. In this case, the system of interest is the warfighting theater or campaign. The primary tool of today’s military decision makers is the campaign analysis environment. These environments are modeling tools in the form of computer codes that model force-on-force engagements. They are often quite complex and vary in their abilities to capture different aspects of the warfighting environment. It is common for campaign analysis tools to have detailed primary force models with only rudimentary modeling of secondary forces. For example, an Army code may have complex and sophisticated models for ground troop movement and support vehicle logistics, but a relatively simplistic air campaign model, or even no air campaign model at all. True joint force models (models that capture all aspects of the warfighting environment with an equal level of analysis) are relatively few.
Measures of Effectiveness

The outputs of these tools are system effectiveness quantifiers, or Measures of Effectiveness (MoEs). An MoE is a metric used to indicate the quality of a system [1]. It may be a directly measurable quantity, it may be calculated from other output parameters, or it may take the form of a weighted combination of several other metrics. These metrics often consist of final calculations of damage done or resources used. The following are some typical examples of campaign level MoEs:

- Number of Red aircraft shot down by Blue aircraft
- Number of damaged runways
- Distance in kilometers to halt Red advance
- Number of returning aircraft from a specific mission

Each theater or campaign tool provides either its own set of hardwired MoEs or enough output data for the user to create his own system effectiveness metrics (or both). It is through the shrewd choice of these metrics that the decision maker links the MoEs to the answers to questions such as those posed above.

Use of System Effectiveness Metrics

There are three primary ways that decision makers utilize system effectiveness information: resource allocation, requirements definition, and system component trade studies.
Resource Allocation

Most countries, when considering their military wants and needs, must deal with limited and often strict budgets. Different government agencies, often with competing agendas, must all vie for a finite set of resources. In addition, these agencies will often make decisions in isolation, negating the chance for potentially mutually beneficial, and cost effective, decisions.
Deciding how to allocate precious funds and resources,
therefore, becomes a key issue. System effectiveness concepts, when applied to the theater level, give the decision makers a way to link dollars to campaign level metrics. Comparisons may be made between dissimilar components of the system. For example, there may be a need to assess whether additional resources should be supplied to a missile program or an aircraft program. Straight one-on-one comparison of these two types of vehicles may be difficult because of their inherently different capabilities and performance. But when placed in the context of the overall system (the warfighting environment), their individual (or even combined!) effect on the overall system effectiveness metrics can be assessed and appropriate decisions made.
Requirements Definition

Another way system effectiveness metrics aid the decision maker is in the development of requirements for system components.
Given the performance and
capabilities of a system component, a campaign analysis tool can use that information to assess the effect of that component. But this assessment capability may be turned around. By varying the capabilities and performance characteristics of a notional system
component, the optimal settings of these characteristics can be obtained that maximize system effectiveness. An aircraft may be used as an example. The question to be considered may be: what is the optimal aircraft strike speed needed to obtain a specific campaign objective? A notional aircraft is modeled and the strike speed allowed to vary in the campaign analysis until the selected MoEs reach their optimal value(s). Now the ideal strike speed is known for that class of vehicle. This information can be used to define a design goal for future aircraft, or it may be used to assess the potential of modifying existing aircraft to achieve the new strike speed. In this way, complete requirements for new system components and new technologies may be developed. Finally, sensitivities of the values of specific requirements may be assessed. This can be tremendously useful information: can a difficult requirement be relaxed, allowing cost savings or trade-offs between other characteristics, at an insignificant or acceptable reduction of overall system effectiveness?
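A minimal sketch of the strike-speed exploration described above is given below (Python). The surrogate MoE function, the speed range, and the step size are hypothetical stand-ins for repeated campaign-code runs, not results from this work.

# Hypothetical sweep of a notional strike speed (knots) against a surrogate MoE.
# In practice each evaluation would be a full campaign analysis, not a closed-form function.
def surrogate_moe(strike_speed_kts):
    # Placeholder response that peaks near 520 kts, purely for illustration
    return 1.0 - ((strike_speed_kts - 520.0) / 200.0) ** 2

best_speed, best_moe = None, float("-inf")
for speed in range(400, 701, 10):
    moe = surrogate_moe(speed)
    if moe > best_moe:
        best_speed, best_moe = speed, moe

print(f"Notional optimal strike speed: {best_speed} kts (surrogate MoE = {best_moe:.3f})")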
Trade Studies Between System Components

Finally, system effectiveness metrics can be used to assess the differing values and effects of system and sub-system components. As mentioned earlier, it is often difficult to compare and contrast dissimilar sub-systems. By placing those sub-systems in a larger framework (or system), the changes they effect in the top-level metrics may be observed and quantified. For example, say it was of interest to consider which of two avionics packages would be better to use on an existing aircraft. Analyzing changes in individual aircraft performance with each of the avionics packages could be difficult or
indistinguishable. But if the aircraft, with the avionics packages, were placed as system components in the theater, the effect of the avionics packages could be assessed. In this case the avionics packages were allowed to fulfill their intended function within the larger system, and thus their effects more easily quantified.
Lack of Overall System Effectiveness Methodology

Given the power of a system effectiveness view of the modern warfighting environment, coupled with its usefulness in decision making, it is surprising to find a lack of cohesive and accepted methodologies for addressing campaign level system effectiveness in the open literature. To be sure, there are a multitude of campaign level modeling tools, and the creation, use, and improvement of these tools is a flourishing endeavor [6]. In addition, many decision makers and analysts use these tools in their own individual way.
But finding information specifically detailing overall
methodologies is difficult. There are several possible reasons for this lack of obvious resources. These reasons are detailed below.

Semantics and a Surplus of Synonyms

In order to formulate a systems effectiveness framework, it is important to understand and clearly define the concepts of both “system” and “system effectiveness”. There is general agreement across fields and disciplines as to what constitutes a system. The following definition is representative of this agreement, and is an acceptable definition for the developing framework:
A system may be considered as constituting a nucleus of elements combined in such a manner as to accomplish a function in response to an identified need…A system must have a functional purpose, may include a mix of products and processes, and may be contained within some form of hierarchy…[7]
However, the definitions of system effectiveness vary widely and are often application dependent. Some examples that illustrate the diversity of these definitions include:
"The overall capability of a system to accomplish its intended mission" [11]
"The probability that the system can successfully meet an operational demand within a given time when operated under specified conditions" [8]
"A measure of the degree to which an item can be expected to achieve a set of specific mission requirements, and which may be expressed as a function of availability, dependability and capability" [9]
The authors of a 1980 annotated bibliography on system effectiveness models concluded that "A wide range of definitions, and measures of system effectiveness are used without strong guiding logic" [11]. The term "system effectiveness", and the concept it represents, first appeared in the 1950s and 1960s [10,11]. However, these early formulations of system effectiveness were defined primarily as functions of the "-ilities": reliability, availability, repairability, and maintainability. As such, the system effectiveness concept was applied to a single component or tool that itself was defined as the system. For example, a missile would be defined as the system, and its system effectiveness assessed based on its availability, reliability, etc. While this was a revolutionary concept at the time, these
definitions are not as useful if the theater itself is considered the system. Each component of the system may be assessed by its "-ilities", but these "-ilities" are inadequate to serve solely as the theater level Measures of Effectiveness. These pioneering definitions are still in use today to some extent [10,1], making research specifically on campaign analysis system effectiveness difficult to isolate. Figure 1 shows one way in which the "-ilities" are incorporated into the effectiveness of a weapons system, in this case an aircraft [12]. Each of the effectiveness components, survivability, readiness, capability, and dependability, is further subdivided into those weapons features that determine that particular characteristic. These features are all characteristics of that particular weapons system. These features are then balanced against the various costs associated with that weapons system, and a balance, or tradeoff, is conducted, optimizing the features with respect to cost. In this way an overall operational effectiveness is computed as a weighted sum of each of the individual features. Finally, "system effectiveness" holds different meanings for different communities and applications. Some organizations tailor their definitions and methods to apply to very specific problems [11]. A representative of the Potomac Institute for Policy Studies offers that the difficulty in finding information on system effectiveness lies in the broad connotations of the term: "System Effectiveness has many different "branches", primarily based upon the application area, e.g., military system effectiveness, policy analysis, information system effectiveness, reliability analysis, etc. Within each application area there are multiple areas for consideration, e.g., in policy analysis there is the study of health care
reform and its affect (sic) on society; the effect of transportation policies on a metropolitan area, etc.” [13]
[Figure 1 decomposes weapons system operational effectiveness into survivability (susceptibility, vulnerability), readiness (maintainability, inherent availability), capability (performance, maneuverability, safety, reliability, lethality), and dependability (reliability, logistic support, maintenance defects, design defects, operations), balanced against acquisition, operation, maintenance, replacement, training, and RDT&E costs, with Effectiveness = k1(Capability) + k2(Survivability) + k3(Readiness) + k4(Dependability) + k5(Life Cycle Cost).]
Figure 1 – Weapons System Effectiveness Example
In addition, "system effectiveness" is often synonymous with other concepts, such as "operations research" and "systems analysis". However, even these other concepts encompass a huge array of specific analysis approaches and definitions, and locating the unique niche of military system effectiveness is difficult. For example, Reference 5 is a very recent (1997) state-of-the-art book on Military Operations Research. This book uses the words "system effectiveness" only once in a brief, passing note.
Similarly,
Kececioglu [10] in his 1995 book devotes only one small section to system effectiveness
and defines it again in terms of mission reliability, operational readiness, and design adequacy, which is again difficult to apply to the theater. A new, consistent definition for system effectiveness, therefore, is necessary and must be justified by identifying key elements crucial to a useful and informative definition. First, the term "effectiveness" implies that some sort of quantification needs to occur. This quantification must necessarily be the result of some sort of systematic analysis of variables and metrics that represent the system performing its function. In addition, in order to perform the quantification, an intended or expected effect needs to be identified in order to properly model the results of the system performance. Combined, these concepts result in the following definition put forth by the author for use in formulating the framework for the probabilistic assessment of system effectiveness:
System effectiveness is a quantification, represented by system level metrics, of the intended or expected effect of a system achieved through functional analysis.
This definition will be used for the remainder of this dissertation and subsequent research. Another confusion arises when there is a lack of distinction between the modeling tools and the methodologies that use the tools. Research that asks the question "What is the current state of the art in system effectiveness methodologies?" often turns up only the codes that can be used in such methods. A true methodology should be a freestanding framework that is relatively independent of the tools it utilizes. As the tools improve in fidelity, they should be able to be substituted into the methodology with little or no
interruption. Because the answer to this question usually results in a listing of modeling codes rather than methods or frameworks, an inherent lack of such methodologies is indicated.
Difficulty Assessing Government and Classified Material
Originally, system effectiveness studies were confined to military and space systems. Agencies of the US Government, such as the Department of Defense and the National Aeronautics and Space Administration (NASA), were the ultimate customers. Because of this, the available literature on system effectiveness and the accompanying models were published primarily as technical reports, and rarely appeared in widely circulated journals [11].
Today’s analysts appear to have new interest in system
effectiveness studies using campaign modeling, especially in the area of technology infusions. However, much of this work is classified or proprietary, limiting accessible publications and information. Finally, those non-government agencies that do make advances in theater modeling and system effectiveness may find it necessary to keep their in-house methods proprietary in order to retain their competitive edge. Because of these restrictions, some fundamental contributions to this field do not appear in this body of research.
System of Systems Approach
In order to successfully formulate a system effectiveness methodology, it is imperative to clearly define the system and its components. The preceding sections
discussed the benefits to the decision maker of considering system effectiveness at the theater or campaign level. This endpoint represents an expanding progression of what is considered the system. The resulting “system of systems” formulation is a key concept in the development of the proposed methodology.
A Paradigm Shift
In traditional design, most design decisions are made relatively early in the process, when the designer (or design team) has the least available knowledge about the proposed new system. Design decisions lock in financial commitments, so the bulk of the cost is committed early in the design process. As these decisions are made, design freedom falls off rapidly (Figure 2).
A paradigm shift, founded on the notion of
Integrated Product and Process Design (IPPD), is now widely accepted. IPPD seeks to bring more knowledge about the system life cycle to an earlier stage of the design process, in an attempt to delay cost commitments and also keep design freedom open [14]. In other words, the designer needs to understand and quantify the implications of her/his decisions earlier in the design process in order to effectively reduce cost.
Figure 2- Paradigm Shift: Bringing Knowledge Forward in Design Process
In addition, there is a parallel paradigm shift that considers what the measure of “goodness” is for a system. Traditionally, differing designs would be compared based on their performance. For example, the questions that would mark the “goodness” of an aircraft would be of the sort:
How fast does it fly? How far can it fly? How much payload can it support?
All comparisons between competing designs would be based on performance. The new paradigm shifts this emphasis from individual system performance to system effectiveness (Figure 3). For an aircraft, this effectiveness would be illustrated by the answers to such questions as:
What is the exchange ratio? What is the damage per sortie? What are the maintenance hours per flight hour?
Together, these two paradigm shifts represent a broadening view of the design process, expanding the ideas and concepts from detailed particulars to a "big picture" representation. This momentum will be carried forward, further expanding these basic concepts, to result in a system of systems depiction.
[Figure 3 contrasts quality based on performance (how high, how far, how fast, shown on altitude and Mach axes) with quality based on effectiveness (exchange ratio, damage per sortie).]
Figure 3- Paradigm Shift: From Performance Based Quality to Effectiveness Based Quality
The Theater as the System
Using the traditional definitions, one can categorize an aerospace concept, such as an aircraft, as the system. Design and analysis conducted on the aircraft will result in system level Measures of Effectiveness. System effectiveness, therefore, becomes a function only of that aircraft's design variables and parameters. The relationship between the aircraft's input design parameters and its outputs (called responses, or MoEs) is illustrated in Figure 4.
[Figure 4 shows the aircraft treated as the system: top level requirements, vehicle/design and economic variables, and technology K-factors feed the vehicle level metrics/objectives, whose responses, subject to constraints, are the MoEs; the relationships between inputs and outputs define the system.]
Figure 4- The Aircraft as the System
While this method of analysis can result in the design of a vehicle that is optimized to its own mission and performance requirements, the vehicle remains independent of the role for which it was created. In other words, the aircraft is never placed in its correct context and evaluated as a system fulfilling its intended function. In order to place the aircraft in its correct context, the system must be expanded and redefined. No longer is the aircraft the sole system; rather let the aircraft's intended environment become the system. For a military aircraft, this new, larger system is the warfighting environment: the theater. Thus, the theater (system) becomes a function of its components (systems in their own right, yet sub-systems here) and the overall formulation becomes a "system of systems". There is, however, a missing level in this formulation. The outputs of the vehicle level (performance parameters) do not usually map directly as inputs to theater level modeling codes. Rather, the inputs at the theater level usually consist of probability of kill values, or effectiveness values that are the result of component vs. component encounters. There must be an intermediary mapping that takes the output of the vehicle level as its inputs, and in turn generates outputs that serve as inputs to the theater level. This concept is illustrated in Figure 5.
With this formulation comes a necessary
redefinition of output parameters, solely for clarity. The output responses of all sublevel analysis will be called Measures of Performance (MoPs) and the output of the top level system (in this case, the theater) will be called Measures of Effectiveness.
Thus,
referring to Figure 5, theater level MoEs are functions of vectors of subsystem MoPs (at
the engagement level) which are in turn functions of the requirements, design and economic variables, and technology factors associated with the vehicle level inputs. When the methodology is complete, there will exist a continuous mapping between vehicle level design parameters and theater level Measures of Effectiveness. Changes at the vehicle level can thus be propagated all the way to the theater level. Instead of optimizing an aircraft, for example, to its own pre-defined performance and mission constraints, the aircraft can now be optimized to fulfill theater level goals and objectives. In addition, as more system level components are treated as input variables, tradeoffs can be established not only at the individual component level, but across the components. In other words, the methodology will allow tradeoffs between, say, the effectiveness of a surface-launched cruise missile and that of an aircraft carrying a specified weapons load. Tradeoffs could also be made between the number of system components needed: two of aircraft "A" could produce the same effectiveness as five of aircraft "B", but at less cost. Thus, the methodology becomes a key device for design decisions as well as resource allocation. Finally, the completed methodology can be used to actually determine the mission and design requirements for the vehicles themselves that comprise the system. By using the Measures of Effectiveness at the theater level as a measure of goodness, tradeoffs can be made between vehicle design and mission requirements. These requirements, when optimized to maximize the overall effectiveness of the system, become the requirements to which the vehicles are then designed.
[Figure 5 shows the system of systems formulation: top level requirements, vehicle design/economic variables, and technology k-factors feed the vehicle level, whose responses, MoP = fn(Xreq, Xdesign/econ, Xtech factors), become the variables for the mission level; the mission level MoEs in turn become MoPs for the theater level system, where MoE = fn(MoP1, MoP2, MoP3, etc).]
Figure 5- System of Systems Formulation
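To make the mapping of Figure 5 concrete, the following minimal sketch (in Python) composes a hypothetical vehicle-level function that produces MoPs with a hypothetical theater-level function that produces an MoE. All function names, variables, and coefficients are invented for illustration; they are not the actual models or metrics used in this research.

import numpy as np

# Minimal sketch of the Figure 5 hierarchy: vehicle-level inputs map to
# Measures of Performance (MoPs), which in turn map to a theater-level
# Measure of Effectiveness (MoE).  All functions and numbers here are
# placeholders, not the models used in this research.

def vehicle_level(design_vars):
    """Map notional vehicle design variables to engagement-level MoPs."""
    speed, payload, signature = design_vars
    # Hypothetical MoPs: probability of kill and probability of survival
    p_kill = min(1.0, 0.4 + 0.002 * payload)
    p_survive = max(0.0, 0.95 - 0.3 * signature)
    return np.array([p_kill, p_survive])

def theater_level(mops):
    """Map engagement-level MoPs to a theater-level MoE."""
    p_kill, p_survive = mops
    # Hypothetical MoE: expected targets destroyed per aircraft lost
    return p_kill / max(1e-6, 1.0 - p_survive)

mops = vehicle_level([0.9, 100.0, 0.1])   # notional strike speed, payload, signature
moe = theater_level(mops)                 # change a design variable and the MoE responds
print(mops, moe)

Composing the levels this way is what allows a change at the vehicle level to be propagated directly to the theater level metric, which is the point of the formulation.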
A note must be made at this point concerning system decomposition. The first system to be put forth was the aircraft itself. It becomes obvious, in the light of the previous discussion, that the aircraft itself is a system of systems. The aircraft is made up of a variety of major components, which can be seen as sub-systems. In turn, each of these subsystems can be seen to be functions of their components, so they, too, are each a system of systems. Going in the other direction, the engagement level can be seen as a function of not just one aircraft vs. aircraft scenario, but must be comprised of many differing engagements in order to generate a complete set of information for the next system in the hierarchy: the theater. System decomposition, therefore, can be understood to have a pyramid shape, with each level in the decomposition being subdivided into its components, which are in turn subdivided. The question becomes, then, where does one stop? How much subdividing is necessary, and how many levels are needed? The
answer to this depends on the definition of the problem that is being studied and the tools that are available. This leads to the idea of the “conceptual model”, which is discussed more thoroughly in Chapter III. It is up to the skill and experience of the designer or decision maker to accurately and adequately bound the problem and define the system and its components effectively.
Mathematical Modeling
Once the system and its components have been clearly identified, an analysis environment must be created. The key word in the definition of system effectiveness is "quantification". In order for the decision maker or designer to analyze the system effectively, the results of the analysis must be presented as quantifiable metrics. This involves restating a research goal or design decision into a question that can be answered quantitatively. Dixon [15] states this explicitly: "An engineering analyst must begin by defining quantitatively answerable questions".
Mathematical methods, thus, become
primary tools in system analysis because of their ability to rapidly provide these calculable (quantifiable) metrics. In addition, mathematical modeling allows the user to understand and make informed decisions at various levels within the system hierarchy. With the “system of systems” concept comes an appreciation of the potential complexities and interactions involved.
Mathematical modeling offers significant benefits: “There are many
interrelated elements that must be integrated as a system and not treated on an individual basis. The mathematical model makes it possible to deal with the problem as an entity
and allows consideration of all major variables of the problem on a simultaneous basis" [14].
Use of Probability Theory
The paradigm shift of Figure 2 makes the argument that bringing knowledge forward in time results in better decision making. However, it must be recognized that this knowledge has an associated uncertainty with it. This lack of certain knowledge could be based on missing, unavailable, or incomplete information, the incorporation of a new technology as a function of its readiness level, or even an uncertainty in the modeling tools used in the analysis. The question becomes how to accommodate this uncertainty into the mathematical modeling and subsequent analysis. The answer to this is to incorporate basic probabilistic elements into both the modeling and the analysis, and, by extrapolation, the overall system effectiveness methodology. Understanding the sources of the uncertainty helps determine why a probabilistic approach is useful. Referring back to the "system of systems" hierarchy, it is clear that each subsystem level will have its own inputs. Perfect knowledge about these inputs is rare, and it is often the case that the designer or decision maker must make assumptions based on available data and personal experience. Using probabilistic inputs would allow the user to account for variation in these assumptions. Analysis based on these probabilistic inputs could provide useful information about the sensitivities of the inputs, which in turn could be translated into requirements definitions. By allowing the inputs to vary, the designer or decision maker could play "what if" games, using the models as a computationally and economically inexpensive way to explore the boundaries of the problem. And finally,
variable inputs would allow an investigation of the robustness of a solution (i.e. that solution whose performance parameters are invariant or relatively invariant to changes in its environment). Another major source of uncertainty can be found when considering the incorporation of a new technology. Modeling current technologies is straightforward, with the performance parameters of that technology generally known. However, current technologies may not be capable of meeting customer needs or design goals. In addition, current technology may be obsolete by the time the system is implemented.
This
necessitates a prediction capability concerning the impact of new technologies. Performance of a new technology is a function of its readiness level, but that function may or may not be completely defined. By modeling a new technology in a probabilistic fashion, one can explore various assumptions pertaining to the performance and the corresponding effects of that technology. In addition, knowledge gained through the probabilistic analysis of a new technology could be used to decrease the technology’s development time. Overall, the presence of uncertainty in most complex systems points to the use of probabilistic elements. Coupled with a mathematical modeling capability, an analysis environment can be created for incorporation into a system of systems effectiveness methodology.
Research Questions
In summary, there exists a need for an integrated and systematic methodology that can assess military system effectiveness. This methodology will be used to aid the decision maker in resource allocation, trade studies between system components, and requirements definition.
The inherent presence of uncertainty in such a process
necessitates the use of a probabilistic analysis environment, facilitated through the use of mathematical modeling. These needs give rise to specific research questions:
1) What are the needed elements of a systems effectiveness methodology?
2) How does one define the system and its components?
3) How does one link design level metrics to system level Measures of Effectiveness?
4) Can a methodology be formulated and used to analyze the effect of incorporating survivability concepts at the vehicle level, yet assess them at the theater level?
5) How can uncertainties, such as unknown or incomplete data, rapidly changing technologies, and uncertainties in the modeling
environment be taken into account in a system effectiveness methodology?
The remainder of this thesis outlines the tools and approaches needed to answer these questions.
Once the methodology has been formulated, a proof of concept
implementation is presented to clarify the various steps and concepts of the proposed method. In addition, the proof of concept is used to identify any issues that were overlooked in the formulation process.
CHAPTER II
INCORPORATING UNCERTAINTY: THE RESPONSE SURFACE METHOD AND MONTE CARLO ANALYSIS
As discussed in the previous chapter, the modeling and inputs of the complete system will necessarily contain uncertainties. These uncertainties can be functions of data availability, the incorporation of new technologies, modeling deficiencies, and the modeling inputs themselves, such as an uncertain threat environment or the exploration of alternate scenarios.
In order to incorporate these uncertainties into the system
effectiveness methodology, an extrapolation of the basic response surface methodology coupled with a Monte Carlo analysis technique will be used. The following sections summarize the basic concepts of the response surface method, and illustrate how this collection of tools and techniques may be used to form a foundation for the incorporation of uncertainty into the system of systems effectiveness methodology.
Sizing and Synthesis
At the vehicle level, system design is facilitated through the use of multidisciplinary sizing and synthesis techniques. Synthesis is defined as "the combining of separate elements or substances to form a coherent whole" [16]. In vehicle design, a system can be decomposed, often by discipline, in order for individual analysis to be performed. The recomposition of these elements into an integrated whole is known as
synthesis. Although synthesis can be achieved through a number of differing algorithms, for aerospace vehicles synthesis is usually accomplished in conjunction with a sizing routine. Sizing is often a physics-based process in which the geometric configuration of a vehicle is "flown" through a specified design mission. At each point in the mission, fuel burn is calculated and is ultimately matched to the fuel available to the configuration. This fuel balance is conducted simultaneously with an engine scaling, which matches drag produced against thrust available (thrust balance). This iterative process results primarily in a vehicle gross weight, wing area, and corresponding sized (scaled) thrust, but results may also include component weights, performance data, disciplinary outputs, and economics. Synthesis and sizing is most often accomplished through the use of computer codes. These codes use algorithms that can be based on the physics of the problem, on historical databases and extrapolations, or on mathematical representations of empirical relationships. The inherent sources of uncertainty in such models are clear. In addition, uncertainty can be introduced into the sizing and synthesis process through the incorporation of new technologies, which may or may not be represented appropriately in an analytic algorithm. Finally, the inputs themselves may be varied, introducing uncertainty, with the goal of determining the design sensitivities to these changing inputs.
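As a rough illustration of the fuel-balance iteration described above, the following sketch converges a notional gross weight until empty weight, payload, and mission fuel are in balance. The weight fractions and numbers are assumptions chosen only to show the iteration; a real sizing code computes them from geometry, aerodynamics, propulsion, and the design mission.

# Minimal sketch of a fuel-balance sizing iteration: the vehicle gross
# weight is iterated until the fuel required for the design mission matches
# the fuel available.  The weight fractions are invented for illustration.

def size_vehicle(payload_lb, empty_weight_fraction=0.55, mission_fuel_fraction=0.30,
                 tol=1.0, max_iter=100):
    gross_weight = 15000.0                       # initial guess, lb
    for _ in range(max_iter):
        fuel_required = mission_fuel_fraction * gross_weight
        empty_weight = empty_weight_fraction * gross_weight
        new_gross = empty_weight + payload_lb + fuel_required
        if abs(new_gross - gross_weight) < tol:  # fuel balance converged
            return new_gross
        gross_weight = new_gross
    raise RuntimeError("sizing iteration did not converge")

print(size_vehicle(payload_lb=3000.0))           # roughly 20,000 lb for these fractions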
Metamodels
Synthesis and sizing tools are used to analyze and simulate an aerospace vehicle concept. Through the use of these tools, design knowledge is brought into the process
sooner.
This is the goal of the paradigm shift discussed earlier.
In addition, the
multidisciplinary nature of these tools allows the designer to conduct an overall system analysis of the concept.
The drawback of these tools, however, stems from their
complexity. Each of the decomposed elements of the analysis environment contains its own set of algorithms, databases, and relationships. These individual elements are then combined in the synthesis process, which usually contains an overall algorithm that ties the pieces together and calls the separate analyses as they are needed. The result is often a tool that is useful in its capabilities but is expensive in terms of computational efficiency and run times. As the fidelity of the synthesis components increase, the expense increases likewise. One option is to use lower fidelity models that run more quickly, and sacrifice the accuracy. However, an alternative approach is to represent the higher fidelity analyses using approximation techniques. These representations are called metamodels. A metamodel is, in essence, a model of a model. It is formulated so that it captures most, if not all, significant effects of the more complex model it is emulating. There are several methods used to create metamodels, such as neural networks, statistical techniques, universal kriging, interpolative procedures, and Gaussian processes. While each of these methods has its own advantages and drawbacks, the method that will be used as a foundation for the system of systems effectiveness methodology will be response surface methodology, which is a subset of the statistical techniques.
Response Surface Methodology
The response surface methodology (RSM) is an efficient, multivariate approach to modeling that defines clear cause-and-effect relationships between design variables and system responses. The technique has been used extensively since the 1950s for a variety of applications in areas such as engineering, agriculture, and chemistry [17,18,19]. Originally, RSM was used in situations where the only way to establish a relationship between inputs and responses was through the use of costly and complex experiments. This technique allows the maximum amount of information to be gained from the fewest number of experiment executions, and thus provides trade study results in a more cost-effective manner. RSM is based on a statistical approach to building and rapidly assessing empirical models [19,20].
In general, in order to thoroughly establish a cause-and-effect
relationship between given system variables and system responses, there must exist a complete set of knowledge about the system.
Because this complete knowledge is
difficult (and often impossible) to obtain and identify, this knowledge can be approximated with an empirically-generated deterministic relationship.
The RSM
methodology, employing a Design of Experiments (DoE) strategy, aids in this by selecting a subset of combinations of variables to run experimentally which will guarantee orthogonality (i.e. the independence of the various design variables) and will allow for the creation of a statistically representative model.
Response Surface Equations
The primary representation of the metamodel generated using RSM is a polynomial formulation known as a response surface equation (RSE). Although a variety of forms of the equation may be used, the most common is a quadratic polynomial that may be viewed as a truncated Taylor series approximation. The form of this type of RSE is:

R = b_0 + \sum_{i=1}^{k} b_i x_i + \sum_{i=1}^{k} b_{ii} x_i^2 + \sum_{i=1}^{k-1} \sum_{j=i+1}^{k} b_{ij} x_i x_j + \varepsilon

where
R is the desired response term
b0 is the intercept term
bi are regression coefficients for the first order terms
bii are coefficients for the pure quadratic terms
bij are the coefficients for the cross-product terms
xi, xj are the independent variables
k is the total number of variables considered
ε is the error term
When using RSM to create a metamodel of a computer code, the error ε can be assumed to be zero due to the repeatability of the experiment. This also changes, slightly, the form of the Design of Experiments discussed below. If the non-linearities of the problem are not sufficiently captured using this form of the equation, then transformations of the independent and dependent variables can be found and alternate forms of the equation, such as logarithmic or exponential, may be used.
The simplicity of the resulting RSE makes its attraction clear. The equation may be easily manipulated for optimization purposes. For probabilistic assessments, the RSE can be used in conjunction with a Monte Carlo analysis, providing significantly reduced run times over conducting a Monte Carlo analysis around the original code. Overall, the polynomial form of the RSE makes it a good choice for a metamodel.
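As a concrete illustration of the equation above, the sketch below evaluates a quadratic RSE in three normalized variables. The coefficient values are invented; in practice they come from the regression described in the next section.

import numpy as np

# Minimal sketch: evaluate a quadratic response surface equation
#   R = b0 + sum(bi*xi) + sum(bii*xi^2) + sum_{i<j}(bij*xi*xj)
# for k = 3 normalized variables.  Coefficients are invented purely to
# show the bookkeeping.

b0 = 7.0
b_lin = np.array([0.50, -0.20, 0.10])                     # bi
b_quad = np.array([0.05, 0.02, -0.01])                    # bii
b_cross = {(0, 1): 0.03, (0, 2): -0.02, (1, 2): 0.01}     # bij, i < j

def rse(x):
    x = np.asarray(x, dtype=float)
    r = b0 + b_lin @ x + b_quad @ x**2
    r += sum(b * x[i] * x[j] for (i, j), b in b_cross.items())
    return r

print(rse([0.0, 0.0, 0.0]))   # all variables at midpoint -> intercept b0
print(rse([1.0, -1.0, 0.5]))  # any point inside the fitted variable ranges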
Design of Experiments
In order to create the response surface equations used in response surface methodology, the Design of Experiments technique is used. The DoE is expressed as a table or matrix, specifying which values to use for each experimental run.
When
modeling a computer code, each case represents one execution of the code, with input variables (the factors) set to the values specified by the DoE table. The values are usually normalized to a high, low, or midpoint value of the variable to aid in the statistical analysis, and to avoid inappropriate weighting of the responses. An example DoE table is shown in Table 1. After each experiment or code execution takes place, the responses (or outputs) of interest are parsed and placed in the response column of the table. A least squares regression analysis is used to determine the coefficients of the response surface equation. An Analysis of Variance (ANOVA) is then conducted to determine the relative importance and significance of each of the variables. There will be one RSE for each response, and that response will be a function of all of the experimental factors.
Table 1- Example Design of Experiments Table

Experimental Case | Factor 1 | Factor 2 | Factor 3 | …Factor n | Response 1 (R1) | …Response n (Rn)
        1         |    -     |    -     |    0     |     -     |      R1-1       |      Rn-1
        2         |    -     |    +     |    0     |     +     |      R1-2       |      Rn-2
        3         |    +     |    -     |    0     |     0     |      R1-3       |      Rn-3
        4         |    +     |    +     |    0     |     +     |      R1-4       |      Rn-4
        5         |    0     |    -     |    -     |     0     |      R1-5       |      Rn-5
        6         |    0     |    -     |    +     |     0     |      R1-6       |      Rn-6
        7         |    0     |    +     |    -     |     -     |      R1-7       |      Rn-7
        8         |    0     |    +     |    +     |     -     |      R1-8       |      Rn-8
        9         |    -     |    0     |    -     |     +     |      R1-9       |      Rn-9
       10         |    +     |    0     |    -     |     -     |      R1-10      |      Rn-10
       11         |    -     |    0     |    +     |     0     |      R1-11      |      Rn-11
       12         |    +     |    0     |    +     |     -     |      R1-12      |      Rn-12
        …         |    …     |    …     |    …     |     …     |       …         |       …

Key (values are non-dimensionalized): - = minimum value, 0 = nominal (midpoint) value, + = maximum value
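The regression step described above can be sketched as follows: the normalized factor settings from a DoE table (here a notional three-factor design) form a design matrix whose columns are the intercept, linear, quadratic, and cross-product terms, and the RSE coefficients are obtained by least squares. The settings and responses shown are invented for illustration.

import numpy as np
from itertools import combinations

# Minimal sketch of the least-squares regression: normalized factor settings
# (-1, 0, +1) and the corresponding responses are assumed to have been parsed
# from the executed cases; the values below are invented.

X = np.array([[-1, -1,  0], [-1, +1,  0], [+1, -1,  0], [+1, +1,  0],
              [ 0, -1, -1], [ 0, -1, +1], [ 0, +1, -1], [ 0, +1, +1],
              [-1,  0, -1], [+1,  0, -1], [-1,  0, +1], [+1,  0, +1],
              [ 0,  0,  0]], dtype=float)
R = np.array([5.1, 4.8, 6.2, 6.0, 5.3, 5.5, 5.0, 5.2,
              4.9, 6.1, 5.2, 6.3, 5.5])

def design_matrix(X):
    cols = [np.ones(len(X))]                                 # intercept term b0
    cols += [X[:, i] for i in range(X.shape[1])]             # linear terms bi
    cols += [X[:, i]**2 for i in range(X.shape[1])]          # quadratic terms bii
    cols += [X[:, i]*X[:, j] for i, j in combinations(range(X.shape[1]), 2)]  # cross terms bij
    return np.column_stack(cols)

A = design_matrix(X)
coeffs, *_ = np.linalg.lstsq(A, R, rcond=None)               # RSE coefficients
print(coeffs)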
There are many different types of DoEs. To gain the most complete information, a full factorial set of experimental cases needs to be conducted. This arrangement tests all possible combinations of variables. Because the total number of combinations in a full factorial is exponential, as the number of variables increases, so does the number of cases that need to be run. For example, a twelve variable experiment would require 3^12, or 531,441, experimental runs. This quickly becomes impractical for larger numbers of variables. To reduce the number of runs required to create the metamodel, yet retain as much useful information as possible, other DoE designs are considered. These designs are called fractional factorials and are created to take into account the portion of a full factorial that is needed to represent certain effects of interest. Table 2 (recreated from
[19]) shows several common designs, along with the total number of experimental runs that need to be conducted.
These designs are discussed in detail in References
[19,20,21]. For most purposes, the face-centered central composite design (CCD) is considered to be an efficient trade between accuracy and computational cost [22].
Table 2- Several DoEs and Required Experimental Cases

Design of Experiments    | Equation       | Cases Needed (7 Variables) | Cases Needed (12 Variables)
Full Factorial           | 3^n            | 2,187                      | 531,441
Central Composite Design | 2^n + 2n + 1   | 143                        | 4,121
Box-Behnken              | n/a            | 62                         | 2,187
D-Optimal                | (n+1)(n+2)/2   | 36                         | 91
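The case counts in Table 2 can be reproduced with the simple arithmetic below. The formulas used are assumptions that match the tabulated values (3^n for the full factorial, 2^n + 2n + 1 for a face-centered CCD with a single center point, and (n+1)(n+2)/2 for the D-Optimal design); the cited references give the exact definitions.

# Quick check of the case counts in Table 2.  The formulas are assumptions
# chosen because they reproduce the tabulated values; they are not quoted
# from the references.

for n in (7, 12):
    full_factorial = 3**n
    ccd = 2**n + 2*n + 1
    d_optimal = (n + 1) * (n + 2) // 2
    print(n, full_factorial, ccd, d_optimal)
# 7  -> 2187, 143, 36
# 12 -> 531441, 4121, 91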
Using Response Surface Methodology
Response Surface Methodology may be employed in a systematic way to generate metamodels as well as other useful information gained in the Analysis of Variance. The steps and tools commonly used in RSM are described below.
Setting Up the Problem
The first step in the RSM process is identical to the first step in all analysis methodologies: the design and set up of the analysis. This involves a clear problem
definition or analysis goal. The inputs of interest must then be clearly identified and appropriate responses determined. These inputs and responses are first a function of the problem/goal, but the experimental environment must also be taken into account. Variables and metrics that represent the effects of interest must exist in the modeling environment. Once the input variables have been determined, variable ranges must be set to limit the scope of the problem. A high and low value of each variable will define the design space to be explored. Care must be taken that the ranges are large enough to capture the effects of interest, yet are confined enough to clearly isolate results.
Screening and Pareto Plots
One of the limitations of the RSM method is the number of variables for which DoEs exist. For CCD and Box-Behnken designs, the number of input variables can be software limited. Custom designs for more variables are available [23], but the creation of statistically sound models is not trivial. In addition, an increase in the number of variables brings with it the "curse of dimensionality": an increase in the number of experimental cases that must be conducted in order to create a viable metamodel. It is beneficial, therefore, to restrict the number of variables to as few as possible. One way of doing this is to conduct a screening test. A screening test utilizes a two level fractional factorial DoE, which tests the fit of a linear model. This fit only accounts for main variable effects, and no interactions are calculated. Once the screening DoE is complete, an Analysis of Variance is conducted, yielding a Pareto plot that clearly identifies the most statistically significant variables to
the response selected [19,20,21]. A Pareto plot is a bar chart that displays the frequency, relative frequency, and cumulative frequency of a set of variables to a specific response. For example, the Pareto plot shown in Figure 6 indicates that the X6 variable contributes most significantly to the variability of the response, while all variables below X3 are negligible. By using the Pareto plot, some variables may be eliminated from the higher order metamodel by noting the ones with little or no effect, and setting those variables to their nominal value. To choose the variables to eliminate, the Pareto Principle may be utilized.
The Pareto Principle, named after Vilfredo Pareto, a nineteenth century
economist and sociologist, was coined by Dr. Joseph Juran during his work on Total Quality Management. The principle states that a small number of causes is responsible for a large percentage of the effect, usually a 20% to 80% ratio [24]. Applied to RSM, this means that all of the variables that cumulatively affect 80% of the response should be kept and the rest of the variables discarded (set to their nominal values and no longer considered as variables). A new, higher order DoE using these variables should be conducted. In the example in Figure 6, this means that only three variables should be kept for higher order analysis: X6, X4, and X2. Choosing the variables that contribute the most to the variability of the response, using the Pareto Principle, completes the screening test of the overall RSM process. A new DoE is selected based on the remaining variables, and the experiments or computer runs are conducted accordingly. It should be noted here that a variety of statistical analysis packages, such as JMP [23], Minitab [25], and iSIGHT [26], are available to aid in the construction of the DoE, the ANOVA, and the subsequent analysis. Based on
ease of use, capabilities, and availability, the statistical package JMP was selected for use in this research.
[Figure 6 shows a Pareto plot of orthogonal estimates for variables X1 through X17, with a line marking 80% of the response: X6 (1738.9), X4 (1146.0), X2 (-1004.7), X5 (983.9), X1 (256.0), and X3 (86.5) dominate, while the remaining variables contribute little or nothing. The bars give the relative magnitude and direction of each individual effect, together with the cumulative effect.]
Figure 6- Example of a Pareto Plot
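A minimal sketch of the screening step, using the estimates shown in the Figure 6 example, is given below: the variables are ranked by the magnitude of their effects and those that cumulatively account for 80% of the total are retained, consistent with the Pareto Principle as applied in the text.

import numpy as np

# Rank variables by the magnitude of their estimated effects and keep those
# that cumulatively account for 80% of the total (Pareto Principle).  The
# estimates are taken from the Figure 6 example, rounded for brevity.

effects = {"X6": 1738.9, "X4": 1146.0, "X2": -1004.7, "X5": 983.9,
           "X1": 256.0, "X3": 86.5, "X10": -1.1, "X9": 0.4}

names = sorted(effects, key=lambda v: abs(effects[v]), reverse=True)
mags = np.array([abs(effects[v]) for v in names])
cumulative = np.cumsum(mags) / mags.sum()

keep = [v for v, c in zip(names, cumulative) if c <= 0.80]
if not keep:                      # always retain at least the largest effect
    keep = names[:1]
print(keep)                       # -> ['X6', 'X4', 'X2'], matching the text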
Prediction Profiles
A key feature of the JMP software is its Prediction Profiler. This innovative tool allows the user to see the side-by-side effects of all variables on all responses. In addition, the user may change the values of the input variables, within the range that is valid for the metamodel, and the responses update instantaneously. The tool may also be used for optimization. An example Prediction Profile is shown in Figure 7. The prediction curve shown in Figure 7 is a graphical representation of the response in which the variable of interest is changing while all other variables are held constant.
In this way, the prediction profile can show the sensitivity of the response to
that input variable. As the slope increases, so does the influence of that variable, and the direction of the slope indicates a positive or negative relationship. As previously noted, the DoE formulation normalizes the inputs so that their low, midpoint, and high values are shown as –1, 0, and 1 respectively. By moving the hairline, the variable value can be changed, and the corresponding value of the response is instantaneously calculated. Altogether, the prediction profile is a useful real-time tool that can be used by the designer to gain insight into the problem and also to seek optimal configurations.
[Figure 7 shows a prediction profile: prediction curves with error bars for variables X4, X5, and X6, a hairline that can be dragged along each curve to change the value of a variable, and readouts of the current value of each variable and of the response between its minimum and maximum values.]
Figure 7- Example of a Prediction Profile
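The behavior the Prediction Profiler displays can be mimicked with a simple one-variable sweep: hold all but one normalized variable at its nominal value and evaluate the RSE across that variable's range. The RSE coefficients below are invented for illustration and are not tied to Figure 7.

import numpy as np

# Minimal sketch of a prediction-profile sweep: vary one normalized variable
# across its fitted range while the others stay at their nominal (0) values.
# The RSE coefficients are invented.

def rse(x4, x5, x6):
    return 7.0 + 0.45*x4 - 0.30*x5 + 0.90*x6 + 0.05*x4*x6 - 0.10*x6**2

for x6 in np.linspace(-1.0, 1.0, 5):      # sweep X6, hold X4 = X5 = 0
    print(f"X6 = {x6:+.1f}  ->  response = {rse(0.0, 0.0, x6):.3f}")
# The slope of this sweep is the sensitivity of the response to X6.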
At this point in the process, the response surface equations are available as metamodels. JMP provides the coefficients of the resulting regression analysis, and these numbers can be exported easily and converted to a form useful to the designer. The next section details a specific methodology that uses RSM at its foundation.
Technology Impact Forecasting
The tools and methods described in the previous sections have been combined into several distinct analysis pathways. One of these pathways is called the Technology Impact Forecasting (TIF) methodology. This particular methodology is discussed here because its basic structure and features were used as an outline for the formulation of the proposed methodology. Although the methodology was originally designed specifically for the incorporation of new technologies onto an aerospace vehicle concept, the basic methodology can be expanded to include concepts that are not specifically technological in nature. TIF is a technique that generates an environment that allows the quantitative exploration of new technology concepts as applied to aerospace systems [27, 28]. Originally a stand-alone methodology, TIF was an evolutionary step in the creation of more sophisticated methodologies, first culminating in the Technology Identification/Evaluation/Selection (TIES) method [29, 30, 31], and then both being absorbed into the Concept Feasibility Assessment (COFA) framework [32]. These subsequent methodologies, however, go into great detail that specifically addresses all aspects of technology inclusion for a concept, and ensuing concept selection based primarily on that technology. The TIF methodology, on the other hand, concerns itself with the creation of a top level environment that can quickly describe the impact of certain technologies, based on k-factor concepts, Monte Carlo analysis, and technology scenarios. These are the specific features that will be incorporated into the proposed methodology. For the system of systems formulation envisioned, the detailed analysis of
TIES and COFA are not necessary and the basic tools and concepts of TIF suffice. These tools and concepts are described in the following sections.
Overall TIF Environment
The TIF methodology is presented in Figure 8. In order to start the TIF process, it is assumed that a baseline aerospace concept is given, and that this concept is realistic but not necessarily feasible (i.e. satisfying all design constraints) or economically viable (i.e. satisfying all economic targets). This baseline should be a representative configuration, usually before any candidate advanced technologies are added. It must have the ability to be easily modified, and care should be taken that the concept can still be sized with reasonable accuracy at the extremes of the variable ranges.
[Figure 8 depicts the TIF process: a baseline definition, the selected variables and responses, and a synthesis environment feed the creation of response surface equations of the form R = f(k1, k2, …), which are then exercised through Monte Carlo analysis under various technology scenarios to produce distributions of the responses R1, R2, R3, R4.]
Figure 8- Process to Create TIF Environment and Assess Technology Scenarios
The next step is the same as in basic RSM methodology: define the problem, including appropriate inputs and responses. Then, an appropriate synthesis and sizing
environment needs to be determined that adequately models and sizes the concept under consideration. Typically this environment will be a computer code that will have as inputs the geometric definition of the concept, aerodynamic, propulsion, and structural (weights) data, and a mission profile. The code must also be capable of resizing the concept subject to a list of performance related constraints (i.e. sizing points). An economics package must be included or linked to provide the necessary economic analysis. Ideally the code will be user-friendly and lend itself to quickly changing inputs and multiple runs. Normally a shell script is created to facilitate the multiple runs and changing variables. From these initial steps, the RSM methodology is applied as described earlier. Before continuing, however, the concept of k-factors must be introduced, as well as their effect on determining the inputs of the problem.
Technology Mapping and k-Factors
A new technology concept is characterized by ambiguity and uncertainty with regards to its performance, cost, etc. This uncertainty is a function of the technology's development status and is at its greatest in the early phases of the technology's development. In order to introduce these uncertainties into the model, variability must be added to each input variable. When applied to new technologies, the variability is introduced through the use of technology factors (in other words, disciplinary metric multipliers) referred to here as k-factors. A technology is mapped against a vector of k-factors representing primary and secondary effects. For example, Figure 9 shows an example shape distribution for the k-factor associated with wing
weight. This particular shape distribution would be appropriate for a technology that is expected to give a 7.5% decrease in wing weight, yet recognizes, through the use of a skewed distribution, that there is some chance of achieving either a greater or lesser change in wing weight. Other distribution shapes that may be used include a uniform distribution, used when each value is as likely as any other, or a normal distribution, used when there is an equal uncertainty on either side of a particular value.
[Figure 9 shows a skewed shape function for the wing weight k-factor, spanning roughly a 9.00% to 6.75% reduction in wing weight with its peak at a 7.50% reduction.]
Figure 9- Example k-factor Notional Shape Function for a Weight Reduction Technology Dial
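A shape function of the kind shown in Figure 9 can be sketched as follows; a triangular distribution is used here as a stand-in (the actual distribution type assigned by the designer may differ), with its mode at the expected 7.5% wing weight reduction and a skew toward larger reductions.

import numpy as np

rng = np.random.default_rng(1)

# Minimal sketch of a skewed k-factor shape function in the spirit of
# Figure 9: about a 7.5% expected wing weight reduction, with some chance
# of doing slightly better or worse.  A triangular distribution is an
# assumed stand-in for whatever shape the designer actually assigns.

k_wing_weight = rng.triangular(left=-0.090, mode=-0.075, right=-0.0675, size=10000)
print(k_wing_weight.mean())   # close to the expected 7.5% reduction, skewed low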
Once defined, the k-factors become the variables used in the DoE table and subsequent RSE generation.
The responses, in the form of the response surface
equations, are functions of the individual k-factors. Each technology concept, then, becomes a vector of these variables. In essence, a k-factor becomes a "technology dial". The methodology establishes a direct relationship between the technology dials and the responses of interest. By examining the prediction profile, created by varying the k-factors in the DoE instead of design variables, the designer can identify the sensitivities of the responses to those k-factors. Remembering that the k-factors directly represent parts of technology
concepts, the designer can clearly identify those factors which have a significant impact on the responses. Later, these sectional technology concepts, represented by their k-factors, can be grouped together (to create a vector) to form "technology scenarios", with each scenario representing a complete technology concept. In this way, the designer can analyze both the benefits and the risks associated with a technology concept. A final advantage of using k-factors is the smooth relationship they provide between technology levels and the system responses, based on the shape distributions given. This allows the designer to select not only the endpoints of the technology (in other words, having the technology "on" or "off") but also lets him/her select an intermediate value of a technology improvement and assess its impact on the design. For example, if a k-factor represents a technology that impacts aircraft L/D ratio, the designer could "dial" in a maximum value of, say, 15% improvement and quantify this impact. However, the designer could also explore a 10% improvement, a 5% improvement, or any other value that is contained in the range of the k-factor that was used to produce its RSE. A technology scenario is created as follows. For each candidate technology, identify the key disciplinary metrics (represented by k-factors) that will be affected by that technology and decide by what amounts they will be affected. For example, an advanced aerodynamics scenario might affect, either positively or negatively (representing both benefits and drawbacks), the following variables: overall aircraft drag (an improvement), engine specific fuel consumption (a degradation due to possible increased engine bleed to power, say, a blown wing), and the systems learning curve (increased due to increased system complexity).
Together, this group of variables
represents one technology scenario. (Realize that each of the variables selected must have been used as a variable in the creation of the RSEs.) This step will be based on data provided by the discipline experts, empirical or historical databases, and the configuration designer's own knowledge and intuition. Once a technology scenario is created, a Monte Carlo analysis is conducted to assess the impact of that technology scenario.
Monte Carlo Simulation
After using k-factor inputs and applying the RSM to create the RSEs, the next step is to import the RSEs into a Monte Carlo analysis package in order to conduct the analysis. Crystal Ball [33] is one such statistical package that works in conjunction with Microsoft Excel. Excel spreadsheet templates were created to allow the user to easily import the RSEs in the format in which they are provided by JMP. A new input file is created for each technology scenario to be explored. A shape function must be assigned to each variable affected by the scenario. These shape functions will determine the probability of achieving certain values of variables. Because the actual shape functions are subjectively selected and can heavily influence the results, it is up to the designer to use their database of knowledge and expertise to ensure the shape distributions are appropriate and reasonable. Variables that are not affected by the technology scenario are set at their most probable, or baseline, values. A Monte Carlo simulation, based on each scenario, is then conducted. The program achieves this by randomly choosing variable values based on the shape distributions given. The responses are then calculated through the use of the
RSEs. The results are probability distributions that indicate the likelihood of achieving a certain result. The precision of a Monte Carlo analysis is directly proportional to one over the square root of the number of simulations that are run [34]. In order to get a good statistical analysis, it is not unusual for the number of runs to be on the order of 10,000 cases.
Because the RSEs are fairly straightforward equations, they require little
computational power, and running thousands of cases is a fast and feasible task. (Compare this to running the same number of cases through the synthesis code and the computational beauty of the RSEs becomes apparent.)
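The Monte Carlo step can be sketched in a few lines: sample the k-factors affected by a technology scenario from their shape functions, evaluate the RSE for each trial, and collect the resulting distribution. The RSE, coefficients, and distributions below are invented stand-ins, and a plain Python loop replaces the Crystal Ball/Excel environment used in the actual study.

import numpy as np

rng = np.random.default_rng(42)
n_trials = 10000    # precision improves roughly as 1 / sqrt(n_trials)

# Hypothetical RSE for takeoff gross weight as a function of two k-factors.
def togw_rse(k_drag, k_sfc):
    return 33600.0 - 2000.0*k_drag + 1500.0*k_sfc + 300.0*k_drag*k_sfc

# Technology scenario: assumed shape functions for the affected k-factors.
k_drag = rng.triangular(-0.10, -0.05, 0.0, n_trials)   # drag reduction (benefit)
k_sfc = rng.normal(0.02, 0.01, n_trials)               # SFC penalty (drawback)

togw = togw_rse(k_drag, k_sfc)

# Empirical PDF (histogram) and CDF built from the Monte Carlo samples.
counts, edges = np.histogram(togw, bins=40)
cdf = np.cumsum(counts) / counts.sum()
print(togw.mean(), togw.std())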
Analysis: the Probability Graphs
Figure 10 shows examples of the two ways that the probabilistic results from the Monte Carlo simulation can be presented. The first is the probability density function (PDF), which depicts the frequency that a certain value is observed in the simulation. The second is the integral of the PDF, called the cumulative distribution function (CDF), which shows the probability or confidence of achieving a certain value. By examining the CDF in Figure 10, the designer can see that there is about a 10% chance of achieving a takeoff gross weight of 33,475 pounds or less, but a 100% chance of achieving a takeoff gross weight of less than 33,850 pounds (find 33,475 on the horizontal axis, follow it up to where it hits the curve, and read the corresponding probability from the vertical axis). The designer can interpret information from the probability distributions in a number of ways. If the distribution has quite a bit of variability, but some or most of it fulfills the requirement being examined, this would suggest the benefit of investing more
resources into that technology concept. This addition of resources could have the effect of narrowing the uncertainty associated with the technology. On the other hand, if the distribution indicates that the probability of meeting the requirement is low, then it might be more prudent to examine other technology options before investing money into a technology that might not be sufficient to solve the problem. This kind of system-level investigation can also show how much the detrimental effects of the technology are penalizing the system. This information, shared with the disciplinary experts that engage in the development of the technologies, could be used to determine how resources should be allocated towards reducing the penalties, as opposed to improving benefits.
[Figure 10 shows, for 5,000 Monte Carlo trials of the forecast TOGW, a frequency chart (probability density function) and a cumulative chart (cumulative distribution function) over the range of roughly 33,350 to 33,850 pounds.]
Figure 10- Examples of a Probability Density Function and a Cumulative Probability Function
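Reading a confidence level off the CDF amounts to taking the fraction of Monte Carlo trials at or below the target value. The sample values below are invented so that the result only roughly mimics the TOGW example discussed above.

import numpy as np

rng = np.random.default_rng(7)

# Reading a confidence level off the empirical CDF: the fraction of Monte
# Carlo trials at or below a target value.  The samples are invented and
# only roughly mimic the TOGW example in the text.

togw_samples = rng.normal(33600.0, 100.0, 5000)      # hypothetical TOGW trials, lb
confidence = np.mean(togw_samples <= 33475.0)        # P(TOGW <= 33,475 lb)
print(f"Chance of achieving 33,475 lb or less: {confidence:.0%}")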
At this point the TIF environment is complete and the tool is ready for further analysis and use. If a response represents a design constraint and the CDFs show a low probability of achieving that response, the designer has a few options.
One is to
manipulate the shape functions to give a better probability of success. Realize, however, that this represents a potentially higher level of technology that must be achieved. For example, if an advanced aerodynamics scenario is created and the designer, based on information from his/her discipline experts, can expect a 10% improvement in L/D, but the CDF shows a low probability of success, the designer can rerun the simulation and see how much a 15% improvement will do. If it is determined that there is a high probability of success with the 15% improvement, the designer will need to go back to his/her discipline experts and have them determine whether such levels are realizable or even possible. Other options include redefining or reducing the constraint, or continuing to look at alternative technologies.
Either way, the designer has gained valuable
knowledge concerning the effect of integrating a technology on a vehicle system. The environment that has been created represents a specific class of aircraft with a specific range of variables. The tool, therefore, may be used to conduct studies on similar aircraft without having to recreate a new environment.
Summary of Incorporating Uncertainty and Response Surface Methodology
Response Surface Methodology and Technology Impact Forecasting have been applied extensively to aerospace concepts [35]. These tools provide the designer with knowledge crucial to their design decisions, yet much earlier in the design process, where
these decisions have the most economic effect. The statistical and probabilistic nature of these techniques allows for the consideration of uncertainty and risk when considering new technologies or when making assumptions based on insufficient or unavailable data. The goal of the current research is to use these basic tools and methods to aid in the formulation of an overall methodology at the system of systems level. For example, rather than having a sizing and synthesis environment for a single aerospace concept, the environment becomes the whole theater level simulation code.
Metamodels of the relationships
between system components, not only aerospace vehicles but also diverse components such as ground vehicles or communication systems, may be developed using RSM. Screening for inputs of high effect will help scope the problem to a manageable level. And TIF concepts, including k-factors and technology scenarios, can be expanded to account for various uncertainties inherent in the problem (not just limited to the infusion of technologies). There have been endeavors to apply basic RSM at the theater level. Grier et al. [36,37] use factor analysis in an attempt to simplify response surface modeling of the campaign objectives for the campaign code THUNDER. Soban and Mavris [38] applied RSM to the campaign code ITEM to check for method feasibility as well as identify any shortcomings inherent in the application of the methods at the theater level. Results of this study led to issues and solutions discussed in Chapter IV: Probabilistic Concepts. Finally, it should be realized that to truly understand the complex interactions and model the complete effects of system components, metamodel and probabilistic techniques need to be applied not only at the top level of the system of systems
environment (the theater level) but analysis needs to be conducted at the sublevels as well. This concept is discussed in greater detail in subsequent sections.
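As an illustration of the metamodel-building step described above, the following sketch fits a second-order response surface equation to a batch of runs by ordinary least squares. The "campaign-code" response here is a synthetic stand-in so the example is self-contained; in practice the inputs X and response y would come from a designed experiment run through the theater-level code.

```python
# Minimal sketch, assuming a batch of runs (inputs X, response y) is available:
# fit a second-order response surface equation (RSE) by least squares, the
# metamodel form used throughout RSM. Names and data are illustrative.
import numpy as np
from itertools import combinations_with_replacement

def quadratic_design_matrix(X):
    """Build [1, x_i, x_i*x_j] columns for a full second-order polynomial."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(k), 2)]
    return np.column_stack(cols)

# Synthetic stand-in for a campaign-code output (e.g., an MoE vs. two k-factors)
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(60, 2))                 # coded inputs in [-1, 1]
y = 5.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.8 * X[:, 0] * X[:, 1] + rng.normal(0, 0.1, 60)

A = quadratic_design_matrix(X)
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)            # RSE coefficients b0, bi, bij
y_hat = A @ coeffs
print("R^2 =", 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2))
```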
CHAPTER III
THE NATURE OF MILITARY MODELING
Models and Modeling
Chapter I summarized the motivation for the development of a system of systems effectiveness methodology.
The uncertainty inherent in such a method will be
approached by utilizing the probability methods described in Chapter II. These methods, as well as the overall proposed methodology, depend, at their heart, on appropriate and adequate modeling tools. This chapter discusses in detail the nature and characteristics of military modeling.
An understanding of this nature is paramount to the successful
development of a system of systems effectiveness methodology. The world we live in is comprised of many complex systems. The naturally occurring complexities of biological systems, the intricacies of planetary and stellar motion, or the thermodynamic processes of weather are augmented with the results of our technological leaps and bounds of the last century: airplanes and automobiles, intricate economic infrastructures, even the harnessing and transportation of energy that is the backbone of our common household utilities. In our never-ending fascination with the world, we seek to study and analyze these complex systems. Commonly this analysis is conducted through modeling and simulation.
Modern military strategy is an ideal example of a system that is constantly increasing in complexity. The modern warfighting environment is comprised of vastly disparate sub-systems that must all combine effectively to achieve a desired objective. In addition, the rapid development of new technologies adds potential capabilities that must be explored. Because of this, the use of models in modern military defense strategy continues to play an increasingly important role [39].
Types of Models
A model can be defined as a purposeful abstraction of a more complex reality [6, 40].
This reality could take the form of a real or imagined system, an idea, or a
phenomenon or activity. The model tries to capture the essence of this reality in a simplified representation. A military model is specifically defined as “a representation of a military operation and is an abstraction of reality to assist in making defense-related decisions” [6]. Models can be divided into two categories [6, 40, 41, 42]. The first is a physical, or iconic, model. A physical model is usually just that: a physical representation of the object it is representing. For example, a scale model of an aircraft that is used for wind tunnel testing is a physical (iconic) model. Likewise, a map, a model car, a blueprint or scale model of a building all represent iconic models. In contrast to a physical model is an abstract, or mathematical model. This kind of model uses symbols and logic to create an abstraction of reality. Mathematical relationships are often utilized to represent the dynamic properties of the object. Examples of an abstract model include an aircraft
sizing and synthesis code, a biologic model that mimics population growth of bacteria, or an economic model of the stock market. Most military models in use today are abstract. The remainder of this research confines itself to the use of abstract models. Abstract models are further divided into descriptive and prescriptive [6, 41]. A descriptive model limits itself to replicating the behavior of what it is representing. As such, there is no value judgement placed on the behavior; no “goodness” or “badness” is represented. An example of a descriptive model is a combat simulation. Decisions based on information from descriptive models are made by inference, as there is no integral optimization structure inherent in the model. For example, a sizing and synthesis code, which is another example of a descriptive code, may indicate that an aircraft’s gross weight is 35,000 pounds, but the model itself does not specify whether this is an acceptable value or not. In contrast, a prescriptive model specifies a course of action, with an attached value judgement. A prescriptive model (sometimes called normative, which implies a representation of human behavior) may label an output as adequate, inadequate, or optimal. Linear programming, dynamic programming, game theory, and decision theory are all methodologies that indicate to their user an acceptable course of action. It is often difficult to separate a descriptive model from a prescriptive one. Descriptive models are often used prescriptively, to explore options and solutions by trial and error. Prescriptive models are often used for insight only, as some loss of fidelity is usually traded off in order to incorporate optimization schemes. The user of such models needs to understand
the abilities and limitations of each model in order to utilize them accurately and effectively. This overall categorization of models is shown in Figure 11.
Figure 11 – Model Categorization (models divide into iconic/physical and abstract/mathematical; abstract models subdivide into prescriptive (normative) models with an attached value judgement, such as linear programming, dynamic programming, game theory, and decision theory, and descriptive models that replicate behavior with no value judgement and no integral optimization)
Finally, a word needs to be said on simulation. There are those who do not distinguish between simulation and modeling. Caughlin [43] states that a model could be “…a mathematical relationship or a method (algorithm) of expressing that relationship- a simulation. Therefore, we consider a simulation to be a particular representation of a model and will not distinguish between them.”
This parity is echoed by the following
definition: “1. A model is a representation of a system. 2. A simulation is: a model; the exercise of a model; a Monte Carlo model. …In this paper, model and simulation are used interchangeably” [44].
Other authors, however, claim a subtle yet distinct difference between modeling and simulation. Bennet [40] defines simulation as “a technique or set of techniques whereby the development of models helps one to understand the behavior of a system, real or hypothetical.”
This relationship is similarly defined by “Simulator: any
compilation system (such as a single processor, a processor network, or more abstractly, an algorithm) capable of executing a model to generate its behavior” [45]. Perhaps the clearest definition of the difference between the two is given by “Simulation is the process of building a model of a system and experimenting with or exercising the model on a computer for a specific purpose such as evaluation, comparison, prediction, sensitivity analysis, optimization, what-if analysis, or training. Simulation cannot be performed without a model.
Every simulation must use a model” [41].
For the
remainder of this study, the above definition and distinction between modeling and simulation will hold: the research will contain models and modeling of the system, and simulation will be conducted using those models in order to assist in the analysis.
Conceptual Models vs. Computer Models
An important concept in the modeling and analysis of complex systems is the distinction between a conceptual model and a computer model. As referred to in the first chapter, there is often a lack of differentiation between analysis methodologies and the tools (models) used in those methodologies. Hillestad et al [39] states “Using a campaign model for analysis requires maintaining the proper relationship between the model and the analysis. What seems to have been lost in many of the modeling developments is that
analysis is a “process” and that the model is but one tool in that process.” Consequently, the first step in the analysis of a complex system should be the formulation of a plan of attack. This plan should clearly identify the goals of the analysis as well as the inputs and outputs necessary for a successful completion of the analysis. This plan, or algorithm, or methodology, becomes the conceptual model of the problem. Zaerpoor and Weber [46] state “it is of utmost importance to have a documented, coherent and justified conceptual model of the experiment before one reaches into the bag of validated computer models and blessed scenarios.” Further, “In making the conceptual model, one starts with the questions of interest and conceives of the leanest physical experiment that would result in satisfactory answers for the questions. The conceptual model is then an abstraction of this experiment.” Computer models, then, become the tools used to achieve the analysis that is prescribed by the conceptual model.
If the conceptual model is in the form of a
methodology, then careful investigation of the problem will identify the appropriate models to utilize within the methodology. It should also be noted that a methodology can be a stand-alone algorithm that represents a certain class of problems. The specific problem, however, determines which models to use. In this way, the methodology is not a specific combination of modeling tools, but rather a framework that allows an interchange of modeling tools as appropriate. This concept is illustrated in Figure 12. The methodology proposed in this research, POSSEM, is designed to be a conceptual model for the analysis of the effectiveness of military system of systems. Although
specific computer models were used in its development, the methodology is intended to be a framework of analysis, independent of specific computer models.
Figure 12- Characteristics of a Conceptual Model (first step in the analysis problem; defines the analysis pathway; clearly identifies and states the problem; identifies analysis goals; aids in determining which tools to use)
How Models are Used
An endless variety of problems to be studied necessitates a correspondingly wide variety of specialized models. Thus, it can be said that there are an infinite number of uses for models. However, all of these models do have a unified purpose: they aid in analysis. The purpose of analysis, in turn, is to gain insight into a particular problem. Thus, the ultimate purpose of any model is to aid in gaining insight into the particular problem it represents. Often this insight is translated into decision-making capabilities. As discussed in the first chapter, this research is intended to result in the formulation of a
framework for the probabilistic assessment of system of system effectiveness in order to facilitate decision-making. With this in mind, the uses of models can be classified into three primary categories. Assessment- called “point estimation” by Zaerpoor et al [46], assessment is using a model to replace a laboratory experiment. This is done when a laboratory experiment is not necessarily economical or feasible. An example of this is an aircraft synthesis code, a less expensive and quicker way to model design changes than actually building and testing an aircraft prototype. Assessment usually uses models that are high fidelity, physics-based computer codes. Most engineering codes are used for assessment. Comparison- this use of models allows for a comparison of alternatives to be explored.
Comparisons usually use models of lower fidelity than those used for
assessment, and concentrate on identifying trends and ratios rather than explicit numbers. Hillestad et al [39] explains, “the model can be used to help identify and analyze the relative importance of various systems, operational concepts, and force structures.” Exploration- finally, exploration examines the relationship between input variables. As stated by Zaerpoor et al [46], exploration “is the method of choice when the number, range, and effect of unknown or unknowable variables dominate the range of outcome.”
This use of a model allows a broader insight into possible future scenarios as
well as the impact of new concepts, such as technologies. Before utilizing a model, it is necessary to understand what type of model it is and how it is primarily used for analysis. Too often, codes are used incorrectly to achieve specific analysis goals. “It is widely assumed that a blessed scenario plugged into a
validated combat model will automatically result, not only in credible, but also in useful results” [46]. The distinction between conceptual models and computer models drives home this point: a conceptual model of a problem to be analyzed must be what determines the appropriate models to be used. This research strives to develop one such conceptual model: a methodology.
Transparency
Sometimes called “visibility”, transparency in a model is the client’s and analyst’s ability to understand the model’s cause and effect relationships [6]. This understanding is a key factor in a successful analysis. It is the responsibility of the analyst not only to present results but also to explain how the outputs result from the inputs, and how the data and assumptions affect the resulting conclusions [39]. Further, the conclusions should be model-independent; they should follow logically from the inputs and assumptions. It is a circular relationship: “the ability to explain the results depends on transparency.
This in turn depends on both the analytic process and the analyst’s
understanding of the relationships in the campaign model - explanation of the results is only partially a function of the model” [39]. Transparency, thus, is very user specific. The analyst who is very familiar with mathematical relationships and equations that comprise the model may think the model very transparent, while the new user may have difficulty performing an accurate analysis based solely on the inputs and the outputs. The opposite of transparency is opacity, and is a common feature of complicated, complex simulations and computer models. Because the same model can be labeled as
transparent or opaque, depending on the user, a heavy emphasis on the model’s interface can be used to improve transparency. But when all is said and done, the primary factor in improving model transparency is a clear, precise analysis path. As stated by Hillestad et al [39], “In the end, trust in the model, analyst, and data will provide some comfort that mistakes have been avoided, but it will not substitute for the explanatory transparency developed by the analytic process.”
Classification of Military Models
The vast number of existing military models [47, 48] argues for the need to organize them into some sort of classification system. Such a system would aid in the development, acquisition, and use of models, as well as provide guidelines for the choice of models to be used in a particular analysis. The challenge becomes to choose a classification system that is intuitive and useful. To start, the difference between a taxonomy and a catalogue needs to be clarified. A taxonomy is a classification system that divides things into ordered groups or categories. A catalogue, on the other hand, is a listing or itemized display that usually contains descriptive information.
A taxonomy, thus, can form an indispensable
foundation for a catalogue [49], yet the taxonomy itself may not provide essential information such as that found in the catalogue. For example, the taxonomy used to classify a rosebush contains no information as to whether that rose is pleasing to look at or has a nice fragrance. A useful classification system must thus be more than just a taxonomy. The goal of this section is to discuss some current classification systems in
use today. This will be done in order to understand how models relate to each other, and thus aid in formulating the foundation of the overall conceptual model being developed.
SIMTAX
Responding to the Department of Defense (DoD) need for a wargaming/warfare simulation descriptive framework, the Military Operations Research Society (MORS) hosted a series of workshops that resulted in the development of SIMTAX, a taxonomy for warfare simulation [44].
This taxonomy was designed to aid the DoD in the
development, acquisition, and use of warfare models. Specific features of the taxonomy were designed to accomplish the following goals: classify warfare simulations, construct frameworks for comparing conflict models, and provide the foundation for a comprehensive wargames catalogue. The SIMTAX workshop concluded that warfare simulations needed to be classified according to three characteristics of equal importance and occupying the same relational level (in other words, they are not hierarchically related).
These three
characteristics are the purpose, the qualities, and the construction of the model. Figure 13 shows the traits and sub-categories associated with each characteristic. The purpose of the model describes why the model was built or to what purpose the model could be applied. Although it was conceded that a particular model may be used for a variety of purposes, it was agreed that a singular purpose could be determined that described the basic function of the model. The qualities of the model are those aspects that describe the model itself, such as its level of detail, its scope or its domain. Finally, the construction
of the model describes the design of the model, such as whether or not human participation is needed or how randomness is treated.
Figure 13- SIMTAX Categorization of Warfare Simulation (Purpose: analysis, research and evaluation, operation support/decision aids, training and education, skills development, exercise driver; Qualities: domain, span, environment, force composition, scope of conflict, mission area, level of detail of processes and entities; Construction: human participation required or not required, static or dynamic time processing, stochastic or deterministic treatment of randomness, sidedness of one, two, or three or more sides)
SIMTAX is a useful way to classify existing models. It allows models to be grouped by common characteristics, thus aiding side-by-side comparisons of model capabilities.
It should be emphasized that SIMTAX was developed for warfare
simulations, and thus is not necessarily appropriate for the classification of all types of military models, for example, engineering models.
Other Taxonomies
While SIMTAX provides an exhaustive framework for the classification of warfare simulations, Hughes [6] provides a sampling of other characteristics under which military models in general may be classified.
Models by Application or Analytical Purpose- similar to the SIMTAX characteristic, Hughes’ monograph lists this as the primary categorization of military models. Models are divided into seven major categories of application: battle planning, wartime operations, weapon procurement, force sizing, human resources planning, logistic planning, and national policy analysis. In addition, two other categories may be extracted from those above and separated into distinct entities: command, control, communications, and intelligence, and cost models.
Models by Technique or Level of Abstraction- military models can be categorized into one of the following four areas: analytical representations, computer simulations, war games, and field experiments.
Models to Describe, Prescribe, and Predict- this category describes the descriptive, predictive, and normative traits as discussed earlier in this chapter.
Ad Hoc and Standing Models- a standing model is similar to a legacy code. It implies a model that is continuously supported, updated, and improved. In contrast, ad hoc models are built with a specific decision in mind.
Scientific and Sensible Models-
characteristics that classify something as
“scientific” include openness, explicitness, objectivity, the use of empirical data, quantification, and a self-correcting character [50]. A scientific model, therefore, should
have these inherent traits.
Military modeling, however, can often be described as
“sensible” rather than “scientific”. A sensible model is one in which the parameters and variables are set down, the equations (processes) are open to inspection, and the same inputs can be shown to produce the same or similar responses [6]. However, there are inherent assumptions and value judgements that contributed to the logic of the model. Analysis involving sensible models focuses more on the inherent logic of the model and the choice of inputs. The results cannot be validated in the scientific sense, but if the model is accepted and its limitations recognized, the results that follow can be useful in aiding the decision-making process.
Models by Scope and Scale- this final category separates models into three or four levels of aggregation. Rather than discuss only characteristics of each model, this sort of classification starts to take into account how the models relate to each other. The next few sections will discuss this idea of hierarchical modeling in more depth.
It should be noted that Hughes’ categories cannot be used concurrently as a single taxonomy. The categories as stated are not mutually exclusive. However, each on its own provides a useful key to understanding the capabilities and nuances of individual models.
Hierarchical Modeling
A well-known way of relating military models to each other is to classify them as to their position in a defined hierarchy. This hierarchy is often portrayed as having a pyramidal shape (Figure 14) and is described by Hughes [6]: “Whatever the number of
echelons that is included in it, the bottom will contain mainly phenomenological models and the top a single macro model. The pyramid may represent the nesting of models in one interacting, organic whole; show how results are integrated from echelon to echelon; or merely be an abstract representation of a structure that relates independent model operations.” Typically, a hierarchical structure will be divided up into three or four aggregate levels. Ziegler [51] first introduced the idea of decomposing models in a hierarchical manner that corresponded to levels of detail. Later, he described this decomposition as having as its first level the most abstract behavioral description of the system. This is followed by sub-systems levels of increasing detail, until a limit is reached and further decomposition is not justified [52].
This type of decomposition is echoed in the
hierarchical structures of military models described today: the first level usually contains an encompassing theater or campaign level model, followed by engagement or mission models, with the lower levels being reserved for engineering sub-system models. Figure 14 shows a traditional pyramidal structure containing three levels. Figure 15 (created from data in Reference [56]) and Figure 16 (Reference [53]) portray two alternative decompositions. Each of these categorical levels will be discussed in more detail.
64
Danielle Soban
Figure 14- Traditional Pyramid of Military Models (from top to bottom: theater or campaign model; mission model; single or multiple engagement model; engineering model)
Figure 15- Alternate Hierarchical Structure of Military Models I (Level I: system/engineering analysis, dealing with individual systems or components; Level II: platform effects analysis, in which a component is associated with a platform; Level III: mission effectiveness analysis, assessing the contribution of a platform to a combat mission environment, including command and control, time-sensitive maneuvers, and a defined enemy posture; Level IV: force effectiveness analysis, covering all activity associated with operations of joint campaigns against an enemy combined arms force)
Figure 16- Alternate Hierarchical Structure of Military Models II (decision-making focus mapped to applicable levels of modeling and simulation: strategic issues to the strategic level, operational issues to the campaign level, tactical issues to the mission and engagement levels, and system design issues to the engineering level)
Decomposition Levels
The examples of hierarchical levels given in the previous section illustrate an interesting point: although each example chooses its own delineation for its levels, and names them correspondingly, each individual hierarchy does encompass the entire spectrum of relational military models and analysis. In order to discuss and define these types of models, they will be divided into roughly three categories: engineering models, mission models, and campaign models. These model categories will now be discussed in more detail, with the understanding that they will overlap in definition as they pertain to individual hierarchies.
Engineering Models
The level that comprises the bottom of all hierarchies is that of engineering models. These are usually detailed mathematical representations of individual systems or components that may or may not be associated with a platform. For example, a hierarchy may consider a jammer or a sensor to be an engineering model, yet another hierarchy
may lump them together onto a platform and have one model of that integrated system. Engineering models are usually physics-based, contain a high level of detail, and are of relatively short simulation timeframe. Inputs are usually design variables, sizing criteria, and new technologies. Basic mission requirements and constraints may be introduced at this level to aid in sizing. For example, an aircraft sizing and synthesis code typically needs a rudimentary mission profile to be input for sizing purposes. (An interesting investigation of the influence of mission requirements on the vehicle being sized can be found in Reference [54].) Outputs of engineering models often consist of geometric dimensions and performance data. Engineering codes are generally used to conduct tradeoff studies of design variables and technologies, and to calculate performance characteristics. These types of codes can usually be validated and they lend themselves well to real-world testing. However, analysis conducted using this type of model is limited to the scope of the model. As discussed in the first chapter, an airplane sizing and synthesis model can provide data as to the performance of a particular aircraft, yet does not provide information as to how well that aircraft will aid in reaching system (theater or campaign) level goals. Figure 17, from Reference [53], shows the engineering level features.
Figure 17- Engineering Level Model Features [53] (characteristics: short simulation timeframe, narrow scope, high detail, detailed systems and subsystems, some physics-based interactions; strengths: amenable to real-world testing, generally able to be validated, centralized data collection; limitations: limited scope and context, only answers questions in a small arena)
Mission Models
The middle level of the traditional pyramid is occupied by mission models. These models, also referred to as engagement models, encapsulate one-versus-one or many-versus-many encounters of specific sub-system components. For example, a mission level code could be used to simulate air-to-air combat between multiple flights of aircraft. The focus at this level concerns the timing and details of a single mission. Scenario, strategy, support, and overall force capabilities are usually not considered [55]. The timeframe of mission models is usually on the order of hours or days, and they involve a medium level of detail. The level of interaction is increased, and the model will usually contain aggregate systems and subsystems.
Unlike the engineering models, mission models are more
troublesome to validate, and data collection for input is often difficult. Figure 18 shows a representation of the mission level from Reference [53].
Figure 18- Mission Level Model Features [53] (characteristics: medium simulation timeframe of hours to days, up to theater in scope, medium detail, entity-based with simulation entities mapping to real-world entities, many effects-based interactions, aggregated systems and subsystems; strengths: variety of interactions and theater-sized scenarios; limitations: not comparable to real-world testing standards, not easily validated, dispersed data collection)
Campaign Models
Campaign models, also called force models or theater models, are usually large single codes that encompass the effects of the total forces involved, including air, ground, and naval, as well as coalition forces [39, 56]. They are primarily used to answer questions and make decisions at the larger system level. For example, campaign analysis, aided by campaign models, is used to study the interactions of strategy, force allocation, and system capabilities.
Other features that dominate analysis are the effects of
command and control decisions, deployment and sustainment (logistics) issues, and the cumulative effects of decisions as considered in a time-spanning environment. There are many campaign (or theater) level models in use today. It is common that certain organizations favor specific codes for their analysis needs, and these codes often emphasize, in terms of modeling detail and capability, a particular force. Some of the more widely used codes and the organizations that are their primary users are listed in Figure 19.
Figure 19- Common Campaign Codes in Use Today (USN: GCAM, ITEM; USA: TACWAR, VIC; USMC: COMBAT IV; USAF: THUNDER; OSD: TACWAR, ITEM)
There are inherent limitations to campaign models. The first is that an inordinate amount of experience and information is needed by the user to use them effectively. This echoes a key point made earlier in this chapter: transparency of the model is user dependent, and the quality of the analysis is often directly related to the experience of the analyst. In addition, considerable detail is needed in both the scenario definitions and in the component descriptions in order to have a complete analysis. Yet at the same time, the complexity and run time of the code necessitates that detail be kept at a minimum. One issue in the use of campaign codes that must be noted is the ability of the user to influence the outcome.
Campaign codes are often based on empirical
relationships and experience mapped into coded equations, as opposed to direct physics-
based mathematics. It is difficult to model such complexities as human judgement and error. As noted previously, the campaign level codes need to be used by an analyst with a great deal of insight, skill, and experience with the problem being modeled. The input assumptions crucially drive the results. As such, the analyst has the capability, either wittingly or unwittingly, of unduly influencing the results towards a particular outcome. Another problem is the length of the campaign to be modeled. As the campaign time increases, difficulty in retaining fidelity of the model also increases. A key feature of campaign time is that as the campaign progresses, tactics and decisions evolve based on experiences and results so far [6]. This “human in the loop” problem is considered in more detail in subsequent sections. The campaign level is depicted by Reference [53] and is shown in Figure 20.
Figure 20- Campaign Level Model Features [53] (characteristics: long simulation timeframe of months, up to global in scope, low detail, effects-based interactions, aggregated units and entities; strengths: wide variety of interactions over time, full campaign; limitations: far from real-world testing standards, difficult to validate, data collection widely dispersed across services, reliance on abstraction)
The Military Code Continuum
Given the features of each type of military model, the traditional pyramid formulation can be replaced and enhanced by a new concept, similar to that in Reference [53], of a military code continuum. Figure 21 shows this continuum and illustrates its two primary analysis tradeoffs: detail versus complexity. As analysis moves from the engineering end of the spectrum to the campaign analysis end, the modeling codes increase dramatically in complexity yet lose an enormous amount of detail. The sheer number of entities that need to be modeled at the campaign level, coupled with an increasing number of decisions and interactions, soon leads to a modeling problem that is so complex that it becomes impractical to model the inputs with any level of detail. This necessary sacrifice of detail comes with a price. At the engineering level, where significant detail is captured, the resulting metrics are very specific. Questions can be answered precisely. As the analysis moves towards the campaign level, the metrics become increasingly amorphous, with results that are more subjective and provide insight rather than explicit answers. Before proceeding, it is important to understand the differences between detail and complexity. Level of detail, or resolution, relates to how well and to what depth the object under consideration is being modeled [57]. It is a measure of the degree to which the model has captured the essence of the object. Complexity, on the other hand, is a computational feature of the model itself, and is a measure of the intricacy of the model.
How these two concepts relate to model accuracy is likewise important. The following intuitive assertions are repeated from Reference [57]:
- Increased level of detail does not necessarily imply increased accuracy
- Increased complexity does not necessarily imply increased accuracy
- Increased level of detail usually does imply increased complexity
- Complexity is directly related to computer runtime
As shown in Figure 21, complexity is represented by several modeling features. First is validation. When considering a highly detailed, physics-based model, validation becomes rather straightforward. Results can be compared and correlated to existing systems, and the equations verified. As the model moves up the continuum, however, the modeling becomes less physics based and more effects based. The results of campaign analysis provide more insight than deterministic answers, making validation very difficult. Indeed, these subjective answers and insight are the next area of increasing complexity. Metrics based on the answers resulting from the engineering end of the continuum are specific and easy to understand and use. At the campaign end of the spectrum, however, the metrics can become vague, leading to insight but rarely concrete answers. In addition, these metrics are usually the result of a multitude of effects, making isolation of primary effects difficult. Next is the skill of the user/analyst. While it is good practice to ensure that the user of any type of modeling code has a good idea of what is being modeled and how, it can still be said that an engineering code is more straightforward and requires “less” skill to use. If the user/analyst of an engineering code trusts the basic physics of the code,
then the use and analysis of that code is less complicated. On the other hand, there is a high level of experience needed to use and analyze a campaign code successfully. The lack of deterministic relationships in the model, coupled with vast number of entities and concepts modeled, necessitates a user that is thoroughly familiar with campaign concepts and can make appropriate assumptions. The next three features can be discussed together.
Moving up through the
continuum from the engineering level to the campaign level, the number of entities that need to be modeled increases, as well as the number of interactions and the numbers of decisions that need to be made. This in itself tremendously increases the complexity of the campaign end of the spectrum. Finally, we discuss timeframe. An engineering code is usually considered to be independent of a timeframe. Moving through the continuum, entities are grouped together and pitted against each other. Once a scenario becomes involved, time becomes a factor. Mission level codes may have a timeframe of minutes to hours. A typical campaign can be modeled at hours, days, or even weeks. The addition of a time element increases the complexity of the model. The level of detail at the engineering end of the continuum is very high. This is because only a single entity is being modeled, and so must be modeled in sufficient detail. As the continuum moves to the right, however, the sheer number of entities that must be modeled, as well as their interactions, necessitates that each of these entities cannot be modeled in the same level of engineering detail. This would cause huge costs both in terms of the setup and analysis of the code, as well as the computational runtime.
The capture of the physical process can be considered high at the engineering end, although this may or may not indicate a greater model accuracy.
Figure 21 – The Military Code Continuum (complexity dimensions, moving from the engineering end to the campaign end: validation from easy to difficult, metrics from specific answers to subjective insight, necessary skill of the analyst from low to high, number of entities from few to many, number of decisions from none to many and unmanageable, number of interactions from few to many, timeframe from none to hours, days, and weeks; detail dimensions: capture of the physical process from high to low, level of detail from high to low or none; the continuum spans the component, sub-system, engineering, platform, unit, mission, force, and campaign levels, illustrated by entities ranging from a receiver and radar through an aircraft, flight, squadron, group, wing, air force, and command)
Summary of Military Modeling
The purpose of this chapter was to provide an overview of military models, what they are, and how they are related to each other. This understanding is necessary given the goal of this research: to develop a methodology to assess system of systems effectiveness within the military modeling environment.
The traditional pyramid of models was discussed, as well as its replacement with the military code continuum. It is this continuum and its features that have the biggest effect on the development of the methodology.
The next chapter discusses the
application of existing probabilistic methods directly to a campaign level code. In doing so, several key issues are identified, and the solutions to these issues make up the backbone of the proposed methodology. This backbone relies heavily on the concept of the military code continuum, and the relationships and characteristics of the types of models that make up the continuum.
CHAPTER IV
PRELIMINARY INVESTIGATION: APPLYING CURRENT METHODS TO CAMPAIGN LEVEL
The first three chapters provided the background for the ensuing research. The next step was to begin the formulation of the methodology. The first investigative question that needed an answer was: can the existing statistical methods that were developed for the engineering level be applied directly to the campaign level?
A
preliminary investigation of this question was conducted in order to identify any specific issues of concern that might arise from the direct extrapolation of the methods from the engineering level to the campaign level. Once these issues were identified, solutions to these issues could be proposed. Based on the incorporation of these solutions, an overall new methodology would be developed. The results of this investigation did indeed identify three major issues of concern that had a direct impact on the formulation of the system of systems effectiveness methodology: model abstraction vs. integration, the human in the loop dilemma, and scenario significance. The initial investigation will be discussed in this chapter, and the resulting issues and their proposed solution will be contained in the next chapter.
Campaign Code Selection
In order to conduct the preliminary investigation, a suitable campaign level code was needed. The selected code would need to have input and output variables that were
relevant, provided insight, and were easily manipulated. In addition, in order to be compatible with the existing methodology, the code needed to have the ability to run multiple cases quickly and efficiently.
ITEM
The code selected for the study was ITEM (Integrated Theater Engagement Model), developed by SAIC, Inc. [58] and described more fully in Chapter VII. ITEM is an interactive, animated computer simulation of military operations in theater-level campaigns. It has fully integrated air, land, and naval (including amphibious) warfare modules and contains a strong emphasis on visualization. The inputs and outputs are driven through a Graphical User Interface (GUI): an example of this interface is shown in Figure 22. ITEM is fully object-oriented in design and execution and contains a hierarchically structured database.
Figure 22- The ITEM Graphical Interface and Environment (master interface process and WARVIEW graphical user interfaces)
ITEM was inserted as the modeling code into the current probabilistic methods. A representation of this method is shown in Figure 23.
Figure 23 – Applying Current Methodology Using ITEM (ITEM input parameters such as Blue weapon effectiveness, SAM site salvo size, and Red weapon probabilities of hit and kill are linked through Response Surface Methodology, examined via Pareto charts, prediction profiles, and dynamic contours, and combined with Monte Carlo simulation to produce probability distributions of Measures of Effectiveness such as total damage to Red SAM sites, total damage to Red airbases, and total number of Blue aircraft destroyed)
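The chain summarized in Figure 23 can be sketched in a few lines: assumed uniform distributions on two ITEM-style inputs are sampled, pushed through a notional fitted metamodel of one Measure of Effectiveness, and summarized as an empirical CDF. The metamodel coefficients below are illustrative placeholders, not the regression actually obtained from the ITEM runs.

```python
# Minimal sketch of "metamodel + Monte Carlo = probability distribution of an MoE".
# Input ranges echo the style of Table 3; the linear metamodel is a notional stand-in.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

pk_cruise = rng.uniform(0.65, 0.95, n)   # PK of cruise missile vs. SAM fire control
ph_sam2   = rng.uniform(0.55, 0.85, n)   # reliability (P(hit)) of SAM-2 missile

# Notional fitted metamodel for the expected number of Strike 1 aircraft destroyed
aircraft_destroyed = 0.30 - 0.25 * (pk_cruise - 0.80) + 0.15 * (ph_sam2 - 0.70)

samples = np.sort(aircraft_destroyed)
cdf = np.arange(1, n + 1) / n
print("P(fewer than 0.3 aircraft destroyed):", np.interp(0.3, samples, cdf))
```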
Case I: Theater Level Case
The first case to be investigated was a general theater level case described in the scenario below. This was the first attempt to directly apply the current probabilistic methods to a campaign level code. Inputs and outputs are described below, as well as initial results.
Scenario for Case I
The scenario, operational situation, and inputs and outputs were provided by the Johns Hopkins Applied Physics Laboratory (JHAPL). The test scenario chosen was a fictional conflict in which South Florida attacks North Florida with missiles and aircraft; the scenario and variable ranges were selected by JHAPL in order to facilitate an unclassified study. An Air Superiority Operational Situation was constructed, and is summarized in Figure 24. Two Blue aircraft were modeled, and are called Strike 1 and Strike 2. The variables that are available to model aircraft are shown in Figure 25, where the differences between the two aircraft are specified.
It is important to note how few
variables in ITEM are used to represent an aircraft; the disconnect between these and traditional design variables, such as those used in an aircraft synthesis code, is obvious. The Red Surface to Air Missile (SAM) sites are shown in Figure 26; their main goal is to protect the Red airbase.
Figure 24 - Florida Scenario with Air Superiority Operations Situation (Day 1: ship-launched cruise missile strikes on the four SAM sites protecting the Red airbase, with the objective of increasing the survivability of the two aircraft strikes on Day 2; Day 2: two aircraft strikes from the Blue airbases on the Red airbase protected by the four SAM sites, with the objective of rendering the airbase inoperational)
Figure 25 – Blue Aircraft Modeled for Scenario (Strike 1: mission duration 2.5 hrs, turnaround time 5 hrs, maximum speed 1600 kts, maximum range 4000 nm, relative detectability 0.4, maneuverability degradation variable, shelterable, maximum altitude 25,000 ft, standard conventional loads L1 and L2; Strike 2: mission duration 3 hrs, turnaround time 6 hrs, maximum speed 1500 kts, maximum range 4500 nm, relative detectability 0.25, maneuverability degradation variable, shelterable, maximum altitude 25,000 ft, standard conventional loads L1 and L2)
Figure 26 – Red SAM Site Detail (the four SAM sites, SAM 1 through SAM 4, surround the Red airbase; each site is characterized by its engagement radius and communications range)
Inputs and Outputs
The inputs to the scenario as provided by JHAPL are shown in Table 3. “PK” is the incremental probability of kill and “EK” represents the expected number of targets destroyed per hit.
The minimum and maximum values bound the ranges for the
variables, with the average of the two values being used as the midpoint, or baseline, value. The output Measures of Effectiveness are given in Table 4. Note that the number (percentage) of aircraft destroyed is the same as 1- (number of aircraft survived).
Table 3 - Inputs for Theater Level Scenario

Variable                                                      Minimum   Maximum
PK of Cruise Missile versus SAM Fire Control System           0.65      0.95
Maneuverability Degradation of Strike 1 Aircraft              20%       60%
Maneuverability Degradation of Strike 2 Aircraft              40%       80%
EK of Strike 1 AC / SCL 1 Loadout versus Airbase Runway       1         2
EK of Strike 1 AC / SCL 2 Loadout versus Aircraft in Open     2         4
EK of Strike 2 AC / SCL 2 Loadout versus Aircraft in Open     1.5       2.5
PK of SAM-1 Missile versus Strike 1 Aircraft                  0.1       0.3
PK of SAM-1 Missile versus Strike 2 Aircraft                  0.1       0.4
PK of SAM-2 Missile versus Strike 1 Aircraft                  0.1       0.2
PK of SAM-2 Missile versus Strike 2 Aircraft                  0.1       0.3
Reliability (i.e. P(hit)) of SAM-1 Missile                    0.55      0.85
Reliability (i.e. P(hit)) of SAM-2 Missile                    0.55      0.85
Table 4 - Outputs from Theater Scenario

Measures of Effectiveness
- Expected Number of Strike 1 Aircraft that are Destroyed
- Expected Number of Strike 1 Aircraft that Survive
- Expected Number of Strike 2 Aircraft that are Destroyed
- Expected Number of Strike 2 Aircraft that Survive
- Expected Number of SAM-2 Missiles Launched
- Expected Number of Runways Destroyed by Aircraft Strikes
- Expected Number of Aircraft in Open Destroyed by Aircraft Strikes
Results from Case I
Preliminary results are shown in the form of a 2-level DoE screening test. This is a first order fit between the input variables and the output variables, and a total of 129 cases were run. Pareto plots showing the magnitude of the contributions of the different input variables to the variability of each response are shown in Figure 27. For the expected number of aircraft destroyed for both Strike 1 and Strike 2, it can be seen that the Pk of the cruise missile against the SAM fire control system is the largest contributor. This confirms the Day 1 objective: increasing the Pk value of the cruise missiles does indeed increase the survivability of the strike aircraft. (It is important to remember that the Pareto plot shows the greatest contributors to the variability of the response. In Figure 27, the Pk of the cruise missile is shown to contribute the highest amount to the expected number of aircraft destroyed. This is not to say that an increase in the Pk value increases the number of aircraft destroyed. In fact, the opposite is true, as shown in
Figure 28.) Likewise, the Pk of the cruise missile has an overwhelming effect on the number of SAM2’s launched against the strike aircraft. SAM1 was represented in the model, but because of the geometry involved, was in effect a non-player. The variable was kept in as a sort of sanity check; if SAM1 was to show an effect, the model would have been unrealistic, identifying an error somewhere in the model. Similarly, some of the results are obvious: the Pk of Strike 2 aircraft against aircraft in the open does indeed have the highest influence on the expected number of aircraft in the open destroyed. Again, this shows the model and system are behaving as expected. Figure 28 is the prediction profile for the screening test. While the Pareto plot identifies the chief contributors to the response, the prediction profile identifies magnitude and direction of the impact of the input variables, as well as shows the simultaneous impact of the other variables. As noted in the Pareto plots, the prediction profile shows that increasing the Pk of the cruise missile does indeed increase the survivability of both strike aircraft. In addition, the number of SAM2’s launched is decreased. There is little change in the number of airbase runways destroyed or the number of aircraft in the open destroyed, which makes intuitive sense. A correlation is also seen between the degradation of the maneuverability of the strike aircraft and the number of these aircraft destroyed. Other results are checked for intuitive correctness.
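The screening and ranking just described can be sketched in a few lines: a two-level full-factorial design over a handful of coded factors is run through a stand-in model, and the absolute main effects are ranked the way a Pareto plot would rank them. The factor names and coefficients are illustrative; the actual study used the twelve inputs of Table 3 and a 129-case design.

```python
# Minimal sketch of a two-level screening step and Pareto-style ranking of
# main-effect contributions. The stand-in model and factors are assumptions.
import numpy as np
from itertools import product

factors = ["Pk_cruise_missile", "Maneuv_degrad_strike1", "Ph_SAM2"]
design = np.array(list(product([-1.0, 1.0], repeat=len(factors))))  # coded 2-level design

def stand_in_model(row):
    pk, man, ph = row
    return 0.3 - 0.2 * pk + 0.08 * man + 0.05 * ph   # notional "aircraft destroyed"

y = np.apply_along_axis(stand_in_model, 1, design)

# Main effect = (mean response at +1) - (mean response at -1); share of |effects|
effects = np.array([y[design[:, i] > 0].mean() - y[design[:, i] < 0].mean()
                    for i in range(len(factors))])
share = np.abs(effects) / np.abs(effects).sum()
for name, s in sorted(zip(factors, share), key=lambda t: -t[1]):
    print(f"{name}: {100 * s:.1f}% of total absolute main effect")
```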
Figure 27 - Pareto Plots for Theater Scenario Screening Test (one plot for each response: expected number of Strike 1 aircraft destroyed, expected number of Strike 2 aircraft destroyed, expected number of SAM2 missiles launched, expected number of aircraft in the open destroyed, and expected number of runways destroyed; each plot ranks the contributions of the twelve input variables to the variability of that response)
Figure 28 - Prediction Profile for Theater Scenario Screening Test (responses: Strike 1 and Strike 2 aircraft destroyed and survived, SAM2 missiles launched, runways destroyed by aircraft strikes, and aircraft in the open destroyed by aircraft strikes, plotted against the twelve input variables over their ranges)
Case II: Survivability Test Case
For the second test case, a survivability assessment was chosen. This was in anticipation of the final application selected to demonstrate the proposed methodology: apply survivability enhancement technologies at the engineering (vehicle) level, yet assess their effectiveness at the campaign level.
Scenario for Case II
Because aircraft were selected as the baselines, an unclassified notional air superiority scenario was chosen. The scenario needed to be simple enough to be able to clearly identify results of tradeoffs (especially since this first case is a proof of concept)
yet complex enough to capture important interactions between variables.
An
extrapolation of the basic scenario used in the above section was chosen. This scenario models a situation in which South Florida attacks North Florida and is shown in Figure 29. Two airbases are established in South Florida and the notional aircraft stationed there. The target is an airbase in North Florida that is protected by four SAM sites. The SAM sites are distributed around the airbase.
Each site has identical features, including the same number of launch rails, reload time, antenna height, and engagement radius. However, two different kinds of weapons are defined that differ in terms of range, speed, and altitude (Figure 30). It must be noted that some of the values, such as SAM speed, can be considered unrealistic. These numbers were provided as proof-of-concept numbers in order to maintain the unclassified nature of the study. The results, therefore, should be considered quantifications of trends rather than absolute figures. Two of the sites have one type of weapon and the other two have the second type.
Figure 29 - Florida Scenario used in Theater Level Survivability Study (showing ships and Blue airbases)
Weapon   Min Range (nm)   Max Range (nm)   Speed (kts)   Min Alt (ft)   Max Alt (ft)   Rel. Detectability   Ph    Engagement Time (min)
RSAM-1   2                20               600           2000           18000          0.5                  0.9   1.5
RSAM-2   1                15               800           1000           14000          0.5                  0.9   0.5

Figure 30 - SAM Site Weapon Comparison for Theater Level Survivability Study
Features of the South Florida (Blue) airbases were not crucial to the study. The two notional aircraft were assigned to each Blue airbase and a surplus of weapons was available. The North Florida (Red) airbase featured runways, shelters, revetments, and aircraft in the open. These aircraft did not mobilize to defend the airbase, but were rather targets. The only defense of the Red airbase came from the surrounding SAM sites. The air campaign consisted of hourly attacks on the Red airbase for a total of 8 hours. Blue airbases alternated attacks, so each airbase launched a total of four strikes, consisting of five aircraft per strike mission. Each aircraft was assigned one of five targets: a SAM fire control, a SAM radar, a runway, a shelter, or aircraft in the open. Flight paths were straight line from airbase to airbase. SAM sites and Red airbase features were allowed a variable repair rate. There was no repair rate for Blue aircraft. SAM sites defended automatically when detecting incoming aircraft.
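For scripted batch runs, the air campaign plan itself must be encoded in some form; the sketch below shows one notional way to represent the Case II plan (hourly attacks for 8 hours, alternating Blue airbases, five aircraft per strike, one aircraft per target type). This is a hypothetical data structure for illustration only, not ITEM's input format; the airbase names are placeholders.

```python
# Notional encoding of the Case II air campaign plan (hypothetical structure,
# not ITEM's actual input format).
from dataclasses import dataclass

@dataclass
class StrikeMission:
    hour: int          # hour of the attack within the 8-hour campaign
    airbase: str       # launching Blue airbase (placeholder names)
    aircraft: int      # aircraft per strike mission
    targets: tuple     # one assigned target per aircraft

TARGETS = ("SAM fire control", "SAM radar", "runway", "shelter", "aircraft in the open")

# Hourly attacks for 8 hours, Blue airbases alternating, five aircraft per strike
campaign_plan = [
    StrikeMission(hour=h, airbase=("Blue-1" if h % 2 == 1 else "Blue-2"),
                  aircraft=5, targets=TARGETS)
    for h in range(1, 9)
]

for mission in campaign_plan:
    print(mission.hour, mission.airbase, mission.targets)
```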
Inputs and Outputs
Input choices were limited to those available in the campaign code.
Those
variables most closely modeling susceptibility/survivability concepts were chosen. For aircraft, the variables selected were detectability, maneuverability, and turnaround time. The detectability variable models the relative detectability of the aircraft, and is a scaling factor used to reduce the probability of detection of opposing forces against the aircraft. It is directly analogous to radar cross section (RCS), and will be discussed in more detail later in this chapter. Likewise, the maneuverability variable measures the aircraft’s
ability to evade defensive systems and is a scaling factor used to reduce the probability of hit of engaging weapons against the aircraft [58]. Several threat variables were chosen. The engagement range of SAM sites was allowed to vary, but all four sites varied at the same time. Additionally, repair rates for the SAM fire control and radar systems, as well as airbase runways, shelters and aircraft in the open were selected as variables of interest. Table 5 summarizes the variables and their ranges.
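The role of the detectability and maneuverability scaling factors can be illustrated with a generic single-shot kill-chain relation, sketched below. The chained-probability form and the numerical values are illustrative assumptions for intuition only; they are not ITEM's internal equations.

```python
# Minimal sketch: detectability scales the threat's probability of detection,
# maneuverability scales the threat's probability of hit (generic kill chain,
# not ITEM's internal model; all numbers are assumed).
def prob_killed_one_shot(p_detect, p_hit, p_kill, detectability, maneuverability):
    """P(aircraft killed by one SAM shot) with multiplicative scaling factors."""
    return (p_detect * detectability) * (p_hit * maneuverability) * p_kill

baseline = prob_killed_one_shot(0.9, 0.9, 0.3, detectability=0.5, maneuverability=0.7)
improved = prob_killed_one_shot(0.9, 0.9, 0.3, detectability=0.3, maneuverability=0.5)
print("P(killed), baseline aircraft:", round(baseline, 4))
print("P(killed), improved aircraft:", round(improved, 4))
```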
Table 5 - Input Variables for Theater Level Survivability Study

Variable                                Low     Baseline   High
Detectability Strike 1                  0.7     0.5        0.3
Detectability Strike 2                  1       0.8        0.6
Maneuverability Strike 1                0.6     0.4        0.2
Maneuverability Strike 2                0.9     0.7        0.5
Turnaround Time Strike 1 (hrs)          0.25    0.5        0.75
Turnaround Time Strike 2 (hrs)          0.75    1          1.25
Track Range SAM Site (nm)               10      20         30
Repair Rate SAM Fire Control (hr)       0.1     0.2        0.3
Repair Rate SAM Radar (hr)              0.1     0.2        0.3
Repair Rate Runways (hr)                0.1     0.2        0.3
Repair Rate Shelters (hr)               0.1     0.2        0.3
Repair Rate Aircraft in the Open (hr)   0.1     0.2        0.3
Appropriate output Measures of Effectiveness were chosen to illustrate the effect of the changing variables. The surviving percentage of each aircraft was tracked, as well
as the number of each type of SAM weapon fired. Finally, the number of runways and aircraft in the open destroyed was tabulated. These outputs are shown in Table 6.
Table 6 - Output Variables for Theater Level Survivability Study

Measures of Effectiveness
- Percentage of Strike 1 Aircraft Survived
- Percentage of Strike 2 Aircraft Survived
- Number of SAM-1 Weapons Fired
- Number of SAM-2 Weapons Fired
- Number of Runways Destroyed
- Number of Aircraft in the Open Destroyed
Results from Case II
Figure 31 shows the prediction profile for the survivability test case. The first thing to notice is that none of the repair rates for the airbase components have a significant effect on the responses. This is shown by a relatively flat line. There is a slight slope on the repair rate for runways when compared against the number of runways destroyed, showing a small effect: increasing the runway repair rate decreases the number of runways destroyed.
If the variable ranges on the repair rates had been
increased, or the air campaign plan decreased, the repair rate could have had more impact. The same result is seen with turnaround time for the two aircraft. The range around turnaround time was just too small to have an impact, given the air campaign plan. Track range for the SAM sites shows considerable effect on the responses. As the track range increases, more weapons are fired, and more aircraft are killed. This is
logical and intuitive. In addition, the destruction of the airbase components decreases. Note the deflection of the runway response to track range at the upper end of its range; this indicates that the rate of runway destruction begins to increase toward the end of the air campaign, which could indicate that damage to the SAM sites affects their ability to protect the airbase components. Interestingly, more SAM-2 weapons are fired toward the end of the campaign, while the number of SAM-1 weapons fired appears to decrease slightly.
Maneuverability can be seen to have a slight effect on the number of airbase components destroyed, with Strike 2’s maneuverability playing a slightly more significant role. Because maneuverability primarily affects the interaction between aircraft and SAM site, this result is somewhat intuitive: once the aircraft has penetrated the defenses successfully, maneuverability ceases to affect the actual kill. It is interesting to note, however, that increasing the maneuverability of Strike 2 (the slower, higher load-carrying aircraft) has a more significant impact than increasing that of Strike 1. Increasing the maneuverability of either aircraft does have a direct impact on the survivability of that aircraft.
Detectability shows less promise. For Strike 2, increasing the detectability has a negligible effect on the airbase components destroyed, and no discernible effect on its survivability. For Strike 1, increasing its detectability does have a small impact on the number of airbase components destroyed and a small, interesting impact on its own survivability.
Overall, it was discovered that the air campaign plan had a significant effect on the quality of the results. Force mix was tried as a variable and was found to
depend too heavily on the specific order of air strikes.
This points to significant
interactions in the code that need to be explored more fully. For a very simple air campaign, it was shown that increasing the percentage of Strike 1 aircraft in the overall number of aircraft did increase that aircraft’s survivability.
However, this increase
decreased the survivability of the Strike 2 aircraft, and at a more significant rate.
[Figure 31 plots each response (% Strike 1 Survived, % Strike 2 Survived, SAM-1 Weapons Fired, SAM-2 Weapons Fired, Runways Destroyed, Aircraft in Open Destroyed) against each input variable (detectability, maneuverability, and turnaround time for Strikes 1 and 2; SAM site track range; and repair rates for SAM fire control, SAM radar, runways, shelters, and aircraft in the open).]
Figure 31 - Prediction Profile for Theater Level Survivability Study
Summary of Preliminary Investigation
Overall, it was shown that, in general, statistical methods could be applied directly to a campaign level code.
Most inputs to the code could be varied
appropriately, and metamodels could be formed relating the Measures of Effectiveness to those inputs. There were several issues, however, that were identified that could limit the usefulness of simply applying the current methods to the campaign level code. These issues, and their proposed solutions, are discussed more completely in the next chapter.
CHAPTER V
RESULTING ISSUES AND SOLUTIONS TO PRELIMINARY INVESTIGATION
The previous chapter discussed the application of existing statistical methods directly and only to a campaign level code.
Although some specific results were
discussed in the previous chapter, the current chapter will focus on issues that were identified that could possibly limit the usefulness of a direct application of these probabilistic methods.
Solutions to these issues will be proposed, and a new
methodology will be formulated based upon these solutions.
Identification of Three Primary Issues
Level of Detail
The first key observation made when applying the current probabilistic methods to ITEM was that there were insufficient variables at the campaign level to model the aircraft in the detail that the analysis warranted. Kenneth Musselman, in a panel discussion at the 1983 Winter Simulation Conference, noted, “Aggregated measures are used to draw conclusions about system performance, while the detailed dynamics of the system go virtually unnoticed…A decision based solely on summary performance could lead to unacceptable results…It makes practical sense for us to learn how to properly amplify these details and to incorporate them into the evaluation process.” This is an illustration of a larger issue first discussed when describing the
military code continuum (Figure 21). Because of the increased complexity necessary in a campaign level code, a sacrifice is made in code detail. It would be impractical in terms of code development and run time to model each and every entity and interaction in the campaign code to an engineering level. Yet, how can one accurately assess the impact of, say, a technology applied at the engineering level, if the technology is not modeled in sufficient detail to have an impact? This is currently a prominent issue in military campaign analysis [46,56,57,59,60].
Model Integration
With the military code continuum fresh in mind, the most obvious solution might be to simply link detailed codes together. Recent efforts have concentrated on this area, called “model integration” [46,57,59]. This concept involves replacing a coarse model of an entity or technology with a more detailed model. In other words, it is moving a model or a part of a model “up” the continuum, from the engineering level to the campaign level. The result is a mixed fidelity simulation, and care must be taken that all of the links and hooks between the codes are complete and compatible. The drawbacks to model integration are similar to those associated with increasing the detail in a campaign analysis itself. If too many entities are modeled in too much detail, run time suffers dramatically. Also, creating the links between the models can be costly in terms of time, and validation must occur to ensure that the detailed model is indeed modeling the same thing as the coarse model.
Zooming
Model integration is in reality nothing more than the time-honored method called zooming. Zooming can be defined to be “the act of expanding the view of a specific area of interest [56].” A software zoom, therefore, can be described as varying the fidelity of a model, or parts of it, in order to gain a more focused examination. An example of zooming as applied to the current probabilistic methods discussed in Chapter II is shown in Figure 32 [61]. In this example, an economic modeling code was used; it shows the direct operating costs and dollars per revenue passenger mile as functions of RDTE (research, development, test, evaluation) cost, production cost, and operating and support costs. For analysis purposes, it was desired to have a software zoom of the RDTE cost. This is shown in the bottom of the figure, where RDTE becomes a function of nine other economic variables. The original metrics, thus, are no longer just functions of the initial three economic variables, but also of the zoomed RDTE variable, with its nine components. The conceptual model, as the first step in the analysis procedure, is used to identify which entities in the analysis are viable candidates for model integration and zooming. Overall, these techniques are used to amplify the effects of entities of interest and to provide insight into their behavior.
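As a minimal sketch of such a software zoom (all function names and coefficients below are illustrative assumptions, not values from the referenced study), a single aggregate RDTE cost factor can be replaced by a function of several lower-level cost variables, so that the top-level metric responds to both.

# Hypothetical sketch of a software zoom: the aggregate k_RDTE factor is
# replaced ("zoomed") by a function of lower-level cost variables, so the
# top-level metric becomes a function of both coarse and detailed inputs.
# All names and coefficients are illustrative only.

def k_rdte_zoomed(tooling, hardware_design, software_design, wind_tunnel_hrs):
    """Detailed (zoomed) model of the RDTE cost factor."""
    return 0.3 * tooling + 0.4 * hardware_design + 0.2 * software_design + 0.1 * wind_tunnel_hrs

def direct_operating_cost(k_rdte, k_production, k_os):
    """Coarse top-level metric as a function of three cost factors."""
    return 1.0 + 0.05 * k_rdte + 0.08 * k_production + 0.12 * k_os

# Without zooming: RDTE treated as a single dial.
doc_coarse = direct_operating_cost(k_rdte=0.1, k_production=0.0, k_os=0.0)

# With zooming: the same metric now responds to the detailed RDTE variables.
doc_zoomed = direct_operating_cost(
    k_rdte=k_rdte_zoomed(tooling=0.2, hardware_design=0.1,
                         software_design=0.0, wind_tunnel_hrs=0.05),
    k_production=0.0, k_os=0.0)

print(doc_coarse, doc_zoomed)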
[Figure 32 shows a prediction profile of DOC, DOC+I, and $/RPM versus k_RDTE Cost, k_Production Cost, and k_O&S Cost, with k_RDTE zoomed into its component cost variables (e.g., tooling, hardware and software design, hardware/software integration, iron bird, and wind tunnel hours).]
Figure 32 – A Zooming Approach: Breakdown of RDTE
Model Abstraction
A new approach to solve this level of detail problem is called “model abstraction”. This approach does not involve linking detailed codes together with the aggregate model, but rather capturing the essence of the codes and linking that essence to the aggregate model [57]. This kind of formulation could improve model accuracy at a reduced execution time. Indeed, as stated in Reference 57, “the goal of model abstraction is to sufficiently reduce the complexity of a model, without suffering (too great) a loss in accuracy.” Although a comprehensive taxonomy of model abstraction techniques can be found in References 62 and 63, in general these techniques can be classified into three broad categories: model boundary modification, model behavior modification, and model
form modification.
The first, model boundary modification, involves relaxing the
boundaries placed on input variables, sacrificing some accuracy and detail for decreased complexity and run time. Given an aircraft aerodynamics analysis code, an example of this would be reducing the number of analysis points along the surface of the wing. The second model abstraction technique is model behavior modification. This technique would involve fixing input variables that have lesser impact on the final analysis metrics to a median or baseline value. By fixing these variables, complexity is again reduced. Finally, model form modification involves a full replacement of the model with some sort of surrogate. This could be in the form of a table lookup or a metamodel. The probabilistic methods described in Chapter II themselves involve, at their heart, model abstraction techniques. Model behavior modification is employed through the use of screening tests, which identify which input variables contribute the least to the variability of the response. Metamodels are used in the true model abstraction ideal: they are used to reduce the run time and complexity of the model in order to facilitate the incorporation of the probabilistic techniques.
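As a rough sketch of model form modification (the function names and quadratic form below are illustrative assumptions, not any of the dissertation's actual models), an expensive analysis routine can be replaced by a polynomial metamodel fit to a handful of its evaluations and then used in its place:

# Illustrative sketch of model form modification: replace a (stand-in)
# "expensive" analysis with a quadratic surrogate fit to a few samples.
import numpy as np

def expensive_analysis(x):
    """Stand-in for a detailed engineering code (assumed, for illustration)."""
    return 1.0 + 2.0 * x - 0.5 * x**2

# Sample the detailed model at a few design points.
x_samples = np.linspace(0.0, 2.0, 5)
y_samples = np.array([expensive_analysis(x) for x in x_samples])

# Fit a second-order polynomial metamodel (the surrogate).
coeffs = np.polyfit(x_samples, y_samples, deg=2)
surrogate = np.poly1d(coeffs)

# The surrogate is now used in place of the detailed model.
x_new = 1.3
print(expensive_analysis(x_new), surrogate(x_new))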
Human in the Loop Dilemma
The second issue identified in the preliminary investigation is called the “Human in the Loop Dilemma”. A major difference between engineering level and campaign level analysis codes is how the user interacts with the code. In a traditional vehicle sizing code, for example, the user will supply a set of inputs and the code will iterate on a sizing scheme to converge the vehicle according to the laws of physics and empirical
relationships. In ITEM and other similar theater codes, however, the user becomes an integral part of the analysis process. This means that the user periodically evaluates the effect of her/his decisions and can then change the parameters (either from that point or change initial input parameters and rerun the simulation) to provide improved results. ITEM was specifically designed to incorporate the use of human judgement to make strategic decisions based on the state of the forces at any given time. Figure 33 shows a typical analysis scheme for using a theater level code that includes the human interaction.
[Figure 33 flowchart: Define Campaigns → Set Up Forces for Single Campaign → Schedule Events for Time Step → Run Events for Time Step → Satisfactory Results? (if no, Modify Events for Time Step and rerun; if yes, Campaign Done?) → when done, Analyze Data from Campaigns (ITEM).]
Figure 33 - Flowchart for Decision-Making for ITEM [58]
The major advantage of a human in the loop model is that it most closely mimics reality. Actual wargamers are employed, and the decisions that they make tend to be realistic and the resulting analysis is useful. However, the logistics of running such a model can be daunting. It often takes weeks to prepare to run a single simulation. The model databases must be properly loaded, the gamers thoroughly briefed, and a proper simulation facility, with all of the appropriate software and hardware, must exist and be operable. If there are any flaws in any of these “systems”, the gaming is delayed. In addition, the model itself is often run in real time, and the analysis must wait until the
results have been compiled, downloaded, and presented in a format compatible for analysis. The analysis of a single scenario, thus, could take days, weeks, or even months to complete, start to finish. A full human in the loop model is the epitome of complexity. The alternative to having the human in the loop is to use some sort of embedded rules (expert systems) to make decisions. There are some theater level codes that do this. The key drawback is that the rules have an inherent lack of flexibility to simulate real operational plans. In addition, these rules lack transparency in assessing cause and effect relationships. This drawback is illustrated in the following example. Say that an embedded rule system is used to model the decisions made in a particular scenario. The results are summarized as follows: “The analysis shows that there is an 85% probability that this scenario (with its inputs) results in the loss of two aircraft carriers in the first four days of the event.” What is wrong with this statement? In the real world, losing two aircraft carriers is so completely unacceptable that, after the loss of the first carrier, the decisions (inputs) would be changed in order to ensure that a second carrier would not be lost. With embedded rules, unrealistic results such as these could be modeled and decisions could be based upon them. The ideal model situation, therefore, is one in which the flexibility and capability of the human gamer/analyst is kept intact, yet the model itself has a minimum of complexity and realistic runtimes.
Scenario Significance
The final issue identified by the preliminary investigation was the impact of the threat environment itself. It was found that the campaign scenario had a significant effect on the outputs. Usually when conducting a campaign analysis, a particular scenario is specified and used for the remainder of the analysis. Given a particular scenario, it can be fairly straightforward to optimize a technological and tactical solution that applies only to that scenario. However, as discussed in the introduction, today’s world is full of rapidly changing threats and technologies. An optimized solution for one particular scenario may differ dramatically from the solution to a subtly different scenario. This lack of robustness can have significant implications for today’s decision makers, who need to consider a wide variety of ever changing threats and technological advances. Thus, the proposed methodology must be able to take into account a wide variety of changing threat environments and conduct analysis accordingly.
Proposed Solutions
The previous section identified three primary issues resulting from the preliminary investigation of applying the existing probabilistic methodology directly to a campaign level code. These issues pose the challenge, and their solutions will become the backbone of the formulated system of systems effectiveness methodology. Two key framework concepts are proposed that result in an analysis environment that is both fully linked (in a combination of model abstraction and integration) and fully probabilistic in nature.
“Abstragration”: A Linked Analysis Environment
Although the word is rather awkward, “abstragration” does imply the literal combining of both the model abstraction and the model integration philosophies. It is proposed that just such a melding must occur, in the form of a linked analysis environment, in order to address some of the issues raised in this and previous chapters. Returning to the first chapter, it must be remembered that the primary goal of this research was to develop a methodology that could rapidly assess system effectiveness and technology tradeoffs for today’s complex systems, in order to aid decision makers. It was further postulated that this methodology must rely on a “system of systems” formulation, which would include some way of mapping vehicle level design variables and technologies to campaign (system) level Measures of Effectiveness. From this, it is clear that a modeling/analysis environment must be created that links together models from the engineering level, through the mission level, to the campaign level. Chapter III outlined the basic hierarchical structure of the traditional pyramid of military models and discussed why a military code continuum was a better formulation of the hierarchy. Attention then turned to the problem of how to link these models together in the creation of just such an environment. As confirmed in the preliminary investigation of Chapter IV, the varying levels of detail and complexity from one end of the continuum to the other was a significant issue. The concepts of model integration (varying fidelity) and model abstraction (replacing codes with their “essence” to reduce complexity) must both be applied in order to create an appropriate environment. The
linked analysis “abstragration” environment will be created using the following tools and concepts.
Creation of the Conceptual Analysis Model
While this step will be included as a first step in the overall methodology, its completion is crucial before attempting to create the linked analysis environment. A conceptual model was defined in Chapter III to be an idea, algorithm, or “game plan” for the analysis. Thus, the first thing an analyst or decision maker needs to do is to clearly define the problem under investigation, understand and note which variables and technologies will be studied, and which metrics of effectiveness will provide the most appropriate information. This process will identify which models will be needed, where they are in the military code continuum, and to what detail each entity needs to be modeled. An example is now provided of a “first cut” at a conceptual model. Suppose the problem under consideration is to gain insight into the effect of a new aircraft radar. Let’s start at one end of the military code continuum. An engineering code that models the physics of the radar would seem like a good and necessary tool to use. However, just by itself, this tool could only give performance data of the radar. But in order to really assess the effect of this new radar system, it needs to be placed in its correct context: the radar needs to be assigned to a platform, and that platform needs to be assessed as a component in the larger system, the warfighting environment. So by itself, this code does not provide the necessary information, even though the level of detail is superb. Moving to the other end of the continuum we may be tempted to start with a full blown campaign
analysis code. This would give us information (metrics) at the needed system level. However, such a code will be so “top level” that any inputs for aircraft radar would be limited to one or two variables at most, if any inputs exist at all. What is needed, therefore, is a link between the two extremes. The detail of the engineering code is needed, yet the data needs to be assessed at the system (campaign) level. It should be noted that a direct link between the two extremes is not practical, and indeed violates the military code continuum. There needs to be an intermediate code at the mission level. The radar needs to be placed on an aircraft and that aircraft needs to be placed into a one vs. one or few vs. few situation in order to assess this new system’s performance. This data is then passed on to the campaign code. Because there is a clear analysis path from the campaign code all the way back to the radar code, transparency is enhanced and a proper assessment may be conducted. This example of a conceptual model does not include the definitions of the specific inputs and outputs, but the concept is clear.
Applying “Abstragration”
In some cases an appropriate analysis environment may be created by simply identifying and linking codes within the military code continuum. This would be an example of pure model integration. More often, however, this could be impractical for a variety of reasons, such as increased run time. In that case, the concept of model abstraction is brought into play and applied to the conceptual model. The screening test method of Chapter II could be employed to identify key variables, and parts or all of various models could be replaced with the “essence” of those entities. Because the new methodology is an extrapolation of those methods discussed in Chapter II, these
abstractions would be in the form of metamodels. Zooming techniques are applied to those areas identified in the conceptual model as needing a more detailed analysis, or to enhance transparency in cause-and-effect relationships.
Full Probabilistic Environment
The linked analysis environment created as a result of the conceptual model defines the codes and models to be used, as well as how they are to be linked together and which of the models may be “replaced” by metamodels. This completes the first framework concept used to address the issues identified in the preliminary investigation. The second framework concept is the inclusion of a full probabilistic environment. The original probabilistic methods described in Chapter II were developed to be applied to engineering level codes, specifically, aerospace vehicles. This was done in order to understand the cause and effect relationships between design variables and system responses.
The methods were then enhanced to allow the incorporation of
uncertainty and risk into design decisions as well as assess the impact of new technologies. The first part of the full probabilistic analysis environment will retain these capabilities at the engineering level.
Distributions will be assigned to key design
variables and the resulting performance metrics will be in the form of cumulative probability distributions. This first element of the full probabilistic environment will retain all of the key characteristics described in Chapter II and will be implemented in a similar fashion.
The second element of the full probabilistic environment is developed in response to the Human in the Loop problem. The primary characteristic of having a human as an integral part of the modeling and analysis process is the uncertainty that the human brings. Decisions and assumptions are made based on human experience, knowledge, and judgement. Yet even though these things can be mimicked by a rule-based process, the human being still retains the flexibility and creativity to throw in an unanticipated decision. The ideal modeling environment will seek a compromise: retain the experience, knowledge, and judgement of the human while also allowing for flexibility and uncertainty that will not limit the solution. This will be accomplished in the proposed methodology by implementing a two step process: identify the key decision nodes in the analysis, and place probability distributions around the considered paths.
Tree Diagrams/Decision Trees
The tool that will be used to identify the key decision points in the analysis will be the tree diagram/decision tree concept. A tree diagram is an organized list that shows all possible outcomes of an event. Reference 64 defines it as a “tool that systematically maps out in increasing detail the full range of paths and tasks that need to be accomplished in order to achieve a primary goal and every related subgoal”. A typical graphical representation of a tree diagram is shown in Figure 34. A decision tree is an extrapolation of the basic tree concept, and represents a decision process [65]. The nodes represent points in time where a decision can be made, and the branches emanating from the node represent the specific decisions. Often the probability of an event occurring can be associated with (written beneath) the corresponding
branch. The most common way that decision trees are used is to determine an optimal decision for a complicated process. This is achieved by moving sequentially backwards from the terminal nodes in the network, and calculating the expected gains at the intermediate nodes. The “best” decision is the one with the maximum expected gain. In addition, decision trees are commonly used to determine the rules for rule-based algorithms.
In this sense, they are almost to be considered deterministic, and the
resulting rules are most definitely deterministic in nature.
Figure 34- Notional Example of Decision Tree Diagram
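As a minimal sketch of the idea (all payoffs and probability values below are hypothetical), a decision node can be evaluated by Monte Carlo when its branch probability is drawn from a distribution rather than fixed at a single value, which is the extension proposed here:

# Hypothetical sketch: a two-branch decision node whose branch probability is
# drawn from a distribution (here a triangular distribution) rather than being
# a single fixed value, as in a traditional decision tree.
import random

def expected_outcome(p_branch_a, payoff_a=10.0, payoff_b=4.0):
    """Expected value at a decision node with two terminal branches."""
    return p_branch_a * payoff_a + (1.0 - p_branch_a) * payoff_b

# Traditional decision tree: single probability assigned to the branch.
deterministic = expected_outcome(p_branch_a=0.5)

# Proposed approach: the branch probability itself is uncertain.
random.seed(1)
samples = [expected_outcome(random.triangular(0.3, 0.7, 0.5)) for _ in range(5000)]
mean = sum(samples) / len(samples)

print(deterministic, mean)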
The proposed methodology will use the tree diagram/decision tree process to chart and identify key decision nodes in the analysis process. Typically, these would be those decision points in the scenario that have the most impact on the Measures of Effectiveness. Preliminary analysis could take advantage of screening test processes to refine the nodes. Once identified, analyst experience and knowledge is used to assign
probability distributions to each decision branch. This differs from traditional decision tree analysis in that a distribution will be assigned to the branch, not a single probability. These distributions will then be carried through the subsequent analysis.
The final element of the full probabilistic environment addresses the issue of the changing threat environment.
As stated earlier, it can be fairly straightforward to
optimize a solution given a particular specified threat. Yet the ideal solution will always be the most robust solution. In order to obtain the most robust solution, a probabilistic threat environment will be used, in which threat parameters will be allowed to vary probabilistically.
This will aid in assessing the sensitivity of the Measures of
Effectiveness to those threat parameters. The three basic elements involved in the probabilistic threat environment are shown in Figure 35.
[Figure 35 shows three probabilistic input elements (design variables covering geometry, technology, and requirements; decision nodes; and threats) feeding through the mission level scenario, which maps vehicle measures of performance to effectiveness values, into the system Measures of Effectiveness.]
Figure 35 – Proposed Full Probabilistic Environment
Impact Dials
The use of the full probabilistic environment allows the creation of an analysis environment that is analogous to the technology scenarios discussed as part of the Technology Impact Forecasting techniques of Chapter II. The previous section discussed the concept of having probabilistic inputs at various locations in the analysis continuum (the engineering level, the decision nodes, and the threat inputs). To capture the effects of these probabilistic inputs, metamodels that relate the outputs at different system levels to these varying inputs will be created.
These metamodels can then be combined,
independent of the modeling environment in which they were created, into a new analysis environment. Because the metamodels are portable and have very quick execution times, they can be ported into, say, a spreadsheet format. The analyst will then have at their disposal a tool that can be manipulated rapidly, accurately, and efficiently. Because the metamodels were created using variable inputs, the spreadsheet analysis environment would allow the analyst the ability to vary all of the inputs (within the ranges to which the metamodels are valid) and assess their impact on the system outputs. In effect, the environment will allow the analyst to manipulate “impact dials”, similar to the k-factors discussed earlier.
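A minimal sketch of such an "impact dial" is given below, assuming a simple illustrative response surface rather than any metamodel actually generated in this work; because the metamodel is just a polynomial, it can be re-evaluated instantly as the analyst turns the input dials.

# Illustrative "impact dial": a portable response-surface metamodel that can be
# re-evaluated instantly as the analyst changes inputs within their valid ranges.
# The coefficients and variable names are assumptions for illustration only.

def moe_metamodel(detectability, maneuverability, track_range_nm):
    """Quadratic response surface for a notional Measure of Effectiveness."""
    return (0.80
            - 0.10 * detectability
            + 0.05 * maneuverability
            - 0.002 * track_range_nm
            + 0.03 * detectability * maneuverability)

# Turning the dials: baseline setting vs. an excursion.
baseline = moe_metamodel(detectability=0.5, maneuverability=0.4, track_range_nm=20)
excursion = moe_metamodel(detectability=0.3, maneuverability=0.4, track_range_nm=20)
print(baseline, excursion)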
Summary of Solutions
Three key issues were identified during the preliminary investigation discussed in Chapter IV: the level of detail problem, the human in the loop, and the significance of the scenario. Solutions to each of these issues were proposed and were
often based on extrapolations of existing probabilistic techniques. Figure 36 summarizes the solutions and the techniques that will be applied to each issue. The next chapter discusses the formal methodology for bringing all of these concepts together into a cohesive framework.
Issue                     Solution
Level of Detail           Linked Analysis Environment: “Abstragration”, Metamodels, Zooming
Human in the Loop         Full Probabilistic Environment: Tree Diagrams/Decision Trees, Distributions around key decision points, Impact Dials
Scenario Significance     Full Probabilistic Environment: Probabilistic Threat Environment, Impact Dials
Figure 36 – Summary of Issues and Proposed Solutions
CHAPTER VI
PROPOSAL OF NEW METHOD - POSSEM
Summary of Research
Before presenting the new methodology, it will be useful to stop and summarize the research up to this point. Chapter I provided the motivation for the research. It discussed how the changing world climate affects today’s military decision makers. These decision makers need a way to quantify system effectiveness for three primary purposes: resource allocation, requirements definitions, and trade studies between system components. This can be accomplished by utilizing the system of systems concept. Rather than designing/optimizing an aerospace vehicle to its own performance, the vehicle instead needs to be placed in its correct context, the warfighting environment, and optimized to the new system (theater) level Measures of Effectiveness. In order to do this, there must exist a continuous mapping from the vehicle level to the theater level. The creation of this mapping is a key component of the research. The second chapter reviewed current probabilistic methods that have been applied to the aerospace vehicle level. The impetus for the development of these methods was to incorporate uncertainty and risk into the preliminary design phase of the vehicles. The methods were further expanded to allow the incorporation and effect of new technologies to be analyzed, resulting in the Technology Impact Forecasting environment and the
concept of k-factors. Extrapolations of these methods will be used in the formulation of the new research methodology. Chapter III discussed the nature of military modeling. It defined different types of models and discussed the difference between a conceptual model and a computer model. Classification of military models was outlined, as well as some taxonomies in use today. Hierarchical modeling concepts led to the idea of the military code continuum, which classifies models into three levels: engineering, mission, and campaign.
It is this
continuum which became the foundation of the analysis environment used to formulate the system of systems effectiveness methodology. In Chapter IV, the current probabilistic methods of Chapter II were applied directly to a campaign level code. This was done in order to validate the appropriateness of applying probabilistic methods to a campaign level code, and to identify any issues which might arise. Three issues of concern were identified: the level of detail problem, the human in the loop issue, and the significance of the scenario (ever-changing threat environment and robust design). Chapter V offers solutions to the issues identified in Chapter IV. The concepts of model abstraction and model integration were discussed and combined into the awkward-sounding “abstragration”. The concept of zooming was explained, and together a linked analysis environment was proposed.
Further, a fully probabilistic environment was
envisioned, with probabilistic design variables, a probabilistic threat environment, and the identification, through the use of tree diagrams, of key decision nodes in the scenario, to which probability distributions would be applied.
The POSSEM Flowchart
All of the preceding concepts and ideas are now combined into one cohesive methodology. This chapter will discuss, in a general and intuitive fashion, the proposed methodology.
The actual details of the implementation will be more thoroughly
explained through the use of an example, which is the subject of Chapter VII. Called the PrObabilistic System of Systems Effectiveness Methodology, or POSSEM, the framework outlines a step by step process to assess the effectiveness of a complex military system. The entire framework is shown in Figure 37, and each component of the process will be discussed in detail.
Difference Between Analyst and Analysis Tool
At this point an important distinction needs to be made between the role of the analyst and the role of the analysis tool. The POSSEM framework is an analysis tool; it does not itself conduct the analysis. Rather, it provides a clear, concise path to follow to aid the analyst in their assessments. As discussed in Chapter III, too often today modeling tools are confused with the actual analysis process. Just because a tool has been created and validated does not mean that anyone who can operate the tool is automatically going to generate useful, correct, and pertinent analysis.
The tool can only be successfully
operated by someone who is thoroughly familiar with the problem and has some understanding of both the inputs and the outputs used by the tool. The important, implicit assumption in the POSSEM framework is that it is to be used by an appropriate analyst.
[Figure 37 depicts the POSSEM flowchart: (1) Create Conceptual Model, by answering the key questions (what problem are we trying to solve? what level of detail do we need? what tools are needed and available?) and establishing baselines, inputs/outputs, and the scenario; (2) Identify Key Decision Nodes using decision trees and assign probability distributions; (3) Create Linked Analysis Environment by choosing codes across the engineering, mission, and campaign levels and applying “abstragration” and zooming; (4) Create Full Probabilistic Environment using a Design of Experiments, distributions assigned to design variables, technologies, and threats, prediction profiles, and Monte Carlo simulation; (5) Analysis, linking engineering level variables through mission level Measures of Performance to campaign level Measures of Effectiveness via impact dials, in support of resource allocation, requirements definition, and trade-off studies between sub-system components.]
Figure 37 – The POSSEM Flowchart
Create the Conceptual Model
The first step in the POSSEM process is the most crucial: the creation of the conceptual model. As defined in Chapter III, the conceptual model is the “plan of attack” used towards solving a particular problem. It is the necessary up front work that the analyst needs to do before even considering running a single computer code.
The
conceptual model is a careful consideration of the problem at hand, and results in the identification of key elements that are subsequently used in POSSEM.
As part of
POSSEM, the conceptual model is created by answering three key questions: What problem are we trying to solve? What level of detail do we need? What tools are needed and available? The first question, what problem are we trying to solve?, serves to aid the analyst in identifying the basic goals of the analysis. Answers to this question provide information that aids in identifying what Measures of Effectiveness are needed, what input variables are appropriate, and, to some extent, what modeling tools may be necessary. A clear understanding of what the analysis goals are is crucial to a successful analysis. The next question, what level of detail is needed?, is an often overlooked element. Too many times the analyst will let the capability of the tools drive the analysis, rather than the other way around. The analyst needs to decide, before conducting any code executions, how good is good enough. What level of fidelity on the answer is needed? What basic assumptions can be made that simplify the problem without placing the analysis at risk? Which components need to be modeled in great detail and which can be
modeled more coarsely? The answers to this question will determine which types of codes and at what level in the continuum are needed. The final question, what tools are needed and available?, serves to recognize that as much as we would like to stay philosophically pure, analysts do sometimes have limitations on their available resources. A survey of appropriate modeling tools needs to be conducted, and the appropriate tools, at the appropriate level of detail, need to be selected. If an appropriate tool does not exist that meets the pure requirements of the analyst, a less suitable tool may be substituted. But this pre-analysis will allow the analyst to understand the limitations of their tool, and adjust their analysis accordingly. Once these three questions have been answered, the analyst will then have the resources and information to conduct the initial problem setup.
This involves
establishing the baseline vehicles and technologies, determining the specific inputs and outputs of the problem, and defining the scenario most suitable for the investigation of the problem.
But, as shown in Figure 38, the answers to the questions and the
establishment of the problem setup together form an iterative process. Tradeoffs must be conducted between the three questions and the resulting three areas of setup. For example, knowing what problem is to be solved keys directly into what level of detail is needed to solve that problem. The level of detail needed may or may not be driven by what tools are available. The scenario that is defined must include in its inputs and outputs those entities that are to be studied.
A solid conceptual model creates a solid foundation for subsequent analysis. It allows the analyst to more thoroughly understand the problem at hand, and provides crucial insight and information useful to the remainder of the analysis.
Figure 38 – Create Conceptual Model Step of POSSEM
Identify Key Decision Nodes
The next step of POSSEM is shown in Figure 39. This step works with the scenario defined during the creation of the conceptual model, and is used to help combat the human in the loop problem. The goal is to retain the flexibility and uncertainty of having a human involved in the decision and assumption making process, yet create an environment in which the computer codes may be run quickly and efficiently. To do this, the analyst conducts a pre-processing of the scenario/campaign.
Tree diagrams are
constructed and used to identify the key decision nodes. For a very complex scenario, the screening techniques of Chapter II may be employed to help identify which of the decision nodes contribute most to the variability of the response, and which can be set to their most likely value.
Once the decision nodes have been identified, the analyst uses their skill and experience to assign probabilities to each path. This completed environment will then be used as part of the full probabilistic environment.
Figure 39 – Identify Key Decision Nodes Step of POSSEM
Create Linked Analysis Environment
The creation of the modeling environment in which to conduct the analysis is the next step (Figure 40). Using information generated in the conceptual model, modeling codes are selected that, together, create an environment in which the questions posed in the conceptual model may be answered. During this step, the concepts of both model abstraction and model integration must be applied. Starting first with model integration, models are selected that form a continuous modeling path through the continuum, from the engineering level to the campaign level. Care must be taken to select the appropriate codes at the appropriate level of detail. Software zooming may be necessary to isolate and highlight a particular effect. Once the codes have been selected, the concept of model abstraction is applied. Those codes and areas that may be replaced
by metamodels will be chosen, in order to increase efficiency and reduce runtime, with an acceptable loss of fidelity. The final step in the creation of the linked analysis environment is to link the various codes together in a computing environment. This could take the form of scripts that take the outputs of one code and feed them into the next, or the creation of a graphical interface or shell that conducts all the necessary data transfer. This step is not to be considered trivial by any means, and the successful creation of a linked analysis environment is a major achievement in the process.
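A minimal sketch of such a linking script is shown below; the three functions and the data passed between levels are purely illustrative stand-ins for the actual engineering, mission, and campaign codes, not the tools used in this work.

# Illustrative linking script: outputs of one level become inputs to the next.
# The three functions are stand-ins for engineering, mission, and campaign
# level codes; names and coefficients are assumptions for illustration only.

def engineering_code(design_vars):
    """Stand-in vehicle synthesis code: design variables -> performance."""
    return {"detectability": 0.5 * design_vars["rcs_factor"],
            "maneuverability": 0.4 * design_vars["agility_factor"]}

def mission_code(performance):
    """Stand-in one-vs-one mission code: performance -> probability of survival."""
    return {"p_survive": 1.0 - 0.6 * performance["detectability"]
                              * (1.0 - performance["maneuverability"])}

def campaign_code(mission_results):
    """Stand-in campaign code: mission results -> theater level MoE."""
    return {"pct_force_surviving": 100.0 * mission_results["p_survive"] ** 8}

# Chain the levels together, engineering -> mission -> campaign.
moe = campaign_code(mission_code(engineering_code({"rcs_factor": 1.0,
                                                   "agility_factor": 1.0})))
print(moe)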
[Figure 40 depicts engineering codes (radar, receiver, weapon, aircraft and missile synthesis) linked by software links and software zooms through mission codes (one vs. one, many vs. many, force on force) to campaign (theater) codes, with codes chosen based on the conceptual model and combined through “abstragration” and zooming.]
Figure 40 – Create Linked Analysis Environment Step of POSSEM
Create Full Probabilistic Environment
Once the linked analysis environment has been created, it can be used to implement the full probabilistic environment of Figure 41. This involves applying the probabilistic methods described in Chapter II to the linked analysis environment. To this end, ranges are placed on the selected input variables, and a Design of Experiments is
conducted. Metamodels are created for those parts of the linked analysis environment identified in previous steps. Intermediate prediction profiles may be created at each juncture point for analysis purposes. Distributions are also placed around key threat variables, to model a changing threat environment.
These distributions are carried
throughout the entire analysis. Finally, code runs are conducted around the decision points in the scenario. The results of the code runs conducted in this step will be a series of linked metamodels. These metamodels are then imported into a spreadsheet environment for final analysis.
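The sketch below illustrates, under purely assumed variable names, ranges, and a stand-in response function, the pattern this step follows: run a small designed experiment on the linked environment, fit a polynomial metamodel, and then exercise the metamodel with Monte Carlo draws on the inputs.

# Illustrative pattern for this step: DOE -> metamodel -> Monte Carlo.
# The response function, variable names, and ranges are assumptions only.
import itertools
import numpy as np

def linked_environment(detectability, track_range_nm):
    """Stand-in for one run of the linked analysis environment."""
    return 100.0 - 40.0 * detectability - 0.8 * track_range_nm

# Two-level full-factorial DOE over the two inputs (plus a center point).
levels = {"detectability": (0.3, 0.7), "track_range_nm": (10.0, 30.0)}
cases = list(itertools.product(*levels.values())) + [(0.5, 20.0)]
responses = np.array([linked_environment(d, r) for d, r in cases])

# Fit a linear metamodel (response surface) to the DOE results.
X = np.array([[1.0, d, r] for d, r in cases])
coeffs, *_ = np.linalg.lstsq(X, responses, rcond=None)

# Monte Carlo on the metamodel: sample inputs from assumed distributions.
rng = np.random.default_rng(0)
d_samples = rng.uniform(0.3, 0.7, 10000)
r_samples = rng.normal(20.0, 3.0, 10000)
moe_samples = coeffs[0] + coeffs[1] * d_samples + coeffs[2] * r_samples
print(moe_samples.mean(), np.percentile(moe_samples, [5, 95]))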
Figure 41 – Create Full Probabilistic Environment Step of POSSEM
Analysis
The final step of POSSEM is to use the generated metamodels and data to conduct the analysis (Figure 42). This is done by creating a spreadsheet environment that uses the metamodels to create analysis paths that link the outputs of one level of the continuum to the inputs of the next level. In this way, there is a traceable computational path that links the final Measures of Effectiveness down through the engineering level inputs. At each point along the analysis path, wherever there were probabilistic inputs, the spreadsheet
will allow those inputs to be changed (within their ranges of applicability) and the results updated in real time through the use of the metamodels. This is the “Impact Dial” environment, and is a valuable tool for the analyst. With this tool the analyst can explore the impacts of various assumptions rapidly and efficiently. The final goal of the method is for the analyst to use this information to answer the questions posed in the conceptual model, aiding in resource allocation, trade studies between system components, and requirements definitions.
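As a minimal sketch of this traceable analysis path (every function and coefficient below is an illustrative assumption, not one of the metamodels generated in this work), the spreadsheet-style chain can be expressed as fast metamodels evaluated level by level from engineering inputs up to a campaign level Measure of Effectiveness:

# Illustrative analysis chain: engineering inputs -> mission level MoP ->
# campaign level MoE, each link being a fast metamodel. All coefficients
# and variable names are assumptions for illustration only.

def mop_metamodel(wing_loading, thrust_to_weight):
    """Mission level Measure of Performance (a notional agility index)."""
    return 0.2 + 0.004 * thrust_to_weight * 100.0 - 0.001 * wing_loading

def moe_metamodel(agility_index, sam_track_range_nm):
    """Campaign level Measure of Effectiveness (a notional % of force surviving)."""
    return 100.0 * (0.6 + 0.3 * agility_index - 0.005 * sam_track_range_nm)

# An "impact dial" turn at the engineering level propagates instantly to the MoE.
for thrust_to_weight in (0.9, 1.0, 1.1):
    mop = mop_metamodel(wing_loading=70.0, thrust_to_weight=thrust_to_weight)
    moe = moe_metamodel(agility_index=mop, sam_track_range_nm=20.0)
    print(thrust_to_weight, round(mop, 3), round(moe, 1))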
[Figure 42 shows the analysis chain: at the campaign level, responses are functions of theater level MoEs (System MoE = fn(MoP1, MoP2, etc.)); at the mission level, MoP = fn(Xreq, Xdes, Xtech); at the engineering level, X = fn(time-dependent variables); impact dials feed resource allocation, requirements definition, and trade-off studies between sub-system components.]
Figure 42 – Analysis and Final Step of POSSEM
This chapter discussed and outlined, in an intuitive manner, the steps of the proposed Probabilistic System of Systems Effectiveness Methodology, or POSSEM. The
actual details of the implementation of POSSEM are best explained through the use of an example, which is the subject of the subsequent chapter.
CHAPTER VII
POSSEM PROOF OF CONCEPT
In order to illustrate the details involved in the implementation of POSSEM, a representative problem was chosen and the methodology applied. The chosen example involved assessing the effects of survivability enhancements on a baseline aircraft. This particular problem was chosen because survivability enhancements are most easily and beneficially applied during the design stage of an aircraft, yet their impact is most properly assessed at the campaign level.
This allows an analysis across the full military
continuum, making it an ideal example. The chapter begins with a discussion of aircraft survivability and provides a detailed motivation for the selection of this problem. The chapter then continues with the full implementation of POSSEM as applied to aircraft survivability.
Survivability Concepts
Aircraft combat survivability is defined to be “the capability of an aircraft to avoid and/or withstand a man-made hostile environment” [3]. Survivability is made up of two key components, susceptibility and vulnerability. Susceptibility is the inability of the aircraft to avoid threat elements that make up the hostile environment. It is measured by the quantity PH, the probability that the aircraft can be detected and hit. Vulnerability, the inability of the aircraft to withstand damage caused by a hostile environment, is measured by PK/H, representing the probability that the aircraft is killed if hit. The overall
probability of kill for the aircraft, PK, is related to susceptibility and vulnerability by the following equation:
PK = PH × PK/H
The overall metric representing survivability is the probability of survival, PS. It is related to the probability of kill by:
PS = 1 - PK
Volpe [2] combines the concepts of susceptibility and vulnerability into three key tasks that must be accomplished in order to increase survivability: deny the targeting, degrade the threat, defeat if hit. The various elements of susceptibility and vulnerability are grouped appropriately under these headings (Figure 43).
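For instance, with made-up numbers purely for illustration, the two relations above can be evaluated directly:

# Illustrative numbers only: probability of being detected and hit (PH) and
# probability of kill given a hit (PK/H) combine into PK and PS as defined above.
p_h = 0.10          # susceptibility: probability the aircraft is detected and hit
p_k_given_h = 0.30  # vulnerability: probability the aircraft is killed if hit

p_k = p_h * p_k_given_h   # PK = PH * PK/H
p_s = 1.0 - p_k           # PS = 1 - PK
print(p_k, p_s)           # 0.03, 0.97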
[Figure 43 groups survivability elements into three columns: Deny Targeting (e.g., signatures, long range sensors, stand-off weapons, tactics, mission planning, altitude, speed) and Degrade the Threat, if targeted (e.g., jamming, CM/ECM, maneuverability/agility, passive attack) under susceptibility; and Defeat, if Hit (e.g., hardening, shielding, separation, redundancy, sealing, filtration) under vulnerability.]
Figure 43 - Volpe’s Survivability 3-D Approach [66]
Ball [3] translates Volpe’s tasks into specific survivability enhancement concepts (Table 7). If care is taken during the preliminary design process, survivability features such as these can be added to the aircraft at little or no penalty in terms of cost, weight, or performance.
Table 7 - Survivability Enhancement Concepts [3]

Susceptibility Reduction                 Vulnerability Reduction
Threat Warning                           Component Redundancy (with separation)
Noise Jammers and Deceivers              Component Location
Signature Reduction                      Passive Damage Suppression
Expendables                              Active Damage Suppression
Threat Suppression                       Component Shielding
Tactics                                  Component Elimination
Maneuverability Enhancements
High Altitude Flight
High Speed
The Need to Bring Survivability into the Preliminary Design Process
In today’s environment of military conflicts, political instabilities, economic uncertainties, and limited resources, we are being asked to do more with less [2]. Shrinking budgets limit the opportunities available for development and procurement of new warfighting tools. Concepts that do successfully compete for monetary support must guarantee a high probability of success as well as a substantial useful life span. In addition, our forces are being asked to generate more sorties for more diverse missions with fewer assets [2]. Each aircraft is experiencing increased utilization. In this high demand environment, aircraft survivability is key.
During peacetime operations, a more survivable aircraft is safer and more reliable. This translates into lower maintenance costs and increased availability. This in turn makes the aircraft very affordable over its life cycle (Figure 44). The same argument can be made during wartime. A more survivable aircraft will need less maintenance. More of the aircraft will return from a given sortie, and these aircraft will have less damage than their less survivable counterparts. More unharmed aircraft means a larger reserve of aircraft, and availability increases. All of these contribute to a longer aircraft life cycle, increasing its affordability. This longer life cycle aids in offsetting the research and development cost of the more survivable aircraft.
[Figure 44: a survivable aircraft is safer and more reliable, which leads to lower life cycle cost during peacetime operations, making it more affordable.]
Figure 44 - Relationship between Survivability and Life Cycle Cost
Although survivability is a relatively new field, there have been studies that show that small changes in survivability can reap huge benefits. Data from [67] shows that during Desert Storm, the allies flew an average of 51 missions per aircraft (103,200 sorties with 2021 aircraft). Figure 45 shows the effect of survivability on force size as a function of missions flown [2]. For these 51 sorties, changing the survivability from 98% to 99% changes the surviving force from 36% to 60%. Figure 46 shows that an initial force of 100 with a 98% probability of survival attacks 3151 targets.
When the
probability of survival is raised to 99%, the number of targets attacked jumps to 3970, indicating that a 1% change in survivability produces a 26% change in force effectiveness. Such high potential benefits clearly illustrate the need to increase aircraft survivability. To summarize, increased aircraft survivability translates into an aircraft that lasts longer in combat and has a higher probability of returning unscathed or with less damage. The more aircraft that return, the larger the size of the attack force at any given time, and the larger the attack force, the more weapons are fired and the more targets are attacked. Finally, and perhaps most importantly, the greater the probability of survival for the
aircraft, the greater the probability that the pilot will return safely.
[Figure 45 plots the surviving fraction of aircraft against the number of sorties (0 to 100) for probabilities of survival ranging from 0.9 to 0.999.]
Figure 45 - Effect of Survivability on Force Size [2]
[Figure 46 plots the cumulative number of targets attacked against the number of sorties (0 to 100) for probabilities of survival ranging from 0.9 to 0.999, assuming a starting force of 100 aircraft and 1 target per aircraft.]
Figure 46 - Effect of Survivability on Force Effectiveness [2]
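The trends in these two figures can be reproduced with a simple sortie-by-sortie attrition model; the sketch below is an illustrative reconstruction under assumed rules (constant per-sortie probability of survival, one target per surviving aircraft per sortie), not the reference's actual model, yet it recovers the quoted values to within rounding.

# Illustrative attrition sketch: surviving force and cumulative targets attacked
# after N sorties, assuming a constant per-sortie probability of survival and
# one target attacked per surviving aircraft per sortie. This is a plausible
# reconstruction of the trends in Figures 45 and 46, not the reference's model.
def campaign(initial_force, p_survive, sorties):
    force = float(initial_force)
    targets_attacked = 0.0
    for _ in range(sorties):
        force *= p_survive            # attrition on each sortie
        targets_attacked += force     # each surviving aircraft attacks one target
    return force, targets_attacked

for ps in (0.98, 0.99):
    force, targets = campaign(initial_force=100, p_survive=ps, sorties=51)
    print(ps, round(force, 1), round(targets))   # ~36/3151 and ~60/3970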
The Paradigm Shift and Survivability
The paradigm shift of Chapter I is now invoked. The goal of the paradigm shift is to bring more and better information to light earlier in the design process. This concept applies to the design of an aircraft incorporating survivability. Until recently, preliminary aircraft design focused on tradeoffs between the traditional disciplines: aerodynamics, propulsion, structures, etc. However, issues such as technological advances and life cycle considerations have induced a shift of emphasis in preliminary design to include nonconventional disciplines. These disciplines, often called the “-ilities”, include disciplines such as maintainability, reliability, and safety, as well as crossover disciplines such as economics and stability and control. One of these disciplines is that of survivability.
It is crucial that survivability be considered during the preliminary design process. As stated by Sapp [68]: “But, perhaps the most important thing to realize at this point is that it (survivability) must be addressed from the first day the need for a new aircraft is envisioned. That need must be tied to a clear-cut mission requirement and must, at the same time, be matched with the threat it can expect to face…Once a design is frozen, once the lines are drawn in ink, the chance to optimize is lost.” This need is recognized by such entities as the Joint Technical Coordinating Group on Aircraft Survivability (JTCG/AS), whose primary goal is to advance and establish the design discipline of survivability [69]. Survivability is a function of both susceptibility (denying the target and degrading the threat) and vulnerability (the inability of the aircraft to withstand the damage caused by the hostile environment) [3]. Susceptibility and vulnerability, in turn, are functions of characteristics such as shape, performance, agility, stealth, mission requirements, and threat environment. Because these characteristics are themselves functions of basic design parameters and requirements, as are the other more traditional design disciplines, it becomes both necessary and cost-effective to consider survivability as a design discipline in its own right. The advantages to considering survivability during the preliminary design process are clear. The first of these is reduction in design cycle time. While it was common up until WWII to have very short time spans between conception and production, today’s complex systems have gestation periods measured in years [68]. The shorter realization times were a necessary function of the wartime era that fostered them, but didn’t allow
adequate research and development time to optimize the configuration.
Today’s
realization times are so long that often technology advancements surpass the production timelines of the aircraft, and the aircraft is produced with inferior technology simply because the conception phase occurred such a long time ago.
By considering
survivability earlier in the design process, the effects of survivability concepts can be predicted earlier and trade-offs conducted much sooner in the process. As the paradigm shift illustrates, more information earlier in the design process reduces design cycle time. Additionally, adding survivability features to aircraft after they are in production is costly and inefficient. When it was realized during the conflicts in Southeast Asia that US aircraft were being shot down in large numbers, there began a frantic rush to develop and add survivability concepts to existing airframes [3,68]. The resulting additions were usually in the form of “humps, bumps, and bulges” [68] that added weight and drag even as they decreased performance and payload. Often the trade-off between decreased performance and increased protection eliminated potential concepts that could have easily been integrated into the original design.
Finally, if the survivability concepts are
considered in the preliminary design process, the overall configuration can be optimized. For example, an optimal stealthy configuration for an aircraft is not necessarily the most aerodynamic one. If the trade between stealth and aerodynamics were conducted at the preliminary design phase, an optimal setting of both could be implemented.
The Link to System Effectiveness
Although the concepts and decisions that affect survivability are functions of preliminary design variables, the effects of these concepts are realized most fully not at the aircraft performance level but at the theater level. The primary metric for survivability is the probability of survivability (Ps), a metric that must be evaluated by placing the aircraft in an appropriate scenario and then analyzing its performance. Measures of this performance can be represented by system level Measures of Effectiveness: number of returning aircraft, number of targets attacked, etc. In this way, there is a clear connection between survivability concepts and system effectiveness metrics.
To consider one
without the other is to miss a significant piece of the overall puzzle. As stated in [3]: “The survival of a military aircraft operating in a hostile environment depends upon many diverse factors, such as the design of the aircraft, the skill and experience of the crew, the armament carried, the onboard countermeasures equipment, the offboard supporting systems, and the tactics employed. The cost of modern aircraft weapon systems, coupled with the requirement that the system be effective, makes imperatives the consideration of the aircraft’s survivability throughout the life cycle of the system.”
Example: Create the Conceptual Model
An example of the proposed methodology, POSSEM, will now be presented. The following sections will show a step by step implementation of the method. Each section will consist of one complete step in POSSEM, followed by the results and analysis of the problem.
The first step of POSSEM is to create the conceptual model. This is the most crucial step of the process and is, indeed, the most crucial step in any analysis problem. The conceptual model clarifies to the analyst what the key components of the problem are, and helps define a clear analysis pathway for its solution. In addition, it aids in identifying the tools and resources needed to solve the problem. The process of creating a conceptual model is iterative, with preliminary tradeoffs conducted that balance inputs and outputs, scenarios, and baseline characteristics. Figure 47 shows the steps in the creation of the conceptual model, repeated here for easy reference.
Figure 47 – Create Conceptual Model Step of POSSEM
Answers to Key Questions
The first step in the creation of the conceptual model has the analyst considering three key questions. Iterations on the answers to these questions provide the foundation for the model.

What problem are we trying to solve?
Given the discussion in the first part of this chapter, the following conclusions can be drawn. First, survivability is an important capability for a military aircraft to possess.
Secondly, survivability can be divided into susceptibility and vulnerability. Susceptibility is a function of aircraft geometry, aircraft performance and tactics, and the stand-off weapons and sensors that the aircraft carries. Vulnerability is a function of the detailed design of the aircraft, and is affected by structural design, system redundancies, and materials selection. Finally, it is difficult or impossible to determine the survivability of an aircraft without placing it in a combat situation and assessing it there. From these conclusions we pose the problem: What is the effect of adding survivability concepts to an aircraft? Note that not all of the information used in the first paragraph was necessary to formulate the question, yet this information will be used to refine the conceptual model as part of the iterative process.

What level of detail do we need?
In order to assess the effects of survivability, it is obvious that some sort of modeling capability is needed that can model survivability concepts. If these concepts are to be placed on an aircraft, a modeling tool that provides a detailed model of the aircraft and can distinguish between various survivability concepts added to that aircraft is necessary. Assuming that the survivability concepts will affect the shape of the aircraft, as well as its size and its performance, the tool must be able to account for such effects. All of these ideas point in the direction of an aircraft sizing and synthesis code. This code must be able to resize the aircraft given some geometric and payload changes, and must output performance characteristics as well as sizing results.
Moving to the other end of the analysis spectrum, these changes to the aircraft must be assessed at the appropriate level. It is postulated that this is the campaign level. Remembering the fluidity of the military code continuum, it may be that the effectiveness of the aircraft with the survivability concepts may be properly assessed at some sort of mission level, or a level in between. This information will be used in conjunction with the tools and resources that are available to iterate on a solution. In any case, the question of the level of detail that is needed can be answered. A modeling tool capable of having as its outputs metrics that reflect a change in aircraft survivability must be used. Thus, the tool must model, to a coarse degree, some basic features of an aircraft carrying out its mission. Details of the aircraft are not as important to have modeled as the capabilities of that aircraft. Also, the code must take as its input some sort of scenario or combat situation. Finally, there must be a cohesive analysis path between the inputs and outputs of the problem. If a sizing and synthesis code is used for the inputs, and a campaign level code used for the outputs, then some sort of middle level linking might become necessary. This will be determined once the codes have been chosen and the specific inputs and outputs identified. The answer to the second key question then becomes: At the aircraft level, a significant level of detail is needed to properly model the effects of survivability concepts added to that aircraft. This implies a standard sizing and synthesis code. At the campaign level, the modeling code must be able to take aircraft capabilities as inputs, as well as provide a somewhat detailed scenario. Outputs must include metrics that reflect the effects of changes in aircraft
survivability. Middle level links between these two extremes may become necessary.

What tools are needed and available?
Finally, the available tools and resources must be taken into account. In a perfect world, availability would not drive analysis requirements. However, it must be realized that analysis must sometimes be compromised for the sake of expediency and, indeed, for the capability of performing any sort of analysis at all.
The beauty of making these
considerations part of the conceptual modeling process is that these potential negative effects can quite possibly be mitigated. By acknowledging modeling limitations, the analysis pathway can be designed to minimize these drawbacks, while still providing quality analysis. In addition, by knowing which tools are available and being familiar with their capabilities, a more robust and complete analysis could result. The capabilities of the tools could point to analysis pathways not previously considered. This said, a return to the question is made in light of the specific example at hand. Several tools that were appropriate to the study were available and familiar to the researcher. These tools and their capabilities are described below.

FLOPS
So far in the development of the conceptual model it was determined that an aircraft sizing and synthesis code was needed. The code available and chosen by the researcher was FLOPS (Flight OPtimization System). FLOPS is a preliminary design and analysis program developed and supported by NASA Langley Research Center [70]. It is capable of modeling most subsonic and supersonic fixed-wing aircraft, including notional designs.
FLOPS is comprised of several interconnected analysis modules,
including weight and balance, mission trajectory, mission performance, aerodynamics, engine cycle analysis, takeoff and landing, community and airport noise, and economics. The code is capable of sizing an aircraft by performing a fuel balance given a specific geometry, scalable engine data (or using the cycle analysis), and mission. Once the aircraft is sized, FLOPS can perform standard mission analysis tasks, such as integral and point performance (range, endurance, fuel burn, excess power, etc.). Figure 48 shows the analysis flow for FLOPS. Information that defines basic geometry, aerodynamics, propulsion, and the sizing mission is input. Drag polars are then calculated using input data, with internal algorithms being used to account for any missing or undefined elements or data in the regime of interest. An engine deck is then either read as input or the internal cycle analysis package is used to create propulsion data and to calculate tables for optimal maneuver schedules. The mission module uses this information to “fly” the aircraft through the mission, conducting a fuel balance until the aircraft is converged and thus sized. FLOPS may also be used to fly a previously sized aircraft through a mission to determine performance characteristics.
At this point,
additional analysis modules may be called to calculate such things as takeoff and landing performance, noise analysis, and economics. FLOPS was written as a single, fully portable program in Fortran. The source code is easily manipulated to add additional analysis capabilities. The input and output formats are intuitive and helpful, with a variety of debugging flags available.
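To make the sizing loop concrete, the following is a minimal Python sketch of a fuel-balance sizing iteration of the kind FLOPS performs internally. The function name, the fixed empty-weight fraction, and the Breguet-style mission fuel estimate are illustrative assumptions and are not the actual FLOPS weights or mission modules.

# Illustrative sketch of a fuel-balance sizing loop (not FLOPS source code).
# Assumptions: a fixed operating-empty-weight fraction and a Breguet-style
# cruise fuel fraction stand in for FLOPS's detailed weights and mission modules.
import math

def size_aircraft(payload_lb, range_nmi, sfc_per_hr=0.8, speed_kts=460.0,
                  lift_to_drag=10.0, oew_fraction=0.55, tol_lb=1.0):
    """Iterate on takeoff gross weight until fuel required balances fuel available."""
    togw = 30000.0                      # initial guess on takeoff gross weight, lb
    for _ in range(200):
        oew = oew_fraction * togw       # empty weight from a simple fraction
        # Breguet range equation gives the cruise fuel fraction for this mission
        fuel_fraction = 1.0 - math.exp(-range_nmi * sfc_per_hr /
                                       (speed_kts * lift_to_drag))
        fuel_required = fuel_fraction * togw
        new_togw = oew + payload_lb + fuel_required   # revised weight estimate
        if abs(new_togw - togw) < tol_lb:             # converged: fuel balances
            return new_togw, fuel_required
        togw = new_togw
    raise RuntimeError("sizing loop did not converge")

togw, fuel = size_aircraft(payload_lb=3000.0, range_nmi=622.0)
print(f"Converged TOGW = {togw:,.0f} lb, mission fuel = {fuel:,.0f} lb")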
Figure 48 – FLOPS Analysis Flowchart [71]

ITEM
The development of the conceptual model points to the need for a campaign level code capable of modeling, to a somewhat coarse degree, aircraft and their capabilities and performance. The code additionally must be capable of flying these aircraft in some sort of a scenario, and output indicative effectiveness parameters. ITEM was chosen and was briefly described in Chapter IV. ITEM was originally developed in response to a request from the staff of the Commander in Chief Pacific (CINCPAC) to the Defense Nuclear Agency. The request was for a capability that would allow them to perform theater level analysis of joint force
expeditions taking place in the Pacific Theater [58]. Initial studies of this request led to the conclusion that the model must have the following capabilities:
- Simulate campaigns that reflect any Red or Blue campaign strategy or combat doctrine
- Integrate air, land and naval operations
- Simulate alternative campaigns quickly to explore many force mixes or courses of action
- Provide maximum transparency between strategic decisions and campaign outcomes
These desired capabilities led to modeling decisions in the development of ITEM, which are summarized in Table 8.
The first capability led to the decision to use human
judgement in the making of strategic decisions. Embedded rule systems were excluded because of their lack of flexibility and transparency (leading to the problems discussed in Chapter V). As a result, ITEM is an interactive model that may be stopped at any time in the analysis to allow the user to control her/his decisions (refer back to Figure 33), and full output of the model is displayed at these stopping points. A full Monte Carlo approach was ruled out. The second capability required that the simulation events cover a wide range of warfare areas, with these events interlaced in time to simulate realistic sequencing of events. This led to a relatively short event period of time (on the order of an hour). The third capability led to the goal of including no unnecessary level of detail, minimizing setup time and long run times. This is a common goal and feature of theater level codes, as discussed in Chapter III and shown in the military code continuum.
Table 8 – Desired Capabilities and Resulting Model Features in ITEM

Desired capability: Simulate campaigns that reflect any red or blue campaign strategy or combat doctrine
Resulting model features: Use of human judgement; embedded rules excluded; interactive model with mid-campaign outputs; full Monte Carlo analysis excluded

Desired capability: Integrate air, land and naval operations
Resulting model features: Large warfare areas for events; events interlaced in time; short event period (one hour)

Desired capability: Simulate alternative campaigns quickly to explore many force mixes or courses of action
Resulting model features: No unnecessary detail to minimize setup and run times; event scheduling as straightforward/easy as possible

Desired capability: Provide maximum transparency between strategic decisions and campaign outcomes
Resulting model feature: Achieved through incorporation of all of the above
ITEM was written in C++ within a UNIX environment. It thus uses an object-oriented approach and is highly graphics intensive. ITEM uses a hierarchical database structure, allowing for the rapid creation of objects. This structure is shown in Figure 49. To create an object, all of the objects on which it depends must first be defined. For example, in order to create an aircraft, the supplies, weapons, standard conventional loads, and emitters for that aircraft must first be defined. Once all objects have been defined, scenarios are developed that include the geographic locations of the
campaign, as well as maneuver schemes and plans (for example, an air campaign plan or an amphibious assault plan). The war game is then executed by specifying the number of time steps to progress. Output files from the execution are available for post-processing analysis.
Figure 49 – Hierarchical Structure of ITEM Database
At this point it would be helpful to convey to the reader the basic computational structure of ITEM. ITEM uses the concept of fractional aircraft and weapons in its calculations. Damage assessments, strike capabilities, and other elements are calculated using multiplicative probabilities. This results in fractional elements being sent to attack, fractional weapons being launched, and fractional damage being done. While at first this may not seem intuitive, these numbers can be made analogous to a percentage. For
example, if the damage report comes back for an aircraft and is reported as 0.3, that can be interpreted as meaning that 30% of an imaginary fleet of 100 aircraft is damaged. This way of looking at things allows the analyst to conduct smaller, scaled down versions of actual combat scenarios. A summary of the logical structure involved in an air strike and defense will now be presented as an example of ITEM’s computational structure. The information given below has been summarized from Reference 58, and the interested reader is referred there for a more detailed discussion. First, air launch events are examined. Air launches are scheduled in the air campaign section of ITEM, and consist of selecting a time for the launch to occur, the aircraft and weapons package to be used, and the target(s) and their priority to be attacked. The raid elements are processed sequentially and the first step is to obtain the number of aircraft and weapons at the source. If the raid element cannot be supported by the source, then the raid is cancelled. (For example, a user might have specified launching an aircraft from a SAM site). The following is an example of the logic structure used to determine the raid element availability:
If attacker is an aircraft, and if source is an air base:
    aircraft_avail = ac_ready[airbase, aircraft]
    and for every weapon, wep_i, in scl_load:
        scl_wep_i_avail = num_weps[airbase, wep_i]
Else, if source is a ship:
    aircraft_avail = ac_ready[ship, aircraft]
    and for every weapon, wep_i, in scl_load:
        scl_wep_i_avail = num_weps[ship, wep_i]
Else (attacker is a missile):
    num_missile_avail = num_weps[ssm site, missile]
    or num_missile_avail = num_weps[ship, missile]
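A minimal Python rendering of the same availability check is given below; the dictionary-based inventories and function name are hypothetical stand-ins for ITEM's internal data structures, not its actual implementation.

# Hypothetical sketch of the raid-element availability logic described above.
# Inventories are plain dictionaries; ITEM's actual data structures differ.
def raid_element_available(attacker, source, scl_load):
    """Return True if the source can support the raid element."""
    if attacker["kind"] == "aircraft":
        if source["kind"] not in ("airbase", "ship"):
            return False                       # e.g., aircraft launched from a SAM site
        if source["aircraft_ready"].get(attacker["type"], 0) < 1:
            return False
        # every weapon in the standard conventional load must be in stock
        return all(source["weapons"].get(w, 0) >= n for w, n in scl_load.items())
    else:                                      # attacker is a missile
        return source["weapons"].get(attacker["type"], 0) >= 1

airbase = {"kind": "airbase",
           "aircraft_ready": {"Blue-1": 12},
           "weapons": {"ATG-1": 40, "ATG-2": 0}}
strike = {"kind": "aircraft", "type": "Blue-1"}
print(raid_element_available(strike, airbase, {"ATG-1": 4}))   # True
print(raid_element_available(strike, airbase, {"ATG-2": 4}))   # False: no ATG-2 in stock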
Similar logic structures are used to test for inclement weather. If the raid is launched, raid elements and weapons are subtracted from the inventory. The interactions between the raid elements and the defense assets are now modeled. A simple attack by aircraft against a defending SAM site protecting an airbase will now be described. The process is summarized in Reference 58: The module first determines whether the raid is detectable and engageable by checking to see if the closest point of approach (or minimum distance to the route segment end points) is inside the envelope of the SAM system (i.e., less than maximum range, less than maximum altitude, and greater than minimum altitude). If it is not, the SAM system versus raid event terminates. Next, it computes the location of the first intercept point against the most detectable raid element (i.e., the raid element with the longest detection range). If no intercept is possible for this raid element, then no intercepts are possible against any of the less detectable raid elements. If the intercept is beyond SAM range, firing is delayed until an intercept is possible. If the intercept is inside the envelope, the outcomes of the intercept (i.e., expected raid element kills and SAM expenditures) are computed. Next, other raid elements are selected sequentially and the outcomes are computed. After all of the raid elements have been processed for the current intercept point, the module computes the next intercept point and the process continues until: (a) the site is out of ammunition, (b) all of the raid elements are destroyed, or (c) the next intercept point falls outside the SAM envelope. If the intercept falls inside the minimum SAM range, the intercept is delayed until the raid reaches minimum range on the outbound leg, unless the SAM is on a ship, in which case no more intercepts are attempted. The above process is shown in Figure 50.
Figure 50 – SAM Engagement Flow Chart [58]
The first step is to determine the number of raid elements detected. The horizon-limited radar range for the SAM site is computed as follows:

Rh = 1.23 (√Hs + √Hr)

where
Hs = the height of the radar for the SAM site
Hr = the altitude of the raid

For each raid element, the aircraft’s relative detectability is used to adjust the non-horizon-limited radar range for the SAM site. This is done by adjusting the tracking range, Rtrk,s, of the SAM site:
Rdet_unlim,e = Rtrk,s [Freldet,r,e]^(1/4)

with
Rdet_unlim,e = non-horizon-limited radar range for the SAM site
Freldet,r,e = relative detectability of the raid element

The adjusted detection range for the SAM site is then the lower of the two radar ranges, either the horizon-limited radar range or the non-horizon-limited radar range. The closest point of approach of the raid element to the SAM site, Rcpa, is then calculated.
The number of aircraft in the raid element detectable at the minimum
distance is defined as Ndet and is computed as follows:

For each raid element e, if Rcpa ≤ Rmin_det,e then Ndet,e = Nr,e

with Nr,e being the number of original raid elements.

The individual raid elements that are detectable are summed to generate the total expected number of targets detected for the entire raid, Ndetected:

Ndetected = Pop_radar,s Σe Ndet,e

with Pop_radar,s = probability that the SAM radar is operational.

No event occurs unless at least part of a raid is detected at the minimum distance, that is:

Ndetected > 0.0
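The detection step can be summarized in the short sketch below. It follows the equations as reconstructed above (square-root horizon formula, fourth-root detectability adjustment), so it should be checked against Reference 58; all names and the sample numbers are illustrative.

# Illustrative sketch of the SAM detection step described above (based on the
# reconstructed equations; check against Reference 58 before relying on it).
import math

def detected_targets(raid_elements, sam, p_radar_operational):
    """Return the expected number of raid aircraft detected by the SAM site."""
    n_detected = 0.0
    for elem in raid_elements:
        # horizon-limited range (nmi) from radar height and raid altitude (ft)
        r_horizon = 1.23 * (math.sqrt(sam["radar_height_ft"]) +
                            math.sqrt(elem["altitude_ft"]))
        # tracking range adjusted by the element's relative detectability (0..1)
        r_unlimited = sam["track_range_nmi"] * elem["rel_detectability"] ** 0.25
        r_detect = min(r_horizon, r_unlimited)
        if elem["closest_approach_nmi"] <= r_detect:
            n_detected += elem["num_aircraft"]
    return p_radar_operational * n_detected

raid = [{"altitude_ft": 10000, "rel_detectability": 0.8,
         "closest_approach_nmi": 25, "num_aircraft": 2},
        {"altitude_ft": 10000, "rel_detectability": 0.2,
         "closest_approach_nmi": 25, "num_aircraft": 4}]
sam = {"radar_height_ft": 50, "track_range_nmi": 60}
print(detected_targets(raid, sam, p_radar_operational=0.9))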
The geometry of the engagement process is shown in Figure 51. The trajectory occurs in an X-Y plane which crosses the X-axis at the closest point of approach (Xoffset
in the figure).
The trajectory is therefore parallel to the Y-axis.
The following
definitions are used in the calculations.

Rsam = SAM range = Rmax,ws
Ssam = SAM speed = Sws
Sraid = raid speed = Sr
Sratio = speed ratio = Sraid / Ssam
Ssalvo = SAM salvo size = Nsalvo,s
Treact = reaction time = Teng,ws
Nweps_avail = number of weapons available = Nrail,s
Nsim_engage = number of simultaneous engagements for the site = Nmax_eng,s
Pop_radar,s = probability the site’s radar is operational
Pop_fc,s = probability the site’s fire control is operational
Nac_elem = number of aircraft or weapons in the raid element = Nr,e
Fmaneuver_elem = maneuverability factor for aircraft in the raid element = Fman,ar
Rdetect,s = site’s maximum detection and tracking range against the raid element with the largest relative detectability = max[Freldet,e] Rtrk,s
Figure 51 – Surface to Air Missile Engagement Process [58]
This geometry is used to calculate the first point of detection, Ydetect. The speed of the raid multiplied by the reaction time of the SAM site is used to calculate the position of Yfire, which is the first opportunity for the SAM site to fire a weapon. Geometry is again used in conjunction with the speed ratio to calculate the point of intercept, Yintercept. Because ITEM allows up to a specified number of engagements at one time, more than one intercept and firing opportunity may be calculated. These are functions of the number of targets successfully detected and tracked, and the number of weapons available. This is shown in Figure 52. The number of salvos fired is then calculated:

Nsalvos_fired = min(Nsalvos_avail, Ndetected, Nsim_engage) Pop_fc,s Ptrk,s
where
Nsalvos_avail = Nweps_avail / Ssalvo
Ndetected = Pop_radar,s Σe Ndet,e
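The engagement geometry of Figure 51 and the salvo-count expression above can be illustrated with the following sketch. It assumes the simple planar geometry described in the text and solves for the intercept point by a coarse numerical march; the names and values are illustrative, not ITEM's.

# Illustrative sketch of the SAM engagement geometry of Figure 51 and the
# salvo-count expression above; planar geometry and names are assumptions.
import math

def first_intercept(x_offset, r_detect, r_sam, s_raid, s_sam, t_react):
    """Return (y_detect, y_fire, y_intercept) along the raid track, or None."""
    if r_detect <= x_offset:
        return None                                   # raid never detected
    y_detect = math.sqrt(r_detect**2 - x_offset**2)   # first detection point
    y_fire = y_detect - s_raid * t_react              # raid travel during reaction
    # SAM and raid meet when their flight times are equal:
    #   (y_fire - y_int)/s_raid = sqrt(x_offset^2 + y_int^2)/s_sam
    # solved here by a coarse march down the track for clarity, not efficiency
    y = y_fire
    while y > -r_sam:
        t_raid = (y_fire - y) / s_raid
        t_sam = math.hypot(x_offset, y) / s_sam
        if t_sam <= t_raid and math.hypot(x_offset, y) <= r_sam:
            return y_detect, y_fire, y
        y -= 0.05
    return None

def salvos_fired(n_weps_avail, salvo_size, n_detected, n_sim_engage,
                 p_fire_control, p_track):
    n_salvos_avail = n_weps_avail / salvo_size
    return min(n_salvos_avail, n_detected, n_sim_engage) * p_fire_control * p_track

print(first_intercept(x_offset=5, r_detect=40, r_sam=30,
                      s_raid=0.75, s_sam=2.0, t_react=0.05))
print(salvos_fired(n_weps_avail=4, salvo_size=1, n_detected=5.4,
                   n_sim_engage=2, p_fire_control=0.95, p_track=0.9))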
Figure 52 – SAM Intercept Opportunities [58]
Finally, the outcome of the intercepts is calculated.
Examination of these
equations makes clear the fractional characteristics of the assessments. The number of salvos launched at each raid element is made proportional to the number of aircraft detected in the raid element divided by the total number of aircraft detectable:

Nsalvos_elem = Nsalvos_fired (Nac_elem Pop_radar,s / Ndetectable)

only for elements which are detectable.
The number of aircraft damaged in the element is the product of the number of salvos, the salvo Pk, and a maneuver factor for the raid element. The salvo probability of kill is

Pksalvo_elem = 1 - (1 - Pksam,elem)^salvo

with
Pksam,elem = PKsam,ws,re
salvo = Ssalvo

and the number of aircraft damaged is

Nac_damage,e = Nsalvos_elem Pksalvo_elem Fmaneuver_elem
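The fractional bookkeeping of these outcome equations is illustrated by the sketch below; the element data and probabilities are invented for illustration and are not taken from an actual ITEM run.

# Illustrative sketch of the fractional outcome equations above.
def engagement_outcome(n_salvos_fired, elements, p_radar_operational,
                       pk_sam, salvo_size):
    """Return expected aircraft damaged per raid element (fractional values)."""
    n_detectable = sum(e["num_aircraft"] for e in elements if e["detectable"])
    pk_salvo = 1.0 - (1.0 - pk_sam) ** salvo_size
    results = {}
    for e in elements:
        if not e["detectable"]:
            results[e["name"]] = 0.0
            continue
        # salvos apportioned to the element in proportion to detected aircraft
        n_salvos_elem = n_salvos_fired * (e["num_aircraft"] * p_radar_operational
                                          / n_detectable)
        results[e["name"]] = n_salvos_elem * pk_salvo * e["maneuver_factor"]
    return results

elements = [{"name": "lead pair", "num_aircraft": 2, "detectable": True,
             "maneuver_factor": 0.8},
            {"name": "trail four", "num_aircraft": 4, "detectable": True,
             "maneuver_factor": 0.8}]
damage = engagement_outcome(n_salvos_fired=3.0, elements=elements,
                            p_radar_operational=0.9, pk_sam=0.6, salvo_size=1)
print(damage)   # fractional aircraft damaged, e.g. about 0.43 of the lead pair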
At this point the number of aircraft remaining in the raid element is decremented and the weapon inventory of the SAM ship or site is decremented.

Mission Level Code
The astute reader would now expect some mention of a mission level analysis code, in accordance with the military code continuum. Indeed, the outputs from the engineering level code (FLOPS) do not map directly into the inputs of the campaign level code (ITEM) and there does exist a need for a mapping between these two. But the reader is reminded that the development of the conceptual model is an iterative technique. Because the researcher is presenting an example of POSSEM, it made more sense to address the mission level mapping where it actually came into play: during the creation of the linked environment. The deficit in the analysis pathway is discovered at that point,
and the reader is urged to continue to that section, after the baseline section, for a more thorough discussion of the middle level mapping techniques used in this example.
Baselines
Armed with the answers to the three key questions, the development of the conceptual model is continued. The next three steps: establishing the baselines, defining the inputs and outputs, and developing the scenario, are iterative in nature. Considerations of one influence the others, and decisions for these steps must be arrived at concurrently. However, in order to effectively record the process, the baseline step will be undertaken first. The goal of the analysis was defined to be the examination of the effect of adding survivability traits to an aircraft. This most obviously implies a baseline aircraft to which to apply the survivability traits. Several issues were considered during the selection of the baseline. First was the history of survivability itself. Earlier research, documented at the beginning of this chapter, indicated that survivability considerations did not become a primary concern until the conflicts in Southeast Asia beginning in the 1950’s. Because aircraft of that time were not designed with susceptibility reduction in mind, using a baseline aircraft from this time could be misleading. In other words, the technological jump of redesigning a Korean or Vietnam war era aircraft to have major survivability features might be too distinct, and any change along those lines would be too significant. It would make more sense to choose a more modern aircraft that could incorporate increased survivability as one of its several, integrated features.
The survivability features would not dominate the aircraft, and a more realistic assessment of those features could ensue. Another consideration for the baseline is the availability of information with which to create and validate a model. Because a detailed sizing and synthesis code will be used, enough information would need to be available to “match” and validate the model. Drag polars, engine data, detailed sizing mission profiles, and geometric aspects would be needed. With these considerations in mind, the baseline aircraft chosen for the example was the Boeing F/A-18C.

Aircraft Baseline - F/A-18C
The Boeing F/A-18C Hornet was chosen as a representative modern fighter aircraft for this study. In addition to the F/A-18C having the performance characteristics of a modern fighter, geometric and performance data were readily available, allowing the creation of an accurate model. The F/A-18C is a twin engine, mid-wing, multi-mission tactical aircraft [72]. It was first flown in 1978, with operational deployment of the A/B models in 1983 and the C/D models in 1987. The Hornet has a ceiling of 50,000 ft, a maximum range (with external tanks) of 1,379 miles, and a maximum speed of over Mach 1.8. A three-view of the aircraft is shown in Figure 53 and characteristic data is shown in Table 9.
Figure 53 – Three View of Boeing F/A-18C [72]
Table 9 – Selected Characteristic Data of Boeing F/A-18C [72]
Wing Span: 37.5 ft
Aircraft Length: 56 ft
Aircraft Height: 15.29 ft
Wing Area: 400 ft2
Max Gross T/O Weight: 36,710 lbs
Internal Fuel Capacity: 10,860 lbs
Engines: Two General Electric F404-GE-400 Turbofans
Maximum Speed: Mach 1.8+
Design Mission
The design mission used to size the baseline aircraft was a fighter escort mission taken from the Standard Aircraft Characteristic [73] sheets, and is shown in Figure 54. The mission consists of takeoff and climb segments, a cruise of 311 nautical miles at best Mach number and altitude followed by a one minute combat segment.
The return
segment, also at best Mach number and altitude, includes a 20 minute sea level loiter. The weapons loading for the mission was two AIM-9 missiles on the tips of the wings, and a gun with ammunition.
(Figure annotations: intermediate thrust climb; cruise at optimum Mach number and altitude near 40,000 ft; combat at 10,000 ft for 1 minute at maximum thrust, Mach 1.0, missiles retained; reserves of 20 minutes loiter at sea level plus 5% of takeoff fuel; combat radius = 311 nmi.)
Figure 54 – Design Mission Used for Sizing and Matching the F/A-18C [75]
Validation
The F/A-18C was modeled in the FLOPS synthesis code described earlier. The validation of the modeled aircraft was conducted by matching the output from the sizing code to the data in References 72 and 73. To do this, the geometry of the aircraft was added to the input file, as well as aerodynamic data read from figures in the references above. An external matched engine deck was also linked to the code [74], and its general characteristics are shown in Table 10. The engine is an F404-GE-402, which is an increased performance derivative of the F404. It features a dual-spool, mixed-flow turbofan architecture with a 3X7X1X1 turbomachinery configuration. The engine is shown in Figure 55.
Table 10 – General Engine Specifications for F404-GE-402
Thrust: 17,700 lb
SFC (max A/B): 1.74 lbm/lbf-hr
SFC (IRP): 0.81 lbm/lbf-hr
Airflow (SLS): 146 pps
Weight: 2,282 lb
Length: 159 in
Diameter: 35 in
Figure 55 – F404-GE-402 Engine Used on the F/A-18C Model
The code was then executed and the outputs matched to the data. If there was a discrepancy, internal scaling factors in FLOPS were used to match the data [75]. Table 11 [76] compares the output weights from FLOPS with the actual weights. It should be noted that the weights were matched using scaling factors on the internal weight calculation equations, and were not input directly into FLOPS.
Table 11 – Weights Matching for the F/A-18C in FLOPS F/A18C Weight Breakdown Comparison Group
F/A18C
Baseline Model
Wing
3,919
3,918
Tail Group
1,005
1,006
Body
5,009
5,009
Alighting Gear
2,229
2,228
4,420
4,417
921
922
1,078 1,061 206 84 351 592 1,864 948 631 641 180 207 114 252 58 25,770
1,078 1,062 206 84 352 592 1,864 948 631 642 180 207 115 252 58 25,771 1,410
Propulsion Group Engines Engine Section Gear Box Controls Starting System Fuel System Flight Controls Auxiliary Power Plant Instruments Hydraulics Electrical Avionics Armament, Gun, Launchers, Ejectors Furnishings, Load/Handling, Contingency Air Conditioning Crew Unusable Fuel Engine Fluids Chaff, Ammunition Miscellaneous Operating Weight Empty Missiles (2) AIM-7F (2) AIM-9L Mission Fuel Takeoff Gross Weight
1,020 390 10,860 38,040
10,857 38,038
The matched drag polars for the F/A-18C are shown in Figure 56 [75].
(Drag polars, CL versus CD, at an altitude of 36,089 ft for Mach numbers from 0.2 to 1.8.)
Figure 56 – Matched Drag Polars for F/A-18C
System Inputs and Outputs
The next step in the iterative process of the conceptual model is system level inputs and outputs. “System level” here refers to the general inputs and outputs needed to provide the answers to the problem being analyzed. This does not mean that at this point in the process every input and output to every code being utilized needs to be defined.
Those specific inputs will be determined during the mapping and modeling
portion of POSSEM, taking place in the Create Linked Analysis Environment step. The concept of system level inputs and outputs is illustrated graphically in Figure 57.
Figure 57 – Determining System Level Inputs and Outputs
The reader is again reminded that the conceptual modeling process is iterative. At this point in the process, the analyst would need to have at least a basic idea of the scenario in order to formulate specific outputs. At the same time, having an idea of which outputs would be needed in order to analyze the given problem would help drive the scenario selection. The general problem goal of assessing survivability concepts must now be bracketed into a realistic testing circumstance. While the ideal survivability test case would involve trading not only between individual survivability concepts but between susceptibility and vulnerability concepts as well, it was thought that this would be too ambitious for an initial test case. Vulnerability analysis involves detail design studies, involving tools which are not currently available for use in the methodology, but which may be added later. In addition, much work has already been done in the area of vulnerability analysis, with studies on susceptibility trades being rarer. It was thus decided that the initial test case of the systems effectiveness methodology would incorporate survivability trades by considering susceptibility concepts only.
The three key
susceptibility concepts that will be traded and analyzed during the study will be stealth,
tactics, and maneuverability.
These three concepts were chosen because of their
diversity, as well as their inherent capability to be modeled using existing tools.

Inputs: Engineering Level
Inputs were sought in FLOPS that would mimic the effects of adding survivability concepts to the baseline aircraft, based on the three areas chosen above. The first consideration was stealth, and the primary indication of stealth is radar cross section. Therefore, inputs were sought that would reflect a change in the baseline radar cross section. Reference 77 contains a study of the effect of changing geometric variables on radar cross section, and also uses the same baseline aircraft and modeling code. In concurrence with Reference 77, the following geometric variables were chosen: leading edge wing sweep, wing area, aspect ratio, taper ratio, and wing thickness to chord ratio. These variables were chosen in response to what was considered the primary radar area of interest: +/- 30 degrees in azimuth and elevation from a nose-on direction [77]. In addition, a varying parasitic weight factor was added as a variable to account for stealth. This technique was used in a previous study of the effects of stealth on notional aircraft, and is meant to model stealth enhancements, such as radar-absorbing paint, added to the aircraft in the form of a weight penalty [78]. Finally, a secondary mission was input for the sized aircraft. The mission altitude of this secondary mission was allowed to vary, mimicking a change in tactics: a low-low mission. While this tactic would decrease an aircraft’s susceptibility to defensive radar, a performance penalty is paid. Maneuverability in FLOPS is not an input, but rather a performance output of the code. As such, it comes into play not as an engineering code
input, but as an output, and will be allowed to vary at a different point in the analysis, as a function of its lower level system dependencies. The complete set of inputs to the engineering level code FLOPS is shown in Table 12. Ranges for the variables are indicated. For the geometric variables, the same ranges were used as in the study contained in Reference 77. Using the same variable ranges enhances conformity of data and also allows use of the existing resulting response surface equations for radar cross section, discussed in subsequent sections. The parasitic weight variable was allowed to range from 0 lbs to 1000 lbs, a range consistent with the study in Reference 78. Finally, secondary mission altitude was allowed to range from 0 to 10,000 feet, the altitude of the combat section.
Table 12 – System Level Inputs
Variable: Low Value / High Value
Wing Leading Edge Sweep: 15 degrees / 30 degrees
Wing Area: 400 sq ft / 500 sq ft
Aspect Ratio: 3 / 4
Taper Ratio: 0.3 / 0.4
Thickness/Chord Ratio: 0.04 / 0.06
Parasitic Stealth Weight: 0 lbs / 1000 lbs
Secondary Mission Altitude: 0 ft / 10,000 ft
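Because these ranges are later exercised through designed experiments and response surface equations, the sketch below shows one common way of converting coded settings in [-1, +1] to the engineering values of Table 12. The code is illustrative only and is not part of POSSEM or FLOPS.

# Illustrative: map coded design-of-experiments settings (-1..+1) to the
# engineering ranges of Table 12.
RANGES = {
    "wing_le_sweep_deg":        (15.0, 30.0),
    "wing_area_sqft":           (400.0, 500.0),
    "aspect_ratio":             (3.0, 4.0),
    "taper_ratio":              (0.3, 0.4),
    "thickness_to_chord":       (0.04, 0.06),
    "parasitic_stealth_wt_lb":  (0.0, 1000.0),
    "secondary_mission_alt_ft": (0.0, 10000.0),
}

def decode(coded_case):
    """Convert coded levels in [-1, +1] to engineering units."""
    values = {}
    for name, level in coded_case.items():
        lo, hi = RANGES[name]
        values[name] = 0.5 * (lo + hi) + 0.5 * (hi - lo) * level
    return values

# e.g. one corner of a two-level design
print(decode({"wing_le_sweep_deg": 1, "wing_area_sqft": -1, "aspect_ratio": 0,
              "taper_ratio": 1, "thickness_to_chord": -1,
              "parasitic_stealth_wt_lb": 1, "secondary_mission_alt_ft": 0}))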
Outputs: Campaign Level
The outputs at the system level will be those metrics obtained from the campaign level code that give insight into the questions posed in the first part of the creation of the conceptual model. These Measures of Effectiveness are shown in Table 13.

Table 13 – System Level Outputs
% Blue Aircraft 1 survived
% Blue Aircraft 2 survived
No. of Weapons fired by RSAM1
No. of Weapons fired by RSAM2
Damage to RSAM1
Damage to RSAM2
Damage to RSAM3
Damage to RSAM4
No. of Runways Destroyed
No. of Shelters Destroyed
No. of Revetments Destroyed
No. of Aircraft in the Open Destroyed
No. of Cruise Missiles fired
The first two MoE’s are the survivability percentages for two Blue aircraft. The aircraft are identical, yet each carries a different weapons load. This was done to achieve transparency with respect to the weapon: it allows the analyst to isolate the effect of two different weapons. The next metric counts the number of cruise missiles launched, and can be used to determine the cost versus benefit ratio of the missile launch on Day 1 (the scenario will be discussed in more detail in the next section). The remaining metrics are measures of damage done to the Red side. The number of surface to air missiles that are used by each of the two engaged SAM sites is recorded. Damage at the airbase is tallied
by damage to the runways, the shelters, revetments, and number of aircraft in the open destroyed. Finally, the damage count to each of the four SAM sites is totaled.
Scenario
The next step in the conceptual model was to determine the scenario to be used. Recall that this step is done iteratively with the selection of the inputs and outputs, and the determination of the baselines. To that end, realize that the outputs discussed in the previous section had to be determined from a basic outline of the scenario to be used. This just reinforces the iterative nature of the technique. The developed scenario had to fulfill some basic premises. First and foremost, it had to provide the environment in which the basic conceptual question(s) could be answered.
In addition, it needed to be detailed enough to showcase significant
interactions, yet simple enough to aid in transparency, and also to keep computational time (expense) to a minimum. In light of recent world events, and in order to focus attention on the method itself (avoiding preconceptions associated with certain regions of the world and military engagements), a neutral scenario location was selected. Just as Florida was used in the preliminary investigation, the test scenario will occur in the state of California, signifying a fictional confrontation between North and South California. Southern California was selected as the Blue side, leaving Northern California to represent the Red side. To build on previous knowledge, a somewhat similar geographical setup was developed.
However, the new scenario is considered more robust, and significant
analysis went into the selection of its details and geography. This geography is shown in Figure 58.
Figure 58 – Scenario Geography for POSSEM Example
A Red airbase is located in the vicinity of Sacramento, and is protected on roughly cardinal points by four surface to air missile sites, named RSAM1, RSAM2, RSAM3, and RSAM4. The Blue side has a Task Force and surface ship stationed just off the coast of Monterey Bay. The ship is armed with cruise missiles. In addition, Blue has
two airbases located south and southeast of the Red airbase. There are two types of Blue aircraft stationed at each airbase.
Both are based on the notional baseline aircraft
described earlier, and differ only by their weaponry, which are in the form of air to ground bombs. Scenario element details are shown in Figure 59, Figure 60, Figure 61, and Figure 62. It is recognized here that some of the variable values are considered unrealistic.
These values were based on the numbers used in the preliminary
investigation, which were selected by JHAPL in order to keep the scenario unclassified. Because of these numbers, the reader is cautioned to view the results as trends and as a proof of concept of the method rather than as absolute numbers.
Weapon: SLCM-1 / ATG-1 / ATG-2 / SAM-1 / SAM-2
Type: cruise missile / air to ground bomb / air to ground bomb / surface to air missile / surface to air missile
Min. Range: 0 / 0 / 0 / 0 / 0
Max. Range: 450 / 10 / 10 / 30 / 25
Speed: * / * / * / 500 / 500
Probability of Hit: 0.9 / 0.9 / 0.7 / 0.6 / 0.8
Launched: ship / air / air / land / land
Anti-: land / land / land / air / air
(* not a factor)

Figure 59 – Scenario Weapons

SAM Site: RSAM-1 / RSAM-2 / RSAM-3 / RSAM-4
Track Range (nm): 60 / 50 / 60 / 50
Track Azimuth Limit (deg): 360 / 360 / 360 / 360
Prob. of Track: 0.9 / 0.8 / 0.9 / 0.8
Max Engagement: 2 / 4 / 2 / 4
Salvo Size: 1 / 1 / 1 / 1
Number of Rails: 2 / 4 / 2 / 4
Stockpile: 100 / 100 / 100 / 100
Time to Reload (min): 10 / 15 / 10 / 15

Figure 60 – Scenario SAM Site
Parameter: Blue 1 Aircraft / Blue 2 Aircraft / Red Aircraft*
Mission Duration: 2.5 / 2.5 / 2.5
Speed: 750 / 750 / 550
Range: 750 / 750 / 750
Relative Detectability: variable / variable / 0.75
Maneuverability: variable / variable / 0.05
Max. Altitude (ft): 30,000 / 30,000 / 25,000
Standard Conventional Load: 4 ATG-1 / 4 ATG-2 / 0
(*in this scenario, the Red Aircraft is a target only, stationed at the Red airbase)
Figure 61 – Scenario Vehicles
Facility: Blue Airbase 1 / Blue Airbase 2 / Red Airbase
Runways: 4 / 4 / 4
Shelters: 20 / 20 / 20
Revetments: 10 / 10 / 10
Aircraft in Open: 70 / 70 / 70
No. of Blue 1 Aircraft: 50 / 50 / 0
No. of Blue 2 Aircraft: 50 / 50 / 0
No. of Red Aircraft: 0 / 0 / 100
Weapons (ATG-1,2): 400 / 400 / 0
Figure 62 – Scenario Airbases
On Day 1, cruise missiles are sent from the surface ship to attack the four SAM sites. One missile is sent to target each site sequentially, and they are sent an hour apart. A twelve hour span occurs with no attacks, during which time the SAM sites attempt repairs. A second volley, identical to the first, begins again. The first day attacks are meant to damage the SAM sites, increasing the success of the Day 2 mission. On Day 2, the Blue airbases use the notional aircraft to attack both the SAM sites and the protected airbase. Four strike packages are created, with six aircraft in each. The first two aircraft, of the same type, are sent to target the first SAM site, and are tasked
with attacking the radar and the fire control systems associated with that SAM site. The next four aircraft of the strike package are identical, but of the other type than that attacking the SAM site. These four aircraft target airbase facilities: runways, shelters, revetments, and aircraft in the open. For the next strike package, the next SAM site is targeted, and the type of aircraft sent to the targets alternates. Only RSAM1 and RSAM2 are legitimate targets, as the geometry shows. The strike packages occur four hours apart, and both the airbase and the SAM sites are allowed to attempt repair during the no-strike hours. The attacks from the second airbase are identical to the first.
Summary of Conceptual Model
The conceptual model is now completed for the example survivability case. It is the most important step in the process, and one that entails quite a bit of preparation and pre-analysis, as the length of this section attests. To summarize, the conceptual model is an iterative process that defines the entire analysis pathway for the problem. First, the problem must be well defined. This is accomplished by answering three key questions. In so doing, the scope of the problem is clearly outlined, and the necessary tools are identified. Next, baselines are established that will be used in the investigation. System level inputs and outputs are defined which are applicable to an appropriate scenario. Once the conceptual model is completed, the analyst may return to the rest of the POSSEM process. The example survivability problem is continued with the next step.
Example: Identify Key Decision Nodes
The next step in POSSEM is to use the scenario developed in the conceptual model and identify the key decision nodes that occur. The motivation for this is to attempt to rectify the Human in the Loop problem discussed in Chapter V. These decision nodes represent the places in the code at which a human operator would normally stop and evaluate the inputs and decisions made up to that point in the model run. Decision trees are used to aid in the mapping of the decisions. Figure 63 is a repeated figure of the decision node step in POSSEM.
Figure 63 – Identify Key Decision Nodes Step in POSSEM
Using the scenario discussed earlier, the first decision point occurs at the beginning of Day 1. The decision is whether or not to use cruise missiles fired from the surface ship to attack the Red airbase. The motivation for this attack is to damage the defensive SAM sites, increasing the effectiveness of the attacking aircraft on the successive day. The drawback of this decision is the potential cost associated with using the cruise missiles, and the potential depletion of that weapon source.
The second decision concerns which Blue airbase will be used to attack the Red airbase. Examination of the geometry in Figure 58 shows that an attack from the south would invade the defenses of RSAM-2. However, the attack route would take the aircraft directly over the SAM site, maximizing exposure time to defensive weapons. An attack from the southeast airbase would provide a route between RSAM-1 and RSAM-2, potentially exposing the aircraft to both SAM sites, yet minimizing exposure to both. Therefore, the second decision point is whether to attack from the south or the southeast. Because the example presented here is a proof of concept, it was decided that two decision points would be enough to demonstrate the efficacy of the method.
The
resulting decision tree is shown in Figure 64. The next step was to assign probabilities to each of the decision nodes. Because the use of cruise missiles does not put human life at risk, and comparing the relative costs of a cruise missile and a military strike aircraft, it was thought the most likely decision would be to use the cruise missiles on Day 1. Thus, a probability of 0.7 was assigned to the use of cruise missiles. Concerning the selection of the attacking airbase, it was thought that the southeast airbase might be less risky for the aircraft than the south airbase.
However, it is assumed there was insufficient
intelligence about the defensive capabilities (or perhaps even the exact locations) of the SAM sites, so the southeast attack was only considered slightly more favorable than an attack from the south. A 0.4 probability was assigned to an attack from the south, and a 0.6 probability assigned to a southeast attack. The resulting decisions will be modeled using the linked analysis environment developed in the next step.
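To show how these assigned probabilities can drive repeated campaign runs, the following sketch samples the two decision nodes. The branch probabilities are those given above; the Monte Carlo style sampling itself is only one possible way of exercising the tree.

# Illustrative sampling of the two decision nodes described above.
import random

def sample_decisions(rng):
    """Return one (use_cruise_missiles, attack_airbase) pair for a campaign run."""
    use_cruise = rng.random() < 0.7          # Day 1: 0.7 probability of using cruise missiles
    # The 0.4 south / 0.6 southeast split is assumed to apply on both Day 1 branches.
    attack_from = "south" if rng.random() < 0.4 else "southeast"
    return use_cruise, attack_from

rng = random.Random(1)
runs = [sample_decisions(rng) for _ in range(1000)]
share_cruise = sum(1 for c, _ in runs if c) / len(runs)
share_southeast = sum(1 for _, a in runs if a == "southeast") / len(runs)
print(f"cruise missiles used in {share_cruise:.0%} of runs, "
      f"southeast attack in {share_southeast:.0%}")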
Figure 64 – Decision Nodes for POSSEM Survivability Example
Example: Create Linked Analysis Environment
Now that the conceptual model has been created and the scenario and its decision nodes have been identified, it is time in the process to create the linked analysis environment. The conceptual model was classified as the most important step in the process. The creation of the analysis environment is now labeled the most difficult and time consuming step in the process. Even after the codes have been selected, linking them together in a cohesive, seamless way is by no means trivial, and setbacks at this step could endanger the entire process. The outline of the step is repeated in Figure 65. The goal of this step is to take the analysis codes decided upon in the conceptual model
and create software links between them, in order to use the finished product as an analysis tool.
Figure 65 – Create Linked Analysis Environment Step in POSSEM
The first step in this process is to examine the system level inputs and outputs determined by the conceptual model. The goal is to have a smooth analysis pathway that links the system level inputs to the system level outputs through the military code continuum. Assuming that middle level mappings are necessary, the outputs of one level must map into the inputs of the next. Part of this step, therefore, is to determine both these middle level mappings and the appropriate intermediate inputs and outputs. Each level in the analysis pathway will now be discussed, including inputs and outputs, and the links between them.
Engineering Level: FLOPS
The inputs to the engineering level modeling tool FLOPS were presented and discussed previously, and are shown in Table 12. The selected output parameters for FLOPS are shown in Table 14. These consisted mostly of basic geometric sizing results (various weights) and performance parameters. The performance parameters will be used to map into maneuverability metrics at the middle level. The alternate range output is a result of the secondary mission. It is in effect the combat radius of which the sized aircraft is capable when performing the new secondary mission. Realize that this number changes as the altitude of the secondary mission changes. It is a measure of the capability of the aircraft. While some of the output data will not directly be mapped as inputs into the next level, they were retained for insight and are discussed in the analysis section. (Note that parsing output parameters from the existing FLOPS output files is a computationally inexpensive task.)
Table 14 – Output Parameters Tracked from FLOPS
Tracked Outputs:
Takeoff Gross Weight (lbs)
Operating Empty Weight (lbs)
Takeoff Field Length (ft)
Landing Field Length (ft)
Alternate Range (nmi)
Excess Energy Sizing (fps)
Excess Energy Alternate (fps)
Turn Rate Sizing (deg/s)
Turn Rate Alternate (deg/s)
Turn Radius Sizing (ft)
Turn Radius Alternate (ft)
Radar Cross Section Mapping
As mentioned earlier, Reference 77 contained a study that, in part, mapped aircraft geometric variables to radar cross section. This was accomplished by coupling geometric modeling tools together with a radar cross section prediction code and applying the methods of Chapter II. Because the baseline aircraft of that study was the F/A-18C and the variables and ranges used were identically chosen for use in this study, the resulting response surface equation was considered to be valid and useful in the present study. Figure 66 illustrates how the response surface equation was used. The equation of Reference 77 was a function of not only the wing geometric features but also the horizontal and vertical tail features.
Yet the results of that study, shown in the
accompanying prediction profile, showed little effect from the empennage variables on the radar cross section. Those variables were thus eliminated from the current study. In the equation, the values corresponding to the empennage variables were set to their nominal (midpoint) values and added to the intercept term. The reduced equation was then imported and combined with the other FLOPS output. Results are discussed in the analysis section. This completes the engineering level mapping. Figure 67 shows the engineering level mapping, including the inputs and outputs. The grey areas represent the mappings that still need to take place to achieve the complete cohesive analysis pathway. To try and identify what needs to be in the grey box, the outputs at the campaign level are examined and, working backwards, the inputs to that level are determined. In an ideal
situation, the outputs from the engineering level would map directly as the inputs to the campaign level. Realistically, however, this does not happen and a middle level mapping becomes necessary. The campaign level is now examined.
(Figure annotations: RCS = b0 + f(wing geometry, horizontal tail geometry, vertical tail geometry); the wing variables (area, aspect ratio, sweep, taper ratio, t/c) are retained for the current study; the empennage variables are set to their nominal values and added to the intercept term.)
Figure 66 – Use of Pre-existing Response Surface Equation for Radar Cross Section
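The reduction shown in Figure 66 amounts to evaluating the empennage terms at their nominal values and folding them into the constant term of the response surface equation. The sketch below illustrates this for a generic linear RSE; the coefficients and nominal values are made up and are not those of Reference 77.

# Illustrative reduction of a response surface equation: the terms in the
# empennage variables are evaluated at their nominal values and folded into
# the intercept, leaving an equation in the retained wing variables only.
# Coefficients and nominal values are made up, not those of Reference 77.
def reduce_rse(intercept, linear_terms, nominal_values, retained):
    b0 = intercept
    kept = {}
    for var, coeff in linear_terms.items():
        if var in retained:
            kept[var] = coeff
        else:
            b0 += coeff * nominal_values[var]   # fold nominal contribution into b0
    return b0, kept

terms = {"wing_sweep": 0.12, "wing_area": -0.01, "aspect_ratio": 0.5,
         "taper_ratio": 3.0, "t_over_c": 40.0,
         "htail_area": 0.002, "vtail_area": 0.004}
nominals = {"htail_area": 90.0, "vtail_area": 60.0}
b0, reduced_terms = reduce_rse(-25.0, terms, nominals,
                               retained={"wing_sweep", "wing_area",
                                         "aspect_ratio", "taper_ratio",
                                         "t_over_c"})
print(f"reduced intercept = {b0:.3f}, retained terms = {sorted(reduced_terms)}")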
Figure 67 – Engineering Level Mapping
Campaign Level: ITEM
At the campaign level, the outputs have already been identified. These are the outputs that will help the analyst gain insight into the problem being explored. Having previously identified ITEM as the campaign modeling tool of choice, the task is now to examine the inputs to ITEM and to select those most appropriate. Remembering the elements of the scenario to be used in the analysis, it is clear that the input variables associated with these elements must be examined. The scenario contains two Blue aircraft. These aircraft are the notional aircraft carried forth from the engineering level, and are modeled after the F/A-18C baseline. The aircraft differ by the standard conventional load each carries. The variables available in ITEM for an aircraft element are shown in Figure 68 [79]. The primary variables of interest in this study are the relative detectability and the maneuverability factors. Relative detectability, as discussed in more detail in the next section, is directly analogous to radar cross section. Both factors vary from 0 to 1. For detectability, a value of 1 means there is no reduction in aircraft detectability. A value of 0 means the aircraft is never detectable and thus never vulnerable to defensive systems.
The aircraft
maneuverability factor is a scaling factor on the probability of hit of defensive weapons. A value of 1 means there is no reduction in probability of hit. A value of 0 means the aircraft will never be hit. (The mathematical calculations used in ITEM involving these factors will be discussed in more detail in the analysis section). These two factors were selected as appropriate aircraft variables in the study. Other aircraft-specific inputs were set to values appropriate for the scenario.
Figure 68 - Aircraft Data Window in ITEM [79]
It should be noted at this point that the variables used to represent an aircraft in ITEM are quite few. This reinforces the argument put forth in this research that it is indeed necessary to link to a higher fidelity aircraft modeling tool in order to propagate changes made at the engineering level of an aircraft through to their effect at the campaign level. ITEM, and many similar campaign codes, simply do not contain the detail necessary to conduct such an analysis. Of course, as stated in Chapter III, this is a
normal characteristic of campaign level codes: a tradeoff is made between analysis detail and analysis breadth. The next set of variables was chosen to address one of the three primary issues identified in Chapter V.
It was postulated that a varying threat environment was
necessary in order to obtain the most robust analysis. Inputs were thus selected that would allow variability in the threat environment of the given scenario. These inputs included the tracking ranges of the RSAM-1 and RSAM-2 sites, as well as their repair capabilities. The last two variables are the probabilities of hit for the SAM-1 and SAM-2 weapons. (Realize that RSAM-3 and RSAM-4 are in essence non-players for the air attack events, due to the geometry of the problem. They were retained to mimic a cohesive defense environment around the airbase, and also come into play during the Day 1 cruise missile attacks). In summary, the varying threat environment includes tracking ranges that shrink and grow, varying weapons capabilities, and changing repair rates at the defensive facilities. The complete set of ITEM inputs is shown in Figure 69, which also shows the mapping between variables at this point.
Figure 69 – Engineering Level Mappings and Campaign Level Mappings
Mission Level Mapping At this point the current analysis pathway was examined and a missing link discovered: some outputs of FLOPS did not map directly into inputs of ITEM (as is shown graphically in Figure 69). The two links that needed to be established were the mapping of radar cross section into ITEM, and the mapping of performance values, as a measure of maneuverability, into ITEM. These mappings correspond to the mission level in the military code continuum. This does not necessarily imply that a mission level code must be used. In a process similar to that of the conceptual model, candidate modeling codes and mapping capabilities are examined and the necessary level of detail is determined. For this particular problem, it was determined that the primary goal was to
establish an appropriate linked relationship between engineering level inputs and campaign level outputs. No direct analysis was needed at the mission level. Thus, a mission level code was not required to conduct the necessary mappings specific to the example being presented, but mission level relationships were needed. These mappings are discussed in the following sections. Detectability The first mapping needed was between the output of radar cross section and a similar input variable in ITEM.
The variable of “detectability” was examined.
According to Reference 79, the detectability variable was described as “The relative detectability of the aircraft. This is a scaling factor used to reduce the probability of detection of opposing forces against the aircraft in the engagement modules in ITEM. If the Relative Detectability is set to 1.0 there is no reduction in detectability. If it is set to 0.0, the aircraft will not be detected and therefore will not be vulnerable to defensive systems. Values between 0.0 and 1.0 reduce the probability of detection and, hence, the probabilities of kill for defensive systems proportionately.” This definition implies some sort of proportional relationship between detection range of defensive weapons and the detectability factor.
To truly understand what that
relationship is, and therefore use the variable correctly, Reference 58 was consulted. This reference describes how the non-horizon-limited radar range for a tracking SAM site against a raid element (aircraft) is computed.
The relative detectability of the raid
element is labeled $F_{reldet,r.e.}$ and the SAM site's tracking range is $R_{trk,s}$. The non-horizon-limited radar range is then calculated as

$$R_{det\_unlim,e} = R_{trk,s}\,\left[F_{reldet,r.e.}\right]^{1/4}$$

To understand the significance of this equation, the radar range equation is invoked [80]:

$$R_{max} = \left[\frac{P_t\, G^2\, \lambda^2\, \sigma\, L}{(4\pi)^3\, P_{min}}\right]^{1/4}$$

where
P_t is the radar transmitter power
G is the gain or electronic amplification of the radar
λ is the wavelength of the radar
σ is the radar cross section
L is a loss factor accounting for atmospheric attenuation of radar energy
P_min is the minimum receiver signal threshold, determined by the background noise of the environment and the number of false alarms allowed
and where

$$P_{min} = TNR \cdot k\, T_s$$

TNR is the Threshold to Noise Ratio
k is Boltzmann's constant (1.38 × 10⁻²³ Watt-second/K)
T_s is the system noise temperature and includes internal radar and electronics and external random noise effects

For a given radar system, the equation reduces to
$$R = c\,\sigma^{1/4}$$
Note the similarities with the non-horizon-limited radar range equation from ITEM! From this, it is clear that the detectability variable in ITEM is directly analogous to radar cross section. Thus, the radar cross section values output as a result of the mapping done at the engineering level can be directly mapped to the detectability variable in ITEM. The parasitic weight term added as a variable must also be mapped to radar cross section. In the referenced study in which parasitic weight was added to a notional vehicle, only the detriments of such an effect were catalogued, primarily the changes in takeoff gross weight and the performance penalties [78]. It was concluded that to model the positive effects of adding stealth in the form of parasitic weight to the aircraft, a multiplicative scaling factor on the actual RCS should be used. The next issue was what range should be used for this scaling factor. Research in the open literature was not particularly fruitful: values of the RCS reduction achieved by stealth techniques are highly classified. Because the parasitic weight was intended to model an improvement analogous to radar-absorbing paint, research was conducted in that area. Words such as "significant" and "dramatic" were used to describe the reduction in RCS, but again, no numbers were ever found. One source, however, did make the claim that "the USAF uses a type of radar-absorbing paint known as 'Iron Ball' which reduces the RCS of F-15 and F-16 fighters by 70% to 80%" [81]. Assuming that the F/A-18C is roughly analogous to these aircraft, it was assumed that such an improvement could also be achieved by this aircraft. Therefore, it was thought appropriate to examine an 80%
improvement in radar cross section due to radar-absorbing paint, modeled as parasitic weight. Realize, however, that this is merely a proof of concept study, and that the number is in fact a variable in the study. If better numbers are found for this range, due to new information or expert opinion, the methodology is such that the environment will already exist and the range may be changed without re-running cases. This is one primary advantage of the methods discussed in Chapter II and used herein. One final note is made on the multiplicative factor used on the RCS equation before discussing its implementation. The radar cross section is reported using the decibel scale. This is a logarithmic unit defined by
$$P(\mathrm{dB}) = 10 \log_{10}(P/P_0)$$
Usually square meters are used for the reference $P_0$, making the resultant unit dBm². In order to use a multiplicative factor directly on the RCS equation, the RCS values were converted back into m², and then the multiplicative factor was applied. The mechanics behind the RCS mapping are now discussed. Recall that the geometric variables from the engineering level were mapped to a radar cross section value using a response surface equation from Reference 77. This value will be referred to as RCS_old:
$$RCS_{old} = f(\text{aspect ratio, wing area, wing sweep, thickness/chord})$$
A multiplicative factor is used on RCS_old to account for the change in radar cross section due to the addition of the parasitic weight:
$$RCS_{new} = k_{rcs/pw}\, RCS_{old}$$
As stated earlier, the $k_{rcs/pw}$ factor spans the range from 0% improvement in radar cross section to 80% improvement in radar cross section. Recalling that the radar cross section will be mapped into the detectability variable in ITEM, and further recalling that by definition the variable must vary from 0 to 1, with 1 being no reduction in detectability and 0 being never detected, the $k_{rcs/pw}$ factor must be applied appropriately. Thus, the $k_{rcs/pw}$ factor ranges from a value of 1 (no reduction of RCS) to 0.2 (80% reduction in RCS). To calculate the resulting detectability factor, a reference value of RCS must be chosen. The maximum value of RCS possible, given the ranges of the variables, was shown to be 0.0541 m². All other possible values represent a decrease in RCS, so this value is chosen as the reference value. The detectability value then becomes:
$$\mathrm{Detectability} = \frac{RCS_{new}}{0.0541} = \frac{k_{rcs/pw}\, RCS_{old}}{0.0541}$$
A comment is made here regarding the value of the RCS used to nondimensionalize the detectability. It is thought that this value, representing the highest value of RCS given the variable ranges, is low by at least two orders of magnitude. The model fits of this number are discussed later in the analysis section, and possible reasons for the low value of this number are given. One possibility is that the
modeling code used to generate this value did not incorporate the effect of the engines and inlets on the RCS. The RCS and corresponding response surface equation were found to be acceptable as incorporated into the research of Reference [77], and were thus considered adequate to use in the proof of concept of POSSEM. When calculating the maximum and minimum values possible for detectability, it is found that the detectability ranges from a value of 1 to a value of 0.157. While this may seem like quite a range, it must be remembered that the detectability factor is directly analogous to radar cross section, and, as such, the track range of defensive systems will be decreased by the fourth root of the detectability value. Thus, significant changes in detectability are required to make significant changes in track range. Returning to the original parasitic weight range, the relationship between parasitic weight and the k factor can be described as:
$$\Delta\,\text{parasitic weight} = \frac{\left(1 - k_{rcs/pw}\right)}{0.8} \times 1000$$
Thus, it can be seen that the relationship between changes in parasitic weight and radar cross section was assumed to be linear. Finally, one more mapping through radar cross section to detectability is performed. At the engineering level, the effect of tactics was modeled by changing the altitude of the secondary mission. This mission was completed using the sized aircraft from earlier in the process. The rerun, or alternate, mission is shown in Figure 70. The purpose of this mission was to vary the altitude of the cruise section from sea level to 10,000 feet in order to model an aircraft flying close to the terrain to avoid enemy radar.
Performance metrics, as well as alternate range, were tracked in order to quantify the performance and range penalty the aircraft would pay for this kind of tactic. The altitude of the flight should have a direct, positive effect on aircraft detectability: the lower the aircraft flies, the lower its detectability. However, when an aircraft engages in terrain following, the goal is to fly completely under the defensive radar, and thus be completely undetectable. This is a discrete event, and one that is difficult to model using the techniques employed in this research. To combat this, it was decided that an exponential scaling factor on detectability was most appropriate. At altitudes closer to sea level, the detectability is very low. As the aircraft increases its altitude, however, its detectability rises at an exponential rate. (Realize, however, that when the altitude gets sufficiently high, the detectability again decreases. This high-altitude reduction in detectability was not modeled.) This form of modeling has the advantage of a fairly sharp increase in detectability, approximating a discrete change in a continuous way. The formulation has another advantage as well: because the exact altitude at which the aircraft becomes visible to the defensive radar is unknown, an exponential curve helps to model the uncertainty in the altitude at which the aircraft goes from being undetectable to detectable.
[Figure content: secondary mission profile with variable cruise altitude (sea level to 10,000 ft); combat segment of 2 minutes at maximum thrust, Mach 1.0 (missiles retained); range calculated using available fuel; reserves of 20 minutes loiter at sea level plus 5% of takeoff fuel; start, taxi, and acceleration to climb speed of 4.6 minutes at intermediate thrust, SLS.]
Figure 70 – Secondary Mission Using Variable Altitude
The final formulation for the detectability factor is
$$\mathrm{Detectability} = \frac{k_{rcs/pw}\, RCS_{old}}{0.0541}\,(0.1)\, e^{(0.00023026)(\text{altitude})} \qquad \text{(for altitude} < 10{,}000\ \text{ft)}$$
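A minimal sketch of this detectability mapping, assuming the 0.0541 m² reference RCS and the exponential altitude scaling given above, is shown below; the dB-to-m² conversion is included for completeness, and the example numbers are illustrative only.

import math

RCS_REF_M2 = 0.0541   # maximum RCS over the variable ranges, used as the reference value

def dbsm_to_m2(rcs_db):
    # Convert an RCS reported in dB (relative to 1 m^2) back into square meters.
    return 10.0 ** (rcs_db / 10.0)

def detectability(rcs_old_m2, k_rcs_pw, altitude_ft):
    # Mission-level detectability factor fed to ITEM, valid up to 10,000 ft.
    # k_rcs_pw = 1.0 means no parasitic-weight RCS reduction; 0.2 means an 80% reduction.
    base = (k_rcs_pw * rcs_old_m2) / RCS_REF_M2
    altitude_scale = 0.1 * math.exp(0.00023026 * altitude_ft)  # 0.1 at sea level, ~1.0 at 10,000 ft
    return base * altitude_scale

def parasitic_weight_from_k(k_rcs_pw):
    # Linear relationship between the RCS scaling factor and added parasitic weight (lb).
    return (1.0 - k_rcs_pw) / 0.8 * 1000.0

# Track range shrinks only with the fourth root of detectability in ITEM's radar model:
d = detectability(dbsm_to_m2(-13.0), k_rcs_pw=0.2, altitude_ft=10000)
print(d, d ** 0.25)   # detectability of about 0.19 gives a track-range multiplier of about 0.66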
Maneuverability The next mission level mapping needing to take place was the mapping of performance variables into the maneuverability variable available in ITEM. According to References 79 and 58, the aircraft maneuverability variable is a direct multiplier of the probability of hit of engaging weapons: “The measure of an aircraft’s ability to evade defensive systems. This is a scaling factor used to reduce the probability of hit of engaging weapons against the aircraft in the engagement modules in ITEM. If the Maneuverability Factor is set to 1.0 there is no reduction in
probability of hit. If it is set to 0.0, the aircraft will not be hit and therefore will not be vulnerable to defensive systems. Values between 0.0 and 1.0 reduce the probability of hit and, hence, the probabilities of kill for defensive systems proportionately.” In order to conduct an appropriate mapping, it is important to understand what aircraft maneuverability is. There is not, however, a universally agreed upon metric to measure maneuverability.
As stated in Reference 82, “Recent studies have made
considerable progress toward the understanding of aircraft maneuverability, agility, and related control power requirements. Researchers still fail to agree on basic ideas with regard to definitions of these qualities and, consequently, how to measure them.” Stinton [83] uses the following definition: “manoeuverability is the ability to change direction: the greater the change in a given time, the more manoeuverable is the aircraft.” That maneuverability is some function of the ability of the aircraft to change direction is echoed in Raymer [84], who gives a partial list of measures of merit for fighter aircraft: turn rate, corner speed, load factor, and excess power.
A definition for vertical
maneuverability is given by von Mises [85] as the inverse of the minimum radius of curvature for an aircraft performing a vertical loop. Finally, Shaw takes a stand and defines maneuverability in the following way: “Turn performance is the ability of an aircraft to change the directions of its motion in flight. This direction is defined by the velocity vector, which may be visualized as an arrow pointing in the direction of aircraft motion and having a length proportional to the speed of that motion. Maneuverability is defined in this text as the ability of a fighter to change the direction of its velocity vector. The terms maneuverability and turn performance, therefore, may be considered synonymous.”
Although these definitions differ in specifics, they share a common, intuitive sense of the relationship between performance metrics and maneuverability. Thus, an appropriate mapping of performance metrics to a maneuverability factor is proposed as a weighted, non-dimensionalized equation that is a function of excess power, turning rate, and turning radius. Remember that these performance metrics were tracked at two places for each design: once in the sizing mission, and once in the alternate mission. It was felt that examining both of these trackings would add insight to the analysis. The resulting formulation of the maneuverability mapping is therefore:
$$\mathrm{Maneuverability} = \frac{k_1}{2}\left[\frac{\min P_{s,sizing}}{P_{s,sizing}} + \frac{\min P_{s,alternate}}{P_{s,alternate}}\right] + \frac{k_2}{2}\left[\frac{\min T_{rate,sizing}}{T_{rate,sizing}} + \frac{\min T_{rate,alternate}}{T_{rate,alternate}}\right] + \frac{k_3}{2}\left[\frac{T_{rad,sizing}}{\max T_{rad,sizing}} + \frac{T_{rad,alternate}}{\max T_{rad,alternate}}\right]$$
with k₁ + k₂ + k₃ = 1, where
min P_s,sizing is the minimum value of excess energy from the sizing mission
P_s,sizing is the current value of excess energy from the sizing mission
min P_s,alternate is the minimum value of excess energy from the alternate mission
P_s,alternate is the current value of excess energy from the alternate mission
min T_rate,sizing is the minimum value of turning rate from the sizing mission
T_rate,sizing is the current value of turning rate from the sizing mission
min T_rate,alternate is the minimum value of turning rate from the alternate mission
T_rate,alternate is the current value of turning rate from the alternate mission
T_rad,sizing is the current value of the turning radius from the sizing mission
max T_rad,sizing is the maximum value of the turning radius from the sizing mission
T_rad,alternate is the current value of the turning radius from the alternate mission
max T_rad,alternate is the maximum value of the turning radius from the alternate mission
The three k factors are the weighting factors for each of the performance metrics. This allows the analyst to explore the various effects of each of the performance metrics, and also allows the analyst to use her/his own experience and data to provide appropriate weighting factors.
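A minimal sketch of this weighted, non-dimensionalized mapping is shown below; the numerical inputs are placeholders, and the equal k-weights (which must sum to one) are simply a default.

def maneuverability(ps, trate, trad, bounds, k1=1/3, k2=1/3, k3=1/3):
    # Weighted, non-dimensionalized maneuverability factor (a lower value is better).
    # ps, trate, trad: current excess energy, turn rate, and turn radius, each a dict
    # with 'sizing' and 'alternate' entries.  bounds holds the minimum excess energy,
    # minimum turn rate, and maximum turn radius over the design space, keyed the same way.
    assert abs(k1 + k2 + k3 - 1.0) < 1e-9
    m = 0.0
    for mission in ("sizing", "alternate"):
        m += (k1 / 2) * bounds["min_ps"][mission] / ps[mission]
        m += (k2 / 2) * bounds["min_trate"][mission] / trate[mission]
        m += (k3 / 2) * trad[mission] / bounds["max_trad"][mission]
    return m

# Placeholder values for one design:
bounds = {"min_ps": {"sizing": 590.0, "alternate": 559.0},
          "min_trate": {"sizing": 13.3, "alternate": 12.6},
          "max_trad": {"sizing": 3701.0, "alternate": 3907.0}}
print(maneuverability({"sizing": 652.0, "alternate": 620.0},
                      {"sizing": 14.6, "alternate": 13.8},
                      {"sizing": 3392.0, "alternate": 3572.0}, bounds))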
Linking the Codes Together At this point, there is a clear and uninterrupted analysis pathway established between the engineering level inputs and the campaign level outputs. All necessary intermediate inputs and outputs have been identified, and all essential mappings have been created. Returning to the issues identified in Chapter V, this step addresses the level of detail issue. So far, through the selection of the codes and the links created between them, the analyst has performed model integration. Remember, however, that it has been postulated that the optimum analysis capability comes from a blending of model
integration and model abstraction techniques (“abstragration”). The analysis pathway is thus examined to see if model abstraction techniques would be beneficial. The goal of the analysis in this example problem is to add survivability enhancements to a baseline aircraft and propagate the effects of these enhancements to the campaign level. The analysis pathway consists of two fairly complex modeling codes and a set of intermediate mapping relationships. A variety of inputs and outputs are necessary at each level. In addition, a probabilistic threat environment is desired, and all resulting pathways from the defined decision nodes must be analyzed. In order to complete the analysis in a computationally efficient way, it was decided that the modeling codes themselves should be subjected to model abstraction techniques. The response surface metamodeling techniques discussed in Chapter II and used for the preliminary results were therefore applied. The complete analysis pathway of the linked analysis environment, employing both model integration and model abstraction techniques, is now shown in Figure 71.
[Figure content: engineering level inputs feed the response surface equations of FLOPS, whose outputs pass through the middle level mappings into the response surface equations of ITEM, yielding the campaign level outputs; probabilistic threat variables enter at the campaign level.]
Figure 71 – Complete Linked Analysis Environment Created for Survivability Example
Example: Create Full Probabilistic Environment At this point in the process it is time to start doing some computational work. The conceptual plan has been defined and the analysis environment created. These are the tools to be used to create the data for the analysis. The full probabilistic environment is the way in which this data is created. Figure 72 is repeated here to show the elements of the probabilistic environment. These are, in effect, elements that are described in Chapter II.
These elements will be combined with and applied to the Linked Analysis
Environment.
Figure 72 – Create Full Probabilistic Environment Step in POSSEM
Creation of the Metamodels First, the metamodels, in the form of response surface equations, must be created for both the engineering level and the campaign level (as identified in the Create Linked Analysis Environment step). The process for doing this is identical to that discussed in Chapter II.
First, a Design of Experiments was created for the set of input variables using the statistical analysis package JMP. This file was then imported into the Unix environment in which FLOPS is run. A Tcl/Tk script was then written that used the DoE table and a baseline FLOPS input file to replace the baseline input variables with the values specified in the DoE table. A script was then used to run each of the FLOPS cases and parse the responses from the resulting output files. This data was then imported back into the JMP environment for analysis and the creation of the response surface equations. Creating the response surface equations for ITEM was a much more difficult process. ITEM is normally run using a graphical interface on a workstation. While a batch mode of ITEM did exist and was made available, the batch mode worked only with a binary data file. Thus, the method employed above, using a script to replace the input values, would not work. Yet running each of the cases through the GUI would be rather
impractical. A process that maximized the efficiency of running the cases was finally developed, utilizing both scripting techniques and graphical interfaces. First, a binary data file was created using the ITEM graphical interface. This data file was saved. At the same time, the graphical interface was used to save an ASCII file. This ASCII file, however, could only contain data pertaining to the elements of the campaign. The campaign itself (the timelines, strike packages, etc) could only be saved in the binary format. Also, the saved ASCII file had to have the same name as the binary file, or the ASCII file would not load back into the graphical interface. The ASCII file was then used as a baseline file and subjected to a script, similar to the FLOPS script, that would use the DoE table to find and replace the variables of interest. Realize that this process limits the choices of variables for use: only element characteristics could be chosen. The results of the script were numbered ASCII files, one for each DoE case. The ITEM program was then started in the graphical interface mode. The original binary data file was loaded. Each ASCII file was then loaded through the graphical interface and saved as a binary. Realize, however, that because the file names had to match, each numbered ASCII case had to be renamed right before it was loaded, then loaded, then saved again in binary format as its numbered name. This changed the name of the original binary file, so that file needed to be reloaded before the next ASCII file was loaded. This was the only way to combine the existing campaign plan with the changed element data into a single binary data file. A script was then used to run the binary files in batch mode, and to parse the resulting data.
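The find-and-replace step at the heart of both the FLOPS and the ITEM ASCII case generation (originally done with a Tcl/Tk script) can be sketched as follows; the file names and the @TOKEN@ convention are hypothetical, not those of the actual input decks.

import csv
from pathlib import Path

# Sketch of the DoE-driven find-and-replace used to generate one input file per case.
# The baseline file is assumed to contain tokens such as @WING_AREA@ at the values to
# be varied; token names, file names, and the CSV format are hypothetical.
def generate_cases(baseline_path, doe_table_path, out_dir):
    baseline = Path(baseline_path).read_text()
    out_dir = Path(out_dir)
    out_dir.mkdir(exist_ok=True)
    with open(doe_table_path, newline="") as f:
        for i, row in enumerate(csv.DictReader(f), start=1):
            case_text = baseline
            for var, value in row.items():                 # substitute each DoE variable
                case_text = case_text.replace("@" + var + "@", value)
            (out_dir / ("case_%03d.inp" % i)).write_text(case_text)

# generate_cases("baseline.inp", "doe_table.csv", "cases")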
The process described above is discussed here to illustrate a key point in using this metamodeling technique and methodology at the campaign level. Many modeling codes use graphical interfaces for data input and output. The metamodel method relies for its efficiency on the ability to rapidly run hundreds of cases and create a metamodel from the results.
When faced with manually running hundreds of cases through a
graphical interface, careful trade studies must be conducted to ensure that the method does indeed save execution time in the long run. For this example, there were four decision paths that needed to be explored. Each path required 129 code runs. Each set of 129 runs took one hour of run time to create the ASCII files, and the better part of a day for someone to manually create the binary files for the batch mode. Depending on the decision path and the number of inputs and outputs desired, the batch mode took between four and eight hours to run on a workstation for each set of 129 cases, and another two hours to parse the data. Factoring in the "doh!" effect (stupid errors that are a natural result of one person performing a tedious, repetitive task for several hours), plus adding the setup time for the original creation of each scenario and campaign, the cases here took a total of one week to create and run. This does not factor in the learning curve for the use of the code, or the development time for figuring out how to make the metamodel process work. The lesson here reinforces the importance of the conceptual model. It is imperative to be aware of the benefits and limitations that come with each code selected, and to use it properly and efficiently. In this particular case, the benefits of the analysis capabilities of the code, coupled with the ability to run the code in a semiautomatic way, were clearly worth the somewhat difficult setup process.
The Complete Probabilistic Analysis Environment The resulting output data from both FLOPS and ITEM were imported back into JMP and the resulting response surface equations were created. An Excel spreadsheet environment was then created utilizing the response surface equations and the middle level mapping relationships. The input/output screen of this spreadsheet is shown in Figure 73. Values for the variables at each level (engineering, mission, and campaign) can be input directly into this screen. The equations are then invoked and the responses are automatically updated. With this tool, the analyst can put in whatever combination of variables (within the ranges for which the equations are valid) they wish to explore, and instantaneously see the effects. No more cases need to be run: an entire analysis space has been captured that links engineering variables all the way through to campaign level metrics of effectiveness. A primary goal of POSSEM is to provide the analyst the method of creating this analysis tool.
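The spreadsheet environment amounts to evaluating the fitted second-order polynomials at whatever input values the analyst supplies; a minimal sketch of such an evaluator is shown below, with illustrative (not fitted) coefficients.

def evaluate_rse(x, intercept, linear, quadratic):
    # Evaluate a second-order response surface equation:
    #   y = b0 + sum_i(b_i * x_i) + sum_ij(b_ij * x_i * x_j)
    # x and linear are dicts keyed by variable name; quadratic is keyed by (name1, name2).
    y = intercept
    for name, b in linear.items():
        y += b * x[name]
    for (n1, n2), b in quadratic.items():
        y += b * x[n1] * x[n2]
    return y

# Illustrative coefficients for a single response (not the actual fitted equations):
x = {"wing_area": 450.0, "aspect_ratio": 3.5, "parasitic_weight": 500.0}
linear = {"wing_area": 12.0, "aspect_ratio": 800.0, "parasitic_weight": 1.4}
quadratic = {("wing_area", "aspect_ratio"): -0.5, ("parasitic_weight", "parasitic_weight"): 1e-4}
print(evaluate_rse(x, intercept=30000.0, linear=linear, quadratic=quadratic))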
Figure 73 – Complete Probabilistic Analysis Environment
Adding the Probabilistics The real power of the analysis environment of Figure 73 lies not in the ability to explore single value effects, but rather in the ability to explore the analysis space in a fully probabilistic manner. To do this, the probabilistic analysis package Crystal Ball [33] is used within the Excel environment. Probability distributions can be placed around any input value, and a Monte Carlo analysis conducted. This capability fulfills the promise of Figure 35: a fully probabilistic environment results. The design variables are allowed to vary probabilistically. At the mission level, k-factors are added
to allow the user to vary the effect of parasitic weight on radar cross section, and to allow the user to define which performance metrics, and by how much, affect the aircraft’s total maneuverability.
These factors can be set to a specific value, or allowed to vary
probabilistically. And at the campaign level, a probabilistic threat environment is created by assigning distributions to the threat variables. The final infusion of probabilistic variables comes from the decision points. Remember that there were two decision points, with two options each, for a total of four possible scenarios. Response surface equations were created for each of these scenarios and are shown in Figure 73 as Camp1 through Camp4. The next column is called "Combined Campaign". This value results from applying a binomial distribution around each of the decision points, with the likelihood values corresponding to the probability of reaching that decision. When the Monte Carlo is run, the results that are tallied come from whichever scenario is "chosen" by the binomial distribution draws. This results in a cumulative combined campaign output. To summarize, there are four areas in which probabilistic inputs can be applied. Distributions may be placed around the design variables, the mission level k-factors, and the campaign level threat variables. A binomial distribution is placed around the defined values for the decision points, and these, too, may be changed by the analyst to explore the effect of changing the decision likelihood. The resulting analysis environment is powerful, fast, and allows the analyst to play a myriad of exploratory "games", providing information in a timely, intuitive manner.
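The Monte Carlo over design, threat, and decision-point distributions (performed in the actual environment with Crystal Ball operating on the Excel response surface equations) can be sketched as follows; the stand-in response function, distribution choices, and decision probabilities are illustrative only.

import random

# Stand-in for the four campaign response surface equations (Camp1..Camp4), one per
# decision-node pathway; purely illustrative, not the fitted equations.
def campaign_response(pathway, detectability, track_range_rsam2):
    base = {1: 95.0, 2: 93.0, 3: 91.0, 4: 89.0}[pathway]
    return base - 4.0 * detectability - 0.05 * (track_range_rsam2 - 50.0)

def combined_campaign(n_samples=10000, p_cruise_day1=0.7, p_south_attack=0.5):
    total = 0.0
    for _ in range(n_samples):
        detect = random.uniform(0.15, 1.0)              # probabilistic design/mission input
        track2 = random.triangular(40.0, 60.0, 50.0)    # probabilistic threat input
        # Bernoulli draws at each decision point select which scenario is tallied:
        cruise = random.random() < p_cruise_day1
        south = random.random() < p_south_attack
        pathway = 1 + (0 if cruise else 2) + (0 if south else 1)
        total += campaign_response(pathway, detect, track2)
    return total / n_samples

print(combined_campaign())   # mean of the cumulative combined-campaign response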
Example: Analysis The final step of the POSSEM process is to perform the analysis (as shown in Figure 74). This step uses the data and analysis environment created in the previous steps to provide insight and quantifications to the analyst. To help the reader follow the progression of results, a roadmap of sorts is provided in Figure 75. First, the response surface equations for the engineering level, the mission level, and the campaign level will be examined and discussed. Even before they are incorporated into the probabilistic analysis environment, they provide a wealth of information and insight. Then the analysis will turn to the full probabilistic analysis environment itself, and some example analysis "games" will be played to illustrate the usefulness of the environment. First, design variables will be varied probabilistically while the threats are held constant. Next, the design variables will be held constant and the effects of a probabilistic threat environment are shown. Finally, both the design variables and the threat variables are varied simultaneously in a probabilistic manner, and the results presented. The reader is reminded at this point that some of the variable ranges used in the scenario were extrapolated from ranges used in the preliminary investigation. In keeping with the unclassified nature of this project, some of these variable ranges are intentionally unrealistic.
The results, therefore, should not be considered as absolute, but rather
indicative of trends and relationships between the variables. This is appropriate as the goal of the implementation example of POSSEM is as a proof of concept, demonstrating its use and potential.
[Figure content: Analysis step, Campaign Level: Responses = fn(Theater Level MoEs), System MoE = fn(MoP1, MoP2, etc.); Mission Level: MoP = fn(Xreq, Xdes, Xtech); Engineering Level: X = fn(time-dependent variables); Impact Dials.]
Figure 74 – Analysis Step in POSSEM
[Figure content: Engineering Level Results (FLOPS): response surface equations, Pareto plots, prediction profiles; Mission Level Results (mapping relationships): response surface equations, prediction profiles; Campaign Level Results (ITEM): response surface equations, prediction profiles; Full Probabilistic Results: probabilistic design variables (threats constant), probabilistic threat environment (design variables constant), and probabilistic design variables with probabilistic threat environment combined.]
Figure 75 – Analysis and Results Overview
Engineering Level Results The first set of results presented will be at the engineering level. The inputs described earlier in this chapter were selected to model survivability enhancements as
applied to the baseline F/A-18C. The probabilistic methods of Chapter II were applied to this baseline and utilized the sizing and synthesis code FLOPS. Effects of these variable inputs on the vehicle itself may be examined as part of the overall results. Screening Test A screening test was conducted using the input survivability variables and the outputs of choice. The resulting metamodels were first-order (linear) fits and are shown in the form of Pareto plots (Figure 76 and Figure 77). By examining these Pareto plots, the primary contributors to the variability of each response can be identified. For takeoff gross weight and operating empty weight, the contributors are the same. Parasitic weight and wing area both play a significant, almost equal role, followed by aspect ratio. Parasitic weight is expected to be a major contributor to the stealth of the aircraft, and thus the survivability of that aircraft. Note, however, that a key variable in reducing the frontal area of the aircraft, wing sweep, plays a small role in the weights of the aircraft. Now, the weights of the aircraft will not be directly mapped to higher levels, so at first glance keeping weights as an output might not make sense. But aircraft weight ties directly into the cost of the aircraft. The analyst, therefore, can assess the cost penalty for achieving a more survivable aircraft. Mission altitude is seen as a non-player. This is a good sanity check, as the mission altitude variable does not come into play until after the vehicle has been sized, so it should have absolutely no effect on the aircraft weights. The next Pareto plot is that of alternate range. This response is a measure of the capability of the aircraft. It indicates how far the sized aircraft can fly with its current fuel load and with a combat segment. It is analogous to the combat radius of the aircraft.
Mission altitude, the altitude at which the secondary mission is performed, is the primary driver, as it should be. The secondary driver is aircraft wing area, a measure of the aerodynamic capability of the aircraft, and the parasitic weight, which is a measure of the weight of the aircraft.
All of these make intuitive sense.
Mission altitude of the
secondary mission was chosen to model the effects of tactics in survivability. Aircraft will often try to make a low-level mission run in order to evade defensive radar. The output variable alternate range is an indication of the performance penalty the aircraft pays for this tactic. Takeoff field length and landing field length were tracked as a way to assess the impact on performance capabilities of the aircraft. These output metrics will not be mapped to higher levels. The primary drivers are wing area, parasitic stealth, and aspect ratio. These indicate performance tradeoffs for increasing aircraft stealth. The next six Pareto plots (Figure 77) are all performance metrics. The primary metrics were parsed from the sizing mission of the aircraft, during the combat section of the mission (it is necessary in FLOPS to have a combat section from which to pull these performance parameters). The secondary metrics were all parsed from the secondary mission, in which mission altitude was allowed to vary. Not surprisingly, the primary drivers for all six metrics are identical. Wing area, of course, is a measure of the aerodynamic capability of the aircraft, as is aspect ratio. Parasitic weight affects the aircraft weight. Aerodynamics and weight are always chief contributors to aircraft performance. These performance parameters all play a part in the aircraft's maneuverability, and as such will be mapped up to the campaign level.
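The screening step amounts to fitting a first-order model over the two-level design and ranking the absolute effect estimates, which is the information the Pareto plots convey; a minimal sketch is shown below, with synthetic data standing in for the actual FLOPS runs.

import numpy as np

# Sketch of a screening test: fit a linear model to two-level DoE data and rank the
# absolute effect estimates.  The design and response below are synthetic stand-ins.
rng = np.random.default_rng(0)
names = ["wing_area", "parasitic_weight", "aspect_ratio", "wing_sweep"]
X = rng.choice([-1.0, 1.0], size=(16, len(names)))        # coded two-level design
y = (40000.0 + 740.0 * X[:, 0] + 790.0 * X[:, 1] + 200.0 * X[:, 2]
     + 40.0 * X[:, 3] + rng.normal(scale=50.0, size=16))  # synthetic weight-like response

A = np.column_stack([np.ones(len(y)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
effects = sorted(zip(names, coeffs[1:]), key=lambda t: abs(t[1]), reverse=True)
for name, estimate in effects:                            # Pareto-style ranking
    print(name, round(float(estimate), 2))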
[Figure content: Pareto plots of orthogonal effect estimates for Takeoff Gross Weight, Operating Empty Weight, Alternate Range, Takeoff Field Length, and Landing Field Length, with the terms (Parasitic Stealth, Wing Area, Aspect Ratio, Thickness/Chord, Taper Ratio, Wing Sweep, Mission Altitude) ranked by magnitude for each response.]
Figure 76 – Pareto Plots for Engineering Level Screening Test 1
[Figure content: Pareto plots of orthogonal effect estimates for Excess Energy (Primary and Secondary), Turn Rate (Primary and Secondary), and Turn Radius (Primary and Secondary), with the terms (Wing Area, Parasitic Stealth, Aspect Ratio, Thickness/Chord, Taper Ratio, Wing Sweep, Mission Altitude) ranked by magnitude for each response.]
Figure 77 – Pareto Plots for Engineering Level Screening Test 2
Prediction Profile Once the screening test was completed, giving insight to the analyst as to the primary drivers of the responses, the model runs were repeated with a central composite design consisting of 143 model runs. Response surface equations were created mapping the responses to the input variables. No variables were eliminated during the screening test: the variables of interest were few enough in number that their retention did not significantly affect the computation run time. The resulting metamodels are presented in the form of a prediction profile (Figure 78). As previously described, the response surface equation from Reference 77 was imported and added to the data, in order that all of the results be seen on one cohesive prediction profile. The prediction profile echoes the relationships seen in the screening test Pareto charts. Its primary advantage is to allow the analyst to see all of these results side by side. The prediction profile tool may be used interactively by the analyst on a computer. The hairlines may be clicked and dragged, changing the values of the variables; the screen then updates the values of the response metrics and shows the shapes of those relationships in real time. Figure 78 is a snapshot of that interactive screen, with all variables set to their nominal (midpoint) values. The radar cross section response is the first response of interest. Although the data in Reference 77 was presented in decibels, for this study the data was also converted into square meters to provide a more intuitive reference. As shown in Figure 78, the model fits were not affected by the units of the data. The first thing to notice is the
unusual model fit shapes for radar cross section with respect to aspect ratio, taper ratio, and (to a lesser extent) wing area. At first glance these trends do not make intuitive sense, so the models were examined to see if any reasons for these trends could be identified. First, the correlation of the estimates was examined. The correlation is a function of the geometric design of the experiment, and may introduce error into the design regardless of the accuracy of the data (or how it represents "real life"). Examination of the data did reveal some degree of correlation among the factors. These were most probably functions of the 10 cases excluded from the 287 model runs by Hines [77]. However, they were not of such significance that correlation could be pointed to as the sole cause of the unusual model fits. The next area of investigation was the interactions of the variables. Unlike the correlation, the interactions are a function of the data. Examination of the data showed some undesirable interactions, especially with the horizontal tail geometry variables. Because in the current research the horizontal and vertical tail geometry was not varied (the variables were set to their nominal value and added to the intercept term in the response surface equation), the errors associated with these terms may have had a trickling effect on the error of the resulting response surface equation. It should also be noted that any variable interaction with wing aspect ratio had a highly observable quadratic effect. However, the magnitude of these errors was again too small to have a substantial effect on the fit of the radar cross section response.
Finally, the data was examined to see if it was being influenced by higher order effects. An examination of the scatter plot for radar cross section showed a fairly nice scatter, indicating a lack of error from higher order effects. A final source of error could be the modeling itself. The true effects of aspect ratio and taper ratio could have been inadvertently disguised through mathematical relationships used in the set up and running of the models. The source of error could also be in the actual code or codes used to generate the data, although Hines does provide substantiation for the validation of the codes used.
Without knowing the precise
formulation of the model runs, no exact conclusions can be formulated about the odd shape of the model fits for radar cross section with respect to aspect ratio and taper ratio. Examination of the data from a statistical viewpoint indicates that, although there are some minor sources of concern that could contribute to modeling error, overall the model fit is satisfactory. One cause could be errors in the actual modeling itself. Another possibility is that the trends are, in fact, correct yet just not intuitive. It has been surmised that the model did not take into account the effect of the engine inlets. If so, the RCS values gained from the research should be viewed as a delta increment on RCS due to geometric variation, and independent of engine inlets. The next two variables contributing to radar cross section are wing sweep and thickness to chord ratio. Wing sweep can be seen to have a significant effect on radar cross section, and the trend is as one would expect. As wing sweep increases, radar cross section decreases. This is because the frontal area of the aircraft decreases as the wing is swept back, and radar cross section is a direct function of frontal area. Thickness to
chord has the same effect: increasing the ratio increases the frontal area of the aircraft, and thus increases radar cross section. The next two variables, parasitic stealth and mission altitude, have no effect simply because they were not included as part of the original response surface equation. This is why they are labeled “not modeled”. These two variables, however, should have an effect on the radar cross section, and their mapping was accounted for during the middle level mapping. When discussing the screening test results, each response was discussed with respect to its primary drivers. The prediction profile of Figure 78 shows the same information from a different perspective. It allows the analyst to see the simultaneous effect of each variable on all of the responses. Starting with aspect ratio, one can see that it has very little effect on any of the response metrics, except for the radar cross section (which is itself suspect, as discussed earlier). If the trend for radar cross section with respect to aspect ratio is to be believed, then the analyst can conclude that radar cross section is highly sensitive to aspect ratio, and the optimum value of aspect ratio must be very carefully defined. At the same time, changing aspect ratio has little effect on the other responses, indicating there are few penalties to be paid for the correct value of aspect ratio. If the trend for radar cross section with respect to aspect ratio is to be ignored, then aspect ratio in its entirety can be said to have little effect on the overall design. The same arguments may be made for the taper ratio variable. Wing area, according to Figure 78, can be seen to be an important variable with a myriad of effects. Increasing wing area increases the weight of the aircraft, and thus the cost. Yet at the same time, takeoff and landing field lengths are decreased. Excess
energy is decreased, so a performance penalty is taken, but turn rate is increased and turn radius is decreased, so a performance benefit is seen. Radar cross section is affected minimally, so the analyst can conclude that, if wing area is going to be considered as a survivability variable, its role is significant in how it affects the aircraft's performance, i.e. maneuverability. Wing sweep shows very little effect on all responses except for radar cross section. This would indicate that optimizing the aircraft for radar cross section with respect to wing sweep may be accomplished with little effect on the rest of the design, as indicated by those responses. Parasitic stealth is another interesting variable. Increasing parasitic stealth gives penalties across the spectrum of design responses. The benefits of parasitic stealth are not shown until the middle (mission) level of the analysis is reached. This supports the idea that the military continuum must be used in order to properly assess the effects of survivability concepts. At the engineering level, only penalties can be seen, leading the analyst to draw erroneous conclusions about the effect of parasitic stealth. Finally, the mission altitude variable shows an effect only on the alternate range for the secondary mission. This is intuitive: the secondary mission is conducted after the aircraft is sized, and thus should have no impact on those responses that are a result of the sizing. There is a potential for some effect to be seen in the secondary performance variables, but these effects are seen to be negligible. This makes sense as altitude in general should not significantly affect the performance metrics tracked.
[Figure content: interactive prediction profile showing the responses (RCS in dB and m², takeoff gross weight, operating empty weight, takeoff and landing field lengths, alternate range, and the sizing and alternate excess energy, turn rate, and turn radius) as functions of the variables (aspect ratio, wing area, wing sweep, taper ratio, thickness/chord, parasitic stealth, and mission altitude), with all variables set to their nominal values; parasitic stealth and mission altitude are marked "not modeled until middle level" for the RCS response.]
Figure 78 – Prediction Profile for Engineering Level
Mission Level Results The next set of results presented will be the mission, or middle, level results. The mapping relationships for detectability and maneuverability, discussed earlier, were used. A Design of Experiments was wrapped around the relationships, and the resulting prediction profile is shown in Figure 79. The Design of Experiments and response surface methodology was not used here to create an RSE for the purpose of reducing computational time. Rather, it was used to present the results in the same format, that of the prediction profile, as the engineering and campaign level results. In addition, the prediction profile format gives the analyst insight into the nature of the relationships. Prediction Profile The first response, detectability, can be seen to be a function of radar cross section, altitude, and parasitic weight. The performance characteristics are seen to be non-players, as they were not included in the original mapping. Altitude is shown to have a quadratic effect, due to the exponential mapping relationship that was used. Parasitic weight and altitude both show more influence than radar cross section. The reader is reminded that the radar cross section is only a function of the geometric variables used as inputs to the engineering level, and does not reflect a total "true" radar cross section, as the original RSE used did not take into account the effect of the aircraft inlets. Maneuverability is shown to be a function of the three performance attributes used in the mapping relationships. The sizing and alternate performance attributes were combined for each metric. For example, the excess energy term is the combined excess
energy for both the sizing and the alternate missions, non-dimensionalized, with both terms multiplied by the same k-factor, as discussed earlier. The trends show the same level of influence, as would be expected since the k-factors were all set to the same value. The
trends are intuitive and correct.
[Figure content: prediction profile showing detectability and maneuverability as functions of excess energy, turning rate, turning radius, RCS, altitude, and parasitic weight.]
Figure 79 – Prediction Profile for Mission Level
Campaign Level Results The results of creating the response surface equations at the campaign level are now examined. There were four sets of cases run for the scenario, each corresponding to one decision node pathway. Each set consisted of 129 model runs in ITEM, with the variables and responses being identical in each set. Because the trends were identical in all four cases, only two will be presented here, representing a case in which the cruise missiles are fired on Day 1, and one in which they were not.
Prediction Profiles The first set of results is presented in the form of a prediction profile. This set corresponded to the decision node pathway in which cruise missiles were fired on Day 1 and an attack from the south airbase was conducted on Day 2. The prediction profile is shown in Figure 80. The first response of interest is the percentage of Blue-1 aircraft that survived. There is a clear trend that as the aircraft's detectability decreases, so does its survivability. This is a measure of the effect of decreasing the radar cross section of the aircraft. The same trend is seen for the Blue-2 aircraft, but notice that the effect is not quite as dramatic. Also remember that the difference between the two aircraft is the weapons load each carries. Blue-2 carries a less effective weapon, and, interestingly, the detectability has less of an effect. The same effect is seen for the maneuverability factors, but to a much lesser degree. The ranges for the maneuverability, however, were much tighter than those for the detectability. This was a natural fallout of the mappings: the changes implemented at the engineering level had a more profound effect on detectability than on maneuverability. The next variables to have an effect are the track range variables for SAM sites 1 and 2. The trends are slight, but note how, as the track range for RSAM-2 increases, the survivability of the two aircraft decreases. The slightly opposite trend is shown for the tracking range of RSAM-1. While at first glance this doesn't seem to make a lot of sense, the response is caused by the interaction between the two variables. Notice in the geometry of the scenario that RSAM-2 is the dominant SAM site of interest when an attack occurs from the south. At some point the
tracking range for RSAM-1 becomes large enough for the SAM site to track an incoming raid, and so it fleetingly becomes a player. The repair rates show some slight trends, with the only one of significance again being that of RSAM-2. As the repair rate increases, the survivability of both aircraft decreases. Finally, the reliability, or probability of hit, of the SAM-1 weapon shows a logical trend: as the reliability increases, the survivability of the aircraft decreases. The total numbers of SAM weapons fired from RSAM-1 and RSAM-2 are next. As the detectability of the aircraft decreases, the aircraft become more susceptible to the defensive systems, and more weapons are fired. Notice that for maneuverability, however, the trends are relatively flat. This is because of the modeling of the maneuverability in ITEM. The maneuverability of the aircraft has no effect on whether or not the aircraft are detected and fired upon, but is rather a multiplier on the probability that the aircraft is hit after being detected. The repair rates of the two SAM sites have a direct effect on how many weapons are fired: an increase in repair rate allows more weapons to be fired. The damage trends at the airbase are all the same. As the aircraft become more detectable, the defensive systems allow fewer aircraft through, and the damage to the airbase decreases. Maneuverability has a slight effect on damage, again because an increase in aircraft maneuverability allows more aircraft to penetrate and attack the airbase. Increased track range of the dominant SAM site in the scenario, RSAM-2, has the effect of diminishing damage at the airbase, but interestingly, the opposite effect is
seen for the track range of RSAM-1. Increasing repair rate and probability of hit also have the intended effect of decreasing damage at the airbase. Damage to the SAM sites themselves was tracked, and two of those SAM sites are presented here. The effect of aircraft detectability and maneuverability is practically nil on the damage to the SAM sites. This is a combined effect of repair rate and aircraft damage from the SAM defensive systems. In other words, when an aircraft is sent directly to attack the SAM site, the variable ranges (such as attack speed) are such that a substantial amount of damage is done to the attacking aircraft. These aircraft then do relatively little damage to the SAM sites, and that damage is repaired fairly quickly. Note the damage to RSAM-4 as a function of the tracking range of RSAM-2. This is a function of geometry. Note in the scenario how the tracking range of RSAM-4 will intercept attacks to the airbase. Increasing the tracking range of RSAM-2 helps “protect” the fourth SAM site. This same effect is mimicked in the reliability of the SAM-2 weapons used at the RSAM-2 site. Finally, there is a direct correlation between repair rate and damage done to the same SAM site, an obvious relationship but one that is a good sanity check. The prediction profile for the number of cruise missiles fired is not presented. Because this was a Day 1 event and no offensive measures were triggered, the number of cruise missiles fired was completely independent of any of the variables, which all applied to Day 2 events. There were 20 cruise missiles fired during Day 1. While the trends shown in the prediction profile were for the most part intuitive and correct, the error involved in the model fit could have been smaller. Previous studies
with ITEM have shown that model fits using the response surface methodology are highly dependent on the scenario. The primary reason for this is the nature of the discrete events that occur in ITEM. For example, if the tracking range is allowed to become too small, there will be a point at which the raiding aircraft will be outside the sphere of influence of the SAM site, and thus will never be detected. This is a discrete cliff occurring in a relationship that is being modeled in a continuous way. Although care was taken in the development of the scenario to limit the number of discrete events, some were allowed to remain, chiefly to illustrate a potential drawback of using this sort of metamodeling technique at the campaign level. R² values for the responses ranged from 0.984 to 0.999.
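The R² values above come from regression fits of second-order response surface equations to the ITEM runs. As a rough illustration of that step only (not the actual JMP workflow, variables, or data used in this work), the sketch below fits a full second-order polynomial to hypothetical campaign-run data by least squares and reports R²; all names, ranges, and coefficients are assumptions made for the example.

```python
# Minimal sketch: fit a second-order response surface equation to hypothetical
# campaign-run data and report R^2.  Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design matrix: detectability, track range (nm), repair rate
# for 129 ITEM-like runs (matching the number of runs per case).
X = rng.uniform([0.016, 40.0, 0.05], [0.998, 60.0, 0.45], size=(129, 3))
# Hypothetical response: % aircraft survived (any smooth surrogate will do here).
y = 95.0 - 3.0 * X[:, 0] - 0.05 * X[:, 1] - 2.0 * X[:, 2] + rng.normal(0, 0.2, 129)

def quadratic_terms(X):
    """Build [1, x_i, x_i*x_j, x_i^2] columns for a full second-order RSE."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i, k)]
    return np.column_stack(cols)

A = quadratic_terms(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares regression fit
y_hat = A @ coef
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(f"R^2 of the fitted response surface: {r2:.4f}")
```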
Figure 80 – Prediction Profile for Campaign Level, Case 1. [Figure: prediction profile traces of the responses (survivability: % Blue AC-1 and AC-2 survived; SAMs fired: total SAM1 and SAM2 fired; damage to airbase: % runways, shelters, revetments, and aircraft in the open remaining; damage to SAM sites: % damage to RSAM2 and RSAM4 fire control and radar) versus the variables (detectability and maneuverability of Blue A/C 1 and 2, track range of the RSAM1 and RSAM2 sites in nm, repair rates of the RSAM1 and RSAM2 sites and the red runway, and reliability of SAM-1 and SAM-2).]
The next response surface equation to be examined corresponds to the fourth decision node. This is the set where there are no cruise missiles fired on Day 1, and the second day attack comes from the southeast. This makes both RSAM-1 and RSAM-2
significant, unlike the dominance of RSAM-2 in the previous case. The prediction profile for this case is shown in Figure 81. In general, the trends follow the same path, with the difference being in the magnitude of the effect. The most notable trend difference is in the tracking ranges of the RSAM-1 and –2 SAM sites.
This scenario switches the emphasis of the primary
defensive site from SAM-2 to a combination of both. Note that at the lower range of the tracking range for RSAM-1 there is actually a slight increase in aircraft survivability. While this is not intuitive, it is a function of the interacting track ranges and reliabilities of the combined defenses of the SAM sites. Unfortunately, in this case the geometry is such that the prediction profile by itself is not enough to promote transparency of the cause and effect relationships being modeled. If the SAM sites were separated so that the attack from the south only encountered one site and the attack from the southeast encountered another distinct site, transparency of those effects would be enhanced. However, this would model a rather unrealistic real-life scenario. In most cases, a combined defensive geometry would be encountered. The other primary difference in trends is the effect of the repair rates. In Case 4, the repair rates have a far less significant effect than in Case 1. This is a function of the damage achieved by the cruise missiles in Day 1 of the campaign. The prediction profiles, in general, provide the analyst with the ability to look at all of the effects of all of the variables on all of the responses at once. This helps provide insight into the interrelationships between the variables. As shown here, however, there are times when the interrelationships lack transparency when only considering the
prediction profiles. The probabilistic analysis tool presented in the next section adds to the analyst’s insight.
Figure 81 – Prediction Profile for Campaign Level, Case 4. [Figure: prediction profile traces of the same responses as Figure 80 (with SAM-site damage reported for the RSAM1 fire control and radar) versus the same variables.]
Complete Environment Results

The response surface equations discussed in the previous section were incorporated into the probabilistic analysis environment created in the previous POSSEM step. This environment can now be used by the analyst in a myriad of ways to explore the complete analysis space. Several examples of using this tool are presented here in order to demonstrate the power of the POSSEM process and its resulting analysis environment.
Recall that the project goal, established in the development of the conceptual model, was to answer the following question: What is the effect of adding survivability concepts to an aircraft? The probabilistic analysis environment will be used to explore this question. Although data exists for all of the metrics listed in Table 13, for clarity the response investigated will be the survivability rates of both aircraft. The individual effects of varying input parameters at different levels will be examined. First, the design variables will be allowed to vary probabilistically and the results discussed. Then, the campaign level threat variables will be varied. Finally, all parameters will be varied simultaneously and the results discussed.

Effect of Varying Design Variables

First, the environment will be used to explore the effects of probabilistically varying the design variables on the campaign level responses. Uniform distributions are placed around each of the design variables. Realize that the distributions could be of any form, allowing the analyst the freedom to weight variables towards specific values based on professional experience, technological data, or simply to explore a “what if” situation. The other variables (mission level k-factors, decision node probabilities, and campaign variables) are set at their expected or nominal values. Figure 82 shows the probability density functions for each of the four decision nodes, plus the combined campaign, for the survivability of the Blue-1 aircraft. These graphs show how often a certain value of survivability is achieved given the input
distributions, with the responses calculated through the metamodels. For Cases 1 and 3, the curves are fairly “backloaded”, showing a rise and peak in survivability around the 93% area. Cases 2 and 4 show a double peak at the beginning and end of the distributions, with a valley in the middle. The important graph, however, is the combined campaign graph. This shows the overall results for Blue-1 survivability when the design variables are probabilistic within a constant threat environment. There are two clear areas within which the survivability of the aircraft falls. The first peak shows a range of survivability from about 91.63% to about 93.73%. This curve shows a very clear peak close to the 93.73% point, with a fairly small amount of variability. The second curve occurs from about 97.33% to about 100%. This graph has two small survivability peaks and a broader variability. Note that there are regions between the two peaks where there are no results. This indicates that, given the scenario and the input assumptions, the aircraft cannot attain those survivability values. Similar trends are shown in Figure 83 for the Blue-2 aircraft’s survivability. Recalling that the Blue-2 aircraft carries a less effective weapon, the results show that the values of survivability achievable by this aircraft are also lower. The analyst can conclude from these results that setting different values of the design variables has an effect within two survivability rate ranges. By only manipulating design variables, the analyst or designer cannot achieve values of survivability that fall between these two groupings. There is less variability in the lower of the two groups; high values of survivability are desirable, however, and more variability is seen in that higher grouping.
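As a rough illustration of how such probability density functions are generated (the dissertation used Crystal Ball driving spreadsheet metamodels; this is only an illustrative stand-in), the sketch below pushes uniform design-variable distributions through a hypothetical response surface equation and bins the 10,000 resulting survivability values. The RSE coefficients and the choice of three inputs are assumptions made for the example; the variable ranges are taken from the axes of Figure 80.

```python
# Minimal sketch: propagate uniform design-variable distributions through a
# hypothetical response surface equation by Monte Carlo simulation and
# histogram the resulting survivability values.
import numpy as np

rng = np.random.default_rng(1)
N = 10_000  # trials, matching the 10,000-trial frequency charts

# Uniform distributions on the design variables (ranges from Figure 80's axes).
detect_1 = rng.uniform(0.016, 0.998, N)   # detectability, Blue A/C 1
detect_2 = rng.uniform(0.016, 0.998, N)   # detectability, Blue A/C 2
maneuver_1 = rng.uniform(0.848, 1.0, N)   # maneuverability, Blue A/C 1

def rse_survivability(d1, d2, m1):
    """Hypothetical second-order RSE for % Blue-1 survivability (illustrative only)."""
    return 97.0 - 2.5 * d1 - 0.8 * d2 + 1.5 * (m1 - 0.9) + 0.6 * d1 * d2

samples = rse_survivability(detect_1, detect_2, maneuver_1)
counts, edges = np.histogram(samples, bins=40)   # the PDF-style frequency chart
print("min / mean / max survivability: "
      f"{samples.min():.2f} / {samples.mean():.2f} / {samples.max():.2f}")
```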
Figure 82 – Probability Density Functions for Survivability of Blue-1 Aircraft with Probabilistic Design Environment. [Figure: 10,000-trial frequency charts of % survivability for Blue-1 in Campaigns 1–4 and the composite campaign.]
Figure 83 – Probability Density Functions for Survivability of Blue-2 Aircraft with Probabilistic Design Environment. [Figure: 10,000-trial frequency charts of % survivability for Blue-2 in Campaigns 1–4 and the composite campaign.]
To understand the contributions of the individual decision nodes to the combined campaign result, overlay graphs are used.
These are shown in Figure 84 for the
survivability rates of the two aircraft. For the higher values of survivability, it can be seen that only Cases 2 and 4 will give the desired rate of survivability or higher. These cases correspond to attacks from the southeast airbase, and the use of cruise missiles on Day 1 is irrelevant. This is an interesting result given that the objective of using the cruise missiles was to increase the survivability of the aircraft on the second day strikes. This objective was not realized, and it can be seen that the more important of the
decisions is clearly the airbase from which the attacks should originate. This illustrates the usefulness of a method such as POSSEM.
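One way to picture how the composite campaign curve arises from the individual decision-node pathways is sketched below. It is an illustrative assumption, not the actual metamodels or decision node probabilities used in this study: the four pathway surrogates are simple stand-ins roughly centered on the peaks visible in Figure 82, and the node probabilities are assumed equal.

```python
# Minimal sketch: form the composite-campaign distribution by sampling the
# decision-node pathway for each trial according to assumed node probabilities,
# then drawing from that pathway's (stand-in) surrogate.
import numpy as np

rng = np.random.default_rng(2)
N = 10_000

def case_sample(case, n):
    """Hypothetical stand-in for the Case 1-4 response surface equations."""
    centers = {1: 93.0, 2: 98.9, 3: 92.5, 4: 98.5}   # rough peaks from Figure 82
    return rng.normal(centers[case], 0.4, n)

pathway_probs = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}  # assumed decision-node probabilities

cases = rng.choice(list(pathway_probs), size=N, p=list(pathway_probs.values()))
composite = np.concatenate([case_sample(c, np.sum(cases == c)) for c in (1, 2, 3, 4)])
print(f"composite campaign mean survivability: {composite.mean():.2f}%")
```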
Figure 84 – Overlay Charts for Aircraft Survivability with Probabilistic Design Environment. [Figure: frequency comparison of % survivability for Campaigns 1–4 and the composite campaign, for Blue-1 and Blue-2.]
Effect of Varying Threat Variables

In order to achieve a robust solution, it was postulated that the threat environment must be allowed to vary probabilistically. To isolate the effects of the threat variables, the design variables were held constant at their nominal values, and uniform distributions were placed around the threat variables. A Monte Carlo simulation was then performed.
The probability density functions for the two aircraft survivability rates are shown in Figure 85 and Figure 86. The curves are similar in shape to those generated by the probabilistic design variables, yet the trends are smoother and less abrupt. The minimum survivability rate for Blue-1 aircraft is around 91.20%, compared to 89.74% for Blue-2 aircraft. Again, this is because the second aircraft carries a less effective weapon. The band of unachievable values of survivability is very small in these results, compared to the band generated by holding the threats constant and varying the design parameters. Figure 87 is included to illustrate the difference between the probability density function graph and the cumulative distribution function graph. Both represent the data, and it is more usual to see a CDF in presenting results of this type of probabilistic method.
Closer inspection of the graphs, however, reveals why the PDF is more
appropriate. As noted earlier, the data tends to fall into two survivability groupings. When displayed on a CDF graph, the area between the two groupings is represented by a flat line. One could interpret this flat line on the CDF to mean that those values of survivability are invariant to any changes in the threat environment, yet still attainable. This is an incorrect interpretation of the results. As the PDF clearly shows, no combination of variables will allow the analyst to achieve those values. The usefulness, however, of a CDF presentation lies in the ease of interpreting confidence values. For example, looking at the CDF for Blue-1, it can easily be determined that for a 75% confidence level, the analyst can expect a survivability value of at least 93%. This is “bottom line” information that is very useful to the analyst. The lesson is that the presentation of the results is very important, as well as the correct
interpretation of those results. POSSEM aids analysis by formulating a clear analysis pathway. The correct use of the information generated by POSSEM, including the correct presentation of the results, is the responsibility of the analyst.
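The confidence reading described above can be illustrated as follows. The samples are hypothetical; the point is only how a "value exceeded with a given confidence" is read from Monte Carlo output (the value exceeded in 75% of trials is the 25th percentile of the samples).

```python
# Minimal sketch: read a confidence statement off the reverse cumulative
# distribution, e.g. "at 75% confidence, expect at least X% survivability".
import numpy as np

rng = np.random.default_rng(3)
survivability = rng.normal(94.0, 1.2, 10_000)   # hypothetical Monte Carlo samples

confidence = 0.75
# Value exceeded in 75% of trials = 25th percentile of the samples.
value_at_confidence = np.percentile(survivability, 100 * (1 - confidence))
print(f"At {confidence:.0%} confidence, survivability is at least "
      f"{value_at_confidence:.2f}%")
```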
Figure 85 – Probability Density Functions for Survivability of Blue-1 Aircraft with Probabilistic Threat Environment. [Figure: 10,000-trial frequency charts of % survivability for Blue-1 in Campaigns 1–4 and the composite campaign.]
Figure 86 – Probability Density Functions for Survivability of Blue-2 Aircraft with Probabilistic Threat Environment. [Figure: 10,000-trial frequency charts of % survivability for Blue-2 in Campaigns 1–4 and the composite campaign.]
Figure 87 – Comparison of PDF and CDF for Aircraft Survivability, Composite Campaign. [Figure: 10,000-trial frequency charts (probability density functions) and reverse cumulative charts (cumulative distribution functions) of composite-campaign % survivability for Blue-1 and Blue-2.]
While the shapes of the curves are different, the overlay charts presented in Figure 88 show the same conclusion as that with the probabilistic design variables. Only Cases 2 and 4 allow success for the higher survivability values. Again this points to the relative ineffectiveness of the cruise missiles on Day 1 with respect to aircraft survivability, and highlights the importance of the selection of the attacking airbase. Note, however, that for the lower survivability value grouping, Case 1 has more of an effect on the higher values of survivability than Case 3. This means that, given the selection of the south airbase for the attack, it is more beneficial to use the cruise missiles on Day 1 than not.
Figure 88 – Overlay Charts for Aircraft Survivability with Probabilistic Threat Environment. [Figure: frequency comparison of % survivability for Campaigns 1–4 and the composite campaign, for Blue-1 and Blue-2.]
Effect of Fully Probabilistic Environment
While looking at the effects of holding the design variables and the threat variables constant in turn is interesting and does provide some insight, the true power of POSSEM comes from allowing the analyst to view the data in a fully probabilistic environment. The Monte Carlo simulation is performed once again on the environment. The design variables are allowed to vary with their uniform distributions, as are the threat elements. The probability density distributions are shown in Figure 89 and Figure 90 for the aircraft survivability rates.
The first thing to notice is that the curves tend to smooth out as the number of probabilistic variables increases. The characteristic gap in the middle of the groupings found in the first two sets of charts is smaller here and not as pronounced. This gap is significantly smaller than either gap produced by looking at the design variable set and the threat variable set independently. It also illustrates the importance of looking at the data in a summative way. If only the effect of the varying design variables were examined, the analyst would come to the false conclusion that a wider spread of survivability rates was relatively immune to the effect of design variable changes. Adding the probabilistic threat environment shows that, to achieve a robust solution, a smaller range of survivability rates remains relatively insensitive to design variable changes. Figure 91 shows the overlay charts for aircraft survivability within the fully probabilistic environment. Although the trends are the same, and Cases 2 and 4 dominate the higher survivability values, notice the decreasing variability of the responses. Given an attack from the southeast airbase and the desire for the higher survivability values, there is not a clear preference for the use of cruise missiles on Day 1. If this decision were to be considered, the cost of the cruise missiles must be weighed against the slight increase in the aircraft survivability. Figure 92 compares both the CDFs and PDFs for Blue-1. It is clear from these figures how the groupings move from two distinct entities towards each other, filling in the survivability value gap. In addition, the curves smooth out and decrease in variability. The characteristic flat spot shown in the CDF of the probabilistic design variable graph becomes less prominent in the probabilistic threat environment, and is
practically non-existent in the fully probabilistic environment. Figure 93 is another way of presenting the same data, in the form of an overlay chart. When the curves are superimposed on each other, the relative contributions of the design environment and the threat environment are readily apparent.
Figure 89 – Probability Density Functions for Survivability of Blue-1 Aircraft with Fully Probabilistic Environment. [Figure: 10,000-trial frequency charts of % survivability for Blue-1 in Campaigns 1–4 and the composite campaign.]
Figure 90 – Probability Density Functions for Survivability of Blue-2 Aircraft with Fully Probabilistic Environment. [Figure: 10,000-trial frequency charts of % survivability for Blue-2 in Campaigns 1–4 and the composite campaign.]
Figure 91 – Overlay Charts for Aircraft Survivability with Fully Probabilistic Environment. [Figure: frequency comparison of % survivability for Campaigns 1–4 and the composite campaign, for Blue-1 and Blue-2.]
Figure 92 – Comparison of PDFs and CDFs for Blue-1 Survivability, Composite Campaign. [Figure: 10,000-trial frequency charts and reverse cumulative charts of composite-campaign % survivability for Blue-1 under probabilistic design variables (threats held constant), probabilistic threats (design variables held constant), and the fully probabilistic environment.]
Figure 93 – Overlay Chart Comparing Contributions of Design and Threat Environments to Fully Probabilistic Environment for Blue-1 Survivability. [Figure: frequency comparison of probabilistic design variables only, probabilistic threat variables only, and all probabilistic variables.]
Figure 94 shows the cumulative charts plotted on the same axis. From this it is clear how varying different sets of input parameters affects the shape of the overall curve. The lower half of the design curve is shifted to the left of the overall curve. At higher survivability values, however, the curve is shifted to the right of the overall curve. The threat curve is offset at both low and high values to the left of the overall curve. What this means is that at the higher confidence levels (where the analyst prefers to be), looking at the effect of the design variables in isolation results in an overprediction of the survivability rate. For a 70% confidence, this overprediction is on the order of 0.44% for the Blue-1 aircraft. While this may not seem like a lot, returning to the conclusions of Reference 67, a small change in survivability can have huge impacts on overall force effectiveness. The graph also shows that the combined survivability is significantly greater than that achieved simply by considering the probabilistic threat environment. (Remember, however, that an underprediction is preferred in this case to an overprediction.) In light of this example, it is clear that the analyst must consider the combined effects of design variable changes as well as a probabilistic threat environment in order to accurately assess aircraft survivability rates.
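A sketch of how such an overprediction can be quantified from Monte Carlo output is given below. The two sample sets are hypothetical stand-ins, not the data behind Figure 94; the calculation simply compares the survivability read from the reverse CDF at a chosen confidence level for the design-only case against the fully probabilistic case.

```python
# Minimal sketch: compare the survivability value at a given confidence level
# when only design variables vary versus when all variables vary, exposing any
# overprediction from the design-only view.
import numpy as np

rng = np.random.default_rng(4)
design_only = rng.normal(94.6, 1.0, 10_000)   # assumed design-variables-only samples
fully_prob = rng.normal(94.2, 1.3, 10_000)    # assumed fully probabilistic samples

conf = 0.70  # 70% confidence level

def at_conf(samples):
    """Survivability value exceeded in `conf` fraction of the trials."""
    return np.percentile(samples, 100 * (1 - conf))

overprediction = at_conf(design_only) - at_conf(fully_prob)
print(f"design-only view overpredicts survivability at {conf:.0%} confidence "
      f"by {overprediction:.2f} percentage points")
```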
Figure 94 – Cumulative Effects of Varying Sets of Input Data for Survivability Rate of Blue-1. [Figure: confidence (0–100%) versus survivability rate of the Blue-1 aircraft (%) for the Design, Threat, and All cases.]
CHAPTER IX
CONCLUDING REMARKS
This research and consequent dissertation resulted from a simple observation. As aircraft designers and researchers, we pay a great deal of attention to the geometric and performance outputs that result from a design. Yet how can the results really be a measure of the “goodness” of the aircraft when the aircraft is never placed in the context for which it was designed?
For military aircraft, this context is the warfighting
environment. Therefore, to really understand the effectiveness of an aircraft design, that aircraft must be placed in its intended environment and assessed there. In addition, the links between design changes at the engineering level and the resulting effectiveness parameters at the warfighting level must be made clear to both the analyst and the designer early enough in the design process for those effects to influence the design. A second observation resulted from the first. Much work has been done using probabilistic methods applied to aircraft design. This allows the designer to quantify the effects of the uncertainty that is inherent in the design process, as well as assess the effects of new technologies. Why not extrapolate these methods to the campaign level? This would help the analyst account for the uncertainties inherent in the warfighting environment, and also allow the consideration of a probabilistic threat environment. Research was conducted to examine whether the above observations were justified and if there existed a need to address these issues. The results of this research clearly indicated that there was a need for a military system effectiveness framework.
This was based on a rapidly changing world economy in which significant resources were being placed into the research and procurement of warfighting tools. Decision makers needed information that would allow them to allocate resources, make trade studies between system components, and develop appropriate requirements for future system designs. While the need for a system effectiveness framework was clear, it was also established that there was a lack of just such an analysis environment. Part of this lack was a function of differing definitions and connotations of key concepts and words, such as “system effectiveness” and “operations research”, hindering communication.
In addition, those agencies, both government and private, that did
develop some sort of analysis framework often did not publish results and were not open to the idea of sharing methods and results (often due to national security reasons and proprietary interests). This led to a lack of discussion in the open literature about such frameworks and methodologies.
Finally, there is a common misconception that an
analysis tool and a methodology are the same thing. Research showed that modeling tools abound, but there are few, if any, methodologies. A significant result of this research is the clear definition of the term “system effectiveness”, coupled with a clear, cohesive analysis methodology. Concluding that a methodology for the assessment of military system effectiveness would be a worthwhile contribution to the general body of knowledge, it was determined that a system of systems approach would be appropriate. By redefining the system from the aircraft to the overall campaign environment, useful system level
metrics could be examined, and the observation made in the first paragraph could be addressed. Current probabilistic methods were examined to ascertain their appropriateness for inclusion into the proposed methodology.
Design of experiments methods and
response surface equations methods were studied. The Technology Impact Forecasting method, with its use of k-factors for modeling uncertainties and technological additions, was considered a base concept upon which the methodology would be modeled. Attention then turned to researching military models and their characteristics. The probabilistic methods that were considered compelled the use of modeling tools to generate the appropriate data. The concept of a military modeling continuum was then developed to help justify the necessity of a smooth and uninterrupted analysis path between different levels of the continuum (and thus between the military models). A preliminary investigation was conducted to ensure that it was possible and appropriate to apply the current statistical methods to the theater level. Three primary issues were identified that would need to be addressed by the proposed methodology. The first is the level of detail problem. As shown in the military code continuum, the level of modeling detail in a code progresses from a high level at the engineering level to a low level at the campaign end. The use of model integration (linking together codes) and model abstraction (metamodeling) was proposed to address the level of detail problem. While often discussed in the literature as separate approaches, the research concluded that a combination of the techniques was the appropriate solution (“abstragration”). The second issue was called the human-in-the-loop problem, and
addressed the common practice of most campaign codes to have an analyst contribute in real time to the analysis of the code while it is running. While this can be a fairly straightforward practice when examining a single scenario, it does not allow the exploration of an entire analysis space in a timely manner. A solution, replacing the human with an embedded rule set, was rejected because of the possibility of generating unrealistic and fallacious results.
A compromise was sought to model the human
interaction, and the research presents the concept of key decision nodes around scenario pathways that are themselves modeled probabilistically. The final issue was that of scenario significance. Simply put, it is fairly straightforward to optimize a design if the threat environment is known. However, if a robust design is desired, the threats must be allowed to vary and the design is selected based on its insensitivity to the threat environment. All of the above information was combined in the proposal of a methodology to assess military system effectiveness. Called POSSEM (PrObabilistic System of Systems Effectiveness Methodology), the framework is a step-by-step methodology for conducting a system of systems analysis. The first step is the most important: create the conceptual model.
This is the upfront work that helps define the scope of the problem and
determines the tools to be used. The baseline concepts are selected and the scenario is established.
The next step starts to address the human-in-the-loop problem.
It
determines the key decision nodes in the scenario. POSSEM then continues by creating the linked analysis environment using the elements determined by the conceptual model. This is the most difficult step and often the most time-consuming.
Computational
pathways must be established between all modeling tools, and the inputs and outputs for the individual tools defined. The concepts of model integration and model abstraction are applied where appropriate. Once the analysis pathway has been clearly established, the environment is used in the application of the probabilistic techniques. All of the previous POSSEM steps led up to the development of the fully probabilistic environment.
Metamodels of the modeling codes are created where
appropriate, as determined by the conceptual model. The metamodels and any other mapping relationships are combined into a single computational entity, such as a spreadsheet. Probability distributions are then assigned to any or all of the variables and a Monte Carlo simulation is performed. The real power of POSSEM comes from the use of this environment for analysis. The individual or combined effect of varying input parameters, at any level, is an easy, inexpensive computational task. The analyst can play “what if” games rapidly and gain insight by exploring the entire analysis space without having to return to the original modeling codes. An example of the entire POSSEM process was performed using a survivability test case.
This example was ideal because survivability enhancements are best
incorporated at the preliminary design level (engineering) yet their effect is primarily seen at the campaign level. This allowed the creation of an analysis pathway that linked the engineering level all the way through the military code continuum to the campaign level. Design variables were allowed to vary probabilistically, and a probabilistic threat environment was incorporated. Results in the form of probability distributions and overlay charts were presented.
In the example, three components of susceptibility were modeled and used: stealth (in the form of radar cross section), maneuverability, and tactics (in the form of a low-level attack approach). It should be realized, however, that there are other survivability parameters that were not addressed that would be important to overall aircraft survivability. The insights gained from the example are all dependent on only those variables used and addressed in the modeling process. Thus, it is reinforced to the reader that the example served as a proof of concept of POSSEM, and not as a definitive study of survivability and susceptibility. POSSEM is the response to the need for a military system effectiveness framework. It provides the analyst with a step-by-step methodology to create and use a fully probabilistic analysis environment. As a decision maker, the analyst can use the resulting data and insight for resource allocation, trade studies between system components, and the development of requirements for future systems. It is observed at this point that although the methodology was formulated to aid in the analysis of military system effectiveness, it can be applied to system effectiveness studies in general. POSSEM is a methodology, and, as such, it provides an analysis framework. The modeling tools used in POSSEM are independent of the process. Given a generic complex system of systems, POSSEM can still be applied. For example, the system of systems could be a cargo transportation system, similar to the United Parcel Service (UPS) or Federal Express (FedEx). Changes made at the engineering level, such as to the delivery trucks, the aircraft used, or even the procedures of the employees, could be propagated up to the “campaign” level, or the total system level. The effectiveness of
the entire network could be quantified using Measures of Effectiveness such as delivery time, direct operating costs, and return on investment.
The concepts inherent to the
POSSEM framework may be easily applied to this and other complex systems.
Research Questions and Answers
With the development and presentation of POSSEM now complete, a return to the research questions is appropriate.

1) What are the needed elements of a systems effectiveness methodology?

The individual steps of POSSEM are the basic elements needed for a system effectiveness methodology. It was determined that an initial element consisting of upfront analysis was crucial to the success of the analysis. A linked analysis environment must be created, and used in a fully probabilistic manner.

2) How does one define the system and its components?

Traditionally, the system was defined as the aerospace vehicle. In this research, the system is redefined to become the entire warfighting environment, and the aerospace vehicle becomes a component of the new system. This leads to a “system of systems” formulation, which is then incorporated into the POSSEM framework.

3) How does one link design level metrics to system level Measures of Effectiveness?

The creation of the linked analysis environment in POSSEM is used to link design level metrics to system level Measures of Effectiveness. The techniques of model integration and model abstraction are used in combination, and their use is defined by the conceptual model.

4) Can the methodology be used to analyze the effect of incorporating survivability concepts at the vehicle level, yet assess them at the theater level?

Through the use of the linked analysis environment, survivability concepts can be added to an aerospace vehicle and their effect propagated through the environment up to the theater level.

5) How can uncertainties, such as unknown or incomplete data, rapidly changing technologies, and uncertainties in the modeling environment be taken into account in a system effectiveness methodology?

Probabilistic techniques are applied to the linked analysis environment. This introduces uncertainties into the modeling equations, allowing the effects of unknown data, new technologies, and varying threats to be quantified.
Recommendations
The lessons learned from pursuing a project of this magnitude are substantial. Some of these lessons will be passed on here in the form of recommendations and future work. Perhaps the most difficult part of beginning a research project such as this is the proper identification of the problem that needs solving. Research into this particular issue was clouded in part due to the differing definitions and connotations of key words, such as “system effectiveness”. One lesson learned from this part of the research is that researchers and analysts must take great care that the people they are communicating with share the same meanings of these key words. When possible, operating definitions of these words need to be introduced at the beginning of conversations, and all parties should agree on and understand the verbiage. This research proposed a definition of system effectiveness, and it is recommended that this definition be used consistently by the researcher and her professional organization(s) in all future work. It is further hoped that this definition will find broad acceptance and use in the community at large. The resulting analysis framework POSSEM was shown in the research to be a very powerful tool for tradeoff studies and “what if” analysis. It gives the analyst a clear methodology to follow that results in the creation of a comprehensive and useful analysis tool that allows the exploration of an entire analysis space in a computationally inexpensive way. While just a few examples of the utility of such a tool were presented here, many more areas of use should be explored. For example, the scenario presented herein was a fairly straightforward proof-of-concept study. The method, however, allows
analysis of a much more complex scenario. The probabilistic threat environment could be expanded tremendously. One very interesting study would be to allow the threats, such as the SAM sites, to move geographically. This would mimic imperfect knowledge on the Blue side, a major source of uncertainty in the real world. In addition, more complex flight paths for the attackers could be explored, including attacks from behind the installation and around key threats. Attack paths that take the raids through a series of integrated defenses could be explored, and those defenses varied probabilistically. Finally, more advanced studies could be conducted that combine dissimilar attack components, such as tanks and aircraft working in concert. More research could be conducted into the behavior of the probabilistic methods involved in the methodology. For example, it was clear from the test case that increasing the number of probabilistic variables had the effect of “smoothing” the resulting distributions.
It is postulated that when the number of parameters being varied
probabilistically becomes too high, the resulting curves could approach normal distributions, and the transparency of cause-and-effect relationships would be diminished.
It was found that using the
traditional response surface methodology worked quite well at the lower end of the
military code continuum. As the continuum points out, codes at the lower end are usually physics-based codes incorporating rather precise mathematical relationships.
Thus,
replacing these mathematical relationships with metamodels often results in quite good modeling fits and a significant reduction in execution time without sacrificing a lot of accuracy. As one progresses through the military code continuum, however, the behavior of the codes starts to become more difficult to model using typical regression techniques. The research consistently showed that model fits at the upper end of the code continuum had more error than those at the lower ends. One major source of this error is the introduction of discrete events that are difficult to model with a continuous curve. For example, one of the probabilistic threat variables was the radar tracking range of the SAM site. Given the range of the variables, it is conceivable that there will occur a radar tracking range that is so small that the attacking aircraft is simply not detected. This is a discrete event: as the tracking range decreases, the detectability of the aircraft decreases, but at some point, it simply goes to zero. Trying to model this event with a continuous curve is difficult and sometimes not even achievable with acceptable error levels. On the other hand, it was interesting to note that the damage to the SAM sites curves had very little error, yet an examination of the data showed definite “clumps” of damage. This was due to the repair rate of the SAM sites combined with the distributed hourly nature of the attacks. In other words, the aircraft would do damage, then a few hours were allowed when the sites would attempt repairs, and then a new round of damage would occur. This led to a cyclical pattern of damage, repair, damage, repair that showed up in the data as damage clumps. A very good curve fit, however, could be made through the center of the
clumps. The point is one that is made elsewhere in this document: the method and its results cannot stand in isolation as a black box. The analyst needs to understand the behavior of the models and needs to be able to interpret this data correctly. To address these poor model fits at the campaign level, it is recommended that other model fit techniques be explored. The choice of modeling codes needs to be considered very carefully. Although an important step mentioned in the creation of the conceptual model, it was discovered through the research that the code selection impacts the implementation of the process to a much greater degree than originally postulated. Learning curves associated with the use of a previously unfamiliar code can be substantial. In addition, it is not uncommon for the actual capabilities of the code to not meet expectations set forth in the advertisement of that code.
The process through which the user interacts with the code is very
important in the implementation of the method:
if multiple code runs cannot be
performed easily and fairly rapidly, the usefulness of the method diminishes considerably. In the course of this research, it was attempted to bring several different codes into the process, all without success. This resulted in serious research delays. If this research had been a time critical analysis task, the task itself might have been jeopardized. The lesson learned is that the analyst should have considerable knowledge about the capabilities and features of the analysis codes that are considered for use within the POSSEM framework. Finally, it is recommended that more research be conducted into the refinement of the decision node technique used to address the human in the loop problem. While the
method was used extremely effectively in the example case presented, it must be pointed out that there were only two decision nodes in this scenario, resulting in four analysis pathways to be explored. Response surface equations were created for each of those pathways, which was computationally expensive, but still within the scope of the problem. However, if a more substantial number of decision nodes were added to the process, the number of potential analysis pathways would increase exponentially. Depending on the modeling tool, creating a metamodel for each pathway may be computationally unrealistic. One way of addressing this issue could be to develop a sort of screening process for the decision nodes, enabling the selection of key nodes only. Another area of possible research could be to use design of experiments techniques to determine a statistically significant subset of analysis pathway combinations to model, rather than creating a metamodel of each and every decision. Finally, the technique of genetic algorithms could be explored for applicability to this particular area.
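As one hedged illustration of the design-of-experiments idea (a sketch, not a technique demonstrated in this work), a half-fraction of the full set of binary decision-node combinations could be selected so that only half of the pathway metamodels need to be built while the retained subset stays balanced. The sketch below selects the principal half-fraction (defining relation I equal to the product of all node factors).

```python
# Minimal sketch: for k binary decision nodes there are 2**k analysis pathways;
# keep only combinations whose +/-1 levels multiply to +1 (a 2^(k-1) fractional
# factorial), halving the number of metamodels that must be created.
from itertools import product
from math import prod

def half_fraction(num_nodes):
    """Return the principal half-fraction of all +/-1 decision-node combinations."""
    full = product((-1, 1), repeat=num_nodes)
    return [combo for combo in full if prod(combo) == 1]

# Example: 4 decision nodes -> 16 pathways in total, 8 metamodels in the half-fraction.
subset = half_fraction(4)
print(len(subset), "of", 2 ** 4, "pathways retained:", subset)
```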
REFERENCES
1
Habayeb, A.R., Systems Effectiveness, Pergamon Press, Headington Hill Hall, Oxford, England. 1987.
2
Volpe V. and Schiavone, J.M., “Balancing Design for Survivability”, Grumman Aerospace and Electronics Group AIAA 93-0982, AIAA/AHS/ASEE Aerospace Design Conference, Irvine, CA 16-19 February, 1993.
3
Ball, Robert E., The Fundamentals of Aircraft Combat Analysis and Design, American Institute of Aeronautics and Astronautics, Inc., 1985.
4
Toffler, A. and Toffler, H., War and Anti-war: Survival at the Dawn of the Twenty-First Century, Little, Brown and Company, 1993.
5
Jaiswal, N.K., Military Operations Research Quantative Decision Making, Kluwer Academic Publishers, Boston, 1997.
6
Hughes, W.P. et al, Military Modeling for Decision Making, 3rd Edition, Military Operations Research Society, Inc.,1997.
7
Blanchard, Benjamin S., Logisitics Engineering and Management, 5th Edition, Prentice Hall Inc., 1998.
8
ARINC Res. Corp., Reliability Engineering. Englewood Cliffs, NJ. Prentice Hall 1964.
9
“Definitions of Effectiveness Terms for Reliability, Maintainability, Human Factors, and Safety”, MIL-STD-721B, US Department of Defense. August 1966.
10
Kececioglu, D., Maintainability, Availability, & Operational Readiness Engineering Handbook, Volume 1, Prentice Hall PTR, Upper Saddle River, NJ, 1995.
11
Tillman, F.A., Hwang, C.L., and Kuo, W., “System Effectiveness Models: An Annotated Bibliography”, IEEE Transactions of Reliability, Vol. R-29, No. 4, October 1980.
12
Aerospace Systems Design Laboratory, in-house document, 1999.
Danielle Soban
246
13
Seykowski, R., Research Associate, Potomac Institute for Policy Studies, electronic mail conversation, June 2000.
14
Fabrycky, W.J., Blanchard, B.S., Life-Cycle Cost and Economic Analysis, Prentice-Hall Inc., Englewood Cliffs, New Jersey, 1991.
15
Dixon, J.R., Design Engineering: Inventiveness, Analysis, and Decision Making, McGraw-Hill Book Company, New York, 1966.
16
The American Heritage Dictionary of the English Language, Third Edition, Houghton Mifflin Company, 1996.
17
Myers, R., Khuri, A., and Carter Jr., W., “Response Surface Methodology: 1966-1988”, Technometrics, Vol. 31, No. 2, May 1989.
18
Welch, W., Buck, R., Sacks, J., Wynn, H., “Screening, Predicting, and Computer Experiments”, Technometrics, Vol. 34, No. 1, February 1992.
19
Box, G.E.P., Draper, N.R., Empirical Model Building and Response Surfaces, John Wiley & Sons, New York, 1987.
20
Montegomery, D.C., Design and Analysis of Experiments, 3rd Ed., John Wiley & Sons, New York, 1991.
21
Box, G.E.P., Hunger, W.G., Hunter J.S., Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, Wiley & Sons, New York, 1978.
22
Bandte, O., “A Probabilistic Multi-Criteria Decisions Making Technique or Conceptual and Preliminary Aerospace Systems Design”, Ph.D. Thesis, Georgia Institute of Technology, October 2000.
23
Bailey, M., Rothenberg, L., “JMP Software: New Features and Enhancements in Version 4 Course Notes”, SAS Institute, Inc., 2000.
24
U.S. Department of the Interior, Bureau of Reclamation Website: Decision Process Guidebook. http://www.usbr.gov/Decision-Process/toolbox/paretopr.htm, 2001.
25
Minitab Website: http://www.minitab.com/, 2001.
26
Engineous Sotfware, “iSIGHT: World Leading Desing Exploration
Danielle Soban
247
Technology”, http://www.engineous.com/isightv5.html, 2001.
27
Mavris, D.N., Soban, D.S., Largent, M.C., “An Application of a Technology Forecasting (TIF) Method to an Uninhabited Combat Aerial Vehicle”, AIAA 1999-01-5633, 1999 World Aviation Conference, 19-21 October, San Francisco, CA.
28
Mavris, D.N., Baker, A.P., Schrage, D.P., “Implementation of a Technology Impact Forecast Technique on a Civil Tiltrotor”, Proceedings of the 55th National Forum of the American Helicopter Society, Montreal, Quebec, Canada, May 25-27, 1999.
29
Kirby, M.R., Mavris, D.N., “Forecasting the Impact of Technology Infusion on Subsonic Transport Affordability”, World Aviation Congress and Exposition, Anaheim, CA, September 28-30, 1998. SAE-985576.
30
Mavris, D.N., Kirby, M.R., “Forecasting Technology Uncertainty in Preliminary Aircraft Design”, World Aviation Congress and Exposition Congress, San Francisco, CA, October 19-21, 1999. SAE Paper 1999-015631.
31
Mavris, D.N., Kirby, M.R., “Technology Identification, Evaluation, and Selection for Commercial Transport Aircraft”. 58th Annual Conference of Society of Allied Weight Engineers, San Jose, CA, 24-26 May, 1999.
32
Mavris, D.N. and DeLaurentis, D.A., “A Probabilistic Approach for Examining Aircraft Concept Feasibility and Viability”, Aircraft Design, Vol. 3, pg 79-101, Pergamon, 2000.
33
Decisioneering, Inc., “Crystal Ball, Computer Program and Users Guide”, Denver, CO, 1993.
34
Contingency Analysis Website: http://www.contingencyanalysis.com/glossarymontecarlosimulation.htm
35
Aerospace Systems Design Laboratory Publications Website: http://www.asdl.gatech.edu/publications/index.html
36
Grier, J.B., Bailey, T.G., Jackson, J.A., “Response Surface Modeling of Campaign Objectives Using Factor Analysis”, Military Operations Research, V4 N2 1999.
Danielle Soban
248
37
Grier, J.B., Bailey, T.G., Jackson, J.A., “Using Response Surface Methodology to Link Force Structure Budgets to Campaign Objectives”, Proceedings of the 1997 Winter Simulation Conference.
38
Soban, D.S., Mavris, D.N., "Formulation of a Methodology for the Probabilistic Assessment of System Effectiveness”, Presented at the AIAA 2000 Missile Sciences Conference, Monterey, CA, November 7-9, 2000.
39
Hillestad, R.J., Bennett, B., Moore, L, “Modeling for Campaign Analysis, Lessons for the Next Generation of Models, Executive Summary”, prepared for the United States air Force, Rand, 1996.
40
Bennett, B.S., Simulation Fundamentals, Prentice Hall International, 1995.
41
VSE User’s Guide, webpage: http://www.cslab.vt.edu/VSE/
42
Ackoff, R.L., Gupta, S.K., Minas, J.S., Scientific Method- Optimizing Applied Research Decisions, John Wiley & Sons, Inc., New York and London, 1962.
43
Caughlin, D., “Verification, Validation, and Accreditation (VV&A) of Models and Simulations Through Reduced Order Metamodels”, Proceedings of the 1995 Winter Simulation Conference.
44
Anderson, L.B., Cushman, J.H., Gropman, A.L., Roske, V.P., “SIMTAX: A Taxonomy for Warfare Simulation”, Military Operations Research Society, Workshop Report, October, 1989.
45
Ziegler, B.P., Praehofer, H., Kim, T.G., Theory of Modeling and Simulation: Integrating Discrete Event and Continuous Complex Dynamic Systems, 2nd Edition, Academic Press, 2000.
46
Zaerpoor, F., Weber, R.H., “Issues in the Structure and Flow in the Pyramid of Combat Models”, 68th MORSS, US Air Force Academy, 2022 June 2000.
47. “Catalog of Wargaming and Military Simulation Models”, (UNCLAS) (Computer Diskette), Joint Staff, Washington, DC, Force Structure Resource and Assessment Directorate (J-8), 12th edition, 7 Feb 92. ADM000108/NAA
48. SURVIAC, Survivability/Vulnerability Information Analysis Center, http://iac.dtic.mil/surviac/prod_serv/prod_serv.html
49. “SIMTAX, A Taxonomy for Warfare Simulation”, Military Operations Research Society, Workshop Report, 27 October 1989.
50. Enthoven, A.C., Smith, K.W., How Much is Enough? Shaping the Defense Program, 1961-1969, Harper and Row, New York, NY, 1971.
51. Zeigler, B.P., Theory of Modeling and Simulation, John Wiley, New York, 1976.
52. Rozenblit, J.W., Sevinc, S., Zeigler, B.P., “Knowledge-Based Design of LANs Using System Entity Structure Concepts”, Proceedings, Winter Computer Simulation Conference, 1987.
53. Air Force Studies & Analyses Agency's Verification, Validation and Accreditation page for the THUNDER analytical campaign simulation, http://www.s3i.com/AFSAAV&V/campaign.htm, June 2001.
54. Baker, A.P., Mavris, D.N., “Assessing the Simultaneous Impact of Requirements, Vehicle Characteristics and Technologies During Aircraft Design”, 39th Aerospace Sciences Meeting and Exhibit, Reno, NV, January 8-11, 2001, AIAA-01-0533.
55. Hillestad, R.J., Moore, L., “The Theater-Level Campaign Model: A Research Prototype for a New Generation of Combat Analysis Model”, MR-388-AF/A, Rand Corporation, 1996.
56. Sisti, A.F., “Large Scale Battlefield Simulation Using a Multi-Level Model Integration Methodology”, AFRL/IF Webpage: http://www.if.afrl.af.mil/tech/papers/ModSim/LgScale.htm, January, 2001.
57. Sisti, A.F., Farr, S.D., “Model Abstraction Techniques: An Intuitive Overview”, AFRL/IF, Rome, New York. Webpage: http://www.if.afrl.af.mil/tech/papers/ModSim/LgScale.htm, January, 2001.
58. “Integrated Theater Engagement Model (ITEM) Technical Manual”, Version 8.3, Science Applications International Corporation, 10260 Campus Point Drive, San Diego, California 92121, November 5, 1999.
59. Sisti, A.F., Farr, S.D., “Modeling and Simulation Enabling Technologies for Military Applications”, AFRL/IF, Rome, New York. Webpage: http://www.if.afrl.af.mil/tech/papers/ModSim/LgScale.htm, January, 2001.
60. “What is Campaign Analysis? An Accreditation Perspective”, webpage: http://www.s31.com/AFSAAV&V/campaign.html, June, 2001.
61. Baker, A., Mavris, D., Schrage, D., Craig, J., “Annual Review of NRTC Task 9.2.1”, in-house document, Aerospace Systems Design Laboratory, Georgia Institute of Technology, November, 1998.
62. Frantz, F.K., “A Taxonomy of Model Abstraction Techniques”, Proceedings of the 1995 Winter Simulation Conference, Washington D.C., 3-6 December, 1995.
63. Caughlin, D., Sisti, A.F., “A Summary of Model Abstraction Techniques”, Proceedings of Enabling Technology for Simulation Science I Conference, Orlando, Florida, 22-24 April, 1997.
64. Brassard, M., The Memory Jogger Plus+ Featuring the Seven Management and Planning Tools, Revised Edition, GOAL/QPC, Methuen, MA, 1996.
65. Bronson, R., Naadimuthu, G., Operations Research, 2nd Edition, Schaum’s Outline Series, McGraw Hill, New York, 1982.
66. Adapted from Reference 2.
67. Operation Desert Storm Combat Incident Database Version I, Survivability/Vulnerability Information Analysis Center, March 1991.
68. Sapp, C.N., “Survivability-A Science Whose Time Has Come”, U.S. Naval Institute Proceedings, December 1978.
69. Website of the Joint Technical Coordinating Group on Aircraft Survivability: http://www.aiaa.org/calendar/index.html
70. McCullers, L.A., “Flight Optimization System Release 5 User’s Guide”, NASA Langley Research Center, January, 1988.
71. Roth, B.R., “A Theoretical Treatment of Risk in Modern Propulsion System Design”, Ph.D. Thesis, Georgia Institute of Technology, 2000.
72. Jane’s Information Group, Jane’s All the World’s Aircraft 1994-1995, Sentinel House, Surrey, United Kingdom, 1995.
73. Standard Aircraft Characteristics F/A-18A Hornet, NAVAIR 00110AF18-1, McDonnell Douglas, October 1984.
74. Kachurak, P., “F/A-18C Substantiating Performance Data with F404-GE-402 Engines”, Report MDC91B0290.
75. Soban, D.S., Largent, M.C., “Modeling of F/A-18C Presentation”, in-house document, Aerospace Systems Design Laboratory, Georgia Institute of Technology. Used in support of Office of Naval Research Contract N00014-00-10132, 1999.
76. Soban, D.S., Largent, M.C., “Weight Breakdown Presentation”, in-house document, Aerospace Systems Design Laboratory, Georgia Institute of Technology. Used in support of Office of Naval Research Contract N00014-00-10132, 1999.
77. Hines, N.R., “A Probabilistic Methodology for Radar Cross Section Prediction in Conceptual Aircraft Design”, Ph.D. Thesis, Georgia Institute of Technology, June 2001.
78. Mavris, D.N., Soban, D.S., Largent, M.C., “Determination of Notional Aircraft Performance Relationships, Final Report”, submitted to the Institute for Defense Analyses (IDA), Contract Reference Number DB/02.500.360.98.48245, May, 1998.
79. “Integrated Theater Engagement Model (ITEM) User’s Manual”, Version 8.3, Science Applications International Corporation, 10260 Campus Point Drive, San Diego, California 92121, November 5, 1999.
80. Knott, E., Shaeffer, J., Tuley, M., Radar Cross Section, Artech House, Norwood, MA, 1993.
81. “F-22 Raptor Information Site”, website: http://f22rap.virtualave.net/stealth.html.
82. “Virginia Tech Dynamics and Control Research”, website: http://www.aoe.vt.edu/ACRS.html.
83. Stinton, D., Flying Qualities and Flight Testing of the Airplane, AIAA Education Series, American Institute of Aeronautics and Astronautics, Inc., Washington, D.C., 1996.
84. Raymer, D.P., Aircraft Design: A Conceptual Approach, 3rd Edition, AIAA Education Series, American Institute of Aeronautics and Astronautics, Inc., Reston, VA, 1999.
85. von Mises, R., Theory of Flight, Dover Publications, New York, 1959.
VITA

Danielle Soban was born in Simi Valley, California, in the spring of 1966. She was raised there and attended Simi Valley High School, where she graduated in 1984. She then attended California Polytechnic State University and graduated with her Bachelor of Science in 1991. She remained at Cal Poly, SLO to pursue her Master of Science in Aeronautical Engineering, which she obtained in 1996. Her Master’s thesis was entitled “PREDAVOR: Development and Implementation of Software for Rapidly Estimating Stability Derivatives and Handling Qualities”. While pursuing her degree, Ms. Soban interned in the Systems Analysis Branch at NASA Ames and also completed a research study entitled “Special Uses of the C-17: An Investigation of Three Market Niches Applicable to a Commercial Version of the McDonnell Douglas C-17 Transport Aircraft” while under contract to McDonnell Douglas.

Ms. Soban moved to Atlanta, GA in 1996 to pursue her doctorate in Aerospace Engineering at the Georgia Institute of Technology. She was employed as a Graduate Research Assistant at the Aerospace Systems Design Laboratory at Georgia Tech, where she worked on a variety of projects, including applying probabilistic methods to an Uninhabited Combat Aerial Vehicle (UCAV), exploring the effects of adding parasitic stealth to a notional fighter aircraft, and designing a High Speed Civil Transport (HSCT). She currently resides in Mableton, GA with her husband Todd and their cat Maggie.