Team 1: Cyber Defence in Support of NATO

Team 1 Members
Balestrini-Robinson, Santiago, PhD, Georgia Tech Research Institute, US
Horne, Gary, PhD, Blue Canopy, US
Ng, Kevin, PhD, Defence Research & Development Canada, Canada
Huopio, Simo, Finnish Defence Forces Technical Research Centre, Finland
Ürek, Burhan, Maj, Turkish Army, Turkey
Schubert, Johan, PhD, Totalförsvarets Forskningsinstitut (FOI), Sweden
INTRODUCTION
The overall objective of NATO MSG-124 is to apply data farming capabilities to contribute to the development of improved decision support for NATO forces. The overall goal of Team 1 is to provide quantitative insight into cyber security technologies and measures, for the purpose of providing more secure networks to NATO and partner nations. For this purpose, the team has developed a prototype simulation using an agent-based model. Development and use of the model was suspended between IWW 28 and IWW 29; nonetheless, the team recognized that the NetLogo model originally developed was not sufficiently efficient to support comprehensive data farming endeavors. For example, in some cases a single execution could take on the order of 10 minutes to run a 4-year simulation on a standard computer. Such a case must be repeated dozens to hundreds of times, and a single study may include hundreds of such cases. Additionally, the initial assumption that NetLogo would facilitate the collaborative development of the model proved to be invalid, as the model was developed by a single group member. For these reasons, the authors used IWW 29 to learn a final set of lessons from the NetLogo model and to start development of a more efficient discrete event simulation model using the open-source discrete event simulation framework SimPy, placed under version control with Git to facilitate the tracking and integration of changes.
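To illustrate the intended direction (and not the actual DACDAM code, which is in the repository referenced under Future Work), the following is a minimal sketch of a SimPy discrete event process structure; every name, rate, and behavior here is a hypothetical placeholder.

```python
# Minimal sketch of a SimPy process structure for such a port; all names,
# rates, and behaviors below are illustrative placeholders, not DACDAM.
import random
import simpy

MEAN_TIME_TO_UPDATE = 20.0       # days between patch cycles (illustrative)
PHISHING_SUSCEPTIBILITY = 0.03   # chance a phishing attempt succeeds

def patch_cycle(env, stats):
    """Model a host receiving software updates at random intervals."""
    while True:
        yield env.timeout(random.expovariate(1.0 / MEAN_TIME_TO_UPDATE))
        stats["updates"] += 1

def phishing_campaign(env, stats, attempts_per_day=1):
    """Model an attacker making one phishing attempt per day."""
    while True:
        yield env.timeout(1.0 / attempts_per_day)
        if random.random() < PHISHING_SUSCEPTIBILITY:
            stats["compromises"] += 1

random.seed(1)
stats = {"updates": 0, "compromises": 0}
env = simpy.Environment()
env.process(patch_cycle(env, stats))
env.process(phishing_campaign(env, stats))
env.run(until=4 * 365)  # a four-year run, with time measured in days
print(stats)
```

Because SimPy processes are plain Python generators advanced by the event queue, a run like this completes in well under a second, which is the efficiency gain motivating the port.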
ANALYSIS
The experimental design consisted of 26 cases surrounding a base case scenario (depicted in Figure 1). Each case was repeated 12 times, for a total of 312 runs. The metrics assessed were: (1) Search and Attack, (2) Cordon Search, (3) Civil Security, and (4) Support and Economic Infrastructure. These were deemed different enough to capture the diversity in the types of operations analyzed and to ensure the results would be more universally applicable. The parameters varied (and their ranges) were:
• Mean time to update (5 to 35 days)
• Susceptibility to phishing (1% to 6%)
• Maximum competency of attacker (60% to 90%)
• Sensor probability of detection (30% to 80%)
Figure 1. Base Case for Exploration
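The study's actual 26-point design is not reproduced here; as a notional illustration only, the sketch below generates a space-filling design over the four factors and ranges listed above (the factor names and the choice of a Latin hypercube are assumptions, not the design method actually used).

```python
# Illustrative sketch only: a stand-in space-filling design over the four
# factors above. The study's actual 26-case design is not reproduced.
from scipy.stats import qmc

factors = ["mean_time_to_update", "phishing_susceptibility",
           "max_attacker_competency", "sensor_prob_detection"]
lower = [5.0, 0.01, 0.60, 0.30]   # lower bounds from the ranges above
upper = [35.0, 0.06, 0.90, 0.80]  # upper bounds from the ranges above

N_CASES = 26         # design points surrounding the base case
N_REPLICATIONS = 12  # repetitions of each case (26 x 12 = 312 runs)

sampler = qmc.LatinHypercube(d=len(factors), seed=42)
design = qmc.scale(sampler.random(n=N_CASES), lower, upper)

# Expand to one row per simulation run, tagged with its replication index.
runs = [dict(zip(factors, point), replication=rep)
        for point in design
        for rep in range(N_REPLICATIONS)]
print(len(runs), "runs")  # 312
```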
FUTURE WORK
The SMEs proposed additional ideas for improving the model, as listed below:
• Create a tool that lets operators explore the options available for setting up a network and provides insight into which behaviors need to be modeled
• Capture the dynamics of malware transfer as it jumps through subnets/servers
Additionally, as described in the introduction, the authors decided to begin work on porting the model to a more efficient framework using SimPy and on using a version control system (VCS) to facilitate the tracking and merging of changes. The VCS selected was Git, and the repository is hosted on the popular site github.com. The new DACDAM model can be found here: https://github.com/sanbales/dacdam
The team plans to pursue the following objectives during the next meeting, to be held in Stockholm in June 2015:
• Obtain feedback on the cyber model output from Swedish cyber experts
• Data farming of the cyber model to explore more parameters of interest
• Analysis of the data farming results
• Explore inserting ideas from Active Cyber Defence into the syndicate work
• Begin inserting draft sections into the outline of the cyber syndicate report
RESULTS
The results from the experimental design were analyzed using SAS’s JMP statistical analysis tool. The authors performed a screening analysis to identify the p-value for each parameter and for their interactions, in order to assess their significance in explaining the variability in the outputs.
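The screening itself was performed in JMP; purely as a rough, non-authoritative sketch of an equivalent analysis with open-source tooling, the snippet below fits main effects plus two-way interactions with statsmodels and lists the terms with low p-values (the file and column names are hypothetical).

```python
# Sketch only: a comparable screening analysis using open-source tools
# instead of JMP. The CSV file and column names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("dacdam_runs.csv")  # assumed table of the 312 run results

# Fit main effects plus all two-way interactions for one response metric.
model = smf.ols(
    "search_and_attack ~ (mean_time_to_update + phishing_susceptibility"
    " + max_attacker_competency + sensor_prob_detection) ** 2",
    data=df,
).fit()

# Terms with p-values below 0.05 are flagged as significant in the screen;
# the sign of each coefficient indicates the direction of the effect.
print(model.pvalues[model.pvalues < 0.05].sort_values())
print(model.params.loc[model.pvalues < 0.05])
```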
The results indicate that mean time to update is indeed the most significant parameter in this particular scenario, as its p-value is less than 0.05 (i.e., there is only a low probability that the observed effect would arise if mean time to update had no relationship to the variability in the output metrics). For illustration purposes, the results for Search and Attack and Civil Security are presented in Figure 2 and Figure 3. In addition, the results indicated that higher values of mean time to update would reduce the ability to perform the given mission, which agreed with the authors’ intuition. This is represented in the figures by the negative contrast values.

Figure 2. Factor Impacts on Search and Attack

Figure 3. Factor Impacts on Civil Security

REFERENCES
1. Müller, K., and Vignaux, T., "SimPy: Simulating Systems in Python," ONLamp.com Python DevCenter, 2003.
2. Scythe, Proceedings and Bulletin of the International Data Farming Community, Issue 16, What-if? Workshop 28 Proceedings, October 2014.
3. Wilensky, U., "NetLogo," Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL, 1999. http://ccl.northwestern.edu/netlogo/