Can We Measure Security and How?

Steven Drager, William McKeever
Air Force Research Lab, Rome, NY 13441
[email protected], [email protected]

Janusz Zalewski
Dept. of Software Engineering, Florida Gulf Coast University, Ft. Myers, FL 33965, 239-590-7317
[email protected]

Andrew J. Kornecki
ECSSE Department, Embry-Riddle Aeronautical University, Daytona Beach, FL 32114, 386-226-6888
[email protected]
ABSTRACT
In this paper, basic issues of measuring security as a system property are discussed. While traditional approaches to computer security metrics deal mostly with security at the enterprise or organizational level, fewer authors address security measurement at the operational level, that is, when the system is running. After reviewing some basic issues in security assessment, three possible ways of addressing security measurement are outlined: theoretical, experimental, and computational. The computational path to measuring security is pursued in more detail.
Categories and Subject Descriptors D.2.8 [Software Engineering]: Metrics – complexity measures, product metrics.
General Terms Measurement, Experimentation, Reliability, Security.
Keywords Security, software assurance.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. CSIIRW’2011, October 12-14, 2011, Oak Ridge, Tennessee, USA. Copyright 2011 ACM 1-58113-000-0/00/0010…$10.00.

1. INTRODUCTION
There are many research and engineering papers [1-2], as well as books [3-4], government reports [5], and websites [6-7], discussing security metrics. A vast majority of them deal with metrics at the management level and have very little to do with measurement in the scientific sense of the term, as developed in measurement theory [8-9]. What is meant by security metrics in these publications is primarily adherence to standards, whether established industry standards or internal company standards, leading to an assessment of how security policies are executed, for example, by implementing respective processes and auditing them. While this way of assessing security is beneficial and productive, measuring security as a property of a computing system or software is not particularly well developed. Along these lines, even international software engineering standards adopted by IEEE define security as:
(1) protection of information and data so that unauthorized persons or systems cannot read or modify them and authorized persons or systems are not denied access to them [10]; (2) all aspects related to defining, achieving, and maintaining confidentiality, integrity, availability, non-repudiation, accountability, authenticity, and reliability of a system [11]. What interests us in this paper is not security at the enterprise or organizational level, but rather how security as a computer system property can contribute to protecting information and other resources during a system’s operation. In this regard, security can be viewed as one specific aspect of a system’s dependability, the other two aspects being safety and reliability. This focus on quantitative assessment of operational aspects of security has become more popular in recent years. A thorough survey was published in 2009 [12], covering quantitative representation and analysis of operational security since 1981 and addressing the question of whether “security can correctly be represented with quantitative information.” The major finding of this study was that “there exists significant work for quantified security, but there is little solid evidence that the methods represent security in operational settings.” Several other recent publications express similar concerns [13-18], some even arguing that security as a system property is not measurable [19-20]. On the other hand, an increasing number of publications attempt to address the issues of security metrics using more adequate approaches, whether from the architectural viewpoint [21-22] or by applying new theoretical models [23-24]. Special attention is also given to tools for the assessment of software security [25-26].
Nevertheless, only a limited number of the newer papers on security metrics bring up a key issue in attempting to measure security, that is, the necessity to approach it from the perspective of science, much as with measurements of physical quantities. Two principal methods of quantitative assessment of any property are usually mentioned: theory, which is based on analytical calculations, and experiment, which is based on physical measurements. A third method, simulation, complementary to the former two, has not (to our knowledge) been given adequate attention in this context. Simulation is based on computational models, which are typically different from analytical ones. The objective of this paper is to shed new light on all three aspects of measuring security. The next sections briefly discuss: (1) selected theoretical models; (2) fundamental concepts of measurement; and (3) an approach that involves simulation in assessing the impact of security breaches in an embedded system.
2. THREE WAYS TO ASSESS VALUES OF A SYSTEM PROPERTY
Among the research community, it should be obvious that anyone who wants to assess the value of a system property quantitatively should apply the principles of science. As Glimm and Sharp, for example, point out [27]: “It is an old saw that science has three pillars: theory, experiment, and simulation.” This principle is broadly applied in physics, the mother of modern sciences, but it has also been adopted in computing [28-29]. A closer look at selected computing disciplines reveals that, knowingly or not, this principle has merit, for example, in computer networks. Analytical modeling of network traffic is usually done using queuing theory; measuring network parameters, such as throughput and latency, is done via experiments; and computer simulations use combined computational models to accomplish what cannot be done with theory or live experiments. Below we discuss briefly how all three “pillars” can be applied to the quantitative assessment of security, shedding some new light on the meaning of each term in computing.
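The network example can be sketched in code. The following is a hypothetical illustration (not from the paper) of two of the three pillars applied to an M/M/1 queue: theory gives the mean time in system in closed form, while a Monte Carlo simulation estimates the same quantity from a computational model; the arrival and service rates are made up.

```python
import random

def mm1_delay_theory(lam, mu):
    """Analytical mean time in system for an M/M/1 queue: W = 1/(mu - lam)."""
    return 1.0 / (mu - lam)

def mm1_delay_simulated(lam, mu, n=200_000, seed=1):
    """Monte Carlo estimate of the same delay via Lindley's recurrence."""
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(n):
        service = rng.expovariate(mu)
        total += wait + service                         # time in system for this job
        interarrival = rng.expovariate(lam)
        wait = max(0.0, wait + service - interarrival)  # Lindley recurrence
    return total / n

theory = mm1_delay_theory(0.5, 1.0)       # 1/(1.0 - 0.5) = 2.0 time units
simulated = mm1_delay_simulated(0.5, 1.0)
print(theory, simulated)                  # simulated value should be close to 2.0
```

In the paper's terms, the closed-form expression plays the role of the theoretical pillar and the stochastic run the computational one; an experiment would measure the same delay on a live network.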
2.1 Theoretical Models
What we mean by a theoretical model is a complete mathematical theory, which would allow analytical calculation of property values and, thus, prediction of system behavior over time. Various types of such mathematical models exist and are the basis of modern science and engineering. Some of them are continuous, for example, differential equations, but most of those used in computing are discrete, such as queuing theory, finite state machines, network models (Bayesian networks, Petri nets), rule-based systems, etc., including what are called formal methods. What is interesting to observe is that most of these models, if not all, when applied by engineers, begin with a graphical representation of the system they describe. Although not absolutely necessary at the level of formulation of a mathematical theory, when the theory is applied, these graphical representations significantly help in understanding the subject. This is the case with one of the most successful mathematical models ever conceived, the Kalman filter in control engineering.
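To show what such an analytical model buys, here is a minimal scalar Kalman filter sketch, with made-up signal and noise values: given assumed measurement noise statistics, the estimate is corrected analytically at each step, converging to the true value (with identity dynamics and no process noise, the predict step is trivial).

```python
import random

def kalman_constant(measurements, meas_var):
    """Estimate a constant signal from noisy measurements with a 1-D Kalman
    filter (identity dynamics, no process noise)."""
    x, p = 0.0, 1e6                 # initial estimate and (large) error covariance
    for z in measurements:
        k = p / (p + meas_var)      # Kalman gain
        x = x + k * (z - x)         # correct the estimate with the innovation
        p = (1.0 - k) * p           # updated error covariance
    return x

rng = random.Random(0)
true_value = 5.0                    # hypothetical constant to be estimated
zs = [true_value + rng.gauss(0.0, 0.5) for _ in range(500)]
estimate = kalman_constant(zs, meas_var=0.25)
print(estimate)                     # converges near 5.0
```

The point of the sketch is the one made in the text: once the noise statistics are assumed known, the theory yields the estimate by calculation alone, with no further experimentation.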
Taking the analogy with control engineering, one would represent an embedded controller subject to security threats as in Fig. 1. Similar models are widely used to illustrate principles of control theory.

Fig. 1. Generic view of an embedded controller subject to security threats.

The diagram shows that the controller’s multiple interfaces, to the process, the operator, the network, and the database, are all subject to security threats. More importantly, to take the analogy further: just as control theory assumes that the plant (the controlled object) is subject to disturbances, a security theory, if one is built for this model, could assume that known or unknown threats play the role of disturbances to the controller. While control theory can usually make realistic assumptions about the statistical nature of disturbances (e.g., Gaussian noise), it would be challenging, but not impossible, to develop a statistical model for threats. A similar analogy between system failures and security breaches was described nearly twenty years ago in [30], although not in the control engineering context, and to the authors’ knowledge it has not been pursued much further. The assumption is that there are significant similarities between models used in reliability engineering and those used in security assessment. As mentioned above, and as addressed by nearly every author studying quantitative assessment of security, the most serious problem with making any estimates is the unpredictable nature of threats. Even if one can design and assess countermeasures for existing threats, there is a high likelihood that new, unknown threats will appear, so one has to design the security system for the unknown. The lack of sufficient information suggests using one of the theories that deal with uncertainty. One such theory, rough sets [31], not yet widely used and accepted, has been applied in a number of cases, e.g., [23, 32-33]. The attractiveness of rough set theory in this sort of application is that a rough set, as opposed to a fuzzy set, which has vague boundaries, has its boundaries undefined. Thus, not knowing the set boundaries, one can only approximate the set by its lower and upper approximations, as illustrated for a membership function in Fig. 2.

This is where the attractiveness of the theory for security assessment lies: one can deal with entities, such as threats or attacks, which are hard to define or may not even exist yet, because with rough sets there is no essential need to assess their probabilities.

Fig. 2. Illustration of the concept of a rough set with the membership function (function values for points in segments A-C and D-B are undefined).
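The rough-set idea can be sketched concretely. The toy table of network events below is entirely hypothetical (attribute names and labels are made up): events with identical attribute values are indiscernible, and the lower and upper approximations then bound the set of attacks without assigning any probabilities.

```python
from collections import defaultdict

# event id -> (attribute vector, labeled as attack?)  All values hypothetical.
events = {
    1: (("high", "tcp"), True),
    2: (("high", "tcp"), False),   # indiscernible from event 1
    3: (("low", "udp"), True),
    4: (("low", "icmp"), False),
}

attacks = {e for e, (_, is_attack) in events.items() if is_attack}

# Partition events into equivalence classes of the indiscernibility relation.
classes = defaultdict(set)
for e, (attrs, _) in events.items():
    classes[attrs].add(e)

# Lower approximation: classes entirely contained in the attack set.
lower = set().union(*(c for c in classes.values() if c <= attacks))
# Upper approximation: classes that overlap the attack set at all.
upper = set().union(*(c for c in classes.values() if c & attacks))

print(lower)   # {3}: events that are certainly attacks
print(upper)   # {1, 2, 3}: events that are possibly attacks
```

Event 1 is an attack but is indiscernible from the benign event 2, so the pair lands in the boundary region between the two approximations, which is exactly the "undefined boundary" illustrated in Fig. 2.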
2.2 Experimental Models: Metrics and Measures
A significant number of authors deal with experimental assessment of security, using an array of metrics and measures, but hardly anyone compares these measurement processes with those used in measuring physical quantities. Even the uses of the terms metric and measure are confusing, and the terms are sometimes used interchangeably. Moreover, in the security literature a measure usually means an approach taken to alleviate security threats. It is therefore worthwhile to reconsider how these basic concepts are used in physical measurements. Consider length, one of the fundamental properties of the environment that we experience every day: what is striking is the precision with which this property is defined. The unit of length, the meter, is nowadays defined in terms of the speed of light as “the path traveled by light in vacuum during a time interval of 1/299 792 458 of a second” (http://www.bipm.org). The second, the unit of time, is defined with similar precision in terms of transitions between states of the caesium 133 atom. To conduct measurements of length (or distance, an equivalent concept), we have invented a variety of measuring instruments, including the ruler, tape, meterstick, and laser-based instruments (and other instruments to measure time).
Thus, it is fair to say that to assess length quantitatively, we need the unit of measure, the meter, which is our metric, and a measuring device, which is our measure. What do physical definitions of metrics have to do with quantitative assessment of security? They remind us that developing a precise definition of a metric may take a long time. Even developing a clear concept of length took humankind quite a while. As legend tells it, Henry I, in the first half of the 12th century, defined a respective metric by issuing a decree that the yard should be “the distance from the tip of the King’s nose to the end of his outstretched thumb.”

The authors believe that what would move us a step ahead is a closer alignment of security assessment with concepts developed in measurement science and physics. Certainly, metrics and measures for security assessment are, and possibly will remain for a long time, very imperfect. But attempts to define them would help improve them. This concept was used by the authors to evaluate software tools for safety-critical systems, where the metric was the average score and the measure a questionnaire, as illustrated in Fig. 3 [34].

2.3 Computational Models: Simulation
One could argue that computational models are just numerical solutions of analytical models built by theories. This is true in some cases, but not always. One trivial but illustrative example is a quadratic (or higher-order) equation, which one can solve analytically, in theory, but also numerically, using the bisection method, for example. A more serious example of an essentially purely computational model is the Monte Carlo method. Studying security with computational models is another path to pursue. The authors applied the generic controller model from Fig. 1 to investigate security vulnerabilities in a Co-operative Adaptive Cruise Control (CACC) system [35]. The model was limited and included only networking components, as shown in Fig. 4. Nevertheless, simulations of Markov chains [36] for four different kinds of attacks targeting message vulnerabilities led to interesting results regarding the degradation of system services and system availability, partially illustrated in Fig. 5.

Fig. 4. Co-operative Adaptive Cruise Control (CACC) system for studying security impacts [35].

Fig. 5. Markov model of a CACC system with no repairs.
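The kind of Markov-chain availability computation mentioned above can be sketched as follows. The states and per-step transition probabilities are hypothetical and do not reproduce the actual CACC model of [35]; with the failure state absorbing (no repairs), availability decays toward zero over time.

```python
def step(dist, P):
    """One step of a discrete-time Markov chain: new_dist = dist * P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# States: 0 = operational, 1 = degraded by an attack, 2 = failed.
# Per-step transition probabilities (rows sum to 1); all values are made up.
P = [
    [0.98, 0.015, 0.005],   # operational may be degraded or fail
    [0.10, 0.85,  0.05],    # degraded may recover or fail
    [0.00, 0.00,  1.00],    # failed is absorbing (no repairs)
]

dist = [1.0, 0.0, 0.0]      # start fully operational
for t in range(1000):
    dist = step(dist, P)

availability = dist[0] + dist[1]   # system delivers (possibly degraded) service
print(availability)                # tends to 0 as the absorbing state accumulates
```

Different attack types would be modeled by different transition probabilities into the degraded and failed states, which is how a simulation of this kind yields the service-degradation curves of the sort shown in Fig. 5.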
Fig. 3. Software tool functionality measured via the tool questionnaire score (on a scale from 1 to 5) [34].

3. SUMMARY
The objective of this paper was to take a closer look at the three fundamental approaches to quantitative assessment of system properties used in science: theory, experiment, and simulation, and at how they can be applied in the assessment of security. Examples of each were given from the authors’ own experience. The value of using them jointly lies in the prospective validation of security assessment by comparing results obtained separately.
4. ACKNOWLEDGMENTS This project has been funded in part by a grant SBAHQ-10-I-0250 from the U.S. Small Business Administration (SBA). SBA’s funding should not be construed as an endorsement of any products, opinions, or services. All SBA-funded projects are extended to the public on a nondiscriminatory basis. Additional support comes from the NSF Award No. 1129437 – Curriculum Development for Embedded Systems Security. The first author acknowledges the AFRL 2011 Summer Faculty Fellowship through the American Society of Engineering Education. Approved for public release: #88ABW-2011-5509.
5. REFERENCES [1] Landwehr, C.E. 2001. Computer Security, Int. J. on Information Security, 1, 1 (August 2001), 3-13.
[2] Atzeni, A. and Lioy A. 2006. Why to Adopt a Security Metric? A Brief Survey. In Quality of Protection: Security Measurements and Metrics, D. Gollmann, F. Massacci and A. Yautsiukhin, Eds. Springer-Verlag, New York. 1-12
[3] Brotby, W.K. 2009. Information Security Management Metrics: A Definitive Guide to Effective Security Monitoring and Measurement. CRC Press, Boca Raton, FL.
[4] Herrmann, D.S. 2011. Complete Guide to Security and Privacy Metrics: Measuring Regulatory Compliance, Operational Resilience and ROI. Auerbach Publications, London.
[5] Chew E. et al. 2008. Performance Measurement Guide for Information Security. NIST Special Publication 800-55 Rev. 1. National Institute of Standards and Technology, Gaithersburg, MD.
[6] A Community Website for Security Practitioners. URL: http://www.securitymetrics.org – accessed in Sept. 2011.
[7] Hinson G. 2006. Seven Myths about Security Metrics. URL: http://www.noticebored.com/html/metrics.html
[8] Luce, R.D., Krantz, D.H., Suppes, P. and Tversky A. 1990. Foundations of Measurement. Vol. III. Representation, Axiomatization and Invariance. Dover Publications, Mineola, NY.
[9] Zuse, H. 1998. A Framework of Software Measurement. Walter de Gruyter, Berlin.
[10] ISO/IEC 12207:2008 Systems and software engineering--Software life cycle processes. Geneva, Switzerland.
[11] ISO/IEC 15288:2008 Systems and software engineering--System life cycle processes. Geneva, Switzerland.
[12] Verendel V. 2009. Quantified Security Is a Weak Hypothesis. Proc. NSPW’09 New Security Paradigms Workshop (Oxford, UK, 8-11 September, 2009), ACM, New York, NY. 37-50.
[13] Aime, M.D., Atzeni, A. and Pomi, P.C. 2008. The Risks with Security Metrics. In Proc. QoP’08, 4th ACM Workshop on Quality of Protection (Alexandria, VA, Oct. 27, 2008), ACM, New York, NY, 65-69.
[14] Rosenblatt, J. 2008. Security Metrics: A Solution in Search of a Problem. EDUCAUSE Quarterly, 31, 3 (July 2008), 8-11.
[15] Jansen, W. 2009. Directions in Security Metrics Research. Report NISTIR 7564. National Institute of Standards and Technology, Gaithersburg, MD.
[16] Bartol, N., Bates, B., Goertzel, K.M. and Winograd, T. 2009. Measuring Cyber Security and Information Assurance. State of the Art Report. Information Assurance Technology Analysis Center (IATAC), Herndon, VA.
[17] Sree Ram Kumar, T., Sumithra, A. and Alagarsamy, K. 2010. The Applicability of Existing Metrics for Software Security, Int. J. of Computer Applications, 8, 2 (October 2010), 29-33.
[18] Savola, R. 2010. On the Feasibility of Utilizing Security Metrics in Software-Intensive Systems, Int. J. of Computer Science and Network Security, 10, 1 (Jan. 2010), 230-239.
[19] Torgersen, M.D. 2007. Security Metrics for Communication Systems, In Proc. ICCRTS’07, Intern. Command and Control Research and Technology Symposium (Newport, RI, June 19-21, 2007).
[20] Stolfo, S., Bellovin, S.M. and Evans, D. 2011. Measuring Security. IEEE Security and Privacy, 9, 3 (May/June 2011), 60-65.
[21] Zhu, H. 2009. Towards a Theory of Cyber Security Assessment in the Universal Composable Framework. In Proc. ISISE’09, 2nd Intern. Symp. on Information Science and Engineering (Shanghai, China, Dec. 26-28, 2009), 203-207.
[22] Kanstren, T. et al. 2010. Towards an Abstraction Layer for Security Assurance Measurements. In Proc. ECSA 2010, 4th European Conference on Software Architecture (Copenhagen, Aug. 23-26, 2010), 189-196.
[23] Kong, L., Ren X. and Fan, Y. 2009. Study on Assessment Method for Computer Network Security Based on Rough Sets. In Proc. ICIS 2009, IEEE Intern. Conference on Intelligent Computing and Intelligent Systems (Shanghai, China, Nov. 20-22, 2009), 617-621.
[24] Löf, F. et al. 2010. An Approach to Network Security Assessment Based on Probabilistic Rational Models, Proc. SCS 2010, 1st Workshop on Secure Control Systems (Stockholm, April 12, 2010).
[25] Anonymous 2009. Software Security Assessment Tools Review. Booz Allen Hamilton, McLean, VA.
[26] Dowd, M., McDonald, J. and Schuh, J. 2007. The Art of Software Security Assessment: Identifying and Preventing Software Vulnerabilities. Addison-Wesley, Boston, MA.
[27] Glimm, J. and Sharp, D.H. 2006. Complex Fluid Mixing Flows: Simulation vs. Theory vs. Experiment. SIAM News. 39, 5 (June 12, 2006).
[28] Dodig-Crnkovic, G. 2002. Scientific Methods in Computer Science. In Proc. PROMOTE IT 2002, 2nd Conference for the Promotion of Research in IT at New Universities and at University Colleges in Sweden (Skövde, Sweden, April 22-24, 2002), 126-130.
[29] Longman, R.W. 2003. On the Interaction Between Theory, Experiments, and Simulation in Developing Practical Learning Control Algorithms. Int. J. Appl. Math. Comput. Sci., 13, 1 (Jan. 2003), 101-111.
[30] Littlewood, B. et al. 1993. Towards Operational Measures of Computer Security. Journal of Computer Security, 2, 2-3 (June 1993), 211-229.
[31] Pawlak, Z. 1982. Rough Sets, Int. J. of Information and Computer Sciences, 11, 5 (Sept. 1982), 341-356.
[32] Wang, Q., Lin, M. and Li, J. 2008. Method on Network Information Security Assessment Based on Rough Set. In Proc. SITIS’07, 3rd Intern. IEEE Conf. on Signal-Image Technologies and Internet-based Systems (Shanghai, China, Dec. 16-18, 2007), 1041-1046.
[33] Liang Z. and Zhi X. 2010. Synthetic Security Assessment Based on Variable Consistency Dominance-based Rough Set Approach, High Technology Letters, 16, 4 (April 2010), 413-421.
[34] Kornecki, A. and Zalewski, J. 2005. Experimental Evaluation of Software Development Tools for Safety-Critical Real-Time Systems, Innovations in Systems and Software Engineering – A NASA Journal, 1, 2 (May 2005), 176-188.
[35] Kornecki, A., Zalewski, J. and Stevenson, W.F. 2011. Availability Assessment of Embedded Systems with Security Vulnerabilities. In Proc. SEW-34, 34th Annual IEEE Software Engineering Workshop (Limerick, Ireland, June 20-21, 2011).
[36] PTC – The Product Development Company 2011. Relex Markov Modeling Tool. http://www.ptc.com/products/windchill/markov