Measuring Risk: Computer Security Metrics, Automation, and Learning

Rebecca Slayton, Cornell University
Risk management is widely seen as the basis for cybersecurity in contemporary organizations. Risk management aims to minimize the combined cost of security breaches and measures to prevent breaches. This article analyzes debate over computer security risk assessment in the 1970s and 1980s, arguing that the most valuable part of risk management—learning—is also one of its most neglected aspects.

The Cybersecurity Framework shall provide a prioritized, flexible, repeatable, performance-based, and cost-effective approach, including information security measures and controls, to help owners and operators of critical infrastructure identify, assess, and manage cyber risk.1
Risk management is the central approach to cybersecurity in governments and corporations around the world.1–5 It is widely viewed as the way to achieve adequate computer security at the lowest possible cost. Like other computer security metrics, it is also seen as the basis for objectivity, efficiency, and control—even automated control. For example, the US government recently mandated that all federal agencies report a variety of cybersecurity risk-related metrics through an automated tool called Cyberscope.6 Yet risk analysis, like all efforts to measure computer security, requires qualitative human judgment. As early as 1978, computer security researchers noted that “the notion of security is fundamentally one of judgment rather than measurement.”7 A metric—commonly defined as a method of measuring something8,9—is a human and social construction that abstracts from the messy complexity of the real world and is thus inevitably incomplete. For example, the number of failed log-in attempts may be used to gauge the threat of somebody guessing at passwords, but it neglects the threat of somebody using a stolen password. Furthermore, many metrics ultimately measure human judgment. For example, the
risk of an event that has never happened before can only be estimated using the judgment of experts, who may express the same risk qualitatively (such as “high”) or quantitatively (such as “nine on a scale of one to 10”). Indeed, some computer security researchers have argued that the distinction between qualitative and quantitative evaluation is misleading because qualitative descriptions such as “high” are often converted to quantitative estimates, and vice versa.8 In this context, quantification may imply more certainty than is warranted.

If metrics cannot eliminate the subjectivities of human judgment, why have they remained so central to computer and network security? How exactly do metrics such as risk assessment improve computer security—or do they? Here I address these questions by analyzing debate over the development and application of computer security risk assessment in the 1970s and 1980s. Drawing on US government reports, the trade press, and original interviews with computer security pioneers, this article aims to improve our understanding of the interrelationships between the history of computing, metrics, and contemporary efforts to manage cybersecurity risk.

Donald MacKenzie has shown how high-risk areas of computing, such as air-traffic control and military operations, drove efforts to formally prove that computer systems were correct and secure.10 But relatively few computer systems were even partially verified through formal proof, and we know virtually nothing
about how the far more common approach to computer security—risk management—shaped computing research or practice. Unlike formal verification and program proving, risk management accepts the inevitability of imperfections and vulnerabilities and aims to minimize the combined costs of security breaches and measures to prevent them. A better understanding of the past successes and pitfalls of computer security risk assessment can also inform contemporary policy, which remains a topic of dispute. For example, one year after Cyberscope was introduced, 49 percent of respondents reported that Cyberscope had not reduced their risk, and only 24 percent reported that it had reduced their risk (the remainder did not know).11 Meanwhile, prominent computer security experts have argued that risk management is “bound to fail” as a means of regulating the security of private-sector-owned critical infrastructure.12 As risk metrics remain a topic of dispute, the question is timely: in what ways are metrics valuable, and to whom? Historians and social scientists have noted multiple reasons that metrics are prized. Ted Porter has argued that quantification is appealing because it seems to offer absolute objectivity—that is, knowing things as they really are.13 Lord Kelvin famously summarized this view when he wrote, “when you can measure what you are speaking about, and express it in numbers, you know something about it.”14 Porter argues that what quantification actually provides is mechanical objectivity— something obtained by following rules.13 Metrics provide a set of rules for measurement, which may or may not lead to the truth. Nonetheless, as the name implies, mechanical objectivity can be automated, and this leads to a second reason that metrics are prized: they can increase efficiency. For example, intrusion-detection metrics enable the automation of tedious audit processes. Because such metrics are based on models that only partially represent the real world, human judgment is still needed.9,15 Nonetheless, metric-enabled automation can radically reduce the work of audits, enabling new kinds of analysis. A third reason that metrics are prized is that they enable feedback and control, regardless of whether they model the world realistically. Despite the rhetoric of truth, fairness, and accountability, it is ultimately the desire for a specific kind of control—that is, improvement of federal agencies’ computer
security—that drives the US government’s pursuit of metrics. Similarly, it is primarily an interest in control, rather than truth, that underlies the way that computer security researchers such as Steve Bellovin render Kelvin’s words: “Until we measure security, we can’t improve it.”15 This article articulates a fourth and often neglected reason for valuing metrics. If practitioners have valued risk assessment at all (and much evidence suggests that they have not), it is not because the final measurement is accurate or the result of increased efficiency but because the process of measuring risks within organizations encourages learning among workers. This process was qualitatively valuable, irrespective of any quantitative output. And because the human factor was often the weakest link in a computer security system, learning could often contribute far more to security than any technical mitigation. Here I advance a two-part argument. First, I argue that the most valuable part of risk assessment was also the most overlooked part—namely, the learning that came from the process of measuring risk rather than the final measurement. Second, although the automation of metrics partly contributed to objectivity, efficiency, and control, it played a more ambivalent role with respect to learning. Automated risk assessment software codified a model of risk that was vetted by a wider team of experts and could thereby achieve greater objectivity than an individual’s model of risk.16 It could also improve efficiency and provide a means of controlling decisions. Finally, risk analysis software could structure the thinking of risk analysts and contribute to learning by directing attention to specific vulnerabilities and threats that might otherwise have been neglected. However, automated risk assessment software could not ensure learning. Analysts could “turn the crank” to output a measure of risk, without really understanding what they were doing. It seems that some organizations misunderstood the primary value of “automated” risk assessment as increased efficiency, when the greater value was the learning process. In so far as organizations were primarily interested in quickly getting an efficient measure of risk, they were likely to use automation to avoid learning rather than to enhance it. Similarly, while regulators attempted to exercise some control over computer security by requiring federal agencies to output some
measure of risk, this requirement could not force the learning that came from measuring risk internally.

In what follows, I first discuss how the US government’s interest in administrative control and accountability influenced work on three different types of computer security metrics: threat monitoring (or intrusion detection), secure-worthiness (or trustworthiness), and risk assessment. Each of these metrics requires two levels of accounting, which are valued in different ways by different communities. At one level, the outputs of metrics (individual measurements) enable a kind of administrative control and efficiency. The second level at which metrics entail an accounting—the internal process of measuring—was more closely associated with learning. Although automation could be applied at either level, its success was inversely proportional to the need for additional learning and human judgment.

The remainder of the article focuses more closely on risk assessment, largely because it is so central to contemporary debates about cybersecurity policy. As we will see, researchers attempted to resolve the conflict between the main governmental proponents of risk assessment (regulators) and the main critics of risk assessment (computer security administrators in federal agencies) in the 1970s and 1980s by developing automated risk assessment solutions, but ironically, this could undermine the most valuable aspect of risk assessment—learning. These findings suggest that contemporary efforts to control and automate the collection of cyber-risk data might be better directed toward an improved understanding of the learning process.
Computer Security Regulation, Administrative Control, and Metrics

Interest in quantifying and thereby controlling computer security began at least as early as the 1960s, when the Department of Defense sponsored studies of how to assure the security of military computer systems. A 1967 Defense Science Board panel chaired by Willis Ware of RAND noted that “it is difficult to make a quantitative measurement of the security risk-level” of “a security-controlling system” and that this posed challenges for “policy decisions about security control in computer systems.”17 In the early 1970s, the civilian government also became concerned about how to quantify and control computer security after the passage of several privacy laws raised questions about how to enforce
compliance with the rules. For example, the 1970 Fair Credit Reporting Act established rules for the collection and dissemination of consumers’ credit information and gave consumers the right to both know and correct their records. Similarly, the 1974 Privacy Act required that federal agencies publicize the existence of databases holding personal information, provide a means for individuals to learn about and correct their records, and ensure that personal data not be shared without consent and that it not be corrupted or misused.18

In 1972, ACM and the National Bureau of Standards (NBS) cosponsored a workshop on how to control the accessibility of data (hereafter referred to as the 1972 workshop), in no small part because “legislation mandates that [the NBS] develop standards for compliance by the federal government.”19 One of five working groups at the workshop focused on measurements; it explained that metrics were needed to improve control of both “‘hard’ systems engineering dealing with physical devices and processes” and “‘soft’ systems involving society.”20 It proposed a new “data security engineering discipline,” which would straddle both areas, and identified four specific areas for quantification: risk assessment, cost-effectiveness, secure-worthiness, and measures of system penetration.21 These areas presaged many metrics that the NBS and other government agencies sought to develop in the 1970s.

In 1977 and 1978, the NBS and General Accounting Office (GAO) sponsored two workshops on computer security audit and evaluation (hereafter referred to as the 1977 and 1978 workshops, respectively).22,23 Although these workshops were not the first place that researchers discussed computer security metrics, they nonetheless provide important insights on the relationship between accounting and metrics. The following sections briefly review workshop discussions of three types of metrics—threat monitoring, trustworthiness, and risk assessment—showing that each of these related to accounting at two different levels. First, the outputs of such metrics provided a basis for improved efficiency and control, at least in principle. And second, many metrics described a process of accounting that required a learning process and thereby could enhance security.

Threat Monitoring

Some of the earliest efforts at metrics aimed to automate the audit process itself.
What became known as intrusion detection is generally traced to James Anderson’s 1980 report on threat monitoring and surveillance, but similar proposals were explored in the late 1960s. At the well-known session on security at the 1967 Spring Joint Computer Conference, H.E. Petersen and Rein Turn of RAND noted that “threat monitoring” could be accomplished by recording “all rejected attempts to enter the system or specific files, use of illegal access procedures, unusual activity involving a certain file, attempts to write into protected files, attempts to perform restricted operations such as copying files, excessively long periods of use, etc.” and potentially programming a “real-time response.”24 Similarly, the 1972 measurements working group defined measures of system penetration as “techniques to detect, measure, and set alarms automatically from within a data base when abnormal activity indicates that something is wrong.”20 The group reported that the “concept of threat monitoring is at least five years old” but “has not received widespread use in the real world. We know of one or two uses of the concept in real-time systems, but it appears the results are, at best, indeterminate.”20 Intrusion-detection metrics enjoyed somewhat more success in the 1980s, with the development of systems sponsored primarily (though not solely) by the US military and nuclear laboratories.25,26

The power of intrusion-detection metrics lay in their ability to automate tedious audit processes, thereby radically increasing efficiency. The outputs of such metrics could also be used to automatically protect systems against suspected threats, potentially improving control. However, although research on intrusion-detection metrics could provide some information about typical attack patterns, the actual use of such metrics provided little in the way of organizational learning. Indeed, the more such metrics could be automated, the less they could be expected to teach.
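The basic mechanism that Petersen and Turn described, counting rejected attempts and raising an alarm when they become unusual, can be illustrated with a minimal sketch in Python. The event format, thresholds, and alarm messages below are hypothetical illustrations for this article, not a reconstruction of any historical monitoring system.

from collections import Counter

# Hypothetical audit records: (user, action, outcome), as might be drawn from a system log.
AUDIT_LOG = [
    ("alice", "login", "rejected"),
    ("alice", "login", "rejected"),
    ("alice", "login", "rejected"),
    ("bob", "write_protected_file", "rejected"),
    ("alice", "login", "accepted"),
]

# Illustrative thresholds: how many rejections of each kind should trigger an alarm.
THRESHOLDS = {"login": 3, "write_protected_file": 1}

def monitor(events):
    """Count rejected attempts per (user, action) and flag any that exceed a threshold."""
    rejected = Counter((user, action) for user, action, outcome in events
                       if outcome == "rejected")
    return [f"ALERT: {user} exceeded threshold for {action} ({count} rejections)"
            for (user, action), count in rejected.items()
            if count >= THRESHOLDS.get(action, float("inf"))]

for alarm in monitor(AUDIT_LOG):
    print(alarm)

Even this toy example shows why such metrics automate efficiently but teach little: once the thresholds are set, the analyst's judgment is no longer exercised on each event.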
Secure-Worthiness and Trustworthiness

While intrusion-detection metrics sought to maintain accountability within an operational computer system, other researchers sought to develop metrics that would hold computer manufacturers accountable for developing more secure products. At the 1972 workshop, the group on measurements discussed secure-worthiness as analogous to the “work factor” commonly used in cryptography or the “fire rating” given to safes and vaults. At the same workshop, a group on access controls chaired by Clark Weissman proposed measuring secure-worthiness “as a function of the cumulative probability of violating multiple domains and the cumulative security damage resulting therefrom (e.g. the improper availability of access rights).”27 This notion of secure-worthiness was similar to that of risk—that is, the probability of an event occurring multiplied by the consequences of the event. However, Weissman’s group also considered secure-worthiness as something that might be attributed to products and suggested that “Government should play an important role in this arena, possibly paralleling its role in commercial aviation, in which the FAA certifies aircraft as airworthy.”28

Weissman revisited this proposal at the 1977 and 1978 workshops on audit and evaluation of computer security. In 1977, Weissman chaired a panel that was asked to identify audit methods to ensure reliable software. The panel explicitly linked the notion of trustworthiness to accounting:

A trustworthy [computer] program is one that is well documented, functionally not complex, modular, relatively short in length, integrated into a rigorously structural architecture, and produced as the result of good programming practices and sensible standards. The trustworthiness of programs is the corporate analog of having “generally accepted accounting principles.”29
The following year Weissman served on a panel that was instructed “to list the vulnerabilities of [Processors, Operating Systems, and Nearby Peripherals], and the counters to them, with some evaluation of costs.”30 However, the panel included several individuals from the computer security research community who had a different vision, including Peter Neumann of Stanford Research Institute and University of California, Los Angeles computer science professor Gerald Popek, both of whom were trying to develop formally verifiable computer programs. It also included Steve Walker, who was working on communications, command, control, and intelligence at the DoD. The panel chose to address a different question:

What authoritative ways exist, or should exist, to decide whether a particular computer system is “secure enough” for a particular intended environment of operation, and, if a given system is not “secure enough” for an
intended application, what measures could or should be taken to make it so?30
What became known as the Lee panel (for its chair, Theodore Lee of Sperry Univac) recommended establishing computer security metrics to evaluate new products. Over the next five years, computer scientists first at MITRE and then at a new National Computer Security Center (NCSC) developed these criteria into the Trusted Computer System Evaluation Criteria (TCSEC), which were published as the Orange Book.31,32 Like other metrics, the TCSEC worked through two levels of accounting. First, the internal process of assigning a trustworthiness rating involved accounting for various security controls, such as audit trails and discretionary access control. Second, the trustworthiness rating that emerged as the output of that process enabled computer purchasers to hold companies accountable to particular standards, at least in principle.

Unfortunately, the tedious measuring process was so slow that it limited the value of the final measurement within a larger system of accountability. The main problem was that the measuring process raised questions that the authors of the Orange Book had not anticipated. Marvin Schaefer, a leader of the TCSEC development, recalls: “We thought we understood what we were writing,” but “for all our belief that we were writing with precision, only experience could show that we weren’t.”33 He noted “one fatal flaw” in the Orange Book: some issues “were treated as requirements that were basic research.”34 Although some of the questions raised by the interpretation process were answered in successors to the Orange Book, which became known as the Rainbow Series, many of them remained “terrible enigmas.”34 Some criteria, such as formal verification, leveraged automation to increase the speed of proving the trustworthiness of computers. Nonetheless, Schaefer recalls that interpretive ambiguities in the final criteria made “slow look fast in comparison” to rating computers by the TCSEC35 (emphasis in original). Metrics of trustworthiness simply could not be automated.

Nonetheless, the Orange Book experience suggests that efforts to account for security within individual computer systems sparked a process of learning about computer security, even if that was only learning about what questions deserved further scrutiny. Unfortunately, while the internal process of accounting for the trustworthiness of computer systems
shaped learning, the final outputs of the interpretation process (the final rating assigned to a computer system) had relatively little impact on computer security, if only because the process was so slow that products tended to become obsolete by the time that they were rated. Unlike intrusion-detection metrics, trustworthiness metrics were far from automatic, and they required considerable judgment.

Risk Assessment

Significantly, the Lee panel was explicitly instructed not to focus on risk because risk analysis was the focus of a separate effort. While product evaluation metrics sought to certify the trustworthiness of products, risk management metrics sought to identify strategies for producing the most security at lowest cost and included an evaluation of factors that extended beyond individual products (such as organizational and environmental factors).

Concerns about cost were expressed at least as early as 1967. In a paper that expanded upon his well-known presentation at the 1967 Spring Joint Computer Conference, Ware described the “privacy problem … as an engineering trade-off question.”36 He explained that the “value of private information to an outsider will determine the resources he is willing to expend to acquire it,” whereas “the value of the information to its owner is related to what he is willing to pay to protect it.” Ware proposed that perhaps “this game-like situation can be played out to arrive at a rational basis for establishing the level of protection.”37 Ware and other RAND researchers thus adopted a game-theoretic approach to cost.38–40 For example, RAND researchers Rein Turn and Norman Shapiro developed formal mathematical expressions to analyze the value to an intruder of obtaining data element N, the value to the subject of keeping N confidential, and the value to the hosting organization of protecting N. These expressions would determine the ideal level of resources used to protect N. Unfortunately, this formalism was not very useful because “available cost data is very limited.”41 Cost estimation soon shifted from a game-theoretic focus (a concern with the relative cost to the offense and defense) to a risk management focus (how to achieve the most security at lowest cost). In the 1970s, discussions focused on risk analysis or risk assessment rather than risk management, but they tended to presume that the goal of
such an analysis was to manage risks. In the 1980s, efforts to develop risk management metrics became more explicit.

At the 1972 workshop, Turn was one member of the Measurements Working Group, which felt that risk analysis was “perhaps the most advanced” area of metrics. Drawing upon the work of IBM computer engineer and group member Robert Courtney, they proposed estimating the probability and negative financial impact of six risks: destroying, disclosing, or modifying data, each of which could happen either intentionally or unintentionally. This would allow risk assessors to calculate the annual loss expectancy (ALE). They predicted that “Results of empirical studies and actual experience should result in actuarial tables that quantify these risks. From there it is a simple step to determine the cost/benefit trade-off as needed for a proper security system design.”20

Unfortunately, risk analysis turned out to be much more difficult, as became clear when it became the cornerstone of efforts to secure federal data processing facilities after the 1974 Privacy Act. In 1975, the US Office of Management and Budget (OMB) issued Privacy Act guidance requiring that all federal agencies use risk analysis to select safeguards for the privacy and security of their information systems.42 The NBS issued similar guidelines, emphasizing that the “first step toward improving a system’s security is to determine its security risks.”43 After identifying the most serious risks—that is, those with the highest impact and probability of occurring—organizations could select the security safeguards that would mitigate them.

Although the NBS Privacy Act guidelines emphasized the importance of risk analysis, they had little to say about how to actually conduct a risk analysis. It was not until 1979 that the NBS issued “Guidelines for Automatic Data Processing Risk Analysis,” which was principally authored by Courtney.44 The 1979 guidelines simplified Courtney’s earlier classification of risks to three: confidentiality, integrity, and availability. Acknowledging that it “would probably be impossible for the team to conceive of every event which could have a deleterious effect on data processing,” the guidelines recommended “cataloging each data file or application system on a worksheet.” The team could then use “a combination of historical data … knowledge of the system, and … experience and judgment” to make rough estimates of the likely frequency and impact
of negative events on each data file or application system.45 Risk analysis thus became a process of accounting for organizational assets and estimating the likelihood and costs of something bad happening. The guidelines acknowledged uncertainty about such likelihoods and costs and recommended explicitly requesting only order-of-magnitude estimates to avoid getting caught up in debate over precise values.44,46 At the same time, the outputs of risk assessment could be used to hold agencies accountable for adhering to the Privacy Act. However, regulators soon discovered that agencies were not using risk analysis, partly because of its shortcomings.
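The annual loss expectancy arithmetic described above is simple enough to sketch in Python. The worksheet entries, frequencies, and dollar impacts below are hypothetical order-of-magnitude guesses of the kind the guidelines solicited, not figures from any actual assessment.

# Annual loss expectancy (ALE), in the spirit of the 1979 NBS guidelines:
# ALE = estimated frequency of a harmful event (per year) x estimated impact (in dollars).
# Both inputs are rough, order-of-magnitude judgments supplied by the assessment team.

# Hypothetical worksheet: (data file or application, event, frequency per year, impact in dollars)
worksheet = [
    ("payroll file", "unintentional modification", 10.0, 1_000),
    ("payroll file", "intentional disclosure", 0.1, 100_000),
    ("billing application", "loss of availability", 1.0, 10_000),
]

for asset, event, frequency, impact in worksheet:
    ale = frequency * impact
    print(f"{asset:20s} {event:28s} ALE = ${ale:,.0f} per year")

# Safeguards would then be directed at the entries with the largest ALE, provided each
# safeguard costs less than the losses it is expected to prevent.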
Risk Management in Practice

In 1977, the GAO surveyed 10 federal agencies about their data security practices and found that only one had made a serious effort at risk management (although another agency designated someone to develop risk management procedures during the GAO evaluation).42 Some managers acknowledged the value of risk management but explained that they had not used it because they lacked resources and support from more senior management. Others rejected risk management as “too theoretical and imperfect,” noting “the difficulty of determining the degree of data sensitivity, especially by the technique of determining the cost to the organization of having data elements compromised.”47 However, the GAO contended “that risk management is an adaption of the widely known and accepted problem solving process which always requires a large amount of initiative and some innovativeness but for which there is no acceptable substitute.”47

The publication of risk analysis guidelines in 1979 did not immediately improve agency compliance. A 1982 survey by the GAO once again concluded “that executive agencies do not generally use risk analysis techniques or other forms of sound administrative, physical, or technical controls” to improve computer security.48 It pointed to a circular problem: senior management did not use risk analysis because they didn’t support information security programs, but management didn’t support information security because, without risk analysis, they were “unaware of how vulnerable their information systems really are to unauthorized and illegal practices.”49 Although the GAO blamed the lack of risk assessment on the ignorance and/or negligence of senior management, the following
discussion reveals at least four reasons that agencies did not embrace risk assessment: inadequate models of risk, a paucity of data, rapidly changing threats, and high costs.

Inadequate Models

The most fundamental challenge for risk assessment was defining an adequate model of risk. Although risk researchers initially assumed that “actuarial tables” showing the cost and probability of various security breaches would emerge from empirical studies and real-world experience, no such tables materialized. Instead, researchers developed risk models using notions like vulnerabilities, threats, and consequences. At the 1977 AFIPS National Computer Conference, three RAND researchers—Rein Turn, Stockton Gaines, and S. Glaseman—argued that because “we lack a sufficient understanding and a sufficient set of facts on which to base the formulation of reasonable models of the security of computer systems … there are no means for validating any one particular model [of computer security] or choosing between several different models.”50 They concluded, “we prefer to leave it as an open question whether or not a quantitative assessment methodology can ever be developed.”50

Indeed, risk assessors did not even agree on terminology. At the 1978 workshop, a session on managerial and organizational vulnerabilities and controls noted,

Efforts of this group were hampered in the identification of vulnerabilities and controls by a lack of adequate definition of critical terminology such as threats, vulnerabilities, risk, risk analysis, and risk assessment. Available NBS publications and technical documents … promote confusion by often using these terms interchangeably.51
This confusion did not end with the 1979 publication of NBS guidelines for computer security risk analysis. The guidelines continued to treat terms such as threat and vulnerability interchangeably, and they never defined any of them explicitly.

In the mid-1980s, government agencies tried to bring some order to the muddled field of computer security risk analysis. In January 1985, the Air Force Computer Security Program Office sponsored a workshop, “Risk Analysis of Federal Information Systems,” which was attended by about 50 representatives of federal agencies. As one leading participant recalled later, workshop
attendees agreed on the need for a common model describing “the interrelationships of the components of [risk management] (e.g. threats, threat frequencies, vulnerabilities, safeguards, risk, outcomes, etc.).”52 They also agreed that the NBS and NCSC should together take a leading role in establishing methods for computer security risk analysis. In 1988 and 1989, the NBS and NCSC sponsored Computer Security Risk Management Model Builders Workshops, both of which saw considerable discussion of a standard model.53 Nonetheless, risk researchers acknowledged that no consensus was reached on a standard risk management model.54,55 Thus, the field of computer security risk management appeared less mature in the late 1980s than in the early 1970s. The 1972 measurements working group described risk analysis as the “most advanced” area of metrics, but by the late 1980s, risk researchers sought the terminological clarity possessed by “other aspects of computer security (such as trusted systems).”56

Uncertain Data

Even if computer security risk researchers agreed upon a model, they faced the challenge that models might require risk assessors to input information that was simply unknown. As Turn, Gaines, and Glaseman argued in 1977, existing risk assessment models “describe what we would do if we really had all the knowledge and information about system vulnerabilities, people’s intentions regarding those vulnerabilities, and exact dollar values concerning the losses that we could expect if particular attacks should occur.”50 Such models were useless because they “assume that we are able to supply values for the parameters of the model which, in fact, we are not able to supply.”50 By contrast, Courtney, who spoke immediately prior to the RAND researchers at the 1977 AFIPS National Computer Conference, acknowledged that precise data was not available but insisted that order-of-magnitude estimates were usually sufficient.46

Some researchers sought to make uncertainties more explicit in computer security risk assessment. To account for what he called “gut feelings and common-sense judgments,”57 University of California, Berkeley computer science professor Lance Hoffman borrowed from a mathematician colleague, Lotfi Zadeh, who developed “fuzzy set theory.”58 Hoffman, who earned his PhD in computer science at Stanford amid the late 1960s privacy
debates and participated in both the 1972 and 1978 NBS workshops, became well-known for embracing the “fuzzy” nature of risk analysis as well as for developing risk analysis software. In the 1970s, Hoffman and his graduate student Don Clements developed a model of security consisting of three sets: objects (each with a loss value), threats (each with a likelihood), and security features (each with a resistance to threats). They used linguistic variables such as “very high,” “high,” and “low” to specify object values, threat likelihoods, and feature resistances. Mathematically, each linguistic variable is a fuzzy set whose compatibility function, μ, assigns each nonfuzzy rating a degree of compatibility with the linguistic term. For example, if μ_high(0.8) = 0.9, then the rating “high” is 90 percent compatible with the nonfuzzy rating of 0.8 (see Figure 1).59

Figure 1. Sample compatibility functions for the linguistic variables “high” and “very high.”59

Hoffman and Clements developed a computer program, SECURATE, which prompted users to input object values, threat likelihoods, and the resistance of security features using intuitive linguistic variables. SECURATE could then rate the security of the system in a variety of ways—for example, by analyzing the weakest link in the system or the average security of all the components. Hoffman and Clements asked students to test SECURATE by analyzing the risks at seven large computer installations, including one at a large bank and another at a large utility. They concluded that “the system achieved its goal of increasing understanding of installation security,” continuing,

a couple of users remarked that just filling out the forms made the strengths and weaknesses of an installation’s security a lot clearer. Apparently focusing their thoughts into a logical, well-defined framework enabled them to view the situation more clearly and—even before using the system—to gain some of the insights we had hoped the system would provide.60
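The compatibility-function idea can be sketched in a few lines of Python. The particular membership curve and the weakest-link and averaging rules below are illustrative assumptions for this article, not the functions or aggregation rules that Hoffman and Clements actually used in SECURATE.

# Each linguistic rating ("high," "very high," ...) is a fuzzy set over nonfuzzy scores in [0, 1].
# A compatibility (membership) function reports how compatible a score is with the rating.

def mu_high(x):
    """Illustrative compatibility function for 'high'; chosen so that mu_high(0.8) = 0.9."""
    return max(0.0, min(1.0, (x - 0.5) * 3.0))

def mu_very_high(x):
    """Illustrative compatibility function for 'very high,' a concentration of 'high.'"""
    return mu_high(x) ** 2

# Hypothetical nonfuzzy security scores for the components of one installation.
component_scores = {"physical access": 0.9, "password controls": 0.6, "backup procedures": 0.8}

# A SECURATE-style summary might report the weakest link and an overall average.
weakest = min(component_scores, key=component_scores.get)
average = sum(component_scores.values()) / len(component_scores)
print(f"weakest link: {weakest} (score {component_scores[weakest]:.1f})")
print(f"average score: {average:.2f}; compatibility with 'high': {mu_high(average):.2f}")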
As the users’ comments above suggest, the pedagogical value of the system—its ability to structure the thinking of computer security officers—persisted despite questions about the accuracy of data and the resulting measures of risk. Yet, as we will discuss, the pedagogical value of risk assessment was often overlooked.

Rapidly Changing Threats

Although risk analysis might help with training, its pedagogical value was limited by rapidly changing threats that tended to make any lessons rapidly obsolete. As early as 1977, RAND researchers argued that the “hard part” of risk analysis “is in the identification and analysis of the full range of vulnerabilities and threats that might occur in systems.”61 They pointed out that existing analyses focused on a limited set of threats—the physical security of computer centers and operating systems—and that they should be expanded to include hardware vulnerabilities, ways that operating systems could be exploited even if they functioned properly, and vulnerabilities in the procedures at computer centers, such as crash recovery, file backup, and the authorization of new computer users.46 Indeed, in the early 1970s, Courtney’s model of risk included only confidentiality and integrity; availability was added later.

By the mid-1980s, new threats were emerging even more rapidly. In the late 1970s, most risk management models were focused on insider threats; by the mid-1980s, risk management models began to incorporate the threat of hackers who could penetrate insecure computer networks. In 1985 Adolph Cecula, the information security administrator at US Geological Survey who had participated in the 1977 computer security audit and evaluation workshop, noted that “a risk analysis conducted more than a few years ago would not have taken into
consideration today’s criminal computer hackers.”62

High Costs

The difficulty of anticipating future threats was exacerbated by the slow and labor-intensive process of risk assessment. For information system managers, this translated into high costs. In 1986 the Congressional Office of Technology Assessment published a study of federal information technology management that reported “increasing frustration with [formal] risk analyses.”63 Agency officials complained that risk analyses “have frequently been complex, expensive, and oriented toward physical or technical security measures for large scale computing centers, at the expense of simpler, cheaper, commonsense strategies.”40 Ironically, the primary goal of risk assessment—to secure computer systems at the lowest possible cost—had become its primary liability, as risk assessments themselves became tremendously expensive and time-consuming.
Whither Risk Assessment?

Despite widely acknowledged shortcomings, formal risk assessment became a principal means of holding agencies accountable for computer security. Between 1976 and 1990, the GAO issued at least 27 reports or statements on computer security that criticized agencies for not using risk assessment. Agency officials objected to the risk assessment requirement. For example, in an article for Government Computer News, Cecula outlined “two opposed viewpoints on the value of risk analysis.”62

One group, typically those who are removed from day-to-day work in information security, believe wholeheartedly in the value and necessity of risk analysis. The other group consists of the security officers who have conducted risk analysis for their own organizations. For these people, struggling for years to perform risk analysis, the removal of the risk analysis requirement is likely to be welcomed with shouts of joy.64
Cecula highlighted three problems with formal risk analysis. First and foremost, “the process is time-consuming and costly.”62 At the US Geological Survey, with over 200 minicomputers, 1,500 microcomputers, and 8,000 computer users, a formal risk analysis would cost more than $6 million. Cecula also felt that “the results are questionable,” recalling a
case in which “we ended up picking numbers randomly for probability of occurrence and amount of loss” because there was no good information.62 Third, the need to predict potential risks in advance made risk analysis perpetually out of date. Cecula argued that the only value of formal risk analysis was in “the academic environment” and that “Once learned, it is never used again.”62 Cecula asserted that there were many alternatives to formal risk analysis that would ensure adequate security. For example, he developed baseline security requirements, such as lists of safeguards that he had concluded should be on all systems in his organization. Additionally, site managers were required to fill out a six-page questionnaire that listed possible security safeguards, asked whether each safeguard had been implemented, and if not, whether the manager was prepared to accept the additional risk. If managers were not, they were instructed to identify corrective actions. Cecula also used periodic audits to ensure compliance. Cecula argued that these qualitative methods were superior to formal, quantitative risk analysis, concluding that “It is about time organizations with computers stopped wasting untold millions of dollars studying the same risks, threats, and vulnerabilities over and over again. It is time to declare risk analysis dead.”65 But risk analysis was far from dead. The 1987 Computer Security Act required that all federal agencies with sensitive information prepare security plans that were commensurate with the risks associated with loss, disclosure, or misuse of sensitive data and that they transmit those plans to NBS and the National Security Agency.66 OMB guidance on the act required that agencies specifically report their implementation of 18 security controls, including risk assessment.67 As this suggests, risk assessment became more than a method of determining what security controls to use; by the mid-1980s, it was also seen as a control in its own right. There is some merit in this approach; as we have seen, the process of conducting an assessment could increase security awareness. However, because risk assessment ultimately depended on subjective judgments of threats, vulnerabilities, and consequences, it could also be used to justify inaction by determining that the risks were too small to merit the cost of mitigation. Evidence suggests that many federal agencies performed risk analysis merely to comply with regulations, rather than out of an interest in increasing security awareness or actually
mitigating risks. A 1989 GAO survey of how agencies determined risks to their systems found that of 60 agencies, 37 (62 percent) used “formal risk analysis prepared to comply specifically with OMB Circular(s),” and only 6 (10 percent) “performed formal risk analysis independent of the requirements.”68 Furthermore, in 1990 the GAO concluded that the 1987 Computer Security Act had minimal impact because “most agency officials viewed the [security] plans as reporting requirements, rather than as management tools.”69 As this suggests, regulatory requirements for formal risk assessment did not necessarily change the way that agencies conducted their business. Organizations could effectively treat risk assessment as a black box without gaining much from the process itself.

Experimenters’ Regress and the Search for a Turnkey Solution

Ironically, regulators and many researchers responded to complaints about risk analysis by trying to “automate” risk analysis, thereby burying it more deeply in a black box. By the mid-1980s, the desire to reduce the workload associated with risk analysis made automation (that is, software) a key requirement for new risk management models. At the 1985 workshop on federal information systems risk analysis, Stuart Katzke, chief of the NBS Institute for Computer Sciences and Technology’s Computer Security Division, outlined nine desirable qualities of new risk management methods. First and foremost was that a method be an “expert system,” meaning that it “has computer security ‘smarts’ built into the tool.”70 This was followed by a desire for a system that was “fully automated,” meaning that it would maintain a “data base of system description as well as safeguard cost benefit alternatives” and that it be able to “execute on small systems (e.g. PC’s, micros).”70 Ironically, the last desirable quality on the list of nine items was that a risk management method be “consistent with [the] conceptual model.”70

By the end of the 1980s, some researchers acknowledged the utility of qualitative risk assessment methods, such as checklists and questionnaires. Nonetheless, the preference for quantitative methods, and a focus on the final output of risk analysis, was reflected in efforts to quantify the relative effectiveness of various methods. Indeed, the search for a standard model of computer security risk management was partly driven by a desire to quantify the relative advantages and disadvantages of different methods. In 1989, Hoffman
noted that “computer security risk assessors and most computer security risk assessment packages have their own dogma” and tended to avoid “measuring the effectiveness of any specific one.”71 He felt that a standard model was important so “that we have metrics to measure how well we are doing.”71 Hoffman was the primary supervisor for a master’s thesis published the following year at the Naval Postgraduate School, which established a “Comparative Evaluation Method for Risk Management Methodologies and Tools.”54 The thesis developed seven criteria for evaluating risk management methods (see the sidebar) and used them to evaluate four methods for risk analysis: quantitative risk analysis, checklists, scenario planning, and questionnaires.

Unfortunately, efforts to evaluate criteria such as validity faced a conundrum: without knowing the real risks, it was impossible to know the accuracy of any risk analysis method. The effort to evaluate risk assessment confronted something akin to what sociologists of science have termed “experimenter’s regress,” a situation in which the validity of the experimental tool (in this case, risk analysis) depends upon an unknown outcome.72 To measure the validity of the risk analysis model required a set of test cases in which the actual risks were truly known. Similarly, to evaluate the effectiveness of risk management would require knowing how much the costs of security breaches decreased for some set of organizations that implemented risk management and comparing that with the costs of security breaches for a set of organizations that had not. However, this data did not exist. Not only had the actuarial tables envisioned in the early 1970s never materialized, but technology was changing so quickly that few practitioners expected threats and vulnerabilities to remain fixed.

Instead, the risk management model evaluators judged validity by three proxy criteria: relevancy, scope, and practicality.54 These measures referred to different aspects of how the analysts interacted with the models, not how the models actually reflected real risk. The evaluators developed similar proxies for each of the seven suitability criteria (see the sidebar). For each method evaluated, they assessed whether the method satisfied the proxy criteria and used the average value to assess how well the method satisfied each of the seven suitability criteria.
Sidebar: Suitability Criteria

The following seven criteria for evaluating risk management methods are excerpted from a 1990 master’s thesis published at the Naval Postgraduate School.1

Consistency: Given a particular system configuration, results obtained from independent analysis will not significantly differ.

Usability: The effort necessary to learn, operate, prepare input, and interpret output is generally worth the results obtained.

Adaptability: The structure of the method or tool can be applied to a variety of computer system configurations (and the inputs can be easily updated as they periodically change).

Feasibility: The required data is available and can be economically gathered.

Completeness: Consideration of all relevant relationships and elements of risk management is given.

Validity: The results of the process represent the real phenomenon.

Credibility: The output is believable and has merit.

Reference

1. W. Garrabrants and A. Ellis, “CERTS: A Comparative Evaluation Method for Risk Management Methodologies and Tools,” master’s thesis, Naval Postgraduate School, 1990, p. 18.
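A minimal Python sketch may make the proxy-based scoring described before the sidebar more concrete. The proxy questions and yes/no answers below are hypothetical stand-ins (only relevancy, scope, and practicality as proxies for validity come from the thesis as described in the text), and the simple averaging rule follows the description above rather than the thesis's actual worksheets.

# Each suitability criterion is judged through proxy questions about how analysts interact
# with a method; the criterion's score is the average of its proxy answers (1 = satisfied).

# Hypothetical proxies and answers for one method, say, checklists.
proxy_answers = {
    "validity": {"relevancy": 1, "scope": 0, "practicality": 1},
    "usability": {"easy to learn": 1, "output easy to interpret": 1},
    "feasibility": {"data available": 1, "economical to gather": 0},
}

def criterion_scores(answers):
    """Average the proxy answers for each suitability criterion."""
    return {criterion: sum(proxies.values()) / len(proxies)
            for criterion, proxies in answers.items()}

for criterion, score in criterion_scores(proxy_answers).items():
    print(f"{criterion:12s} satisfied to degree {score:.2f}")

Note that nothing in this scoring captures what an analyst learned while answering the questions, which is the gap the article identifies.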
Thus, by the 1980s, computer security risk researchers were developing increasingly sophisticated methods of assessing risk assessment itself.

The Most Valuable, and Neglected, Part of Risk Analysis: Learning

Significantly, even practitioners who saw relatively little value in quantitative risk assessment acknowledged the qualitative value of “automated risk analysis.” For example, despite criticizing quantitative risk analysis, Cecula acknowledged two advantages associated with qualitative aspects of automation. First, “it is more difficult to forget some of the information” because the “model will remind you of what is missing.” Second was “the volume of output an automated model is capable of producing.”62 In an article featured alongside Cecula’s in Government Computer News, Hoffman highlighted similar advantages in automated methods. He argued that while “risk-analysis packages do not take the place of a trained analyst in putting safeguards in place, they can be cost-effective in reducing drudgery and freeing up analysts for efforts more productive than routine computation.”73 Furthermore, he pointed out that the software could “be useful in providing standardized reporting at all organizational levels,” and thereby “could propagate risk analysis and security awareness throughout an organization—an acknowledged key element in the success or failure of a security program.”74

Cecula and Hoffman’s comments reiterate one of the widely recognized reasons that metrics are prized: automation can improve efficiency. However, they also point to a less widely recognized value of formal risk assessment: it
could improve learning, making it “more difficult to forget some of the information”75 and propagating “risk analysis and security awareness throughout an organization.”76

The value of learning was often overlooked by risk management researchers in the late 1980s. For example, efforts to assess risk assessment methods privileged quantification and outputs, neglecting the qualitative value of the internal learning process. Nowhere in the list of risk assessment method suitability criteria was a description of what users learned in the process of using the risk management model. Instead, the criteria focused on improving results and minimizing the work required to obtain the final output.

Similarly, the value of learning was often neglected by organizations that were primarily interested in improving efficiency. Many organizations responded to the onerous requirement for risk assessments by looking for the easiest possible answer. For these users, reducing the workload associated with risk analysis also meant eliminating the most valuable part of risk analysis—the process of learning about and assessing one’s systems. Hoffman discovered this problem after he started a company based on the software package RiskCalc. Unlike SECURATE, RiskCalc did not use fuzzy metrics because Hoffman “wanted to give an economic evaluation … to show if you do this … you will save this much money.”77 Although RiskCalc worked as designed, Hoffman concluded that it was not the best solution:

I got out of the business because after a while I decided that it wasn’t selling enough the way
it was written. People really wanted something that you could just use turnkey, turn the crank. I decided, at least at that time and probably still, that one size does not fit all. People just wanted to turn the crank and have something tell them what to do, but you can’t really do that with any integrity. You had to have enough knowledge, or if you didn’t, then you really needed to buy consulting with it and they didn’t want to pay for the consulting.77
In other words, companies wanted risk analysis solutions that eliminated the time-consuming process of learning about their systems, not just the tedium of calculation.
Conclusion

Quantitative risk analysis was initially envisioned as a straightforward actuarial exercise, but it came to be understood as a difficult process of envisioning changing threats, vulnerabilities, and consequences. Many practitioners opted to use qualitative alternatives to formal, quantitative risk assessment. Although regulators and risk management researchers sometimes acknowledged the validity of such alternatives, they continued to privilege the formal and quantitative. In 1990, the GAO was still taking agencies to task for not implementing “formal risk analyses.”78

Let’s return to our original question: in what ways has risk analysis been valuable, and to whom? As we have seen, regulators valued the output of risk analysis as a means of exercising administrative control and improving efficiency in the 1970s and 1980s. But if practitioners derived any value from formal risk analysis, it was primarily the learning gained from the internal process of measuring risk rather than the output measure of risk, which was often outpaced by fast-evolving threats. Nonetheless, because that learning could be resource intensive, many companies sought to outsource or automate risk analysis, thereby undermining its greatest value. Automated methods had real value for practitioners seeking a comprehensive understanding of security, but many users instead sought to automate a process that ultimately reduced to human judgment.

As suggested earlier in this article, other metrics have confronted similar pitfalls. For example, trustworthiness measurements proved impossible to automate, but efforts at measuring trustworthiness nonetheless fueled research and learning. Conversely, although the automation of intrusion-detection metrics proved
more useful, such metrics detected only known attacks and provided little in the way of insight or learning. Nonetheless, a focus on automation and the associated goals of efficient control continues to drive contemporary cybersecurity policy. Cyberscope, an automated tool for federal agencies to report on specific computer security metrics, illustrates the continued focus on measurements rather than measuring, on outputs rather than the process of learning. A similar fixation on outputs characterizes debate about the use of cybersecurity risk assessment to regulate private sector companies that own critical infrastructure. Critics argue that risk assessment is flawed because business leaders have short-term economic incentives to dismiss the threat of a cyberattack, and therefore their output estimate of risk is too low.12 However valid this critique may be, it continues to focus attention on the outputs of risk assessment—whether they are objective—rather than focusing on the process of measuring risk and its contribution to learning.

Rather than disputing whether business leaders take a realistic view of risk assessment, we might ask the prior question: how is risk assessment conducted within organizations? Does it contribute to individual or organizational learning, or is it more focused on “turnkey” solutions? In short, is risk assessment about the final measurement or the learning that comes from the process of measuring? If organizations are to gain the learning that can come from risk assessment, they will need to recognize that “the notion of security is fundamentally one of judgment rather than measurement.”7
References and Notes

1. White House, “Executive Order: Improving Critical Infrastructure Cybersecurity,” 2013; www.whitehouse.gov/the-press-office/2013/02/12/executive-order-improving-critical-infrastructure-cybersecurity.
2. Organization for Economic Cooperation and Development, “Cybersecurity: Managing Risks for Greater Opportunities,” OECD Insights blog, 29 Nov. 2012; http://oecdinsights.org/2012/11/29/cybersecurity-managing-risks-for-greater-opportunities/.
3. Department of Energy, “Electricity Subsector Cybersecurity Risk Management Process,” Government Printing Office, 2012.
4. US-CERT, “Cybersecurity Questions for CEOs,” https://www.us-cert.gov/sites/default/files/publications/DHS-Cybersecurity-Questions-for-CEOs.pdf.
5. European Union Network and Information Security Agency, "Risk Management — ENISA," 2015; https://www.enisa.europa.eu/activities/risk-management.
6. J. Miller, "Agencies Must Use Cyberscope for FISMA Reports," Federal News Radio, 15 Sept. 2011.
7. R.S. Gaines and N.Z. Shapiro, "Some Security Principles and Their Application to Computer Security," ACM SIGOPS Operating Systems Rev., vol. 12, no. 3, 1978, p. 19.
8. W. Jansen, "Directions in Security Metrics Research (NISTIR 7564)," Nat'l Inst. of Standards and Technology, 2009.
9. S. Stolfo, S.M. Bellovin, and D. Evans, "Measuring Security," IEEE Security and Privacy, vol. 9, no. 3, 2011, pp. 60–65.
10. D. MacKenzie, Mechanizing Proof: Computing, Risk, and Trust, MIT Press, 2001.
11. W. Jackson, "CyberScope Falls Flat on Improving IT Security, Feds Say," Government Computer News, 21 Sept. 2012; http://gcn.com/articles/2012/09/21/cyberscope-continuous-monitoring-it-security-datapoint.aspx.
12. R. Langner and P. Pederson, "Bound to Fail: Why Cybersecurity Risk Cannot Simply Be Managed Away," whitepaper, Brookings Institution, 2013.
13. T. Porter, Trust in Numbers, Princeton Univ. Press, 1996.
14. This quote can reportedly be found in Popular Lectures and Addresses, vol. 1, "Electrical Units of Measurement," 3 May 1883. My source is http://zapatopi.net/kelvin/quotes/.
15. S.M. Bellovin, "On the Brittleness of Software and the Infeasibility of Security Metrics," IEEE Security and Privacy, vol. 4, no. 4, 2006, p. 96.
16. Mathematicians have argued for a similar view of mathematical proofs; see MacKenzie, Mechanizing Proof.
17. W. Ware, "Security Controls for Computer Systems: Report of the Defense Science Board Task Force on Computer Security," Office of the Director of Defense Research and Eng., 1970.
18. For an overview of the 1974 Privacy Act, see www.justice.gov/opcl/overview-privacy-act-1974-2012-edition.
19. S.K. Reed and D.K. Branstad, eds., "Controlled Accessibility Workshop Report: A Report of the NBS/ACM Workshop on Controlled Accessibility," Government Printing Office, 1974, p. 7.
20. Reed and Branstad, "Controlled Accessibility Workshop Report," p. 64.
21. Reed and Branstad, "Controlled Accessibility Workshop Report."
22. Z.G. Ruthberg and R.G. McKenzie, eds., Audit and Evaluation of Computer Security, Proc. NBS Invitational Workshop, held at Miami Beach, Florida, March 22–24, 1977, Government Printing Office, 1977, p. 256.
23. Z.G. Ruthberg, ed., Audit and Evaluation of Computer Security II: System Vulnerabilities and Controls, Proc. NBS Invitational Workshop, held at Miami Beach, Florida, November 28–30, 1978, Government Printing Office, 1978, section 8, p. 4.
24. H.E. Petersen and R. Turn, "System Implications of Information Privacy," Proc. AFIPS Spring Joint Computer Conf., 1967, p. 293.
25. R. Bace, Intrusion Detection, MacMillan Technical Publishing, 2000.
26. J. Yost, "The March of IDES: A History of the Intrusion Detection Expert System," to be published in IEEE Annals of the History of Computing, 2015.
27. Reed and Branstad, "Controlled Accessibility Workshop Report," p. 19.
28. Reed and Branstad, "Controlled Accessibility Workshop Report," p. 21.
29. Ruthberg and McKenzie, Audit and Evaluation of Computer Security, section 8, p. 7.
30. Ruthberg, Audit and Evaluation of Computer Security II, section 8, p. 4.
31. J. Yost, "A History of Computer Security Standards," The History of Information Security: A Comprehensive Handbook, K. De Leeuw and J.A. Bergstra, eds., Elsevier, 2007, pp. 595–621.
32. M. Schaefer, "If A1 Is the Answer, What Was the Question? An Edgy Naïf's Retrospective on Promulgating the Trusted Computer Systems Evaluation Criteria," Proc. 20th Ann. Computer Security Applications Conf., 2004, pp. 204–228.
33. Schaefer, "If A1 Is the Answer, What Was the Question?" p. 217.
34. M. Schaefer, interview with R. Slayton, 2 Dec. 2014.
35. Schaefer, "If A1 Is the Answer, What Was the Question?" p. 224.
36. W.H. Ware, "Security and Privacy in Computer Systems," RAND Corp., 1967, pp. iii–iv.
37. Ware, "Security and Privacy in Computer Systems," p. 16.
38. Petersen and Turn, "System Implications of Information Privacy," pp. 291–300.
39. Ware, "Security and Privacy in Computer Systems."
40. R. Turn and N.Z. Shapiro, "Privacy and Security in Databank Systems—Measures of Effectiveness, Costs, and Protector-Intruder Interactions," Proc. AFIPS Fall Joint Computer Conf., part I, 1972, pp. 435–444.
41. Turn and Shapiro, "Privacy and Security in Databank Systems," p. 442.
42. General Accounting Office, "Automated Systems Security: Federal Agencies Should Strengthen Safeguards Over Personal and Other Sensitive Data," Government Printing Office, 1979.
43. National Bureau of Standards, "Computer Security Guidelines for Implementing the Privacy Act of 1974," Government Printing Office, 1975, p. 9.
44. National Bureau of Standards, "Guidelines for Automatic Data Processing Risk Analysis; Federal Information Processing Standards Publication 65," Government Printing Office, 1979.
45. NBS, "Guidelines for Automatic Data Processing Risk Analysis," p. 9.
46. R.H. Courtney, "Security Risk Assessment in Electronic Data Processing Systems," Proc. AFIPS Nat'l Computer Conf., 1977, p. 97.
47. GAO, "Automated Systems Security," p. 29.
48. General Accounting Office, "Federal Information Systems Remain Highly Vulnerable to Fraudulent, Wasteful, and Abusive Practices," Government Printing Office, 1982.
49. GAO, "Federal Information Systems Remain Highly Vulnerable to Fraudulent, Wasteful, and Abusive Practices," p. 28.
50. S. Glaseman, R. Turn, and R.S. Gaines, "Problem Areas in Computer Security Assessment," Proc. AFIPS Nat'l Computer Conf., 1977, p. 108.
51. Ruthberg, Audit and Evaluation of Computer Security II, section 4, p. 3.
52. S. Katzke, "A Government Perspective on Computer Security Risk Management," Proc. Computer Security Risk Management Model Builders Workshop, 1988, p. 13.
53. Katzke, "A Government Perspective on Computer Security Risk Management," pp. 2–20.
54. W. Garrabrants and A. Ellis, "CERTS: A Comparative Evaluation Method for Risk Management Methodologies and Tools," master's thesis, Naval Postgraduate School, 1990.
55. N. Lewis, "Using Binary Schemas to Model Risk Analysis," Proc. Computer Security Risk Management Model Builders Workshop, 1988, pp. 35–48.
56. L.J. Hoffman, "Risk Analysis and Computer Security: Towards a Theory at Last," Computers and Security, vol. 8, no. 1, 1989, pp. 23–24.
57. G. Alexander, "Computer Security," Mosaic, vol. 9, no. 4, 1978, pp. 2–10.
58. L.A. Zadeh, "The Concept of a Linguistic Variable and its Application to Approximate Reasoning—I," Information Sciences, vol. 8, no. 3, Jan. 1975, pp. 199–249.
59. D. Clements, E.H. Michelman, and L.J. Hoffman, "SECURATE: Security Evaluation and Analysis Using Fuzzy Metrics," Proc. AFIPS Nat'l Computer Conf., 1978, pp. 531–540.
60. Clements, Michelman, and Hoffman, "SECURATE," p. 533.
61. Courtney, "Security Risk Assessment in Electronic Data Processing Systems," p. 108.
62. A. Cecula, "Consider Alternatives to Formal Risk Analysis," Government Computer News, 1985, p. 60.
63. Office of Technology Assessment, "Federal Government Information Technology: Management, Security, and Congressional Oversight," Government Printing Office, 1986, p. 75.
64. Cecula, "Consider Alternatives to Formal Risk Analysis," p. 59.
65. Cecula, "Consider Alternatives to Formal Risk Analysis," p. 61.
66. Computer Security Act of 1987, Public Law No. 100-235, US Statutes at Large, 1987, p. 47.
67. General Accounting Office, "Computer Security: Government Wide Planning Process Had Limited Impact," Government Printing Office, 1990.
68. General Accounting Office, "Computer Security: Compliance With Security Plan Requirements of the Computer Security Act," Government Printing Office, 1989, pp. 6, 16.
69. GAO, "Computer Security: Government Wide Planning Process Had Limited Impact," p. 3.
70. Katzke, "A Government Perspective on Computer Security Risk Management," p. 15.
71. Hoffman, "Risk Analysis and Computer Security," p. 23.
72. H. Collins, Changing Order: Replication and Induction in Scientific Practice, SAGE, 1985.
73. L. Hoffman, "PC Software for Risk Analysis Proves Effective," Government Computer News, Sept. 1985, p. 58.
74. Hoffman, "PC Software for Risk Analysis Proves Effective," p. 58.
75. Cecula, "Consider Alternatives to Formal Risk Analysis," p. 60.
76. Hoffman, "PC Software for Risk Analysis Proves Effective," p. 56.
77. L. Hoffman, interview with R. Slayton, 1 July 2014.
78. General Accounting Office, "Financial Markets: Tighter Security Needed," Government Printing Office, 1990, p. 10.
Rebecca Slayton is an assistant professor in the Science & Technology Studies Department and the Judith Reppy Peace and Conflict Studies Institute, both at Cornell University. Her research examines how new fields of expertise emerge and how expert arguments become influential, focusing on quantification and risk management. Slayton has a PhD in physical chemistry from Harvard University. Contact her at
[email protected].