Copyright 2000. Published in the Proceedings of the 18th Annual International Communications Forecasting Conference (ICFC-2000), Seattle, WA, USA, September 2000. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from David Loomis, ICFC-2000 Program Chair, Email: [email protected]; telephone/fax: 309-438-7979/5228; hardcopy/postal mail: Illinois State University, Campus Box 4200, Normal IL 61761 USA.

Predicting Internet Attacks: On Developing An Effective Measurement Methodology*

William Yurcik1 and David Loomis2
Illinois State University
{wjyurci,dloomis}@ilstu.edu

Alexander D. Korzyk, Sr.3
University of Idaho
[email protected]

Abstract: Although recent Internet attacks have been widely publicized, the scope and frequency of Internet attacks over time have not been well documented. To our knowledge, this is the first paper to explicitly state and attempt to address some of the problems in developing an effective measurement methodology with the ultimate goal of creating an ability to predict Internet attacks. We discuss recent Internet attacks and summarize the current state of Internet security. We also examine the closely related issue of damage cost estimation as an important tool for organizations to calculate the appropriate level of investment in protecting against Internet attacks. As part of an effective measurement methodology we define metrics for the scope and frequency of Internet attacks.

* Supported in part by grants from the John Deere Corporation and State Farm Insurance.
1. Author for correspondence; Assistant Professor, Department of Applied Computer Science; additional contact information: telephone/fax (309) 438-8016/5113; hardcopy (postal) mail: 202 Old Union, Campus Box 5150, Normal IL 61790 USA.
2. Assistant Professor, Department of Economics.
3. Assistant Professor, Department of Business.

1.0 Introduction

The number of Internet attacks being investigated by the U.S. Federal Bureau of Investigation (FBI) doubled from 547 in 1998 to 1,154 in 1999. FBI Director Louis Freeh stated before a March 2000 Senate Subcommittee hearing on cybercrime: "In short, even though we have markedly improved our capabilities to fight cyber-intrusions, the problem is growing even faster."[4] Some of the increase can be attributed to more attacks being reported than previously because of the initiation of the INFRAGARD Partnerships for Protection; the FBI is the lead Federal agency selected for INFRAGARD by the Presidential Commission on Critical Infrastructure Protection. Freeh identified threats including disgruntled employees, crackers or virus writers seeking thrills or financial gain, criminal groups, and terrorist organizations. Other threats include activists, extremists, domestic terrorists, vendors/contractors, and teenage vandals. Speaking publicly about similar Internet threats nine months earlier, George Tenet, Director of the U.S. Central Intelligence Agency (CIA), revealed for the first time that several foreign governments have Internet attack organizations actively targeting the U.S.4

Although Internet attacks have been widely publicized, the scope and frequency of Internet attacks over time have not been well documented. To our knowledge, this is the first paper to explicitly state and attempt to address some of the problems in developing an effective measurement methodology with the ultimate goal of creating an ability to predict Internet attacks. The ability to predict the scope, frequency, and type of Internet attacks is a critical element in beginning to improve the survivability of the information systems upon which humans depend for services.

The Defense Advanced Research Projects Agency (DARPA) sponsored a workshop in June 1999 on the topic "Countering Cyber-Terrorism".[14] Since the large-scale investigation called "Operation Solar Sunrise," much greater paranoia has been evident among Internet users about being used as a launch point. The one question that dominated this workshop was: "Why hasn't it (a large-scale Internet attack) happened yet?" The consensus was that a large Internet attack was imminent and may even have already occurred (without our knowing it yet). Although the Y2K transition went smoothly, events (which we document later in this paper) eventually occurred that proved this educated guess correct.

The logical next question is what could have been done to prevent such Internet attacks. One potential answer explored in this paper is that by analyzing the scope and frequency of Internet attacks we may be able to predict, and thus better defend against, attacks by providing early warnings. Protecting against all possible Internet attack scenarios, an unbounded problem, is not possible, but using knowledge of past attacks does provide a capability beyond reactive defense: the ability to predict future attacks such that they can be repelled or deterred.

The remainder of this paper is organized as follows: Section 2 describes recent events that confirmed our suspicions at the June 1999 DARPA workshop. Section 3 summarizes the state of Internet security as one systemic environmental variable contributing to Internet attacks. Section 4 examines an easier problem than prediction: assessing damage cost after an Internet attack has occurred. Section 5 makes an original attempt to develop an effective measurement methodology that could be used to predict Internet attacks. We close with conclusions and future research directions in Section 6.

4. Testimony before the Senate Committee on Governmental Affairs, June 1998. Some of the nations targeting the U.S. include China, Cuba, Iran, Russia, and North Korea.

2.0 Recent Events

In February 2000, a series of coordinated distributed denial-of-service (DDoS) attacks was launched against major U.S. corporations.5 Not only did the attacks prevent 5 of the 10 most popular Internet websites from serving their customers, but the attacks also slowed down the entire Internet: Keynote Systems measured a 60% degradation in the performance of 40 other websites that had not been attacked.[11] While the consensus analysis of these Internet attacks is that they were technologically unsophisticated (executing a downloadable program and using thousands of personal computers, unknown to their owners, as zombies to magnify the attack intensity), the ease with which major corporate operations can be disrupted, and the lack of defenses to prevent such attacks from recurring in the future, are particularly disturbing.

In May 2000, the ILOVEYOU virus shut down the e-mail systems of governments and businesses around the world for up to several days.6 The phrase ILOVEYOU is a teaser for an e-mail attachment which, if opened, would infect computers running Microsoft operating systems. Within 24 hours of the earliest infections, a new variation of the virus spread, identical to the first except that the attachment was renamed VERYFUNNY.[1] This global infection was reminiscent of the MELISSA virus Internet attack on e-mail systems in March 1999.

5. The companies, in the order they were attacked, are: Yahoo! (2/7/00), eBay (2/8/00), Buy.com (2/8/00), Amazon.com (2/8/00), CNN (2/8/00), ZDNet.com (2/9/00), E*Trade (2/9/00), Excite At Home (2/9/00), and Datek (2/9/00).
6. Examples of organizations which shut down their e-mail systems include: Ford Motor Company, Microsoft Corporation, Estee Lauder Companies, the U.S. Army, the U.S. Navy, and half the companies in the entire nation of Estonia.[1] USA Today also reported the Pentagon, U.S. Congress, Bear Stearns, the British Parliament, the Danish Government, and Nomura Securities in Japan. ["Tainted Love" USA Today, Cover Story Section B, May 5, 2000]

3.0 Systemic Problems with Internet Security

Although the February 2000 DDoS attack on major corporate websites did not cause critical or lasting damage, this single event has taken the threat to E-commerce and critical infrastructure security/survivability out of the realm of the abstract and into the realm of the real. But are the February 2000 DDoS and May 2000 ILOVEYOU attacks aberrations? If history is any indication, the information technology community is incapable of constructing networked information systems that can consistently prevent successful attacks. Even an organization where network and computer security is paramount, such as the U.S. Department of Defense (DoD), has continuously demonstrated how susceptible it is to attack.7[12] There are many factors that contribute to the poor state of security on the Internet; here we will focus on what we believe are the top four:

First, most software today is tested for security certification (assurance testing) by the penetrate-and-patch approach: when someone finds an exploitable security "hole", the software manufacturer issues a "patch". The intellectual complexity associated with software design, coding, and testing virtually ensures the presence of "bugs" in software that can be exploited by attackers.8 This approach has proved inadequate, since after-the-fact security leaves bug vulnerabilities open until they are exploited. However, software manufacturers find this approach economically attractive: why invest time and money in assurance testing if consumers are not willing to pay a premium?

Second, software vulnerabilities will be discovered in commercial-off-the-shelf (COTS) and public domain products whose internal structures are widely available and hence can be analyzed by attackers. The ability to discover software vulnerabilities and the homogeneity of software from the same manufacturer (Microsoft, Sun Microsystems) make it possible for a single-attack strategy to have a wide-ranging and devastating impact. It is widely estimated that 90% of the world's computers are on the same hardware platform (Intel) with the same software (Microsoft MS-DOS/Windows).

Third, poor system administration practices leave a system susceptible to vulnerabilities even after corresponding patches have been issued by software manufacturers. An otherwise secure system that does not apply a publicly announced patch from the software manufacturer is instantly an open target for exploitation, since the announcement itself advertises the vulnerability. Poor maintenance practices can neutralize existing security safeguards.

Lastly, even if special software from one manufacturer is "certified/assured", it is executed on the same system with "non-assured" software such that new vulnerabilities are introduced into the system as a whole.9 When COTS products are incorporated as components of a larger system, the entire system becomes vulnerable to attacks based on the exploitable COTS bugs, again making it possible for a single-attack strategy to have a wide-ranging and devastating impact.

The natural escalation of attacks is demonstrating that no practical systems can be built that are invulnerable to attack. Despite industry's efforts (or in spite of them), there can be no assurance that any system will not be compromised. The ubiquity of the Internet has only exacerbated this problem by enabling remote, anonymous, and most especially automated attacks.

7. Specifically, network security refers to protection of traffic as it traverses a network end-to-end or between nodes, and computer security refers to operating system authentication of users (identity) and file system privileges. However, since networks consist of special-purpose computers (switches, routers, hubs) and computers rely on network/bus connectivity for peripherals and distributed services (e-mail, www, file system), we will not continue to make a distinction between network and computer security for the purposes of this paper.
8. The existence of covert channel access or "trap doors" into hardware (Intel) and software (Microsoft) is well documented, with the most popular example being "Easter Eggs" found in software.
9. Developing systems via software integration and reuse rather than customized design and coding is a cornerstone of modern software engineering (Software Engineering Institute at Carnegie Mellon University).

4.0 Damage Cost Estimates

In order to plan the appropriate level of investment in protecting information assets, damage cost estimates are an important tool for organizations ranging from individual companies to, at the other end of the spectrum, sovereign nations budgeting investments.[15] For instance, an organization would not want to spend more on implementing protection schemes for Internet attacks than the estimated cost in damage from being a victim of such an attack. Quantitatively, the optimal level of investment in protection schemes for Internet attacks is the expected value of the estimated damage cost multiplied by the probability of a successful attack over a specified time period.[7]

The most widely quoted study on financial losses attributed to computer crime in the U.S. is a joint study released in March 2000 by the Computer Security Institute (CSI) and the FBI.[13] This study estimates the damage cost of computer crime to be $10 Billion for the calendar year 1999. The FBI definition of financial losses, or cost of the attack, includes recovery costs, lost business, legal expenses, and wiretaps and traces. In order for the FBI to investigate the crime, the financial losses must be more than $5,000. The study is based on the responses of 643 major corporations and public agencies (representing 10% of the U.S. economy) who reported actual computer crime losses of $266 Million in 1999, more than double the amount reported in 1998. The survey used statistics reported by the FBI indicating that only 25% of those surveyed actually reported a computer crime to law enforcement in 1999, down from 32% in 1998. The survey methodology then extrapolated the total damage cost estimate from this 10% of the economy using the non-reporting multiplication factor of four (thus multiplying $266M by four and then by ten). The damage loss is largely from financial fraud ($56M), theft of proprietary information, and sabotage ($27M), with 59% of respondents identifying computer attacks originating from the Internet as opposed to 38% who detected computer attacks from internal organization computers. Of the 74% of respondents who reported losses due to security attacks, only 42% were willing or able to quantify the damage cost resulting from such attacks.

The CSI/FBI survey has perhaps raised more questions than it has answered. The first series of questions surrounds the media reporting of the survey results itself: two articles incorrectly reported, in large bold headlines, that the damage cost estimate of the report is $266M.[2,3] The second question is how damage costs were quantified. Explicit costs include the cost in time and materials to repair and replace damaged computer software. More difficult costs to quantify include shutting down an e-mail server for a period of time, disconnecting from the Internet for a period of time, or delayed/lost transactions. Many small to medium enterprises made a conscious decision not to have a continued operations plan in the event of an attack because of fiscal constraints; they were willing to accept the risk and associated losses related to an attack. Had they used a decision-making model such as the Secure Electronic Commerce Decision-Making Model, they might not have accepted the risk.[8] Even the Federal Government fails to provide sufficient security when dollars are immaterial to the value of the information, because no one is held accountable for information security.[9]
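To make the arithmetic above concrete, the following minimal sketch (in Python) computes both the CSI/FBI extrapolation and the expected-loss ceiling on protection spending from [7]. The per-incident damage and attack probability in the second half are illustrative assumptions, not figures from the survey.

```python
# Sketch of the damage-cost arithmetic discussed above (illustrative only).

# CSI/FBI extrapolation: respondents represent ~10% of the U.S. economy,
# and FBI statistics suggest only ~1 in 4 crimes is reported.
reported_losses = 266e6        # losses reported by the 643 respondents (1999)
non_reporting_factor = 4       # only ~25% of crimes reported to law enforcement
economy_scaling = 10           # respondents cover ~10% of the U.S. economy
total_estimate = reported_losses * non_reporting_factor * economy_scaling
print(f"extrapolated 1999 damage cost: ${total_estimate / 1e9:.1f}B")

# Expected-value rule from [7]: optimal protection spending is
# (estimated damage cost) x (probability of a successful attack).
estimated_damage = 5e6         # hypothetical damage for one organization
attack_probability = 0.10      # assumed annual probability of a successful attack
ceiling = estimated_damage * attack_probability
print(f"upper bound on annual protection spending: ${ceiling:,.0f}")
```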

To examine a specific case, the aggregate economic impact of the February 2000 DDoS attacks has been estimated to be as high as $1.2B and as low as minimal, due to the replaceability of Internet services. The $1.2B figure was calculated by the Yankee Group and includes the expense of security upgrades, consulting fees, and losses in market capitalization from tumbling stock prices.10[6] On the other hand, Internet transactions lost during the DDoS outage were replaceable by telephone or fax, or could be delayed to a later time. Gary Richter, terrorism expert at Sandia National Laboratory, states: "For an hour or two people couldn't do their day trades or buy some books. Well, five years ago you couldn't do those things anyway." Individual victim dot-com organizations have a different view, of course, since their entire revenue stream depends on transparent Internet access.11

Another source of damage cost estimates is the court trials of suspected attackers. For example, in the December 1999 court trial of David L. Smith, the Melissa virus was estimated to have caused $80M in damage.[1] Simson Garfinkel, a computer security expert, worries that law enforcement, companies, and the press regularly inflate damage cost estimates: "If you are a law enforcement organization it makes the crime look more serious. If you are a company, it allows you to get more money from insurance. And if you are the press, it makes the story more sensational."[6] Jennifer Granick, a defense attorney specializing in computer crime, recommends that a standard formula be devised to calculate the severity of attacks: "We are putting the question of how serious a societal harm has been done in the hands of these companies without any real check or balance. If we are going to punish somebody by sending them to prison, we don't want to send them to prison for something that is just a nuisance."[6] In one particularly egregious example, prosecutors valued a 12-page paper stolen from BellSouth at $79,449 based on the $31,000 cost of the computer it was typed on, $6,000 for the printer, and $6,200 for a project manager. At the trial it was revealed that the identical document was sold to the public for $13 a copy.[6]

In the near future, information system disruptions due to Internet attacks will become less of an inconvenience and more of a threat to economic and defense security. In fact, a Presidential Commission12 was formed in 1996 to study protecting critical national infrastructures, including the Internet, and the U.S. DoD is studying Internet attack rules of engagement, including Internet attacks as an act of war. Chris Christiansen, an analyst at IDC, succinctly summarized a consensus sentiment about the opportunity costs resulting from recent Internet attacks: "I think last year 1999 was very evident that security was a reactive Band-Aid remedy for a core infrastructure problem. It's not a matter of how much money they (E-Commerce companies) lost but how much more money they would have made."[2]

10. Including security prevention in the cost estimate led one noted Internet security expert to state: "That's like saying you don't need to get a lock for your front door unless somebody breaks in."[6]
11. Dot-com revenue streams are currently focused on gaining market share for future investment value at this stage of E-Commerce evolution. As for the revenue stream lost by E-Commerce sites during the DDoS attacks, some have figured (tongue-in-cheek) that since dot-coms lose money on every sale they make, they may actually come out ahead by not making any sales![10]
12. Executive Order 13010 on July 15, 1996 formed the Presidential Commission on Critical Infrastructure Protection (PCCIP). The PCCIP issued its report, Critical Foundations: Protecting America's Infrastructures, on October 13, 1997, and has now disbanded after forming new government organizations to continue to address this issue. For more information see [15].

5.0 Developing a Forecasting Methodology

The authors are aware of only one prior paper on the topic of forecasting Internet security attacks, which evaluated various forecasting models for how accurately they could predict Internet attacks.[7] Prior to this research, Internet attack prediction was primarily anecdotal.

In general, there have been three waves of Internet attacks. The first wave was physical attacks against computer operating systems. The second wave of attacks targeted vulnerabilities in application software, including denial-of-service. The third wave of attacks, which started in 2000, targets human behavior as exhibited in Internet fraud schemes including identity theft, stock manipulation, and false information.

Following the Internet Worm attack in November 1988, the Defense Advanced Research Projects Agency (DARPA) established the Computer Emergency Response Team/Coordination Center (CERT/CC) at Carnegie Mellon University's Software Engineering Institute. CERT/CC has since served as a high-level organization responsible for Internet-related incident response for the U.S. Department of Defense. Other CERTs have formed with specific constituencies, but CERT/CC remains the most well-known.13

Table 1 shows CERT/CC statistics that have recently been released on the Internet.14 An incident refers to a group of distinctive Internet attacks that can be distinguished from other attacks.[5] Each incident reported may involve one site or hundreds (or even thousands) of sites, and some incidents have ongoing activity for long periods of time (i.e., more than a year). The value given is an annual aggregate for all attacks. A vulnerability refers to a documented weakness in a system which allows unauthorized action.[5]

Number of Incidents Reported
Year:       1988  1989  1990  1991  1992  1993  1994  1995  1996  1997  1998  1999
Incidents:     6   132   252   406   773  1334  2340  2412  2573  2134  3734  9859

Number of Vulnerabilities Reported
Year:             1995  1996  1997  1998  1999  Jan-June 2000
Vulnerabilities:   171   345   311   262   417            442

Table 1: CERT/CC Statistics - tracking reported incidents and vulnerabilities

There are intervening variables that affect the validity of these CERT/CC statistics:

• the Internet is growing at unprecedented rates, such that incident/vulnerability increases may result directly from this growth15
• the mix of machines (hardware and software) on the Internet is constantly changing, such that extrapolation is questionable
• multiple CERTs and other security investments (investigators, security devices) have evolved, such that there are now efficient mechanisms to detect and report incidents that otherwise would not have been detected or reported in the past16
• reporting rates for security incidents are notoriously low17
• false alarm rates are statistically significant18
• poor record keeping by CERTs19
• multiple CERTs with overlapping jurisdictions
• the changing nature of Internet attacks (manual to automated script attacks)
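Even with these caveats, the Table 1 incident counts can illustrate how such data might feed a prediction methodology. The following minimal sketch (in Python) fits a log-linear trend to the reported incidents and naively extrapolates one year ahead; given the intervening variables listed above, the output should be read as an illustration of method rather than as a forecast.

```python
import math

# CERT/CC reported incidents, transcribed from Table 1.
years = list(range(1988, 2000))
incidents = [6, 132, 252, 406, 773, 1334, 2340, 2412, 2573, 2134, 3734, 9859]

# Least-squares fit of ln(incidents) = a + b*year, i.e., incidents are
# modeled as growing by a constant factor e**b per year.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(math.log(v) for v in incidents) / n
b = (sum((x - mean_x) * (math.log(v) - mean_y) for x, v in zip(years, incidents))
     / sum((x - mean_x) ** 2 for x in years))
a = mean_y - b * mean_x

print(f"implied annual growth factor: {math.exp(b):.2f}")
print(f"naive extrapolation for 2000: {math.exp(a + b * 2000):,.0f} incidents")
```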

A common taxonomy is needed to improve Internet attack statistics. A taxonomy systematically groups a body of knowledge into categories based on shared characteristics or traits. Beyond organization, a taxonomy would facilitate sharing between CERTs and provide a standard terminology, so that the same problem is not addressed under different names.

13. Other representative CERTs include CERTs for NASA, Australia, Stanford University, Apple Computer, and the SAIC Corporation.
14. http://www.cert.org/stats/cert_stats.html
15. The estimated growth of the Internet is tracked by several sources, including Matrix News (www.mids.com) and Mark Lotter's annual Internet Domain Survey (www.nw.com/zone/WWW); these surveys report annual growth rates ranging from 100% to as high as 380%.
16. US DoD Operation Solar Sunrise reported a detection rate of 4% in 1995.
17. In a joint CSI/FBI study, the FBI stated incident reporting rates of 32% in 1998 and 25% in 1999.
18. CERT/CC reports false alarm rates from 1989 to 1995 that vary from 4% to 8%.[5] Roy Maxion/Carnegie Mellon University has presented false positive rates for Intrusion Detection Systems (IDS) that vary up to 100% based on characteristics of the dataset.
19. For example, record keeping at CERT/CC was informal from 1988-1992, consisting primarily of e-mail.[5]
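To make the idea of a taxonomy concrete, the fragment below sketches what machine-readable attack categories might look like. The category names are illustrative inventions for this sketch and are not drawn from any of the proposed taxonomies cited below; the example also shows why mutual exclusivity is hard to achieve, a problem discussed next.

```python
from enum import Enum

# Hypothetical top-level categories for classifying reported attacks.
class AttackCategory(Enum):
    PROBE_OR_SCAN = "probe/scan"
    DENIAL_OF_SERVICE = "denial-of-service"
    MALICIOUS_CODE = "virus/worm/trojan"
    UNAUTHORIZED_ACCESS = "unauthorized access"
    FRAUD = "fraud/social engineering"

# A real attack often fits several categories at once: the February 2000
# DDoS attacks involved both compromised (zombie) hosts and flooding.
ddos_feb_2000 = {AttackCategory.UNAUTHORIZED_ACCESS, AttackCategory.DENIAL_OF_SERVICE}
print(sorted(c.value for c in ddos_feb_2000))
```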

There have been many proposed taxonomies for security attacks.20 Unfortunately, there is no consensus on the use of a single taxonomy, actual attacks can be classified into multiple categories within a given taxonomy (the categories are not mutually exclusive), and new attacks continue to appear which do not fit cleanly into proposed taxonomies.

Two other sources of Internet attack statistics are: (1) the Federal Computer Incident Response Capability (FedCIRC), which serves as the central coordination and analysis facility dealing with computer security issues affecting the civilian agencies and departments of the Federal Government, and (2) the ATTRITION hacker website. FedCIRC lists monthly/annual incident frequencies from October 1998 to the present based on its own taxonomy, which includes ports probed/scanned and tool identification.21 ATTRITION tracks website defacement statistics from January 1999 to the present (daily/monthly/annually) using its own taxonomy, which includes categories for operating systems, organizations, countries, and top-level domain suffix.22

20. Internet attack taxonomies have been proposed by (in alphabetical order): Matt Bishop, FedCIRC, Fred Cohen, John Howard, Internet Security Systems (ISS) Inc., Cynthia Irvine, Ivan Krsul, Carl Landwehr, U. Lindqvist/E. Jonsson, Dan Lough, Peter Mell, MITRE Corporation, Peter Neumann, Eugene Spafford, among others.
21. www2.fedcirc.gov/stats/
22. www.attrition.org/mirror/attrition/stats.html

Available metrics that could be collected to develop an Internet attack prediction methodology include:

• Type of Internet attack based on a common taxonomy
• Number/percentage of Internet attack frequency growth
• Number/percentage of detected/undetected Internet attacks
• Number/percentage of successful/unsuccessful Internet attacks
• Number/percentage of reported/unreported Internet attacks
• Number/percentage of automated Internet attacks
• Types of automated Internet attacks (tools used/ports probed)
• Stationarity of Internet attacks over time (day/day of week/month/year/season)
• Duration of Internet attacks (day/month/year)
• Number of hosts involved in Internet attacks
• Damage cost estimate for distinct Internet attacks
• Geographical location (physical and virtual mapping) of Internet attacks
• Targeted systems (location/organization/vendor/operating system)
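As one way of operationalizing this list, the sketch below shows a hypothetical per-incident record that a CERT or corporate security team might populate. Every field name here is an assumption invented for illustration, not a schema from CERT/CC, FedCIRC, or ATTRITION.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class AttackIncident:
    """Hypothetical record capturing the metrics listed above."""
    taxonomy_category: str                    # type of attack per a common taxonomy
    start: datetime                           # onset, for stationarity analysis
    end: Optional[datetime] = None            # incidents may run for months
    detected: bool = True                     # detected vs. undetected
    successful: bool = False                  # successful vs. unsuccessful
    reported: bool = False                    # reported vs. unreported
    automated: bool = False                   # manual vs. automated script attack
    tools_used: List[str] = field(default_factory=list)
    ports_probed: List[int] = field(default_factory=list)
    host_count: int = 1                       # number of hosts involved
    damage_cost_usd: Optional[float] = None   # damage cost estimate, if known
    source_location: Optional[str] = None     # physical/virtual origin
    target_profile: Optional[str] = None      # location/organization/vendor/OS
```

Aggregating such records by month or by taxonomy category would yield exactly the frequency and stationarity series a forecasting model needs.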

A popular metric currently used by corporations and the military, but not tracked by CERTs or included in any taxonomy, is a zombie metric: the number of "zombies" on monitored systems. A zombie is an unattached process (a program with corresponding memory) which was introduced via a system vulnerability. These zombies can be used to launch indirect distributed denial-of-service attacks. Institutions and corporations which track zombies have sounded alert warnings when there is a dramatic increase in zombie activity. Some organizations use "honeypots", which are machines specially designed to attract attacks while being closely monitored.

While these metrics may provide an accurate view of the current state of Internet attack activity, it is not clear that future attacks will be incremental in nature, slightly modifying or building on prior attacks; future attacks may be chaotic, deviating drastically from prior known attacks. The use of automated and distributed tools makes discontinuous steps in Internet attacks possible.
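The zombie metric described above lends itself to a simple baseline alert. The sketch below flags a "dramatic increase" relative to a trailing average; the window size, threshold multiplier, and sample data are illustrative assumptions, and, per the caveat just noted, such a detector would catch only incremental changes, not chaotic new attack patterns.

```python
from collections import deque

def zombie_alerts(daily_counts, window=7, multiplier=3.0):
    """Yield (day, count) whenever a day's zombie count exceeds
    `multiplier` times the trailing `window`-day average."""
    history = deque(maxlen=window)
    for day, count in enumerate(daily_counts):
        if len(history) == window:
            baseline = sum(history) / window
            if baseline > 0 and count > multiplier * baseline:
                yield day, count
        history.append(count)

# Made-up monitoring data: the sudden jump on day 7 triggers the alert.
counts = [2, 3, 2, 4, 3, 2, 3, 21, 4, 3]
for day, count in zombie_alerts(counts):
    print(f"day {day}: {count} zombies vs. trailing baseline -- raise alert")
```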

6.0 Conclusions

This paper makes a first attempt at developing an effective measurement methodology for predicting Internet attacks. The value of such a predictive capability is that advance knowledge would enable protective measures to repel or deter individual attacks or series of attacks. In Section 5 we address the complex methodological issues that make Internet attack prediction a difficult problem while also proposing specific directions for future research.

Ultimately, the value of working toward being able to predict Internet attacks will be determined by the new security and network knowledge made available (possibly relevant to other areas) and the opportunity cost of being attacked without advance knowledge. To do other than work toward such a capability is to accept Internet attacks as part of the overhead of being linked to the Internet, in much the same way that having a telephone in your house brings with it unwanted annoyances from telemarketers. However, the distinction to be drawn in this telephone analogy is that an Internet attack has the potential to be much more than a minor annoyance: in the near future there will be significant Internet-based individual services, in addition to the over-riding national economic and defense security implications.

Technology cannot stop Internet attacks from occurring, and external threats continue to grow. The only prudent strategy is awareness and caution; developing a predictive capability is an important part of this approach. We can only hope society has not waited too long to prevent the Internet from becoming another victim of the "Tragedy of the Commons".

7.0 References

[1] Ted Bridis, "Poisonous Message's Potential To Destroy Files Prompted Vast E-mail Shutdown," Wall Street Journal, May 5, 2000, pp. B1, B4.
[2] Brian Fonseca, "Cyber-attacks Hit Coffers for $260 Million in 1999," Infoworld, March 27, 2000, p. 12.
[3] Ann Harrison, "Survey: Cybercrime Cost Firms $266M in '99," ComputerWorld, March 27, 2000, p. 28.
[4] "FBI Chief Says Cyber Attacks Doubled in a Year," HPCwire, Article 17365, March 2000.
[5] John Howard, An Analysis of Security Incidents on the Internet, 1989-1995, Ph.D. Dissertation, Department of Engineering and Public Policy, Carnegie Mellon University, 1997.
[6] Brendan Koerner, "Finally an Arrest: But How Much Damage Do Cyberattacks Cause?" U.S. News and World Report, May 1, 2000, p. 48.
[7] Alexander Korzyk Sr., "A Forecasting Model For Internet Security Attacks," National Information System Security Conference, 1998.
[8] Alexander Korzyk Sr., "A Decision-Making Model for Securely Conducting Electronic Commerce," National Information System Security Conference, 1998.
[9] Alexander Korzyk Sr. and A. James Wynne, "Who Should Really Manage Information Security in the Federal Government?," National Information System Security Conference, 1997.
[10] Steven Levy and Brad Stone, "Hunting the Hackers," Newsweek, February 21, 2000, pp. 39-44.
[11] Mathew G. Nelson, "Attacks on E-Businesses Trigger Security Concerns," InformationWeek, February 14, 2000, pp. 28-30.
[12] Peter Neumann, Computer-Related Risks, ACM Press, New York, 1995.
[13] Charles Pillar, "Cyber-Crime Loss at Firms Doubles to $10 Billion," Los Angeles Times, March 22, 2000.
[14] William Yurcik, David Tipper, and Gary Sharp, "National Critical Infrastructure Protection Plans," completed manuscript under peer review (contact authors), April 2000.
[15] William Yurcik, "Adaptive Multi-Layer Network Survivability: A Unified Framework for Countering Cyber-Terrorism," Workshop on Countering Cyber-Terrorism, Information Sciences Institute (ISI), University of Southern California (USC), June 1999.
