An Experimental Evaluation to Determine if Port Scans are Precursors to an Attack

Susmit Panjwani, Stephanie Tan, Keith M. Jarrin, and Michel Cukier
Center for Risk and Reliability, Department of Mechanical Engineering
University of Maryland, College Park, MD 20742
{spanjwan, sjt, kmj, mcukier}@umd.edu

Abstract
This paper describes an experimental approach to determine the correlation between port scans and attacks. Discussions in the security community often state that port scans should be considered precursors to an attack. However, very few studies have been conducted to quantify the validity of this hypothesis. In this paper, attack data were collected using a test-bed dedicated to monitoring attackers. The data collected consist of port scans, ICMP scans, vulnerability scans, successful attacks and management traffic. Two experiments were performed to validate the hypothesis that port scans and vulnerability scans can be characterized by the number of packets observed per connection. Customized scripts were then developed to filter the collected data and group them on the basis of scans and attacks between a source and destination IP address pair. The correlation of the filtered data groups was then assessed. The analyzed data consist of forty-eight days of data collection for two target computers on a heavily utilized subnet.

1. Introduction
(This research was supported by NSF CAREER award 0237493.)

Traditional approaches to security validation have not been quantitative, focusing instead on specifying procedures that should be followed during the design of a system (e.g., the Security Evaluation Criteria [1, 2]). When quantitative methods have been used, they have typically been either very formal (e.g., [3]), aiming to prove that certain security properties hold given a specified set of assumptions, or quite informal, using a team of experts (often called a "red team" [4]) that is skilled in the practice of security and has complete knowledge of the system being studied.

An alternative approach, which has received much less attention from the security community, has been to try to quantify the behavior of an attacker and its impact on the ability of a system to provide certain security-related properties. Goseva-Popstojanova et al. [5] presented a model of an intrusion tolerant system using a state transition diagram. Jha et al. [6] combined modeling, the use of formal logic, and a Bayesian analysis approach. Ortalo et al. [7] modeled the system as a privilege graph (similar to the scenario graph in [6]). Combining the privilege graph with assumptions on the attacker behavior, the authors obtained an attack state graph. A variable called the effort was introduced, characterizing the ease or difficulty of reaching a given privilege level. The mean effort to security failure (METF) was then estimated based on experimental data.

This paper focuses on assessing one specific attacker behavior based on experimental data. More specifically, this paper estimates the correlation between a port scan and an attack. Such estimates can then be used in one of the previously mentioned models for quantifying security. This paper describes a test-bed using target computers for monitoring attackers and collecting attack data. Various scripts were developed to filter and analyze the data. We describe each step taken to filter the original traffic (which consisted of management and malicious activity) into the various scans and attacks directed at the target computers. The classification into scans and attacks is based on the number of packets per connection. Two experiments were conducted to demonstrate the relevance of this classification. The correlation between scans and attacks was studied by first focusing on the scans and identifying whether attacks followed them,

then analyzing the attacks and identifying those that had been preceded by a scan.
The paper is organized as follows. Section 2 precisely defines the different types of scans considered in this paper: port scans, ICMP scans, and vulnerability scans. Section 3 reviews the literature on port scan characterization and filtering. Section 4 describes the test-bed used for the experiment. Section 5 presents two additional experiments that we conducted to better characterize two types of scans. Section 6 is dedicated to the filtering and analysis of the collected data. Finally, the results obtained on the correlation between scans and attacks are presented in Section 7.

2. Definitions
In the introduction, we used port scan as a general term that stands for "checking for an exploitable target." We now present a more precise definition of the different scans discussed in this paper. A scan can be defined as a reconnaissance technique in which the attacker tries to determine something about the target host (i.e., Is the host alive? What services are running on the host? What is the host's operating system? Is there an exploitable vulnerability?). The different types of scans considered in this paper are ICMP scans, port scans and vulnerability scans.
An ICMP scan is used to check the availability of a target machine and to fingerprint the target operating system. It uses the information provided by ICMP control messages and provides less information than a port or vulnerability scan.
A port scan is used to check for open or closed ports and for used or unused services. The services may or may not have a vulnerability that the attacker could exploit. How port scans establish a connection, terminate a connection, and exchange messages in the event of a successful or unsuccessful connection or termination is described in [8]. Moreover, the implementation of the TCP/IP stack is operating system dependent, so the attacker can use this information to fingerprint the target operating system. These scanning techniques make use of the TCP protocol suite to gather information. Since the information leaked by such protocol-dependent scanning methods can be defined in terms of the types of packets and the flow of packets in a connection, algorithms for detecting these scans can be developed. Such algorithms are more difficult to develop when the scanning method checks for specific vulnerabilities within specific services or applications. This type of scanning method is known as a vulnerability scan. More precisely, a vulnerability scan can be used to fingerprint the presence or absence of an

exploitable vulnerability. Since vulnerabilities differ, most techniques to fingerprint them will also differ, making it difficult to develop a generic algorithm to detect them.
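As a concrete illustration of what a simple port scan does (this example is ours, not the authors'), the short Python sketch below attempts TCP connections to a few ports on a placeholder address; a completed three-way handshake marks the port as open. The target address and port list are hypothetical, and such probes should only be run against hosts one is authorized to test.

import socket

TARGET = "192.0.2.10"          # placeholder address (TEST-NET-1), not from the paper
PORTS = [21, 23, 25, 80, 443]  # arbitrary example ports

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 when the three-way handshake completes (port open)
        result = s.connect_ex((TARGET, port))
        print(f"{TARGET}:{port} -> {'open' if result == 0 else 'closed/filtered'}")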

3. Related Work
Most of the related work on port scans focuses on characterizing port scans and filtering them from network traffic. Intrusion detection systems like Snort [9] and BRO [10] detect port scans based on models that look for IP addresses making more than X connections in Y seconds. NSM [11] uses a similar algorithm that checks for source IP addresses making connections to more than fifteen other hosts. GrIDS [12] creates activity graphs representing aggregated network activity, in which nodes represent hosts and edges represent connections or traffic; GrIDS then analyzes these graphs to detect large-scale attacks. Anomaly-based approaches are also used to detect port scans. Emerald [13] constructs statistical profiles for subjects and matches a short-term weighted profile of a subject's behavior against a long-term weighted profile. Port scans, for example, are detected as a sudden increase in SYN traffic from a single source IP address. Ertoz et al. [14] use a heuristic-based approach to develop algorithms and techniques that determine the source IP addresses of port scans with fewer false positives.
Apart from port scan detection, some work exists on characterizing the distribution of the types of scans seen in traffic generated on a production sub-network [15] and in radiated traffic (i.e., traffic sent to non-existent IP addresses) [16]. Lee et al. [15] also classified port scans based on the number of hosts and ports scanned by the source host in a given amount of time. Some case studies analyzing attacks and scans are available in [17, 18]. Each of these case studies focused on one exploit, detailing how the vulnerability was probed and then exploited by describing the different steps of the attack. These examples are based on forensics to understand, step by step, how the target computers were compromised.
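To make the threshold rule concrete, the following Python sketch flags a source IP address that opens more than a configurable number of connections within a time window. The thresholds, event format, and variable names are ours for illustration; this is not the detection logic of Snort, BRO, or NSM.

from collections import defaultdict, deque

MAX_CONN = 20   # "X" connections ...
WINDOW = 5.0    # ... within "Y" seconds (both values are illustrative)

recent = defaultdict(deque)   # src_ip -> timestamps of recent connection attempts

def observe(timestamp, src_ip):
    """Record one connection attempt; return True if src_ip exceeds the threshold."""
    q = recent[src_ip]
    q.append(timestamp)
    while q and timestamp - q[0] > WINDOW:
        q.popleft()
    return len(q) > MAX_CONN

# Synthetic usage: one source opening 30 connections in under 3 seconds.
for i in range(30):
    if observe(i * 0.1, "198.51.100.7"):
        print(f"possible port scan from 198.51.100.7 at t={i * 0.1:.1f}s")
        break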

4. Experimental Setup
The experimental test-bed is based on target computers built for the sole purpose of being attacked. Other computers closely monitor these target computers without the attackers noticing that they are being observed. This architecture thus allows: 1) collecting data at the host, application and network levels, 2) filtering user traffic from attacker traffic, 3) correlating data collected at the host and network levels, and 4) controlling the target computers from an isolated monitoring network.

Figure 1. Test-bed Architecture

Since there are no real users on the target computers, there is no concern associated with filtering user traffic from attack traffic. This approach also avoids having to store huge amounts of data, since no user traffic is collected. This architecture is similar to the one developed by the Honeynet project [18]. However, we are not using the same tools for data collection and monitoring (e.g., Ethereal instead of tcpdump because of its better graphical and analysis capabilities, correlation scripts instead of the Honeynet management console, the use of Snort for the sole purpose of event alerting, and tracelog instead of Sebek [19] on Windows). Moreover, a data filtering module, an image control module and a data correlation module have been developed. The experimental results provided in this paper are based on the test-bed shown in Figure 1, consisting of two target computers. Functional descriptions of the components in Figure 1 are as follows:
• Access Control: This module restricts the propagation of attacks from the target computers by using a reverse firewall, in which outbound connections are monitored instead of inbound connections. Customized firewall scripts were developed to limit attackers from initiating outbound connections while allowing the data collection engine to send data securely to the management network. The scripts were developed on the IPTables firewall running on Linux RedHat 9.
• Data Collection: This module collects real-time data at the network, host and application levels. Data in tcpdump format are collected using Ethereal [20]. The data are uploaded every six hours to a database located on the test-bed, using MySQL version 4 as a buffer for temporary data storage, and then uploaded daily to a centralized database running Oracle 9i. Customized scripts were developed in Perl to parse the data into the Oracle database (a simplified sketch of this parsing step is given after this list).
• Event Logging: This sub-module aids data storage by collecting and storing system, application and security logs. Syslog data are collected using 'syslogd' running on Fedora Core 1. As for the data collected in the data collection module, these logs are uploaded every six hours to a MySQL version 4 database and then daily to a centralized Oracle 9i database.
• Event Alerting: This module alerts the research team about any attacker activity or system failure, allowing for an immediate administrative response. These alerts can also be used for forensic purposes to understand data and event interactions in sequence. We are using Swatch [21] to monitor syslog data and Snort [9] alerts in real time. Swatch runs on RedHat 9 and uses sendmail.
• Image Control: This module controls the deployment and maintenance of current versions of operating systems, applications and patches on the target machines. It also allows re-imaging the target computers in case of corruption. We are using Ghost Enterprise version 8 developed by Symantec [22].
• Data Filtering: This module filters the management traffic generated on the network out of the collected traffic, so that the resulting traffic consists only of malicious activity. The filtering is done at multiple stages in the data collection and analysis process and is detailed in Section 6.
• Data Correlation: This module analyzes the filtered data to produce the results given in Sections 6 and 7, using Perl and PL/SQL scripts on an Oracle database.
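The authors' parsing scripts were written in Perl and loaded data into MySQL and Oracle; since those scripts are not reproduced in the paper, the sketch below only illustrates the general idea in Python, using SQLite so it runs standalone. The CSV column names are an assumption about the capture export, not the paper's actual schema.

import csv
import sqlite3

def load_packets(csv_path, db_path="packets.db"):
    """Load packet summaries exported from the capture tool into a local database."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS packets
                   (ts REAL, src TEXT, dst TEXT, proto TEXT, length INTEGER)""")
    with open(csv_path, newline="") as f:
        # Assumed export columns: time, src, dst, protocol, length
        rows = [(float(r["time"]), r["src"], r["dst"], r["protocol"], int(r["length"]))
                for r in csv.DictReader(f)]
    con.executemany("INSERT INTO packets VALUES (?, ?, ?, ?, ?)", rows)
    con.commit()
    con.close()
    return len(rows)

# Example call (the file name is hypothetical):
# load_packets("ethereal_export.csv")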

5. Port Scans and Vulnerability Scans
As mentioned in Section 4, the data collected consist of management traffic and malicious activity, since no "normal" user was using either of the two target computers. The malicious traffic collected includes port scans, vulnerability scans and attacks. To improve the data analysis, we conducted two experiments to better characterize port scans and vulnerability scans.

5.1 Characterization of Port Scans
As can be seen from [8], three packets are sufficient to complete a TCP handshake and establish a connection, and information about open ports and services can be gathered using as few as two packets. To corroborate these specifications experimentally and to evaluate the distribution of port scans, we developed an experiment based on a well-known network scanner. More specifically, we used an isolated network consisting of two computers. On one computer, we ran the network scanner Nmap version 3.75 for Windows [23]. On the other computer, we recorded all the packets going through the network using the network protocol analyzer Ethereal version 0.10.7 [20]. We ran all the different basic types of port scans available in Nmap and used Ethereal to capture, measure, and group the connections based on the number of packets. The number of packets per connection and the number of connections associated with a given number of packets when running Nmap are provided in Figure 2. Note that no connection was observed for some packet counts (e.g., 5, 6, 10, 11). From Figure 2, 19,946 port scans (i.e., 99.76%) consist of two packets per connection, 19,961 (i.e., 99.83%) consist of three or fewer packets per connection, and 19,971 (i.e., 99.88%) consist of four or fewer packets per connection. These results show that five packets per connection can be used as a threshold to characterize port scans. This threshold is consistent with theory, since a full TCP handshake consists of three packets and a fourth packet allows for a possible reset.

No. Packets in Connection    No. of Connections    Percentage of Connections
2                            19,946                99.76
3                            15                    0.075
4                            10                    0.05
7                            2                     0.01
8                            4                     0.02
9                            10                    0.05
12                           1                     0.005
17                           1                     0.005
18                           1                     0.005
21                           1                     0.005
33                           3                     0.015
Total                        19,994                100

Figure 2. Nmap: Analysis of the Number of Packets per Connection
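For concreteness, the following Python sketch shows one way packet records exported from a capture could be grouped into connections and tallied into a packets-per-connection histogram like the one in Figure 2. The record format is assumed for illustration; the authors' actual processing used Ethereal exports and Perl scripts.

from collections import Counter

def packets_per_connection(records):
    """records: iterable of (src_ip, src_port, dst_ip, dst_port), one entry per packet."""
    per_conn = Counter()
    for src_ip, src_port, dst_ip, dst_port in records:
        # Treat both directions of the same endpoint pair as one connection.
        key = tuple(sorted([(src_ip, src_port), (dst_ip, dst_port)]))
        per_conn[key] += 1
    # Map "packets in connection" -> "number of connections", as in Figure 2.
    return Counter(per_conn.values())

# Synthetic example: two 2-packet connections and one 3-packet connection.
recs = [("a", 1, "b", 80), ("b", 80, "a", 1),
        ("a", 2, "b", 80), ("b", 80, "a", 2),
        ("a", 3, "b", 80), ("b", 80, "a", 3), ("a", 3, "b", 80)]
print(packets_per_connection(recs))   # Counter({2: 2, 3: 1})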

5.2 Characterization of Vulnerability Scans
Since the collected traffic data consist of attacks and vulnerability scans besides ICMP and port scans, we analyzed vulnerability scans to see if the number of packets per connection could also be used for their characterization. We therefore ran NeWT 2.1 [24], a Windows version of Nessus [25] developed by Tenable Network Security, with more than 4,000 plug-ins enabled in addition to the plug-ins characterized as "dangerous" (i.e., leading to DoS attacks), on the same isolated network as in Section 5.1, and recorded all packets going through the network using Ethereal version 0.10.7 [20]. Ethereal captured the packets and generated CSV files from grouped connections. The files were exported into a database and correlated by number of packets per connection. Figures 3a and 3b provide the number of packets per connection and the number of connections associated with a given number of packets when running NeWT without and with the port scan module. Note that no connection was observed for certain packet counts (e.g., 5, 14, 15). From Figures 3a and 3b, 0.053% (without the port scan module) and 0.0485% (with the port scan module) of the vulnerability scans consist of four or fewer packets per connection. This result shows that a threshold of five packets is relevant for differentiating port scans from vulnerability scans. However, note that 4,024 connections consisting of six packets (the difference between the runs with and without the port scan module) are port scans. This observation requires reassessing the use of five packets as a threshold for port scans. When analyzing these six-packet connections, we observed that they consisted of half reverse port scans performed three times. We developed a script to identify these port scans and counted them as special half reverse port scans with a connection of two packets.

No. Packets in Connection    No. of Connections    Percentage of Connections
2                            1                     0.004
3                            10                    0.041
4                            2                     0.008
6                            24,173                98.641
7                            40                    0.163
8                            60                    0.245
9                            71                    0.290
10                           9                     0.037
11                           8                     0.033
12                           124                   0.506
13                           1                     0.004
33                           2                     0.008
39                           1                     0.004
40                           1                     0.004
47                           1                     0.004
53                           2                     0.008
Total                        24,506                100

Figure 3a. Nessus: Analysis of the Number of Packets per Connection Without the Port Scans Module

No. Packets in Connection    No. of Connections    Percentage of Connections
2                            1                     0.0035
3                            11                    0.038
4                            2                     0.007
6                            28,197                98.815
7                            43                    0.151
8                            60                    0.210
9                            71                    0.249
10                           9                     0.031
11                           8                     0.028
12                           124                   0.434
13                           1                     0.003
33                           3                     0.010
39                           1                     0.0035
40                           1                     0.0035
47                           1                     0.0035
53                           2                     0.007
Total                        28,535                100

Figure 3b. Nessus: Analysis of the Number of Packets per Connection With the Port Scans Module

From Figures 3a and 3b, 99.91% (without the port scan module) and 99.92% (with the port scan module) of the vulnerability scans consist of connections having between six and twelve packets. Only 0.03% (with and without the port scan module) of the vulnerability scans consist of connections having more than twelve packets.

Based on these experimental results, and since an individual characterization of the vulnerability plug-ins would be tedious, we decided to characterize vulnerability scans as connections having between five and twelve packets (apart from the six-packet connections recognized as a half reverse port scan repeated three times, which are counted as special half reverse port scans). Connections with fewer than five packets are characterized as port scans. Connections with more than twelve packets are characterized as attacks.
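To restate the resulting decision rule compactly, here is a short Python sketch (the authors' scripts were written in Perl). The flag for the six-packet half reverse case is a placeholder for the authors' separate detection script, which we do not reproduce.

def classify_connection(packet_count, is_triple_half_reverse=False):
    """Restate the Section 5 thresholds; the boolean flag stands in for the
    authors' separate script that detects the six-packet half reverse case."""
    if packet_count == 6 and is_triple_half_reverse:
        return "port scan (half reverse repeated three times)"
    if packet_count < 5:
        return "port scan"
    if packet_count <= 12:
        return "vulnerability scan"
    return "attack"

print(classify_connection(2))         # port scan
print(classify_connection(8))         # vulnerability scan
print(classify_connection(40))        # attack
print(classify_connection(6, True))   # port scan (half reverse repeated three times)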

6. Data Filtering and Data Analysis
We collected data during forty-eight days on a test-bed consisting of two target computers deployed at the University of Maryland's Institute for Systems Research. The selected subnet is an unmonitored subnet in which IP addresses are dynamically assigned to users. Both target computers ran Windows 2000 and had the same services (i.e., IIS, FTP, Telnet, NetTime [26]) and the same vulnerabilities, maintained during the whole data collection. Twenty-five vulnerabilities from 2000 to 2004, shown in Figure 4, were selected to cover various services and different levels of criticality. Note that most UDP traffic is filtered at the gateway level on the chosen subnet by the campus security administrators, which is why the experiment focuses on ICMP and TCP traffic. Outbound traffic was limited to 10 TCP connections per hour, 15 ICMP connections per hour and 15 other connections per hour. Moreover, the target computers were re-imaged twice during the experiment.

6.1 Data Filtering
As mentioned in Section 4, the traffic data collected were filtered at multiple stages before being analyzed. Figure 5 indicates the different flows of traffic. Filtering starts at the data collection level through the way we set up the data collection engine. We use a machine acting in bridge mode with two network interfaces: one interface is connected to the firewall and the other to the network of target computers. Only external data coming from the Internet and destined to the target computers are routed over this bridge (i.e., attack traffic). We use a third management interface for sending the collected data to our management network, so that all communication to and from the management network is routed over this third interface and is thus separated from the attack traffic. Data are collected using the Ethereal [20] packet sniffer.

Year  Bulletin Number  Service                Criticality  Vulnerability Description
2004  MS04-012         RPC/DCOM               Critical     Race condition
2004  MS04-012         RPC/DCOM               Important    Input vulnerability
2004  MS04-012         RPC/DCOM               Low          Buffer overflow
2004  MS04-012         RPC/DCOM               Low          Input vulnerability
2004  MS04-011         LSA                    Critical     Buffer overflow
2004  MS04-011         LSA                    Moderate     Buffer overflow
2003  MS03-010         RPC End Point Mapper   Important    Vulnerability not clearly specified
2003  MS03-018         IIS                    Important    Memory vulnerability
2003  MS03-026         RPC Interface          Critical     Buffer overflow
2003  MS03-049         Workstation Service    Critical     Buffer overflow
2003  MS03-039         RPC                    Critical     Buffer overflow
2002  MS02-062         IIS                    Moderate     Memory vulnerability
2002  MS02-018         FTP                    Critical     Vulnerability not clearly specified
2002  MS02-018         HTTP                   Critical     Buffer overflow
2002  MS02-004         Telnet                 Moderate     Buffer overflow
2001  MS01-041         RPC                    Not rated    Input vulnerability
2001  MS01-044         IIS                    Not rated    Input vulnerability
2001  MS01-026         IIS                    Not rated    Vulnerability not clearly specified
2001  MS01-026         FTP                    Not rated    Memory vulnerability
2001  MS01-026         FTP                    Not rated    Vulnerability not clearly specified
2001  MS01-016         IIS WebDAV             Not rated    Input vulnerability
2001  MS01-014         IIS, Exchange          Not rated    Input vulnerability
2000  MS00-086         IIS                    Not rated    Vulnerability not clearly specified
2000  MS00-078         IIS                    Not rated    Input vulnerability
2000  MS00-057         IIS                    Not rated    Input vulnerability

Figure 4. List of Vulnerabilities Left on Both Target Computers

The collected data consisted of 908,963 packets of malicious and management activity: 1) attack traffic from the Internet and 2) management traffic such as spanning tree protocol (STP) traffic generated by the bridge, DNS resolutions and NTP queries. The data are parsed into a format that can be stored in a MySQL database. The data are then parsed on a protocol basis to filter out the rest of the management traffic. Moreover, traffic not directed to either target computer is filtered out of the dataset. The statistics on the number of connections and packets collected are shown in Figure 6.

Total Packets Captured                                              908,963
Distinct TCP and ICMP connections identified in filtered traffic     59,468
TCP connections going to target computer 1                           5,776
ICMP connections going to target computer 1                          7,203
TCP connections going to target computer 2                           7,274
ICMP connections going to target computer 2                          2,457

Figure 6. Summary of Collected Traffic

6.2 Data Analysis
To analyze the 22,710 collected connections going to the two target computers (i.e., attack traffic), we developed scripts to split the data into ICMP scans, port scans, vulnerability scans and attacks. ICMP scans were easily identified based on the protocol type.

Port scans, vulnerability scans and attacks were split using the conclusions of Section 5:
• Less than five packets in a connection is defined as a port scan,
• Between five and twelve packets in a connection is defined as a vulnerability scan (except that connections of six packets recognized as a half reverse port scan repeated three times are defined as a special case of half reverse port scan), and
• More than twelve packets in a connection is defined as an attack.
The goal of this experiment was to analyze the correlation between scans and attacks. Therefore, we only kept unique scans and attacks. For example, if multiple port scans were launched from one specific source IP address towards one of the target computers, we only recorded that this source IP address had launched at least one port scan, without recording the actual number of port scans. Similarly, if one source IP address launched several attacks (of the same type or of different types) against one of the target computers, we only recorded that at least one attack had been launched from that source IP address against the target computer. Figure 7 shows how the 22,710 connections of malicious activity break down into unique ICMP scans, port scans, vulnerability scans and attacks.
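A minimal Python sketch of this de-duplication step is given below; the record format is assumed, and the authors' actual scripts (in Perl and PL/SQL) are not reproduced.

from collections import defaultdict

def unique_activity(classified):
    """classified: iterable of (src_ip, dst_ip, category) for every connection.
    Returns, per (source, target) pair, the set of categories seen at least once."""
    seen = defaultdict(set)
    for src, dst, category in classified:
        seen[(src, dst)].add(category)
    return dict(seen)

conns = [("198.51.100.7", "203.0.113.21", "port scan"),
         ("198.51.100.7", "203.0.113.21", "port scan"),   # duplicate scans collapse
         ("198.51.100.7", "203.0.113.21", "attack")]
print(unique_activity(conns))
# One record for the pair, noting that both a port scan and an attack were seen.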

Figure 5. Flows of Traffic on the Test-bed

Malicious Activity     No. of Records    No. of Unique Records
ICMP Scans             9,660             3,007
Port Scans             8,432             779
Vulnerability Scans    2,583             1,657
Attacks                2,035             760
Total                  22,710            6,203

Figure 7. Distribution of Scans Leading to an Attack

6.3 Distribution of Port Scan Types
To better analyze the port scans that were collected, we attempted to characterize the different types of port scans. This characterization is based on common practice [27, 28] and the design of network port scanners like Nmap [29]. We propose to model a port scan through the state machine shown in Figure 8. Depending on the state in which the port scan terminates, port scans can be classified into five categories. These five categories are shown in Figure 9. Scripts in Perl were developed to parse the collected malicious activity. Apart from detecting the five types of scans, we also observed scans in which six packets were used. These scans were actually a half reverse scan performed three times. We already mentioned this issue when analyzing the vulnerability scans provided by Nessus in Section 5.2. The scripts we developed recognized these scans and counted each six-packet scan as a special case of a single half reverse scan. The results obtained after parsing forty-eight days of collected data on port scans are shown in Figure 10.

Figure 8. Representation of Port Scans

Scan Type       Connection Termination State
Full Open       State 4
Half Open       State 5
Full Reverse    State 6
Half Reverse    State 3
Incomplete      State 2

Figure 9. Classification of Port Scans
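Since the state machine of Figure 8 is not reproduced in this text, the short Python sketch below only encodes the Figure 9 mapping from a connection's termination state to a scan type; determining that termination state is assumed to be done elsewhere.

# Encoding of Figure 9: scan type by the state in which the connection terminates.
SCAN_TYPE_BY_STATE = {
    4: "Full Open",
    5: "Half Open",
    6: "Full Reverse",
    3: "Half Reverse",
    2: "Incomplete",
}

def classify_port_scan(terminal_state):
    # Scans ending in any other state are not covered by the model
    # (roughly 4% of the collected port scans, per Section 6.3).
    return SCAN_TYPE_BY_STATE.get(terminal_state, "unclassified")

print(classify_port_scan(3))   # Half Reverse
print(classify_port_scan(9))   # unclassified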

Scan Type       No. Detected    Percentage
Full Open       10              0.12
Half Open       355             4.38
Full Reverse    6               0.07
Half Reverse    7,667           94.66
Incomplete      62              0.77
Total           8,100           100

Figure 10. Distribution of Port Scans

Note first that not all port scans were covered by this model. Of the 8,432 port scans collected, 8,100 could be classified using the state machine model of Figure 8, representing 96.1% of the collected port scans. A significant majority of the scans (over 94%) are half reverse scans, and half open scans account for over 4%; 99% of these scans consist of connections that contain only two packets. The very large majority of the port scans thus correspond to connections that are never established (a full connection would significantly increase the visibility of the attacker and is therefore avoided by attackers). This result shows the relevance of using a test-bed like the one described in Section 4 to better understand the attacks actually being launched.

7. Experimental Results
We have now identified the ICMP scans, port scans, vulnerability scans and attacks in the collected traffic. The next step is to analyze the correlation between attacks and scans. We first considered all scans and, for each one, checked if an attack followed the scan. We then analyzed all the attacks and identified which scan(s), if any, preceded the attack.

7.1 Scans Followed by Attacks
For each scan and combination of scans (from a specific source IP address towards one of the target computers), we checked if at least one attack (from the same source IP address towards the same target computer) followed the scan(s). For each scan and combination of scans, the number of scans observed, the number of scans followed by an attack and the percentage of scans leading to an attack are presented in Figure 11. Focusing on individual scans, we observed from Figure 11 that almost none of the ICMP scans were followed by an attack. Moreover, only 4% of the port scans were followed by an attack. However, over 21% of the vulnerability scans were followed by an attack. These percentages are rather low, indicating that the detection of an ICMP/port/vulnerability scan by itself might be a poor indicator that an attack will follow.

Type of Scan                   No. Scans Observed    No. Scans Leading to Attack    Percentage Scans Leading to Attack
Port                           694                   28                             4.03
ICMP                           2,797                 1                              0.04
Vulnerability                  1,399                 296                            21.16
Port & ICMP                    11                    0                              0
Port & Vulnerability           59                    42                             71.19
ICMP & Vulnerability           184                   5                              2.72
Port & ICMP & Vulnerability    15                    7                              46.67

Figure 11. Distribution of Scans Leading to an Attack

Considering combinations of scans, the combination of an ICMP scan and a vulnerability scan leads to an attack in about 3% of the cases. In the case of a combination of all three scans, an attack followed in more than 46% of the cases. However, only a few cases of three scans originating from the same source IP address were observed (15), so this percentage should be interpreted with caution. The best indicator that an attack will follow was the combination of a port scan and a vulnerability scan: for over 71% of the port scan and vulnerability scan combinations, an attack followed. These results showed that the identification of port scans and vulnerability scans launched from a specific source IP address is a good indicator that an attack will follow from the same source IP address. In summary, when focusing on the scans, the main scans leading to an attack were: 1) a combination of port and vulnerability scans, 2) a combination of port, ICMP and vulnerability scans, 3) a vulnerability scan and 4) a port scan.
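The Section 7.1 question reduces to a simple aggregation over the unique per-pair records; the Python sketch below computes, for each combination of scan types seen from a (source, target) pair, the fraction of pairs that also produced an attack. It ignores the time ordering between scans and attacks for brevity and uses an assumed input format, so it only illustrates the idea, not the authors' scripts.

from collections import defaultdict

def scans_followed_by_attacks(activity_by_pair):
    """activity_by_pair: {(src, target): set of categories}, as in the earlier sketch.
    Time ordering between scans and attacks is ignored here for brevity."""
    observed = defaultdict(int)   # scan combination -> no. of (src, target) pairs
    attacked = defaultdict(int)   # scan combination -> pairs that also attacked
    for categories in activity_by_pair.values():
        scans = frozenset(c for c in categories if c != "attack")
        if not scans:
            continue              # direct attacks belong to the Section 7.2 view
        observed[scans] += 1
        if "attack" in categories:
            attacked[scans] += 1
    return {s: (observed[s], attacked[s], 100.0 * attacked[s] / observed[s])
            for s in observed}

example = {("a", "t1"): {"port scan", "vulnerability scan", "attack"},
           ("b", "t1"): {"port scan"},
           ("c", "t2"): {"ICMP scan", "attack"}}
for combo, (n, k, pct) in scans_followed_by_attacks(example).items():
    print(sorted(combo), n, k, f"{pct:.1f}%")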

7.2 Attacks Preceded by Scans
For each of the 760 attacks collected from different source IP addresses (if more than one attack was collected from a source IP address, we only counted one attack, since the goal of this experiment was to correlate attacks with scans and not to analyze the attacks themselves), we checked if any scan or combination of scans preceded the attack (from the same source IP address towards the same target computer). The number and percentage of direct attacks (i.e., attacks not preceded by any scan) and of attacks preceded by different scans and combinations of scans are provided in Figure 12. We observed from Figure 12 that more than 50% of the attacks were not preceded by a scan. This observation puts in perspective the previous results obtained when focusing on the scans. However, over 38% of the attacks were preceded by a vulnerability scan.

Port scans and combinations of port and vulnerability scans preceded 3-6% of the attacks. These experimental results show that the majority of the attacks were not preceded by any scan. When scans preceded attacks, the most frequent ones were: 1) a vulnerability scan, 2) a combination of port and vulnerability scans and 3) a port scan.

Type of Scan                   No. Attacks Preceded by a Scan    Percentage of Attacks
Port                           28                                3.68
ICMP                           1                                 0.13
Vulnerability                  296                               38.95
Port & ICMP                    0                                 0
Port & Vulnerability           42                                5.53
ICMP & Vulnerability           5                                 0.66
Port & ICMP & Vulnerability    7                                 0.92
None                           381                               50.13

Figure 12. Distribution of Scans Preceding an Attack

8. Conclusions
To evaluate the security of a computing system, the system weaknesses (i.e., vulnerabilities) need to be identified and the threat to the system needs to be assessed. This paper focused on the threat by trying to better understand common attacks launched against computer systems. More specifically, the paper analyzed the correlation between scans and attacks. This correlation is important for determining whether a scan can be used as a signal that an attack might follow. The paper tackles the issue by trying to experimentally assess the link between a scan and an attack. The experiment was based on a test-bed deployed at the University of Maryland dedicated to monitoring attackers and collecting data on attacks. The filtering and analysis of the collected data, consisting of management traffic and malicious activity over forty-eight days, were described. The number of packets per connection was used to separate port scans, vulnerability scans, and attacks. We used Nmap and Nessus to demonstrate the relevance of using the number of packets as a separator between scans and attacks. The experimental results showed that over 50% of the attacks were not preceded by a scan. The scans most frequently leading to an attack were vulnerability scans and combinations of port and vulnerability scans. Therefore, port scans combined with vulnerability scans might be a relevant indicator of a coming attack. However, based on the results of this experiment, port scans alone did not appear to be a good indicator of a future attack. Therefore, we can state that, based on the experiment conducted in this paper, port scans should not be considered as precursors to an attack.

The described experiment provides a first step toward answering the question of whether port scans are a good indicator of a future attack. The experiment can now be expanded to a longer data collection period, to target computers deployed at other locations, and to different sets of vulnerabilities left on the target computers.

Acknowledgments
The authors would like to thank the Institute for Systems Research and the Office for Information Technology for their support in implementing a test-bed for collecting attack data at the University of Maryland. In particular, we thank Michael Wilson and his team for the help, material, and room offered to conduct this project. We thank Gerry Sneeringer and his team for permitting the deployment of the test-bed. We also thank Melvin Fields and Dylan Hazelwood for providing some of the computers used in the test-bed.

References
[1] U.S. Department of Defense Standard, Department of Defense Trusted Computer System Evaluation Criteria ("Orange Book"), DOD 5200.28-STD, Library No. S225,711, Dec. 1985. http://www.radium.ncsc.mil/tpep/library/rainbow/5200.28STD.html
[2] ISO/IEC International Standards (IS) 15408-1:1999, 15408-2:1999, and 15408-3:1999, "Common Criteria for Information Technology Security Evaluation": Part 1: "Introduction and General Model," Part 2: "Security Functional Requirements," and Part 3: "Security Assurance Requirements," Version 2.1, August 1999 (CCIMB-99-031, CCIMB-99-032, and CCIMB-99-033). http://csrc.nist.gov/cc/ccv20/ccv2list.htm
[3] C. Landwehr, Formal Models for Computer Security, Computer Surveys, vol. 13, no. 3, Sept. 1981.
[4] J. Lowry, An initial foray into understanding adversary planning and courses of action, in Proc. DARPA Information Survivability Conference and Exposition II (DISCEX'01), pp. 123-133, 2001.
[5] K. Goseva-Popstojanova, F. Wang, R. Wang, F. Gong, K. Vaidyanathan, K. Trivedi, and B. Muthusamy, Characterizing Intrusion Tolerant Systems Using a State Transition Model, in Proc. DARPA Information Survivability Conference and Exposition II (DISCEX'01), 2001.
[6] S. Jha and J. M. Wing, Survivability Analysis of Networked Systems, in Proc. of the 23rd International Conference on Software Engineering (ICSE 2001), pp. 307-317, 2001.
[7] R. Ortalo, Y. Deswarte, and M. Kaaniche, Experimenting with quantitative evaluation tools for monitoring operational security, IEEE Transactions on Software Engineering, vol. 25, no. 5, pp. 633-650, Sept.-Oct. 1999.
[8] http://www.faqs.org/rfcs/rfc793.html
[9] http://www.snort.org/
[10] http://bro-ids.org/

[11] L. T. Heberlein, G. Dias, K. Levitt, B. Mukherjee, J. Wood, and D. Wolber, A network security monitor, in Proc. Symposium on Research in Security and Privacy, pp. 296-304, 1990.
[12] S. Staniford-Chen, S. Cheung, R. Crawford, M. Dilger, J. Frank, J. Hoagland, K. Levitt, C. Wee, R. Yip, and D. Zerkle, GrIDS - A Graph-Based Intrusion Detection System for Large Networks, in Proc. 19th National Information Systems Security Conference, 1996.
[13] http://www.sdl.sri.com/projects/emerald/
[14] L. Ertoz, E. Eilertson, P. Dokas, V. Kumar, and K. Long, Scan Detection - Revisited, AHPCRC Technical Report 2004127.
[15] C. B. Lee, C. Roedel, and E. Silenok, Detection and Characterization of Port Scan Attacks, http://www.cs.ucsd.edu/users/clbailey/PortScans.pdf
[16] R. Pan, V. Yegneswaran, P. Barford, V. Paxson, and L. Peterson, Characteristics of Internet Background Radiation, in Proc. ACM SIGCOMM'04, 2004.
[17] L. Spitzner, Honeypots: Tracking Hackers, Addison-Wesley, 2002.
[18] The Honeynet Project, Know Your Enemy, Addison-Wesley, 2002.
[19] http://www.honeynet.org/tools/sebek/
[20] http://www.ethereal.com/
[21] http://swatch.sourceforge.net/
[22] http://www.symantec.com/
[23] http://www.insecure.org/nmap/
[24] http://www.tenablesecurity.com/newt.html
[25] http://www.nessus.org/
[26] http://sourceforge.net/projects/nettime
[27] S. McClure, J. Scambray, and G. Kurtz, Hacking Exposed: Network Security Secrets & Solutions, McGraw-Hill, 1999.
[28] J. Chirillo, Hack Attacks Revealed: A Complete Reference for UNIX, Windows, and Linux with Custom Security Toolkit, Second Edition, Wiley, 2002.
[29] M. Wolfgang, Host Discovery with nmap, 2002, http://www.net-security.org/dl/articles/discovery.pdf
