Preemptive Resource Management: Defending against Resource-Monopolizing DoS

Wataru Kaneko
Department of Computer Science, Graduate School of Information Science and Technology, University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
email: [email protected]

Kenji Kono
Department of Computer Science, University of Electro-Communications
1-5-1 Chofugaoka, Chofu-shi, Tokyo, Japan
PRESTO, Japan Science and Technology Corporation
email: [email protected]

Kentaro Shimizu
Department of Computer Science, Graduate School of Information Science and Technology, University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
email: [email protected]

ABSTRACT
Resource-monopolizing Denial-of-Service (DoS) is one form of malicious-code attack. An attacking code exclusively uses shared resources and attempts to drop the responsiveness of the machine so low that it is practically useless. Resource-monopolizing DoS is difficult to prevent in commodity operating systems because they allow every process to compete for shared resources in an uncontrolled manner. This paper presents preemptive resource management, a scheme we developed to defend against resource-monopolizing DoS. Preemptive resource management extensively applies priority and preemption to every type of resource. It controls resource allocation based on priority, and preempts resources allocated to lower-priority processes based on the availability of shared resources. The experimental results we obtained suggest that preemptive resource management prevents resource-monopolizing DoS from impinging on the responsiveness of other, innocent processes.

KEY WORDS
Operating Systems, Resource Management, Security, Executable Contents, Malicious Code

1 Introduction

Resource-monopolizing Denial-of-Service (DoS) is a new kind of security threat on the Internet. This form of threat involves an attacker distributing malicious code through the Internet, via e-mail, the WWW, or peer-to-peer file-sharing systems. When a user downloads and executes the distributed code, the code exclusively uses, or monopolizes, shared resources such as CPU time, physical memory, and I/O bandwidth, and attempts to drop the responsiveness of the user's machine so low that it is practically useless. Resource-monopolizing DoS is one form of DoS attack: the attacked machine denies all services to the user, because no service can proceed under the severe contention over shared resources.

Resource-monopolizing DoS exploits a serious weakness in the resource management of current operating systems. Current operating systems allow all processes to compete for shared resources in an uncontrolled manner; they rely on users being sufficiently cooperative not to overload the system. Therefore, if an attacking process attempts to intentionally monopolize shared resources, the system provides no support for regulating simultaneous access to those resources. For example, if an attacking process issues an enormous amount of disk access, the system cannot regulate I/O access and thus falls into a resource-monopolized state. To prevent resource-monopolizing DoS, the operating system must regulate all access to all kinds of resources, including CPU time, memory, and I/O bandwidth.

This paper proposes a novel scheme of resource management, called preemptive resource management, that enables us to prevent resource-monopolizing DoS. The key idea behind the proposed scheme is to use priority and preemption extensively, on every resource. Priority and preemption have long been used for scheduling CPU time; a notable feature of our scheme is to apply them to other resources such as memory and I/O bandwidth. Every process is assigned a resource priority, which represents the importance of the process. If a process has a higher resource priority, a necessary resource is allocated to it even if there is contention over the resource; if the resource is not available, the higher-priority process preempts the resource allocated to lower-priority processes. In this way, we can control access to shared resources by adjusting resource priorities.

Our scheme can prevent resource-monopolizing DoS. When we execute potentially malicious code, we assign it a resource priority lower than that of innocent processes. By doing so, shared resources are allocated to the innocent processes rather than to the malicious code, even if the code attempts to intentionally monopolize resources. Even if we assign a higher priority to malicious code by mistake, we can dynamically lower its priority during execution. Once the priority is lowered, the resources monopolized by the malicious code are rapidly preempted by innocent processes and the resource-monopolized state disappears.

We built a prototype system by modifying a Linux 2.4.7 kernel. It manages three important resources: CPU time, memory, and disk bandwidth. The experimental results based on the prototype suggest that our scheme can smoothly resolve resource contention and prevent resource-monopolizing DoS. On the original Linux, the response time of an interactive process increases more than 100-fold under attack, but on our prototype it increases only negligibly.

The rest of the paper is organized as follows. Section 2 discusses how resource-monopolizing DoS creates a serious situation, and the limitations of existing strategies. Section 3 proposes preemptive resource management and Section 4 describes the implementation. Section 5 presents our experimental results. We discuss related work in Section 6 and draw conclusions in Section 7.

2 Resource-Monopolizing DoS

The threat created by resource-monopolizing DoS is even greater than that of other security threats such as distributed DoS and buffer-overflow attacks, because even a novice user can create an attack code. Not only program code but also a document file can initiate a resource-monopolizing DoS. If a user creates a document file that makes its viewer consume a large amount of resources, the viewer becomes a monopolizer. For example, an attacker can create a resource-monopolizing PostScript, PDF, or PowerPoint file and place it on his web page, disguised as an interesting document. The document viewer suddenly becomes a monopolizer when it processes the malicious document.

2.1 Impact on Responsiveness

Resource-monopolizing DoS is an attack in which an untrusted code, called a monopolizer, intentionally monopolizes, or exclusively uses, shared resources such as CPU time, physical memory, and I/O bandwidth. A monopolizer may try to monopolize only one type of resource, for instance CPU time, by executing an infinite loop. A more powerful monopolizer tries to monopolize many types of resources at the same time. For example, if a monopolizer successfully monopolizes physical memory, other resources such as CPU time and disk bandwidth are also monopolized, because the system requires both of these for paging in and out.

Figure 1. Impact of Resource-Monopolizing DoS.

To reproduce the impact of resource-monopolizing DoS, we conducted the following experiment on a 733MHz Pentium III PC with 128MB of main memory, running Linux 2.4.7. In the experiment, we ran five monopolizers concurrently, each of which accessed a 25MB data set repeatedly. These monopolizers cooperatively attempted to monopolize CPU time, physical memory, and disk bandwidth at the same time. To emulate an attacked process, we executed a simple process that repeatedly touched a 4MB data set and then slept for 5 seconds; this process emulated the behavior of an "interactive" process. We measured the "response time" of the interactive process, i.e., the time required to touch the entire data set. In the experiment, the interactive process was started and then, 120 seconds later, the five monopolizers were launched.

Figure 1 shows the results. Before the monopolizers were executed, the response time of the interactive process was a constant 27 msec. After they were executed, the response time dramatically increased, up to 10,000 msec; the machine under attack was almost frozen. After 240 seconds had elapsed, we lowered the CPU priority of the monopolizers, but the response time did not improve in the least.
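For concreteness, the interactive workload's measurement loop might look like the following user-space sketch. This is our own illustration, not the authors' code; the 4MB working set and page-granular touching are taken from the description above, while all names are ours.

```python
import time

PAGE = 4096   # assumed page size for the touch stride

def touch(data: bytearray) -> float:
    """Touch every page of the data set once, head to end.
    The elapsed time models the paper's "response time"."""
    start = time.monotonic()
    for i in range(0, len(data), PAGE):
        data[i] ^= 1            # force a memory access on each page
    return time.monotonic() - start

working_set = bytearray(4 * 1024 * 1024)   # 4MB, as in the experiment
elapsed = touch(working_set)               # the real process then sleeps 5 s
```

Under memory pressure, each touched page can fault and trigger paging I/O, which is why the measured time balloons from 27 msec to seconds when the monopolizers run.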

2.2 Existing Strategies

At first glance, killing a monopolizer is the simplest strategy for preventing resource-monopolizing DoS. If the monopolizer is killed, other processes can proceed normally, since the operating system releases the resources allocated to the monopolizer. However, when the system is severely overloaded, it takes a great deal of time to kill a process, because process-controlling commands like UNIX ps and kill cannot proceed due to the shortage of resources. In the worst case, all we can do is reboot the machine. If we reboot, we lose all transient data, such as documents being edited in a word processor, unless we saved them prior to the attack. We need a more sophisticated scheme to prevent resource-monopolizing DoS.

A commodity operating system has insufficient mechanisms to control resource allocation. Existing strategies can be classified into three categories: (1) lowering CPU priority, (2) enforcing quotas on resources, and (3) applying real-time system techniques.

The first strategy is to use CPU priority, which is supported in many commodity operating systems. Before running a potential monopolizer, we lower its CPU priority. This works well if CPU time is the only resource being monopolized. If another resource, such as memory or I/O bandwidth, is monopolized, there is no contention for CPU time and thus CPU prioritization is ineffective. Furthermore, if we misidentify monopolizers and run them without lowering their CPU priorities, the machine suffers from resource-monopolizing DoS and we have no way to recover from the attack.

The second strategy is to enforce fixed quotas on every resource, per process or per group of processes. With quotas enforced, a monopolizer cannot exclusively use shared resources. However, this strategy does not allow idle resources to be utilized effectively, and the overall throughput and responsiveness of the system drop significantly. As mentioned earlier, a trusted program like a document viewer may suddenly become a monopolizer when it processes malicious data. To prevent this, trusted processes must also run with quotas enforced. Thus, each process is given only a small portion of the entire resources even when there is a large amount of idle resources.
The third strategy is to apply techniques developed in the domain of real-time systems, where many sophisticated schemes of resource management have been devised. In a real-time system, an application requests guaranteed resources or deadlines, and the system guarantees the requests through admission control, based on the availability of resources [9, 3]. By applying these techniques, we can guarantee that trusted processes have an adequate amount of resources, but the application must make explicit reservations for them. Therefore, it is not straightforward to simply apply these techniques to a general-purpose operating system, which runs a wide variety of resource-unaware applications.

3 Preemptive Resource Management

To address the limitations of the existing strategies, we developed a novel scheme called preemptive resource management. This scheme applies the concepts of priority and preemption extensively, to all resources, including memory and I/O bandwidth. Compared with the strategies described above, our scheme has the following benefits. First, even if a monopolizer exclusively uses resources other than CPU time, we can prevent resource-monopolizing DoS, because we can control access to every resource. Second, idle resources are utilized effectively: if there is an idle resource, it is allocated to the process requesting it. Third, an application does not need to make explicit reservations for resources; our resource manager automatically allocates or preempts shared resources according to resource priorities.

Furthermore, if there is no resource-monopolizing process, preemptive resource management works in almost the same way as an ordinary system. Therefore, the user is not forced to pay additional overhead in preparation for resource-monopolizing DoS.

3.1 Resource Preemption

Preemptive resource management allows a process to preempt the resources of other processes. When a process requests an unavailable resource, it is allowed to preempt one that has already been allocated to another process. Even if a monopolizer exclusively uses a shared resource, other processes can preempt it from the monopolizer, preventing resources from being monopolized. To avoid uncontrolled preemption, we introduce the concept of resource priority, discussed in Section 3.2.

Preemption is a familiar concept in CPU-time scheduling: most operating systems support the preemption of CPU time to implement preemptive multitasking. It seems reasonable to extend this concept to other resources to prevent resource-monopolizing DoS. However, if we extend it naively, we cannot prevent monopolization, because the preemption itself requires additional resources. In particular, preempting physical memory is difficult because the system needs many resources to preempt a page frame: not only CPU time but also disk bandwidth to page out the frame's contents. In the worst case, preemption takes considerable time and the machine falls into a denial-of-service state.

3.2 Resource Priority

As previously discussed, our scheme introduces the concept of resource priority to control preemption. Resource priority determines which process can preempt whose resources. When a process requests a resource that has already been allocated to another process, the resource priorities determine whether the requesting process can preempt the resource. If the requesting process has a higher priority than the process using the resource, it is allowed to preempt the resource; if it has a lower or equal priority, preemption is not allowed.

Resource priorities can be changed dynamically. If a process undertakes resource-monopolizing DoS, we can dynamically lower the priority of that process and thereby recover from the resource-monopolized state.

Resource priority enables the user to control resource allocation qualitatively. In other words, the user does not need to worry about specifying quantitative parameters such as 37% of CPU time, 4.5MB of physical memory, or 18Mbps of network bandwidth. With resource reservation or the fixed-quota approach, the user would have to specify such parameters. Although our approach cannot control resource allocation in fine detail, we believe that prioritization suffices to prevent resource-monopolizing DoS.

3.3 Putting into Practice

By controlling resource priorities, the user can avoid or recover from resource-monopolizing DoS. For the purposes of explanation, we assume that three resource priorities have been prepared, although the user can define an arbitrary number of priorities in our implementation. How many priorities should be prepared, and which priority should be assigned to which process, are management-policy issues and are beyond the scope of this paper.

To explain how resource priorities are used, we prepare three priorities: admin, normal, and suspect. The admin priority is the highest and the suspect priority is the lowest. Most processes are assigned normal priority; if all processes are assigned the same priority, the system apportions shared resources in the same way as an ordinary system that does not support preemptive resource management. The highest priority, admin, is assigned to process-controlling commands such as Unix ps and kill; we also assign admin priority to the commands that control resource priorities.

As mentioned earlier, a trusted program may suddenly become a monopolizer when it processes malicious data; that is, a process assigned normal priority may suddenly become a monopolizer. In such a situation, we can quickly invoke the process-controlling commands to lower the resource priority of the monopolizer, because these commands have been assigned the highest priority. Once the priority of the monopolizer has been lowered, the other processes can acquire the necessary resources by preempting them from the monopolizer. As a result, we can recover from resource-monopolizing DoS.

If we execute a suspicious process that may perform resource-monopolizing DoS, we assign suspect priority to the process before execution. If the suspicious process does not attack, the resources allocated to it are not preempted, and it therefore proceeds normally.

4 Implementation

We modified a Linux 2.4.7 kernel and built a prototype system for preemptive resource management. The resource manager must track down the process using each resource in order to determine that process's resource priority. To prevent resource-monopolizing DoS, preemption must be implemented so that it does not adversely affect the performance of higher-priority processes; if preemption itself takes too much time, we cannot rapidly recover from resource-monopolizing DoS. In what follows, we explain the implementation for three resources: physical memory, disk bandwidth, and CPU time. Our implementation is specific to Linux, but preemptive resource management can be ported to other operating systems.

4.1 Physical Memory

4.1.1 Tracking down the process using a page frame

In preemptive resource management, a higher-priority process is allowed to preempt the page frames of lower-priority processes when there is no free page frame. Our memory manager is therefore required to track down the process using each page frame and determine the resource priority of that process. Tracking down the process is easy if a page frame is mapped into a single user address space: the process running in that address space is obviously using the page frame. However, if a page frame is mapped into several processes, the process using it is not so obvious. Unfortunately, page frames are often shared among several processes. For example, kernel resources such as file caches and I/O buffers are shared among several processes, and it is ambiguous which process is using them.

In our implementation, the memory manager regards the requester of a page frame as its user. The requester is the process that requested the page frame first. If a process is using too many page frames, it naturally becomes their requester, and the memory manager can identify which process is monopolizing physical memory.
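The first-requester rule amounts to charging each frame to whichever process asked for it first; later sharers do not change the charge. A minimal sketch (the class and names are our own, not the kernel's data structures):

```python
class FrameTable:
    """First-requester accounting for shared page frames: the process
    that first requests a frame is charged as its user, even if the
    frame (e.g. a file-cache page) is later shared by other processes."""

    def __init__(self):
        self.owner = {}   # frame id -> pid of the first requester

    def request(self, frame: int, pid: int) -> int:
        # setdefault records ownership only on the first request;
        # subsequent sharers see the original charge unchanged.
        return self.owner.setdefault(frame, pid)

table = FrameTable()
assert table.request(7, pid=100) == 100   # first requester is charged
assert table.request(7, pid=200) == 100   # sharing does not re-charge
```

A process that allocates many frames thus accumulates many entries under its pid, which is exactly how the memory manager spots a monopolizer.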

4.1.2 Preemption and latency hiding

To implement preemptive resource management, the memory manager is required to select a victim page based on resource priorities. Since Linux's memory manager employs an LRU-like algorithm, we incorporated preemptive resource management into this algorithm. Linux employs a variant of Mach's FIFO with second chance [5]. A page frame is in one of three states: active, inactive-dirty, and inactive-clean. If a page frame has been accessed recently, it enters the active state; otherwise, it enters the inactive-dirty or inactive-clean state. When the number of free page frames falls below a threshold value, the pagedaemon is activated. It picks some page frames that have not been accessed recently but are still in the active state, and turns them into the inactive-dirty state. Then, the pagedaemon picks some page frames in the inactive-dirty state and turns them into the inactive-clean state. If a page frame's reference bit is set, it is returned to the active state. If the page frame is dirty (i.e., updated), it is first written back to the backing store. Page frames in the inactive-clean state are eventually selected as victims.

In our implementation of preemptive resource management, a victim page is selected based on resource priorities as well as page states. If there is an inactive-dirty or inactive-clean page, our memory manager selects the victim page in the same way as the original Linux. If there are no inactive-dirty or -clean pages, our memory manager tries to preempt a page frame of a lower-priority process. It searches for a page frame in the active state whose user has a resource priority lower than that of the process requesting the page frame, and picks it as a victim candidate. The memory manager then turns the frame into the inactive-dirty state, no matter how recently it has been accessed. In this way, a higher-priority process preempts a page frame allocated to a lower-priority process. Thereafter, as in the original Linux, the page is written back to the backing store, if necessary, and its state is turned into inactive-clean; eventually the page is selected as the victim.

To hide preemption latency, we made two optimizations. First, the memory manager reserves a predefined number of page frames. When a process preempts a page frame, the page frame is allocated from these reserved frames. With this optimization, the higher-priority process that requests a page frame immediately restarts execution without waiting for the memory manager to select a victim frame. Second, the memory manager preempts page frames eagerly; that is, it swaps out more than one victim page when an inactive-dirty page is asynchronously written back to the backing store. This is because a higher-priority process that has preempted one page frame is likely to preempt more sooner or later.
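The victim-selection policy can be sketched as follows. This is our own simplified model, not the kernel code: the state names are the paper's, but the scan order, data layout, and function name are assumptions.

```python
def pick_victim(frames, requester_prio):
    """Select a victim page frame for a requester at requester_prio.

    frames: list of dicts with keys 'state' (one of 'active',
    'inactive-dirty', 'inactive-clean') and 'owner_prio'.
    As in stock Linux, inactive frames are taken first; only when
    none exist is an active frame of a strictly lower-priority
    owner demoted and taken, regardless of how recently it was used.
    """
    for f in frames:
        if f['state'] in ('inactive-clean', 'inactive-dirty'):
            return f
    for f in frames:
        if f['state'] == 'active' and f['owner_prio'] < requester_prio:
            f['state'] = 'inactive-dirty'   # preempt: demote the frame
            return f
    return None                             # nothing preemptible

frames = [{'state': 'active', 'owner_prio': 2},
          {'state': 'active', 'owner_prio': 0}]
victim = pick_victim(frames, requester_prio=1)
assert victim is frames[1] and victim['state'] == 'inactive-dirty'
```

Note that an equal- or higher-priority owner's active frames are never demoted, which is what confines the cost of an attack to the attacker's own priority level.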

4.1.3 Utilizing inactive page frames

In preemptive resource management, the memory manager must be careful in reusing inactive page frames, especially when a lower-priority process requests a page frame but all the inactive page frames belong to higher-priority processes. When no free page frame is available, the memory manager is activated quite frequently, and every time it is activated, it clears the reference bits of the page frames. Therefore, even if its reference bit is cleared, a page frame may still be in active use by a higher-priority process. If the memory manager selects such a page frame as a victim, an inversion of resource priorities occurs: a lower-priority process "preempts" a page frame actively used by a higher-priority process.

To avoid this priority inversion, we reuse the page frames of higher-priority processes if, and only if, an adequate number of page frames have been allocated to them. In that case, reusing page frames improves the overall performance of the system without seriously impinging on the performance of higher-priority processes. To determine whether an adequate number of page frames has been allocated, a time stamp is maintained for each priority. Each time stamp records how long the processes at that priority have gone without requesting additional page frames. If the time stamp indicates that no requests have been made for a predefined period of time, the memory manager reuses the page frames. In this way, an inactive page frame can be reused by lower-priority processes.
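The time-stamp rule can be sketched like this; the five-second grace period is our own placeholder for the paper's unspecified "predefined period of time", and all names are assumptions.

```python
import time

GRACE = 5.0          # assumed grace period; the paper does not specify one
last_request = {}    # priority level -> time of last page-frame request

def note_request(prio, now=None):
    """Record that some process at this priority requested a page frame."""
    last_request[prio] = time.monotonic() if now is None else now

def may_reuse(holder_prio, now=None):
    """A lower-priority process may reuse an inactive frame belonging to
    holder_prio only if that level has requested no frames for GRACE s;
    a level that never requested anything is always reusable."""
    now = time.monotonic() if now is None else now
    return now - last_request.get(holder_prio, float('-inf')) > GRACE

note_request(prio=2, now=100.0)
assert not may_reuse(2, now=102.0)   # still actively requesting: keep frames
assert may_reuse(2, now=110.0)       # quiet for > GRACE: frames reusable
```

A level that keeps faulting refreshes its time stamp continuously, so its cleared-reference-bit frames stay protected, which is exactly the inversion the mechanism prevents.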

4.1.4 In-kernel memory

Preemptive resource management can be applied to in-kernel memory resources such as file caches and I/O buffers. In our prototype, these resources are also managed preemptively. The memory manager tracks down the process using those resources and preempts them as needed, in the same way as described in the previous sections.

4.2 Disk I/O

4.2.1 Tracking down the process requesting disk I/O

In preemptive resource management, a higher-priority process is allowed to perform disk I/O even if lower-priority processes requested disk I/O before it. The disk scheduler tracks down the process requesting each disk I/O to obtain the resource priority of that process. Tracking down the process is easy if the disk I/O is requested by an ordinary user process. If the disk I/O is requested by a system process such as the pagedaemon, tracking it down is not so easy: the scheduler must discover the process that caused the pagedaemon to request the disk I/O for paging. If a higher-priority process requests paging, the requested I/O must be given higher priority; if a lower-priority process does so, the requested I/O must be given lower priority. In our prototype, before the pagedaemon issues a disk I/O request, it changes its own resource priority to that of the process requesting paging. As a result, the disk I/O request issued by the pagedaemon is assigned the same priority as the process requesting paging.
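The pagedaemon's temporary priority change is a form of priority inheritance; a user-space sketch of the idea (our own model, with hypothetical names and structure) is:

```python
def issue_paging_io(pagedaemon, faulting, submit):
    """Before the daemon submits a paging request, it temporarily takes
    the resource priority of the process that caused the paging, so the
    disk scheduler sees the request at that process's priority."""
    saved = pagedaemon['prio']
    pagedaemon['prio'] = faulting['prio']
    try:
        submit(prio=pagedaemon['prio'])   # enqueue at the inherited priority
    finally:
        pagedaemon['prio'] = saved        # restore the daemon's own priority

pagedaemon = {'prio': 2}                  # assumed default for the daemon
seen = []
issue_paging_io(pagedaemon, {'prio': 0}, lambda prio: seen.append(prio))
assert seen == [0]                        # I/O carries the faulter's priority
assert pagedaemon['prio'] == 2            # daemon priority restored
```

Without this inheritance, a low-priority monopolizer's paging traffic would be issued at the daemon's (presumably high) priority and jump the queue ahead of innocent processes' I/O.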

4.2.2 Preempting disk I/O

To implement preemptive resource management, the disk scheduler is required to dispatch disk I/O based on resource priorities. In Linux, the disk scheduler has a queue that stores I/O requests. In our prototype, the scheduler sorts I/O requests in resource-priority order: I/O requests from higher-priority processes are placed in the queue before those from lower-priority processes. In this way, a higher-priority process preempts the I/O requests of lower-priority processes. I/O requests from equal-priority processes are sorted by the elevator algorithm to reduce seek time.
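The priority-sorted queue can be sketched with a sorted list; here an ascending-sector tie-break stands in for the elevator pass (a simplification of the real elevator scan), and all names are our own.

```python
import bisect

class PrioElevatorQueue:
    """I/O requests ordered by resource priority (higher first);
    ties are broken by sector number as a stand-in for the elevator."""

    def __init__(self):
        self._q = []            # kept sorted by (-priority, sector)

    def add(self, priority: int, sector: int):
        bisect.insort(self._q, (-priority, sector))

    def dispatch(self):
        return self._q.pop(0)   # next request to send to the disk

q = PrioElevatorQueue()
q.add(priority=1, sector=900)
q.add(priority=2, sector=500)
q.add(priority=1, sector=100)
assert q.dispatch() == (-2, 500)   # higher priority jumps the queue
assert q.dispatch() == (-1, 100)   # then sector order within one level
```

A newly arrived high-priority request is placed ahead of every queued low-priority request, so a monopolizer's backlog of I/O cannot delay it.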

4.3 CPU time

Figure 2. Response Time of the Interactive Process: (a) Preemptive Resource Management; (b) Linux 2.4.7.

Priority and preemption are both used to schedule CPU time in Linux. The user indirectly controls scheduling priority by changing the "nice" value of each process: the higher the nice value, the lower the process's priority tends to be, and vice versa. To ease implementation, we used the nice value to control resource priority for CPU time. If the resource priority is higher, we set the nice value lower; if the resource priority is lower, we set the nice value higher.

The nice value does not implement preemptive resource management strictly. A number of lower-priority processes can consume considerable aggregate CPU time and adversely degrade the performance of higher-priority processes. We believe that a strict implementation is possible if we apply the techniques developed in the domain of real-time systems, but it would require a great deal of effort.
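The priority-to-nice mapping might be realized as below; the three-level scale and the linear spread over the standard nice range [-20, 19] are our own assumptions, since the paper states only the direction of the mapping.

```python
def nice_for(resource_prio: int, levels: int = 3) -> int:
    """Map a resource priority (0 = lowest .. levels-1 = highest) onto
    a Linux nice value: higher resource priority -> lower nice value.
    The linear spread over [-20, 19] is our own choice."""
    lo, hi = -20, 19                       # standard nice range
    step = (hi - lo) // max(levels - 1, 1)
    return hi - resource_prio * step

assert nice_for(0) == 19      # suspect: most polite, scheduled last
assert nice_for(1) == 0       # normal: default nice
assert nice_for(2) == -19     # admin: near the most favored nice
```

As the text notes, this only biases the scheduler: many nice-19 processes can still consume substantial aggregate CPU time, so the mapping approximates rather than enforces preemptive resource management for CPU time.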

5 Experiments

This section demonstrates that preemptive resource management can prevent resource-monopolizing DoS. The experiments were conducted on a 733MHz Pentium III PC with 128MB of SDRAM and a 20GB UltraDMA/66 hard disk. Our prototype is based on Linux 2.4.7. In the experiments, an "interactive" process is attacked by a group of monopolizers that cooperatively attempt to monopolize CPU time, memory, and disk I/O. To confirm that resource-monopolizing DoS was prevented, we measured the response time of the interactive process while it was under attack. For comparison, we also measured the response time on the original Linux 2.4.7.

Interactive process: This is a simple program that emulates the behavior of an interactive process. The interactive process repeatedly accesses a data array from head to end and then sleeps for 5 seconds. The size of the data array simulates the working-set size of the interactive process, the time needed to access the array simulates its response time, and the sleep time simulates the waiting time for user input. The working-set size (i.e., the size of the array) is initially 4MB but increases to 32MB during execution.

Monopolizing process: The group of attacking processes attempts to monopolize CPU time, physical memory, and disk I/O. Each process allocates 25MB of memory and accesses it repeatedly in a tight loop. We executed five attacking processes concurrently. The attacking processes exclusively use CPU time and physical memory (125MB in total), and the thrashing caused by memory monopolization has the side effect of wasting disk bandwidth.

Figures 2(a) and (b) show the response time of the interactive process on our prototype and on the original Linux 2.4.7. To investigate resource allocation, we also measured the resident set size (RSS), i.e., the amount of physical memory mapped to the interactive process; Figures 3(a) and (b) show the RSS on our prototype and on the original Linux.

Figure 3. Resident Set Size (RSS) of the Interactive Process: (a) Preemptive Resource Management; (b) Linux 2.4.7.

For the first 120 seconds, no attacking processes are executed. The response time on both platforms is a constant 27 msec. As Fig. 3 shows, the interactive process acquires 4MB of memory; this is reasonable, since the working-set size of the interactive process is 4MB at this point. Then, 120 seconds later, we started the attack by executing the five monopolizers. Here, the resource priorities of the monopolizers were set to the same priority as the interactive process. Consequently, the response time of the interactive process drastically increases, up to 10,000 msec, on both platforms. As shown in Fig. 3, the interactive process fails to acquire physical memory.

After 240 seconds had elapsed, we lowered the resource priorities of the monopolizers. As Fig. 2(a) shows, the response time of the interactive process rapidly recovers to a constant 27 msec. With the monopolizers' resource priorities lowered, the interactive process now has the higher priority and starts preempting the memory allocated to the monopolizers; as Fig. 3(a) shows, its RSS recovers to 4MB after the priority change. This demonstrates that preemptive resource management works properly to prevent resource-monopolizing DoS.

After 360 seconds had elapsed, we changed the working-set size of the interactive process to 32MB. As Fig. 2(a) shows, the response time increases up to 10,000 msec, but preemption works well and the response time rapidly drops to 250 msec. This is because the interactive process successfully acquires 32MB of memory (Fig. 3(a)). The steady-state response time is higher than before because the array size has increased from 4MB to 32MB.

6 Related Work

The work most closely related to ours is the Stealth distributed scheduler [6]. Stealth provides two levels of resource priority and allows a high-priority process to preempt resources from low-priority processes. The goal of Stealth is to balance the load of workstation clusters, whereas ours is to defend against resource-monopolizing DoS; this difference in goals led us to very different system designs and optimizations. Our method allows the user to define an arbitrary number of priorities and to alter priorities dynamically, but Stealth does not. To enable rapid adaptation to dynamically changing priorities, we made several optimizations to hide the latency of preemption; in Stealth, the priority is fixed and there is no optimization to hide this latency.

Liedtke et al. [7] presented a mechanism that prevents physical memory from being monopolized. Their approach is based on a µ-kernel: when a page frame is unavailable, a memory requester must donate some of its memory to the µ-kernel. In their approach, if an attacking process acquires many page frames before other processes are executed, those page frames cannot be preempted. Therefore, it is difficult to recover from resource-monopolizing DoS.

Performance isolation [12] is a resource scheduler designed for shared-memory multiprocessors. A group of processes called an SPU is guaranteed a prefixed amount of resources. Resources can be lent to other SPUs and revoked when needed again by the loaning SPU. Since performance isolation is not designed to defend against resource-monopolizing DoS, shared resources such as file caches are charged to the kernel SPU, and therefore a monopolizer can exclusively use those resources.

Sullivan and Seltzer [11] presented extensions to lottery scheduling [13]. A group of processes called a currency has a number of lottery tickets, and the more tickets a currency has, the more resources it can acquire. If a monopolizer acquires lottery tickets, other processes cannot preempt them.

We now describe some user-level techniques for controlling resources. MS Manners [4] regulates resource access at the user level, in cooperation with user processes. We cannot use MS Manners to defend against resource-monopolizing DoS because a monopolizer is not cooperative. Brown and Mowry [2] developed a compiler-based technique to control physical-memory allocation. In their approach, the operating system is extended with primitives to control memory allocation, and the compiler inserts calls to these primitives into the source code.

To ease tracking of the process using a resource, the resource container [1] decouples the notion of a resource principal from that of a process. Scout [8, 10] makes an execution path in the kernel explicit. These abstractions can be seamlessly integrated with our preemptive resource management, since they make the resource principal explicit.

Our work also differs from the whole class of real-time systems, because these primarily use resource specification and admission control as a means of guaranteeing resource allocation. Preemptive resource management is not designed to provide guarantees, does not require resource specification, and does not use admission control.
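The contrast can be sketched as follows. Both classes below are our own hypothetical illustrations, not the implementation of any cited system: an admission controller rejects a request up front when prefixed reservations would exceed capacity, whereas a priority-based manager with preemption satisfies a high-priority request by revoking resources from lower-priority holders.

```python
class AdmissionController:
    """Reservation-based admission control, typical of real-time systems:
    a request is rejected up front if reservations would exceed capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.reserved = {}

    def admit(self, proc, amount):
        if sum(self.reserved.values()) + amount > self.capacity:
            return False             # no guarantee can be given; reject
        self.reserved[proc] = self.reserved.get(proc, 0) + amount
        return True

class PreemptiveManager:
    """Priority-based allocation with preemption: a high-priority request
    succeeds as long as lower-priority processes hold enough resources."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.held = {}               # proc -> (priority, amount)

    def allocate(self, proc, priority, amount):
        free = self.capacity - sum(a for _, a in self.held.values())
        # Preempt the lowest-priority holders until the request fits.
        for victim, (vprio, vamount) in sorted(self.held.items(),
                                               key=lambda kv: kv[1][0]):
            if free >= amount or vprio >= priority:
                break
            free += vamount
            del self.held[victim]
        if free < amount:
            return False
        self.held[proc] = (priority, amount)
        return True
```

Under admission control, a monopolizer that reserves the whole capacity first locks everyone else out; under preemption, a later high-priority process reclaims those resources.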

7 Conclusion

This paper described preemptive resource management, a scheme developed for defending against resource-monopolizing DoS. An attacking process exclusively uses shared resources and attempts to drop the responsiveness of the machine so low that it is practically useless. To control resource allocation, our scheme extensively applies priority and preemption to every resource: the resource manager preempts resources allocated to an attacking process and allocates them to other, innocent processes. The experimental results suggest that preemptive resource management prevents an attacking process from impinging on the performance of higher-priority processes.

Our prototype can be enhanced in two ways. First, we can incorporate techniques such as intrusion detection to automatically configure resource priorities for processes. Second, our system can be enhanced to defend against attacks that attempt to overload shared local servers. For example, an attacking process might intentionally overload the X server to degrade the performance of processes relying on the same X server. To deal with such an attack, we need a resource principal finer in granularity than a process.

References

[1] G. Banga, P. Druschel, and J. C. Mogul. Resource containers: A new facility for resource management in server systems. In Proc. of USENIX Symposium on Operating Systems Design and Implementation, pages 45–58, 1999.
[2] A. D. Brown and T. C. Mowry. Taming the memory hogs: Using compiler-inserted releases to manage physical memory intelligently. In Proc. of the 4th USENIX Symposium on Operating Systems Design and Implementation (OSDI 2000), October 2000.
[3] J. Bruno, E. Gabber, B. Özden, and A. Silberschatz. The Eclipse operating system: Providing quality of service via reservation domains. In Proc. of USENIX Annual Technical Conference, pages 235–246, 1998.
[4] J. R. Douceur and W. J. Bolosky. Progress-based regulation of low-importance processes. In Proc. of the 17th ACM Symposium on Operating Systems Principles (SOSP '99), December 1999.
[5] R. P. Draves. Page replacement and reference bit emulation in Mach. In Proc. of the Second USENIX Mach Symposium, pages 201–212, November 1991.
[6] P. Krueger and R. Chawla. The Stealth distributed scheduler. In Proc. of the 11th IEEE International Conference on Distributed Computing Systems, pages 336–343, 1991.
[7] J. Liedtke, N. Islam, and T. Jaeger. Preventing denial-of-service attacks on a µ-kernel for WebOSes. In Proc. of the 6th IEEE Workshop on Hot Topics in Operating Systems (HotOS), Chatham (Cape Cod), MA, May 1997.
[8] D. Mosberger and L. L. Peterson. Making paths explicit in the Scout operating system. In Proc. of USENIX Symposium on Operating Systems Design and Implementation, pages 153–167, 1996.
[9] R. Rajkumar, K. Juvva, A. Molano, and S. Oikawa. Resource kernels: A resource-centric approach to real-time and multimedia systems. In SPIE Vol. 3310, December 1997.
[10] O. Spatscheck and L. L. Peterson. Defending against denial of service attacks in Scout. In Proc. of USENIX Symposium on Operating Systems Design and Implementation, pages 59–72, 1999.
[11] D. G. Sullivan and M. I. Seltzer. Isolation with flexibility: A resource management framework for central servers. In Proc. of the 2000 USENIX Annual Technical Conference, June 2000.
[12] B. Verghese, A. Gupta, and M. Rosenblum. Performance isolation: Sharing and isolation in shared-memory multiprocessors. In Proc. of the 8th International Conference on Architectural Support for Programming Languages and Operating Systems, October 1998.
[13] C. A. Waldspurger and W. E. Weihl. Lottery scheduling: Flexible proportional-share resource management. In Proc. of USENIX Symposium on Operating Systems Design and Implementation, 1994.
