Statistically Regulating Program Behavior via Mainstream Computing

Mark Stephenson
IBM Austin Research Lab
[email protected]

Ram Rangan∗
NVIDIA
[email protected]

Emmanuel Yashchin
IBM Watson Research Center
[email protected]

Eric Van Hensbergen
IBM Austin Research Lab
[email protected]

Abstract

Mainstream computing is a collaborative methodology that leverages the rarity of unanticipated system state in order to protect users. At a high level, our approach allows a user to say, “ensure that my program’s behavior conforms with at least 99.9% (or some other user-defined percentage) of the usage patterns for this program.” Put another way, we ask users to specify a tolerance for failure, p_fail, which bounds the rate at which the system will flag anomalies (which can be due to system liabilities, or can simply be benign false positives on legitimate executions). Statistically, then, the more mainstream, or “normal,” a user’s usage is, the less likely it is for the user to encounter an anomaly for a given setting of p_fail. Mainstream computing tracks program-level runtime statistics for an application across a community of users. Similar to other invariant tracking systems, mainstream computing constantly profiles applications in an effort to determine likely invariants for a program’s operands and control flow. Unlike prior art, our system provides statistical bounds on false positive rates, and we ask the user to set the bounds appropriately. This approach is analogous to the “privacy” slider bar present in some web browsers that allows users to easily trade functionality of the browser for potential loss of privacy. It is the mainstream computing server’s responsibility to generate, with statistical guarantees, the set of constraints that satisfy a user’s requests. As with prior art on collaborative infrastructures, the server collects data from multiple clients, creating a large corpus of data from which it can create constraints. Unlike previous work, however, we show that mainstream computing can create valuable models by consulting only a small portion of the corpus. We argue that this property of mainstream computing is crucial because it limits the influence rogue users may have on constraint creation.
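The likely-invariant profiling described above can be sketched in a few lines. This is an illustrative assumption, not the paper’s implementation: a range constraint for a single operand is learned from values profiled across the community, then checked at runtime; all names here are hypothetical.

```python
# Hypothetical sketch: learn a [lo, hi] range invariant for one
# program operand from community profiles, then check conformance.

def learn_range_invariant(observed_values):
    """Derive a (lo, hi) constraint from profiled operand values."""
    return (min(observed_values), max(observed_values))

def check(value, invariant):
    """Return True if the value conforms to the learned constraint."""
    lo, hi = invariant
    return lo <= value <= hi

# Values the operand took across many user runs (training corpus).
profiles = [3, 7, 5, 9, 4, 6]
inv = learn_range_invariant(profiles)

assert inv == (3, 9)
assert check(5, inv)        # mainstream value: conforms
assert not check(42, inv)   # out-of-range value: flagged as an anomaly
```

A legitimate but unusual run can still fall outside the learned range, which is exactly the benign-false-positive case that p_fail bounds.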
We introduce mainstream computing, a collaborative system that dynamically checks a program—via runtime assertion checks—to ensure that it is running according to expectation. Rather than enforcing strict, statically defined assertions, our system allows users to run with a set of assertions that are statistically guaranteed to fail at a rate bounded by a user-defined probability, p_fail. For example, a user can request a set of assertions that will fail at most 0.5% of the times the application is invoked. Users who believe their usage of an application is mainstream can use relatively large settings for p_fail. Higher values of p_fail provide stricter regulation of the application, which likely enhances security but will also inhibit some legitimate program behaviors; in contrast, program behavior is unregulated when p_fail = 0, leaving the user vulnerable to attack. We show that our prototype is able to detect denial of service attacks, integer overflows, frees of uninitialized memory, boundary violations, and an injection attack. In addition, we perform experiments with a mainstream computing system designed to protect against soft errors.

Categories and Subject Descriptors D.2.4 [Software Engineering]: Program Verification

General Terms Reliability, Security

1. Introduction

A variety of issues threaten the stability of today’s systems: code vulnerabilities, soft errors, insider threats, race conditions, hardware aging, etc. While there is no doubt that these threats are dangerous, we are fortunate that they rarely present themselves. The vast majority of the time, code running on modern systems executes in a manner consistent with user and programmer expectations. Current protection mechanisms tend to be designed for a specific vulnerability (e.g., buffer overruns or illegal control-flow transfers). In this paper we introduce mainstream computing, which, by simply detecting and enforcing likely program properties, naturally provides some level of protection against a wide variety of system liabilities.

The novel contributions of this paper are as follows:

• We introduce mainstream computing.
• We show that mainstream computing will likely generate untainted constraints, even when malicious users are part of the collaborative community.
• We show that mainstream computing systems can protect against buffer overruns, integer overflows, memory free bugs, denial of service attacks, and injection attacks.
• We show that mainstream computing systems can be used to recover from many soft errors.

∗ Contributed to this paper while employed at IBM Austin Research Lab.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
CGO’10, April 24–28, 2010, Toronto, Ontario, Canada.
Copyright © 2010 ACM 978-1-60558-635-9/10/04...$10.00
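One way to picture the p_fail budget is as a selection problem on the server: admit candidate assertions only while their combined benign-failure rate, observed on profiled runs, stays within the user’s budget. The sketch below uses a simple union bound and greedy selection purely for illustration; it is an assumption for exposition, not the paper’s exact statistical procedure, and all names are hypothetical.

```python
# Hedged sketch: pick a set of assertions whose combined empirical
# false-positive rate (union bound) stays under the user's p_fail.

def select_assertions(candidates, p_fail):
    """candidates: list of (name, observed_benign_failure_rate)."""
    chosen, budget = [], p_fail
    # Admit the least-noisy assertions first.
    for name, rate in sorted(candidates, key=lambda c: c[1]):
        if rate <= budget:       # union bound keeps total under p_fail
            chosen.append(name)
            budget -= rate
    return chosen

cands = [("len_bound", 0.001), ("loop_trip", 0.002), ("ptr_range", 0.01)]
print(select_assertions(cands, 0.005))  # -> ['len_bound', 'loop_trip']
```

A user with a larger p_fail (say 0.02) would receive all three assertions, trading more false positives for tighter regulation, which mirrors the slider-bar analogy in the abstract.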

2. Mainstream Computing

At the conceptual level, mainstream computing attempts to automatically whitelist common behavior, and log, reject, sandbox, or repair abnormal behavior. This section describes the many components of a mainstream computing system. We begin with a high-level overview.
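The conceptual loop just described—whitelist what the community has seen, and apply a policy to everything else—can be sketched as follows. The policy names, event encoding, and dispatch structure are illustrative assumptions, not the system’s actual interface.

```python
# Minimal sketch of the whitelist-and-policy loop: events matching
# community-profiled behavior are allowed; anything else is logged,
# rejected, or repaired according to a configurable policy.

WHITELIST = {("read_input", "small"), ("read_input", "medium")}

def monitor(event, policy="log"):
    if event in WHITELIST:
        return "allow"
    if policy == "reject":
        raise RuntimeError(f"anomalous event {event!r} rejected")
    if policy == "repair":
        return "allow-with-fallback"   # e.g., clamp to a safe value
    return "flagged"                   # default policy: log the anomaly

assert monitor(("read_input", "small")) == "allow"
assert monitor(("read_input", "huge")) == "flagged"
```

The choice among log, reject, sandbox, and repair is orthogonal to anomaly detection itself, which is why the system can serve both security (reject) and soft-error recovery (repair) use cases.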
