NeTS—FIND: Enabling Defense and Deterrence through Private Attribution

Alex C. Snoeren, Yoshi Kohno†, Stefan Savage, and Amin Vahdat
Department of Computer Science and Engineering
University of California, San Diego
La Jolla, CA 92093-0404

†Department of Computer Science and Engineering
University of Washington
Seattle, WA 98195-2350

October 2007–September 2008
1 Project description
The Internet’s any-to-any, best-effort communication model is widely heralded as one of the main reasons for its success. The absence of strong architectural restrictions has allowed Internet protocols to support a tremendous variety of applications on the same network substrate. Yet these same “architectural freedoms” have also emerged as ripe vulnerabilities for adversaries seeking to exploit the network to their own ends. For example, best-effort delivery and global addressing—both bulwarks of the modern Internet—are also the features that enable distributed denial-of-service (DDoS) attacks to overwhelm remote sites on demand. Indeed, today’s Internet infrastructure is extremely vulnerable to motivated and well-equipped attackers. A testament to this reality is the plethora of readily available tools, from covertly exchanged exploit programs to publicly released vulnerability assessment software, for degrading performance or even disabling vital network services. The consequences of these attacks are serious and, increasingly, financially disastrous. It is thus clear that any future network architecture must learn from these lessons and bring network security issues to the fore.

However, while most research in network security focuses largely on defenses—mechanisms that impede the activities of an adversary—practical security, paraphrasing Butler Lampson, requires a balance between defense and deterrence. While defenses may block an adversary’s current attacks, without a meaningful risk of being caught the adversary is free to continue and improve their attacks with impunity. Creating such a deterrent is usually predicated on an effective means of attribution—tying an individual to an action. In the physical world this is achieved through physical forensic evidence—DNA, fingerprints, writing samples, and so on—but attribution can be uniquely challenging in the digital domain.

In particular, the anonymous nature of the IP protocol makes it difficult to accurately identify the true origin of an IP datagram if the source wishes to conceal it. The network routing infrastructure is stateless and based largely on destination addresses; no entity in an IP network is officially responsible for ensuring the source address is correct. Many routers employ ingress filtering to limit the source addresses of IP datagrams from a stub network to addresses belonging to that network, but not all routers have the resources necessary to examine the source address of each incoming packet, and ingress filtering provides no protection on transit networks. Furthermore, spoofed source addresses are legitimately used by network address translators (NATs), Mobile IP, and various unidirectional link technologies such as hybrid satellite architectures. A number of traceback systems have been proposed (including two by the PIs [64, 69]) to address this issue, but all suffer from significant deployment obstacles, typically requiring timely and active cooperation between victims and a variety of network service providers with whom they may have no relationship. Finally, even if these systems were perfect in stopping address spoofing, an IP address fundamentally represents only a topological location in the network, not a physical origin. While identifying a network location may be sufficient to enable operational defenses such as packet filtering, it does little to establish the meaningful forensic evidence required for deterrence.
Indeed, by the time an attack is detected and help is requested from a network service provider, the attacker may have long since departed. Thus, through spoofing, the use of wireless hot-spots, DHCP, tunneling, address-prefix hijacking, or any number of other methods, an attacker on today’s Internet can transmit offending IP packets with virtually no risk of being identified.

We argue that the next generation Internet must provide first-class support for accurate, robust, and tamper-proof packet attribution, while—critically—preserving appropriate levels of host and user privacy. Balancing privacy, identity and accountability requires a fundamental redesign of the Internet architecture: in particular, we propose that every packet be self-identifying. The contents of every packet, including this identifying information, are independently verifiable and non-repudiatable; every router or host is able to verify that any given packet is both attributable and unmodified. The confidentiality of potentially private information, however, including details concerning the packet’s origin or route through the network, is preserved unless appropriate law enforcement agencies are engaged. With private, per-packet attribution as a basic primitive, the next generation Internet would have the level of accountability needed to defend against denial-of-service attacks, botnets, and other forms of malware. The threat of accurate forensic analysis and legally enforceable liability provides a level of deterrence unimaginable in today’s Internet. Furthermore, the inherent immutability of attributable packets significantly enhances the power and scope of the Internet defense arsenal: it becomes straightforward to cordon off portions of the network, blocking unwelcome traffic from specified hosts and regions.
1.1 Objectives and significance
The goal of our work is to design and evaluate a key fundamental component of a secure next-generation Internet architecture: privacy-preserving, per-packet attribution. We propose a mechanism that not only enables non-repudiatable traceback and attack mitigation, by empowering individual network elements to ensure traffic is authentic, but also preserves sender privacy through the use of shared-secret key escrow. Our work will combine new cryptographic developments with experience gained through the PIs’ previous work on Internet traceback, secure routing, and attack mitigation technologies. We intend to demonstrate the utility of per-packet attribution in both deterrence and defense. Our defensive strategy is based on the concept of controlled connectivity, both in terms of the sources and destinations involved and the content of the traffic itself. We summarize the individual components of our proposed security architecture below:

• Privacy-preserving per-packet attribution: We will develop a packet attribution mechanism based on group signatures that allows any network element to verify that a packet was sent by a member of a given group. Importantly, however, actually attributing the packet to a particular member requires the participation of a set of trusted authorities, thereby ensuring the privacy of individual senders. One challenge with any attribution mechanism is determining the true source: while traffic may have originated at any given host, that host may simply be relaying traffic on behalf of another. In particular, overlay networks exhibit precisely this behavior. We will leverage techniques developed as part of the Platypus [57] secure source-routing system to enable native overlay support. Among its other advantages, in the context of this work Platypus allows us to address the “stepping stone” problem of properly tracking attributions in an overlay network.

• Content-based privacy assurance: In addition to preventing attacks from outside a network, a comprehensive security architecture should also prevent private information from leaking out of a network. We will explore content-based inverse firewalls that inspect the content of traffic leaving a secured network, ensuring that sensitive information is kept within an enterprise.

Taken together, we believe these components represent a substantial step towards a comprehensive security architecture for the next generation Internet. While this proposal considers a fundamental redesign of the Internet architecture, its requirements are informed by the PIs’ deep knowledge of today’s security threats through their involvement with the NSF-funded Center for Internet Epidemiology and Defenses. Another key collaborator is Tadayoshi Kohno, who recently received his Ph.D. from UCSD and is now on the faculty of the University of Washington. Kohno has a broad research record in practical cryptography and has also pioneered opportunistic mechanisms for identifying the physical origin of machines [42]. He has been heavily involved in the design of our packet attribution approach, and we believe his ability to translate the formal world of cryptographic primitives into the practical engineering of real-world systems will be invaluable.
2 General plan of research
As Peter Steiner’s famous New Yorker cartoon observes, “On the Internet, nobody knows you’re a dog.” Indeed, a functional anonymity is implicit in the Internet’s architecture, since the lowest-level identifiers—network addresses—are inherently virtual and insecure. An IP address specifies only a topological location—not a physical machine—and is easily forged on demand. Thus, it can be extremely challenging to attribute an on-line action to a particular physical origin (let alone to a particular individual). It is our position that any future Internet architecture must move past this limitation and provide a post-hoc means to attribute the physical origin of any individual packet, message or flow. In particular, we argue that network traffic should be self-identifying: each packet is tagged with a unique non-forgeable label identifying the physical machine that sent it. While such attribution may not definitively identify the individual originating a packet, it is the critical building block for subsequent forensic analysis, investigation and correlation; it provides a beachhead onto the physical scene of the crime.

At the same time, there is a natural tension between this need for attribution and users’ desire for (or legal rights to) privacy. Solutions that do not balance these interests have faced critical challenges to deployment. To wit, in January of 1999, Intel Corporation announced that new generations of its popular Pentium microprocessors would include a new feature: the Processor Serial Number (PSN). The PSN was a per-processor unique identifier that was intended as a building block for future security applications. However, even though this feature was completely passive, civil libertarians and consumer rights groups quickly identified potential risks to privacy stemming from a readily available globally unique identifier. Among the specific concerns raised in complaints filed with the Federal Trade Commission were the general expectation of anonymity that Internet users hold, the chilling effect of privacy loss on consumers’ use of the Internet, the potential for a global identifier to be used to cross-reference personal information, and the potential for identity theft by forging the PSN of others [53]. In April 2000, Intel abandoned plans to include the PSN in future versions of its microprocessors.

Together, these observations drive the following informal requirements for a future network architecture:

• Post-hoc Authentication. A properly empowered and authorized agency (a function of policy which we discuss later) should be able to examine a packet label – even months after the packet was sent – and unambiguously determine the physical machine that generated it. This requirement also implies that labels be non-forgeable (and hence non-replayable).

• Privacy. Packet labels must be non-identifying to a normal observer. Thus, the encoded identifying information in these labels must be opaque, in a strong sense, to an unprivileged observer. Moreover, the labels must not themselves serve as an identifier (even an opaque one), so different packets from the same source must carry distinct labels. Overall, a user should have at least the same expectation of anonymity that they have in today’s Internet, excepting authorized investigations.

On its face, a potential solution to this problem is to digitally sign each packet using a per-source key that is in turn escrowed with a trusted third party. If a single third party cannot be trusted (which seems likely), then the scheme may accommodate multiple third parties responsible for different sets of machines, and/or a secret-sharing approach in which multiple third parties must collaborate to generate the keying material needed to validate the origin of a signature (e.g., the ACLU and the DOJ must both agree that an investigation is warranted). However, there is a critical vulnerability in this approach: since a normal observer cannot extract any information from a packet label, nothing prevents an adversary from incorrectly labeling their packets (i.e., with random labels), rendering any attempt at post-hoc authentication useless. Thus, to be practical, our attribution architecture has an additional requirement:

• Attributability. Any observer on the network is empowered to validate a packet label – that is, to prove that a packet could be attributed if necessary, but without revealing any information about the actual physical originator.

While this group of requirements appears quite challenging to satisfy, surprisingly there is a well-known cryptographic tool – the group signature – that neatly cuts this particular Gordian knot. Group signatures were first introduced by Chaum and van Heyst [28], with security properties fully formalized in papers by Bellare, Micciancio, and Warinschi [13] and Bellare, Shi, and Zhang [14].
We briefly describe their operation below, using the model of [13].

Group signature overview. A group signature scheme assumes a privileged entity – the group manager – and a group of n unprivileged members, denoted 1, 2, . . . , n (in this text we assume n is known, but [14] extends the model to groups of dynamic size). The group manager has a secret key gmsk, each group member i ∈ {1, . . . , n} has its own secret signing key gsk[i], and there is a single public signature-verification key gpk. A group signature scheme exports four algorithms: KeyGen, Sign, Verify, and Open. The first is used in the setup process (to create gpk, gmsk, and gsk), while group members use the second algorithm and their secret keys to sign messages; if the i-th member wishes to sign a message m, the signature σ would be Sign(gsk[i], m). Anyone can use the verification algorithm Verify and the group’s public key gpk to check whether a message-signature pair (m, σ) is valid; specifically, Verify(gpk, m, σ) returns a bit, where a 1 denotes a valid message-signature pair. Informally, if a group member i uses its secret key gsk[i] to sign a message m and obtain a signature σ, anyone with access to the signature-verification key gpk can verify that (m, σ) is a valid message-signature pair under the secret key of some group member. Finally, the group manager can use the opening algorithm Open and its secret key gmsk to recover the identity of a member from a signature; i.e., Open(gmsk, m, σ) returns an identity i ∈ {1, . . . , n} or, in the case of an error, the error code ⊥.
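To make the data flow of these four algorithms concrete, the following Python sketch models the KeyGen/Sign/Verify/Open interface. It is emphatically not a secure group signature: for brevity it uses a single shared Ed25519 signing key plus manager-side identity escrow, which sacrifices the unlinkability and full traceability that real constructions [13, 17] provide. All names here are our own, chosen purely for illustration.

```python
# Toy model of the group-signature interface (KeyGen, Sign, Verify, Open).
# WARNING: deliberately insecure stand-in -- all members share one signing
# key, and the escrowed identity token is constant per member (linkable).
# A real scheme [13, 17] provides unlinkability and full traceability.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature


def key_gen(n):
    """Setup: returns (gpk, gmsk, gsk) for a group of n members."""
    shared_sk = Ed25519PrivateKey.generate()   # one key for the whole group
    gpk = shared_sk.public_key()               # public verification key
    gmsk = Fernet.generate_key()               # manager's opening key
    escrow = Fernet(gmsk)
    # gsk[i] = (shared signing key, encrypted identity token for member i)
    gsk = {i: (shared_sk, escrow.encrypt(str(i).encode()))
           for i in range(1, n + 1)}
    return gpk, gmsk, gsk


def sign(gsk_i, m):
    """Member signing: sigma reveals group membership but not identity."""
    shared_sk, token = gsk_i
    return token, shared_sk.sign(token + m)


def verify(gpk, m, sigma):
    """Anyone can check that sigma was produced by *some* group member."""
    token, sig = sigma
    try:
        gpk.verify(sig, token + m)
        return 1
    except InvalidSignature:
        return 0


def open_signature(gmsk, m, sigma):
    """Group manager only: recover the signer's identity (None on error)."""
    token, _ = sigma
    try:
        return int(Fernet(gmsk).decrypt(token))
    except Exception:
        return None  # the error code "bottom"


gpk, gmsk, gsk = key_gen(n=3)
sigma = sign(gsk[2], b"example packet payload")
assert verify(gpk, b"example packet payload", sigma) == 1
assert open_signature(gmsk, b"example packet payload", sigma) == 2
```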
Among the key security properties of group signature schemes are anonymity and traceability. (These are not information-theoretic properties, but are based on computational hardness, typically assuming a polynomial-time bounded adversary.) Informally, a group signature scheme is fully anonymous if a set of colluding members cannot learn information about a signer’s identity i (where the set of colluding members may actually include i), and is fully traceable if a set of colluding members cannot create a valid message-signature pair that the group manager cannot trace back to one of the colluding parties. Together these properties also imply other properties such as unforgeability, exculpability and framing-resistance [13]. For further exploration, the papers [17, 18, 20] capture the state of the art in the design of secure group signature schemes.

Application to traffic attribution. We apply group signatures to our problem as follows. Each machine is a member of some group and is provided with a secret signing key. Exactly how groups are constructed is very much a policy issue, but one pragmatic approach is for each computer manufacturer to define a group across the set of machines it sells. This approach is particularly appealing because it side-steps the key distribution problem: manufacturers now commonly include Trusted Platform Modules (TPMs) that encode unique cryptographic information in each of their machines (e.g., the IBM ThinkPad being used to type this proposal has such a capability). Moreover, a tamper-resistant implementation is valuable to prevent the theft of a machine’s signing key. However, this also implies that the manufacturer will either act as group manager in any investigation (i.e., will execute Open under subpoena) or will escrow its gmsk (or shares thereof) with other third parties.

Given a secret signing key, each machine uses it to sign the packets it sends. This signature covers all non-variant protocol fields and payload data, as well as a random nonce generated on a per-packet basis. The nonce, the name of the group, and the per-packet signature are all included in a special packet header field that is part of the network layer. Any recipient can then examine this header and use Verify to validate that the packet was correctly signed as a member of the associated group (and hence could be authenticated by the group manager). However, we envision verification not as the sole province of the recipient, but as a responsibility of the network as well. Thus, a network provider could block all un-attributable packets from being transited – providing a blanket “protection” for all of its clients. This is similar in motivation to the source-address filtering used to mitigate spoofing in today’s Internet, but would provide a far stronger guarantee [37]. Moreover, this approach does not impose any restriction on the relationship between provider and customer. Thus, it allows considerable flexibility for innovation at the network layer (i.e., it is compatible with open “hotspot” access, mesh networks, overlays, etc.) and does not impose any bi-lateral administrative cost on providers.
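The per-packet mechanics are sketched below, building on the toy interface above. The header layout, field names, and choice of invariant fields are our own illustrative assumptions, not a proposed wire format: a sender covers the invariant fields, payload, and a fresh nonce with the group signature, and a router validates the attribution header before forwarding.

```python
# Sketch: attribution header carried by each packet, and verification as a
# forwarding-time check. Header layout and field names are illustrative only.
import os
from dataclasses import dataclass

@dataclass
class AttributionHeader:
    group: str      # name of the signing group (e.g., a manufacturer)
    nonce: bytes    # fresh per-packet randomness, so labels never repeat
    sigma: tuple    # group signature over invariant fields || payload || nonce

def canonical_bytes(src, dst, proto, payload, nonce):
    """Serialize only the end-to-end invariant fields (no TTL, no checksum)."""
    return b"|".join([src.encode(), dst.encode(), proto.encode(), payload, nonce])

def send_packet(sign, gsk_i, group, src, dst, proto, payload):
    nonce = os.urandom(16)
    sigma = sign(gsk_i, canonical_bytes(src, dst, proto, payload, nonce))
    return AttributionHeader(group, nonce, sigma), payload

def forward_ok(verify, gpk_for_group, hdr, src, dst, proto, payload):
    """A router drops packets whose label does not verify for the named group."""
    gpk = gpk_for_group.get(hdr.group)
    if gpk is None:
        return False  # unknown group: un-attributable, drop
    msg = canonical_bytes(src, dst, proto, payload, hdr.nonce)
    return verify(gpk, msg, hdr.sigma) == 1

# Usage (with key_gen/sign/verify from the previous sketch):
#   gpk, gmsk, gsk = key_gen(n=3)
#   hdr, payload = send_packet(sign, gsk[1], "AcmeCorp", "10.0.0.1",
#                              "192.0.2.7", "tcp", b"hello")
#   assert forward_ok(verify, {"AcmeCorp": gpk}, hdr, "10.0.0.1",
#                     "192.0.2.7", "tcp", payload)
```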
That said, this approach is not without challenges. We outline several of them below:

• Packet transformation. A number of technologies modify packet content in flight between sender and receiver. Perhaps the best known of these is Network Address Translation (NAT), which has become a mainstay of the modern Internet. In general, these technologies are problematic because changing the content implicitly invalidates the associated packet signature. Moreover, it is not possible to “fix” the signature in place (as is done with the Internet checksum to account for decrements of the Time-To-Live (TTL) field) due to the strong traceability property of group signatures. Thus, our architecture requires strong end-to-end content invariance and therefore places requirements on the design of other network-layer services. For example, while today’s implicit NAT systems could not be supported, first-class address translation services such as Cheriton’s TRIAD [30] could co-exist peacefully.

• Overhead. Our proposal to use group signatures on a per-packet basis is highly aggressive. Group signatures are public-key systems and thus carry all the traditional overheads of such systems. First among these, group signatures are far longer than traditional packet labels; even the short group signatures of Boneh et al. comprise almost 200 bytes [17]. In high-bandwidth applications with large packet sizes this space overhead may be inconsequential, but in lower-bandwidth wireless environments it may be significant. A second issue is that the time overhead of current schemes can be quite poor, frequently requiring one or more modular exponentiations per signature.
Thus, a key research goal of this work is to reduce or mitigate these overheads to make practical implementation feasible. In addition to making real improvements to the underlying algorithms, the PIs have significant engineering experience in making balanced tradeoffs between algorithmic correctness and performance in real implementations [65, 69, 64]. For example, a system might use a chained MAC to bind the packets in a flow together and then use a single group signature to identify the entire flow (a sketch of this optimization appears at the end of this section). Alternatively, rather than verifying all packets, the network might randomly sample packets for verification until a verification fails. Finally, a system might combine these approaches with a challenge-response mechanism in which sources are asked to prove that some of their packets are attributable. For these and other such optimizations, it is our intention to quantify both their performance impact and the degree to which they weaken the security guarantees made by our idealized approach.

• Stepping stones. While packet signatures provide a way to attribute the machine transmitting a packet, adversaries can still evade detection by laundering their attacks through a compromised host – a so-called “stepping stone.” The compromised host will thus be directly implicated in the attack, while the attacker’s machine will not. This problem is fundamental because I/O causality is not a property tracked by software. The best current solutions infer the presence of causality by correlating traffic patterns moving in and out of a host [87]. While we do not innovate in this inference, the post-hoc attribution property of our architecture allows a host or network to log packet signatures in a flow and later use them to “connect the dots” once a traffic correlation has been inferred.

Finally, since ours is a fundamental architectural exploration, it opens up significant degrees of freedom in how it is applied. To wit, the policy implications of a robust packet attribution mechanism are many and varied. First, there are significant trust issues tied to who manages a group. Since the owner of a group can violate its members’ privacy, their trustworthiness must be beyond reproach; group managers may therefore need to be constructed to provide natural checks and balances against abuse. At the same time, it is not at all clear who should be able to authorize the “opening” of a packet. The Internet is an international entity without any overarching controlling legal authority. What then should IBM do when served with a warrant to open a packet? Whose laws should apply: those of the country where the requester resides, those where IBM resides, those where the packet was found, or those belonging to the host country (or nationality) of the owner of the machine that sent the packet? There are compelling cases for each version. Moreover, taken together, these two issues raise the specter of policy conflicts. Will U.S. government networks be willing to accept packets signed by groups in Iran? Will they be willing to buy ThinkPad laptops that are members of the Lenovo group in China? Further, our architecture allows the possibility of a set of labels for a given packet. For example, a packet might be signed with several groups – any or all of which might be necessary for admission into a given network. Effectively, each label is not only a capability to attribute a packet but also a declaration of jurisdiction. Ad-hoc groups might form that defy traditional legal structures.
For instance, certain networks might insist that packets carry membership in a group that has weaker standards for authorizing an Open operation. Similarly, users might form their own groups with stronger privacy restrictions and use their collective power to pressure networks to accept them. Finally, while we have discussed packet labels as being written only by the sender, this is not a strong requirement. Indeed, a case can be made for a more complex structure in which some labels are also written by the source network. A network (or host) could then express reception policies such as “accept packets whose source labels have acceptable legal standards for opening AND are attached to networks in an actionable jurisdiction.” At this time the PIs do not take a position on the social value of these policies, but we believe that understanding them is critical to ultimately designing a mechanism that is both flexible and broadly acceptable.
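To illustrate the flow-signature optimization mentioned above, the following sketch amortizes one group signature over many packets: the first packet carries a group signature committing to a fresh MAC key, and each subsequent packet is bound to its predecessor by an HMAC under keys evolved along a hash chain. This is one plausible realization under our own assumptions (the key derivation, tag placement, and names are illustrative, not a finalized design); quantifying how such chaining weakens the per-packet guarantee is precisely the research question.

```python
# Sketch: amortizing one group signature across a flow with a chained MAC.
# The group signature covers only the flow's first packet plus a commitment
# to k0; later packets are linked by HMACs under hash-chained keys. Note that
# revealing k0 to a verifier lets it forge later tags -- a real design would
# restrict who learns k0, one of the security tradeoffs we intend to measure.
import hashlib, hmac, os

def flow_sign(sign, gsk_i, packets):
    """Sender side: one group signature for the flow, plus per-packet tags."""
    k = os.urandom(32)                              # k0, the chain root
    head = packets[0] + hashlib.sha256(k).digest()  # commit to k0
    sigma = sign(gsk_i, head)                       # single group signature
    tags, key = [], k
    for pkt in packets:
        tags.append(hmac.new(key, pkt, hashlib.sha256).digest())
        key = hashlib.sha256(key + pkt).digest()    # evolve the chain key
    return sigma, k, tags

def flow_verify(verify, gpk, packets, sigma, k, tags):
    """Verifier: one signature check, then cheap symmetric checks per packet."""
    head = packets[0] + hashlib.sha256(k).digest()
    if verify(gpk, head, sigma) != 1:
        return False
    key = k
    for pkt, tag in zip(packets, tags):
        if not hmac.compare_digest(tag, hmac.new(key, pkt, hashlib.sha256).digest()):
            return False
        key = hashlib.sha256(key + pkt).digest()
    return True
```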
3 Data confinement
Moving up to the application level, it is becoming increasingly important to control the flow of information across administrative boundaries. For example, consider a government network that stores both public and private information. The public information needs to be shared using standard protocols such as HTTP. However, network administrators may wish to ensure that private information never leaves some network boundary. For instance, once an HTTP server is set up to distribute the public information, it becomes relatively easy to advertently or inadvertently place private information where it can leave the network.

We aim to devise a scheme to prevent unauthorized data from leaving a network boundary. Here we assume the presence of a service that determines whether a particular data object (typically a file) is safe to distribute outside the internal network. As part of this research, we will investigate techniques for making such a determination automatically; in the simplest realization, however, a human being makes this determination on a per-object basis. If the service determines an object to be safe, it loads one or more firewalls at the boundary of the sensitive network with sufficient state to allow the approved object to pass through. The underlying network topology must be deployed in such a fashion that all packets entering and leaving the network pass through these firewalls. Standard firewalls typically attempt to prevent unauthorized or dangerous packets from entering the network; the firewalls in our architecture additionally attempt to ensure that packets containing unauthorized or sensitive content do not leave the network, while still allowing public information to pass through. Important research questions include the state required at the firewalls to disambiguate between public and private objects, how to reconstruct higher-level protocol communication (e.g., HTTP) from individual packets, whether such inspection can take place at line speed on high-speed networks, and the appropriate policy for terminating connections once the firewall is unable to verify a particular communication.

As an initial step, we intend to calculate rolling checksums of public information in a manner similar to rsync. These checksums can then be computed on a rolling basis across packet boundaries to ensure that outbound bytes match the profile of known good objects (a sketch appears at the end of this section). One of the primary challenges lies in separating protocol “chatter” from the actual object transfer (for example, HTTP request and response headers). We plan to employ protocol parsers with appropriate wildcarding to distinguish between (approved) protocol exchange and the objects embedded in the transfers. Of course, there are a number of potential attacks against such a system: sensitive data may be encoded and transmitted through simple “dictionary” exchanges, timing attacks in the protocol exchange, and so on. One goal of this work is to identify these attacks and prevent them to the extent possible. For instance, we will identify known good communication patterns for particular protocols to prevent encoding attacks from taking place. Similarly, we will recommend small modifications to protocols such as HTTP that might make them more robust against certain of these attacks. Overall, while our proposed techniques will not prevent every unauthorized object from exiting a sensitive network boundary, we believe they can make networks significantly more robust while still enabling legitimate communication and operating with standard protocols.
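As a concrete starting point, the sketch below mirrors rsync’s two-level scheme: a cheap Adler-32-style rolling sum slides across the outbound byte stream one byte at a time, and only windows whose weak sum matches a pre-approved block are confirmed with a strong hash. The block size, the specific weak checksum, and all names here are our illustrative choices, not a committed design.

```python
# Sketch: rsync-style rolling checksum screen for an "inverse firewall".
# Approved public objects are pre-indexed; the outbound stream is scanned
# with an Adler-32-style weak rolling sum, confirmed by a strong hash.
import hashlib

BLOCK = 1024  # illustrative block size; tails shorter than BLOCK not indexed
MOD = 65521   # modulus used by Adler-32

def weak_sum(data):
    a = sum(data) % MOD
    b = sum((len(data) - i) * x for i, x in enumerate(data)) % MOD
    return (b << 16) | a

def roll(a, b, out_byte, in_byte, blocklen):
    """Slide the window one byte in O(1): drop out_byte, add in_byte."""
    a = (a - out_byte + in_byte) % MOD
    b = (b - blocklen * out_byte + a) % MOD
    return a, b

def index_object(data):
    """Firewall state for one approved object: weak sum -> strong hashes."""
    table = {}
    for off in range(0, len(data) - BLOCK + 1, BLOCK):
        block = data[off:off + BLOCK]
        table.setdefault(weak_sum(block), set()).add(
            hashlib.sha256(block).hexdigest())
    return table

def scan_stream(stream, table):
    """Yield offsets where an outbound window matches an approved block."""
    if len(stream) < BLOCK:
        return
    w = weak_sum(stream[:BLOCK])
    a, b = w & 0xFFFF, w >> 16
    for off in range(len(stream) - BLOCK + 1):
        strong = table.get((b << 16) | a)
        if strong and hashlib.sha256(stream[off:off + BLOCK]).hexdigest() in strong:
            yield off
        if off + BLOCK < len(stream):
            a, b = roll(a, b, stream[off], stream[off + BLOCK], BLOCK)
```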
4 Related work
Early, high-profile distributed denial-of-service attacks, mounted largely with spoofed source addresses, spurred the development of a number of traceback techniques that enable operators to determine the true source of attack traffic [78, 63, 69]. The availability of these tools, along with increased vigilance and deployment of reverse-path filtering, has dramatically decreased the power of such attacks. Unfortunately, spoofed flooding attacks were merely the first salvo in an arms race between motivated cybercriminals and network operators. In an effort to simultaneously obscure their identity and increase the number of potential attack sources available to them, attackers now recruit third-party end hosts to construct large-scale botnets, reported to number in the hundreds of thousands of hosts. The key aspect of these botnet-powered attacks is the use of third-party computing resources. Hence, the foremost security concern of today’s network operators is defending their networks against attacks launched from an ever-increasing set of well-intentioned but poorly secured networks that fall prey to botnet infiltration. Because these networks are so numerous, various researchers have proposed to fundamentally restrict the Internet’s default best-effort, any-to-any service model using a variety of technologies, including capability models [6, 9, 79, 57, 22], proof-of-work schemes [41] and, more recently, a “default off” communication model [10]. The attractiveness of the “default off” model lies in its simplicity: if an Internet host cannot contact you, it can do you no harm (through the network, anyway). The key differentiator between our proposal and previous capability-based schemes is the inability of routers, firewalls, or even destinations to determine a packet’s origin without the consent of either the sender or one or more trusted third parties. We believe this distinction is critical to ensuring privacy in the next generation Internet.

Obviously, the effectiveness of all these schemes—including ours—depends critically on the ability to identify, a priori, hosts that should be allowed to communicate. Unfortunately, prior schemes are either application specific, require manual configuration, or insert intrusive interactive mechanisms such as CAPTCHAs [75, 41]. Of course, separating the wheat from the chaff is not a new problem; similar issues have been studied in the context of developing whitelists and blacklists, spam filtering, and peer-to-peer reputation systems. In comparison, the key distinction of our work is its independence from any particular application or protocol: we explore the potential of generating communities of interest based only upon communication patterns observable inside the network. Community-of-interest-based filtering is complementary to most other mitigation techniques, including those described above and generic techniques such as traffic scrubbing. Finally, while we focus on monitoring attack traffic inside a network backbone, a number of alternatives exist. In principle, end users can share information regarding malware: for example, [5] envisions the sharing of past bad behavior in a distributed database, while DShield [1] is an open service allowing users to share their firewall logs. In practice, however, these approaches are difficult to realize because of authentication and trust concerns, many of which are alleviated by our proposal.
References

[1] Distributed intrusion detection system. www.dshield.org.
[2] A. Aggarwal, S. Savage, and T. Anderson. Understanding the performance of TCP pacing. In Proceedings of the IEEE Infocom Conference, pages 1157–1165, Tel-Aviv, Israel, Mar. 2000.
[3] W. Aiello, C. Kalmanek, P. McDaniel, S. Sen, O. Spatscheck, and J. Van der Merwe. Analysis of communities of interest in data networks. In Passive and Active Measurement Workshop, Mar. 2005.
[4] The Honeynet Project and Research Alliance. Know your enemy: Tracking botnets. http://www.honeynet.org/papers/bots.
[5] M. Allman, E. Blanton, and V. Paxson. An architecture for developing behavioral history. In SRUTI Workshop, July 2005.
[6] D. G. Andersen. Mayday: Distributed filtering for Internet services. In Proceedings of the 4th USENIX Symposium on Internet Technologies and Systems (USITS), pages 31–42, Seattle, WA, Mar. 2003.
[7] D. G. Andersen, H. Balakrishnan, M. F. Kaashoek, and R. Morris. Resilient overlay networks. In Proceedings of the 18th ACM Symposium on Operating Systems Principles (SOSP), Oct. 2001.
[8] D. G. Andersen, A. C. Snoeren, and H. Balakrishnan. Best-path vs. multi-path overlay routing. In Proceedings of the USENIX/ACM Internet Measurement Conference, pages 91–100, Miami, FL, Oct. 2003.
[9] T. Anderson, T. Roscoe, and D. Wetherall. Preventing Internet denial-of-service with capabilities. In Proceedings of the 2nd ACM Workshop on Hot Topics in Networks (HotNets-II), Cambridge, MA, Nov. 2003.
[10] H. Ballani, Y. Chawathe, S. Ratnasamy, T. Roscoe, and S. Shenker. Off by default! In Proceedings of the Fourth Workshop on Hot Topics in Networks (HotNets-IV), Nov. 2005.
[11] J. Bellardo and S. Savage. Measuring packet reordering. In Proceedings of the ACM/USENIX Internet Measurement Workshop (IMW), pages 97–105, Marseille, France, Nov. 2002.
[12] J. Bellardo and S. Savage. 802.11 denial-of-service attacks: Real vulnerabilities and practical solutions. In Proceedings of the USENIX Security Symposium, pages 15–28, Washington, D.C., Aug. 2003.
[13] M. Bellare, D. Micciancio, and B. Warinschi. Foundations of group signatures: Formal definitions, simplified requirements, and a construction based on general assumptions. In Advances in Cryptology – EUROCRYPT 2003, pages 614–629, 2003.
[14] M. Bellare, H. Shi, and C. Zhang. Foundations of group signatures: The case of dynamic groups. In Topics in Cryptology – CT-RSA 2005, 2005.
[15] R. Bhagwan, S. Savage, and G. M. Voelker. Understanding availability. In Proceedings of the International Workshop on Peer-to-Peer Systems (IPTPS), pages 256–267, Berkeley, CA, Feb. 2003.
[16] R. Bhagwan, K. Tati, Y.-C. Cheng, S. Savage, and G. M. Voelker. TotalRecall: System support for automated availability management. In Proceedings of the 1st ACM/USENIX Symposium on Networked Systems Design and Implementation (NSDI), pages 337–350, San Francisco, CA, Mar. 2004.
[17] D. Boneh, X. Boyen, and H. Shacham. Short group signatures. In Advances in Cryptology – CRYPTO 2004, pages 41–55, 2004.
[18] X. Boyen and B. Waters. Compact group signatures without random oracles. Cryptology ePrint Archive, Report 2005/381, 2005. http://eprint.iacr.org/.
[19] R. Braynard, D. Kostić, A. Rodriguez, J. Chase, and A. Vahdat. Opus: An overlay peer utility service. In Proceedings of the 5th International Conference on Open Architectures and Network Programming (OPENARCH), June 2002.
[20] J. Camenisch and A. Lysyanskaya. Signature schemes and anonymous credentials from bilinear maps. In Advances in Cryptology – CRYPTO 2004, pages 56–72, 2004.
[21] N. Cardwell, S. Savage, and T. Anderson. Modeling TCP latency. In Proceedings of the IEEE Infocom Conference, pages 1742–1751, Tel-Aviv, Israel, Mar. 2000.
[22] M. Casado, T. Garfinkel, A. Akella, D. Boneh, N. McKeown, and S. Shenker. SANE: A protection architecture for enterprise networks. In Proceedings of the 3rd ACM/USENIX Symposium on Networked Systems Design and Implementation (NSDI), San Jose, CA, May 2006.
[23] S. Chandra, C. Ellis, and A. Vahdat. Multimedia web services for mobile clients using quality aware transcoding. In Proceedings of the Second ACM/IEEE International Conference on Wireless and Mobile Multimedia, Aug. 1999.
[24] S. Chandra, C. Ellis, and A. Vahdat. Application-level differentiated multimedia web services using quality aware transcoding. IEEE Journal on Selected Areas in Communications (JSAC), 18(12), Dec. 2000.
[25] S. Chandra, C. S. Ellis, and A. Vahdat. Differentiated multimedia web services using quality aware transcoding. In Proceedings of the IEEE Infocom Conference, Mar. 2000.
[26] S. Chandra, C. S. Ellis, and A. Vahdat. Managing the storage and battery resources in an image capture device (digital camera) using dynamic transcoding. In Third ACM International Workshop on Wireless and Mobile Multimedia (WoWMoM'00), Aug. 2000.
[27] S. Chandra, A. Gehani, C. S. Ellis, and A. Vahdat. Transcoding characteristics of web images. In Multimedia Computing and Networking 2001 (MMCN'01), Jan. 2001.
[28] D. Chaum and E. van Heyst. Group signatures. In Advances in Cryptology – EUROCRYPT '91, pages 257–265, 1991.
[29] Y.-C. Cheng, U. Hoelzle, N. Cardwell, S. Savage, and G. M. Voelker. Monkey see, monkey do: A tool for TCP tracing and replaying. In Proceedings of the USENIX Annual Technical Conference, pages 87–98, Boston, MA, June 2004.
[30] D. R. Cheriton and M. Gritter. TRIAD: A scalable deployable NAT-based Internet architecture. Technical report, Stanford University Department of Computer Science, Jan. 2000.
[31] C. Cranor, T. Johnson, and O. Spatscheck. Gigascope: A stream database for network applications. In Proceedings of ACM SIGMOD, June 2003.
[32] B. N. Chun, Y. Fu, and A. Vahdat. Bootstrapping a distributed computational economy with peer-to-peer bartering. In Proceedings of the First Workshop on Economics of Peer-to-Peer Systems, Berkeley, CA, June 2003.
[33] M. El-Gendy, A. Bose, and K. Shin. Evolution of the Internet QoS and support for soft real-time applications. Proceedings of the IEEE, July 2003.
[34] D. Ely, S. Savage, and D. Wetherall. Alpine: A user-level infrastructure for network protocol development. In Proceedings of the 3rd USENIX Symposium on Internet Technologies and Systems (USITS), pages 171–183, San Francisco, CA, Mar. 2001.
[35] D. Ely, N. Spring, D. Wetherall, S. Savage, and T. Anderson. Robust congestion signaling. In Proceedings of the 9th International Conference on Network Protocols (ICNP), pages 332–341, Riverside, CA, Nov. 2001.
[36] C. Estan, S. Savage, and G. Varghese. Automatically inferring patterns of resource consumption in network traffic. In Proceedings of the ACM SIGCOMM Conference, pages 137–148, Karlsruhe, Germany, Aug. 2003.
[37] P. Ferguson and D. Senie. Network ingress filtering: Defeating denial of service attacks which employ IP source address spoofing. RFC 2267, Jan. 1998.
[38] Y. Fu, J. S. Chase, B. Chun, S. Schwab, and A. Vahdat. SHARP: An architecture for secure resource peering. In Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP), Bolton Landing, NY, Oct. 2003.
[39] J. Ioannidis and S. M. Bellovin. Implementing pushback: Router-based defense against DDoS attacks. In Network and Distributed System Security Symposium (NDSS), Feb. 2002.
[40] F. Junqueira, R. Bhagwan, K. Marzullo, S. Savage, and G. M. Voelker. The Phoenix recovery system: Rebuilding from the ashes of an Internet catastrophe. In Proceedings of the 9th USENIX Workshop on Hot Topics in Operating Systems (HotOS-IX), pages 73–78, Lihue, HI, May 2003.
[41] S. Kandula, D. Katabi, M. Jacob, and A. Berger. Botz-4-Sale: Surviving organized DDoS attacks that mimic flash crowds. In Proceedings of the 2nd ACM/USENIX Symposium on Networked Systems Design and Implementation (NSDI), Boston, MA, May 2005.
[42] T. Kohno, A. Broido, and kc claffy. Remote physical device fingerprinting. In Proceedings of the IEEE Symposium on Security and Privacy, Oakland, CA, May 2005. Award paper.
[43] D. Kostić, A. Rodriguez, J. Albrecht, A. Bhirud, and A. Vahdat. Using random subsets to build scalable network services. In Proceedings of the 4th USENIX Symposium on Internet Technologies and Systems (USITS), Seattle, WA, Mar. 2003.
[44] D. Kostić, A. Rodriguez, J. Albrecht, and A. Vahdat. Bullet: High bandwidth data dissemination using an overlay mesh. In Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP), Bolton Landing, NY, Oct. 2003.
[45] J. Ma, G. M. Voelker, and S. Savage. Self-stopping worms. In Proceedings of the ACM Workshop on Rapid Malcode (WORM), pages 12–21, Washington, D.C., Nov. 2005.
[46] P. Marques, N. Sheth, R. Raszuk, B. Greene, J. Mauch, and D. McPherson. Dissemination of flow specification rules. IETF Internet-Draft, Aug. 2005.
[47] P. McDaniel, S. Sen, O. Spatscheck, J. Van der Merwe, W. Aiello, and C. Kalmanek. Enterprise security: A community of interest based approach. In Network and Distributed System Security Symposium (NDSS), Jan. 2006.
[48] J. Mirkovic, S. Dietrich, D. Dittrich, and P. Reiher. Internet Denial of Service: Attack and Defense Mechanisms. Prentice Hall, 2005.
[49] A. Mizrak, Y.-C. Cheng, K. Marzullo, and S. Savage. Fatih: Detecting and isolating malicious routers. In Proceedings of the IEEE Conference on Dependable Systems and Networks (DSN), pages 538–547, Yokohama, Japan, June 2005. Award paper.
[50] D. Moore, V. Paxson, S. Savage, C. Shannon, S. Staniford, and N. Weaver. Inside the Slammer worm. IEEE Security and Privacy, 1(4):33–39, July 2003.
[51] D. Moore, C. Shannon, G. M. Voelker, and S. Savage. Internet quarantine: Requirements for containing self-propagating code. In Proceedings of the IEEE Infocom Conference, pages 1901–1910, San Francisco, CA, Apr. 2003.
[52] D. Moore, G. M. Voelker, and S. Savage. Inferring Internet denial of service activity. In Proceedings of the USENIX Security Symposium, pages 9–22, Washington, D.C., Aug. 2001. Best paper.
[53] D. K. Mulligan, K. McEldowney, J. M. Garry, and B. Givens. Supplement to Intel complaint: Potential harm to individuals. Center for Democracy and Technology filing with the Federal Trade Commission, Apr. 1999.
[54] Juniper Networks. Policy framework configuration guide. www.juniper.net.
[55] C. Partridge, A. C. Snoeren, W. T. Strayer, B. Schwartz, M. Condell, and I. Castiñeyra. FIRE: Flexible intra-AS routing environment. In Proceedings of the ACM SIGCOMM Conference, pages 191–203, Stockholm, Sweden, Aug. 2000.
[56] C. Partridge, A. C. Snoeren, W. T. Strayer, B. Schwartz, M. Condell, and I. Castiñeyra. FIRE: Flexible intra-AS routing environment. IEEE Journal on Selected Areas in Communications, 19(3), Mar. 2001.
[57] B. Raghavan and A. C. Snoeren. A system for authenticated policy-compliant routing. In Proceedings of the ACM SIGCOMM Conference, pages 167–178, Portland, OR, Sept. 2004.
[58] L. A. Sanchez, W. C. Milliken, A. C. Snoeren, F. Tchakountio, C. E. Jones, S. T. Kent, C. Partridge, and W. T. Strayer. Hardware support for a hash-based IP traceback. In Proceedings of the DARPA Information Survivability Conference and Exposition (DISCEX), Anaheim, CA, June 2001.
[59] S. Savage. Sting: A TCP-based network measurement tool. In Proceedings of the 2nd USENIX Symposium on Internet Technologies and Systems (USITS), pages 71–79, Boulder, CO, Oct. 1999. Best student paper.
[60] S. Savage, T. Anderson, A. Aggarwal, D. Becker, N. Cardwell, A. Collins, E. Hoffman, J. Snell, A. Vahdat, G. M. Voelker, and J. Zahorjan. Detour: A case for informed Internet routing and transport. IEEE Micro, 19(1):50–59, Jan. 1999.
[61] S. Savage, N. Cardwell, D. Wetherall, and T. Anderson. TCP congestion control with a misbehaving receiver. ACM Computer Communications Review, 29(5):71–78, Oct. 1999.
[62] S. Savage, A. Collins, E. Hoffman, J. Snell, and T. Anderson. The end-to-end effects of Internet path selection. In Proceedings of the ACM SIGCOMM Conference, pages 289–299, Cambridge, MA, Sept. 1999.
[63] S. Savage, D. Wetherall, A. Karlin, and T. Anderson. Practical network support for IP traceback. In Proceedings of the ACM SIGCOMM Conference, pages 295–306, Stockholm, Sweden, Aug. 2000.
[64] S. Savage, D. Wetherall, A. Karlin, and T. Anderson. Network support for IP traceback. IEEE/ACM Transactions on Networking, 9(3):226–237, June 2001.
[65] S. Singh, C. Estan, G. Varghese, and S. Savage. Automated worm fingerprinting. In Proceedings of the 6th ACM/USENIX Symposium on Operating System Design and Implementation (OSDI), pages 45–60, San Francisco, CA, Dec. 2004.
[66] A. C. Snoeren. Adaptive inverse multiplexing for wide-area wireless networks. In Proceedings of the 4th IEEE Global Internet Symposium (GlobeCom), pages 1665–1672, Rio de Janeiro, Brazil, Dec. 1999.
[67] A. C. Snoeren, K. Conley, and D. K. Gifford. Mesh based content routing using XML. In Proceedings of the 18th ACM Symposium on Operating Systems Principles (SOSP), pages 160–173, Banff, Canada, Oct. 2001.
[68] A. C. Snoeren, C. Partridge, L. A. Sanchez, C. E. Jones, F. Tchakountio, S. T. Kent, and W. T. Strayer. Hash-based IP traceback. In Proceedings of the ACM SIGCOMM Conference, pages 3–14, San Diego, CA, Aug. 2001. Best student paper.
[69] A. C. Snoeren, C. Partridge, L. A. Sanchez, C. E. Jones, F. Tchakountio, B. Schwartz, S. T. Kent, and W. T. Strayer. Single-packet IP traceback. IEEE/ACM Transactions on Networking, 10(6):721–734, Dec. 2002.
[70] A. C. Snoeren and B. Raghavan. Decoupling policy from mechanism in Internet routing. In Proceedings of the 2nd ACM Workshop on Hot Topics in Networks (HotNets-II), pages 81–86, Cambridge, MA, Nov. 2003.
[71] Cisco Systems. Cisco IOS quality of service solutions configuration guide. www.cisco.com.
[72] R. Teixeira, K. Marzullo, S. Savage, and G. M. Voelker. In search of path diversity in ISP networks. In Proceedings of the USENIX/ACM Internet Measurement Conference, pages 313–318, Miami, FL, Oct. 2003.
[73] A. Vahdat, J. S. Chase, R. Braynard, D. Kostić, P. Reynolds, and A. Rodriguez. Self-organizing subsets: From each according to his abilities, to each according to his needs. In Proceedings of the International Workshop on Peer-to-Peer Systems (IPTPS), Cambridge, MA, Mar. 2002.
[74] A. Vahdat, K. Yocum, K. Walsh, P. Mahadevan, D. Kostić, J. Chase, and D. Becker. Scalability and accuracy in a large-scale network emulator. In Proceedings of the 5th ACM/USENIX Symposium on Operating System Design and Implementation (OSDI), Boston, MA, Dec. 2002.
[75] L. von Ahn et al. CAPTCHA: Using hard AI problems for security. In Advances in Cryptology – EUROCRYPT 2003, 2003.
[76] M. Vrable, J. Ma, J. Chen, D. Moore, E. VandeKieft, A. C. Snoeren, G. M. Voelker, and S. Savage. Scalability, fidelity and containment in the Potemkin virtual honeyfarm. In Proceedings of the 20th ACM Symposium on Operating Systems Principles (SOSP), pages 148–162, Brighton, UK, Oct. 2005.
[77] X. Xipeng and L. Ni. Internet QoS: A big picture. IEEE Network, Mar./Apr. 1999.
[78] A. Yaar, A. Perrig, and D. Song. An endhost capability mechanism to mitigate DDoS flooding attacks. In Proceedings of the IEEE Symposium on Security and Privacy, May 2004.
[79] X. Yang, D. Wetherall, and T. Anderson. A DoS-limiting network architecture. In Proceedings of the ACM SIGCOMM Conference, Philadelphia, PA, Aug. 2005.
[80] K. Yocum, E. Eade, J. Degesys, D. Becker, J. Chase, and A. Vahdat. Toward scaling network emulation using topology partitioning. In Proceedings of the 11th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS), Oct. 2003.
[81] H. Yu and A. Vahdat. Building replicated Internet services using TACT: A toolkit for tunable availability and consistency tradeoffs. In Proceedings of the Second International Workshop on Advanced Issues of E-Commerce and Web-based Information Systems, June 2000.
[82] H. Yu and A. Vahdat. Efficient numerical error bounding for replicated network services. In Proceedings of the 26th International Conference on Very Large Databases (VLDB), Sept. 2000.
[83] H. Yu and A. Vahdat. Combining generality and practicality in a conit-based continuous consistency model for wide-area replication. In Proceedings of the 21st IEEE International Conference on Distributed Computing Systems (ICDCS), Apr. 2001.
[84] H. Yu and A. Vahdat. The costs and limits of availability for replicated services. In Proceedings of the 18th ACM Symposium on Operating Systems Principles (SOSP), Oct. 2001.
[85] H. Yu and A. Vahdat. Minimal replication cost for availability. In Proceedings of the ACM Symposium on Principles of Distributed Computing (PODC), July 2002.
[86] H. Yu and A. Vahdat. Consistent and automatic service regeneration. In Proceedings of the 1st ACM/USENIX Symposium on Networked Systems Design and Implementation (NSDI), San Francisco, CA, Mar. 2004.
[87] Y. Zhang and V. Paxson. Detecting stepping stones. In Proceedings of the 9th USENIX Security Symposium, Aug. 2000.