Building and Evaluating P2P Systems using the Kompics Component Framework∗

Cosmin Arad
Royal Institute of Technology (KTH)
[email protected]

Jim Dowling, Seif Haridi
Swedish Institute of Computer Science (SICS)
{jdowling,seif}@sics.se

∗This work was funded by the SELFMAN EU project, contract 34084.

Abstract—We present a framework for building and evaluating P2P systems in simulation, local execution, and distributed deployment. Such uniform system evaluations increase confidence in the obtained results. We briefly introduce the Kompics component model and its P2P framework. We describe the component architecture of a Kompics P2P system and show how to define experiment scenarios for large dynamic systems. The same experiments are conducted in reproducible simulation, in real-time execution on a single machine, and distributed over a local cluster or a wide area network. This demonstration shows the component-oriented design and the evaluation of two P2P systems implemented in Kompics: Chord and Cyclon. We simulate the systems and then we execute them in real time. During real-time execution we monitor the dynamic behavior of the systems and interact with them through their web-based interfaces. We demonstrate how component-oriented design enables seamless switching between alternative protocols.

Keywords—peer-to-peer; evaluation; component framework; design; simulation; experimentation; deployment.

I. INTRODUCTION

Comprehensive evaluation of P2P systems comprises analysis, simulation, and live performance measurements. We present Kompics [1], a model for building reconfigurable distributed systems from event-driven components. Kompics systems can be uniformly evaluated in large-scale reproducible simulation and in distributed deployment, using both the same system code and the same experiment scenarios. Very similar in spirit, but without a hierarchical component model, is the ProtoPeer [2] toolkit for prototyping and evaluating P2P systems. Mace [3] generates distributed systems code from a high-level specification, while Splay [4] allows system specification in a high-level language.

II. KOMPICS AND THE P2P COMPONENT FRAMEWORK

Kompics is a component model targeted at building distributed systems by composing protocols programmed as event-driven components. Kompics components are reactive state machines that are executed concurrently by a set of workers. Components communicate by passing data-carrying typed events through typed bidirectional ports connected by channels. Ports are event-based component interfaces. A port type represents a service or a protocol abstraction. It specifies the types of events sent through the port in each direction.
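To make ports concrete, the following is a minimal sketch in the style of the Kompics Java API (package se.sics.kompics): a port type declaring which event types flow in each direction, and a component providing that port. The Ping/Pong events and the Ponger component are illustrative assumptions, not part of the framework, and exact API names may vary across Kompics versions. Note that in the Java API the provided side of a port is typed Negative, mirroring the +/– notation used in Figure 1.

import se.sics.kompics.*;

// Illustrative events (assumed names, not part of the framework).
class Ping extends Event {}
class Pong extends Event {}

// A port type specifies the event types allowed in each direction.
class PingPort extends PortType {{
    negative(Ping.class); // events received by a provider of this port
    positive(Pong.class); // events sent by a provider of this port
}}

// A component providing the PingPort abstraction: a reactive state
// machine whose handlers are executed when subscribed events arrive.
class Ponger extends ComponentDefinition {
    Negative<PingPort> port = provides(PingPort.class);

    Handler<Ping> handlePing = new Handler<Ping>() {
        public void handle(Ping event) {
            trigger(new Pong(), port); // reply on the same port
        }
    };

    { subscribe(handlePing, port); } // register the handler on the port
}

A component that requires PingPort would instead declare Positive<PingPort> port = requires(PingPort.class) and could then trigger Ping events on it.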

[Figure 1 omitted: component architecture diagrams.]

Figure 1. The left figure shows the architecture of a Chord process. The Chord protocol is implemented by the Chord component using Network, Timer, and FailureDetector abstractions. The Network and Timer abstractions are provided by the MinaNetwork [5] (which handles connection management and message serialization) and JavaTimer components. The ChordMonitorClient periodically inspects the Chord status (CS port) and sends it through the network to the ChordMonitorServer (top right). The ChordWebApplication renders this status on a web page upon request from the JettyWebServer [6] (which provides web browser access). On the right we have the component architectures of the monitoring and bootstrap servers.

A component either provides (+) or requires (–) a port. Components may encapsulate subcomponents. The Kompics runtime supports pluggable component schedulers. The default scheduler is multi-threaded and executes components in parallel on multi-core machines. We use a single-threaded scheduler for reproducible simulation.

We developed a set of utility components and a methodology for building and evaluating P2P systems. Service abstractions for networking and timers can be provided by different component implementations. The framework contains reusable components that provide bootstrap and failure detection services. System-specific components are developed for global system monitoring and web-based interaction. We highlight the elements of the P2P framework in the architecture of our Chord implementation, illustrated in Figure 1.

III. DEFINING AN EXPERIMENT SCENARIO

We designed a Java domain-specific language (DSL) for expressing experiment scenarios for P2P systems. We call a stochastic process a finite random sequence of events with a specified event inter-arrival time distribution. Here is an example scenario composed of three stochastic processes:

StochasticProcess boot = new StochasticProcess() {{
    eventInterArrivalTime(exponential(2000)); // ~2s
    raise(1000, chordJoin, uniform(16));      // 1000 joins
}};
StochasticProcess churn = new StochasticProcess() {{
    eventInterArrivalTime(exponential(500));  // ~500ms
    raise(500, chordJoin, uniform(16));       // 500 joins
    raise(500, chordFail, uniform(16));       // 500 failures
}};
StochasticProcess lookups = new StochasticProcess() {{
    eventInterArrivalTime(normal(50));        // ~50ms
    raise(5000, chordLookup, uniform(16), uniform(14));
}};
boot.start();                                 // start
churn.startAfterTerminationOf(2000, boot);    // sequential
lookups.startAfterStartOf(3000, churn);       // in parallel
terminateAfterTerminationOf(1000, lookups);   // terminate

1000 peers join an identifier space of 0..2^16. The inter-arrival time between two consecutive joins is exponentially distributed with a mean of 2s. A churn process starts 2s after the boot process terminates. Every 500ms on average (exponentially distributed), a new peer joins or an existing peer fails. In parallel with the churn process, 5000 lookups are initiated from peers chosen uniformly around the ring (0..2^16) for keys in the first ring quadrant (0..2^14). The experiment terminates 1s after the lookups are done.

IV. EXPERIMENT PROFILES

We can reuse the same experiment scenario to drive simulation or local real-time execution experiments, as well as remote experiments where the system nodes are distributed over the machines of a cluster (possibly running ModelNet [7]) or a testbed like PlanetLab [8] or Emulab [9]. During simulation and local execution (see Figure 2) we model the network at the message level. In simulation, we execute the same system code built for deployment. Calls for the current system time are trapped and the current simulation time is returned. Simulation enables deterministic replay, debugging, reproducible results, and large-scale experiments without loss of accuracy. We developed an infrastructure for deploying and executing distributed experiments. Experiment scenarios are locally interpreted by a Master component which coordinates a set of remote Slaves. Each Slave resides on a machine available for the experiment and manages a set of system peers.
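To illustrate how one scenario definition drives both modes, the following is a hypothetical launch sketch: the launcher class names mirror those in Figure 2, but the scenario class and the simulate/execute method names are assumptions for illustration, not the framework's documented API.

// Hypothetical sketch: one scenario definition, two execution modes.
SimulationScenario scenario = new ChordScenario(); // defined as in Sec. III
// Deterministic, repeatable run under the single-threaded simulation
// scheduler, with simulated time:
scenario.simulate(ChordSimulationMain.class);
// Or, with no changes to the scenario or the system code, a real-time
// run under the default multi-threaded scheduler:
// scenario.execute(ChordExecutionMain.class);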

[Figure 2 omitted: simulation architecture diagram.]

Figure 2. The simulation architecture, with all peers and the bootstrap and monitor servers within one process. ChordSimulationMain is executed using a single-threaded simulation scheduler for deterministic replay and simulated time advancement. The P2pSimulator is generic: it interprets experiment scenarios and sends system-specific scenario events (e.g., chordJoin, chordLookup) to the ChordSimulator, which manages the ChordPeers (the same as in Figure 1). The P2pSimulator provides a Network abstraction and can be parameterized with a custom network latency and bandwidth model. For real-time local execution we replace the P2pSimulator with a P2pOrchestrator, which interprets the same experiment scenario but in real time. In addition, components are executed by the default multi-threaded scheduler.
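The pluggable latency model mentioned in the caption can be pictured as follows; the NetworkModel interface, its method, and the Address parameters are assumptions sketched for illustration, not the framework's documented API.

// Hypothetical sketch of a pluggable network model for the P2pSimulator.
interface NetworkModel {
    // Latency applied to a message between two peers; a bandwidth model
    // would additionally account for message size and link capacity.
    long latencyMs(Address source, Address destination);
}

// Example: uniformly random latencies between 10 and 200 ms, with a
// fixed seed so simulation runs remain reproducible.
NetworkModel uniformLatency = new NetworkModel() {
    java.util.Random random = new java.util.Random(42);
    public long latencyMs(Address source, Address destination) {
        return 10 + random.nextInt(191);
    }
};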

V. DEMONSTRATION OVERVIEW

This demonstration consists of evaluations of two P2P systems developed in Kompics: Chord [10] and Cyclon [11]. Each system is first evaluated in a reproducible simulation experiment. We reuse the same experiment scenario to execute the systems in real time. We observe the dynamic behavior of the systems through the web interface of the monitoring server, which periodically aggregates the global system state. We also inspect the local state of a few system nodes through their web interfaces, and we interact with Chord by manually issuing lookups from different nodes. We reuse the same scenario definition to drive a distributed experiment where nodes are deployed remotely on cluster machines or on PlanetLab [8], and we repeat some of the previous system interactions. This illustrates the uniform experience of evaluating real systems across simulation, local execution, and distributed deployment. We use a BitTorrent [12] system developed in Kompics, in a simulation experiment, to demonstrate a realistic bandwidth emulation model. Finally, we return to local execution to experiment live with different scenario definitions.

VI. SUMMARY

We briefly introduced the Kompics component model and described the component architecture of the Chord overlay developed using the Kompics P2P framework. We showed how to define experiment scenarios for large and dynamic systems and how the same experiments are conducted in reproducible simulation, in real-time execution on a single machine, and distributed over a local cluster or a wide area network. The source code used for this demonstration, including the Kompics runtime, the P2P framework, experiment scenarios, and the implementations of Chord, Cyclon, and BitTorrent, is available online at http://kompics.sics.se.

REFERENCES

[1] C. Arad, J. Dowling, and S. Haridi, "Developing, simulating, and deploying peer-to-peer systems using the Kompics component model," in COMSWARE '09.
[2] W. Galuba, K. Aberer, Z. Despotovic, and W. Kellerer, "ProtoPeer: a P2P toolkit bridging the gap between simulation and live deployment," in SimuTools, 2009.
[3] C. E. Killian, J. W. Anderson, R. Braud, R. Jhala, and A. M. Vahdat, "Mace: language support for building distributed systems," in PLDI '07.
[4] L. Leonini, E. Rivière, and P. Felber, "SPLAY: distributed systems evaluation made simple (or how to turn ideas into live systems in a breeze)," in NSDI '09.
[5] (2004-2009) Apache MINA. [Online]. Available: http://mina.apache.org/
[6] (1995-2009) Mortbay Jetty. [Online]. Available: http://www.mortbay.org/jetty/
[7] A. Vahdat, K. Yocum, K. Walsh, P. Mahadevan, D. Kostić, J. Chase, and D. Becker, "Scalability and accuracy in a large-scale network emulator," SIGOPS Oper. Syst. Rev., vol. 36, no. SI, 2002.
[8] B. Chun, D. Culler, T. Roscoe, A. Bavier, L. Peterson, M. Wawrzoniak, and M. Bowman, "PlanetLab: an overlay testbed for broad-coverage services," SIGCOMM Comput. Commun. Rev., vol. 33, 2003.
[9] B. White, J. Lepreau, L. Stoller, R. Ricci, S. Guruprasad, M. Newbold, M. Hibler, C. Barb, and A. Joglekar, "An integrated experimental environment for distributed systems and networks," in OSDI '02.
[10] I. Stoica, R. Morris, D. Liben-Nowell, D. Karger, M. F. Kaashoek, F. Dabek, and H. Balakrishnan, "Chord: a scalable peer-to-peer lookup protocol for Internet applications," IEEE/ACM Transactions on Networking, vol. 11, February 2003.
[11] S. Voulgaris, D. Gavidia, and M. van Steen, "CYCLON: inexpensive membership management for unstructured P2P overlays," Journal of Network and Systems Management, vol. 13, no. 2, June 2005.
[12] B. Cohen, "Incentives build robustness in BitTorrent," in Proc. 1st Workshop on Economics of Peer-to-Peer Systems (P2PEcon), 2003.
