JAMES: JAva test-bed ManagEment System

Michele Mastrogiovanni, Alessandro Modesti, Chiara Petrioli
Department of Computer Science, SENSES Laboratory, University of Rome “Sapienza”
Email: [email protected], [email protected], [email protected]
Abstract—Research on wireless sensor networks (WSNs) has recently moved from the design and evaluation of protocol stacks to the real-life implementation and testing of the proposed solutions. It would be desirable to run the implementation of a given protocol stack (or of multiple stacks) over different real-life test-beds and obtain comparable results. This requires standardized interfaces to remotely access and control a federation of test-beds, tools to gather data on protocol performance without affecting network operations, as well as standard ways to gather and analyze the data needed to compute the metrics of interest. This paper describes the design and use of JAMES, a new Test-bed Management System for WSNs that addresses all these challenges. We also discuss how JAMES features ease the experimental evaluation of protocol stacks we have developed. Our experiments show that only a few tens of lines of code are needed to execute tests on a given protocol stack, and that JAMES operations are transparent to the stack behavior.
I. INTRODUCTION

Wireless Sensor Networks (WSNs) have been intensely investigated for the past ten years, and advances have been produced at each layer of their protocol stacks as well as in their general architecture. New communicating nodes have been produced and commercialized (Tmote, Mica, MicaZ, BTnode, ...), custom operating systems have been introduced (e.g., TinyOS [1], Bombilla [2] and Contiki [3]), and development kits have been made available to academic and industrial research, as well as to anybody with an interest in the field. Any university or company today can easily set up a test-bed of wireless sensor nodes with little economic effort to complement education, foster hands-on research and compare new and existing solutions. A protocol developer who builds a network protocol for WSNs can use her/his own test-bed, or select one among those publicly available, and test her/his solution in a chosen scenario. This approach to testing, however, can lead to multiple problems in assessing the effectiveness of a given protocol. The developer must be able to control the test-bed, possibly remotely, and to use the tools required to manage it. Results must be gathered and parsed in a format which differs depending on the tool used and the associated test-bed. If the developer wants to test the software on another test-bed, the code needs to be ported, the SW used to interface with such a test-bed must be learned, and new tools have to be developed to parse and interpret the results. Different ways of measuring metrics make results on different test-beds not comparable. It is typical to have problems in reproducing paper results on
some protocols simply because of the difference between the test-bed used by the protocol authors and the one at hand. To get a fair comparison between two protocols, researchers have to run them on the same test-bed and have to use the same tools to obtain the results. On the other hand, the network community would be extremely interested in being able to run the same protocol on several different test-beds, getting comparable results and assessing the protocol effectiveness in a wide set of scenarios. Our work stems from these observations and leads to the implementation of JAMES (JAva test-bed ManagEment System). JAMES is a complete system to remotely manage test-beds and to analyze and share results. It basically provides: 1) a tiny software extension that lets protocols running on sensor nodes produce raw logs in a transparent way, without affecting the measured protocol performance; 2) a standard method to describe the format of the logs by means of XML files called Descriptors; 3) standard tools to parse results based on the XML Descriptors and to render them in graphical form. Such tools allow users to seamlessly generate plots for the metrics of interest (e.g., comparing the performance of different protocols with respect to a given metric, or studying the performance of a given protocol over different testing scenarios). The whole system is distributed and can be managed via web interfaces. In this paper we describe JAMES in detail and provide examples of its use for assessing existing and novel protocol stacks. The remainder of the paper is organized as follows. Section II reviews the state of the art on existing test-bed management systems. Section III presents JAMES' overall architecture and details the different subsystems which comprise our test-bed management system.
Finally, Section IV provides examples of JAMES operations (and assessments of its features) by presenting: 1) a study of the impact of JAMES on running protocols, and 2) an analysis of the behavior of IRIS [4], our cross-layer protocol stack for wireless sensor networks.

II. RELATED WORK

Wireless Sensor Networks have been investigated for more than a decade. Solutions have been proposed at the different levels of the protocol stack which account for the unique features of sensor nodes (limited resources, low cost, severe energy constraints), the inaccessibility of the sensor nodes in many application scenarios of interest and the potentially high
978-1-4244-2517-4/09/$20.00 ©2009 IEEE
volume of devices (which in turn demands scalable, self-deployable and self-managing solutions). Recent research [5], [4], [6], [7] has addressed the design and evaluation of complete, energy-efficient, cross-layer optimized protocol stacks, and has started investigating the impact of current sensor node HW and of real-life signal propagation on the performance of the proposed protocol stacks. The latter activity has been made possible by the deployment of real-life test-beds, by the implementation of the major protocol stacks on sensor node prototypes and by their testing over the available test-beds. The increased interest in real-life deployments has in turn driven research on test-bed Management Systems (TMSs) for wireless sensor networks. TMSs are systems able to remotely control test-beds (e.g., programming/reprogramming nodes, scheduling runs, detecting malfunctioning devices, ...). They also allow users to gather and display the results of the tests (e.g., the sensed data or the performance experienced by the tested protocol stack over one or multiple runs). In the following we provide an overview of the existing TMSs, focusing on those closest to JAMES. MoteLab [8] is a WSN test-bed developed at Harvard University for use with TinyOS. Each Mica2 sensor node used in the test-bed is controlled by a Crossbow MIB6000 gateway and is attached to a multimeter able to measure its power consumption. It is possible to schedule tests and save the output coming from each device in a common database. It is also possible to upload special Java classes to perform custom data logging into specified tables. The system uses an accounting system to separate results related to different users. Emulab [9], [10] is a popular software platform for running network test-beds that supports over a dozen device types, including generic PCs, emulated wide-area network links, 802.11 links, etc. It is designed to provide unified access to a variety of experimental environments.
The system core manages the physical resources within a test-bed, interacting with numerous back-ends which interface to different types of hardware. Through a Web-based front-end, users create and manage “experiments,” which are essentially collections of resources that are allocated to a user by the test-bed management software and act as a container for control operations by the user and system. Mobile Emulab [11] is an Emulab-based system which extends Emulab features in order to provide a TMS for mobile WSNs. It targets specific WSN applications where mobile robots are equipped with sensor devices and can also interact with static sensor nodes. The goal is to support the testing of protocol stacks for mobile WSNs. Thanks to Emulab's capabilities, users can submit their own “experiments” and interact with the test-bed either by configuring batch operations or through an interactive interface. Deployment-Support Network (DSN) [12] is a tool for the development, debugging and monitoring of distributed wireless embedded systems (called the target system in the following) in a realistic environment. DSN uses a second wireless backbone network, directly attached to the target network, to provide a channel for the transport of debug and control information from and to the target nodes.
It also implements interactive debugging services such as remote reprogramming, RPC and data/event logging on the DSN nodes, clearly separating debugging and testing logic from the experiment. DSN and MoteLab fail to provide standard interfaces to a variety of existing test-beds: they aim at controlling a single test-bed. All the TMSs described above have limited capabilities as far as data/SW sharing among different users is concerned. Results on the performance of different protocols are hardly comparable, and protocols and software developed for a specific test-bed cannot be seamlessly run on other test-beds.

III. JAMES ARCHITECTURE

JAMES is a TMS built with the aim of enabling researchers of the wireless network community to develop and compare protocols, and to share results. The main idea behind JAMES is to parse and analyze the raw logs produced during test-bed runs in order to obtain high-level performance analyses useful to the research community. JAMES can be divided into three main sub-systems: the JAMES test-bed management system (JTMS), the JAMES test-beds management controller (JTMC) and the JAMES results renderer (JRR). The JTMS is a software platform which allows users to control one or more sets of nodes, programming and configuring sensor nodes, scheduling and launching tests, collecting raw logs and recovering from a crash (i.e., a test interruption due to any technical problem). JTMS offers test-bed users a debug channel, exploiting the Universal Serial Bus (USB) interface several WSN node platforms use to communicate with external systems. To avoid USB standard limitations, JTMS adopts a scalable, hierarchical architecture which makes it possible to manage large test-beds, independently of the number of nodes the WSN contains. The JTMS subsystem is implemented by two kinds of Java software components, named Agents and Loggers respectively. A Logger is a Java server that runs on a machine connected via a USB chain to a subset of the nodes of the test-bed.
Its task is to directly manage the subnetwork: installing software on the nodes, resetting, starting and stopping connected nodes and getting data from them. The whole set of nodes can be managed by more than one Logger. The PCs gathering data from a cluster of nodes (via the Logger) are connected via TCP/IP to a higher node in the test-bed hierarchy which hosts the JAMES Agent component. The latter is another Java server, dedicated to the test-bed, that allows different users to run multiple tests on the same test-bed, controls the Loggers and is responsible for storing and arranging the data coming from them. The JTMC is the core of JAMES, as it allows the remote sharing of different JTMS-based test-beds. JTMC handles the interaction with the users, providing standard functions such as launching tests over multiple test-beds and collecting and analyzing results. This component is realized as a public web application which allows users to manage multiple test-beds through JTMS. At any time a new test-bed can be added in order to enrich the test-bed pool. The goal is to increase the capabilities
for a developer to analyze the behavior of her/his protocol in different scenarios. Finally, JRR is a software suite which makes a JTMC able to parse and analyze the results generated during a test. It allows users to automatically generate plots displaying the performance of a given protocol or comparing the performance of different protocols. JRR is realized by means of a set of Java classes that the user must provide in order to produce a Renderer Plugin. Each of these plugins is a specific piece of software able to parse raw logs and produce some analysis (e.g., throughput, latency, protocol-specific features, etc.). A developer can create and share with the research community any Renderer Plugin, allowing researchers to perform a fair comparison with her/his own protocol. In Fig. 1 we have sketched the JAMES architecture. A cluster of sensors (drawn as circles in Fig. 1) is interconnected to a PC which hosts the Logger. Multiple clusters of sensors may belong to the same test-bed, managed by an Agent. Each test-bed provided by a university or a company can join the JAMES pool of test-beds by connecting to the same JTMC Controller.

Fig. 1. Architecture of the JAMES system

In the following sections, for each kind of user of the JAMES System (i.e., test-bed owners, software developers, WSN researchers), we present the different modules to be used.

A. Software Developers

Software developers can test their own WSN protocols through the JAMES platform on a remote test-bed. To perform this task, a developer uses a specific TinyOS library, JLF (JAMES Logs Factory), which is freely downloadable from the JAMES site [13]. In the rest of the paper, we will refer to any compiled TinyOS software which uses JLF simply as a Binary Image. The role of JLF is to allow the output of custom metrics from each sensor node to the Logger, minimizing the impact that exchanging data on the serial line might have on the protocol's performance. Moreover, the module performs some other management tasks, the most important of which is the synchronization of the sensor nodes with the Logger. The developer specifies the format of the logs the test will produce by writing a data descriptor. A Descriptor is a simple XML file, based on a given XSD, which describes a set of logs and the payload of each log. In this way, the developer can customize JLF to manage her/his own metrics, including a C header based on the Descriptor (a specific translation tool generates the header from the XML). For instance, a developer who wants to measure the end-to-end latency experienced by her/his own protocol would define an XML file declaring two logs, MESSAGE_SEND and MESSAGE_RECEIVED, each carrying the packet identifier as its payload. The user can then calculate the end-to-end latency of each packet by identifying the packet ID and computing the difference between the timestamps at the source and sink nodes, given respectively by the MESSAGE_SEND and MESSAGE_RECEIVED logs. The JLF library provides an interface called JAMES which exposes the following commands to interact with the user application:
• put
• flush
• stopFlush
The user application can cache its custom metrics locally at the node, passing a metric pointer and a metric identifier (the mtrID attribute in the metric tags of the XML) to the put command call. The call causes the generation of a local timestamp that is stored along with the metric and its identifier in a log, so that, thanks to the global synchronization, it is possible to trace the timeline of events over the whole network. JLF requires the user application to explicitly communicate the periods in which locally cached data can be flushed to the Logger. The flush command call starts a burst of packets over the serial line; each packet contains some logs. JLF continues to download packets until either the buffers are empty or the stopFlush command is called. The Logger is able to extract the logs by matching the metric id of the logs in each packet with the information in the Descriptor. It then forwards the logs to the Agent, adding information about the source node and translating the log time to a global time. The Agent manages the logs, storing them in the user's own session file together with some other meta-information, including that about the Descriptor of the current data. In order to analyze, compare and visualize the generated data, JAMES offers the possibility to upload custom rendering tools (Render Plugins). A Render Plugin is a template-shaped Java class, coupled with a unique Descriptor, able to manage one or more sets of data matching that format. The Renderer is the subsystem of JAMES which deals with the Render Plugins. Each user who wants to develop a Render Plugin has to take into account the Renderer/Plugin interaction rules. The system provides methods both to access and manage sets of results, and to write and read temporary text files and graphic files. The plugin is required to generate at least an HTML page, which the system displays after the computation. Since each Descriptor can be coupled in the system with both Binary Images and Renderer Plugins, the Descriptors represent the core of the sharing approach of JAMES, introducing a sort of classification which matches many protocols or protocol stacks with many analysis tools managing the results generated by the protocols' execution.

B. Research Community

JAMES presents an account-based web interface, designed to separate SW sharing from the analysis of the results. Users can upload their SW contributions (Binary Images, Render Plugins). They can also run tests over different test-beds, starting from Descriptor-based software. The Web interface promotes a pre-emptive declaration of the Descriptors: the forms which deal with the upload of new SW contributions require the user to associate the Binary Image or the Render Plugin with a specific Descriptor, choosing it among those available in the system.
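As an illustration, a Descriptor for the latency example of Section III-A might look like the following sketch. Apart from the mtrID attribute mentioned there, the element and attribute names are our assumptions, not the actual JAMES XSD.

```xml
<!-- Hedged sketch of a latency Descriptor; except for mtrID, all
     element and attribute names are assumptions, not the real XSD. -->
<descriptor name="end-to-end-latency">
  <log name="MESSAGE_SEND">
    <metric mtrID="1" name="packetId" type="uint16"/>
  </log>
  <log name="MESSAGE_RECEIVED">
    <metric mtrID="2" name="packetId" type="uint16"/>
  </log>
</descriptor>
```

A translation tool of the kind described in Section III-A would turn such a file into a C header defining the log and metric identifiers used by the put calls on the nodes.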
If no available Descriptor matches the user's needs, a new one must be defined by the user and added to the system through a dedicated form. JAMES allows each user to submit her/his own tests, independently of whether she/he is the developer of the Binary Image to run. In order to perform a test, users have to fill in the test submission form, choosing the Binary Image and the test-bed to use among the available ones and specifying a few other parameters that configure the Agents' operations (e.g., the test running time, the number of iterations to be performed). In case a user does not want to share her/his own SW with the other JAMES users, she/he can upload a temporary Binary Image, specifying the Descriptor it matches. Temporary uploads allow users to decide whether or not to share a given protocol or protocol stack implementation with the other members of the JAMES community, in order to meet different privacy needs. Each user can only control her/his own scheduled tests. For each scheduled test she/he can query the system about the test progress, can cancel the test and, provided some results are
already available, can request to download and perform some computation on the available data. The interface presents the user's test results grouped by Descriptor and, for each group, it shows the available Render Plugins matching the Descriptor, thus easing data analysis.

IV. EXPERIENCES WITH JAMES

In this section we describe some of the experimental activities in which we have exploited JAMES features. We first describe the test-bed we have deployed. We then briefly present the IRIS protocol stack run on the nodes during our experiments. Finally, we discuss how we have validated JAMES effectiveness and how JAMES has proven helpful in assessing the protocol stack performance.

A. Test-bed description

Currently, the test-bed we use consists of 39 Crossbow Telos Rev. B nodes, placed over a grid attached to the ceiling of our SENSES Laboratory. Each node runs the IRIS protocol stack we have developed. Power control is adopted to enforce multi-hop communications: node emission power is set to the minimum value so that, despite the limited size of the room, routes between pairs of sensors can be up to seven hops long. The channel data rate is 250 Kbps and the packet data size is 50 bytes. The sink is located at one of the room's corners. The WSN is divided into three subnets, each connected via a USB chain to a PC which hosts the Logger managing it. The PCs are connected by Ethernet. One of the three PCs has a public IP address: it acts not only as Logger but also as Agent and Controller, and allows remote access to the Web interface of the Controller.

B. IRIS protocol stack

IRIS is a complete protocol stack for WSNs which includes an awake/asleep scheduling scheme, energy-efficient interest dissemination, MAC/routing protocols, as well as a scheme to estimate the number of neighbors of each sensor node. The latter is used to improve the effectiveness of the interest dissemination and convergecasting phases.
Nodes alternate between awake and asleep modes according to asynchronous duty cycles. They know neither their neighbors nor their neighbors' duty cycles.

Convergecasting. IRIS uses a CSMA-based MAC. Packets are routed towards the sink by exploiting hop count (HC) topologies, where the hop count of a node is defined as the minimum number of transmissions needed to reach the sink from that node when all nodes are awake. Hop counts are propagated and updated by every node during the interest dissemination phase. Each node has an associated cost function which expresses how good a candidate the node is as a relay. The cost function can be computed as a function of: the state of the queues, the residual node energy, the link quality (e.g., the Received Signal Strength Indicator, RSSI), the advancement towards the sink provided by the candidate relay node, the capability of aggregating data, etc. Assume that a given node i has a packet to send and that the hop count (HC) of this node is n; the convergecasting protocol runs as follows:
1) Node i sends an RTS to trigger a reply from all currently awake nodes with HC ≤ n.
2) Upon receiving the RTS, each awake node j with HCj ≤ n computes a random jitter tj based on its normalized cost (cheaper nodes will have a shorter jitter).
3) Upon calculating tj, each node j replies to the RTS with a CTS after a backoff of tj seconds. Nodes also defer from transmitting their CTS if they sense an ongoing CTS transmission. This is done to facilitate the selection, as forwarder, of the node with the smallest cost.

If, instead, multiple CTSs collide at the transmitter, the whole procedure is repeated until either a maximum number of contention rounds rmax is reached or a single node successfully sends its CTS to node i. Energy-efficient techniques are used extensively at the MAC layer: not only do nodes go to sleep whenever they realize they won't be selected as relays, but nodes participating in a relay selection process also switch to a low-power state for the short periods of time in which they know they won't be addressed (i.e., information on the protocol operations is extensively used to enter sleep or low-power modes whenever possible).

Interest Dissemination. IRIS interest dissemination is based on Fireworks [14], extended to the case where nodes are allowed to alternate between awake and asleep modes and do not know their neighbors. The Fireworks protocol is an on-line scheme. The interest propagation is initiated by the sink node, which transmits the interest to all of its neighbors. Whenever a node receives a new broadcast message, it rebroadcasts it to all its neighbors with probability p, while with probability 1 − p it sends it to c randomly selected neighbors. A c smaller than or equal to four provides good performance across a large number of networking scenarios [14].
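The basic Fireworks forwarding decision just described can be sketched as follows. This is only an illustrative sketch: the method names and neighbor representation are our assumptions, not the IRIS code.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Illustrative sketch of the basic Fireworks forwarding decision:
// with probability p the interest is rebroadcast to all neighbors,
// otherwise it is forwarded to c randomly selected neighbors.
// Names and structure are assumptions, not the IRIS implementation.
public final class Fireworks {
    // Returns the subset of 'neighbors' the interest is forwarded to.
    static List<Integer> forwardTo(List<Integer> neighbors, double p, int c, Random rnd) {
        if (rnd.nextDouble() < p) {
            return new ArrayList<>(neighbors);  // rebroadcast to everybody
        }
        List<Integer> shuffled = new ArrayList<>(neighbors);
        Collections.shuffle(shuffled, rnd);     // pick c neighbors at random
        return shuffled.subList(0, Math.min(c, shuffled.size()));
    }

    public static void main(String[] args) {
        List<Integer> neighbors = Arrays.asList(1, 2, 3, 4, 5, 6);
        // With p = 0.3 and c = 4, most forwards go to 4 random neighbors,
        // while roughly 30% of them go to all 6.
        List<Integer> out = forwardTo(neighbors, 0.3, 4, new Random(7));
        System.out.println(out.size());
    }
}
```

The min(c, size) cap anticipates the density-aware extension discussed next, where a node may know (or estimate) fewer neighbors than c.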
The Fireworks approach presents quite a few challenges when nodes follow asynchronous duty cycles (a single broadcast is unlikely to reach the targeted number of nodes) and when nodes do not know their neighbors. In this case it is difficult for a node to know when all the intended destinations of the interest have been reached, so that it can safely stop propagating the interest. This problem has been solved in IRIS by implementing Fireworks jointly with an on-line density estimation technique. In detail, let γi be the actual number of neighbors of a given node i, including both awake and sleeping nodes, and let γ̃i be the current estimate of γi at node i. Upon receiving a broadcast message, node i decides with probability 1 − p to send the packet to c neighbors, where c is a parameter of the algorithm. If c > γ̃i the node sends the message to exactly γ̃i neighbors, while if c ≤ γ̃i the node sends the packet to c of its neighbors, picked at random. On the other hand, with probability p the node sends the broadcast message to all (γ̃i) of its neighbors. See [4] for further details.

C. Experimental results and JAMES assessment

The first experiment we performed was devoted to evaluating JAMES' impact on protocol performance. Specifically, we wanted to verify that running JAMES on the sensor
nodes did not change the protocol stack performance significantly. To this aim we focused on the IRIS end-to-end latency, computing it both with and without JAMES' aid and then considering the difference between the two values. We first performed an interest dissemination, and we then started a convergecasting phase in which a variable number k of sources, located in the corner opposite to the sink, generated data packets. k was varied between 1 and 3. The node duty cycle was set to 0.3. The end-to-end latency experienced by data packets was measured. Using JAMES, we performed the latency computation as described in Section III-A, whereas, to compute the end-to-end latency without the TMS, we introduced a TTL (Time To Live) field into the radio packet header. The TTL accounts for the time spent by a packet at each node on the path toward the sink. When a packet is received from the physical layer of a node, it is coupled with metadata that maintains the local time of its reception (named the receive time). When the packet is sent back to the physical layer to be relayed to the next hop, the local clock is recorded again (in the re-send time variable). The packet's TTL field is then updated by adding to it the difference between the re-send and receive times. The value of the TTL when the packet is received at the sink is used as an estimate of the end-to-end latency. Unlike the evaluation we performed with JAMES, this technique does not account for transmission and propagation delays, which are however expected to have a minor impact on IRIS latency performance (at low duty cycles the time to find a relay is much larger). We then compared the latency performance obtained with the two approaches. Latency was in the order of one second per hop. The difference between the values obtained with the two methodologies was within 1% when averaging over tens of runs.
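The two latency estimates compared above can be sketched side by side: the JAMES-based figure is simply the difference between the globally synchronized MESSAGE_SEND and MESSAGE_RECEIVED timestamps, while the TTL-based figure accumulates the per-hop residence times. Method and field names, and the millisecond unit, are our assumptions.

```java
// Sketch of the two end-to-end latency estimates compared above.
// Method and field names, and the millisecond unit, are assumptions.
public final class LatencyEstimates {
    // JAMES-based: difference between the globally synchronized timestamps
    // logged by MESSAGE_SEND (at the source) and MESSAGE_RECEIVED (at the sink).
    static long jamesLatencyMs(long sendTsMs, long receiveTsMs) {
        return receiveTsMs - sendTsMs;
    }

    // TTL-based: each node on the path adds (re-send time - receive time)
    // to the packet's TTL field; the TTL value at the sink estimates the
    // end-to-end latency, ignoring transmission and propagation delays.
    static long ttlLatencyMs(long[] receiveTimesMs, long[] resendTimesMs) {
        long ttl = 0;
        for (int hop = 0; hop < receiveTimesMs.length; hop++) {
            ttl += resendTimesMs[hop] - receiveTimesMs[hop];
        }
        return ttl;
    }

    public static void main(String[] args) {
        // A packet relayed by two nodes, with roughly one second per hop.
        long[] rx = {1000, 2100};  // local receive times at the relays
        long[] tx = {2090, 3150};  // local re-send times at the relays
        System.out.println(ttlLatencyMs(rx, tx));    // prints 2140
        System.out.println(jamesLatencyMs(0, 3200)); // prints 3200
    }
}
```

In the sketch, the TTL-based estimate misses whatever time the packet spends on the air, which mirrors the small discrepancy discussed above.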
The second set of experiments addressed the IRIS performance evaluation. We ran the IRIS protocol stack over a subset of 27 sensor nodes of our test-bed. As before, sensor nodes followed a duty cycle equal to 0.3. The sink was placed at a corner of the deployment area, while data packets were generated by a single source located at the opposite corner. At the beginning of our experiment the sink started an interest dissemination (with parameters p = 0.3, c = 4). After receiving the interest, the source started generating packets at a rate of 5 packets per second. The rule adopted for relay selection randomly picked an awake neighbor among those with a lower or equal hop count. All the other protocol and scenario parameters were set as discussed at the beginning of this section. To use JAMES for testing our protocol stack, we first wrote a Descriptor file identifying the logs to output in order to compute the metrics of interest. In particular, we defined as metrics of interest the end-to-end latency, the number of packets received at the sink over time, the number of nodes reached by the interest dissemination and by the hop count messages, and the energy consumption. We also defined plugins to draw pictures of the metrics of interest. After these steps we uploaded the IRIS SW and were ready to launch tests using JAMES. Scheduling a test consists of programming the
nodes, synchronizing them and starting the protocol operations. JAMES automatically gathered the information needed to compute the metrics of interest, and such metrics were then drawn using the defined plugins. Adapting the code to be executed within the JAMES framework was extremely fast: the additional code we had to write was around a hundred lines. An example of the test results automatically displayed by JAMES is depicted in Fig. 2. The picture refers to the outcome of a specific run and displays three curves. The lowest curve shows the number of in-network data packets over time. This value is always very limited, given the limited source data rate and the fact that only one source generates packets. The second curve increases linearly and steadily; it depicts the number of data packets received at the sink over time. All generated packets are successfully received at the sink. The third curve shows the number of nodes which have received an interest over time. We observe that all nodes are reached by the interest dissemination in a few seconds, confirming the effectiveness of our interest dissemination phase.

Fig. 2. Screenshot of a plot produced by JAMES to analyze IRIS behavior

ACKNOWLEDGMENTS

This work was partially supported by the FP7 EU project SENSEI (Integrating the Physical with the Digital World of the Network of the Future), Grant Agreement Number 215923, www.ict-sensei.org, and by the MIUR International FIRB RBIN047MH9.

REFERENCES
[1] J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, and K. Pister, “System architecture directions for networked sensors,” SIGPLAN Notices, vol. 35, no. 11, pp. 93–104, 2000.
[2] C. C. Han, R. Kumar, R. Shea, E. Kohler, and M. Srivastava, “A dynamic operating system for sensor nodes,” in Proceedings of the 3rd International Conference on Mobile Systems, Applications, and Services (MobiSys ’05), Seattle, Washington, June 2005.
[3] A. Dunkels, B. Grönvall, and T. Voigt, “Contiki – a lightweight and flexible operating system for tiny networked sensors,” in Proceedings of the 1st IEEE Workshop on Embedded Networked Sensors, Tampa, Florida, USA, November 2004.
[4] M. Mastrogiovanni, C. Petrioli, M. Rossi, A. Vitaletti, and M. Zorzi, “Integrated data delivery and interest dissemination techniques for wireless sensor networks,” in Proceedings of the 49th Annual IEEE Global Telecommunications Conference, 2006.
[5] P. Casari, M. Nati, C. Petrioli, and M. Zorzi, “Efficient non-planar routing around dead ends in sparse topologies using random forwarding,” in Proceedings of the IEEE International Conference on Communications, ICC 2007, Glasgow, Scotland, June 24–28, 2007.
[6] D. Ferrara, L. Galluccio, G. Morabito, A. Leonardi, and S. Palazzo, “MACRO: An integrated MAC/routing protocol for geographical forwarding in wireless sensor networks,” in Proceedings of the 24th IEEE Annual Conference on Computer Communications, INFOCOM 2005, Miami, FL, March 13–17, 2005.
[7] I. F. Akyildiz, M. C. Vuran, and O. B. Akan, “A cross-layer protocol for wireless sensor networks,” in Proceedings of the 40th Annual Conference on Information Sciences and Systems, CISS 2006, Princeton, NJ, March 2006, pp. 1102–1107.
[8] G. Werner-Allen, P. Swieskowski, and M. Welsh, “MoteLab: A wireless sensor network testbed,” in Proceedings of the Fourth International Conference on Information Processing in Sensor Networks (IPSN ’05), Special Track on Platform Tools and Design Methods for Network Embedded Sensors (SPOTS), April 2005.
[9] “Emulab,” http://www.emulab.net.
[10] B. White, J. Lepreau, L. Stoller, R. Ricci, S. Guruprasad, M. Newbold, M. Hibler, C. Barb, and A. Joglekar, “An integrated experimental environment for distributed systems and networks,” in Proceedings of the 5th Symposium on Operating Systems Design and Implementation (ACM SIGOPS), 2002.
[11] D. Johnson, T. Stack, R. Fish, D. M. Flickinger, L. Stoller, R. Ricci, and J. Lepreau, “Mobile Emulab: A robotic wireless and sensor network testbed,” in Proceedings of the 25th Conference on Computer Communications (IEEE INFOCOM), April 2006.
[12] M. Dyer, J. Beutel, T. Kalt, P. Oehen, L. Thiele, K. Martin, and P. Blum, “Deployment support network: a toolkit for the development of WSNs,” in Proceedings of the 4th European Conference on Wireless Sensor Networks, EWSN 2007, Delft, The Netherlands, April 2007.
[13] “JAMES,” http://reti.dsi.uniroma1.it/SENSES lab/JAMES.html.
[14] L. Orecchia, A. Panconesi, C. Petrioli, and A. Vitaletti, “Localized techniques for broadcasting in wireless sensor networks,” in Proceedings of the ACM Joint Workshop on Foundations of Mobile Computing, DIALM-POMC 2004, Philadelphia, PA, October 1, 2004.