XenoTiny: Emulating Wireless Sensor Networks on Xen†

Joseph Sventek

Alasdair Maclean

Ross McIlroy

Grzegorz Miłoś

Department of Computing Science University of Glasgow Glasgow G12 8QQ United Kingdom

ciboodle Ltd India of Inchinnan Renfrewshire PA4 9LH United Kingdom

Department of Computing Science University of Glasgow Glasgow G12 8QQ United Kingdom

Computer Laboratory University of Cambridge Cambridge CB3 0FD United Kingdom


Abstract

The large scale and inaccessibility of deployed wireless sensor networks mandate that the code installed in sensor nodes be rigorously tested prior to deployment. Such testing is primarily achieved using discrete event simulators designed to provide “high fidelity” simulation of the communications between nodes. Discrete event simulators, by their very nature, mask race conditions in the code since simulated interrupts never interrupt running code; an additional limitation of most such simulators is the requirement that all simulated nodes execute the same application code, at variance with common practice in actual deployments. Since both of these problems reduce confidence in the deployed system, the focus of this work is to eliminate these problems via complete emulation of wireless sensor networks using virtualization techniques. In particular, a version of TinyOS is described, XenoTiny, which can be executed as a guest domain over the Xen virtualization hypervisor. XenoTiny is well integrated with the TinyOS build process. Since each node runs independently in its own guest domain, race conditions are able to manifest themselves, and each node can run a node-appropriate application. The hardware emulation is performed at the lowest possible hardware abstraction layer, thus maximizing the amount of actual TinyOS code that is executed during emulation. Finally, a novel Xen-specific radio model mechanism has been introduced, easing the introduction of different radio models for use during emulation runs.

Categories and Subject Descriptors: C.3 [Special-purpose and Application-based Systems]: Real-time and embedded systems; D.2.5 [Software Engineering]: Testing and Debugging.

General Terms: Performance, Design, Experimentation.

Keywords: Sensor Network; Emulation; Xen; Radio Model.

______________________________________________
† This research was supported in part through EPSRC grant EP/C014774/1.
VEE’09, March 11-13, 2009, Washington, DC, USA. Copyright © 2009 ACM.

1. Introduction

A wireless sensor network [1] is a collection of low-cost, resource-constrained sensor nodes that interact using radio communications; each sensor node typically measures one or more characteristics of its environment, and communicates with other nodes to facilitate transfer of these measurements and/or to

send/receive control messages – e.g. sensor nodes monitor soil moisture and send readings to a node that can control soil irrigation [2]. Due to the short-range nature of their radios, most nodes will also act as routers to support their neighbors’ communications. Such networks are usually widely distributed, often in inaccessible places [1]. Such inaccessibility makes it difficult (or impossible if the nodes are installed in an area hazardous to humans) to access the nodes in situ, making it essential that the nodes be rigorously tested prior to their deployment. One possible form of such testing entails setting up an equivalent hardware testbed and executing the actual application code. The resource-constrained and embedded nature of sensor nodes means that they lack detailed output capabilities, thus making the collection of debugging information difficult. Additionally, the geographical spread of a large-scale network may be difficult to duplicate in a testbed environment. Finally, field conditions that might affect the nodes, especially their radio communications, may be difficult to duplicate in a testbed environment. As a result of these issues, use of a hardware testbed is usually restricted to small-scale tests. To overcome these difficulties, sensor network testing is typically done using software simulation [3-7]. By simulating all nodes within a single running program on a commodity PC, simulators overcome the lack of detailed output capabilities available in a hardware testbed. With a sufficiently precise model of radio communications, simulators are able to model the geographical spread of large-scale networks, and to emulate adverse/challenging field conditions. Discrete event simulation is the most common form used to simulate sensor networks. By its very nature, discrete event simulation models the activity of network nodes and software components on those nodes as discrete events in time. For example, TOSSIM [3] (which is the most widely used simulator for TinyOS applications due to its natural integration with and support for the TinyOS build process) makes component code appear to run instantaneously; additionally, interrupts, while timed exactly, will never interrupt running code. Due to this event-driven approach, an interrupt which can cause incorrect behavior in a physical node will not cause the same error in simulation. For example, an interrupt may modify a variable while normally executing code was in the middle of reading the variable. For the same reason, any code that performs an infinite loop awaiting an interrupt will never terminate in the simulator. An additional shortcoming of sensor network simulators is the common requirement that identical application code be executed in each simulated node. In real sensor networks, different nodes

Figure 1: TinyOS Module Graph [10]

will have different responsibilities, and run different application software. To model this using TOSSIM, for example, it is necessary to insert conditional blocks in the application code, where different behavior results based on a node’s ID. As a result, the code that has been simulated is different from the deployed code, reducing one’s confidence in the tested code. Virtualization systems [8-9] enable the time-shared execution of multiple operating systems, complete with application software, on a single machine. A virtualization kernel (aka hypervisor) controls the actual hardware resources and multiplexes multiple virtual operating system images onto that hardware. Full virtualization systems, such as VMWare [9], support execution of binary OS images identical to non-virtualized hardware; paravirtualization systems, such as Xen [8], require that the source code for each guest OS be modified to use the hardware abstractions exported by the hypervisor. One approach to eliminate the two aforementioned shortcomings of sensor network simulation systems is to construct each sensor network node as a separate virtual OS image for execution on a virtualization kernel. Each virtual OS is independently scheduled; interrupts will interrupt currently running code, thus enabling race conditions to manifest themselves; and each virtual OS image can be constructed with the appropriate application code as required by the sensor network functionality. Since all of the virtual sensor nodes are emulated on a single computer, it is still possible to collect all of the debugging information in a single place (section 4.4 describes how the simulation is extended to multiple computers in a cluster). Since thorough testing requires high fidelity radio simulation, care must be taken to provide a mechanism by which that fidelity can be achieved in the virtualization context. To test this approach, we have produced a version of TinyOS, XenoTiny, which can be executed as a guest domain over the Xen virtualization hypervisor. XenoTiny is well integrated with the TinyOS build process. Since each node runs independently in its own guest domain, race conditions are able to manifest themselves, and each node can run a node-appropriate application. The hardware emulation is performed at the lowest possible hardware abstraction layer, thus maximizing the amount of actual

TinyOS code that is executed during emulation. Finally, a novel Xen-specific radio model mechanism has been introduced, easing the introduction of different radio models for use during emulation runs. The remainder of the paper is organized as follows: Section 2 briefly summarizes salient aspects of TinyOS; Section 3 does the same for Xen. The structure of XenoTiny is covered in Section 4. Section 5 describes how sensor network topologies are managed when using the system. We evaluate the accuracy and performance of the system in Section 6. Related work is discussed in Section 7, and we summarize our results in Section 8.

2. TinyOS

TinyOS is a component-based, event-driven operating environment designed for use in embedded networked sensors, such as the Mica and Telos motes [10].

2.1 nesC

The vast majority of TinyOS is written in nesC, an extension of C, which has a number of key features which make it suitable for use in sensor motes, including bi-directional interfaces, components, and tasks [11]. The nesC compiler compiles required components to produce a standard C language file. This resulting C file is then compiled using a compiler that is targeted at a particular platform and processor (e.g. avr-gcc for MICAz motes).

2.1.1 Interfaces

Interfaces in nesC are bi-directional, specifying both commands that a component makes available to be called and the events which it signals. Generally speaking, commands will form a chain from the application to the hardware (such as sending a message over the radio). Conversely, events will occur as a result of hardware interrupts and will signal components dealing with the hardware, which will in turn signal higher level components (e.g. notification that sending a packet over the radio has completed). Another feature common in TinyOS interfaces is split-phase operation. Typically, a component will call a function in another

component and, if it is an operation that is likely to take some time, the caller will wait for an event signifying operation completion. Split-phase operations are necessary as TinyOS has a single thread of execution and a blocking call would thus block the entire system.

2.1.2 Components

TinyOS is composed of re-usable components which can use and/or provide interfaces. A module is the most fundamental TinyOS component and is comparable to a Java object in that each module contains some state and has functions that can be invoked to manipulate that state. If a module states that it provides an interface, then it must implement the functions provided by that interface. Similarly, if it uses an interface, then it must implement handlers for events which may be generated by that interface. A configuration component contains no function implementations, and simply acts as a wiring specification for other components. If a component uses an interface, then a configuration must wire this to a component that provides that interface. If a configuration component indicates that it provides an interface, then it must wire this functionality to another component, and transitively, this wiring must eventually lead to a module that implements the interface.

2.1.3 Tasks

There is only a single thread of execution in a TinyOS program; concurrency is managed by tasks that run atomically with respect to one another. Tasks are posted to a queue and are processed sequentially by a non-preemptive scheduler. Long activities are typically divided into sequences of short tasks by the developer to enable other activities to make progress. While executing, a task can be preempted by interrupts. As a consequence, state which can be accessed by at least one interrupt service routine must be protected by atomic sections; these temporarily disable interrupts. A distinction is made between “asynchronous (async) code”, reachable from at least one interrupt service routine, and “synchronous (sync) code” which is only reachable from tasks. The compiler returns an error if any sync code is called directly from async code; if async code requires access to sync code, it must post a task containing a call to that command.

2.2 TinyOS Application Architecture

TinyOS is compiled with a single top-level application component. This top-level component may be wired to many individual application components, each having a separate purpose. The application components are eventually wired to other, lower-level components. The TinyOS compilation only includes the components that are required by the application, thus minimizing the storage requirements for the program. Figure 1 shows the module graph for TinyOS assuming that the application has used the RF chip, serial port, timers, EEPROM and sensors. It shows the application using a number of cross-platform components (Application to Byte Levels) which rely upon standard interfaces to hardware provided by platform-specific components at the Hardware-Interface Level.

2.3 Hardware Abstraction Levels

Each of the hardware components is represented by components at three levels of abstraction: the Hardware Presentation Layer (HPL), the Hardware Adaptation Layer (HAL), and the Hardware Interface Layer (HIL) [12]. These are illustrated in Figure 2.

The HPL is a simple wrapper around the hardware, and typically provides functions such as initialization, starting/stopping, and getting/setting registers. It is also responsible for enabling/disabling interrupts relevant to the component and for providing interrupt handlers particular to that component. The HAL is platform-specific, and provides useful abstractions over the capabilities of the hardware. HAL components are allowed to store some state relating to interactions with other components. HIL components adapt the platform-specific interfaces to standard, platform-independent interfaces. While using the HIL ensures cross-platform compatibility, certain tasks may be much more efficient if performed by either of the lower layers. Therefore, some components may bypass the HIL and be wired to either the HAL or HPL at the cost of decreased portability. In this work, it is important that only the lower layers be modified whenever possible. This maximizes the amount of real TinyOS code that is tested using the emulation.

2.4 TOSSIM

TOSSIM is the TinyOS simulator provided as part of the TinyOS distribution [3], and is widely used by TinyOS application developers to test their sensor network designs and implementations. TOSSIM relies upon nesC compiler support in order to replace certain components with TOSSIM implementations. If the sim flag is provided to the compiler, then any directory specified by the platform as a source for components which has a sim subdirectory has that subdirectory searched first for components of a given name, thus causing any TOSSIM implementations to be found by the compiler and used instead of the real components. At the time of writing, only the MICAz platform has TOSSIM implementations available.

3. Xen

Xen [8] is a virtual machine monitor (hypervisor) which enables a number of operating system instances to run simultaneously on a single machine. Each operating system instance runs in a domain, with Xen regulating access to the physical resources by the domains and ensuring that the domains cannot interact inappropriately with each other – e.g. by attempting to read or write memory allocated to another domain.

3.1 Paravirtualization

Full virtualization provides complete emulation of the hardware, including privileged instructions; thus, a guest operating system instance is unaware that any abstraction from the hardware is occurring. Xen uses paravirtualization, in which Xen exports its own set of hardware abstractions, and the source code for each guest operating system must be modified to interact with those abstractions. Porting an OS to Xen is similar to porting to a new hardware platform. The hypervisor provides a software ABI (the “hypercalls”) which replaces manipulation of the actual hardware; this includes, for example, privileged instructions such as updating page tables and requesting access to hardware resources. Although this requires development time to be spent in porting to the hypervisor, the technique yields considerable increases in performance over full virtualization [8]. It should be noted that only the OS kernel requires any modification, and that user applications remain unchanged.

3.2 Domain Management

A single, specially privileged domain (“Dom0”) is the first domain that the hypervisor loads when booting; this is typically a modified Linux kernel. This domain has access to available hardware such as hard disks and network interfaces. Other domains (termed “DomU” or guest domains) do not have direct access to these resources and must request access via Dom0; this distinction is transparent to all but the lowest layers of the guest operating system. Dom0 privileges include the ability to create and destroy domains. This is typically achieved by executing the relevant xm subcommand, providing configuration parameters in a file or from the command line. For example, xm create requires the new domain’s name, memory allocation, and the virtual network bridge to which its virtual network card will be connected.
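For illustration, a minimal xm configuration for one guest domain might look as follows; the file name, image path, bridge name and values are hypothetical, not those used in our experiments.

# mote17.cfg: hypothetical configuration for a single XenoTiny guest domain
kernel = "/opt/xenotiny/images/RadioCountToLeds.elf"   # absolute binary built for the xen platform
name   = "mote17"            # unique domain name
memory = 8                   # memory allocation (MB)
vif    = [ 'bridge=xenbr0' ] # virtual NIC attached to the bridge used for radio emulation

Such a file would be passed to xm create (e.g. xm create mote17.cfg) in Dom0.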

3.3 Split Drivers

As guest domains typically do not have access to the physical hardware, their device drivers are replaced with a split driver implementation. The backend portion of the driver resides in Dom0 and accesses the hardware resource. The matching frontend in a DomU communicates with the backend to obtain the relevant service; it is for this reason that the lower layers of the guest OS’s drivers must be modified.

Figure 2: TinyOS Hardware Abstraction Layers [12]

4. XenoTiny

As with any other guest OS, it was necessary to port TinyOS to use the Xen hypercalls – i.e. emulate low level manipulation of the hardware through interactions with the Xen hypervisor. Since TinyOS applications often interface directly to hardware-specific components, it was critical that we emulate at the lowest possible level so as to maintain the “application API”. It was also desirable that whatever changes we had to make integrated naturally into the TinyOS build process. Construction of a node image targeted at XenoTiny should be no more onerous than building one targeted at TOSSIM. Finally, high fidelity emulation of node radios and the radio communications of the motes that make up a sensor network is extremely important for the emulation to give us confidence prior to deployment. Different deployments will exhibit different radio environments, thus making flexible support for different radio models an absolute necessity.

4.1 Porting TinyOS to Xen

As discussed in section 2.1, the result of using the nesC compiler on a TinyOS application is to produce a complete C program, including a main() together with the source for all the additional components that were wired into the application. The standard build process is to then compile this file using the appropriate hardware backend to produce an absolute binary that can be downloaded onto a mote. In order for any XenoTiny application to be instantiated in a Xen domain, it must be possible to link the application into an ELF binary of the form required by Xen. All guest OSs which are ported to Xen must also:
• read the startup information provided at domain boot-up
• set up handlers for virtual exceptions
• handle events such as timer interrupts
• provide a method for communication between domains (e.g. to emulate the radio in the XenoTiny case).

4.1.1 Reference Operating Systems

Two reference operating systems are provided with the Xen distribution, XenoLinux and Mini-OS. XenoLinux is a modified Linux kernel, with modifications to replace direct access to the hardware with calls to the hypervisor (for CPU and memory virtualization) or to Dom0 (for I/O virtualization). Linux expects to dynamically load and execute application binaries over time, and provides the types of sophisticated OS abstractions that we have come to expect in desktop and server systems. This generality and power come at the expense of several million lines of source code; tens of thousands of lines of code from the original Linux distribution are modified to produce XenoLinux. Mini-OS is an OS developed specifically for the Xen virtualization platform. The name refers to the fact that it is a “minimal” OS designed, in part, as a specimen Xen guest for use as a reference. It provides the basic capabilities to enable Dom0’s xm application to start it running in a guest domain; it also has some basic operating system functionality including a simple, non-preemptive scheduler and threading. Mini-OS consists of fewer than 10,000 lines of code; since it was designed to be a reference, the code is laid out in a clear manner to aid understanding. Mini-OS expects the program that will execute once the guest domain is operational to be named app_main(). This combination of features and capabilities led us to explore using Mini-OS as a potential basis for the TinyOS domain. As a reference OS for Xen, it is no surprise that Mini-OS performs all of the startup functionality required of a guest domain prior to invoking app_main(). The fact that it invokes a single function to start the application fits well with the model for TinyOS applications. In fact, only a few of Mini-OS’s features were not applicable to TinyOS – e.g. threading functionality and dynamic memory allocation. Mini-OS provides a number of functions that encapsulate Xen hypercalls. While an understanding of how hypercalls interact with Xen is still required, development time for some of the basic functionality was substantially reduced as a result of these existing functions. To produce an ELF binary with the TinyOS image ready for instantiation in a Xen guest domain, the following high level build process was required:

• invoke the nesC compiler for the xen platform, resulting in a C source file
• use a sed script to rename main() to app_main()
• compile the source file to yield the corresponding .o
• compile Mini-OS and link it with this .o file, yielding an appropriate absolute binary image

4.1.2 The Xen Platform for TinyOS

Two options existed for creation of this new platform: it could represent a generic TinyOS mote, replacing top level functionality, or it could be specific to a particular set of hardware and replace low-level functionality. Given our goal to create an accurate simulation of mote behavior, the latter option was chosen. This is the same approach taken by TOSSIM, which bases its simulation on the MICAz platform. Unlike TOSSIM, however, the work described herein modifies less of the original TinyOS code, focusing on hardware emulation at the lowest possible abstraction level. To enable a user to run the “make xen” command from an application’s directory, a number of additions to the TinyOS make system were required:
• A xen directory was added to the TinyOS platforms directory. This platform directory contains a list of the components required for the Xen version of TinyOS. For each component, if a Xen implementation was required, it was placed in a xen directory within the directory containing the original component, similar to the use of sim directories for TOSSIM. Files in these Xen directories replace the default component code when building for XenoTiny.
• A new set of makerules was added to the TinyOS source tree to build for the xen platform. Many of the existing makerules for the MICA family of motes were reused, with a couple of modifications:
  • removal of references to the AVR libraries and header locations so that the x86 Mini-OS versions would be used
  • invocation of gcc (the x86 C compiler) after the successful execution of ncc (the nesC compiler) to produce a .o file that can be linked into Mini-OS
The static allocation of memory used in TinyOS applications means that no effort was required to find an optimal memory size for each XenoTiny domain – simply link everything together and go. It should be noted that due to the mismatch between the size of the TinyOS application on the x86 and on the Atmel processor, developers should check the size of the image produced by the MICAz compilation process to ensure that the application will fit on the hardware.

4.2 Basic Functionality

This section describes the low-level hardware emulation required within XenoTiny.

4.2.1 Registers and Pins

TinyOS components for the MICAz platform contain references to hardware registers and pins. The Atmel microcontroller uses memory-mapped I/O for its pins and registers, reserving a number of low memory addresses specifically for reading the status of a pin/register or manipulating a pin/register. Symbolic names are provided to enable access to these absolute memory addresses. Similar to the solution in TOSSIM, these fixed memory addresses have been replaced by an array of the same number of bytes. Definitions for the pins/registers are then replaced with references into this array.

All components that manipulate pins/registers must use these new definitions. Any effect that should result from setting a pin or register must be performed explicitly in the xen-specific component code.

Figure 3: Interrupt emulation
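To make the mechanism concrete, the following C sketch shows one way such a redirection can be expressed; the array size, macro names and addresses are illustrative rather than the actual XenoTiny definitions.

/* Sketch: emulating Atmel memory-mapped registers/pins with an ordinary array. */
#include <stdint.h>

#define NUM_IO_BYTES 0x100                      /* assumed size of the emulated I/O space  */
static volatile uint8_t xen_io[NUM_IO_BYTES];   /* replaces the fixed low-memory addresses */

/* On the real hardware a register name expands to a fixed address, e.g.
   (*(volatile uint8_t *)0x3B); on the xen platform it is redefined as an array cell. */
#define PORTA (xen_io[0x3B])
#define DDRA  (xen_io[0x3A])

static void example(void)
{
    PORTA |= (1 << 2);   /* has no physical effect; any side effect (e.g. an interrupt)
                            must be raised explicitly by the xen-specific component code */
}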

4.2.2 Atomic Sections

Atomic sections in TinyOS are delimited by calling the functions __nesc_atomic_start() and __nesc_atomic_end(); their implementations are platform-specific. On the MICAz this causes a specific bit on the SREG register to be set and cleared to globally disable or enable interrupts. In Xen, disabling interrupts is done within the domain by modifying the value of a variable within a HYPERVISOR_shared_info structure which the hypervisor inspects before signaling any virtual interrupts to the domain. Mini-OS provides useful macros for this process: local_irq_enable(), local_irq_disable(), local_irq_save() and local_irq_restore(); the enable/disable macros simply perform on/off transitions, while the save macro saves the current interrupt state before disabling, and the restore macro restores the interrupt state. Atomic sections within TinyOS code may be nested, so the save/restore model is the correct one to use.
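A minimal sketch of this mapping is shown below; it assumes the Mini-OS local_irq_save()/local_irq_restore() macros described above and the usual TinyOS convention that __nesc_atomic_start() returns the saved state for a matching __nesc_atomic_end().

/* Sketch: nesC atomic sections implemented with Mini-OS's save/restore macros. */
#include <os.h>   /* Mini-OS: local_irq_save()/local_irq_restore(); header location may differ */

typedef unsigned long __nesc_atomic_t;

static inline __nesc_atomic_t __nesc_atomic_start(void)
{
    __nesc_atomic_t flags;
    local_irq_save(flags);     /* record the current virtual-interrupt state, then disable */
    return flags;
}

static inline void __nesc_atomic_end(__nesc_atomic_t flags)
{
    local_irq_restore(flags);  /* re-enable only if interrupts were enabled on entry,
                                  which is what makes nesting safe                       */
}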

4.2.3 Atomic Pin and Register Manipulation

On the MICAz, all reads and writes to/from memory-mapped registers and pins are atomic by default. On the xen platform, reading and writing to the array representing the pins and registers is not atomic, as an interrupt could interrupt manipulation of the array. Each macro used as a substitute for a bit operation on the registers and pins was placed within its own atomic block to eliminate this potential race condition.
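For example, a substitute bit-set macro can wrap the array access in its own atomic block; the macro name is illustrative, and the sketch reuses the Mini-OS save/restore macros introduced in the previous section.

/* Sketch: each emulated register/pin bit operation runs inside its own atomic block. */
#define XEN_SET_BIT(reg, bit)            \
    do {                                 \
        unsigned long _f;                \
        local_irq_save(_f);              \
        (reg) |= (1 << (bit));           \
        local_irq_restore(_f);           \
    } while (0)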

4.2.4 Interrupt Pins

As indicated above, setting a pin in the array that replaces the MICAz low memory locations will have no physical effect, other than changing the value in the array cell; this is particularly a problem when that pin triggers an interrupt. On the xen platform, the wrapper around each pin (HplAtm128GeneralIOPinP) is augmented to provide the XenPinEvents interface, enabling it to fire the modified() event. The wrapper around each interrupt vector (HplAtm128InterruptPinP) is similarly augmented to use XenPinEvents, implementing a handler for the modified() event. As a result, when an interrupt-causing pin is modified, the corresponding GeneralIOPin signals its modified() event. This is received by the InterruptPin (as if the interrupt had just occurred) and processed as normal. When the interrupt event has finished processing, control is returned to the call that modified the pin. This is shown in Figure 3.

4.2.5 Debug Output

One of the benefits of simulators such as TOSSIM is the ability to print debugging information to the console. Given the anticipated use of XenoTiny, it is essential to provide this ability for Xen-based emulation. While Mini-OS’s printk() function provides this functionality to the Xen virtual console (which is accessible from Dom0), it was decided that the same style of command as used in TOSSIM should be used; this approach enables code reuse from existing applications that have been simulated using TOSSIM as well as eases the transition to Xen for TinyOS developers who are familiar with TOSSIM. The TOSSIM dbg() function has been implemented using a number of printk() calls to the domain’s console. To avoid having to open multiple shells on Dom0, one for each TinyOS domain, the Dom0 control program for initiating the sensor nodes collects the output from each guest domain and displays the output in a single window, as shown in Figure 4.

Figure 4: Collected Output From Running TinyOS Domains

4.2.6 LEDs

Most motes, including the MICAz, have a user interface consisting of 3 LEDs. TinyOS requires that each platform provide a PlatformLedsC component which must provide three GeneralIO interfaces (one per LED). Each of these interfaces provides simple set/clear functionality for each of the pins controlling an LED. TinyOS then uses this basic functionality to create the platform-independent component, LedsC, which wires its functionality to LedsP. Using the Xen registers implementation, the original MICAz LEDs module works correctly (i.e. the relevant bits are set correctly) but none of these changes are visible to the user. Having implemented the dbg() function for Xen, the TOSSIM implementation for LEDs has been reused – each time a call to the LEDs component is made, it manipulates the relevant bits and then outputs any change made. Examples of this LED-specific output can be seen in Figure 4. This could be straightforwardly augmented to a graphical view of the LEDs in the dom0 control app.

4.2.7 Microcontroller Sleep

Each TinyOS platform must include a McuSleepC module which is responsible for measuring power usage of the mote and provides a command to cause the microcontroller to sleep. Sleeping puts the mote microcontroller into the lowest power state possible, allowing battery life to be substantially extended. This is used by the TinyOS scheduler when there are no tasks in its queue, and thus no work to perform. The microcontroller will sleep until an interrupt has occurred; the interrupt service routine may post a task for subsequent execution; if the interrupt does not post a task, the sleep operation is again performed. The XenoTiny implementation mimics this behavior by putting the Xen domain to sleep in a sequence of hypervisor calls. Similarly to the native mote hardware, the domain will sleep until an interrupt is received, at which point the block hypercall returns. XenoTiny does not currently attempt to emulate the energy usage of the processor. However, since XenoTiny runs in real-time (section 4.2.8) and supports the lower hardware layers, it knows when components are turned on and off, and can approximate energy usage in a straightforward manner.
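The blocking behavior can be sketched as follows; the function name is hypothetical and the exact hypercall sequence used by XenoTiny may differ.

/* Sketch: emulating McuSleepC's sleep() by blocking the guest domain until an event arrives.
   HYPERVISOR_sched_op()/SCHEDOP_block come from the Xen/Mini-OS headers. */
static inline void xen_mcu_sleep(void)
{
    local_irq_enable();                     /* allow virtual interrupts to be delivered       */
    HYPERVISOR_sched_op(SCHEDOP_block, 0);  /* returns once an event (virtual interrupt) has
                                               been delivered to this domain                  */
    local_irq_disable();                    /* the TinyOS scheduler then re-checks its queue  */
}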

4.2.8 Timers

Each TinyOS platform must implement a millisecond timer component, HilTimerMilliC. An abbreviated version of the key functionality of this component is as follows:

interface Timer {
  command void startPeriodic(uint32_t dt);
  command void startOneShot(uint32_t dt);
  command void stop();
  event void fired();
  command uint32_t getNow();
}

On the MICAz, the Atmel microcontroller has four hardware timers. The CC2420 radio chip relies upon a 32kHz clock for some of its processing. On the MICAz platform, two of these timers are used: Timer0 (8 bits) is used for the millisecond timer and Timer1 (16 bits) is used for the 32kHz timer. Xen provides functionality to schedule timer events for delivery to a guest domain. Requests are made via the set_timer_op(timeout) hypercall, which causes a virtual timer interrupt to be delivered after timeout nanoseconds. Only one of these requests can be outstanding for a guest domain at any one time, so issuing a new request replaces one which is pending. Given the goal of providing hardware emulation at the lowest possible abstraction level, XenoTiny has replaced the HPL wrappers for Timer0 and Timer1. Due to the single outstanding Xen timer limitation alluded to above, it was necessary to introduce a XenTimer component which takes requests for timer events from Timer0 and Timer1, managing the queue of requests and the single outstanding interrupt provided by Xen. The resulting timer component graph is shown in Figure 5.

Figure 5: xen platform timer component graph
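The essence of this multiplexing can be sketched in C as follows; the data structures and function names are illustrative and not the actual XenTimer implementation.

/* Sketch: multiplexing two emulated hardware timers onto Xen's single pending timer event. */
#include <stdint.h>

extern void set_timer_op(uint64_t timeout_ns);  /* hypercall wrapper described in the text */

#define NUM_HW_TIMERS 2
static uint64_t deadline_ns[NUM_HW_TIMERS];     /* 0 means "no request pending"            */

static void reprogram_xen_timer(void)
{
    uint64_t earliest = UINT64_MAX;
    for (int i = 0; i < NUM_HW_TIMERS; i++)
        if (deadline_ns[i] != 0 && deadline_ns[i] < earliest)
            earliest = deadline_ns[i];

    if (earliest != UINT64_MAX)
        set_timer_op(earliest);                 /* replaces any previously pending request */
}

/* Called by the emulated Timer0/Timer1 wrappers whenever a new timer event is requested. */
void xen_timer_request(int timer, uint64_t when_ns)
{
    deadline_ns[timer] = when_ns;
    reprogram_xen_timer();
}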

4.2.9 IDs

When TinyOS is compiled for use on hardware, each node is given a set of unique numbers. Nodes use these as identifiers and when a unique seed is required. For example, when sending a radio message, nodes place their TOS_NODE_ID values in the source field of the packet header. The RandomC component also uses the value of TOS_NODE_ID as the seed for its random number generator. Xen provides a useful method for transferring start-of-day information to domains via the XenStore. XenStore provides a tree-structured database in which each domain has its own directory; Dom0 can write key-value pairs into any of the domains’ directories, and each domain can read/write key-value pairs into its own directory. When a TinyOS node is initialized, it calls the init() command in the XenIDsP module provided as part of XenoTiny. This component calls the xenbus_read() function a number of times to obtain the IDs stored in the XenStore for this particular node. The init() command is integrated into the hardware initialization; as a result, no component that relies upon unique IDs will execute until after XenIDsP has obtained their values.
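A sketch of reading one such ID is shown below. It assumes the Mini-OS xenbus interface with a xenbus_read() that returns NULL on success (an error string otherwise) and allocates the value it reads; the header name, key name and error handling are illustrative.

/* Sketch: obtaining TOS_NODE_ID from the domain's XenStore directory at start-of-day. */
#include <stdint.h>
#include <stdlib.h>
#include <xenbus.h>   /* Mini-OS xenbus interface; header location may differ */

static uint16_t read_node_id(void)
{
    char *value = NULL;
    char *err = xenbus_read(XBT_NIL, "node_id", &value);  /* key assumed to be written by the
                                                              Dom0 control program              */
    uint16_t id = (err == NULL && value != NULL) ? (uint16_t)atoi(value) : 0;
    if (value)
        free(value);
    return id;
}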

4.3 Radio Communications

The MICAz radio (CC2420) implements the IEEE 802.15.4 standard. TinyOS provides a platform-independent interface (ActiveMessageC), and the implementation of the active message functionality orchestrates invocations of methods in the CC2420ControlC, CC2420TransmitC and CC2420ReceiveC interfaces. The implementations of these latter interfaces communicate directly with the CC2420 radio.

4.3.1 TOSSIM Radio

The TOSSIM radio is implemented at the HIL layer, the highest level hardware abstraction. It directly replaces ActiveMessageP with its own radio implementation. A radio model component is present in each of the nodes; when transmitting, the model may alter the frame before adding it to the event queue; other nodes

are alerted regarding this new event and process the frame appropriately before passing the message up to TinyOS components. While this approach enables the use of a complex, high fidelity radio model for sending frames between nodes, it replaces the entire radio stack with a new implementation, thus reducing confidence in the tested system.

4.3.2 XenoTiny Radio

To maximize the amount of actual TinyOS code used in emulation, we focused on replacing the lowest-level radio component possible. A new component, XenRadioP, was created to replace the hardware underlying the calls made by CC2420TransmitC and CC2420ReceiveC. CC2420ControlC was heavily modified since its radio setup and bus arbitration functionalities are not needed in XenoTiny; all other components from the MICAz implementation are used unmodified. These changes cause each node to exhibit the same pattern of interactions with low-level radio components. Once the simulated radio chip has assumed responsibility for sending a frame, one must emulate the transmission of the frame to other motes. The next section discusses the design of the virtual radio network.

4.3.3 Simulated Radio Network

In the field, radio communications between TinyOS motes exhibit various properties that must be simulated: packet loss, radio noise and collisions. Signal power is reduced as the distance between nodes increases, often resulting in increased bit-errors and loss. TOSSIM implements a sophisticated radio model to simulate the above properties. Our goal was to provide a framework in which various radio models, of different foci/accuracy, can be easily implemented and inserted. Of particular importance was the desire that network model developers not have to be expert programmers; the framework needed to provide clear interfaces for accessing the underlying functionality. The key functionality provided by these interfaces consists of:
• notification of all incoming frames from the motes
• access to the topology in which the motes are deployed
• ability to transmit frames to motes (as dictated by the radio model)

Figure 6: Network Model Framework

A centralized approach has been adopted, with the network model implementation active on Dom0. When a mote transmits a packet, it is forwarded to the network model; the model determines which other motes, if any, should receive that packet, and then forwards the packet to those motes. The radio communication emulation uses the virtual networking interface provided by Mini-OS. Each 802.15.4 packet is encapsulated into a UDP packet and relayed to Dom0 via the frontend-backend communication channel. This is achieved by associating a unique IP address with each mote identifier and binding it to the domain’s virtual MAC address by loading the ARP cache. The radio model running in Dom0 intercepts the packets after they have been processed by the IP networking stack. To support the straightforward creation of different network models, the Java framework shown in Figure 6 is provided. The NetworkModel class is abstract and defines a constructor which takes as parameters a Topology and a class which implements the TinyMessageSend interface. The Topology class, discussed in the next section, can be used by the network model to determine distances between nodes. An example network model implemented in the distribution, PerfectRadioModel, performs simple distance-based filtering of packets. The bytes of packets are unmodified (no noise), and packets are relayed if the distance from the sender is less than a particular bound. While this is clearly too simplistic for a real simulation, it provides a starting point for future development of more accurate replacements.

4.4 Distributed XenoTiny

While the above discussion has described how to construct XenoTiny-based emulations of sensor networks on a single machine, the virtual networking facilities in Xen have enabled us to extend the system to multiple machines in a cluster. The key changes required were: building a distributed control structure for starting and stopping XenoTiny domains on the different machines from a single console, collecting the console output from distributed domains on a single console, and having the start-of-day information for each new domain contain the IP address of the single radio model instantiation.

5. Topology Management

The process of building a domain is fairly tedious to perform by hand. Firstly, make xen must be executed in the relevant application directory; then the Mini-OS Makefile must be modified to include the object file produced. make is then run in the Mini-OS directory to create the absolute binary. Finally, this binary is loaded into a Xen guest domain using the xm command in Dom0. There are a number of per-domain settings that must be specified: the domain config file must contain a unique domain name, the XenStore must have the new domain’s start-of-day constants written into it (including the node’s geographic coordinates), and the arp command must be issued to bind the IP address for each TinyOS node to its virtual network interface.

The topology of a sensor network is specified in a topology file. Each line of the file contains information for one node in the system as follows (an illustrative file is sketched at the end of this section):
• node ID
• latitude (+/-, degrees, minutes, seconds)
• longitude (+/-, degrees, minutes, seconds)
• altitude (meters)
• node application

Note that we have chosen latitude, longitude and altitude as node coordinates to support realistic deployment scenarios. Field deployment of motes is usually driven by knowledge of the terrain and the locations of special interest for sensors; these coordinates are determined using hand-held GPS (Global Positioning System) devices. This leads to a direct mapping between testing and deployment configurations, providing additional assurance that the nodes will operate as expected when deployed. This differs from TOSSIM, in which the topologies are defined in terms of radio gain (signal strength) between pairs of nodes. Given a topology in terms of latitude, longitude, altitude, one can estimate the gains between pairs of nodes to enable comparison with TOSSIM simulation results. In order to address the above-mentioned complexities, as well as to provide coordinate information for use by network model implementations, a control program, written in Java, is supplied with the system. This control program:

• parses a topology file, constructing an instance of the Topology class
• instantiates the network model using that Topology class instance
• for each entry in the file, creates a guest domain running the application and pauses the guest domain
• after the last domain has been created, resumes the domains in rapid succession

When a simulation is in progress, it is possible to manipulate the topology in various ways. Nodes can be added and deleted. Additionally, if supported by the network model, nodes may move to new coordinates.
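For illustration, a two-node topology file following the field list above might look as shown below; the coordinate values, column spacing and application names are made up, and the exact delimiter conventions of the real file format are assumed.

1  +55 52 19  -04 17 28  35  RadioCountToLeds
2  +55 52 20  -04 17 30  36  RadioCountToLeds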

6. Evaluation

By design, XenoTiny emulations are more precise/correct than TOSSIM simulations (race conditions can manifest themselves and the timers are exact) and more flexible (different emulated nodes can run different applications, the topology is defined in terms of a realistic deployment model, and multiple radio models tuned to the targeted deployment environment are supported). This section evaluates XenoTiny from three perspectives: correct behavior with respect to standard TinyOS applications, compatibility with TOSSIM, and resource consumption.

6.1 TinyOS Applications

The system’s correctness was tested using a number of test programs from the TinyOS apps directory.

6.1.1 Blink

The Blink program uses the millisecond timer and the LEDs to count from 0 to 7 repeatedly. It uses three separate timers at rates of 1Hz, 2Hz and 4Hz, respectively; when a timer fires, the corresponding LED is toggled. The application was originally written for TOSSIM; its successful execution on XenoTiny required removal of references to sim_node_id(), which has no meaning in the Xen emulation.

6.1.2 RadioCountToLeds

This application periodically increments an unsigned integer and broadcasts its value over the radio. When a node receives such a message, it sets its LEDs according to the bottom three bits of the received value. This enabled the correctness of the radio components to be verified in both sending and receiving.

6.1.3 TestAcks

While RadioCountToLeds tests the radio, it does not check that ack frames for each transmitted frame are being received correctly. The TestAcks application, designed for TOSSIM, assumes that all motes have their IDs set to 1; Xen requires that the motes have unique IDs (as would be required in a real deployment). By setting only one node’s ID to 1, and ignoring the fact that it does not have its packets acknowledged (as it cannot send to itself), the remaining nodes in range of node 1’s radio have been checked for correctness.

6.1.4 Neighbor Discovery

This application is an implementation of the HELLO protocol in which neighboring nodes periodically exchange HELLO messages. Each node maintains a list of nodes that are direct neighbors, and exchanges its neighbor list with its neighbors, thus enabling each node to discover its 1- and 2-hop neighbors. This application was particularly useful as it is designed for TOSSIM using a perfect network simulation, similar to that provided by our PerfectNetworkModel. This enabled direct validation against the TOSSIM results.

6.1.5 MultihopOscilloscope

This application tests the TinyOS Collection layer, which provides best effort multihop delivery of packets to a sink [13]. Although the output relies on sending packets by serial connection to a PC – functionality not yet implemented in XenoTiny – the program was modified to output via dbg() statements when a new packet was received at the sink. This application shows packets being received from nodes that are out of range of the sink, having been routed by other nodes.

6.2 Compatibility with TOSSIM

Our goal was to run unmodified TinyOS code on Xen. This code often includes TOSSIM dbg() statements which are useful during development on TOSSIM; when compiling for motes, the dbg macros are simply replaced by white space. XenoTiny replaces these macros with its own implementation to aid portability between Xen and TOSSIM. A problem arises, however, when the developer has used a TOSSIM-specific variable or function (e.g. sim_time_now()) as a parameter to the debugging statement, as these do not exist in XenoTiny. The difference in topology representation between Xen and TOSSIM can also cause problems, although it is possible to approximate radio gain values given the latitude, longitude, altitude triples for a pair of nodes.

6.3 Resource consumption

Each node in a Xen simulation consumes 5 MB of memory; Dom0 requires 500MB. Therefore, emulating a 500-node network requires 3GB of memory. Each node requires 2-3 seconds to be instantiated in a guest domain. Each guest domain is allocated a percentage of the processor when created; we have tested the accuracy of the emulation down to allocations of 1% of the processor (due to a limitation in one of the Xen configuration tools); future work will explore how small a CPU slice can be allocated and still maintain emulation accuracy. The clock emulation in XenoTiny is exact; thus, simulations using XenoTiny run in real-time. On the negative side, this temporal fidelity means that simulations of sensor networks using XenoTiny will take longer than the equivalent TOSSIM simulations; on the positive side, it means that one can incorporate “hardware in the loop” by linking a number of real motes through a gateway that interacts with a simulated set of motes. Such an approach can provide further confidence in the software prior to deployment.

7. Related Work

As mentioned several times in the preceding text, TOSSIM [3] is the simulator of choice for TinyOS-based wireless sensor networks. It uses a probabilistic bit error model for inter-node interactions, simple yet sufficiently expressive to capture a wide range of network interactions. It is a discrete event simulator, and as such, it masks the existence of race conditions; it also requires that all sensor nodes execute the same application. VMNet [4] provides a custom virtualization environment upon which to emulate sensor networks; as such, it is able to provide a similar degree of emulation realism as XenoTiny. VMNet emulates at the CPU clock cycle level, and as a result, runs approximately 10 times slower than an equivalent TOSSIM simulation; in our tests of large-scale XenoTiny networks, we have not experienced any slowdown relative to the equivalent TOSSIM simulation.

Additionally, the authors do not document the capabilities of their Emulation Manager, and as a result, it is unclear how difficult it would be to emulate TinyOS running on other motes, which we expect to be straightforward using Xen. sQualNet [5] extends a state-of-the-art network simulator with sensor network specific models. Since it is based upon a discrete event simulator, sQualNet simulations suffer from the masking of race conditions in the sensor code. sQualNet does provide a high level of detailed modeling of the physical environment; we would expect many of these techniques to find their way into more sophisticated radio model implementations in XenoTiny. ATEMU [6] provides low-level emulation of the operation of each individual sensor node; the wireless interactions between these sensor nodes are then simulated in a realistic manner. Similar to VMNet, the core of ATEMU is a software simulator for AVR processor based systems, such as the MICA2. Each sensor node is emulated at the CPU clock cycle level. While the authors do not indicate runtime behavior, this low level of emulation is expected to make such emulations much slower than those from XenoTiny; extension to other microcontrollers is also questionable. Emstar [7] is a software environment for developing and deploying heterogeneous sensor-actuator networks. It is more focused on higher-level issues such as fault isolation, fault tolerance, system visibility, in-field debugging, and resource sharing across multiple applications on a sensor network; as such, it provides a middleware layer to support these system qualities.

8. Summary and Future Work

XenoTiny provides an accurate simulation environment upon which real sensor network components operate. This occurs in real-time with the same behavior as running on actual hardware. In particular, race conditions are able to manifest themselves, and different nodes are able to run different applications. A framework supporting different radio models has been provided to enable realistic radio transmissions for use in simulations. A wide variety of applications have been tested on the platform, using a perfect radio model, and work correctly. While XenoTiny currently supports most of the core functionality provided by the MICAz platform and TOSSIM, there are a number of future developments that we intend to pursue:
• Currently, the frame quality settings for received packets are always set to their maximum values; future radio models may model these settings, and their values will need to be included in the UDP packets that encapsulate TinyOS messages.
• More sophisticated radio models that implement packet loss, bit-errors, signal attenuation, etc.
• A mechanism for modeling busy channels that enables each radio to detect this condition, primarily through the use of grant tables to safely share this information among the guest domains.
• Implementation of the serial output bus typically used to communicate with a larger computer.
• More sophisticated user interfaces to the topology management functionality.
• Exploration of the use of Mini-OS as a Xen-compatible substrate upon which to mount Contiki [14].
• Exhaustive tests of the scale to which accurate simulations can be performed, including exploration of the number of guest domains that can be supported on each CPU while maintaining emulation accuracy.

• Accurate modeling of energy usage.

References

[1] Akyildiz, I.F., Su, W., Sankarasubramaniam, Y., and Cayirci, E. 2002. Wireless sensor networks: a survey. Computer Networks, Volume 38, Issue 4, 15 March 2002, 393-422.
[2] Wang, N., Zhang, N., and Wang, M. 2006. Wireless sensors in agriculture and food industry - Recent development and future perspective. Computers and Electronics in Agriculture, Volume 50, Issue 1, January 2006, 1-14.
[3] Levis, P., Lee, N., Welsh, M., and Culler, D. 2003. TOSSIM: accurate and scalable simulation of entire TinyOS applications. In Proceedings of the 1st International Conference on Embedded Networked Sensor Systems (Los Angeles, California, USA, November 05 - 07, 2003). SenSys '03. ACM, New York, NY, 126-137.
[4] Wu, H., Luo, Q., Zheng, P., and Ni, L. 2007. VMNet: Realistic Emulation of Wireless Sensor Networks. IEEE Transactions on Parallel and Distributed Systems, Volume 18, Issue 2, February 2007, 277-288.
[5] Varshney, M., Xu, D., Srivastava, M., and Bagrodia, R. 2007. sQualNet: A Scalable Simulation and Emulation Environment for Sensor Networks. In Proceedings of the International Conference on Information Processing in Sensor Networks (Cambridge, Massachusetts, USA, April 25 - 27, 2007). IPSN '07. ACM, New York, NY.
[6] Polley, J., Blazakis, D., McGee, J., Rusk, D., and Baras, J.S. 2004. ATEMU: a fine-grained sensor network simulator. In Proceedings of the 1st IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks (Santa Clara, California, USA, October 4-7, 2004). SECON 2004, IEEE, 145-152.
[7] Girod, L., Ramanathan, N., Elson, J., Stathopoulos, T., Lukac, M., and Estrin, D. 2007. Emstar: A software environment for developing and deploying heterogeneous sensor-actuator networks. ACM Trans. Sen. Netw. 3, 3 (Aug. 2007), 13.
[8] Barham, P., Dragovic, B., Fraser, K., Hand, S., Harris, T., Ho, A., Neugebauer, R., Pratt, I., and Warfield, A. 2003. Xen and the art of virtualization. In Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles (Bolton Landing, NY, USA, October 19 - 22, 2003). SOSP '03. ACM, New York, NY, 164-177.
[9] Devine, S., Bugnion, E., and Rosenblum, M. Virtualization system including a virtual machine monitor for a computer with a segmented architecture. US Patent 6397242, Oct. 1998.
[10] Hill, J., Szewczyk, R., Woo, A., Hollar, S., Culler, D., and Pister, K. 2000. System architecture directions for networked sensors. SIGPLAN Not. 35, 11 (Nov. 2000), 93-104.
[11] Gay, D., Levis, P., von Behren, R., Welsh, M., Brewer, E., and Culler, D. 2003. The nesC language: A holistic approach to networked embedded systems. In Proceedings of the ACM SIGPLAN 2003 Conference on Programming Language Design and Implementation (San Diego, California, USA, June 09 - 11, 2003). PLDI '03. ACM, New York, NY, 1-11.
[12] TinyOS Community Forum. An open-source OS for the networked sensor regime. http://www.tinyos.net, accessed 15 August 2008.
[13] TinyOS Core Workgroup. TEP 119 – Collection. http://www.tinyos.net/tinyos-2.x/doc/html/tep119.html, accessed 30 March 2008.
[14] Dunkels, A., Gronvall, B., and Voigt, T. 2004. Contiki - a lightweight and flexible operating system for tiny networked sensors. In Proceedings of the 29th Annual IEEE International Conference on Local Computer Networks, November 16-18, 2004, 455-462.