Autonomic Management of Hybrid Sensor Grid Systems and Applications

Xiaolin Li, Xinxin Liu, Huanyu Zhao

Nanyan Jiang, Manish Parashar

Scalable Software Systems Laboratory, Department of Computer Science, Oklahoma State University, Stillwater, OK 74078, USA
{xiaolin, xinxin, huanyu}@cs.okstate.edu

The Applied Software Systems Laboratory, Department of Electrical & Computer Engineering, Rutgers University, Piscataway, NJ 08854, USA
{nanyanj, parashar}@caip.rutgers.edu

Abstract—In this paper, we propose an autonomic management framework (ASGrid) to address the requirements of emerging large-scale applications in hybrid grid and sensor network systems. To the best of our knowledge, we are the first to propose the autonomic sensor grid system concept in a holistic manner, targeted at non-trivial large applications. To bridge the gap between the physical world and the digital world and to facilitate information analysis and decision making, ASGrid is designed to smooth the integration of sensor networks and grid systems and to use both efficiently on demand. Under the blueprint of ASGrid, we present several building blocks that fulfill the following major features: (1) self-configuration through content-based aggregation and associative rendezvous mechanisms; (2) self-optimization through utility-based sensor selection and model-driven hierarchical sensing task scheduling; and (3) self-protection through ActiveKey dynamic key management and S3Trust trust management mechanisms. Experimental and simulation results on these aspects are presented.

I. INTRODUCTION

Scientific applications, such as waste management [1], volcano monitoring [2], city-wide structural monitoring [3], habitat and environmental monitoring [4], and end-to-end soil monitoring and precision agriculture systems [5], are now experiencing a revolution in instrumentation that will change the character and scope of their experiments and measurements. Advances in instrumentation at many scales are turning scientific observation systems into data-rich fields and allowing new quantitative synthesis and hypothesis testing in near real time as data streams in from distributed instruments. These emerging applications and systems bring application and system requirements that differ greatly from those of traditional stand-alone systems, whether grids or sensor networks. Extensive future use of sensor networks for scientific computing uncovers new requirements for characterizing previously unknown phenomena: varied spatial and temporal phenomena, data quality and uncertainty estimation, and predictable temporal response to varying application characteristics. These application requirements in turn necessitate different system requirements: performance and resource sharing, workload balancing, power-aware self-management, and simple and extensible programming models and interfaces. (The research presented in this paper is supported in part by the National Science Foundation via grants CNS-0709329, CNS-0723594, and IIP-0758566.)

The rest of the paper is organized as follows. Section II presents the system architecture of the ASGrid framework. Section III presents the self-configuration mechanism through content-based aggregation. Section IV presents self-optimization through resource management and task scheduling. Section V presents two security schemes: ActiveKey and S3Trust. Section VI presents the experimental results. Section VII presents the related work. Section VIII concludes the paper.

II. SYSTEM ARCHITECTURE

Fig. 1. Conceptual Architecture of ASGrid. (Figure omitted; it depicts sensor motes running TinyAuto around a basestation, connected through a self-organized overlay with content-based routing and associative rendezvous messaging to grid resources (computing, storage, communication), coordinated by a Master Autonomic Manager with a scheduling engine, a data-driven simulation engine, and data fusion, and topped by a web portal/GUI, programming model, development kit and API, and output utilities. Annotations highlight the self-configuring, self-protecting, power-aware, self-organizing, self-optimizing, and secured-access properties.)

The ASGrid framework is intended to provide an integrated service bridging sensor networks and grid computing/storage systems. The conceptual architecture is shown in Fig. 1. The ASGrid framework integrates grid and sensor systems through pervasive data services across both, consisting of the following layered modules from the bottom up.
• Self-Organizing Overlay. The overlay network is based on the Content Addressable Network (CAN) protocol with smart spatial and temporal load balancing and adaptive replication.
• Content-Based Routing. Content-based routing and aggregation are realized through associative rendezvous messaging and semantic aggregation.
• Autonomic Managers. Autonomic managers include the master autonomic manager (MAM) and TinyAuto. The MAM is in charge of diverse self-management functions and provides managing and adapting policies. TinyAuto is a lightweight component designed particularly for sensor networks, considering energy efficiency and other resource constraints (see the sketch after this list).
• Two-Tier Scheduling Engine. The high tier performs task decomposition, scheduling, and resource management; a low-tier sensing task scheduling strategy is applied in sensor fields for sensing and data fusion.
• Upper Layers: Preprocessing, Programming Models, User Interface, and Programming Interface. Queries and tasks submitted by users are first preprocessed, decomposed, and scheduled onto appropriate resources. Programming models allow developers and other interacting software to tap the information and resources in ASGrid systems.
• Self-Protecting Modules. The low-level dynamic key management scheme ActiveKey and the high-level trust management scheme S3Trust can be used in both sensor networks and grid systems.
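The interplay between the MAM and TinyAuto is easiest to see as a monitor-analyze-plan-execute control loop. The following minimal Python sketch is illustrative only; the class and method names (MasterAutonomicManager, TinyAuto, Policy) are hypothetical and not part of ASGrid's published API.

```python
# Illustrative sketch of a two-tier autonomic control loop (hypothetical names).
from dataclasses import dataclass

@dataclass
class Policy:
    """A self-management policy pushed from the MAM to sensor-side managers."""
    name: str
    energy_threshold: float  # below this residual energy, shed sensing load

class TinyAuto:
    """Lightweight sensor-side manager: applies policies under tight resources."""
    def __init__(self, node_id: str, energy: float):
        self.node_id, self.energy, self.policy = node_id, energy, None

    def apply(self, policy: Policy) -> None:
        self.policy = policy

    def step(self) -> str:
        # Execute: locally decide whether to keep sensing or back off.
        if self.policy and self.energy < self.policy.energy_threshold:
            return f"{self.node_id}: backing off (energy {self.energy:.2f})"
        return f"{self.node_id}: sensing"

class MasterAutonomicManager:
    """Grid-side MAM: monitors managed elements and distributes policies."""
    def __init__(self, nodes):
        self.nodes = nodes

    def loop(self) -> None:
        avg = sum(n.energy for n in self.nodes) / len(self.nodes)  # monitor/analyze
        policy = Policy("conserve" if avg < 0.5 else "normal",
                        energy_threshold=0.3 if avg < 0.5 else 0.1)  # plan
        for n in self.nodes:                                         # execute
            n.apply(policy)
            print(n.step())

if __name__ == "__main__":
    MasterAutonomicManager([TinyAuto("mote-1", 0.8), TinyAuto("mote-2", 0.2)]).loop()
```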

III. SELF-CONFIGURATION: CONTENT-BASED AGGREGATION SERVICE

The content-based aggregation service provides a uniform query interface in which aggregation queries are specified using content descriptors from a semantic space, in the form of keywords, partial keywords, wildcards, and ranges, together with the type of aggregation desired. Spatial attributes of a data item, e.g., the location of the sensor producing the data, can be specified along with the other attributes in a query. The service extends the Associative Rendezvous (AR) abstraction [6] for content-based information discovery and decoupled interactions. To meet the needs of queries, sensor networks self-configure to provide the content-based aggregation service, similar to the construction of the semantic routing tree in TinyDB [7].

Associative Rendezvous Messaging. The AR interaction model [6] consists of three elements: messages, associative selection, and reactive behaviors. An AR message is defined as the triplet (header, action, data). The data field may be empty or may contain the message payload. The header includes a semantic profile in addition to the credentials of the sender, a message context, and the TTL (time-to-live) of the message. A profile is classified as a data profile or an interest profile depending on the action field of the message. Profiles in AR are represented using a hierarchical schema that can be efficiently stored and evaluated by the selection engine at runtime. A key characteristic of the selection process is that it does not differentiate between interest and data profiles. This makes all messages symmetric: data profiles can trigger the reactive behaviors of interest messages and vice versa. The action field of the message defines the reactive behavior at the Rendezvous Point (RP). Basic reactive behaviors currently defined include store, retrieve, notify, and delete. Note that actions are executed only if the message header contains an appropriate credential.

Information Aggregation Using AR. The aggregation service extends the retrieve reactive behavior to specify aggregation operators, i.e., the AR message for an interest profile includes the aggregation operation as follows: (<{attr}>, retrieve(A)), where attr is the set of content attributes of the interest profile within the AR message header, and retrieve(A) specifies aggregation using the aggregation operator A. Note that aggregations can be constrained to data elements within a specific spatial region. The location attributes of a data element for an aggregation query are specified as part of the semantic profile of the message header, like other content attributes; that is, location descriptors form (typically leading) dimensions of the semantic information space over which data and interest profiles are defined. The data profile becomes (<L, {attr}>, store, data) and the corresponding interest profile becomes (<L, {attr}>, retrieve(A)), where L specifies the location of the data producer in the data profile and the region of interest in the interest profile. L may use latitude/longitude or any other specification of location. Aggregation queries may also be recurrent in time. For such a query, an additional parameter specifies the frequency of evaluation of the persistent query: (<L, {attr}, TTL>, retrieve(A, Ta)), where Ta is the time interval between repeated aggregations.

Semantics of Aggregation. The semantics of an aggregation query are as follows. When an aggregation query is posted, it is routed to all rendezvous peers with data profiles that match the query's interest profile. All data items at all peers that match the interest profile specification are aggregated according to the specified aggregation operator and returned. In the case of recurrent aggregation queries, the interest profile is registered at each rendezvous peer for the duration of its TTL and is repeatedly evaluated at the specified frequency. The aggregation operation is repeated each time the query is evaluated and the aggregate is returned. Note that each aggregation operation is independent.
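To make the AR interaction model concrete, the sketch below mimics a rendezvous point that stores data profiles and evaluates an aggregation query against them. It is a minimal illustration under simplifying assumptions (exact-match and '*' wildcard attributes only, a numeric aggregation operator passed as a Python callable); the real selection engine supports partial keywords and ranges over a hierarchical schema.

```python
# Minimal sketch of Associative Rendezvous matching and retrieve(A) aggregation.
# Assumptions: attributes are flat key/value pairs; '*' is the only wildcard.

def matches(interest: dict, data: dict) -> bool:
    """Symmetric profile selection: every interest attribute must match the
    data profile exactly or via the '*' wildcard (location keys included)."""
    return all(data.get(k) == v or v == "*" for k, v in interest.items())

class RendezvousPoint:
    def __init__(self):
        self.store = []  # (profile, payload) pairs posted with action 'store'

    def post(self, profile: dict, payload: float) -> None:
        self.store.append((profile, payload))

    def retrieve(self, interest: dict, aggregate) -> float:
        """retrieve(A): aggregate the payloads of all matching data profiles."""
        hits = [p for prof, p in self.store if matches(interest, prof)]
        return aggregate(hits)

if __name__ == "__main__":
    rp = RendezvousPoint()
    rp.post({"region": "field-7", "sensor": "temp"}, 21.5)
    rp.post({"region": "field-7", "sensor": "temp"}, 23.0)
    rp.post({"region": "field-9", "sensor": "temp"}, 30.0)
    # Interest profile (<L, {attr}>, retrieve(MAX)) restricted to region field-7.
    print(rp.retrieve({"region": "field-7", "sensor": "*"}, max))  # -> 23.0
```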

IV. SELF-OPTIMIZATION: RESOURCE MANAGEMENT

Resource management plays an important role in sensor networks because of resource scarcity. While most research has focused on low-level efficient routing, systematic resource management that adapts to application runtime states and contexts has received little attention. We address resource management in an integrated and systematic manner through: (1) two-tier data and control flow management; (2) utility-based sensor selection, with node-level and system-level utility functions defined over energy residual, communication cost, and data or information quality; and (3) model-driven data fusion task scheduling.

Two-Tier Data and Control Flows. ASGrid involves multi-tier data and control flows across grid and sensor systems, bridged through one or more basestations acting as gateways. The operations consist of high-level task scheduling in the grid, low-level sensing task scheduling in the sensor network, and distributed data-driven simulation to predict certain phenomena, guide sensing tasks, and assure data quality through feedback-based control flows back to the sensor fields, as illustrated in Fig. 2. In particular, a high-level task is partitioned and distributed among computing resources. Note that the task partition may also yield a subtask distributed into sensor fields and routed by basestations. The task can be a query over historical data archives, e.g., "how much oil was produced in the last 30 days?", or over real-time phenomena, e.g., "what is the current oil concentration in the field near (x, y, z)?" In ASGrid, the former query is preprocessed, and a parallel or pipelined task probing the distributed data storage is initiated; this task is partitioned and scheduled among grid systems to produce results without launching sensing tasks into the sensor network. The latter query is preprocessed and two tasks can be generated: one runs the data-driven simulation over past data; the other launches a sensing task into the sensor network. The simulation results and sensed/fused data are then combined to produce answers, assuring data quality and reducing uncertainty and noise in the sensed data. This two-path dispatch is sketched below, after Fig. 2.

Fig. 2. Multi-tier Data Flow. (Figure omitted; it shows user tasks and data entering a front-end, internal tasks decomposed onto grid clusters with a data-driven simulation engine, self-management policies from a knowledge base, and distributed autonomic managers; fused and collective data flow up from source nodes through intra- and inter-cluster scheduling at data fusion nodes and the basestation, while control flows adjust parameters and sensing tasks in the wireless sensor network.)
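The two query paths described above (archived-data queries served entirely in the grid versus real-time queries that spawn both a simulation task and a sensing task) can be summarized in a small dispatcher. This is a hedged sketch; the predicate and task names are hypothetical, not ASGrid interfaces.

```python
# Sketch of the two-tier query dispatch described in the text (names hypothetical).

def is_historical(query: dict) -> bool:
    """Assumption: a query tagged with a past time window needs no new sensing."""
    return query.get("window") == "historical"

def dispatch(query: dict) -> list:
    """Return the internal tasks a preprocessed user query expands into."""
    if is_historical(query):
        # e.g., "oil production in the last 30 days": probe distributed archives
        # with a parallel/pipelined task; no sensing task enters the field.
        return [("grid", "probe-archives", query)]
    # e.g., "current oil concentration near (x,y,z)": combine a data-driven
    # simulation over past data with a freshly launched sensing task, so the
    # fused answer has higher data quality and lower noise.
    return [("grid", "data-driven-simulation", query),
            ("sensornet", "sensing-task", query)]

if __name__ == "__main__":
    print(dispatch({"what": "oil production", "window": "historical"}))
    print(dispatch({"what": "oil concentration", "at": (1.0, 2.0, 0.5)}))
```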

Utility-Based Sensor Selection. To aid resource management, we define node and system utilities as follows.

Node utility. Based on the data quality, the communication cost to transmit data to a cluster head, and the energy residual of a sensor node i, we define the node utility μi as

μi = αφ(Zi) − βϑ(j, i) + χε(i)    (1)

where φ(Zi) is the information utility of node i; ϑ(j, i) represents the communication cost between the leader node j and node i; and ε(i) reflects the current energy level of node i.

System utility. We further define the system utility μs(t) at time t as the combined utility of all sensor nodes in a system or a cluster:

μs(t) = Σ_{i=1}^{N} μi(t)    (2)

The system utility reflects the system state at time t. For a specific task, a set of thresholds and corresponding values of α, β, and χ are predefined. For example, if the value of μs(t) is small or below a certain threshold, indicating that the energy of the local cluster is decreasing, χ should be assigned a larger value compared to α and β. In this case, the cluster head will tend to select the nodes with more energy. The cluster head selects a slave node to accomplish the sensing task according to the best node utility μi among all slave nodes under the chosen α, β, and χ values.
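Equations (1) and (2) translate directly into the selection rule used by a cluster head. The sketch below assumes illustrative values for α, β, χ and for the per-node measurements; it simply scores each slave node by equation (1) and picks the maximizer.

```python
# Node utility (1) and system utility (2), with cluster-head selection.
# alpha, beta, chi and the per-node numbers below are illustrative values.

def node_utility(phi, comm_cost, energy, alpha=1.0, beta=0.5, chi=2.0):
    """Eq. (1): mu_i = alpha*phi(Z_i) - beta*theta(j,i) + chi*eps(i)."""
    return alpha * phi - beta * comm_cost + chi * energy

def system_utility(nodes, **weights):
    """Eq. (2): mu_s(t) = sum of node utilities over the cluster."""
    return sum(node_utility(*n[1:], **weights) for n in nodes)

def select_node(nodes, **weights):
    """Cluster head picks the slave node with the best node utility."""
    return max(nodes, key=lambda n: node_utility(*n[1:], **weights))

if __name__ == "__main__":
    # (node id, information utility, communication cost, residual energy)
    cluster = [("n1", 0.9, 0.4, 0.2), ("n2", 0.7, 0.2, 0.8), ("n3", 0.5, 0.1, 0.9)]
    print(select_node(cluster))               # highest-utility node, default weights
    print(select_node(cluster, chi=5.0))      # weight chi up when cluster energy is low
    print(round(system_utility(cluster), 3))  # eq. (2) over the whole cluster
```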

Model-Driven Data Fusion Task Scheduling. The sensor network topology follows a hierarchical architecture with four kinds of nodes: base station, data fusion node, multi-hop routing node, and source node. Networked sensor nodes are organized into N clusters with n source nodes in each cluster. Two kinds of data are produced in each cluster: original data measured by source nodes and fused data processed by fusion nodes. The total amounts of original data (M) and fused data (D) are known, as specified by the sensing task and the high-level task of the application, respectively. To formulate the scheduling models, we assume that measuring, communication, and processing times are compound measures, including all possible packet loss, synchronization overheads, and other uncertain factors. Due to space limits, we only describe the basic components; for more details, please refer to our earlier work [8], [9], [10].

V. SELF-PROTECTION: DYNAMIC KEY MANAGEMENT AND REPUTATION MANAGEMENT

ActiveKey: Dynamic Key Management. Key management schemes can be classified as static or dynamic according to whether keys are predistributed and kept largely unchanged, or predistributed and then dynamically updated by periodic or frequent rekeying operations. Most work has been devoted to static key predistribution [11], [12], [13], [14] and polynomial-based key establishment schemes [15], [16]. In contrast, dynamic key management has received little attention. Because of node capture and physical tampering, static key management is inadequate and does not respond quickly enough under attack. With dynamic key management, a captured node can be isolated from the trusted nodes quickly and effectively.

Fig. 3. Active Key Components. (Figure omitted.)

To enable efficient dynamic key management, we propose the novel notion of an active/proactive key management scheme. To the best of our knowledge, this is the first such attempt in the sensor network security community. As shown in Fig. 3, an active key includes not only key material but also active operations to generate dynamic session keys. Following the notion of active keys, we can classify all previous work as passive key management schemes. Different from passive keys, an active key consists of data (the passive part) and operations (the active part). The passive part can include traditional keys, deceptive data, and the state of the active key; the active part includes the operations, which can be defined in simple notations, such as arithmetic (+, -, *, /) or logic (AND, OR, XOR) operations; in a simple coding language, such as a limited assembly language following a simplified ISA (instruction set architecture) in a virtual machine; or in a key-handler style, such as a local procedure call or a remote procedure call pointer. To generate a valid session key, the host node can invoke the operations on the passive part of the active keys alone or on combined data including the passive part and local specifications of the sensor node. We adopt existing deterministic key predistribution schemes, such as the Blom and polynomial schemes [15], [16]. For key revocation and dynamic update, two options are available: centralized schemes [11], where the base station decides which nodes have been compromised according to information collected from all nodes and announces a revocation message and new active keys to correct nodes; and distributed schemes [12], [17], where revocation decisions are made collaboratively in a local cluster or neighborhood of suspected nodes by distributed voting and state inspection of active keys.
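The sketch below illustrates the active-key idea with a deliberately toy construction: the passive part holds predistributed key material and a state counter, and the active part is a small operation pipeline (XOR and hash here) that derives a fresh session key from the passive part plus a node-local specification. The concrete operations, sizes, and encodings are ours for illustration; the paper's scheme builds on Blom/polynomial predistribution [15], [16] rather than this toy pipeline.

```python
# Toy illustration of an active key: data (passive part) + operations (active part).
import hashlib

class ActiveKey:
    def __init__(self, passive: bytes, ops):
        self.passive = passive      # predistributed key material
        self.ops = ops              # active part: list of simple operations
        self.state = 0              # updated on every rekeying round

    def session_key(self, local_spec: bytes) -> bytes:
        """Invoke the active operations on the passive part combined with
        node-local specifications to derive a dynamic session key."""
        material = self.passive + local_spec + self.state.to_bytes(4, "big")
        for op in self.ops:
            material = op(material)
        self.state += 1             # proactive update: next call yields a new key
        return material[:16]

# Example active operations (illustrative only): XOR with a mask, then hash.
def xor_mask(mask: bytes):
    return lambda m: bytes(b ^ mask[i % len(mask)] for i, b in enumerate(m))

def digest(m: bytes) -> bytes:
    return hashlib.sha256(m).digest()

if __name__ == "__main__":
    ak = ActiveKey(passive=b"predistributed-secret", ops=[xor_mask(b"\x5a"), digest])
    k1 = ak.session_key(b"node-17")
    k2 = ak.session_key(b"node-17")   # state advanced -> different session key
    print(k1.hex(), k2.hex(), k1 != k2)
```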

S3Trust: Robust and Scalable Reputation Management. To further protect sensor grid systems, we propose a reputation management system, S3Trust. It addresses the dilemma between completeness in reputation calculation and network overload. S3Trust features an HTrust aggregation scheme inspired by the h-index algorithm [18]. Leveraging the robustness of that algorithm under incomplete and uncertain circumstances, S3Trust offers a robust reputation evaluation mechanism for both individual and group trust with minimal communication and computation overheads. S3Trust further considers spatial and temporal information to update and adapt trust scores for individuals and groups. S3Trust consists of the following components: a local trust manager, a group trust manager, and a distributed trust database. The local trust manager is responsible for local trust score calculation and trust evaluation. A trust seeker queries and locates trust data over the network and makes trust inferences among peers. The group trust manager aggregates group reputation. Preliminary results are demonstrated in the experiment section.
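An h-index-style aggregate is attractive here because it is insensitive to a few missing or outlier ratings. The sketch below computes an HTrust-like score from integer feedback ratings: the largest h such that at least h ratings are at least h. The rating scale and the use of plain integers are our assumptions for illustration; the paper does not fix these details.

```python
# h-index-style trust aggregation (illustrative; rating scale is assumed).

def htrust(ratings):
    """Largest h such that at least h of the peer's ratings are >= h.
    Robust to a few missing or extreme ratings, mirroring the h-index [18]."""
    ranked = sorted(ratings, reverse=True)
    h = 0
    for i, r in enumerate(ranked, start=1):
        if r >= i:
            h = i
        else:
            break
    return h

if __name__ == "__main__":
    honest = [9, 8, 8, 7, 6, 5, 5, 4]      # consistently good service
    gamed = [10, 10, 1, 1, 1, 1, 1, 1]     # a few inflated ratings only
    print(htrust(honest))  # 5 -> high trust
    print(htrust(gamed))   # 2 -> inflated outliers barely move the score
```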

VI. EXPERIMENTAL AND SIMULATION RESULTS

The objective of the experiments presented in this section is to evaluate and demonstrate system performance and scalability, and the system's ability to support wide-area networked sensor deployments for scientific applications. In addition, the performance and robustness of the S3Trust scheme are also demonstrated.

Fig. 4. Messaging Cost for Complex Queries. (Plot omitted; it shows messaging time in ms for complex queries versus the number of peers, from 4 to 64, on a LAN, ORBIT, and PlanetLab.)

A. Performance of Content-Based Aggregation

As shown in Fig. 4, the experiment was conducted on the ORBIT testbed [19], and the results are compared with data aggregation on PlanetLab [20] and a local LAN. The ORBIT testbed, composed of hundreds of wireless nodes, is used for this set of experiments. We assume that a huge amount of sensing data is generated, but only a small portion of it is queried and processed. The goal of the experiment is to demonstrate the ability of the pervasive data services to provide (near) real-time data aggregation from sensor networks. PlanetLab is a large-scale heterogeneous distributed environment composed of interconnected sites on a global scale. The goal is to demonstrate the ability of the pervasive data services to support applications even in unreliable and highly dynamic environments such as PlanetLab, which essentially represents an extreme case of a Desktop Grid environment. The Meteor data aggregation service is currently deployed on about 300 nodes on ORBIT, which were used in the experiment presented below. In the experiment, each machine ran an instance of the stack, randomly joined the overlay during the bootstrap phase, and served as a master or worker node with one worker instance per node. The sensor field is partitioned into about 10 × 5 coordination clusters, each with about 6 sensor nodes; 50 sensor nodes acted as cluster heads and the others served as cluster members.

B. Support of Grid Applications

We first evaluate and compare the number of required messages for in-network and external data quality control, as shown in Fig. 5(a). External data quality control with a larger network size produces the largest number of messages flowing through the sensor network, since it needs to transfer locally aggregated information to the remote sink. In-network data quality control has the same order of message count in theory; however, since the evaluation of data quality is localized (e.g., close to the targeted region), no messages are required to relay the results outside the network. As expected, the number of messages required per unit time decreases as the aggregate interval increases. Fig. 5(b) shows the latency of one data quality cycle for the in-network and external data quality control schemes.

Fig. 5. Comparing performance for in-network and external data quality control. ((a) Overall message size in KB versus average aggregate interval in ms; (b) DQC cycle in ms versus average query interval in ms; curves for in-network DQC, external DQC with N = 16, and external DQC with N = 32. Plots omitted.)

Localized data processing shortens this cycle, which is especially attractive for larger sensor networks.

C. Sensing Task Scheduling

Through numerical simulations, we investigate the effects of three system parameters (measurement, communication, and processing speed) on the finish time and energy consumption. In the simulation, the amount of data that all source nodes need to measure is M = 1000. We assume that the required informative data after processing the original data gathered at the base station is L = 100. We adopt the following energy parameters: transmitting one unit of sensor reading over a unit distance takes etx = 200 nJ; receiving one unit of sensor reading consumes erx = 250 nJ; measuring one unit of sensor reading needs es = 100 nJ; processing one unit of observation consumes ep = 30 nJ; and the distance between a sender and a receiver is d = 100 m. There are 30 source nodes in each cluster, and the number of routing nodes between a source and a fusion node is 10. (Note that es depends on the sensors used for particular applications and could be of different orders of magnitude for different applications.)

Fig. 6 shows the finish time versus the number of clusters. Since the number of clusters equals the number of data fusion nodes, increasing N increases both data fusion nodes and source nodes. In Fig. 6(a), the value of Sm is chosen from 0.8 to 1.6, while Sc and Sp are fixed at 1.0. The figure shows that measurement speed hardly affects the finish time because sensing takes a small fraction of the entire execution time. Figs. 6(b) and 6(c) show that when the communication or processing speed of sensor nodes increases, the finish time is reduced; however, the effect of Sp is not as significant as that of Sc, and the five lines in Fig. 6(c) converge as N becomes large.

Fig. 6. Measurement speed, communication speed, and processing speed against the finish time as the number of clusters increases. (a) Sm varies with Sc and Sp fixed; (b) Sc varies with Sm and Sp fixed; (c) Sp varies with Sm and Sc fixed. (Plots omitted.)
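Using the stated parameters, the per-cluster energy of one sensing round can be estimated directly. The sketch below adds up measuring, relaying over the routing hops, and fusion-processing costs for M units of original data; the hop accounting is our simplification of the scheduling models detailed in [8], [9], [10], so treat the formulas as illustrative rather than the exact model.

```python
# Back-of-the-envelope energy for one sensing round in a single cluster,
# using the parameters in the text; the hop accounting is a simplification.

E_TX = 200e-9   # J per unit of data per unit distance transmitted
E_RX = 250e-9   # J per unit of data received
E_S  = 100e-9   # J per unit of data measured
E_P  = 30e-9    # J per unit of observation processed
D    = 100.0    # m, sender-to-receiver distance
M, L = 1000, 100          # original data units measured; fused units required
ROUTING_NODES = 10        # relays between a source and a fusion node

def cluster_round_energy():
    measure = M * E_S
    # Each unit is transmitted by the source and by every relay (hops = 11),
    # and received by every relay and by the fusion node (also 11 receptions).
    hops = ROUTING_NODES + 1
    transport = M * hops * (E_TX * D + E_RX)
    fusion = M * E_P                       # process M raw units down to L
    return measure + transport + fusion

if __name__ == "__main__":
    e = cluster_round_energy()
    print(f"~{e * 1e3:.2f} mJ per cluster per round")  # dominated by transport
```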

D. Trust Management

To evaluate the performance of our trust management scheme S3Trust, we mainly use the detection rate of malicious peers. As shown in Figs. 7 and 8, the initial experiments simulate networks with 100 and 500 nodes, in which 20% of the peers are malicious and 15% of the peers have bad credibility. The trust ratings accurately reflect each peer's actual behavior, and the percentage of malicious service also decreases with time. We observe that in the larger network, S3Trust still functions well and the convergence time increases within a reasonable range. Both results demonstrate that the S3Trust system detects most of the malicious peers after a few aggregation iterations. On average, the nodes transmit a modest number of messages and incur acceptable communication overheads.

Fig. 7. Malicious Service Percentage vs. Time (100 nodes). (Plot omitted.)

Fig. 8. Malicious Service Percentage vs. Time (500 nodes). (Plot omitted.)

VII. RELATED WORK

Some research groups have recently started to build middleware systems to manage sensor networks [21], [22]. The RUNES middleware [21] aims to develop software components for networked embedded systems; it features reconfigurability and dynamic linking on top of the Contiki operating system. More recently, SensorWeb initiatives have been designed to spread collected data and processed information throughout the Internet. Aggregation in large P2P distributed systems is often based on a hierarchical architecture. Data aggregation is an essential functionality in sensor networks and has been addressed by a number of research efforts.


The Tiny AGgregation (TAG) system [23] provides high-level programming abstractions that view the sensor network as a distributed database and enable users to execute queries expressed in a suitable relational language. We propose to take advantage of the self-managing features offered by the notion of autonomic computing [24] to integrate grid and sensor systems.

VIII. CONCLUSION

We presented ASGrid, a comprehensive autonomic management framework for managing hybrid sensor grid systems and applications. The ASGrid framework ushers in the novel concept of an autonomic sensor grid system. In particular, we described the self-configuration, self-optimization, and self-protection mechanisms under the framework. Self-configuration is realized through content-based aggregation. Self-optimization is fulfilled through systematic utility-based resource management and data fusion task scheduling. We then presented two security mechanisms: low-level dynamic key management and high-level trust and reputation management. Experiments and simulations were presented to demonstrate part of the functionality and performance of ASGrid. We will continue our research toward full-fledged self-managing hybrid sensor grid systems.

REFERENCES

[1] M. Parashar, V. Matossian, H. Klie, S. G. Thomas, M. F. Wheeler, T. Kurc, J. Saltz, and R. Versteeg, "Towards dynamic data-driven management of the Ruby Gulch waste repository," in ICCS 2006, 2006.
[2] G. Werner-Allen, K. Lorincz, M. Ruiz, O. Marcillo, J. Johnson, J. Lees, and M. Welsh, "Monitoring volcanic eruptions with a wireless sensor network," in Second European Workshop on Wireless Sensor Networks, 2005.
[3] V. A. Kottapalli, A. S. Kiremidjian, J. P. Lynch, E. Carryer, T. W. Kenny, K. H. Law, and Y. Lei, "Two-tiered wireless sensor network architecture for structural health monitoring," in SPIE's 10th Annual International Symposium on Smart Structures and Materials, 2003.
[4] A. Mainwaring, J. Polastre, R. Szewczyk, D. Culler, and J. Anderson, "Wireless sensor networks for habitat monitoring," in WSNA'02, Atlanta, Georgia, September 2002.
[5] K. Szlavecz, A. Terzis, R. Musaloiu-E., J. Cogan, S. Small, S. Ozer, R. Burns, J. Gray, and A. S. Szalay, "Life under your feet: An end-to-end soil ecology sensor network, database, web server, and analysis service," Microsoft Research Technical Report MSR-TR-2006-90, 2006.

[6] N. Jiang, C. Schmidt, V. Matossian, and M. Parashar, "Content-based middleware for decoupled interactions in pervasive environments," Technical Report 252, Wireless Information Network Laboratory (WINLAB), Rutgers University, April 2004.
[7] S. Madden, J. Hellerstein, and W. Hong, "TinyDB: In-network query processing in TinyOS, version 0.4," September 2003.
[8] H. Kang and X. Li, "Power-aware sensor selection in wireless sensor networks," IPSN'06 Work-in-Progress, 2006.
[9] H. Kang, X. Li, and P. Moran, "Power-aware Markov chain based tracking approach for wireless sensor networks," in WCNC'07, 2007.
[10] X. Li, H. Kang, and J. Cao, "Coordinated workload scheduling in hierarchical sensor networks for data fusion applications," in IEEE MASS'07, 2007.
[11] L. Eschenauer and V. Gligor, "A key management scheme for distributed sensor networks," in ACM CCS'02, 2002.
[12] H. Chan, A. Perrig, and D. Song, "Random key predistribution schemes for sensor networks," in IEEE Symposium on Security and Privacy, 2003.
[13] W. Du, J. Deng, Y. Han, and P. Varshney, "A pairwise key predistribution scheme for wireless sensor networks," in ACM CCS'03, 2003.
[14] D. Liu and P. Ning, "Establishing pairwise keys in distributed sensor networks," in ACM CCS'03, 2003.
[15] R. Blom, "An optimal class of symmetric key generation systems," in EUROCRYPT '84, 1985.
[16] C. Blundo, A. De Santis, A. Herzberg, S. Kutten, U. Vaccaro, and M. Yung, "Perfectly-secure key distribution for dynamic conferences," in CRYPTO '92, 1992.
[17] H. Chan, V. D. Gligor, A. Perrig, and G. Muralidharan, "On the distribution and revocation of cryptographic keys in sensor networks," IEEE Transactions on Dependable and Secure Computing, vol. 2, no. 3, 2005.
[18] J. E. Hirsch, "An index to quantify an individual's scientific research output," Proceedings of the National Academy of Sciences, 2005.
[19] "ORBIT project," http://www.orbit-lab.org/.
[20] "PlanetLab project," http://www.planet-lab.org/.
[21] P. Costa, G. Coulson, C. Mascolo, G. Picco, and S. Zachariadis, "The RUNES middleware: A reconfigurable component-based approach to networked embedded systems," in PIMRC'05, IEEE, 2005.
[22] G. Hackmann, C.-L. Fok, G.-C. Roman, and C. Lu, "Agimone: Middleware support for seamless integration of sensor and IP networks," in DCOSS'06, 2006.
[23] S. Madden, M. J. Franklin, J. M. Hellerstein, and W. Hong, "TAG: A Tiny AGgregation service for ad-hoc sensor networks," in Proceedings of the USENIX Symposium on Operating Systems Design and Implementation, 2002.
[24] J. O. Kephart and D. M. Chess, "The vision of autonomic computing," IEEE Computer, vol. 36, pp. 41–50, 2003.
