Computer Communications 29 (2006) 3467–3482 www.elsevier.com/locate/comcom
Coverage-adaptive random sensor scheduling for application-aware data gathering in wireless sensor networks ☆

Wook Choi a,*, Sajal K. Das b

a Telecommunications R&D Center, Samsung Electronics, Suwon 442-600, South Korea
b Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX 76019-0015, USA
Available online 20 March 2006
Abstract

Due to the application-specific nature of wireless sensor networks, application-aware algorithm and protocol design paradigms are highly desirable in order to optimize the overall network performance for the type of application at hand. In this paper, we propose a novel coverage-adaptive random sensor scheduling for application-aware data gathering in wireless sensor networks, with the goal of maximizing the network lifetime. The underlying idea is to select in each round (approximately) k data reporters (sensors) that meet the desired sensing coverage specified by the users/applications. The selection of these k data reporters is based on geometric probability and a randomization technique with constant computational complexity, without exchanging control (location) information with local neighbors. The k data reporters selected for a round form a data gathering tree to eliminate the wait-and-forward delay that may result from the random sensor scheduling, and are scheduled to remain active (with transceivers on) during that round only, thus saving energy. All sensors have an equal opportunity to report sensed data periodically, so the entire monitored area is covered within a fixed delay. Simulation results show that our proposed random sensor scheduling leads to a significant conservation of energy with a small trade-off between coverage and data reporting latency, while meeting the coverage requirement given by the users/applications.
© 2006 Elsevier B.V. All rights reserved.

Keywords: Wireless sensor networks; Sensor data gathering; Random sensor scheduling
1. Introduction

Wireless sensor networks are distinguished from traditional ad hoc networks by their high node density and extremely limited resources such as bandwidth, energy, computational capability, and memory [1]. They can be deployed both indoors and outdoors, substituting for our sensory organs in inaccessible or inhospitable areas, for a variety of applications such as environment or equipment monitoring, smart homes/smart spaces, intrusion detection, security and surveillance, and space exploration.
☆ Supported by NSF ITR Grant IIS-0326505. This work was partially presented at INFOCOM 2005.
* Corresponding author.
E-mail addresses: [email protected] (W. Choi), [email protected] (S.K. Das).
doi:10.1016/j.comcom.2006.01.033
In most of these applications, sensors operate on severely constrained energy (battery power), which often cannot be replenished, to complete a given task. Therefore, an important challenge in designing algorithms and protocols for wireless sensor networks is to make them energy-efficient as well as application-aware so as to maximize the network lifetime. Fundamentally, sensor networks are application-specific data gathering platforms in which sensors continuously sense their vicinity (their sensing coverage) and report the sensed results to a data gathering point [5]. The reporting frequency depends on the communication model used, which can be classified as continuous, event-driven, on-demand, or hybrid, depending on the type of application [12,15]. The continuous model requests all sensors to transmit their sensed data periodically while they are alive. In the event-driven model, sensors start reporting their sensed data only
when a specific event occurs, whereas in the on-demand model they report sensed data only at the users' request.

Due to the high node density, multiple sensors are likely to generate and transmit redundant sensed data, which results in unnecessary energy consumption and hence significantly decreases the network lifetime. Considering that the energy consumed by wireless data transmission is the most critical component, minimizing the number of data transmissions by eliminating redundancy saves a significant amount of energy. Data-centric routing [10] attempts to reduce duplicate data transmissions by aggregating multiple packets cached for a certain amount of time, thereby conserving energy at the cost of some transmission delay. In [14], a sensing coverage preserving scheme is proposed which turns off the sensors whose entire sensing area is overlapped by other sensors. More recently, two elegant algorithms, called connected sensor cover [7] and coverage configuration protocol [17], have been proposed that consider the coverage and connectivity problems simultaneously. The former selects a minimum number of sensors to cover a specified area for query execution, thus reducing unnecessary energy consumption from redundant sensing. The latter selects a minimum number of sensors to guarantee that any point within a monitored area is covered by at least K sensors. These protocols decide a relatively small set of (connected) active sensors by running an algorithm with relatively high computational complexity and exchanging control information with local neighbors in order to cover the entire monitored area 100% in each round. The execution and implementation of such algorithms, however, are challenging in wireless sensor networks due to the severe resource limitations and high node density. In fact, finding the smallest set of connected sensors that completely covers a given monitored area is an NP-hard problem [7].

1.1. Motivation and problem definition

Our motivation lies in the fact that, depending on the type of application, energy conservation can be further enhanced while still meeting the user's requirements such as data delivery latency and sensing coverage of a monitored
area [3]. Indeed, in some applications the network lifetime is much more critical than covering the entire monitored area 100% in every round. For example, for a sensor network deployed for statistical study of scientific measurements in a monitored area, it may be enough if the network covers approximately 80% of the area on average in each data reporting round, on the condition that sensed results covering the entire area can be collected within a fixed delay. Therefore, for more efficient and effective data gathering, sensors need to be configured more intelligently depending on application-specific requirements such as end-to-end delay, event-missed rate, and desired sensing coverage [2].

Fig. 1 illustrates application-aware data gathering based on a trade-off between coverage and data reporting latency. The black solid dots within a small circle, such as s_1, s_2, s_3, ..., s_6 in both Figs. 1(a) and (b), represent the currently selected sensors that meet the desired sensing coverage (DSC) specified by the user/application, whereas the hollow dots in Fig. 1(b) represent the sensors selected in a previous round. A large circle represents the sensing range of a currently selected sensor. Suppose that the first set of six selected sensors in Fig. 1(a) covers a desired portion of the area but not the entire monitored area. The shaded area in Fig. 1(a) is then covered by the second set of six selected sensors, as shown in Fig. 1(b). Thus, the entire monitored area can be sensed after two consecutive data reporting rounds, i.e., within a fixed delay. The problem in this data gathering is therefore the selection of a minimum number of sensors satisfying the desired sensing coverage in each round while providing 100% coverage of the monitored area within a fixed delay. Formally, we define the problem as follows:

Problem definition. Consider a set of N sensors placed over a monitored area Q by a random deployment scheme such that each sensor i has sensing region SR_i. From the N sensors, a minimum number k is to be chosen in each data reporting round such that $((\cup_{i=1}^{k} SR_i) \cap Q) \geq DSC$ for each round. With the selected k sensors in each round, the entire monitored area Q has to be covered within a fixed delay, T.
Fig. 1. Illustration of application-aware data gathering based on a trade-off between coverage and data reporting latency.
Let Δt_j be the time duration of data reporting round j, with sensing coverage $SC_j = ((\cup_{i=1}^{k} SR_i) \cap Q) \geq DSC$. Then δ consecutive reporting rounds are required such that

$\left( (\cup_{j=1}^{\delta} SC_j) \cap Q = Q \right) \wedge \left( \sum_{j=1}^{\delta} \Delta t_j \leq T \right).$
To solve this problem, we apply results from geometric probability on the overlap area when a circle is randomly placed over a geometric figure [6], together with a randomization technique.
1.2. Our contributions

In this paper, we propose a novel coverage-adaptive random sensor scheduling for application-aware data gathering in wireless sensor networks. The ultimate goal is to maximize energy conservation and thus the network lifetime. Our contributions are summarized as follows:

• We introduce a novel concept of desired sensing coverage (DSC), which may vary depending on the type of application.
• The proposed random sensor scheduling is based on a disjoint random sensor selection (DRS) scheme that decides in each data reporting round (approximately) k on-duty (active) data reporters (sensors) which are sufficient for the DSC requested by the user/application. Off-duty sensors cache sensed results and wait for their turn to report. Thus, the parts of the area not covered by the current set of selected k sensors are covered by a subsequent set of k sensors. This incurs some delay but saves energy. The group of k sensors selected in each round forms a data gathering tree (DGT) rooted at the data gathering point, with the help of off-duty sensors if needed, to eliminate the wait-and-forward delay of selected reporters which may result from the proposed scheduling. Only the sensors on the DGT are scheduled to remain active with their transceivers on for that round.
• The computational complexity of the DRS scheme is constant (i.e., independent of network density and size), thereby providing high scalability. In addition, sensors do not use (periodic) control information exchange with local neighbors. Thus, our DRS-based sensor scheduling is well suited for task-specific wireless sensor networks which are required to run for a long time under highly limited resource constraints.
• We measure the competitiveness of our application-aware data gathering against 100%-coverage data gathering schemes by presenting an analytical result on the immediate reporting capability of each sensor. Experimental results demonstrate that our DRS-based coverage-adaptive random sensor scheduling can lead to a significant conservation of energy while meeting the DSC specified by the user/application. It is also shown that the average data reporting latency is hardly affected when the DSC is greater than 80%. The success ratio of immediate data reporting by at least one sensor is greater than 94% when DSC ≥ 50%.

The remainder of this paper is organized as follows. Section 2 presents the network model. Section 3 defines the DSC and describes how to find the minimum k sensors to meet a given DSC. Section 4 presents the proposed coverage-adaptive random sensor scheduling, whose time-critical event detection capability is analyzed in Section 5. We present simulation results and discuss further aspects of our proposed sensor scheduling in Sections 6 and 7, respectively. Section 8 concludes the paper.

2. Network model
We define a wireless sensor network as an undirected connected graph G = (V, E), where V is the set of nodes denoting the sensors and a base station (BS) serving as the data gathering point or control center, and E is the set of edges representing bidirectional wireless communication links between sensors (or between sensors and the base station) within radio range. A large number of homogeneous sensors are deployed with high node density over a 2-D geographical area. They learn their local connectivity at network deployment time and also adapt to topology changes caused by sensor failures. In this paper, we mainly focus on sensor networks formed by uniformly randomly distributed static sensors with one data gathering point, which could be static or dynamic. However, the application of the proposed scheme is not limited to such deployments only. Fig. 2 illustrates the network topology models.

Each node s_i ∈ V has its specific radio and sensing ranges, both of which are assumed to form circular areas with radius r. The circular radio and sensing range area is denoted by A^r_{s_i}. An edge, denoted by e(s_i, s_j) ∈ E, exists between two sensors s_i and s_j if d(s_i, s_j) ≤ r, where d(s_i, s_j) is the Euclidean distance between s_i and s_j. A sensor s_i can directly communicate with other nodes within A^r_{s_i} and can sense objects or phenomena within A^r_{s_i} with high accuracy. A sequence of edges in the graph G forms a routing path to the base station, denoted by P = ⟨e(s_1, s_2), e(s_2, s_3), ..., e(s_{i−1}, s_i), e(s_i, s*)⟩, where s_i is a sensor node, 1 < i ≤ |V| − 1, and s* is the base station. Thus, P is a multi-hop routing path and each node s_i in P acts as an individual router. The path P is used if s* is not directly reachable. A control message from the base station s* is delivered to the sensor nodes by means of flooding [9]. We introduce four basic definitions.

Definition 2.1. The set of local neighbors of s_i is defined as N_{s_i} = {s_j | d(s_i, s_j) ≤ r, i ≠ j, and s_j is another sensor or the base station}.

Definition 2.2. A data reporting round is a time unit in which each sensor generates a sensed result to be forwarded to the base station using the path P. The duration of a round is denoted by Δt.
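To make the graph model and Definition 2.1 concrete, here is a minimal Python sketch (our own illustration, not code from the paper) that builds the neighbor sets from sensor coordinates; the uniform random deployment and the example parameter values are assumptions taken from the simulation setup later in the paper.

import math
import random

def deploy(num_sensors, side, seed=1):
    """Uniformly random 2-D deployment over a side x side field (assumed example)."""
    rng = random.Random(seed)
    return [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(num_sensors)]

def neighbor_sets(positions, radio_range):
    """N_{s_i} = {s_j : d(s_i, s_j) <= r, i != j}, as in Definition 2.1."""
    n = len(positions)
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            if math.hypot(dx, dy) <= radio_range:
                nbrs[i].add(j)
                nbrs[j].add(i)
    return nbrs

# Example: 200 sensors on a 200 m x 200 m field, r = 30 m (values as in Section 6).
positions = deploy(200, 200.0)
nbrs = neighbor_sets(positions, 30.0)
print("average neighborhood size:", sum(len(v) for v in nbrs.values()) / len(nbrs))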
Fig. 2. Network topology models (single-hop and multi-hop communication between the sensors and the BS in the sensor field).
Definition 2.3. A reporting cycle, denoted by C, is the periodicity with which a sensor s_i reports its sensed data. As shown in Fig. 3, the reporting cycle contains δ = ⌊(|V| − 1)/k⌋ reporting rounds, where k is the number of sensors to be selected as active data reporters for the desired sensing coverage. Each round within C is denoted by R_ℓ, where 1 ≤ ℓ ≤ δ.

Definition 2.4. A data reporting group is a set of active data reporters in a given round. There are δ such disjoint groups, denoted by G_ℓ with 1 ≤ ℓ ≤ δ, such that each group G_ℓ maps to the round R_ℓ within C (refer to Fig. 3). Each sensor belongs to exactly one group, so that $\bigcap_{1 \leq \ell \leq \delta} G_\ell = \emptyset$ and $\bigcup_{1 \leq \ell \leq \delta} G_\ell = V \setminus \{s^*\}$. Sensors belonging to G_ℓ report their sensed data only during the selected round R_ℓ and then wait until they complete the reporting cycle. Note that all sensors have an equal opportunity to report what they sensed within C, thus covering the entire monitored area within a fixed delay.
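As a small illustration of Definitions 2.3 and 2.4 (our own sketch, with hypothetical variable names), the following code derives δ from |V| − 1 and k, groups sensors by their chosen round, and checks the disjoint-cover property.

def build_groups(round_choice, delta):
    """round_choice maps each sensor id to its chosen round in [1, delta].
    Returns the reporting groups G_1..G_delta of Definition 2.4."""
    groups = {l: set() for l in range(1, delta + 1)}
    for sensor, l in round_choice.items():
        groups[l].add(sensor)
    return groups

num_sensors, k = 20, 4
delta = num_sensors // k                 # delta = floor((|V| - 1) / k), here |V| - 1 = 20
round_choice = {i: (i % delta) + 1 for i in range(num_sensors)}   # any disjoint assignment
groups = build_groups(round_choice, delta)

# Every sensor is in exactly one group and the groups cover V \ {s*}.
assert sum(len(g) for g in groups.values()) == num_sensors
assert set().union(*groups.values()) == set(range(num_sensors))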
3. Desired sensing coverage and number of data reporters

In this section, we define the desired sensing coverage (DSC), which is a trade-off (negotiation) factor between coverage and data reporting latency. The user specifies the DSC as the desired level of "quality of service" to be achieved from sensor data gathering. The DSC represents a probabilistic percentage of covering any point within the entire monitored area. The question is: in order to meet the user-specified DSC, how many sensors do we need to select in each data reporting round? Before proceeding further, let us introduce the following three definitions.

Definition 3.1. A monitored area, Q, is the actual area to be monitored by the sensors. We consider this area to be an a × a square.

Definition 3.2. A sensor-deployed area, D, is a square area with rounded corners that includes all sensors which can have an effect on covering Q (refer to Fig. 4). When sensors are randomly deployed over the monitored area Q, it is likely that some of them are placed slightly beyond the boundary of Q due to the inaccuracy of random sensor deployment. This is why the sensor-deployed area D needs to be defined to account for the probabilistic sensing coverage. The distance from the boundary of Q to the boundary of D is equal to r (the radius of the sensing range). Thus, the circular sensing range of a sensor residing in D − Q is not fully overlapped with the area Q.
Fig. 3. Illustration of reporting round (R_ℓ), cycle (C), and group (G_ℓ), 1 ≤ ℓ ≤ δ = 4.

Fig. 4. Monitored and sensor-deployed areas.
Definition 3.3. A probabilistic sensing coverage, w, is the probability that any point in Q is covered by the circular sensing range of at least one of the selected k sensors residing in D. Note that w is provided by the user or application as the DSC.

Let q be the subarea of Q covered by the circular sensing ranges of k ≤ |V| − 1 sensors residing in D = a² + 4ar + πr². Then the fraction q/Q is the user's DSC in each data reporting round. Any point (x, y) within Q is said to be covered if it is inside the circular sensing range of at least one selected sensor within D. To measure the probabilistic sensing coverage w, we first measure the probability $P_{\bar{q}}(x, y)$ that a point (x, y) within Q will not be covered by a selected sensor s_i. Let A(x, y) be a circular area of radius r centered at (x, y). Then the point will not be covered if s_i lies within D − A(x, y). Therefore, the probability that a point (x, y) is not covered by a randomly selected sensor is given by

$P_{\bar{q}}(x, y) = \iint_{D - A(x,y)} v(x, y)\, dx\, dy = \frac{D - A(x, y)}{D},$   (1)

where v(x, y) = 1/D is the probability density that s_i is located at a point (x, y) within D. Hence, the probability that a point is not covered by k (uniformly) randomly selected sensors is simply $(P_{\bar{q}}(x, y))^k$. Let $\bar{q}$ be the subarea of Q not covered. For the selected k sensors, the expected value of $\bar{q}$ is

$E[\bar{q}] = \iint_{Q} \left( P_{\bar{q}}(x, y) \right)^k\, dx\, dy.$   (2)
As mentioned earlier, we consider how much of Q can be covered by k randomly selected sensors. For this purpose, we first consider the fraction of Q not covered by these k sensors, which can be obtained by dividing $E[\bar{q}]$ by the area a² of Q. Based on Eqs. (1) and (2), the fraction of Q not covered by the k selected sensors is

$\frac{E[\bar{q}]}{a^2} = \left( \frac{D - A(x, y)}{D} \right)^k = \left( \frac{a^2 + 4ar}{a^2 + 4ar + \pi r^2} \right)^k.$

Finally, when k sensors are uniformly randomly selected from D, the probabilistic sensing coverage w, i.e., the probability that any point of Q will be covered by the circular sensing range of at least one of the selected k sensors, is given by

$w = 1 - \left( \frac{a^2 + 4ar}{a^2 + 4ar + \pi r^2} \right)^k.$   (3)
Therefore, the smallest integer k which satisfies the DSC, w, is

$k = \left\lceil \frac{\log(1 - w)}{\log\!\left( \frac{a^2 + 4ar}{a^2 + 4ar + \pi r^2} \right)} \right\rceil.$   (4)

We compared simulation results with the numerical results obtained from Eq. (4). Figs. 5(a) and (b) show, respectively, the results for covering a requested portion of the monitored area with varying network sizes and varying sensing ranges. Regardless of the size of the network field and the sensing range, we observe that the numerical and simulation results match well.
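To make Eqs. (3) and (4) concrete, the sketch below computes the coverage probability w for a given k, the smallest k for a desired DSC, and a Monte Carlo cross-check of Eq. (3). The parameter values are those used in the simulation setup of Section 6, and all function names are ours.

import math
import random

def coverage_probability(k, a, r):
    """Eq. (3): w = 1 - ((a^2 + 4ar) / (a^2 + 4ar + pi r^2))^k."""
    return 1.0 - ((a*a + 4*a*r) / (a*a + 4*a*r + math.pi*r*r)) ** k

def min_reporters(dsc, a, r):
    """Eq. (4): smallest integer k with coverage_probability(k, a, r) >= dsc."""
    ratio = (a*a + 4*a*r) / (a*a + 4*a*r + math.pi*r*r)
    return math.ceil(math.log(1.0 - dsc) / math.log(ratio))

def monte_carlo_coverage(k, a, r, trials=20000, seed=7):
    """Empirical check of Eq. (3): place k sensors uniformly in the sensor-deployed
    area D (the square Q expanded by r with rounded corners) and test whether a
    uniformly random point of Q is covered by at least one of them."""
    rng = random.Random(seed)
    def sample_in_D():
        # Rejection-sample D = set of points within distance r of the square Q.
        while True:
            x = rng.uniform(-r, a + r)
            y = rng.uniform(-r, a + r)
            cx = min(max(x, 0.0), a)   # closest point of Q
            cy = min(max(y, 0.0), a)
            if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                return x, y
    covered = 0
    for _ in range(trials):
        px, py = rng.uniform(0, a), rng.uniform(0, a)   # random point of Q
        sensors = [sample_in_D() for _ in range(k)]
        if any((sx - px) ** 2 + (sy - py) ** 2 <= r * r for sx, sy in sensors):
            covered += 1
    return covered / trials

a, r, dsc = 200.0, 30.0, 0.8
k = min_reporters(dsc, a, r)          # 38 for a 200 m field with r = 30 m (cf. Table 2)
print(k, coverage_probability(k, a, r), monte_carlo_coverage(k, a, r))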
Fig. 5. Comparison of simulation and analytical results in covering the requested portion of Q: (a) monitored area coverage ratio vs. number of selected sensors for network sizes of 100 × 100, 200 × 200, and 300 × 300 m² with a 30 m sensing range; (b) the same for sensing ranges of 20, 30, and 40 m in a 200 × 200 m² field.
4. Coverage-adaptive random sensor scheduling

Designing a sensor scheduling scheme without considering application-specific behavior leads to unnecessary sensor mode (active/sleep) switchings. It is observed in [16] that if a sensor switches its mode every 30 s, the amount of electric current consumed per hour is 0.059 mA, which would be sufficient to transmit data for 17.7 s. Hence, sensor scheduling should be application-aware in order to make it optimal in terms of energy conservation. Our proposed sensor scheduling is based on a disjoint random sensor selection (DRS), in which (approximately) k sensors meeting a given application-specific DSC are selected in each round and are scheduled to remain active with their transceivers on for that round only, forming a data gathering tree (DGT). Eventually, δ groups of data reporters form δ DGTs, each serving as the set of active sensors (with transceivers on) in its round within the reporting cycle C.

4.1. Disjoint random sensor selection

The disjoint random sensor selection (DRS) scheme covers the entire monitored area within a fixed delay by allowing all sensors an equal chance to report within the cycle C. More specifically, every sensor selects one of the δ rounds within C and serves as a data reporter for the selected round only, thus guaranteeing that all sensors report their sensed data within a fixed delay while meeting a given DSC.

A user may impose other constraints along with the DSC. For example, every part of the monitored area may be required to be monitored with a uniform delay, or the monitoring pattern should not be learnable by an adversary attempting to circumvent the sensing activity; that is, the part of the monitored area left uncovered in each round should not be predictable. To cope with such constraints, we introduce two variants of DRS – fixed disjoint random sensor selection (F-DRS) and non-fixed disjoint random sensor selection (N-DRS). Both guarantee a disjoint set of data reporters in each round of C. They are described below.

• Fixed disjoint random sensor selection (F-DRS): Initially, each sensor randomly selects one reporting round from the δ rounds within C and keeps it as its reporting round until it is requested to reselect, so that the k sensors selected in each round are fixed as well as disjoint within C. Moreover, all sensors have a fixed data reporting latency of δ · Δt in every C (we assume that sensors can report at any time within their reporting round, so we take the mean value Δt/2 as the reporting latency within the selected round interval Δt; refer to Fig. 6(a)).

• Non-fixed disjoint random sensor selection (N-DRS): As in F-DRS, sensors using N-DRS select a reporting round randomly from the δ rounds. However, they repeat the selection procedure in every C, so that the k sensors selected in each round are memoryless as well as disjoint. Therefore, the monitoring pattern cannot be known beforehand, and the reporting latency of each sensor ranges from Δt to (2δ − 1)Δt (refer to Fig. 6(b)).

Now, let us define a reporting sequence to describe how the sensors realize the above selection schemes.

Definition 4.1. A reporting sequence of a sensor s_i, denoted by RS_{s_i}, is a sequence of bits. Each bit maps to a round R_ℓ (1 ≤ ℓ ≤ δ) in the cycle C, and hence the number of bits equals the number δ of rounds within C. The bit sequence is initialized to all zeros (i.e., off) and exactly one bit is flipped (i.e., on) depending on which round within C is selected. Thereby, RS_{s_i} denotes the round within C in which the sensor s_i reports its sensed data. For example, RS_{s_i} = "01000" for k = 4 and |V| − 1 = 20 represents that s_i selects the second round R_2 in C as its data reporting round.

To become a data reporter for only one specific round within the cycle C, each sensor s_i constructs its RS_{s_i} by running the algorithm Construct_RS presented in Fig. 7, where A[δ] is a bit array of length δ and RAND[1, δ] is a function which returns a random integer between 1 and δ based on a uniform random number generator. While operating, sensors report their sensed data only when the bit corresponding to the current round in A[δ] is equal to 1. Thereby, the k sensors selected in each round R_ℓ of the reporting cycle C are disjoint and each sensor transmits its sensed data exactly once within C. After a reporting cycle C, sensors using N-DRS acquire a new reporting bit sequence RS_{s_i} by running Construct_RS again; otherwise, they keep the initial RS_{s_i}.

As the DSC increases, the number of data reporting rounds within C decreases. Therefore, the larger the DSC specified by the user, the smaller is the delay in monitoring the entire area, whereas the energy conservation rate is inversely proportional to the DSC.
Fig. 6. Reporting latency in two consecutive cycles (δ = 3).
Fig. 7. Algorithm for constructing the reporting sequence (RS_{s_i}).
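Since the algorithm itself is given only as a figure, the following Python sketch mirrors Construct_RS as described in the text (a bit array A[δ] with exactly one uniformly chosen bit set), together with the F-DRS/N-DRS reselection policy; the class and function names are ours, not the authors'.

import random

def construct_rs(delta, rng=random):
    """Construct_RS (Fig. 7): return a bit list of length delta with exactly one
    bit set, marking the single round in which this sensor reports."""
    rs = [0] * delta
    rs[rng.randint(1, delta) - 1] = 1          # RAND[1, delta], uniform
    return rs

class Sensor:
    def __init__(self, delta, fixed=True, seed=None):
        self.rng = random.Random(seed)
        self.fixed = fixed                     # True: F-DRS, False: N-DRS
        self.delta = delta
        self.rs = construct_rs(delta, self.rng)

    def on_cycle_end(self):
        # F-DRS keeps the initial round; N-DRS redraws a round in every cycle C.
        if not self.fixed:
            self.rs = construct_rs(self.delta, self.rng)

    def reports_in(self, round_index):         # round_index in [0, delta)
        return self.rs[round_index] == 1

# Example: 20 sensors, k = 4  =>  delta = floor(20 / 4) = 5 rounds per cycle.
sensors = [Sensor(delta=5, fixed=False, seed=i) for i in range(20)]
for rnd in range(5):
    reporters = [i for i, s in enumerate(sensors) if s.reports_in(rnd)]
    print("round", rnd + 1, "->", len(reporters), "reporters")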
4.2. Active sensor set formation

The proposed DRS-based scheduling allows only k active data reporters during a reporting round, so an active reporter which does not have any neighbor with the same reporting sequence would have to hold its data until at least one of its neighbors becomes active. For instance, as shown in Fig. 8, sensor A can forward data immediately, while sensor B has to wait until sensor C becomes active. To eliminate such wait-and-forward delay, δ data gathering trees (DGTs) are constructed, and the sensors forming a DGT remain active for a round. The objective is to minimize the number of additional sensors which need to be involved so that the k selected data reporters each have at least one active neighbor, and thus to minimize the number of active sensors (with transceivers on) in a round. While constructing the DGT, each sensor s_i is supposed to

• Find a next-hop node on the DGT, called the upstream sensor, to reach the data gathering point (s*). This upstream sensor either has the same reporting sequence as RS_{s_i} or has the shortest hop distance to a sensor with the same reporting sequence as RS_{s_i}.
• Learn the reporting sequences of its neighbors and their reachability to each of the δ DGTs.

The data gathering point (s*) initiates the DGT construction by flooding a setup message, denoted by S, whose format is

$S = \{ RS_{s_j}, (c_1, O_1), (c_2, O_2), \ldots, (c_\ell, O_\ell) \}, \quad 1 \leq \ell \leq \delta,$

where s_j is the sender, c_ℓ is a hop counter, and O_ℓ is the sensor (origin) that reset c_ℓ. While the message S is being relayed, c_ℓ is reset to −1 or 1 depending on the reporting sequence RS_{s_j} of the sender s_j. The counter c_ℓ increases or decreases by one until it is reset; therefore, |c_ℓ| is the hop distance from O_ℓ to the receiver of this message. The hop distance is the main criterion for finding an upstream sensor. Initially (i.e., from s*), S = {0, (0, s*), (0, s*), ..., (0, s*)}.

Each sensor s_i maintains a forwarding record which includes the best candidate (upstream sensor) to reach the DGT of each of the δ data reporting groups. We denote this record by R_{s_i}, whose format is

$R_{s_i} = \{ (c_1, O_1, s_1), (c_2, O_2, s_2), \ldots, (c_\ell, O_\ell, s_\ell) \}, \quad 1 \leq \ell \leq \delta,$

where s_ℓ ∈ N_{s_i} (i ≠ ℓ) is the best candidate to reach the DGT of G_ℓ. Initially, all 3-tuples in R_{s_i} are set to (∞, −, −). Hereafter, the expression A.b means "b in A", so that the notations c_ℓ and O_ℓ in S and in R_{s_i} are distinguished; for example, "S.c_ℓ" represents "c_ℓ in S". Fig. 9 presents the procedure by which each sensor s_i generates a relay message S based on R_{s_i}.
Fig. 9. Algorithm for generating S to relay.
Sensors collect setup messages relayed by their neighbors for a fixed amount of time before relaying S themselves. They update R_{s_i} whenever a setup message S is received from a neighboring sensor. Importantly, for an update of a 3-tuple (c_ℓ, O_ℓ, s_ℓ) in R_{s_i} before broadcasting S, the condition (S.c_ℓ < R_{s_i}.c_ℓ) must be satisfied and the corresponding S.O_ℓ must be different from R_{s_i}.O_ℓ. However, once a sensor has broadcast S after the fixed amount of time, it attempts to update a tuple (c_ℓ, O_ℓ, s_ℓ) in R_{s_i} only if R_{s_i}.c_ℓ ≥ 1. For each update attempt after broadcasting S, the additional condition (S.O_ℓ ≠ the receiver of S) must be satisfied besides the two pre-broadcasting update conditions; otherwise a cycle would be formed on the DGT, resulting in a routing loop. Note that the post-broadcasting update is confined to sensors which cannot directly reach the data gathering point.

Suppose that s_i ∈ G_ℓ reports its sensed data during round R_ℓ. When s_i is ready to broadcast a setup message, it generates S by running the algorithm Generate_S shown in Fig. 9: it resets S.c_ℓ to −1 and becomes the origin of c_ℓ (i.e., s_i → S.O_ℓ) if R_{s_i}.c_ℓ ≥ 0; otherwise it only decreases S.c_ℓ by 1. On the other hand, for c_m (m ≠ ℓ), s_i resets c_m to 1 and becomes the origin of c_m (i.e., s_i → S.O_m) if R_{s_i}.c_m < 0; otherwise it only increases S.c_m by one (refer to Fig. 9). With this hop counter manipulation, a neighbor belonging to the same reporting group G_ℓ is considered first as the upstream node, if one exists; otherwise, a neighbor with the shortest hop distance to a sensor in G_ℓ is chosen (refer to Fig. 10, where dotted lines represent the radio connectivity).
Fig. 8. Wait-and-forward delay depending on neighbors' round selection results (δ = 3).
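Our reading of Generate_S (Fig. 9) and of the forwarding-record update rules can be summarized in the sketch below. The message and record layouts, and the handling of counters that have never been set, are our own assumptions; the sign convention (negative counters for the sender's own group, positive otherwise) is reconstructed from the text rather than taken from the authors' pseudocode.

import math

INF = math.inf

def init_record(delta):
    """Forwarding record R_{s_i}: one [c_l, origin, upstream] entry per group."""
    return [[INF, None, None] for _ in range(delta)]

def update_record(record, setup, has_broadcast, receiver_id):
    """Update the record from a received setup message
    S = {"sender": id, "counters": [(c_l, O_l), ...]} (layout assumed).
    Pre-broadcast rule:  accept if S.c_l < R.c_l and S.O_l != R.O_l.
    Post-broadcast rule: additionally require R.c_l >= 1 and S.O_l != receiver,
    which (per the text) avoids forming a routing loop on the DGT."""
    for l, (c, origin) in enumerate(setup["counters"]):
        cur_c, cur_o, _ = record[l]
        if not (c < cur_c and origin != cur_o):
            continue
        if has_broadcast and (cur_c < 1 or origin == receiver_id):
            continue
        record[l] = [c, origin, setup["sender"]]

def generate_setup(my_id, my_groups, record, delta):
    """Generate_S (Fig. 9), under our reading of the sign convention: negative
    counters mark the sender's own group(s); positive counters count hops from
    the nearest reporter of each other group.  Treatment of a counter that is
    still INF (nothing heard yet) is our own assumption."""
    counters = []
    for l in range(delta):
        c, origin, _ = record[l]
        if l in my_groups:
            if math.isinf(c) or c >= 0:
                counters.append((-1, my_id))       # reset and become the origin of c_l
            else:
                counters.append((c - 1, origin))   # already downstream of a G_l sensor
        else:
            if math.isinf(c) or c < 0:
                counters.append((1, my_id))        # reset and become the origin of c_m
            else:
                counters.append((c + 1, origin))
    return {"sender": my_id, "counters": counters}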
Finally, sensor s_i sends a join-request to its upstream node, say s_j, if this upstream node does not belong to G_ℓ. The join-request is a control message asking the upstream node to remain active during a specific reporting round. A sensor s_j receiving such a join-request therefore updates its RS_{s_j} to include the reporting round (R_ℓ) of the sender s_i. It then relays the join-request if 0 < R_{s_j}.c_ℓ; otherwise it discards the join-request without relaying it further. Refer to Fig. 10, where sensor s_4 with R_{s_4} = {(1, s_2, s_2), (−1, s_1, s_1)} discards the join-request received from s_5 without relaying it further, since the corresponding hop counter in R_{s_4} is negative.

Fig. 11 shows the DGT construction algorithm, where 1 ≤ ℓ ≤ δ and s_j ∈ N_{s_i}. The notation W_{s_i} in lines 2 and 12 represents a cache of s_i in which all setup messages received from neighbors are stored so that their reporting sequences can be retrieved. Another purpose of this collection is to cope with sensor failures and to support route load balancing, as in [4,8]. Fig. 12 illustrates the DGT construction for δ = 3, where the set of three numbers next to each node represents the hop counters in the S message it relayed by broadcast. Sensors s_6 and s_8 sent a join-request to s_1 and s_3, respectively, because their upstream nodes s_1 and s_3 do not belong to the same reporting group (i.e., c_2 in the final forwarding records of s_6 and s_8 has a positive hop counter). Consequently, the reporting sequences of s_1 and s_3 are updated to "011" and "110", respectively.
Fig. 11. Algorithm for constructing DGT.
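A condensed sketch of the join-request rule just described (our paraphrase; relay is a hypothetical callback, and the forwarding-record layout matches the sketch given after Fig. 8):

import math

def handle_join_request(rs, record, requested_round, relay):
    """On receiving a join-request for round R_l (Section 4.2): stay awake in
    that round too, and relay the request only while 0 < R_{s_j}.c_l, i.e. the
    nearest G_l reporter is still some positive number of hops away."""
    rs[requested_round] = 1                       # extend own listen schedule
    c, _origin, upstream = record[requested_round]
    if 0 < c < math.inf and upstream is not None:
        relay(upstream, requested_round)          # push the request one hop closer to G_l
    # otherwise the request is discarded (negative counter: a G_l sensor is already adjacent)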
4.3. Self-scheduling

Only k sensors are responsible for reporting their sensed data in each reporting round, so off-duty sensors do not have to keep their transceivers on unless they are requested to serve as additional sensors in forming the δ DGTs. Hence, we can further improve energy savings by scheduling only the sensors on a DGT to be active with their transceivers on in each round. In order to realize such sensor scheduling, the clocks of all sensors first need to be synchronized. An elegant sensor MAC protocol, called sensor medium access control (S-MAC), was proposed in [18]. This protocol not only offers time synchronization but also allows sensors to follow their own scheduling pattern for the application at hand. More specifically, S-MAC allows sensors to have their own periodic listen and sleep schedule and hence reduces energy consumption. During sleep, sensor nodes turn off their radio and set a timer to wake themselves up later. The durations of listening and sleeping are selected depending on the application scenario. The reporting sequence RS_{s_i} constructed by each sensor can be provided to S-MAC as an application-specific periodic listen and sleep schedule.
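As a simple illustration of how the final reporting sequence could drive an S-MAC-style duty cycle (cf. Fig. 13), the sketch below expands RS_{s_i} into per-cycle listen/sleep intervals; the mapping to actual S-MAC schedule exchange is outside the scope of the paper and is not modeled here.

def listen_sleep_schedule(rs, dt, cycles=1):
    """Expand a reporting sequence into (start, end, state) intervals.
    A sensor listens during every round whose bit is 1 (its own round plus any
    rounds added by join-requests) and sleeps otherwise."""
    schedule = []
    delta = len(rs)
    for c in range(cycles):
        base = c * delta * dt
        for l, bit in enumerate(rs):
            start, end = base + l * dt, base + (l + 1) * dt
            schedule.append((start, end, "listen" if bit else "sleep"))
    return schedule

# Example from Figs. 12 and 13: a sensor with RS = "110" (its own round plus one
# join-request) in a cycle of delta = 3 rounds of 10 s each.
for interval in listen_sleep_schedule([1, 1, 0], dt=10.0, cycles=2):
    print(interval)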
Fig. 10. Illustration of selecting upstream nodes and sending join-requests (sensors with RS_{s_i} = "10" belong to reporting group G_1, "01" to G_2, and "11" to both; the right-hand panels show the resulting DGTs for G_1 and G_2 rooted at s*, and s_4 updates its RS_{s_4} from "10" to "11" upon receiving the join-request from s_5).
Fig. 12. Illustration of data gathering tree construction (δ = 3).

Fig. 13 shows an example of RS_{s_i}-based periodic listen and sleep scheduling, where the sensor types and their RS_{s_i} are those of Fig. 12. We recommend S-MAC as the link-layer protocol for implementing our proposed coverage-adaptive random sensor scheduling. Sensors schedule themselves to remain active with their transceivers on according to their RS_{s_i} (note that here RS_{s_i} is the final version updated after DGT construction). Thus, they need to remain active for at least the Δt of their reporting round during a reporting cycle C. However, some sensors remain active for more than Δt during the cycle, depending on the number of join-requests they received during DGT construction. While active, sensors transmit their sensed data to the data gathering point by simply forwarding it to their upstream node. However, if a sensor is not active but an immediate data report must be made due to a specific event detection, it needs at least one active neighbor belonging to the current DGT to complete the immediate report.

Fig. 14 illustrates how a sensor can transmit its sensed data without any wait-and-forward delay during the reporting cycle C. Sensor s_1 transmits its sensed data through its upstream node s_2 during Δt_i, its own reporting round. s_1 can also transmit data at any time after Δt_i, since it knows all of its neighbors' reporting sequences. For example, if s_1 needs to report immediately at some time within Δt_{i+1}, it can do so through neighbor s_3, which remains active during that reporting round. As a result, the capability of immediate data reporting depends on the number of neighbors and the distribution of their reporting round selections. We analyze this capability probabilistically in the following section.
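A tiny sketch of the forwarding decision illustrated in Fig. 14 (our own illustration; the rs values are assumed to be the final, post-DGT reporting sequences):

def forward_target(current_round, my_rs, neighbor_rs, upstream):
    """Choose where to send a report right now: the upstream node during the
    sensor's own active round, otherwise any neighbor whose final reporting
    sequence marks it active in the current round (immediate reporting)."""
    if my_rs[current_round] == 1:
        return upstream
    for nbr, rs in neighbor_rs.items():
        if rs[current_round] == 1:
            return nbr
    return None   # no active neighbor: cache the report until the next reachable round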
Fig. 13. Illustration of RS_{s_i}-based periodic listen and sleep schedule for S-MAC.
5. Analysis of immediate data reporting capability

As mentioned above, only the sensors on a DGT keep their transceivers on for Δt. Thus, a sensor not in the current DGT should be able to access one of the active sensors in the current DGT to transmit its sensed data to the data gathering point; otherwise, the transmission is delayed until the sensor can access an active neighbor on a DGT. Therefore, in order to measure the immediate data reporting capability of a sensor s_i, we measure the probability $Pr_{act}^{s_i}$ that s_i can access an active neighbor belonging to the current DGT when it is not itself active. Since each sensor selects its reporting round from $[R_1, R_2, \ldots, R_\ell]$ (1 ≤ ℓ ≤ δ) with probability $P_{R_\ell} = 1/\delta$ (refer to Fig. 7), sensor s_i can expect to have $n(R_\ell) = P_{R_\ell} |N_{s_i}|$ neighboring sensors for each round. So, if s_i needs to immediately report what it sensed and $n(R_\ell) \geq 1$ for the current round, then $Pr_{act}^{s_i} > 0$. Suppose that all the $n(R_1)$ neighbors of s_i which are expected to select round R_1 draw another round $R_\ell$ (2 ≤ ℓ ≤ δ). In this case, if at least one neighbor s_j which was expected to select a round R_ℓ also draws R_1 instead of R_ℓ, then s_i can access the DGT of reporting group G_1 through the neighbor s_j even though none of the $n(R_1)$ neighbors selected round R_1. Therefore, when s_i is not active, $Pr_{act}^{s_i}$ is given by

$Pr_{act}^{s_i} = \left( 1 - (\overline{P}_{R_\ell})^{n(R_\ell)} \right) + (\overline{P}_{R_\ell})^{n(R_\ell)} \left( 1 - \left( 1 - Pr_{act}^{R_\ell} \right)^{\delta - 1} \right),$   (5)
Fig. 14. Immediate data reporting capability of a sensor (δ = 4).
where $\overline{P}_{R_\ell} = 1 - P_{R_\ell}$ and $Pr_{act}^{R_\ell}$ is the probability that round R_ℓ is chosen by at least one neighbor which was expected to draw a round different from R_ℓ. Let A denote the case "g out of n(R_ℓ) sensors fail to draw their expected round" and let B denote the case "at least one sensor among the g selects R_ℓ". Then case B occurs given that case A has occurred, and thus the probability that both cases occur is Pr(A ∩ B) = Pr(A)Pr(B|A). Hence, $Pr_{act}^{R_\ell}$ is derived as

$Pr_{act}^{R_\ell} = \sum_{g=1}^{n(R_\ell)} \left[ \binom{n(R_\ell)}{g} (\overline{P}_{R_\ell})^{g} (P_{R_\ell})^{n(R_\ell) - g} \sum_{j=1}^{g} \binom{g}{j} r^{j} \bar{r}^{(g-j)} \right],$   (6)

where $r = \frac{1}{\delta - 1}$ and $\bar{r} = 1 - r$. Fig. 15(a) shows the probability $Pr_{act}^{s_i}$ for varying δ and n(R_ℓ) in each round.

When an event occurs, the circular area of radius r centered at the event is defined as the event detection zone (EDZ), as illustrated in Fig. 16(a). All sensors residing in this EDZ are able to sense the event. However, not all of them can report their sensed result: only the sensors which have at least one neighbor with its transceiver on at that instant can report it. Therefore, the immediate event-detection reporting fails if none of these sensors can find such a neighbor when they sense a specific event. The probability $Pr_{succ}^{\aleph}$ that ℵ sensors in the EDZ will be able to complete the immediate reporting is given by

$Pr_{succ}^{\aleph} = \binom{E[|N_{s_i}|]}{\aleph} \left( Pr_{act}^{s_i} \right)^{\aleph} \left( 1 - Pr_{act}^{s_i} \right)^{E[|N_{s_i}|] - \aleph},$

where $E[|N_{s_i}|]$ is the expected number of neighbors within the EDZ, whose size is the same as the sensing area $A^r_{s_i}$. Since the entire EDZ may not lie within the monitored area, depending on the location of the event occurrence (refer to Fig. 16(b)), we get

$E[|N_{s_i}|] = \sum_{x=1}^{|V|-1} x \binom{|V|-1}{x} \left( \frac{E[A^r_{s_i}]}{D} \right)^{x} \left( 1 - \frac{E[A^r_{s_i}]}{D} \right)^{|V|-1-x},$   (7)

where $E[A^r_{s_i}]$ is the expected sensing (event detection zone) area. We omit how to measure it due to space constraints (it can be found in [2]). Fig. 15(b) shows the success probability $Pr_{succ}^{\aleph}$ of immediate reporting for a varying expected number of neighbors when δ = 5.
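A numeric sketch of Eqs. (5)–(7) as reconstructed above (the helper names are ours; the expected EDZ area E[A^r_{s_i}], derived in [2], is passed in as a parameter rather than re-derived):

from math import comb

def pr_act_round(n_l, delta):
    """Eq. (6): probability that round R_l is drawn by at least one of the n(R_l)
    neighbors that were expected to draw some other round."""
    p = 1.0 / delta                 # P_{R_l}
    pbar = 1.0 - p
    r = 1.0 / (delta - 1)           # prob. of picking R_l after missing the expected round
    rbar = 1.0 - r
    total = 0.0
    for g in range(1, n_l + 1):
        pr_a = comb(n_l, g) * (pbar ** g) * (p ** (n_l - g))
        pr_b_given_a = sum(comb(g, j) * (r ** j) * (rbar ** (g - j)) for j in range(1, g + 1))
        total += pr_a * pr_b_given_a
    return total

def pr_act_sensor(n_l, delta):
    """Eq. (5): probability that an inactive sensor can reach an active neighbor
    on the current DGT."""
    pbar = 1.0 - 1.0 / delta
    miss_all = pbar ** n_l          # none of the expected n(R_l) neighbors picked R_l
    return (1.0 - miss_all) + miss_all * (1.0 - (1.0 - pr_act_round(n_l, delta)) ** (delta - 1))

def expected_neighbors(num_sensors, exp_edz_area, field_D_area):
    """Eq. (7): expected number of neighbors inside the EDZ, i.e. the mean of a
    Binomial(|V|-1, E[A]/D) written out as in the paper."""
    p = exp_edz_area / field_D_area
    n = num_sensors - 1
    return sum(x * comb(n, x) * (p ** x) * ((1 - p) ** (n - x)) for x in range(1, n + 1))

def pr_succ(aleph, exp_nbrs, pr_act):
    """Unnumbered success-probability expression: aleph of the expected neighbors
    complete the immediate report (binomial form, as written in the text).
    Rounding E[|N_{s_i}|] to an integer is our choice."""
    n = int(round(exp_nbrs))
    return comb(n, aleph) * (pr_act ** aleph) * ((1 - pr_act) ** (n - aleph))

# Example: delta = 5 rounds, 4 expected neighbors per round (cf. Fig. 15(a)).
print(pr_act_sensor(4, 5))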
Fig. 15. (a) Success probability that a sensor s_i completes an immediate report, and (b) success probability that at least ℵ sensors complete the immediate report.
Fig. 16. Event occurrence and detection range.
6. Simulation study

A discrete-event simulator, called simlib [11], was used to implement the sensor selection schemes N-DRS and F-DRS, which form the basis of our proposed coverage-adaptive random sensor scheduling, and to collect statistical information. We focus mainly on fundamental issues such as immediate data reporting (time-critical event detection) capability, data reporting latency for 100% coverage, and energy conservation capability.

6.1. Performance metrics and methodology

We measure the performance of the proposed DRS-based sensor scheduling scheme by evaluating the following three metrics, with desired sensing coverages varying from 0.5 to 0.9 in 0.1 intervals:

• Data reporting latency for 100% coverage: the time difference between an event detection and its reporting to the data gathering point.
• Immediate data reporting capability (event detection fidelity): the fraction of event detections successfully reported by the sensors without any forwarding delay.
• Energy conservation capability: data transmission is the most energy-consuming component, so we consider the number of data transmissions at each data reporter as an index of energy conservation capability.

We compare the data reporting latency and energy conservation capability of our proposed data gathering strategy with those measured from a common (optimal) data gathering (CDG) scheme in which sensors cover the entire monitored area 100% and report their sensed data in every round.

We generate three sensor network fields: 100 × 100 m², 200 × 200 m², and 300 × 300 m². Homogeneous sensors
are scattered randomly and uniformly over the network field with a density of 1 sensor/200 m². We assign an identifier to each sensor sequentially from 0 to |V| − 1. We run 10 experiments, each with a different sensor distribution, for each desired sensing coverage in each network field. The radius r is 30 m for both the sensing and radio ranges. After deployment, sensors collect DGT setup messages for 10 s and then broadcast a setup message S generated from the collected setup messages (i.e., from R_{s_i}). Local neighbors and their reporting sequences are learned during the DGT construction period. Events occur at locations uniformly distributed over the network field, and their occurrence interval is also uniformly distributed between 1 and 10 s. Table 1 summarizes the parameters used in the simulation, where k is computed by Eq. (4) based on the specified DSC, w.

Our implementation does not include any features of the MAC layer or the wireless channel characteristics, since our main objective in this study is to measure the integrity of event detection (i.e., the immediate data reporting capability) and the data reporting latency caused mainly by the reporting-sequence-dependent wait-and-forward behavior when the user-specified sensing coverage is less than 100%. As future work, we plan to incorporate MAC layer features into our implementation and run experiments with realistic data gathering scenarios.
Table 1
Simulation parameter values

Network field (Q):                      100 m × 100 m    200 m × 200 m    300 m × 300 m
Number of sensors (|V| − 1):            50               200              450
Number of data reporting rounds (δ):    ⌊50/k⌋           ⌊200/k⌋          ⌊450/k⌋
Sensor density:                         1 sensor/200 m²
Sensing and radio ranges (r):           30 m
Event occurrence:                       Uniform distribution over the field
Event occurrence interval:              Uniformly distributed within [1, 10] s
Data reporting interval (Δt):           10 s
Simulation time:                        16,000 s
6.2. DGT construction – elimination of wait-and-forward delay

The total number of sensors required to be active in a round is likely to exceed k in order to ensure immediate data forwarding (reporting) by the selected k data reporters without wait-and-forward delay. Note also that the number of data reporters in each round can be larger than k, since δ = ⌊(|V| − 1)/k⌋ (refer to Fig. 7). Fig. 17 compares the number of selected k sensors with the total number of sensors that must be active after the DGT construction procedure is completed. As the network size increases, the ratio of the monitored area to the entire sensor-deployed area increases. Thus, the larger the network size, the smaller is k relative to the total number of deployed sensors, since the probability that a selected sensor resides within the monitored area becomes higher; on the other hand, the distribution of these sensors becomes sparser. Fig. 17 shows that the fraction of sensors selected as the k data reporters decreases as the network size increases, while the gap between k and the number of sensors on a DGT grows slightly with the network size. In particular, for the 300 × 300 m² network field, the number of sensors which must be active to form a DGT is only
28.7% of the total number of sensors deployed when w = 0.9, whereas in the 100 × 100 m² field it is around 50%.

6.3. Periodic data reporting: latency for 100% coverage

In Figs. 5(a) and (b), we showed that k sensors are sufficient to cover as much of the area as the user requests. Here, we present the average and maximum data reporting latency until the sensors transmit their sensed data to the data gathering point, i.e., while they operate under the periodic data reporting option. Figs. 18(a), (c), and (e) show the average reporting latency for each simulated network size, and Figs. 18(b), (d), and (f) show the corresponding maximum latency. As mentioned earlier, the DSC and the total number of sensors are the main factors deciding the latency, since they determine the number of rounds (δ) within a reporting
Fig. 17. Selected k sensors vs. total number of active sensors on DGT.
Fig. 18. Periodic data reporting – latency for 100% coverage (Avg. and Max.).
cycle C. Thus, sensors need to wait longer to become a data reporter as the DSC decreases (i.e., as δ increases). This is why we observe that the maximum latency becomes very large when the DSC is low. Note that the maximum latencies of F-DRS and N-DRS are always smaller than δ · Δt and (2δ − 1) · Δt, respectively, in all simulated networks, as claimed earlier. This demonstrates that the entire monitored area is covered within a fixed delay. The average reporting latency shows the same trend. For w ≥ 0.7, the average latency is a bit less than twice that of CDG in all the figures. In particular, when w ≥ 0.8, the average latencies in all the simulated network fields are affected very little compared with those of CDG. As shown in Fig. 19, the ratio of the k sensors to the total number of sensors remains almost the same, which implies that the number of rounds (δ) does not keep increasing as the network size grows.
Fig. 19. k vs. total number of sensors (|V| − 1) over varying network sizes and DSC.
6.4. Immediate reporting: capability and latency

Since sensors continuously monitor their vicinity to detect specific events on behalf of the users, they may need to report immediately after detecting a specific event or when their sensed result is higher/lower than a given threshold. Figs. 20(a)–(c) show the success ratio of immediate data reporting during the simulation, where ℵ is the number of sensors which successfully report what they detected without forwarding delay. In all the simulated network sizes and desired sensing coverages, the success ratio achieved by ℵ ≥ 1 sensor is above 94%, and the ratios achieved by ℵ ≥ 3 and ℵ ≥ 6 sensors are above 87% and 73%, respectively. In particular, when w ≥ 70%, the success ratio for more than 6 sensors is above 90%, implying high reliability of reception at the data gathering point.

If a sensor fails to report its sensed data immediately, it needs to wait for the earliest active neighbor it can access. Figs. 21(a)–(c) show statistics of such latencies during the simulation. Note that this latency pertains only to the case of immediate reporting failure. Fig. 21(c) shows a longer latency for all DSCs compared with Figs. 21(a) and (b). This is because the probability that a sensor finds a neighbor in the current DGT is reduced as the number of rounds in the reporting cycle becomes larger (however, the length of the reporting cycle does not increase linearly as the network size grows, as shown in Fig. 19). Note that when w ≥ 70%, the reporting latency is not very critical and the maximum latency is less than twice the minimum latency. This implies that the proposed random sensor scheduling can provide significant resource savings with a minimal trade-off.

6.5. Energy conservation capability
Given a DSC, the energy conservation rate in each sensor depends on k and |V| − 1 (the total number of sensors), because they determine the number of reporting rounds δ, which in turn affects the frequency of sensed data reporting at each sensor. This implies that the energy conservation capability (i.e., the number of data transmissions) is inversely proportional to δ, as demonstrated in Figs. 22(a)–(c). We observe that as the network size increases, the number of data transmissions at each sensor is noticeably smaller compared to CDG.
Fig. 20. Immediate data reporting success ratio.
Fig. 21. Reporting latency in case of immediate reporting failure.
Fig. 22. Distribution of number of data transmissions (reportings only).
In other words, the energy conservation rate can be increased significantly with a small trade-off, recalling that the data reporting latency and the immediate data reporting capability were not significantly degraded when w ≥ 0.8. Table 2 shows how much the sensors' reporting duty is relieved at a density of 1 sensor/200 m² in each simulated network field.

We also measure the number of data transmissions at sensors while they serve as reporters as well as forwarders (i.e., reportings + forwardings). Fig. 23 shows the corresponding results. As mentioned earlier, depending on the distribution of the selected k sensors, some sensors have to take part in constructing a DGT irrespective of their reporting round in order to provide data forwarding service; these sensors therefore show a relatively larger number of data transmissions. Nevertheless, we observe in Fig. 23 that the sensors still show a higher energy conservation capability than in CDG.

Table 2
Calculated values for k and δ based on w and |V|

Network size      w = 0.5     w = 0.6     w = 0.7     w = 0.8     w = 0.9
                  (k, δ)      (k, δ)      (k, δ)      (k, δ)      (k, δ)
100 m × 100 m     (6, 8)      (8, 6)      (10, 5)     (14, 3)     (20, 2)
200 m × 200 m     (17, 11)    (22, 9)     (28, 7)     (38, 5)     (54, 3)
300 m × 300 m     (32, 14)    (42, 10)    (55, 8)     (73, 6)     (104, 4)

7. Discussions
The novelty of our proposed coverage-adaptive random sensor scheduling lies in providing the users with sensing coverage adaptable to the type of application, so as to prolong the network lifetime. The coverage adaptivity is achieved by having sensors construct their own reporting sequence, RS_{s_i}, based on a specified DSC. As mentioned earlier, the complexity of constructing RS_{s_i} is not affected by the network density. In contrast, for [7,17], the denser the network, the more computation power and communication are required, and the less scalable those schemes become. Therefore, as the node density increases, our random sensor scheduling scheme becomes relatively more efficient in conserving energy with a minimal trade-off.

In this work, we did not measure the complexity of constructing the data gathering platform (i.e., establishing sensing coverage and connectivity). Thus, CDG can be considered a general model of any existing data gathering scheme offering 100% coverage for the purpose of comparing the integrity of event detection and the energy conservation capability. We leave as future work measuring the competitiveness of our random sensor scheduling against existing data gathering platforms with 100% coverage by comparing such complexity under varying network density.

In our randomization algorithm, sensors become data reporters for only one specific round within a reporting cycle C by independently selecting a round R_ℓ with probability $P_{R_\ell} = 1/\delta$, without knowing which rounds the other sensors have selected. Thus, the resulting number of selected sensors in each round could be a little more or less than k. Being concerned about the variance of the number of data reporters selected in each round, which directly affects the fidelity of meeting the given DSC in each round,
Fig. 23. Distribution of number of data transmissions (reportings + forwardings).
the users might ask: "Given the total number of sensors (|V|) and the DSC (w), how high is the chance of having fewer than k selected data reporters? Or how can we make sure that the number of sensors selected in each round is greater than or equal to k?" In this case, the main consideration is how much the number of selected sensors deviates from the value k obtained from Eq. (4). If we consider the total number of sensors selected in a round as a random variable Z, then, given c, which tells us how far the actual number of selected sensors may be from k, we can derive a probability bound on the occurrence of k − c ≤ Z ≤ k + c, and we can also bound the probability that at least k sensors are selected for a given |V| and w using the Chernoff bound technique [13]. For the details, refer to [2].
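For intuition, a standard multiplicative Chernoff bound of the kind referred to above can be stated as follows (this is the textbook form [13]; the authors' exact derivation is in [2] and may differ in detail). If each of the |V| − 1 sensors picks the current round independently with probability 1/δ, then Z is binomial with mean μ = (|V| − 1)/δ ≈ k, and for 0 < ε < 1,

$\Pr[Z \leq (1-\varepsilon)\mu] \leq \exp\!\left(-\frac{\varepsilon^2 \mu}{2}\right), \qquad \Pr[Z \geq (1+\varepsilon)\mu] \leq \exp\!\left(-\frac{\varepsilon^2 \mu}{3}\right),$

so the probability that a round receives noticeably fewer reporters than k decays exponentially in k.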
8. Conclusion

In this paper, we proposed a novel coverage-adaptive random sensor scheduling for application-specific data gathering in wireless sensor networks. The proposed scheme is based on a disjoint random sensor selection (DRS) which exploits the trade-off between coverage and data reporting latency in determining the sensors to be active for reporting sensed results. The ultimate goal is to prolong the network lifetime by relaxing the quality of service (i.e., the sensing coverage of the monitored area) according to the desired sensing coverage specified by the user/application. To achieve this goal, we first showed how to find a minimum number k of data reporters (sensors) that can meet the desired sensing coverage, using geometric probability. We then introduced two variants of DRS with constant computational complexity: non-fixed disjoint random sensor selection (N-DRS) and fixed disjoint random sensor selection (F-DRS). The data reporters selected for a round form a data gathering tree (DGT) rooted at the data gathering point to eliminate the wait-and-forward delay of their reporting, and only the sensors on a DGT are scheduled to remain active (with transceivers on) during that round. We also analyzed the time-critical event detection fidelity, since it could be a major concern in applying the proposed scheduling due to the less-than-100% coverage in every round.

Experimental results showed that our random sensor scheduling can meet the desired sensing coverage specified by the user while incurring some delay to achieve 100% coverage. Moreover, the results show that the network lifetime can be increased significantly with a small trade-off between coverage and data reporting latency; the higher the network density, the higher the energy conservation rate, without any additional computation or communication cost. As future work, we plan to integrate a Poisson sampling technique into DRS to improve the spatial regularity of the selected sensors, thus improving the fidelity of meeting the desired sensing coverage in each round as well as the immediate reporting capability.

Acknowledgments

We are grateful to all the referees for their insightful comments, which improved the technical quality of the paper.

References

[1] I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, Wireless sensor networks: a survey, Computer Networks 38 (4) (2002) 393–422.
[2] W. Choi, A novel framework for energy and application-aware data gathering in wireless sensor networks, Ph.D. dissertation, UT Arlington, 2005.
[3] W. Choi, S.K. Das, A novel framework for energy-conserving data gathering in wireless sensor networks, in: Proceedings of IEEE International Conference on Computer Communications (INFOCOM), 2005, pp. 1395–1404.
[4] W. Choi, S.K. Das, K. Basu, Angle-based dynamic path construction for route load balancing in wireless sensor networks, in: Proceedings of IEEE Wireless Communications and Networking Conference (WCNC), 2004, pp. 2474–2479.
[5] D. Cook, S.K. Das, Smart Environments: Technology, Protocols and Applications, John Wiley, New York, 2004.
[6] F. Garwood, The variance of the overlap of geometrical figures with reference to a bombing problem, Biometrika 34 (1947) 1–17.
[7] H. Gupta, S.R. Das, Q. Gu, Connected sensor cover: self-organization of sensor networks for efficient query execution, in: Proceedings of ACM Mobile Adhoc Network Symposium (MOBIHOC), 2003, pp. 189–199.
[8] X. Hong, M. Gerla, H. Wang, Load balanced, energy-aware communications for Mars sensor networks, in: Proceedings of IEEE Aerospace Conference, vol. 3, 2002, pp. 1109–1115.
[9] C. Intanagonwiwat, R. Govindan, D. Estrin, Directed diffusion: a scalable and robust communication paradigm for sensor networks, in: Proceedings of ACM Mobile Computing and Networking (MOBICOM), 2000, pp. 56–67.
[10] B. Krishnamachari, D. Estrin, S. Wicker, The impact of data aggregation in wireless sensor networks, in: Proceedings of IEEE International Conference on Distributed Computing Systems Workshops (ICDCSW), 2002, pp. 575–578.
[11] A.M. Law, W.D. Kelton, Simulation Modeling and Analysis, third ed., McGraw-Hill, New York, 2000.
[12] A. Manjeshwar, Q.-A. Zeng, D.P. Agrawal, An analytical model for information retrieval in wireless sensor networks using enhanced APTEEN protocol, IEEE Transactions on Parallel and Distributed Systems 13 (12) (2002) 1290–1302.
[13] R. Motwani, P. Raghavan, Randomized Algorithms, Cambridge University Press, 1995.
[14] D. Tian, N.D. Georganas, A coverage-preserving node scheduling scheme for large wireless sensor networks, in: Proceedings of ACM Workshop on Wireless Sensor Networks and Applications (WSNA), 2002, pp. 32–41.
[15] S. Tilak, N.B. Abu-Ghazaleh, W. Heinzelman, A taxonomy of wireless micro-sensor network models, ACM Mobile Computing and Communications Review 6 (2) (2002) 28–36.
[16] A. Tiwari, F.L. Lewis, S.S. Ge, Wireless sensor network for machine condition based maintenance, in: Proceedings of International Conference on Control, Automation, Robotics, and Vision, 2004, pp. 461–467.
[17] X. Wang, G. Xing, Y. Zhang, C. Lu, R. Pless, C. Gill, Integrated coverage and connectivity configuration in wireless sensor networks, in: Proceedings of ACM Conference on Embedded Networked Sensor Systems (SENSYS), 2003, pp. 28–39.
[18] W. Ye, J. Heidemann, D. Estrin, An energy-efficient MAC protocol for wireless sensor networks, in: Proceedings of IEEE International Conference on Computer Communications (INFOCOM), 2002, pp. 1567–1576.
Dr. Wook Choi is a senior engineer in Samsung Electronics. He finished his Ph.D. at the University of Texas at Arlington (UTA) in 2005. He received the outstanding M.S. thesis and Ph.D. dissertation awards from UTA. His research interests include wireless mesh networks, ad hoc and sensor networks, mobile and pervasive computing, multi-radio access protocols.
Dr. Sajal K. Das is a Professor of Computer Science and Engineering and also the Founding Director of the Center for Research in Wireless Mobility and Networking (CReWMaN) at the University of Texas at Arlington (UTA). His current research interests include sensor networks, resource and mobility management in wireless networks, mobile and pervasive computing, wireless multimedia and QoS provisioning, mobile internet architectures and protocols, grid computing, applied graph theory and game theory. He has published over 350 research papers in these areas, holds four US patents in wireless internet and mobile networks. He received Best Paper Awards in IEEE PerCom’06, ACM MobiCom’99, ICOIN’02, ACM MSwiM’00 and ACM/IEEE PADS’97. He is also recipient of UTA’s Outstanding Faculty Research Award in Computer Science (2001 and 2003), College of Engineering Research Excellence Award (2003), and University Award for Distinguished record of Research (2005). He serves as the Editor-in-Chief of Pervasive and Mobile Computing journal, and Associate Editor of IEEE Transactions on Mobile Computing, ACM/Springer Wireless Networks, IEEE Transactions on Parallel and Distributed Systems. He has served as General or Program Chair and TPC member of numerous IEEE and ACM conferences. He is the Vice Chair of IEEE TCCC and TCPP Executive Committees.