Selective Additional Rebroadcast: An Approach for Robustness Control for Network-Wide Broadcast

Paul Rogers and Nael Abu-Ghazaleh
Abstract— Network-Wide Broadcast (NWB) is a common operation in Mobile Ad hoc Networks (MANETs), used by routing protocols to discover routes and in group communication operations. NWB is commonly performed via flooding, which has been shown to be expensive in dense MANETs because of its high redundancy. Several algorithms have been proposed to reduce the overhead of NWB. In this work, we target another problem that can substantially impact the success of NWBs: since MAC level broadcasts are unreliable, it is possible for critical rebroadcasts to be lost, leading to a significant drop in coverage. This is especially true in the presence of poor quality links (due to signal fading or contention) and when coverage redundancy is low. Existing NWB algorithms that reduce the overhead of flooding also reduce its inherent redundancy and thus harm its reliability. We therefore argue for a robustness control component in NWB algorithms to maintain coverage robustness. The first contribution of the paper is to classify the solution space for robustness control. The second contribution is a selective rebroadcast solution for robustness control that is localized, reacts dynamically to losses, and can be added to existing NWB algorithms to provide robustness control. We show that our approach leads to considerable improvement in NWB coverage relative to a recently proposed solution to this problem. Because it dynamically reacts to losses, the additional overhead incurred scales with the amount of loss experienced in the network. Finally, we explore improvements to the solution that adapt to the local state of the network to make more informed rebroadcast decisions.
I. INTRODUCTION

Network-Wide Broadcast (NWB) algorithms provide a low-overhead mechanism to deliver information to all nodes in the network without requiring routing information. Because of its relatively low overhead and its resilience to mobility and other dynamic changes in the network, NWB is a heavily used primitive at the core of most MANET routing [1], [2] and group communication protocols [3], [4].

The authors are with the Computer Science Dept., State University of New York at Binghamton; emails:
[email protected],
[email protected]
A common approach to performing NWB is flooding: a process where every node that receives a packet for the first time rebroadcasts it. Flooding has been shown to be wasteful, especially in dense networks; this problem was identified by Ni et al. and called the broadcast storm [5]. Ni et al. also proposed several solutions to the problem based on nodes locally determining whether their rebroadcast is likely to be needed. A different, topology-sensitive approach to controlling NWB redundancy constructs a virtual backbone that is tasked with disseminating the broadcast. For example, Connected Dominating Sets (CDSs), comprising a connected subset of the nodes that together cover all the nodes in the network, can be constructed and used as a virtual backbone. This set can then be tasked with rebroadcasting the NWB packet while all other nodes just receive it, significantly reducing the number of retransmissions relative to flooding. NWB algorithms are reviewed in more detail in Section III.

In this paper, we consider a different problem that adversely affects most NWB protocols: the robustness¹ problem. This problem affects NWB protocols that rely on MAC level broadcast operations. More specifically, because MAC broadcasts are unreliable, it is possible for rebroadcasts to be lost due to interference or transmission errors. The loss rate can be considerable if high interference exists or if link quality is bad [8]. This problem may lead to the NWB reaching only a subset of the nodes. A particularly bad example occurs when the initial transmission of the NWB packet is lost at all receivers (e.g., due to a collision with another packet); in this case the NWB will not reach any nodes. The underlying causes are discussed in Section II. The problem affects most existing NWB algorithms, including flooding based protocols [5] and virtual backbone approaches (e.g., [1], [9], [10]).

¹It is important to note that guaranteed reliability requires acknowledgments from all recipients, which, in general, cannot be accomplished in approaches that rely on MAC broadcast. For example, fully reliable NWB can be built on top of unicast communication, but at a much higher overhead than broadcast based approaches [6], [7]. We do not pursue guaranteed reliability.
NWB algorithms can be viewed as having two complementary components: (1) redundancy control, which targets reducing the redundancy in the rebroadcasts; and (2) robustness control, which attempts to maintain coverage and recover from losses in the transmissions necessary to cover the nodes. Most existing NWB algorithms target only redundancy control and have therefore been shown to suffer in terms of robustness. Having a combined solution results in a low-overhead, robust NWB. Further, effective solutions dynamically maintain robustness as losses occur; overhead increases dynamically as the environment becomes less reliable.

The first contribution of the paper is to classify the solution space for robustness control in NWBs. In Section IV, existing approaches for NWB robustness are examined and classified within this solution space. Based on this classification, the second contribution of the paper is a new approach to robustness control with several desirable properties relative to existing solutions: it is a network level solution that is sensitive to losses and to the nearby state of the network, and it does not require explicit feedback to track losses. The proposed approach, Selective Additional Rebroadcast (SAR), predicts losses without explicit feedback and dynamically compensates for likely losses in areas where these losses are likely to be harmful. We also explore improving the prediction by adapting to nearby network state and topology. SAR is presented in detail in Section V. Simulation results show that the solution leads to substantial improvement in NWB robustness over existing NWB approaches with no robustness control, as well as in comparison to an existing robustness control algorithm. This is especially true in sparse networks, where regions of the network may not have available redundancy. SAR introduces additional redundancy dynamically in response to predicted losses. As such, the approach achieves higher robustness than even flooding, while cutting down on unneeded redundancy when loss rates are not high or where the network is dense. With SAR, the decisions are taken locally; the approach can be used as the robustness control component of virtually all NWB approaches to increase their robustness. Therefore, a combination of an effective redundancy reduction algorithm and SAR can provide a robust NWB algorithm with acceptable overhead and low unneeded redundancy. The experimental study is presented in Section VI. Finally, Section VII presents some concluding remarks.

II. BACKGROUND – MAC BROADCAST

Wireless transmissions are unreliable for two primary reasons. (1) Signal propagation effects can cause deep fades in signal power (20 dB, or 100x, changes in signal power are common in urban environments [11]). This leads to a large number of transmission errors, especially when the two communicating nodes are far from each other. (2) Collisions: the wireless channel is non-uniformly shared, giving rise to the well-known hidden terminal problem [12]. A hidden terminal is an interfering node out of reception range of the sender but within interference range of the receiver; such a transmission is not detected by the sender (it is hidden from it), causing a potential collision at the receiver. Techniques such as Carrier Sense Multiple Access and Collision Avoidance reduce collisions but do not eliminate them [13]. Collisions arise due to self-interference among the NWB retransmissions at different nodes, as well as due to interference from other connections.

To improve robustness in the face of losses, MAC protocols like IEEE 802.11 [14] retransmit packets that are not acknowledged. Such an approach cannot be used with broadcast packets because there are multiple receivers. Accordingly, if a broadcast packet is lost due to a collision, the loss is not detected by the sender and no retransmission is carried out. This makes MAC level broadcast operations much more susceptible to losses. As a result, NWB operations that rely on MAC level broadcast suffer from loss of coverage: the NWB robustness problem.
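To make the severity of unacknowledged broadcast losses concrete, the following back-of-envelope sketch (our own illustration, not part of the paper's evaluation) computes the probability that a single MAC broadcast is lost at every one of its receivers, under the simplifying assumption of independent per-receiver losses:

```python
# Illustrative only: real losses (fading, collisions) are correlated,
# which makes the total-loss case more likely than this model suggests.

def all_receivers_lose(p: float, k: int) -> float:
    """P(broadcast lost at all k receivers), per-receiver loss prob. p."""
    return p ** k

def at_least_one_receives(p: float, k: int) -> float:
    """Complement: P(the NWB survives its initial transmission)."""
    return 1.0 - p ** k
```

With a per-link loss probability of 0.3 and only two neighbors in range, the NWB dies at the source with probability 0.09; because the sender receives no acknowledgment, it never learns this happened.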
III. OVERVIEW OF EXISTING NWB ALGORITHMS

In this section, we overview NWB approaches with an emphasis on robustness. The primary characteristics we are interested in are: (1) overhead; and (2) robustness to losses.

A. Topology-Independent (Flooding-based) Approaches

Flooding is a brute force approach with high overhead, especially in dense networks. The high overhead problem (the broadcast storm) was identified by Ni et al. [5]; most NWB algorithms target reducing this overhead. Flooding is thought to be resilient to MAC losses due to the high redundancy generally present. However, this may not be true: in low density areas of networks, the
available redundancy is low. Flooding requires no topology knowledge; it is therefore resilient to mobility and does not require neighbor/topology tracking. One approach to reducing overhead is to have nodes determine locally whether their rebroadcast is likely to be redundant. Ni et al. [5] suggest several criteria for deciding whether to rebroadcast, including: (1) probabilistically; (2) based on the number of rebroadcasts already heard (if several were heard, an additional one is probably not needed); or (3) based on the distance or location of the nearest heard rebroadcast (if it is too close, little additional coverage is likely to be obtained from another broadcast). These approaches reduce the overhead without appreciably harming coverage. Like flooding, they are resilient to mobility. Because each node locally determines whether its rebroadcast is likely to be needed, the approach dynamically adapts to transmission losses. However, because the decisions are made locally, they are generally not optimal with respect to overhead.
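As an illustration of the distance-based criterion above, a minimal sketch of the suppression test follows; the function and threshold names are our own, not from [5]:

```python
import math

def should_rebroadcast(my_pos, heard_positions, min_distance):
    """Distance-based test: suppress the rebroadcast if the closest
    transmitter heard so far is within min_distance, since the extra
    coverage our transmission would add is then likely marginal."""
    if not heard_positions:
        return True  # nothing heard yet; assume our rebroadcast is useful
    nearest = min(math.dist(my_pos, p) for p in heard_positions)
    return nearest > min_distance
```

A node would accumulate `heard_positions` from the location fields of overheard copies of the same packet during a short random assessment delay, then apply the test once the delay expires.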
B. Topology Sensitive Approaches

An alternative approach is to build a virtual backbone that covers all nodes in the network. Only backbone nodes rebroadcast NWB packets. Under ideal assumptions, all the nodes can be covered with asymptotically optimal overhead; such backbones have been proposed to replace flooding in routing algorithms [1], [15]. A popular approach is to use a Connected Dominating Set (CDS): a connected subset of the nodes from which all other nodes are one-hop reachable. If the CDS nodes broadcast, all the nodes in the network are covered. While finding the optimal CDS is NP-complete, algorithms have been developed to construct a CDS in a distributed fashion, near-optimally both in the size of the CDS and in the overhead to construct it (e.g., [9], [10]). In terms of vulnerability to losses, an important distinction between topology sensitive approaches is whether the forwarding responsibilities are statically or dynamically determined. In static approaches, the topology information is used to statically determine forwarding responsibilities. As a result, such approaches are especially vulnerable to losses: if a packet is lost at a node with forwarding responsibilities, the remainder of the backbone reachable through it, and the nodes it covers, will not receive the NWB. In dynamic approaches [16], [17], each node locally determines whether it needs to be part of the virtual backbone based on already-heard transmissions. This makes them significantly more resilient to losses. For example, in the Scalable Broadcast Algorithm [17], each node tracks its two-hop neighbors. This enables a node A to determine which of its neighbors a node B has covered by checking the two-hop neighbor information, without requiring B to transmit its neighbor list.

While most existing NWB algorithms focus primarily on redundancy control, some have targeted the lack of robustness. We overview these algorithms in Section IV after presenting a classification of the solution space for robustness control.

IV. ROBUSTNESS CONTROL SOLUTION SPACE

The vulnerability of MAC broadcast operations to losses and the effect of losses on coverage are known (e.g., [18], [19]). Recently, Rogers and Abu-Ghazaleh characterized the effect of these losses under different scenarios for different NWB algorithms [20]. They show that coverage success correlates with the available redundancy and the broadcast loss rate. The first factor, available redundancy, depends on the sparseness of the topology as well as on the NWB algorithm (which seeks to reduce redundancy). The second factor, the broadcast loss rate, depends on the quality of the links; losses occur due to signal fading or due to collisions caused by interference from other packets in the network or from outside sources. They show that losses significantly harm the robustness of NWB protocols. In particular, topology-aware protocols have lower overhead but are less robust than flooding based operations. In addition, static topology-aware protocols are found, expectedly, to be more vulnerable to losses, as they do not adapt dynamically to the loss of a transmission to a node in the static backbone.

We propose that NWB algorithms be thought of as two complementary components: (1) Redundancy control (or coverage control): this component attempts to maintain coverage of nodes while reducing the number of necessary transmissions. This is the component typically supported by NWB algorithms; and (2) Robustness control: this component attempts to ensure that coverage is maintained in the face of broadcast losses. Thus, an NWB with effective redundancy control and robustness control components would not only keep the overhead of the algorithm low and reduce redundant transmissions, but also maintain coverage in the face of transmission losses.

The classification of NWB algorithms in Section III represents a classification of the redundancy control solution space; in the remainder of this section, we present a classification of the robustness control solution space. Note that the robustness properties of the basic redundancy control approaches vary significantly. Thus, it is interesting to see how the different approaches for robustness control interact with the different approaches for redundancy control.
A. Robustness Control Solution Space

Robustness control consists of additional broadcasts, beyond the minimum needed to cover the nodes under ideal no-loss conditions, that are injected in an attempt to recover from lost broadcasts. Approaches to robustness control can be classified along multiple axes, including the following.

We distinguish solutions at the MAC layer from those at the network layer. MAC layer solutions attack the root of the problem (the unreliable MAC broadcast): they attempt to replace it with a more reliable broadcast primitive. Network layer solutions attempt to introduce redundancy in coverage; they can also attempt retries for reliability in a way similar to MAC solutions. Network layer solutions face a lower deployment barrier and do not require changes to standards or hardware.

Solutions can also be distinguished by how they trigger robustness control responses. Fixed redundancy approaches incorporate a fixed degree of redundancy in their coverage, beyond the minimum necessary, to leave a safety margin against broadcast losses. Because this redundancy is fixed, in some cases it may not be needed (when no losses occur); in others, it may not be sufficient to recover from the losses that do occur. In contrast, loss sensitive approaches trigger additional redundancy (typically in the form of additional rebroadcasts) only when losses are detected or predicted. Among loss-sensitive approaches, a number of solutions are possible based on how losses are detected or predicted and how redundancy is built in response. Broadly, loss detection/prediction can be accomplished by explicit or implicit feedback.

1) Loss Sensitive Robustness via Explicit Feedback: In explicit feedback algorithms, the receivers inform the sender of the reception success state using an explicit transmission. Explicit feedback solutions face the following problem: if multiple receivers of a MAC broadcast have to provide acknowledgments, collisions occur. However, it is possible to randomize/stagger the responses, or to require feedback from only a subset of the receivers. Different solutions can be achieved by controlling the granularity of the feedback. The simplest scheme is to provide feedback on every broadcast transmission; this has the cost of increased latency and overhead. Another approach is to send feedback periodically. This requires nodes to cache previously sent data in case a failure situation arises. Since this feedback spans multiple packets, it is most suitably implemented as a network level solution. Depending on the granularity and the subset of receivers responding, explicit feedback may also increase latency and load on the network (due to the acknowledgment traffic).

2) Implicit Feedback Algorithms: Schemes that use implicit feedback attempt to judge the success of an NWB transmission based on locally observed behavior, without requiring additional control packets to be exchanged. Briefly, this approach hypothesizes that the behavior of the neighbors will differ depending on whether or not they received the broadcast. Thus, by observing the behavior of nearby nodes, the loss probability can be predicted. A simple example of such implicit feedback is for a node to observe the channel and see if the NWB packet was retransmitted by neighboring nodes. If enough retransmissions are overheard, the original transmission to the neighbors was likely received successfully. Implicit feedback solutions can improve their decision by factoring in other criteria that refine the loss prediction. For example, each node can measure the utilization of the channel; the higher the utilization, the higher the probability of a loss. However, this measure is imprecise because the local state at the sender may not match that at the receivers. An additional metric of interest is the expected number of rebroadcasts (a function of density and of the NWB algorithm), knowledge of which can help refine the prediction made by the implicit feedback mechanism.

3) Awareness of Neighboring State: In addition to the immediate behavior of nearby neighbors in response to a broadcast, a longer term context for the probability of a loss and the available amount of redundancy is available. More specifically, losses increase with poor link quality and with nearby contention for the medium. Moreover, the topology of the neighboring section of the network provides useful information regarding the expected number of rebroadcasts. If such information is available, more effective prediction of losses can be carried out to benefit implicit feedback algorithms. Similarly, such information can be used by fixed redundancy algorithms to build in a higher degree of redundancy where it is expected to be needed.
B. Mapping Existing Robustness Control Algorithms to the Solution Space

A number of solutions are emerging for improving the robustness of MAC level broadcast as well as of NWB algorithms. To counter the effect of transmission losses on CDS protocols, Lou and Wu propose Double Coverage Broadcast (DCB) [19], a CDS-based algorithm. DCB ensures that every node is covered twice (not just once, as per the CDS requirement). Thus, DCB represents a fixed redundancy approach built on a static topology-aware redundancy control algorithm (CDS). While DCB is more robust than single-coverage static CDS algorithms, the set of forwarding nodes is statically determined and the degree of redundancy is fixed; therefore, the approach remains vulnerable to losses. As this protocol is designed to achieve more robust NWBs, we use it as a baseline for comparison with our approach.
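The double-coverage idea can be sketched as a greedy forwarder selection: keep picking 1-hop neighbors until every 2-hop target is covered by at least c forwarders. This is our own illustrative sketch of the fixed redundancy principle, not Lou and Wu's actual DCB algorithm:

```python
def select_forwarders(coverage: dict[str, set[str]], targets: set[str], c: int = 2):
    """Greedily choose forwarders so each target is covered >= c times.

    coverage maps each candidate 1-hop neighbor to the set of 2-hop
    targets its rebroadcast would reach; c=2 gives double coverage.
    """
    remaining = {t: c for t in targets}   # outstanding coverage needs
    chosen = []
    candidates = dict(coverage)
    while any(r > 0 for r in remaining.values()) and candidates:
        # pick the candidate that satisfies the most outstanding needs
        best = max(candidates,
                   key=lambda n: sum(1 for t in candidates[n]
                                     if remaining.get(t, 0) > 0))
        gain = sum(1 for t in candidates[best] if remaining.get(t, 0) > 0)
        if gain == 0:
            break  # no candidate helps any further; topology too sparse
        chosen.append(best)
        for t in candidates[best]:
            if remaining.get(t, 0) > 0:
                remaining[t] -= 1
        del candidates[best]
    return chosen
```

Note that the chosen set, like DCB's, is computed before the broadcast: the redundancy is fixed up front regardless of whether losses actually occur.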
Hyper-flooding [21] is proposed for MANETs with high levels of mobility, where reliability is needed. Each node in the network broadcasts at least once (flooding). Some nodes, based on explicit feedback (that a new neighbor appeared) or probabilistically, may decide to rebroadcast again. Hyper-gossiping, proposed by Khelil [22], is an extension of hyper-flooding that uses gossiping instead of flooding, as described earlier. The combination attempts to increase reliability without increasing overhead in highly mobile scenarios. The criterion for retransmission here is not sensitive to losses; rather, a fixed amount of redundancy is added probabilistically.

Tang and Gerla proposed adding reliability to MAC level broadcast [23]. Specifically, they proposed a more reliable broadcast primitive where all receiving nodes acknowledge a broadcast. Since multiple nodes may respond concurrently, the presence of noise on the medium is taken to mean that at least one node received the packet correctly. Thus, a packet is assumed lost and retransmitted unless one acknowledgment is received or noise is heard on the channel. Rogers and Abu-Ghazaleh introduced an alternative MAC level primitive for enhancing NWB robustness [24]. This solution specifies one receiver as responsible for generating an acknowledgment. Thus, the primitive is similar to a public unicast. This approach has lower overhead than the first scheme, and also provides control over which neighbor receives the packet correctly; this property is useful for virtual backbone algorithms. These two solutions are loss sensitive, with explicit feedback.

One can argue that NWB algorithms with dynamic redundancy control, both topology-aware and virtual backbone ones, have a component of robustness control based on implicit feedback. More specifically, since these algorithms decide locally whether their rebroadcast is needed, based on observed behavior of nearby nodes, they are able to recover from losses at other nodes. This is why these approaches display more resilience to losses than static approaches; for example, a dynamic virtual backbone algorithm such as SBA [17] is able to tolerate losses better than a static approach with fixed redundancy [19]. However, this implicit robustness is limited, as it cannot recover from critical losses or in sparse networks.

V. SELECTIVE ADDITIONAL REBROADCAST

We propose Selective Additional Rebroadcast (SAR): an approach where NWB packets are selectively rebroadcast an additional time if they are suspected to have been lost. Ideally, lost broadcast packets would be recovered via explicit feedback in a manner similar to Automatic Repeat Request (ARQ), where the receiving node requests a retransmission if a timeout or error occurs. However, since this is not possible for broadcast operations due to the multiple receivers, the protocol attempts to detect losses based on locally available information. Thus, SAR is an implicit feedback, network level solution that can be added as a robustness control component to any NWB algorithm.

It is important to note that SAR does not simply restore some of the redundancy available in flooding that NWB algorithms attempt to reduce. The key to its operation is being able to locally predict whether a rebroadcast has been lost, and to retransmit only if necessary. The additional rebroadcasts are not generated blindly as in fixed redundancy approaches; they are generated only when they are predicted to be needed: in sparse areas of the network where a loss is likely to have happened. SAR differs from Hyper-flooding [21] and Hyper-gossiping [22], presented earlier in Section IV. The criteria used for rebroadcasting in Hyper-gossiping and Hyper-flooding are not sensitive to losses; instead, a fixed amount of redundancy is added probabilistically to the NWB algorithm. In addition, SAR can be applied to NWB algorithms besides flooding or gossiping.

Several criteria for implicit loss prediction can be explored to decide when to rebroadcast a packet an additional time. SAR is designed to provide NWB robustness implicitly: no explicit feedback is used to predict whether an NWB transmission has been lost. This prediction can be based on recently observed behavior, such as recent loss rates and the utilization of the channel. In addition, the prediction metrics can take into account the subsequent behavior of nearby nodes (which may differ based on whether the rebroadcast was received or not). For SAR, we start with the following basic triggers and refine them later to consider local neighborhood state.
1) Probabilistic prediction: a packet is rebroadcast with a fixed probability (fixed redundancy). In general, this solution does not adapt to the density or loss rates in the network. Therefore, it may result in large increases in overhead when the extra redundancy is not needed (e.g., in dense areas and/or when interference is low). This solution is included because it provides a fixed redundancy reference point that increases redundancy without using any prediction intelligence. This approach essentially adds Hyper-gossiping [22] to other NWB algorithms.

2) Counter-Based Solution: if the node does not hear n other nodes rebroadcast the packet within a certain amount of time, it rebroadcasts the packet again. This solution is attractive because it naturally adapts to the interference level and density of the network. In a dense/low-interference area, a number of rebroadcasts are likely to be overheard after a node rebroadcasts a packet. In sparse/high-interference areas, this is not the case, and the algorithm rebroadcasts the packet to enhance reliability. First, the behavior of a fixed value of n across the network is studied, followed by adapting the value to reflect the neighboring topology.

3) Adaptive SAR: rather than having a fixed rebroadcast threshold that applies throughout the network, the appropriate threshold can be adaptively set at the individual nodes. For example, a node can disable SAR if it determines that the SAR broadcasts have not helped the NWB recently. By doing this, the node can cut down on overhead when the broadcasts are not needed, and re-enable SAR later in case new nodes have moved into the surrounding area. Alternatively, a node may assess its local neighborhood to compute the expected number of rebroadcasts and set its rebroadcast threshold to match.
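The counter-based trigger above can be sketched as a small per-node state machine; the class and callback names are our own, and the timer mechanism (which would come from the simulator or MAC driver) is abstracted into an `on_timeout` callback:

```python
class SarCounter:
    """Counter-based SAR trigger: after forwarding an NWB packet, count
    rebroadcasts of that packet overheard before a timeout; if fewer
    than n are heard, request one additional rebroadcast."""

    def __init__(self, n: int):
        self.n = n        # rebroadcasts we expect to overhear
        self.heard = {}   # packet id -> count overheard since our send

    def on_sent(self, pkt_id):
        self.heard[pkt_id] = 0  # start watching this packet

    def on_overheard(self, pkt_id):
        if pkt_id in self.heard:
            self.heard[pkt_id] += 1

    def on_timeout(self, pkt_id) -> bool:
        """Timer fired for pkt_id; True => do one extra rebroadcast."""
        count = self.heard.pop(pkt_id, self.n)
        return count < self.n
```

Because the decision uses only overheard traffic, no acknowledgment packets are required, matching SAR's implicit feedback design; the Adaptive SAR variant would additionally adjust `n` (or disable the trigger) based on the local neighborhood.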
Furthermore, the following additional improvements to these predictors, taking local state information into account, are explored:

1) Medium Access Control (MAC) Utilization: a node can take into account how busy the medium is, and use that to gauge how likely the loss of an NWB is. This is potentially unrepresentative, since the node is testing how busy the MAC layer is at the source rather than at the destination.

2) Density: a node can use the number of neighbors it has to determine whether it is in a dense or sparse area of the network. Furthermore, it may be able to use this information to determine whether it is a leaf node, where a rebroadcast is wasteful since no new nodes will be covered. Depending on the NWB algorithm being used, the density can be defined in two ways: (1) the physical density: the number of neighboring nodes; and (2) the NWB density: the number of neighboring nodes that perform an NWB transmission. If not all neighboring nodes transmit, it is not useful to track all physical neighbors.

In the next section, the results of using a pure probabilistic solution, a counter-based solution, and adaptive solutions are presented. The value of the MAC utilization metric and of the network density metric for use in the adaptive solutions is also evaluated.

VI. EXPERIMENTAL EVALUATION

In this section, we present an evaluation study of SAR. The Network Simulator NS-2 was used for all experiments [25]. In all scenarios, nodes are randomly deployed in a fixed area of 1000 by 1000 meters. We used the 802.11 MAC implementation in NS-2 with the default parameters (range of 250 meters). In each experiment, one NWB is generated per node. We space the NWBs in time to ensure that successive operations do not interfere. Each data point represents an average of 20 scenarios with different random seeds. While we do not present the confidence intervals to avoid cluttering the graphs, we did track them and verified that the averages are narrowly bound (95% confidence intervals within 1% of the mean for the probabilistic drop scenarios).

The primary performance metrics we are interested in are: (1) node coverage: the percentage of nodes that receive the flood; and (2) overhead: the number of retransmissions. We use normalized overhead, defined as the number of retransmissions per receiving node, as our measure of overhead. For flooding, normalized overhead is always one: every receiving node retransmits the packet once. Optimized NWBs target lowering the overhead while maintaining coverage, which should result in a normalized overhead smaller than 1.

We simulate losses by probabilistically dropping rebroadcast packets at receivers; a similar approach to modeling losses was used in other work [19]. While this approach is somewhat unrepresentative (transmission errors are typically not uniform across all links), it provides a controllable way to vary the loss rate. The same effects were found to hold under more realistic loss generation models (e.g., using interfering connections or a signal fading propagation model); we
use the simpler model because it is more controllable and consistent with existing robustness control studies. The overhead required for the various 'HELLO' messages is not tracked for the topology-sensitive algorithms. This overhead is omitted because it is not incurred per NWB; neighbor discovery happens periodically, so one round of it may benefit multiple NWBs.
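The two evaluation metrics can be computed directly from per-NWB logs; a minimal sketch, assuming we record the set of nodes that received each NWB and the number of retransmissions it triggered:

```python
def coverage(receivers: set, total_nodes: int) -> float:
    """Fraction of the network's nodes that received the NWB."""
    return len(receivers) / total_nodes

def normalized_overhead(retransmissions: int, receivers: set) -> float:
    """Retransmissions per receiving node; 1.0 for pure flooding,
    below 1.0 for optimized NWB algorithms."""
    return retransmissions / len(receivers) if receivers else 0.0
```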
A. Evaluated Algorithms
In addition to flooding, we selected the following NWB algorithms as representatives of the different classes of solutions discussed in Section III:²

• Location Based Algorithm (LBA) [5]: in this algorithm, a node includes its location in the rebroadcast packet. Upon reception of a broadcast packet, the receiving node compares its current location with the location of the sending node, obtained from the packet header. Based on this information, the node decides whether its rebroadcast provides sufficient additional coverage to be worth sending. This is a dynamic, flooding based approach (not topology sensitive).

• Ad Hoc Broadcast Protocol (AHBP) [26]: nodes track their two-hop neighbors and use this information to explicitly select a set of 1-hop neighbors to rebroadcast the packet such that all 2-hop neighbors are covered. AHBP is a static, topology-sensitive (CDS-based) approach.

• Scalable Broadcast Algorithm (SBA) [17]: this algorithm is essentially a dynamic version of AHBP where nodes locally determine whether all their 2-hop neighbors have been covered based on overheard broadcasts.

• Double-Covered Broadcast (DCB) [19]: this is a static, topology-sensitive, CDS-based algorithm with fixed redundancy robustness control (the CDS is constructed to provide double coverage for every node).
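SBA's local forwarding test can be sketched as a set computation; the data-structure names are our own, and the 2-hop neighbor tables would be built from periodic 'HELLO' exchanges:

```python
def sba_should_forward(my_neighbors: set, neighbor_tables: dict,
                       heard_from: set) -> bool:
    """SBA-style check, run when a node's random assessment delay for a
    packet expires: forward only if some 1-hop neighbor is not already
    covered by the rebroadcasts overheard so far.

    neighbor_tables maps each node to its 1-hop neighbor set (i.e., the
    2-hop information collected via HELLO messages)."""
    covered = set()
    for sender in heard_from:
        covered |= neighbor_tables.get(sender, set())
        covered.add(sender)  # the sender itself clearly has the packet
    return not (my_neighbors <= covered)
```

Because the decision is re-evaluated from overheard traffic each time, a loss at one backbone node can be compensated by another node noticing the resulting coverage gap, which is the dynamic behavior the evaluation contrasts with static AHBP/DCB.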
Fig. 1. Coverage, 30 nodes.
B. Basic Robustness Evaluation

Figure 1 shows the coverage obtained by the different NWB alternatives with 30 nodes. Several observations can be made on this figure: (1) coverage degrades for all five approaches as the loss rate increases; this is not surprising, since all these protocols rely on the unreliable MAC level broadcast. However, the severity of the problem is surprising; (2) flooding remains the most reliable approach, because the other approaches reduce the redundancy available in flooding; (3) the static algorithms (AHBP and DCB) perform worse than the dynamic approaches because the forwarding responsibilities are determined statically, making it difficult to recover from losses.

²We are thankful to Tracy Camp's group at the Colorado School of Mines for providing us with the base implementation of the SBA and AHBP protocols. We also thank W. Lou for the DCB implementation.

Fig. 2. Normalized overhead, 30 nodes.

Figure 2 shows the normalized overhead of the different NWB approaches in a 30-node environment. As expected, AHBP, DCB, and SBA have much lower overhead than flooding and LBA, due to their neighborhood knowledge algorithms. However, note that for the topology sensitive algorithms (AHBP, DCB, and SBA), this graph does not factor in the cost of the 'HELLO' messages that are periodically sent to accomplish neighbor discovery. Neighbor discovery is more expensive than flooding, since every node sends an update and receives several updates; it is even more expensive if two-hop neighbors are tracked, as is the case in SBA. However, this cost may be amortized over several NWBs, depending on the NWB generation rate and the topology change rate. Thus, the overhead shown in Figure 2 is a lower bound for the topology-sensitive
7
1
protocols, that can be quite optimistic if neighborhood discovery frequency is not much lower than NWB frequency.
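The amortization argument can be made concrete with a rough back-of-the-envelope model; the model and its parameter names are ours, not the paper's. Each HELLO round costs one transmission per node (a normalized overhead of 1.0, on the same scale where flooding is 1.0) and is shared by however many NWBs occur per discovery period:

```python
def effective_overhead(base_overhead, hello_rounds, nwbs_per_period):
    """Normalized per-NWB overhead once periodic neighbor-discovery
    traffic is charged to the NWBs it serves.

    base_overhead: rebroadcast overhead of the NWB algorithm itself
        (flooding = 1.0, CDS-based schemes well below that).
    hello_rounds: HELLO rounds per discovery period (roughly one for
        1-hop tracking, more for 2-hop tracking as in SBA).
    nwbs_per_period: NWBs amortizing each discovery period.
    """
    return base_overhead + hello_rounds / nwbs_per_period
```

With a base overhead of 0.4 and one HELLO round amortized over four NWBs, the effective overhead is 0.65; if NWBs are no more frequent than discovery rounds, the discovery cost alone can exceed the cost of flooding, which is the sense in which Figure 2 is a lower bound.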
C. Evaluation of SAR Approaches

Using a purely probabilistic SAR metric, coverage is slightly increased, while overhead is increased by the percentage used as the probability value P. This approach is a blind method of controlling robustness: a node rebroadcasts based solely on a probability and takes no external observations into account. As a result, nodes perform the SAR broadcast when it is not necessary, while a critical rebroadcast may still not occur. Using the counter-based metric for SAR, coverage is increased more than with the probabilistic approach, and the overhead fluctuates with network conditions. This approach does make use of observed network behavior, and the results are better, with higher coverage and lower overhead. A counter-based approach is not blind, but it is also not adaptive. For example, nodes near the edge of the network (leaf nodes) may never have their counter value met, causing them to perform the additional SAR broadcast. This broadcast is wasteful, since no new nodes will be covered.

The effect of leaf nodes broadcasting unnecessarily can be seen at a 0 loss rate in the counter-based solution: in this case, the only losses in the network are due to self-interference, yet the counter-based version shows a significant increase in overhead with SAR even at a 0 loss rate. To address this problem in an algorithm such as AHBP, a node can use its neighbor knowledge to determine how strongly connected it is; a leaf node will not have a high neighbor count. This knowledge can be used to suppress the additional SAR broadcast once the node has heard a number of broadcasts equal to its CDS neighbor count. Furthermore, for static algorithms such as AHBP, the neighbors that are to rebroadcast are known at the upstream sending node. The sending node can check that it has heard a transmission from each rebroadcasting CDS node, and cancel the additional SAR broadcast if the broadcast was propagated successfully. An additional metric is the MAC utilization: a node in a section of the network with more traffic has a higher MAC utilization value, and the chances of a successful broadcast are smaller. Figures 3 and 4 compare the base AHBP protocol, the counter-based AHBP-SAR protocol, AHBP-SAR with leaf node pruning, and AHBP-SAR with MAC utilization. Leaf node pruning achieves coverage at the same level as the counter-based solution, with a lower overhead, due to the suppression of broadcasts from leaf nodes that do not cover additional areas of the network. The MAC utilization version adapts to the increasing loss level in the network; applied properly, this metric can also be valuable. Since each metric is valuable on its own, all are included in the criteria used for the adaptive SAR protocols.

Fig. 3. Coverage - Additional SAR Metrics
Fig. 4. Overhead - Additional SAR Metrics

The adaptive metrics cover cases where a counter-based metric does not perform well; they adjust according to additional external network indicators, such as the observed network traffic load. Figures 5 and 6 show the coverage and overhead for the three approaches in a sparse 30-node environment, with modifications to the AHBP protocol. In these graphs, the probabilistic approach has the poorest coverage, along with an overhead that does not adapt to the network conditions. The counter-based approach has higher coverage than the probabilistic approach, along with an overhead that tends to increase with the loss rate in the network. Lastly, the adaptive approach has the best overall coverage, with an overhead that increases as the loss rate increases. This overhead starts out around the level of the base AHBP protocol and stays below the other approaches until the loss rate is sufficiently high; even at very high loss rates, the overhead remains below that of flooding, which is constant at a value of 1. The results for the other protocols and densities were similar, and are not presented here to conserve space. Furthermore, since the adaptive approach had the best results, the studies in the rest of this section deal solely with the adaptive protocols.

Fig. 5. SAR Approaches Comparison, Coverage
Fig. 6. SAR Approaches Comparison, Overhead
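The non-adaptive triggers discussed above (probabilistic, counter-based, and counter-based with leaf pruning and MAC utilization) can be sketched in one decision routine. This is an illustrative sketch; the counter threshold, the MAC-utilization cutoff, and the combination of the signals are placeholder choices, not values from the evaluation:

```python
import random

def sar_decision(mode, heard_copies=0, counter_threshold=2,
                 cds_neighbor_count=None, mac_utilization=0.0, p=0.5):
    """Decide, at SAR-timer expiry, whether to send the additional rebroadcast.

    "probabilistic":    blind coin flip with probability p.
    "counter":          rebroadcast if fewer than counter_threshold duplicate
                        copies were overheard before the timer expired.
    "counter+pruning":  additionally suppress leaf nodes, whose duplicate
                        count can never reach the threshold because they have
                        few CDS neighbors to overhear, and let a busy channel
                        (high MAC utilization) also argue for a rebroadcast.
    """
    if mode == "probabilistic":
        return random.random() < p
    if mode == "counter":
        return heard_copies < counter_threshold
    if mode == "counter+pruning":
        if cds_neighbor_count is not None and heard_copies >= cds_neighbor_count:
            return False    # every expected forwarder was heard: wasteful
        return heard_copies < counter_threshold or mac_utilization > 0.5
    raise ValueError(mode)
```

The pruning branch captures why leaf nodes stop inflating the overhead at a 0 loss rate: once a node has heard as many copies as it has CDS neighbors, an extra transmission cannot cover anyone new.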
D. Adaptive SAR Protocols

The design of the adaptive SAR protocols is as follows:
1) Flooding: This combines the counter-based approach, MAC utilization, density, and the observed SAR effect. When a node receives a flooded packet, a timer is set to later rebroadcast the packet an additional time. The node increments a counter every time the packet is heard after the original reception. When the SAR timer expires, this counter is examined; if it is equal to zero, an additional broadcast is likely to be wasteful, and after a few such cases in a row, SAR can be disabled. The node keeps the counter information for the past few broadcasts and uses it as the observed average number of neighbors. A metric is formed from the ratio of the counter value to the observed average number of neighbors; this metric is combined with the MAC utilization, and the second broadcast is made only if the combined metric is above a threshold.
2) Location-Based Algorithm (LBA): This protocol uses the same SAR metrics as the Flooding protocol described above. The only difference between the two protocols is that LBA attempts to decrease the number of broadcasts by not rebroadcasting a packet if the sending neighbors are nearby.
3) Ad Hoc Broadcast Protocol (AHBP): This combines the MAC utilization metric with a variation of the density criterion. Since AHBP is a static CDS algorithm, the nodes that are required to rebroadcast are known at the time the SAR timer is scheduled. As broadcast packets are received, the SAR timer state is updated with the identity of the node that sent each packet. When the timer expires, a metric based on the ratio of nodes heard from to missing nodes, combined with the MAC utilization, is used to decide whether to rebroadcast the packet. This approach allows the static algorithm to rebroadcast only if the missing CDS nodes have not been heard from; an additional broadcast when all nodes have been covered gains no new coverage and is purely wasteful.
4) Scalable Broadcast Algorithm (SBA): This approach is similar to the AHBP modification. The nodes that were not covered by the original broadcast are tracked, and each duplicate NWB packet is examined for information pertaining to these nodes. When the timer expires, a metric based on the MAC utilization and the ratio of nodes heard from to missing nodes is used to determine whether the second broadcast will occur.
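The adaptive trigger for the two CDS variants (the AHBP and SBA modifications above), where the expected forwarders are known when the SAR timer is scheduled, can be sketched as follows. How the missing-node ratio and the MAC utilization are combined, and the threshold value, are our illustrative choices rather than the evaluated parameters:

```python
def adaptive_sar_decision(expected_forwarders, heard_from,
                          mac_utilization, threshold=0.5):
    """AHBP/SBA-style adaptive trigger, evaluated at SAR-timer expiry.

    expected_forwarders: CDS neighbors selected (or still needed) to
        rebroadcast the packet.
    heard_from: the subset of those nodes actually overheard so far.
    A large fraction of missing forwarders, or a busy channel, pushes
    the score over the threshold and triggers the extra rebroadcast.
    """
    if not expected_forwarders:
        return False                    # leaf: no downstream coverage to protect
    missing = set(expected_forwarders) - set(heard_from)
    if not missing:
        return False                    # all forwarders heard: rebroadcast is wasteful
    missing_ratio = len(missing) / len(expected_forwarders)
    score = max(missing_ratio, mac_utilization)  # either signal can trigger
    return score >= threshold
```

Note that the decision degrades gracefully: when every expected forwarder has been overheard, the additional transmission is suppressed outright, which is the property that keeps the overhead near the base protocol at low loss rates.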
E. Adaptive SAR Evaluation

SAR was implemented according to the criteria presented in the previous section. For the NWB algorithms other than flooding, the additional rebroadcast is scheduled only if an initial rebroadcast is decided upon by the algorithm. For example, in AHBP, where a subset of the nodes is selected to relay the packet further, only those nodes that have an original forwarding responsibility are tasked with an additional rebroadcast according to the probabilistic or counter-based criteria.

Fig. 7. Coverage with SAR, 30 Nodes
Fig. 8. Overhead with SAR, 30 Nodes

Figure 7 shows the coverage obtained with the protocols modified with SAR, along with Flooding and DCB for comparison. Referring back to Figure 1, the coverage of the SAR protocols is higher than that of their respective base protocols. Furthermore, the coverage of the SAR protocols is significantly higher than that of DCB. The overhead of the protocols in a 30-node environment is shown in Figure 8. With the SAR protocols, the overhead increases as the drop probability increases, showing that as the loss rate goes up, additional nodes dynamically compensate for the losses by scheduling a rebroadcast. This dynamic adaptation does not exist in DCB, because its degree of redundancy is statically fixed (at two), regardless of the loss rate experienced in the network.

Fig. 9. Coverage with SAR, 60 Nodes
Fig. 10. Overhead with SAR, 60 Nodes

To demonstrate that these solutions are viable for dense as well as sparse networks, the same protocols were evaluated in a 60-node environment. Figure 9 plots the coverage obtained by the protocols, while Figure 10 plots the corresponding overhead. Again, the coverage obtained by the SAR protocols is better than that of the corresponding base protocols and of DCB. The overhead remains below that of flooding except at very high loss rates, and it adapts upward as the loss rate rises throughout the network. Examining all of this data, the coverage of AHBP with SAR is raised to a level near that of flooding, while its overhead remains significantly below the overhead of flooding. The CDS algorithms modified here tended to have a lower overhead than flooding, but
a much higher coverage. This achieved the first goal of making use of the CDS trees present in many algorithms, which cuts down on redundancy. The second goal of increasing the robustness of the algorithms in the face of losses was also met.

F. Summary

Flooding provides a fixed degree of redundancy: in low density scenarios and at high loss rates, this redundancy is not sufficient. The SAR approaches presented here outperform flooding by dynamically adding redundancy where it is needed. In contrast, in high density scenarios, the available redundancy is sufficient to keep coverage acceptable at a high loss rate, but is overkill at low loss rates. In this case, SAR outperforms flooding under low losses by keeping the overhead much lower than that of flooding and dynamically growing it as the loss rate increases. Finally, for many applications, such as route discovery, the increased reliability is worth a small increase in overhead. Different criteria were examined as potential metrics for deciding when to perform an additional rebroadcast. By combining these criteria, protocols were designed that adapt to the network conditions, resulting in high network coverage with low overhead.
This solution is an example of an NWB robustness solution that uses implicit feedback. Simple criteria were used to trigger an additional rebroadcast; this rebroadcast was performed only when necessary, and added a significant level of coverage as a result. Furthermore, since the feedback is implicit, no extra strain is placed on the network in order to achieve a more robust NWB.
VII. CONCLUSIONS

Network-Wide Broadcasts (NWBs) are important operations in ad hoc networks that are used in several routing and group communication algorithms. Existing research has targeted efficient NWB, reducing the amount of redundancy inherent in flooding (the simplest NWB approach). As a result, the NWB becomes more susceptible to loss of coverage due to transmission losses resulting from heavy interference or transmission errors. This problem arises because NWBs rely on an unreliable MAC-level broadcast operation to reach multiple nodes with one transmission for a more efficient coverage of the nodes; in the presence of interference or transmission errors, some nodes do not receive the NWB. We outlined the NWB solution space and explained the properties of the different solutions in terms of overhead as well as reliability. We demonstrated the problem using probabilistic dropping of packets (which allows us to control the loss rate systematically). As the loss rate rises, and as the density of the network goes down, the coverage achieved by an NWB operation drops. We showed that this is especially true for NWB algorithms that target reducing the redundancy present in floods, making them more susceptible to losses. Furthermore, we showed that approaches that statically determine the set of forwarding nodes perform much worse than ones that dynamically evaluate whether their rebroadcast is likely to be redundant. As a result, the recent DCB algorithm, while it improves coverage relative to other static approaches, remains substantially more vulnerable to losses than dynamic approaches. We evaluated a network-level solution to combat the lack of robustness of NWB operations. Our aim was to locally detect the likelihood of broadcast packet loss without feedback from the receivers; if such a loss is suspected, a selective additional rebroadcast can be used to compensate.

SAR was added to the NWB algorithms (Flooding, LBA, AHBP, and SBA). DCB is the only algorithm we are aware of that attempts to increase the robustness of NWBs, and we used it as a measuring point for our performance. The results show a large improvement in coverage in all cases, with coverage higher than that of flooding and significantly higher than that of DCB. However, this comes at an increase in overhead that rises with the loss rate (to retain the coverage level in the face of losses). When considering the effect of the increased overhead, one must consider the effect on the overall network behavior, which is application sensitive. For example, if NWBs are used for discovering paths in routing protocols, a single NWB may uncover multiple paths that can be used to transport thousands of packets before another NWB is needed. In such an application, the overhead of the NWB is marginal compared to the total number of packets sent; increasing the overhead to improve the reliability of the flood does not appreciably detract from the performance of the network. Alternatively, in an application where NWBs are dominant, the efficiency of the NWB becomes more important.

We did not show the effect of mobility on the studied NWB protocols. Topology-sensitive approaches, such as the CDS algorithms, degrade with mobility, while the dynamic approaches are resilient to it. The problem occurs because tracked neighbor information becomes stale: with mobility, this information changes, leading to incorrect rebroadcast decisions. Adding mobility to the presentation of these results would make the effect of SAR more difficult to quantify. With these approaches, NWB algorithms can now be thought of in two parts: (1) redundancy control, the phase of the algorithm that targets reducing the redundancy in the rebroadcasts; and (2) reliability control, the phase that attempts to recover from losses in the transmissions necessary to cover the nodes. While most existing NWB approaches target the first problem, we believe that ours is the first solution that targets both effectively. A combined solution results in a low-overhead NWB in low-loss environments, and dynamically increased overhead to recover from losses in high-loss environments; we saw that these properties indeed hold for our protocols. In contrast, flooding provides a constant redundancy level, which can be more or less than needed to provide robust coverage. The only NWB algorithm we are aware of that targets improved reliability is DCB; unfortunately, this algorithm has a fixed redundancy as well (two, due to double coverage), and its static nature makes it especially vulnerable to losses.
R EFERENCES
[1] T. Clausen and P. Jacquet, "Optimized link state routing protocol," Internet Draft, Internet Engineering Task Force, Oct. 2003, http://www.ietf.org/internet-drafts/draft-ietf-manet-olsr-11.txt.
[2] S. Das, C. Perkins, and E. Royer, "Performance comparison of two on-demand routing protocols for ad hoc networks," in Proc. of INFOCOM 2000, Mar. 2000.
[3] M. Lewis, F. Templin, B. Bellur, and R. Ogier, "Topology broadcast based on reverse-path forwarding (TBRPF)," Internet Draft, Internet Engineering Task Force, June 2002, http://www.ietf.org/internet-drafts/draft-ietf-manet-tbrpf-06.txt.
[4] E. Royer and C. Perkins, "Multicast operation of the ad hoc on-demand distance vector routing protocol," in Proc. of the ACM International Conference on Mobile Computing and Networking (MobiCom'99), 1999, pp. 207–218.
[5] S. Ni, Y. Tseng, Y. Chen, and J. Sheu, "The broadcast storm problem in a mobile ad hoc network," in Proc. of the ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom'99), Sept. 1999.
[6] E. Pagani and G. Rossi, "Reliable broadcast in mobile multihop packet networks," in Proc. of ACM MobiCom'97, Sept. 1997, pp. 34–42.
[7] J. Lipman, P. Boustead, and J. Chicharo, "Reliable optimised flooding in ad hoc networks," in Proc. of the IEEE 6th CAS Symposium on Emerging Technologies: Frontiers of Mobile and Wireless Communication, May 2004.
[8] D. De Couto, D. Aguayo, B. Chambers, and R. Morris, "Performance of multihop wireless networks: Shortest path is not enough," in Proc. of the First Workshop on Hot Topics in Networks (HotNets-I), Oct. 2002.
[9] K. Alzoubi, P.-J. Wan, and O. Frieder, "Message-optimal connected dominating sets in mobile ad hoc networks," in Proc. of the 3rd ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc 2002), 2002, pp. 157–164.
[10] R. Gandhi, S. Parthasarathy, and A. Mishra, "Minimizing broadcast latency and redundancy in ad hoc networks," in Proc. of the 4th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc 2003), 2003, pp. 222–232.
[11] T. S. Rappaport, Wireless Communications: Principles and Practice, Prentice Hall, 1996.
[12] V. Bharghavan, A. Demers, S. Shenker, and L. Zhang, "MACAW: A media access protocol for wireless LANs," in Proc. of SIGCOMM '94, 1994, pp. 212–225.
[13] C. L. Fullmer and J. J. Garcia-Luna-Aceves, "Solutions to hidden terminal problems in wireless networks," in Proc. of SIGCOMM 1997, 1997, pp. 39–49.
[14] B. Crow, I. Widjaja, J. Kim, and P. Sakai, "IEEE 802.11 wireless local area networks," IEEE Communications Magazine, pp. 116–126, Sept. 1997.
[15] R. Sivakumar, P. Sinha, and V. Bharghavan, "Braving the broadcast storm: Infrastructural support for ad hoc routing," Computer Networks: The International Journal of Computer and Telecommunications Networking, vol. 41, no. 6, pp. 687–706, Apr. 2003.
[16] H. Lim and C. Kim, "Multicast tree construction and flooding in wireless ad hoc networks," in Proc. of the ACM International Workshop on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM), 2000.
[17] W. Peng and X. Lu, "On the reduction of broadcast redundancy in mobile ad hoc networks," in Proc. of MobiHoc 2000, 2000.
[18] K. Tang and M. Gerla, "MAC reliable broadcast in ad hoc networks," in Proc. of IEEE MILCOM 2001, Mar. 2001.
[19] W. Lou and J. Wu, "Double-covered broadcast (DCB): A simple reliable broadcast algorithm in MANETs," in Proc. of INFOCOM 2004, 2004.
[20] P. Rogers and N. Abu-Ghazaleh, "Robustness of network-wide broadcasts in MANETs," in Proc. of the 2nd IEEE International Conference on Mobile Ad Hoc and Sensor Systems (MASS 2005), Nov. 2005.
[21] K. Viswanath and K. Obraczka, "An adaptive approach to group communications in multi-hop ad hoc networks," in Proc. of the International Conference on Networking, 2002.
[22] M. Kabir, "Region-based adaptation of diffusion protocols in MANETs," M.S. thesis, University of Stuttgart, Germany, Nov. 2003.
[23] K. Tang and M. Gerla, "MAC layer broadcast support in 802.11 networks," in Proc. of IEEE MILCOM 2001, Oct. 2001, pp. 544–548.
[24] P. Rogers and N. Abu-Ghazaleh, "Directed broadcast: A MAC level primitive for robust network broadcast," in Proc. of the IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob 2005), Aug. 2005.
[25] UC Berkeley/LBNL/ISI, "The ns-2 network simulator with the CMU mobility extensions," 2002, http://www.isi.edu/nsnam/ns/.
[26] W. Peng and X. Lu, "AHBP: An efficient broadcast protocol for mobile ad hoc networks," Journal of Science and Technology (Beijing, China), 2002.