Scheduling techniques in wireless mesh networks - Jason Ernst

0 downloads 0 Views 1MB Size Report
Resources are assigned to each node as shown in Eq. 1. (Eq.1). Where: R is the resources .... So it may be better to distribute some of the free resources to ...... main difference with our approach is the use of distributed requirement tables located at each of ... scheduling is done through the use of START and E D packets.
SCHEDULING TECHNIQUES IN WIRELESS MESH NETWORKS

A thesis Presented to The Faculty of Graduate Studies of The University of Guelph

by JASON B. ERNST

In partial fulfilment of requirements for the degree of Master of Science April, 2009

© Jason B. Ernst, 2009

ABSTRACT

SCHEDULING TECHNIQUES FOR WIRELESS MESH NETWORKS

Jason Bruce Ernst University of Guelph, 2009

Advisor: Dr. Mieso Denko

Wireless mesh networks (WMN) are a promising technology which provides wireless broadband connectivity to the Internet. This thesis is an investigation of scheduling problems in WMNs. First, existing scheduling solutions are discussed and classified based on technique and implementation framework. Then two novel proposed schemes are discussed in detail. The first proposed technique is a multiple gateway fair scheduling scheme. This scheme consists of distributed routing and requirement tables and a propagation algorithm for scheduling at the gateways. Simulation results confirm that fair scheduling has better performance than the scheme without fair scheduling and that multiple gateways are beneficial. The second proposed scheme is a cross-layer mixed-bias solution. We bias against distance from gateway, size of queue, link quality and a combined mixed-bias technique. Simulation results confirm that the mixed-bias approach performs better than IEEE 802.11 DCF for wireless mesh networks with respect to the metrics used for evaluation.

ACKNOWLEDGEMENTS

First, I would like to thank my advisor, Prof. Mieso Denko and my co-advisor Prof. Hongbing Fan for all of their insightful guidance and helpful comments and advice during the development of the ideas presented in my thesis. I am also grateful for the expertise provided by committee members, Prof. Xining Li and Prof. Obimbo who provided additional feedback and helpful suggestions. Additionally, I would like to acknowledge my parents, Bruce and Debbie, my sister Nicole, Sarah Fewster and many other friends and family who provided support, inspiration and encouragement to complete my thesis in a timely manner. I would like to thank my colleagues and friends in the PERWIN research group at Guelph: Thabo Nkwe, Nikhi Saxena, Dario Guiao, Jayt Jughmohan, Brian Heisler and Henry Phan for attending my various presentations and providing fresh perspectives and questions on all aspects of my work as well as additional support and inspiration throughout my studies and the University of Guelph. Lastly, I am grateful for the support staff, administration and other faculty members in the Computing and Information Science department at the University of Guelph, who were always helpful in submitting forms, booking rooms and other equipment whenever necessary.

TABLE OF CONTENTS TABLE OF CONTENTS................................................................................................. i List of Figures ................................................................................................................ iv List of Acronyms ............................................................................................................ v Chapter 1 Introduction .................................................................................................... 1 1.1. Introduction to WMNs......................................................................................... 1 1.2. Problem statement................................................................................................ 2 1.3. Contributions........................................................................................................ 3 1.4. Organization of thesis .......................................................................................... 4 Chapter 2 Background and related work ........................................................................ 5 2.1. Definition of fairness ........................................................................................... 5 2.2. Motivation for fair scheduling in WMNs ............................................................ 8 2.3. Classification and comparison of fair scheduling techniques.............................. 8 2.3.1. Hard fairness ................................................................................................. 9 2.3.2. Max-min fairness .......................................................................................... 9 2.3.3. Proportional fairness ................................................................................... 10 2.3.4. Mixed-bias scheduling ................................................................................ 10 2.3.5. Maximum throughput ................................................................................. 11 2.3.6 Comparison of techniques............................................................................ 12 2.3.7. Centralized and decentralized scheduling................................................... 14 2.4. Analysis of limitations and assumptions in current techniques ......................... 15 2.5. Motivation for cross-layer design in WMNs ..................................................... 18 2.6. Overview of cross-layer architectures ............................................................... 19

i

2.7. Analysis of cross-layer design ........................................................................... 20 2.7.1. Strengths and weaknesses of cross-layer design......................................... 21 2.7.2. Comparison of cross-layer techniques ........................................................ 22 2.7.3. Cross-layer extension of existing scheduling solutions .............................. 24 2.8. Motivation for cross-layer mixed-bias framework ............................................ 25 Chapter 3 Proposed approaches .................................................................................... 27 3.1. System assumptions ........................................................................................... 27 3.2. Overview of fair scheduling approach ............................................................... 28 3.3. Overview of cross-layer mixed-bias scheduling................................................ 30 Chapter 4 Detailed descriptions of the proposed approaches ....................................... 34 4.1. Fair scheduling for WMNs with multiple gateways .......................................... 34 4.1.1. Requirement tables...................................................................................... 36 4.1.2. Requirement propagation............................................................................ 38 4.1.3. Clique generation ........................................................................................ 40 4.1.4. Schedule generation .................................................................................... 40 4.2. Cross-layer mixed-bias techniques .................................................................... 41 4.2.1 Biasing against distance from gateways ...................................................... 43 4.2.2. Biasing against low demand (queue size)................................................... 45 4.2.3 Biasing against poor link quality ................................................................. 46 4.2.4 Combined cross-layer mixed-bias scheduling ............................................. 47 Chapter 5 Performance evaluations of the proposed approaches ................................. 49 5.1. Comparison of simulation environments ........................................................... 50 5.1.1. Custom C/C++ simulation environment ..................................................... 50

ii

5.1.2. Network simulation 2.................................................................................. 50 5.1.3. Network simulation 3.................................................................................. 51 5.2 Performance metrics ........................................................................................... 52 5.3. Performance evaluation of fair scheduling with multiple gateways .................. 53 5.3.1. Simulation environment.............................................................................. 54 5.3.2. Performance metrics ................................................................................... 55 5.3.3. Simulation parameters ................................................................................ 55 5.3.4. Effects of number of mesh routers.............................................................. 56 5.3.5. Effects of number of gateways ................................................................... 58 5.3.6. Summary of fair scheduling results ............................................................ 59 5.4. Performance evaluation of cross-layer mixed-bias scheduling ......................... 60 5.4.1. Simulation environment.............................................................................. 61 5.4.2. Performance metrics ................................................................................... 61 5.4.3. Simulation parameters ................................................................................ 62 5.4.4. Effects of number of mesh routers.............................................................. 63 5.4.5. Effects of number of gateways ................................................................... 66 5.4.6. Effects of number of traffic flows............................................................... 68 5.4.7. Summary of cross-layer mixed-bias results................................................ 69 Chapter 6 Conclusions and future work........................................................................ 70 6.1. Conclusions........................................................................................................ 70 6.2. Future work........................................................................................................ 71 REFERENCES ............................................................................................................. 73

iii

List of Figures Figure 1 Venn diagram showing relationship between throughput and fairness................ 6 Figure 2 Example of a Mobile WMN for various applications ........................................ 16 Figure 3 Wireless Mesh Network (WMN) with multiple gateways ................................ 29 Figure 4 Distributed Requirement Tables Combined at GW ........................................... 38 Figure 5 Average Packet Delivery Ratio with Varying Mesh Routers............................. 56 Figure 6 Average Delay with Varying Mesh Routers ...................................................... 57 Figure 7 Average Packet Delivery Ratio with Varying Gateways ................................... 58 Figure 8 Average Delay with Varying Gateways ............................................................. 59 Figure 9 Packet Delivery Ratio, Two Flows..................................................................... 64 Figure 10 Average End-To-End Delay, Two Flows......................................................... 65 Figure 11 Average End-To-End Delay, Five Flows ......................................................... 66 Figure 12 Average Packet Delivery Ratio, 250 Clients, 50 MR....................................... 67 Figure 13 Average End-To-End Delay, 250 Clients, 50 MR ........................................... 68 Figure 14 – Packet Delivery Ratio, Varying Flows.......................................................... 69

iv

List of Acronyms AODV

Ad-hoc On Demand Distance Vector

DCF

Distributed Coordination Function

DSDV

Destination-Sequenced Distance-Vector

ETX

Expected Transmission Count

GPS

Global Positioning System

GW

Gateway

MAC

Medium Access Control

MANET

Mobile Ad Hoc Network

MC

Mesh Client

MR

Mesh Router

NS2

Network Simulation 2

NS3

Network Simulation 3

OLSR

Optimized Link State Routing

PDR

Packet Delivery Ratio

PER

Packet Error Rate

QoS

Quality of Service

RTT

Round Trip Time

SINR

Signal to Noise plus Interference Ratio

STDMA

Spatial Time Division Multiple Access

TDMA

Time Division Multiple Access

WLAN

Wireless Local Area Network

WMN

Wireless Mesh Network

v

Chapter 1 Introduction 1.1. Introduction to WMNs Wireless Mesh Networks (WMNs) have become the focus of much research since they allow for increased coverage while retaining the attractive features of low cost and easy deployment.

WMNs have been identified as key technology to enhance and

compliment existing network installations as well as provide access where traditional technology is not available or too costly in install [18]. A WMN is made up of mesh routers (MRs), which have limited or no mobility, and mesh clients (MCs) which are often fully mobile. The mesh routers form the backbone of the network allowing the clients to have access to the network through the backbone. We propose an algorithm for fair scheduling in WMNs with multiple gateways. We also propose another algorithm for scheduling which places more emphasis on throughput while retaining a basic level of throughput called mixed-bias. This technique biases against characteristics of the network which are detrimental to performance, fairness, or both. Many protocols currently implemented for WMNs have evolved from traditional single-hop wireless local area networks (WLAN) and mobile ad-hoc networks (MANET). However, both of these networks have characteristics which make them very different from WMNs. While WLANs have relatively static topologies, MANETs on the other hand are fully mobile. Therefore, using protocols designed solely for either of these networks alone does not take advantage of some of the most advantageous features of WMNs. In MANETs all nodes are routers and suffer from limited power and bandwidth. In a WMN the MRs have greater resources available than the MCs which is a property

1

that may be exploited. Although a lot of research efforts have been made to address these problems and some new specialized algorithms have been proposed specifically for WMNs, there are still many challenges in the area. Many of the existing solutions make many assumptions that can be relaxed to allow for a more general approach to be taken. There is much motivation for studying fair scheduling in WMNs. Without work on fair scheduling for wireless mesh networks it would be extremely difficult to design a commercial network where all users could expect relatively equal service. Many existing deployments of WMNs do not account for greedy users or flows, which means that it is possible for extremely unequal service between users. Some existing protocols are designed without fairness in mind. These protocols focus on other important characteristics such as high throughput. Greedy nodes may take advantage of the network by using an unfair share of the network resources such as bandwidth, causing other nodes to get an unfair share.

Many WMN deployments are optimized with respect to

throughput, delay, or some other features that give little regard to fairness. This thesis provides two approaches to scheduling: One approach that emphasizes fairness while relaxing the assumption of a single gateway. The second approach is a cross-layer mixed-bias approach that emphasizes throughput while retaining a basic level of fairness. 1.2. Problem statement Since a WMN is a multi-hop wireless network, there are unique challenges to deal with when compared with traditional wired and wireless networks. The wireless channel is a broadcast medium, meaning that all nodes within a certain range are subjected to interference and cannot transmit simultaneously. At the same time, it is difficult to sense whether communication is taking place in other parts of the network because of hidden

2

and exposed node problems [16,24] where an intermediate node may be stuck in between two nodes which are trying to transmit simultaneously but out of range of each other. The solution to many of these problems can be scheduling of transmissions. The scheduling algorithms that exist have problems as well. As mentioned previously, scheduling in WMNs seems to be a problem of balancing two often disjoint goals; one is ensuring fairness among client nodes in the network, another is ensuring the network is performing at as close to capacity as possible. The goal of a good scheduling algorithm is to find a balance between these two goals. This thesis will provide scheduling techniques for wireless mesh networks which find a balance between fairness and throughput in WMNs using both traditional layered approach and emerging cross-layer design based optimization 1.3. Contributions The main contributions of this thesis are (i) in-depth comparison and analysis of existing scheduling techniques [19], cross-layer design approaches [20], and simulation tools for WMNs (ii) the proposal and implementation of a unique fair scheduling algorithm for WMNs with multiple gateways (GW)s [19] and (iii) the proposal and implementation of a unique algorithm for scheduling using a mixed-bias approach. The main contributions to the fair scheduling approach are distributed routing and requirement tables at each router which help in generating the scheduling. The main contributions to the mixed-bias approach are three new mixed-bias techniques: mixedbias against queue length, mixed-bias against link quality and lastly a more general combined mixed-bias approach which combines biasing approaches against many characteristics. The proposed techniques are compared to similar existing solutions as a

3

baseline to gauge the performance of our approach against the existing techniques. The performance is evaluated using simulation with respect to two metrics: packet delivery ratio and average end-to-end delay. 1.4. Organization of thesis The remainder of this thesis is organized as follows. Chapter 2 will give background and related work on scheduling in WMNs. We will define fairness, give motivations for studying scheduling, and compare and contrast existing work. This will be followed by an overview on cross-layering. Cross-layering is an important technique for enhancing performance in wireless networks and will be used in this thesis. We will also compare and analyse scheduling and cross-layer based techniques to provide a thorough review of the literature in the area. Chapter 3 will include a brief overview of both the fair scheduling and mixed-bias techniques which are proposed in this thesis. Chapter 4 presents detailed description of both of the approaches along with in-depth descriptions of the algorithms and components of each solution individually. This includes discussions on distributed routing tables, the requirement propagation in the WMN, and the scheduling for the fair approach. For the mixed-bias approach, four mixed-bias techniques are presented formally in detail. In Chapter 5 experimental results are presented. We also provide a discussion of the performance of our proposed approaches in terms of packet delivery ratio and average end-to-end delay. Finally, Chapter 6 concludes the thesis and discusses future research directions.

4

Chapter 2 Background and related work In this chapter we first define fairness with respect to wireless mesh networks. We then give a brief introduction to fair scheduling techniques. This is followed by a classification, comparison, and analysis of current scheduling solutions. The literature review establishes where our proposals stand in comparison to the existing work. We identify areas where more research could be accomplished in the future. Moreover we identify work that is most similar to our own. Lastly, we describe how cross-layer design can be used to further improve scheduling in wireless mesh networks and why a mixedbiased cross-layer approach is a promising technique for cross-layer scheduling. 2.1. Definition of fairness A number of scheduling and resource allocation techniques have been proposed for WMN in literature [14,27,28,30,31,37,39,40,41,55]. The trend is a tradeoff between the throughput and fairness using a constant weighting system or a dynamic weighting system that changes the weights over time to achieve a long-term fairness. It is important to note that fairness could occur at different points in a wireless mesh network. Some researchers have proposed per-mesh-router fairness or per-link fairness [42]. There is also a notion of “uplink-downlink fairness” [31,35,46] because the mechanisms in some current solutions, such as IEEE distributed coordination function (DCF) [46] allows for inequality between the directions of flow in WMNs. In other words, an improvement in downlink throughput may severely affect performance of the uplink or vice-versa. However, more recently [42,54] has focused on per-client fairness. The motivation behind this is that in commercial applications each user is paying an equal amount of

5

money for services from the network so each user should get equal Quality of Service (QoS). It is also important to consider which metrics fairness is being defined with respect to. For example, a scheduling algorithm could provide fairness in terms of the possible throughput available but the delay may not be equal. Certain nodes in the network may remain starved for traffic while other nodes are free to communicate for various reasons. It is also important to consider that fairness and scheduling is affected by intruders in the system. Many of the existing solutions for scheduling in WMNs rely on the assumption of co-operation between nodes, and this is not always the case in real world networks. The Venn-diagram shown in Figure 1 helps to illustrate the tradeoff between throughput and fairness in existing solutions. In one circle is throughput and in the other is fairness. On the right side of the diagram in Figure 1, the algorithms which favour throughput exist where fairness is a very low priority.

Max-Min Max Throughput

Fairness

Throughput Proportional, Weighted Mixed Bias

Hard-Fairness, Round Robin

Figure 1 Venn diagram showing relationship between throughput and fairness as well as some fair scheduling techniques [19]

These algorithms usually give preference to flows which are least expensive by some criteria. These criteria may be distance from the gateway, delay, small flows and

6

other similar metrics. However, this approach allows for starvation or reduced QoS for flows which do not meet the criteria. Preference is given to greedy flows. On the right side of the diagram in Figure 1 is absolute or hard-fairness. This side gives little priority to throughput and ensures that each client gets a fair share of the network resources. This may be achieved by using a time division mechanism or other similar approaches. The problem with this approach is that not all flows require the same amount of resources at all times so the resources may remain unused at times resulting in poor throughput. One approach which aims for a balance between the competing goals of fairness and throughput, denoted as max-min fairness [62] works by maximizing the minimum data rates for each flow. It results in higher throughput than hard-fairness, however, the overall throughput is still much less than maximum throughput and leaves much to be desired. The most interesting definition of fairness then is a compromise between hard-fairness and maximum throughput. In [9,31,37,47,54,59] this approach has been denoted as proportional fairness. Proportional fairness assigns priority to certain flows based on criteria such as the number of hops or amount of resources requested. Similarly, the max-min approach has also been modified with a proportional factor as well yielding improved results. A new approach called mixed-bias is a hybrid approach which emphasizes throughput while still providing a basic level of fairness. In the scheme proposed in [54], a portion of the resources are assigned to a strong biasing against nodes which are far away from each other. In order to prevent starvation, however, another portion is assigned to a proportional or max-min scheme as well. This is one of the first approaches that is able to offer a minimum level of fairness while retaining throughput which is often even greater than of proportional fairness or max-min.

7

2.2. Motivation for fair scheduling in WMNs The first motivation for studying fairness in WMNs is in networks where users are paying equal amounts of money for service and expect a similar quality of service (QoS). Often existing solutions focus on either the problem of throughput or the problem of fairness. It is often difficult to create a solution which addresses both of these problems since they are divergent goals. Recently, however, with works like those of [54], it is possible to have high throughput solutions that avoid node starvation. Mesh Clients (MCs) which are far away from the gateways (in terms of hops) often receive much lower QoS than those which are very close. This is because while the farther users’ packets are traversing all of the hops along the path, there is a transmission and queuing delay at each hop. The nodes which are close to the gateways do not experience this and can often transmit many packets while the farther nodes are still waiting for one packet to arrive. However, if we give each node enough time to transmit regardless of distance the throughput of the network decreases dramatically. This is because the delay increases greatly by giving each node enough time to transmit regardless of distance to the GW. Some nodes may end up waiting almost indefinitely while other nodes are transmitting. 2.3. Classification and comparison of fair scheduling techniques Fair scheduling protocols for wireless mesh networks can be classified into five categories. These categories in order of fairness from the most fair to the least fair are: Hard-fairness [37,9,42,46,55], max-min [62,54], proportional fairness [9,31,37,47,54,59], mixed-bias [54] and maximum throughput [7]. This classification and the relationship among them are shown visually in Figure 1.

8

2.3.1. Hard fairness Hard fairness [37,9,42,46,55] is also known as round-robin scheduling. It has been used in some of the earliest wireless networks and in simplistic network models since it is the least complex. It is the fairest scheme since each node is guaranteed exactly equal amount of time in order. In networks where the nodes only require a small proportion of resources hard fairness causes problems. Since each node is given time to transmit at regular intervals, if the node does not have any data to send, the time is wasted. This leads to very low overall throughput. At the same time, however, the problem of node starvation does not exist. Resources are assigned to each node as shown in Eq. 1.

R=

Where:

1 # flows through the node

(Eq.1)

R is the resources allocated to the node

2.3.2. Max-min fairness Max-min fairness [54,62] allocates resources in order of increasing demand. The minimum amount of resources assigned to each node is maximized. So if there are more than enough resources for each node, every node gets what it needs. If there is not, the resources are split evenly. This means that the nodes which require fewer resources get a higher proportion of their need satisfied. The nodes which require more resources end up dropping many packets and thus the network ends up with still quite low packet delivery ratio. This type of scheme works best in situations where there is not large differences in resources requested at each node. This can be a problem in a mesh network because intuitively, the nodes closer to the gateways will experience much higher traffic than

9

those on the outside of the network, yet may end up dropping many of the packets anyway. This may be partially solved by increasing the resource capacity of nodes closest to the gateways. 2.3.3. Proportional fairness Proportional fairness [9,31,37,47,54,59] allocates resources proportional to some characteristic in the network. For example, one may choose to give priority to nodes which are close to the gateways in a wireless mesh network. The amount of resources allocated then would be proportional to how close the node is to the gateway. The strength of the proportionality can be controlled depending on the proportionality factor as can be seen in Eq. 2.

R=

Where:

1 cβ

(Eq. 2)

R is the resources allocated to the node c is the characteristic which priority is given to, c > 0 β is the proportionality factor, β > 0

2.3.4. Mixed-bias scheduling Mixed-bias [54] scheduling allows for different levels of control over resources. Rather than just allowing for one bias, this scheme mixes two different biasing levels together. A certain proportion of the resources are assigned to one factor and the rest to another factor as shown in Eq. 3. This allows the scheduling algorithm to provide two different biasing levels or “mixed-biasing” against a certain characteristic. Rather than just strongly biasing against that characteristic which may result in certain nodes to be

10

starved, the mixed-biasing allows for a combination of weak and strong biasing meaning that a portion of the resources are reserved to provide a minimum service level, even for the nodes which are undesirable in terms of certain characteristics. This is shown mathematically in Eq. 3.

R=

Where:

α c

β1

+

1−α c β2

(Eq. 3)

R is the resources allocated to the node c is the characteristic which priority is given to, c > 0 β1, β2 are the proportionality factors, β1, β2 > 0 α is the fraction of resources assigned to each bias α >= 0

2.3.5. Maximum throughput Maximum throughput [7] scheduling has only one goal. As the name suggests, this goal is to maximize throughput. This is the only concern of this particular type of scheduling. Whichever node requires the most resources, or can transmit the fastest or most data gets access to the resources first.

This ensures a very high throughput,

however, there is a limitation with this approach. Nodes which have less priority, such as; those far away from gateways, those with fewer users, fewer flows, or less demanding traffic are essentially ignored. If enough time passes, all of the packets waiting in the queues at these MRs are dropped causing some nodes to be starved for traffic. This causes performance problem and should be avoided.

11

2.3.6 Comparison of techniques In general, as fairness increases throughput decreases, although in [59] a study compared round-robin to proportional fairness and found that in certain situations (such as indoors) both perform similarly due to fading and interference effects. If this is the case it is better to use the fair scheduling approach because fairness is an important consideration in networks, especially if ensuring fairness produces little or no noticeable effect in the overall throughput. In some cases, ensuring fairness may actually increase the overall throughput. For example, consider the case where a node close to the GW is using far more resources than other nodes in the network. A farther node may request resources very infrequently, however, because of the closer node, its requests may be ignored or rarely serviced because the closer node will almost always get its requests to the GW before the farther node. Having a network which offers high throughput to only some users is not ideal. It is better to have lower overall throughput, but similar throughput available to all users. This does not necessarily mean that instantaneous throughput must be fair. Long term fairness is also desirable. At a discrete point in time, certain nodes may not require their full share of the resources. So it may be better to distribute some of the free resources to other nodes. There are two causes to unfairness in a wireless mesh network: (i) A small portion of users are consistently requesting more than their share of resources and; (ii) All or most users in the network are consistently requesting more than their share. The first situation results in nodes which are starved due to the greedy nodes. The second situation results in the complete breakdown of the network if sufficient scheduling is not implemented. In an extremely congested network, there are no nodes left to borrow

12

resources from and so a good scheduling algorithm is required to balance the share of resources. The scheduling scheme closest to achieving this goal is mixed-bias scheduling. Proportional fairness and max-min scheduling are actually subsets of this type since they both can be incorporated into mixed-biasing techniques as a biasing strategy. The most common scheduling technique in literature is proportional fairness [9,31,37,47,54,59]. While using a weighting or proportionality factor is a reasonable idea there are other more recent techniques that have emerged. In [37] proportional fairness is partially used. When the resources in the network are plentiful, the system defaults to maximum throughput and when it becomes congested or busy it enforces fairness in the network. Another approach, as mentioned earlier is mixed-bias [54]. Mixed-bias is similar to that of [37] since it segregates the network resources and applies different fairness techniques to each section of resources. So as the network becomes more and more congested and busy the fairness enforcement level becomes stricter. For instance, the first ten percent of the network may use round-robin scheduling to give precedence to greedy flows while there are plentiful resources and the network is not congested. As the network becomes more congested the greedy nodes become penalized more so that a long-term fairness is achieved. In the case of [54], however, instead of biasing against congestion, it was chosen to bias against distance between nodes.

Some other

characteristics which may be biased against include congestion (queue size), link quality, and transmission rate; however, this was not noted in [54].

13

2.3.7. Centralized and decentralized scheduling A scheduling algorithm can also be classified based on whether or not they are centralized, the type of fairness and the metric or mechanisms they use in scheduling. In [45] there is a comparison between the key features of centralized and distributed approaches for scheduling. In [45], there are three observations which explain why distributed scheduling is beneficial: (i) nodes which cannot communicate with the coordinator cannot communicate at all in centralized schemes (ii) overhead from nodes communicating with the coordinator is reduced or eliminated in distributed approach and; (iii) the single point of failure problem is eliminated. For situations where the MRs are anticipated to be static, or the network size is small, it may be easier and more beneficial to use centralized scheduling. In contrast, when the MRs are mobile, it may be better to make use of a distributed approach in case the network becomes partitioned due to mobility. If reliability is a concern or the network size is large, distributed scheduling may also be preferred due to increased reliability and lower overhead. For fair scheduling in WMNs many of the algorithms use the concept of a bandwidth allocation vector [31] or similar approaches [54] to determine how much of the network resource is required from a flow, as well as when and how to schedule the resources. Also fair scheduling algorithms which attempt to avoid collision altogether make use of a compatibility matrix to determine which nodes can communicate at the same time without collisions [9, 42]. Table 1 summarizes the three classifications discussed to allow quick comparison of recently proposed scheduling algorithms for WMNs. In order to keep the table to a manageable size the following notations were

14

used: RR: Round Robin, PF: Proportional Fairness, M-M: Max-Min and M-B: MixedBias. TABLE 1. SUMMARY OF FAIR SCHEDULING ALGORITHMS FOR WMN [19]

Reference

Type of Fairness

Salem [42]

RR with spatial re-use PF

Erwu [31] Koutsonikolas [9] Gupta [47] Viadya [45] Sorensen [56] Cao [37] Nandiraju [46] Singh [54]

Metric / Mechanism

compatibility matrix for collision avoidance, distributed bandwidth requirement weight-factor, distributed RR interference threshold, SINR, centralized PF with service levels access threshold, distributed PF Back-off interval, distributed PF, RR TDMA with and without weighting, distributed PF when necessary bandwidth-allocation vector, centralized / distributed RR uplink-downlink independent uplink & downlink DCF, centralized M-B bias weight function

2.4. Analysis of limitations and assumptions in current techniques In this section we provide an in-depth analysis of some of the assumptions and limitations of the current approaches for fair scheduling in WMNs. The assumptions we will focus on are: limited or no mobility for the MRs, fixed topology of MRs and gateways, the assumption of a single gateway and downlink and uplink equivalence. We will discuss how older techniques developed for single hop and ad-hoc networks may be adapted and applied to WMNs to further advance research and improve scheduling. Finally, there will be a summary table once again to allow for easy comparison between algorithms in terms of assumptions and possible direction for future research. In all of the following sections, an overview on the assumption is provided. Second, the importance of the assumption is identified. Lastly, the consequences of relaxing the assumption in

15

future work would be. For more detail on which techniques make which assumptions see summary Table 2. The assumption of limited or no mobility for mesh routers in a WMN is made in [9,29,31,37,45,46,47,62]. This assumption is made in order to reduce complexities when developing the initial algorithm. However, if this assumption is relaxed it allows for a more general solution which is more flexible and useful. Consider for example a WMN where the MRs are mounted on cars, trains and buses as part of a transit system as shown in Figure 2. The MRs could provide Internet access to passengers on the transit system, allow for wireless surveillance systems on the vehicles to keep passengers safe, or to collect information on the locations of the vehicles to provide estimates on arrival times. In this system we could still make the assumption that the MRs have more resources (power resources, processing and memory) compared with the MCs. Such networks constitute a typical application of future mobile WMNs and could make use of techniques used in wireless ad hoc and sensor networks

as well as those from WLANs for

scheduling and load balancing.

Figure 2 Example of a Mobile WMN for various applications

16

Another assumption is static topology [9,29,31,37,45,46,47,62]. In this case, it is assumed that the topology of the MRs and gateways is either fixed or rarely changes and as such can be manually configured. This is contrary to one of the most important benefits of WMN. A WMN is supposed to be self-configuring, self-healing and flexible so MRs and gateways should be able to be added / removed and as mentioned above, mobile. Coincidentally, all of the solutions which assume no mobility also assumed static topology. This is interesting because it is often much easier to handle adding or removing nodes than it is to handle moving nodes, since there is no handoff requirement in the first case. Again, to keep the scheduling and load balancing algorithms simple it is assumed that there is only one gateway [42,45,46,47,56,62] in the WMN or even assumed that there was no gateway at all (the traffic was limited to local network traffic only) [9,36,37,54]. However, it has been pointed out that one of the main uses for WMN is to provide internet access with expanded service areas from traditional WLANs so that means the majority of the traffic flow is between the gateways and the MCs [61]. Having only one gateway in this scenario is a major bottleneck so the existing solutions should be extended to be able to support any number of gateways to make a truly scalable WMN. In [31,35,46], it is explored whether uplink and downlink scheduling can be treated equally for scheduling and load balancing. The reasoning behind this is that there could be a flow which makes use of uplink traffic to large extent while hardly requiring any downlink traffic, so if there are different schedules for both uplink and downlink, perhaps a higher throughput and greater fairness could be achieved [46]. In [9,29,36,45,47,61,62] it is just assumed that downlink and uplink are equivalent. In

17

[37,56] only one is dealt with at once (for example just uplink scheduling) [37] leaving the reader with the assumption that the opposite (for example downlink) [56] may be dealt with in the exact same manner. Table 2 summarizes the previous five categories to allow for easy comparison and identification of areas of future work for scheduling in WMNs. For clarity we define downlink as DL and uplink as UL in Table 2. TABLE 2. SUMMARY OF ASSUMPTIONS FOR FAIR SCHEDULING IN WMN.

Reference Salem [42] Erwu [31] Koutsonikolas [9] Bejerano [61] Ramachandran [29] Gupta [47] Viadya [45] Sorensen [56] Cao [37] Bejerano [62] Nandiraju [46] Singh [54] Popa [36]

Limited Mobility partial yes yes partial yes yes yes yes yes yes yes yes no

Assumptions Fixed Single GW Topology partial yes Yes yes Yes no GWs partial no Yes no Yes yes Yes yes Yes yes Yes no GWs Yes yes Yes yes Yes no GWs No no GWs

DL = UL Yes No Yes Yes Yes Yes Yes DL only UL only Yes no yes yes

Multihop Yes Yes Yes Yes Yes No No No Yes No No Yes Yes

2.5. Motivation for cross-layer design in WMNs As discussed in [15], certain features within WMNs demand cross-layered design such as advanced antenna technologies, physical layer technologies [22,27,28] and multichannel [23] or multi-radio technologies [3,49].

Sometimes it is necessary to have

information from various layers in order to make decisions in higher layers in the network. This may result in better decision making when it comes to routing, allocating resources or scheduling.

It has been confirmed in [34] that cross-layer design is a

promising technique for performance improvements in WMNs. Several solutions for

18

scheduling in WMNs do not consider cross-layered optimizations or leave this as a future work [36,42,54]. Additionally, there are also some works that do consider cross-layer design but are for ad-hoc networks or single-hop wireless local area networks instead. These solutions could be extended to include WMNs as well and take advantage of the unique properties that exist. Cross-layering allows network layers which normally are unable to communicate in the traditional layered network models to share data. This shared data may allow more intelligent decision making in terms of routing or scheduling. One innovative technique for achieving cross-layered design is to have a network status stack which contains information from all of the layers in the traditional network stack [38]. The traditional stack is then modified to take information out of this parallel stack in order to make more intelligent decisions. 2.6. Overview of cross-layer architectures There are several popular architectures which can be used for cross-layer design in WMNs. In [38], a common stack exists in addition to the traditional layers. The shared stack is accessible by all layers. Each layer can then use information from this shared stack to make more informed decisions at various points during communications. The main advantage of this technique is that it is very extensible compared to other techniques, and often the traditional layered network stack can remain intact. If there is information in the dual stack then the protocols can use it. Otherwise, they can fall back to legacy techniques. The implementation of this technique in practice can vary, from a simple parallel stack to a database style system. In contrast to this, there are various highly coupled techniques, which require two or more layers to communicate directly with one another by modifying the existing

19

layered model. These types of solutions are difficult to extend and maintain and should be avoided [57]. These types of designs may be useful for cutting edge work that is not mature, since the development is less planned and can go ahead more rapidly. Lastly, there are also systems which do not directly communicate, but insert extra control packets into the system that are meant to be read by the layers and discarded before the actual communication occurs. This type of system may not be as practical since extensive modifications may need to be made in order for the layers to understand the new control packets. Since there is little regard for software engineering concepts in this approach, future modifications and extensions may become difficult and costly. Each cross-layer architecture has its benefits and drawbacks, however, the main priority in cross-layered architectures are primarily: (i) easy maintenance and (ii) extensibility. Several others confirm this viewpoint [21,38,57,58]. The performance aspect of the cross-layered approach comes from selecting suitable information and how that information is used. The architecture should be independent from this. Using an architecture based on good software engineering principles allows future work that is increasingly complex and based on even more feedback from all the layers. In this way, with greater feedback we can achieve better performance and make far better decisions than we can in a traditional layered design. Without good architecture, however, this becomes difficult and we are limited to a small amount of feedback. 2.7. Analysis of cross-layer design Scheduling has been studied in operating systems for user scheduling, in databases for transaction scheduling. Cross-layered design has been studied to some extent in wired networks. There is less motivation for cross-layer work in wired networks because the

20

channels are more reliable, and are not prone to radio interference in the way that wireless networks are. Other the other hand, in wireless networks, cross-layering has been studied in both ad-hoc networks and single hop wireless local area networks. These solutions could both be extended and the same techniques could be used in wireless mesh networks. 2.7.1. Strengths and weaknesses of cross-layer design There has been some debate in the past few years on whether or not cross-layered design in wireless networks is beneficial. In [57] a cautionary perspective on cross-layer design is presented. It states that the cross-layer approach can cause undesired interactions and make it difficult for future innovations on the protocol. There are several considerations that must be taken into account when making use of cross-layer design for wireless networks. These include avoiding unbridled design that causes “spaghetti code”, being careful to consider unintended effects from cross-layer optimizations, (for example timescale separation may be required for some optimizations) and long term architectural value of the optimizations [57,58]. On the other hand, the performance improvements and increased ability to make informed decisions based on more data makes cross-layering an attractive option for use in wireless mesh network scheduling protocols. Cross-layering is especially useful at the medium access control (MAC) layer which is the layer that decides when a device has the ability to access the medium. If much information is combined here to give the MAC layer an informed decision, the network may perform much better. For instance, if the MAC layer knows information about the link quality, the transmission speeds available, how much queue space it has left, and its distance from its final destination, it is able to gauge how much priority its transmission should have. As long as the system for getting the relevant information to the MAC layer is designed carefully,

21

there is no reason why cross-layering should not be applied. Similarly at the routing layer, if we know information about the state of the links, we can make a good choice on which link may be the best option for sending the packet. As mentioned previously, one popular method for retaining good design and a cross layered approach is by using a parallel stack which may be accessed by a modified traditional stack [38]. In this way the traditional layered stack from wired networks is retained and still modular while avoiding code which is difficult to maintain and improve. 2.7.2. Comparison of cross-layer techniques Based on literature reviewed, we identified four techniques or methods in which cross-layered design is applied for fair scheduling in wireless mesh networks. These are power adaption, rate control and route adaption and network coding. The following paragraphs will outline some examples of each technique of cross-layered design. The layers used in each technique will also be identified. 1. Power Adaption: Power adaption is where the power levels of competing nodes are adjusted to ensure greater fairness between the two. This technique is used in [23,24]. In addition to ensuring more fairness in regards to scheduling it also allows the MCs to save power which is especially important in networks where the clients are restricted in this respect, for example in a wireless sensor mesh network. In [24] the cross-layering is between the network and link layer while in [23] cross-layering is between the link layers (MAC) and the physical layer. According to [13], using this approach “solves the hidden terminal problem without aggravating the exposed terminal problem.”

22

2. Rate Control: Rate control allows the MRs to control the transfer rates of the MCs which are associated with them. This technique is useful for scheduling while retaining respectable throughput. The rates for a given link are raised when the link quality is higher, however, to ensure fairness it cannot go above a certain threshold where other MCs are affected. This technique is used in [23, 60]. In [60] cross-layering is between the transport and link layer. In [13] this type of cross-layering is useful for increasing network performance because of more accurate link and network conditions provided to the corresponding congestion (rate) control. In [23] it is between the network, transport and link layers. One additional interesting example is [27] where a combination of power control and rate control is used. This protocol is a link (MAC) and physical layer cross. 3. Route Adaption: Route adaption is used as a congestion avoidance technique that can also help ensure fairness in the network. This technique works by changing the routing information for certain flows based on the congestion levels within the path. If a given path becomes congested, information from lower layers (such as link layer) informs the higher layers (example network layer) that a new path should be taken. Once again there must be a threshold which is crossed and only some of the paths should be informed so that a “ping-pong effect” is avoided where the paths are constantly changing back and forth from constant congestion. This technique is used in [39, 41]. In [41] the cross-layering is between the network and link layer and in [39] it is between the transport and network layer. 4. /etwork Coding: Lastly, network coding is used to allow multiple unicast transmissions to occur simultaneously. In [28] a concept of poison-antidote is presented

23

where the poison is considered to be the coded bit which is decoded by the antidote. The cross-layering in this protocol occurs in the link (MAC) and physical layers. For a summary of all of the above mentioned protocols see Table 3. TABLE 3. CROSS-LAYERED DESIGN FOR SCHEDULING TECHNIQUES

Reference

Technique

Layers

J. Tang et. al [22]

Power Control

Link (MAC), Physical

J. Thomas [24]

Power Control

Network, Link (MAC)

X. Wang et. al [60]

Rate Control

Transport, Link

J. Tang et. al [23]

Rate Control

Transport, Network, Link

K. Karakayali et. al [27] Power / Rate

Link (MAC), Physical

M.J. Neely et. al [39]

Route Control

Transport, Network

M.S. Kuran et. al [41]

Route Control

Network, Link

K. Li et. al [28]

Network Coding Link (MAC), Physical

2.7.3. Cross-layer extension of existing scheduling solutions There are many solutions already proposed for scheduling in wireless mesh networks that do not already make use of cross-layered optimizations. Many of these existing solutions may be altered and extended with cross-layering to provide increased performance or to make more intelligent decisions in some aspects of the protocol. One particular example is [42] where a fair scheduling protocol for wireless mesh networks is proposed. Several assumptions are made including: single gateway, uplink-downlink equivalence, static topology of MRs and non-mobile MRs. These assumptions could be relaxed and the cross-layered design approach could be applied resulting in an extended version of the protocol which improves much over the original. For more examples of protocols which could be extended with cross-layering, it may be useful to consult [16] or

24

other general surveys on WMNs. Many existing protocols are listed as references which may benefit from cross-layered design. The limitations of cross-layering techniques are similar to those of normal scheduling techniques in wireless mesh networks. The solutions that currently exist make many assumptions including: single gateways (or no gateways) [12,30], limited or no mobility of mesh routers [11, 12, 22,23], non-overlapping cellular coverage areas [23], static topologies [23,11] and uplink-downlink equivalence. All of these assumptions leave much future work to be done in the area. If these assumptions are relaxed more general and flexible solutions could be designed. On the other hand as noted in [60], the complexity of cross-layered design may be the reason why so few solutions have been extended with cross-layering. This limitation can, however, be solved if the optimality requirement of the scheduling is relaxed. When this is the case, it is often claimed that a whole class of relatively simple and efficient scheduling can be implemented in a distributed fashion. 2.8. Motivation for cross-layer mixed-bias framework There is much motivation for using a cross-layered mixed-bias framework for scheduling in wireless mesh networks. The merits of using a cross-layered approach have been outlined previously in section 2.5 so this section will focus mostly on the mixedbias approach. In the original framework by [54], promising results are demonstrated when compared with other biasing schemes such as proportional fairness and max-min. In the mixed-bias technique, the amount of hops away from a gateway was biased against. Rather than naively increasing the biasing factor, the original work proposed a mixed-biasing technique which strongly biases against the hops for a certain portion of

25

the resources while maintaining a weaker bias for the rest. For example, half the resources may use the strong biasing scheme while the rest may fall back on proportional fairness. We propose to take this further and bias against several characteristics at one time in a combined mixed-biased approach. This way, the network resources will be further divided into different mixed-bias schemes. This more general framework will allow for further customization of the network and allow for biasing against many characteristics at once. This will allow greater performance by taking into consideration more factors which negatively affect network performance such as poor link quality and greedy nodes (nodes with high sustained demand). If nodes and flows which have desirable characteristics as determined by the biasing function, a reduction in the overall number of retransmits should be achieved. A reduction in retransmission will greatly increase performance since retransmission and the exponential back-off function can significantly increase the time it takes a packet to reach its destination [1].


Chapter 3 Proposed approaches

In this thesis, we present two approaches for scheduling in WMNs. The first is the fair scheduling approach. It makes use of a technique similar to that of [42] while relaxing several assumptions made in the original work. The key contributions in this approach are multiple gateways and a distributed clique propagation algorithm. The emphasis of this approach is fairness, leaving overall throughput as a secondary goal. This is especially important when trying to provide equal service to all users within the WMN. However, this approach is not always the best for maximizing throughput and can waste some of the bandwidth in the network. As an alternative we also present the mixed-bias scheduling approach. This approach is a unique interpretation of the mixed-bias technique [54]: instead of biasing against just one characteristic of the network like the original work, we propose to bias against several characteristics so that a balance between throughput and fairness is provided to all users.

3.1. System assumptions

While one of the goals of a completely general and flexible wireless mesh network is to make as few assumptions as possible, this work makes several key assumptions. These assumptions were made mainly to keep the research scope reasonable and the complexity of the system manageable. The assumptions we make in this thesis are common among similar works; however, our work is distinct from some of them in that we relax several assumptions. One of the main assumptions we make in this work is that there is no mobility of any nodes in the network. Mobility introduces new complexity in that a mesh client may no longer be associated with the network via a single connection point at a single MR. This property requires some mechanism to keep track of the point of attachment in the network for each MC and introduces many new problems due to the increased overhead in handling this situation. There is also the problem of whether the network or the client should handle the hand-offs, with each choice affecting performance differently. In the case where MRs are mobile, even more problems are introduced. If the MRs are sufficiently mobile, the network may break apart into partitions and certain regions may be unable to communicate with the rest of the network.

3.2. Overview of fair scheduling approach

The fair scheduling approach we propose relaxes the assumption from [42] that the network has only one gateway. We also assume that the requirement and routing tables are distributed across the mesh routers. In our fair scheduling approach it is assumed that MRs and GWs are not mobile; their positions are fixed throughout the simulation. This assumption is quite common in many of the existing solutions. However, the goal is that eventually this assumption could be relaxed, resulting in the concept of a Mobile Mesh Network where the MCs and even the MRs are not fixed and the topology of the network is extremely dynamic. There are many benefits and applications of this type of network: it could be used in transit systems, military applications or disaster relief. Rather than having to deal with multiple handoffs of many moving clients, the moving clients could be associated with a moving MR. This would allow the network to focus on dealing with only one handoff while all of the MCs associated with the MR retain their attachment to the network.

The scheduling we propose is a spatial time division multiple access (STDMA) scheduling algorithm. This is a good approach because it eliminates interference by only allowing links which do not interfere to transmit simultaneously. In wireless mesh networks in particular, interference has been identified as one factor which severely degrades performance and scalability [26,32]. In contrast to most existing solutions, we assume that the network may contain multiple gateways, as shown in Figure 3. This is an important assumption because limiting the network to one gateway causes an extreme bottleneck. Even if the traffic within the network is balanced, having only one gateway can decrease the performance of the network because it remains the only location where traffic can enter or exit the network. Once the assumption of a single gateway is relaxed, we are free to apply gateway load balancing across the multiple gateways similar to [53]. The single gateway property also implies that the network has a single point of failure.

Figure 3 Wireless Mesh Network (WMN) with multiple gateways [19]

There are two solutions to the single gateway problem. One is to assume that the gateway always has enough capacity to serve the demand of the network, regardless of its size. However, this does not solve the single point of failure problem. The other option, which we have chosen in this thesis, is to allow multiple gateways so that the traffic load is spread around more evenly. Multiple gateways have also been used in other WMNs such as MIT Roofnet [17,50]; however, the emphasis there was routing and thus scheduling was not studied. Another common assumption is uplink-downlink equivalence. It has been shown experimentally that uplink and downlink should be treated with separate techniques [31,40,46]. In this thesis we concentrate on uplink scheduling. Downlink scheduling has not been considered within the scope of this thesis since it is a large problem on its own which could incorporate further enhancements such as multicast trees.

3.3. Overview of cross-layer mixed-bias scheduling

Our cross-layered mixed-bias approach takes ideas from many existing cross-layer works. Many scheduling solutions focus on biasing against only one characteristic of the network which is detrimental to scheduling or throughput. This is often true regardless of whether the scheduling scheme is hard fairness, max-min or proportional. For example, [22,24,30] focus on preventing links from interfering in order to improve the scheduling by adjusting the transmission power of the nodes, while [54] gives preference to nodes which are close to each other when scheduling. However, in order for a scheduling algorithm to be effective and flexible in many situations, it may be beneficial to try to capture many characteristics in one scheduling algorithm. Cross-layering allows the algorithm to gain information from many layers which normally cannot contribute to scheduling; the lack of such information may be one reason why scheduling algorithms tend to focus on only one characteristic.


In our approach, we propose a cross-layer, mixed-bias scheduling algorithm for wireless mesh networks. The cross-layering will provide information on link quality and distance between nodes. Link quality will be provided by the physical layer, while distance can be provided in many ways: it could be computed from the number of hops between two points, by measuring the delay, or by using real-life coordinates if the nodes are equipped with Global Positioning Systems (GPS). A portion of the scheduling resources will be biased according to a set of heuristics that penalize nodes for various "bad behaviours" such as distance from the gateway, overuse of traffic, poor link quality and so on. Each heuristic will be assigned a different proportion of the network resources, determined experimentally. Another portion of the resources will be left for absolute fairness in order to ensure that none of the links are starved and that some minimum level of service is maintained. The collective system will then be optimized to produce what we expect to be a high-throughput, fair schedule for wireless mesh networks.

In [54], it is argued that a fraction of the total resource allocation should be strongly biased against long connections while the rest remains allocated in a fair (max-min) or weakly biased (proportionally fair) manner. In their proposal they make use of only two different scheduling schemes in their mixed-biasing. The approach taken in this thesis, however, is that using more than two scheduling approaches could provide even greater utilization of network resources. For example, in the original mixed-bias approach, only the long connections (in number of hops) are biased against. It was demonstrated that even while biasing against the long connections, the decrease in performance for those connections was similar to proportionally fair solutions. At the same time, because of the biasing, the short connections benefited greatly. It is also possible, however, to bias against greedy connections, poor quality connections or any other metric. Then, as a whole, this entire system of biases could be optimized together to maximize both the fairness of the scheduling and the throughput of the network.

In our approach, we make use of mixed-biasing to enhance our previous work, an STDMA scheduling for wireless mesh networks [19]. The same approach can also be implemented in a network using an 802.11 distributed co-ordination function by giving preference in the MAC layer queues to the flows which have the least bias against them. The mixed-biasing is expected to improve the throughput of the network while maintaining some degree of fairness, since some of the network resources are allocated in a fair manner. As opposed to just biasing against long connections, our approach will bias against other factors using a heuristic approach. We represent the capacity of the network, C, as

C = ∑_{i=1}^{n} γi fi = γ1 f1 + γ2 f2 + … + γn fn = 1    (Eq. 4)

∑_{i=1}^{n} γi = γ1 + γ2 + … + γn = 1

Where:

C is the total capacity of the network
γ1, γ2, …, γi, …, γn are the weights of the biasing schemes
f1, f2, …, fi, …, fn are the biasing functions for the corresponding characteristics

In this general case, any number of biasing function schemes may be applied, which allows fine-grained tuning of the network against specific biases. If starvation is to be avoided and some level of fairness ensured, at least one function must not bias against any node or flow but provide "hard fairness", and the weight associated with this function must not be zero. The weighting could be either static or dynamic. In a dynamic implementation, each weight may be allowed to shift within a pre-determined range depending on the utilization of the network or some other parameter; for example, γ1 might shift between 0.25 and 0.5 as utilization changes.
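
As an illustration, the following is a minimal C++ sketch of this dynamic weighting idea, written for exposition rather than taken from any implementation described in this thesis. The BiasWeight structure, the adjustWeights() routine and the linear interpolation against a utilization value are illustrative assumptions; the only property actually required by Eq. 4 is that the weights remain positive and sum to one.

#include <vector>
#include <cstdio>

// Sketch of dynamic weighting for Eq. 4: each biasing scheme i has a weight
// gamma_i constrained to [min, max]; the weights are shifted with network
// utilization and renormalized so they sum to 1. Names are illustrative.
struct BiasWeight {
    double value;  // current gamma_i
    double min;    // lower bound of the allowed range
    double max;    // upper bound of the allowed range
};

void adjustWeights(std::vector<BiasWeight>& w, double utilization) {
    // utilization in [0,1]: interpolate each weight inside its own range.
    double sum = 0.0;
    for (auto& g : w) {
        g.value = g.min + utilization * (g.max - g.min);
        sum += g.value;
    }
    // Renormalize so that gamma_1 + ... + gamma_n = 1 (the constraint in Eq. 4).
    for (auto& g : w) g.value /= sum;
}

int main() {
    // Example: the hard-fairness share may move between 0.25 and 0.5,
    // while two biased schemes take the remainder.
    std::vector<BiasWeight> gammas = { {0.0, 0.25, 0.5}, {0.0, 0.2, 0.3}, {0.0, 0.2, 0.3} };
    adjustWeights(gammas, 0.7);  // 70% network utilization
    for (const auto& g : gammas) std::printf("gamma = %.3f\n", g.value);
    return 0;
}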


Chapter 4 Detailed descriptions of the proposed approaches

In this chapter, we present the more technical details of the two approaches in general, as well as the individual algorithms in particular. In the first section, we give a detailed description of the first main contribution of this thesis: a fair scheduling scheme for WMNs with multiple gateways. Each subsection then outlines in detail the individual algorithms and components which make up this scheme: the distributed requirements table, the requirements propagation algorithm, the clique generation algorithm and the schedule generation algorithm. In the second section of this chapter, we discuss the mixed-bias approach in general. This is then broken down into each of the mixed-bias techniques that are combined in the second main contribution of this thesis: the combined mixed-bias scheduling approach. In this approach all three of the mixed-bias techniques are combined to form one schedule for the network which biases against several characteristics that are detrimental to network performance and fairness.

4.1. Fair scheduling for WMNs with multiple gateways

In this scheme, the approach is made up of several different components, each of which is outlined in greater detail in the following subsections. This section provides a general outline of the fair scheduling approach with multiple gateways, highlighting the main contributions we have made.

We have proposed an enhancement of the original fair scheduling approach of [42] which we call the distributed requirements table. The original work proposed only a scheduling algorithm; it does not provide a mechanism for maintaining and collecting requirements. The requirements are needed for generating the schedule since this information indicates how busy each link is. We therefore propose a distributed manner of accomplishing this. Each mesh router keeps track of a local requirement table in which the demand on each link between the router and a neighbour is recorded. When a new schedule is requested, each gateway asks for the partial requirement tables from each mesh router associated with it. The gateway then combines these tables to form one complete requirement table which it uses to generate cliques and eventually the schedule.

One main difference from the approach in [42] is that we assume multiple gateways. This means that each gateway in the network is responsible for scheduling all of the links which will forward packets towards it. The single gateway assumption is a significant one for two reasons: (i) the single gateway causes an extreme bottleneck in the network, since all traffic which flows in and out of the network must use this node, so any scheduling work done in the network is limited by the single gateway; and (ii) the single gateway node is a single point of failure in the network, and if it goes down in this scheme there is no recovery. When multiple gateways are assumed, the bottleneck is eliminated: traffic is no longer destined to a single node and is spread more evenly, especially with strategic gateway placement. With a more complex scheme than the one we propose, one could further take advantage of the multiple gateways and perform load balancing across them so that under-utilized gateways are exploited for further performance improvements. Lastly, the single point of failure is eliminated as well: if one gateway experiences an outage, the network has the ability to reconfigure itself to forward packets and perform scheduling from another gateway.

Once the requirement table is formed, the gateway uses this information along with the clique information to form a scheduling plan. The clique information is the set of all groups of links which may transmit at the same time without interfering with one another. It is generated once, before any transmissions occur in the network, in a manner similar to the way neighbours are discovered in [25]. In our system model we assume static nodes and topology, so no nodes are added or removed and there is no mobility. Thus we do not need to generate this information more than once in the life of the simulation. This is important because the operation is computationally very expensive; clique enumeration is known to be a difficult problem. If we were to assume a non-static topology, we might have to assume a limit on the network size based on the computational resources of the gateway nodes. Using both the clique information and the requirement information, we can then determine which links should be activated together and for how long. A further modification of this scheme would be to use characteristics other than the demand on a link; for example, the quality of the link and the distance from the gateway could also be taken into account using a biasing scheme as we have proposed.

4.1.1. Requirement tables

The type of fairness used in this solution is round-robin style with spatial re-use. We use centralized schedule generation at the gateways which makes use of distributed routing tables located at the mesh routers. We propose the requirement propagation algorithm, which allows each gateway to distribute the requirements and routing table for the scheduling into the network. Each mesh router maintains the path to its gateway, and in this table the requirements for the links on this path are also maintained. For each client requesting to use this mesh router, each link along the way to the gateway in the local table is given a requirement. When the gateway signals the start time for new schedule generation, it requests the local requirement information from all of the mesh routers which are currently using it as their primary gateway. It then combines the requirements to help determine the schedule, as shown in Figure 4.

In Figure 4, each mesh router has a local requirement table which keeps track of the requirement for itself and for all the nodes on the path towards the gateway. A requirement is added when a MC sends data to a MR. At that particular MR, the requirement is incremented for itself and for all hops to the gateway in its local table, since all of these nodes will have to relay the packet. A single gateway is responsible for generating the schedule for all of the nodes which route through it. When a new schedule must be generated, the gateways request the requirements from each table. Each gateway then combines the requirement information from each mesh router with the compatibility matrix. The compatibility matrix represents the links which may transmit simultaneously without interference and is computed, or set up manually, once when the network is deployed. The gateway then computes the schedule. After the schedule is computed, START packets are sent to the MRs when they are free to transmit and END packets are sent to the same MRs when their transmission period has ended. This continues until the end of the current schedule and the process repeats.

In our solution, each gateway is responsible for generating a schedule for the mesh routers making use of it to relay packets to the Internet. The schedule generation algorithm from [7,42] requires that a compatibility matrix be generated for the network before the algorithm operates. The compatibility matrix is a way of representing which links may be activated simultaneously without interference or collisions at MRs. One main difference in our approach is the use of distributed requirement tables located at each of the MRs.

Figure 4 Distributed Requirement Tables Combined at GW [19]

4.1.2. Requirement propagation

The requirement propagation algorithm, given in Algorithm 1, allows the gateway to keep track of the requirements across all of the links. At the MR, a table containing a partial representation of the network is kept for all of the MRs on the way to the gateway. When a MC associates with a given MR, the requirement is incremented for all the MRs along the way to the gateway in the local table. When a new schedule is to be generated, the GW requests the requirements from all of the MRs and combines the results from the partial tables to determine which links must be activated and for how long.


Requirement Propagation Algorithm Pseudocode

1. Associate MC with MR
2. Generate a Client Requirement at MR for the MC
3. For each link between MR and GW: Requirement(current-link)++
4. For each Hop: Requirement(current-link)--
5. On Drop: For each link between MR and GW: Requirement(current-link)--

Algorithm 1 Requirement Propagation [19]
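
To make the operation of Algorithm 1 concrete, the following is a minimal C++ sketch of the local requirement table kept at a mesh router. It is an illustrative reading of the algorithm rather than the thesis simulator code; the class and method names (RequirementTable, onClientData, onHop, onDrop) and the use of string link identifiers are assumptions introduced here.

#include <map>
#include <vector>
#include <string>
#include <cstdio>

// Local requirement table at a mesh router, following Algorithm 1. Links
// are identified by simple string ids; the table maps each link on the
// MR's path to the GW to a pending requirement count.
class RequirementTable {
public:
    // A mesh client sent data: every link on the path to the GW must relay it.
    void onClientData(const std::vector<std::string>& pathToGw) {
        for (const auto& link : pathToGw) ++req_[link];
    }
    // The packet was relayed over one hop: that link's requirement decreases.
    void onHop(const std::string& link) {
        if (req_[link] > 0) --req_[link];
    }
    // The packet was dropped: remove the requirement from the links it
    // would still have traversed.
    void onDrop(const std::vector<std::string>& remainingPath) {
        for (const auto& link : remainingPath) if (req_[link] > 0) --req_[link];
    }
    // The GW polls this partial table when it builds a new schedule.
    const std::map<std::string, int>& snapshot() const { return req_; }
private:
    std::map<std::string, int> req_;
};

int main() {
    RequirementTable table;
    table.onClientData({"MR1-MR2", "MR2-GW"});  // uplink path with two hops
    table.onHop("MR1-MR2");
    for (const auto& e : table.snapshot())
        std::printf("%s : %d\n", e.first.c_str(), e.second);
    return 0;
}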

In this scheme, each gateway is responsible for generating the centralized schedule for all of the links routing to it. The distribution and coordination of the schedule is done through the use of START and END packets. The gateway sends a START packet to the MR when it has scheduled time to send and an END packet when it no longer has permission. It is assumed that these control packets are sent on a different channel from the data and thus do not interfere with data traffic. At the end of one cycle of scheduling, the process is repeated with a new scheduling plan being computed and distributed throughout the network.

The round-robin nature of the scheduling keeps the solution simple compared to techniques that include weighting functions. At the same time, when compared with a naive round-robin technique, less time is wasted waiting for links which have no traffic to send, since time is only allocated to links with requirements. Since we are concerned with fairness among clients who are paying similarly for equal service, this solution works well. Many existing solutions make use of similar round-robin style techniques [9,42,46,59], but none of them use multiple gateways. Using a single GW to serve a large mesh network is impractical, however, since it quickly becomes a bottleneck as the network size grows. In [42] the solution was distributed in the sense that the schedule had to be spread around the network to all the MRs from the centralized GW; however, the algorithm presented provided no means for this distribution to be accomplished. We provide a method for this in our solution.

4.1.3. Clique generation

In order to determine which groups of links should be scheduled together, the concept of gain introduced in [42] is used to select the groups of links which have the greatest load. Gain is defined as the sum of the requirements of all the links in a clique minus the greatest single requirement. The scheduling algorithm uses the path and requirement information to give permission to certain MRs to transmit in the required timeslots. When the fair scheduling algorithm is enabled, a MR may only send packets when it has permission to do so. If it does not have permission, it retries until a waiting threshold has been crossed, at which point the packet is dropped. When a collision occurs because a buffer is full, the packet is also dropped. The performance of the network could be improved further if a retry or backup mechanism were implemented or if load balancing were applied at the GWs.
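
The gain calculation described above can be sketched as follows. This is an illustrative C++ fragment rather than the simulator implementation; representing a clique simply as the list of its links' requirement values is an assumption made here for brevity.

#include <vector>
#include <algorithm>
#include <cstdio>

// Gain of a clique, following the definition above: the sum of its links'
// requirements minus the largest single requirement.
int cliqueGain(const std::vector<int>& linkRequirements) {
    if (linkRequirements.empty()) return 0;
    int sum = 0, maxReq = 0;
    for (int r : linkRequirements) {
        sum += r;
        maxReq = std::max(maxReq, r);
    }
    return sum - maxReq;
}

// Pick the index of the clique with the greatest gain, i.e. the group of
// compatible links that serves the most load in parallel.
std::size_t selectClique(const std::vector<std::vector<int>>& cliques) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < cliques.size(); ++i)
        if (cliqueGain(cliques[i]) > cliqueGain(cliques[best])) best = i;
    return best;
}

int main() {
    std::vector<std::vector<int>> cliques = { {4, 1, 2}, {3, 3, 3}, {7} };
    std::printf("selected clique: %lu\n",
                static_cast<unsigned long>(selectClique(cliques)));
    return 0;
}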

4.1.4. Schedule generation

The schedule is generated for all of the mesh routers in the network using the concept of a compatibility matrix similar to that used in [42,55]. The compatibility matrix is used to determine which links can be enabled at the same time without causing interference. In our network model, this means that the two MRs do not have a common neighbour and are not neighbours with each other. Due to the positioning of the MRs and the communication ranges, if two MRs are not neighbours and do not share a common neighbour, they are not close enough to cause interference with each other and they do not compete for the resources of a common neighbour, so both may communicate at the same time. The spatial TDMA scheduling allows multiple links to be activated at the same time when they do not interfere, so the network can be used far more efficiently than it could if only one link in the entire network were active [13]. Furthermore, since the algorithm uses the concept of compatibility, no two active links compete for resources, so collisions are avoided. The solution presented here is different from many other TDMA solutions because it only allocates time to links which actually have requirements associated with them.
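
A minimal sketch of the compatibility test is given below, assuming the topology is available as per-node neighbour sets. The function and type names are illustrative, and the rule implemented is exactly the one stated above: two MRs are compatible if they are not neighbours and share no common neighbour.

#include <set>
#include <vector>
#include <cstdio>

// Compatibility test behind the compatibility matrix: two transmitting MRs
// may be active in the same slot if they are not neighbours and do not
// share a common neighbour.
using Adjacency = std::vector<std::set<int>>;  // neighbour set per node id

bool compatible(const Adjacency& adj, int mrA, int mrB) {
    if (mrA == mrB) return false;
    if (adj[mrA].count(mrB)) return false;          // direct neighbours
    for (int n : adj[mrA])
        if (adj[mrB].count(n)) return false;        // common neighbour
    return true;
}

// Build the full compatibility matrix once, before scheduling starts.
std::vector<std::vector<bool>> buildMatrix(const Adjacency& adj) {
    std::size_t n = adj.size();
    std::vector<std::vector<bool>> m(n, std::vector<bool>(n, false));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            m[i][j] = compatible(adj, static_cast<int>(i), static_cast<int>(j));
    return m;
}

int main() {
    // Chain topology 0-1-2-3-4: MRs 0 and 3 are compatible, 0 and 2 are not.
    Adjacency adj = { {1}, {0, 2}, {1, 3}, {2, 4}, {3} };
    auto m = buildMatrix(adj);
    std::printf("0 vs 2: %d, 0 vs 3: %d\n", (int)m[0][2], (int)m[0][3]);
    return 0;
}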

4.2. Cross-layer mixed-bias techniques

There are four main techniques presented in the following sections. The first technique is similar to that of [54]: it uses mixed-biasing to bias against the distance from the gateways. This technique is important because the farther away from the GW a MR is, the more hops a packet must traverse to arrive at the GW. This means the probability of successful delivery decreases, and the average delay increases, as MRs are farther away from the GW. Thus, if we allow fewer packets from the MRs farthest from the GW, the network will achieve higher throughput overall. At the same time, as packets successfully move closer to the GW, there is a greater chance they will arrive since the closer MRs are given preference.

The second technique favours MRs which have full queues, and thus biases against those with empty or near-empty queues. This is important because if we can give some preference to these routers, perhaps fewer packets will be dropped by reducing the frequency with which the queue is full. By giving preference to full queues, we let the near-empty queues build up and, at the same time, allow the extremely full queues to empty. This results in a balancing of all the queues in the network. If we can reduce the number of dropped packets, the delay will likely also fall significantly since the overhead in resending a packet is often quite large. This is especially true in multi-hop networks such as WMNs since the retransmission control packets must also traverse the multiple hops.

The third technique biases against poor links. This is important because link quality may change often depending on objects blocking signals or environmental conditions (such as weather, temperature, etc.) that may make certain links in the network perform better than others. When we bias against these links, we give preference to those links which are performing well and allow them to transmit more, thus increasing the overall packet delivery ratio.

The last technique is the most ambitious: the combined mixed-bias approach. In this approach several mixed-bias techniques are combined to form one all-encompassing scheduling algorithm which allows preferred access only to those MRs which exhibit all of the qualities of a preferred MR. The criteria determining which MRs are preferred are completely up to the network administrator and could even be modified dynamically; under certain conditions or applications, more or fewer characteristics could be factored into the combined mixed-bias formula. It is expected that the more characteristics we use at one time, the more complete a picture of the network we obtain, and thus the more informed the decision on how to schedule access to the MAC layer.


It should be noted that for all of these techniques, there are two different approaches that could be applied when deciding on transmission scheduling: (i) a centralized approach, which could be taken if the gateway is controlling the scheduling (such as in the STDMA scheduling from the fair scheduling with multiple gateways approach); and (ii) a distributed technique, where each MR itself decides whether to send the packet at a given time or to hold off and give the opportunity to send to another MR. The distributed technique relies on the assumption that all MRs in the network will cooperate, which is not always the case. Despite this weakness, the lack of a central authority removes the single point of failure problem. We present experimental results from both schemes since there are merits to both approaches; it is up to the network designer or administrator to decide which technique is best for the particular application or network.

4.2.1. Biasing against distance from gateways

In our approach for biasing against distance, we follow the same approach as [54], which forms the basis for much of this work. In the original approach, there is no concept of gateways but rather an ad hoc network with a random source-destination traffic model. It is claimed that mixed-bias provides increased performance compared to both max-min and proportional fairness if a stronger bias is applied to nodes which are far away from each other while, at the same time, a weaker bias ensures there is still limited service to the non-preferred nodes. This is different from other scheduling algorithms because in other solutions some nodes are starved while others receive all of the resources. Avoiding starvation is an important problem to consider in WMN design, especially since factors such as interference and fading already contribute to low reliability in WMNs [51]. If the problem of starvation can be reduced, the reliability of the network will increase significantly.

In the mixed-bias approach there are two different biases, one with a factor of 5 and one with a factor of 2 (which is similar to proportional fairness). There is also a weighting factor of 0.5 between the two competing schemes, which splits the amount of resources given to each scheme evenly. We make use of the same parameter values in our approaches to allow for easy comparison. In our system model we apply a similar technique to a WMN which has random sources and one gateway as the destination. This is different from the original approach because we introduce the concept of a gateway, making the network model closer to that of a WMN and less like a wireless ad hoc network. We chose to use the same biasing parameters for our system; however, these could all be altered, resulting in vastly different results. Experimenting with different parameters could be left up to the network administrator since the choice depends on the environment the network resides in and the applications being run on it. We include an analysis of the technique presented by [54] as a verification of their results and as a benchmark for our own schemes. Eq. 5 shows the mixed-bias equation which is used to assign resources to a given node based on its distance from the gateway it is routing to [54].

R = α / d^β1 + (1 − α) / d^β2    (Eq. 5)

Where:

R is the resources allocated to the node
d is the distance from the node to the gateway, d > 0
α is the weight for each bias technique, 0 < α < 1
β1, β2 are the biasing constants that determine the strength of each bias, β1, β2 > 0


For this technique there are several ways in which the distance between the nodes could be computed. If each node had GPS capabilities, the distance could be computed whenever necessary by an additional query within an RTS packet. Alternatively, TTL counts or hop counts from higher layers could be used to determine the distance in hops between nodes, and a table of known distances could be built up such that each node would know roughly how many hops it is away from the GW.
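
As an illustration, Eq. 5 can be computed directly once a hop-count estimate is available. The sketch below uses the parameter values quoted above (α = 0.5 with biasing factors 5 and 2); the function name and defaults are illustrative, not part of an existing implementation.

#include <cmath>
#include <cstdio>

// Distance-based mixed-bias allocation of Eq. 5:
// R = alpha / d^beta1 + (1 - alpha) / d^beta2, with d the hop count (d > 0).
double distanceBias(double hops, double alpha = 0.5,
                    double beta1 = 5.0, double beta2 = 2.0) {
    // Larger distances receive a smaller share of the resources.
    return alpha / std::pow(hops, beta1) + (1.0 - alpha) / std::pow(hops, beta2);
}

int main() {
    // One hop from the GW gets the full unit share; far MRs get much less.
    for (int d = 1; d <= 5; ++d)
        std::printf("d = %d  R = %.4f\n", d, distanceBias(d));
    return 0;
}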

4.2.2. Biasing against low demand (queue size)

Similarly to the previous technique, we choose to bias against those MRs which have little data queued to be transmitted. By doing this, the MRs which do have full or nearly full queues have the opportunity to clear space for incoming traffic rather than dropping packets. At the same time, those MRs which have lots of room in their queues can hold off until there is more data to send all at once. Of course this technique will not work as well when the entire network is completely congested; it depends on the assumption that at least some of the network has some free resources to make use of. However, this assumption is not unreasonable since one usually also assumes that a WMN can re-route packets through various links if one link has problems due to poor quality or over-use. For that assumption to hold, there must be some other link which can handle the extra capacity, which means that it is unlikely many of the current solutions would operate well in a WMN which is congested. Similarly to the distance biasing, we propose to split the resources available to this scheme in half, and bias proportionally (factor of 2) and strongly (factor of 5) against the queue size. The queue size that we are referring to for this technique is the MAC layer queue. The formula which represents how resources are allocated with respect to queue size is shown in Eq. 6.

1 / R = α / q^β1 + (1 − α) / q^β2    (Eq. 6)

Where:

R is the resources allocated to the node
q is the length of the queue, q > 0
α is the weight for each bias technique, 0 < α < 1
β1, β2 are the biasing constants that determine the strength of each bias, β1, β2 > 0

In this technique, we use the inverse of the normal resource allocation function because we want to give preference to those nodes which have the largest queues. This means that we waste less time giving resources to those nodes which do not have much to transfer in a given time. The mixed-bias technique, however, still gives some priority to those nodes which do have a small amount of traffic, so it will prevent starvation of these nodes.

4.2.3. Biasing against poor link quality

Again for this approach, we follow the mixed-bias framework proposed in [54]. The reason for biasing against poor link quality is again to try to reduce dropped packets and end-to-end delay. Link quality would be determined using a signal to interference plus noise ratio (SINR) similar to [7] or a packet error rate (PER) as in [33]. This value helps us determine how many resources to allocate to a node, as shown in Eq. 7. When we avoid links with poor quality by waiting for the link to improve, we allow nearby links with higher link quality to transmit more packets, which may also give the poor link the opportunity to improve in quality. If the link does not improve in quality, at least it is getting fewer opportunities to send packets, so the poor service those users experience is related directly to the quality of their own link and does not negatively affect the rest of the network. In other schemes this might not always be the case: a poor quality link may try to communicate as if it were behaving normally. Again we use the same biasing parameters in this scheme, with a proportional bias (factor of 2) and a strong bias (factor of 5) mixed together.

R = α / q^β1 + (1 − α) / q^β2    (Eq. 7)

Where:

R is the resources allocated to the node
q is the quality of the link
α is the weight for each bias technique, 0 < α < 1
β1, β2 are the biasing constants that determine the strength of each bias, β1, β2 > 0

4.2.4. Combined cross-layer mixed-bias scheduling

Lastly, our combined mixed-bias technique is novel. It takes all of the mixed-bias techniques outlined previously and combines them into an all-encompassing technique which considers multiple characteristics of the WMN at once. This provides a more complete snapshot of the network at a given moment, allowing the MR to make a more informed decision on its state. Rather than looking at just one characteristic as [54] did, the combination of characteristics provides a more general solution which is more flexible and useful in a wide variety of environments and applications. This combined mixed-bias approach is similar to the cross-layer solution from [8], except that our approach is for scheduling instead of routing; our approach also uses mixed-biasing, which is quite different from the simpler metrics used in [8]. The key to the combined mixed-bias technique is providing a fraction of the resources to each of the mixed-bias techniques. For example, half the resources could be assigned to the distance technique while a quarter could be assigned to queue length and the last quarter to link quality. The network designer or administrator could specify these quantities manually. In a more complex network, these parameters could change dynamically depending on which applications are being run on the network at a given time or on the environment around the network. The combined mixed-bias scheme is shown in Eq. 8.

R = γ1 R1 + γ2 R2 + γ3 R3    (Eq. 8)

Where:

R is the resources allocated to the node
R1 is the resources calculated using Eq. 5 and γ1 is the weight of that scheme in the combined biasing
R2 is the resources calculated using Eq. 6 and γ2 is the weight of that scheme in the combined biasing
R3 is the resources calculated using Eq. 7 and γ3 is the weight of that scheme in the combined biasing
γ1, γ2, γ3 > 0
γ1 + γ2 + γ3 = 1
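
The sketch below illustrates one way Eq. 8 could be evaluated per MR. Two details are assumptions made here rather than prescriptions of this thesis: each Ri is rescaled into (0, 1] so that the weighted sum is meaningful, and the link-quality input is treated as an error-style metric (for example PER), so that a larger value means a poorer link and a smaller share.

#include <cmath>
#include <cstdio>

// Combined mixed-bias score of Eq. 8: R = g1*R1 + g2*R2 + g3*R3, with R1
// from Eq. 5 (distance), R2 from Eq. 6 (queue length; note the inverse
// form) and R3 from Eq. 7 (link quality). Weights and biasing constants
// use the illustrative values discussed in the text.
static double mixedBias(double x, double a = 0.5, double b1 = 5.0, double b2 = 2.0) {
    return a / std::pow(x, b1) + (1.0 - a) / std::pow(x, b2);  // form of Eq. 5/7
}

double combinedBias(double hops, double queueLen, double maxQueue, double linkError,
                    double g1 = 0.5, double g2 = 0.25, double g3 = 0.25) {
    double r1 = mixedBias(hops);                                   // Eq. 5
    double r2 = (1.0 / mixedBias(queueLen)) * mixedBias(maxQueue); // Eq. 6, scaled to (0,1]
    double r3 = mixedBias(linkError);                              // Eq. 7
    return g1 * r1 + g2 * r2 + g3 * r3;                            // Eq. 8
}

int main() {
    // Two MRs competing for the next transmission opportunity.
    double nearFull = combinedBias(1.0, 8.0, 10.0, 1.0);  // close, long queue, clean link
    double farEmpty = combinedBias(4.0, 2.0, 10.0, 3.0);  // far, short queue, noisy link
    std::printf("near MR score: %.3f\nfar  MR score: %.3f\n", nearFull, farEmpty);
    return 0;
}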

Chapter 5 Performance evaluations of the proposed approaches

In this chapter, we evaluate the performance of the proposed approaches. Each approach is compared to existing solutions or to a baseline approach in order to gauge the performance appropriately. For the case of fair scheduling with multiple gateways, since our approach is most similar to that of [42], we compare our solution with multiple gateways against their solution with a single gateway. Additionally, as a baseline, we compare the two approaches to the approach without fair scheduling. In the case of the mixed-bias approach, we compare our solutions to [54] since this approach is closest to ours. In their approach, they bias against only one characteristic of the network: distance. We also provide an evaluation of this same technique so that comparisons may be drawn appropriately. The main difference between our evaluation and that in [54] is that our evaluation was done using the Network Simulator 3 (NS3) tool rather than Matlab. The NS3 simulation environment models a wireless network more closely than Matlab, which is a more general purpose simulation tool; this allows our results to provide additional verification of those found in [54]. In addition, we compare our proposed mixed-biasing techniques against queue length as well as a combined mixed-bias approach which biases against both queue length and distance from the gateway. Our approach is a more generalized one that may be applied to a wide variety of unwanted characteristics in WMNs while making use of the original mixed-bias framework.


5.1. Comparison of simulation environments

There are many options available for performing experimental analysis of protocols for WMNs. In order to select a suitable environment for evaluating the proposed methods, we present a brief comparison of three of the choices.

5.1.1. Custom C/C++ simulation environment

The C/C++ simulation environment was used for all of the experiments in the fair scheduling approach. It was also used for the multiple gateway experiments in the mixed-bias approach, because it was designed to be a multiple gateway environment from the beginning, whereas the NS3 tool used for the remaining experiments could not support multiple gateways easily.

5.1.2. Network Simulator 2

When deciding on simulation environments to use for implementing the experiments, we reviewed Network Simulator 2 (NS2) [43] extensively since, in the past, it has been the standard tool for wireless network simulation. One main advantage of using NS2 is the large number of protocols and modules which have been created over the years. For instance, it supports popular routing protocols such as ad hoc on-demand distance vector routing (AODV) [4] and destination-sequenced distance-vector routing (DSDV) [5]. However, there are several problems with NS2 that make other options more attractive. NS2 was originally designed as a wired network simulator. All of the wireless capabilities were added during later updates to the software and, because of this, much of the code for wireless is not consistent. In many cases, modules have been written by many people and in many different styles and coding standards; when trying to extend this work, it takes a great deal of time to understand how the different objects interact. Furthermore, NS2 uses two different languages: the main simulation program is written in C++, while the scenarios which define the environment, nodes, applications and other entities are written in the Tcl scripting language. There is a complex interface between these two languages which makes NS2 even less intuitive to use and extend. This was especially true in modifying the MAC layer, which is where much of our extensions are applied.

5.1.3. Network Simulator 3

NS3 [44] is the successor to NS2, the most popular network simulation tool for wireless networks. NS3 has several advantages over NS2. It uses a single language, C++, rather than the dual-language C++ and Tcl environment which NS2 used. This makes it much simpler to understand than NS2 and requires less of a learning curve. Additionally, NS3 has an automatic documentation tool which is very useful in learning the environment. On the other hand, NS3 is still fairly new and does not have nearly the amount of online support and resources that NS2 has built up over the years. For instance, the only wireless mesh routing protocol available is optimized link state routing (OLSR) [48]. However, the core development team is eager to help and answers questions through the NS3 mailing list online. NS3 has also been designed from the ground up to be a wireless and wired simulation tool, while NS2 was originally designed only for wired networks. This alone is a key advantage of NS3, since the code for wireless simulation is more intuitive and requires fewer workarounds.

5.2. Performance metrics

Since both experiments made use of the same performance metrics, we discuss these metrics together before going into detail on each experiment. In our experiments, we analyse the performance of the proposed approaches with respect to two metrics. The first metric is packet delivery ratio (PDR). Packet delivery is an important metric in determining the performance of a scheme because it gives an idea of how many packets are making it from source to destination. It is a measure of how many packets have made the complete trip, and does not include packets which have made it part of the way and have been dropped or lost. We define packet delivery ratio as shown in Eq. 9.

PDR = ( ∑_{i=1}^{m} Preceived ) / ( ∑_{j=1}^{n} Psent )    (Eq. 9)

Where:

PDR is the packet delivery ratio
Preceived is the number of packets received at the destination GW
Psent is the number of packets sent from the source MR
m is the number of GWs
n is the number of MRs

The second metric we use to evaluate the performance of a scheme is average end-to-end delay. This metric is also important in gauging performance because the PDR is not meaningful without it: it does not matter how many packets arrive successfully at the end destination if they take a very long time to get there. The average end-to-end delay is computed by adding up the delay a packet accumulates along its path from source to destination. Similarly to PDR, this statistic does not include the delays of packets which have been dropped along their path, since we are only concerned with packets which arrive successfully. We define average end-to-end delay as shown in Eq. 10:

Delay = ( ∑_{i=1}^{n} d_i ) / n    (Eq. 10)

Where:

Delay is the average end-to-end delay
d_i is the delay a successfully received packet has experienced
n is the number of successfully received packets
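
For reference, the two metrics can be accumulated with simple counters during a simulation run, as in the minimal C++ sketch below. The MetricCollector class is an illustrative name introduced here; it is not part of NS3 or of the C/C++ simulator described later.

#include <cstdio>

// Accumulates the evaluation metrics of Eq. 9 and Eq. 10: packets sent are
// counted at the MRs, packets received (and their end-to-end delays) at the GWs.
class MetricCollector {
public:
    void packetSent() { ++sent_; }
    void packetReceived(double endToEndDelay) {
        ++received_;
        delaySum_ += endToEndDelay;
    }
    // Eq. 9: ratio of packets received at the GWs to packets sent by the MRs.
    double packetDeliveryRatio() const {
        return sent_ > 0 ? static_cast<double>(received_) / sent_ : 0.0;
    }
    // Eq. 10: mean delay over successfully received packets only.
    double averageDelay() const {
        return received_ > 0 ? delaySum_ / received_ : 0.0;
    }
private:
    long sent_ = 0;
    long received_ = 0;
    double delaySum_ = 0.0;
};

int main() {
    MetricCollector m;
    m.packetSent(); m.packetSent(); m.packetSent();
    m.packetReceived(0.012);
    m.packetReceived(0.018);
    std::printf("PDR = %.2f, average delay = %.3f s\n",
                m.packetDeliveryRatio(), m.averageDelay());
    return 0;
}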

5.3. Performance evaluation of fair scheduling with multiple gateways

In order to evaluate our proposed fair scheduling with multiple gateways, we compare our approach against two other techniques. As a baseline, we compare against a network which makes no use of scheduling at all. In this case, any node is free to transmit whenever it has a packet in its queue; this approach is similar to that of the ALOHA protocol which was used in the earliest wireless networks. In order to provide a more realistic comparison we also compared our approach to that of [42] with a single gateway. Our approach uses a similar technique except that it allows for multiple gateways. The main purpose of this experiment is to show that using multiple gateways is extremely beneficial and should always be considered when designing WMN architectures.

5.3.1. Simulation environment

The performance evaluation was carried out using simulation experiments. The simulation focuses on packet transmissions from MRs to GWs. MCs are generated at the start of the simulation and are randomly distributed within the simulation environment. Each MC is associated with the closest MR and each MR routes its packets to the closest GW. Any packets that experience a collision at the association stage are not counted in the reported results; we consider this problem separate from the one we are trying to address, since in this thesis we are concerned with fair scheduling among the MRs. The control packets for distributing the schedule are assumed to be sent on another channel and thus do not impact the performance of the network. Additionally, the simulation environment acts as an omniscient observer in that it performs the scheduling and distributes it to the gateways. In a real-world implementation this would need to be performed either through a centralized GW or via some kind of distributed GW solution. The interference model assumes that two nodes interfere if they are within range and transmitting at the same time, or if there is a buffer collision. When interference occurs, retransmission is allowed until a threshold timeout is reached.

The performance of the fair scheduling was evaluated by varying two simulation parameters. Our solution is compared to three different models: (i) no fair scheduling and a single gateway, (ii) no fair scheduling and multiple gateways, and (iii) fair scheduling with a single gateway. We also separately compare the fair scheduling with a single gateway [42] to that with multiple gateways to emphasize the benefit of using multiple gateways. Only uplink traffic is considered for these results since we consider downlink scheduling a separate problem which can take advantage of caching and multicasting to yield further improvements. The results shown in the figures are averaged over 10 simulation runs.

5.3.2. Performance metrics

This simulation study uses two performance metrics. The first metric is average packet delivery ratio. It is computed as the ratio of the total number of packets delivered to the total number of packets sent. The second metric used in the simulation is the average delay. It measures the time taken by a packet to reach its destination. These metrics can help to gauge the performance of the protocol effectively. Detailed information, including how each metric is calculated, is shown in Section 5.2.

5.3.3. Simulation parameters

In order to keep the scheduling algorithm simple, it is often assumed that there is one gateway or none [42,45,46,54,55] in the WMN. However, one of the main uses for a WMN is to provide Internet access with expanded service areas compared to traditional WLANs, and hence the majority of the traffic flows between the gateways and the MCs [61] via MRs. Having only one gateway in this scenario is a major bottleneck, so the existing solutions should be extended to support any number of gateways to make a truly scalable WMN.

There are several parameters used in this simulation. The main parameter settings are summarized in Table 4. The two main parameters varied during the simulation were the number of mesh routers and the number of gateways. The retry threshold is used when a collision occurs, either from interference or buffer overflow: the packet is allowed to be retransmitted unless the retry threshold has expired. The retry threshold can be adjusted depending on the network conditions.

TABLE 4 FAIR SCHEDULING SIMULATION PARAMETERS

Parameter                  Value
Environment Dimensions     1000m x 1000m
Node Range                 250m
Number of Mesh Routers     10 to 50
Number of Mesh Clients     250
Number of Gateways         1 to 5
Mean Packet Arrival        0.01 s
Mean Hop Delay             0.01 s
Retry Threshold            0.01 s

5.3.4. Effects of number of mesh routers

Figure 5 Average Packet Delivery Ratio with Varying Mesh Routers

Figure 5 shows the average packet delivery ratio as a function of the number of mesh routers in the network. Results are plotted for the cases with a single gateway and with five gateways, both with and without fair scheduling. As the network size increases, the difference between the techniques becomes more pronounced. The cases with multiple gateways have the highest packet delivery ratio. The results show that a single gateway with no scheduling performs poorly, delivering only 30% of the packets successfully to the Internet. It is interesting to note that using multiple gateways without fair scheduling can actually perform better than fair scheduling with a single gateway, as can be seen in Figure 5. This demonstrates how important it is to consider the case of multiple gateways. The purpose of the comparison between multiple and single gateways in this figure is to highlight the importance of relaxing the assumption of a single gateway and to compare our approach to that of [42]. As expected, multiple gateways yield a higher delivery ratio for all network sizes. This is likely because, on average, there are fewer hops between any given MR and its GW. This is important because each hop increases the likelihood of encountering a MR that is busy, which could result in packet loss at worst or delay at best.

Figure 6 Average Delay with Varying Mesh Routers

In Figure 6, average delay as a function of the number of mesh routers is shown. The case of a single gateway without fair scheduling is again the worst case. The results are similar to those of Figure 5 in that multiple gateways give the best performance in terms of average delay as well. With small network sizes (under about 30 nodes) all four techniques perform similarly. However, when the network size increases, our approach (fair scheduling with multiple gateways) is the best in terms of average delay. As in the previous figure, this is likely due to the lower average number of hops any given node must take to reach a gateway. The delay is not only accumulated because of a greater number of hops in the single GW case but also because of the time spent waiting for a free buffer.

5.3.5. Effects of number of gateways

Figure 7 shows the average packet delivery ratio as a function of the number of gateways in the network. These results were compiled with 50 mesh routers because larger network sizes are affected by a lack of gateways the most. This is reinforced by the results in Figure 6, which show a large difference between the performance with 1 and 5 gateways with both fair scheduling and no scheduling. Moreover, the results show a performance improvement with fair scheduling with respect to average packet delivery ratio.

Figure 7 Average Packet Delivery Ratio with Varying Gateways

Similarly, Figure 8 shows the average delay as a function of the number of gateways with a network size of 50. In this case, the results of fair scheduling can best be seen with a single gateway. This is likely because as each additional gateway is added, both the fair scheduling and the no scheduling cases benefit significantly by easing congestion on the single bottleneck gateway.


Figure 8 Average Delay with Varying Gateways

5.3.6. Summary of fair scheduling results

In summary, the fair scheduling algorithm we have proposed performs well with respect to packet delivery ratio when compared to a solution without fair scheduling. When comparing the fair scheduling solution to the solution without fair scheduling with respect to delay, however, our approach performed worse. This leads us to believe that a trade-off between packet delivery ratio and delay exists when it comes to fair scheduling algorithms. Depending on the amount of delay, it may be worthwhile to introduce some extra delay into the transmission if it can avoid costly retransmissions which significantly increase the overall time. There are several factors which may have caused higher delay in the fair scheduling approach, the most significant being the amount of time that must be sacrificed in order to ensure there are no collisions in the medium. Lastly, compared to an approach with a single gateway, our approach with multiple gateways performed better, validating the observation that a single gateway can severely restrict the overall throughput of the network.

5.4. Performance evaluation of cross-layer mixed-bias scheduling

In this section and the following subsections we discuss the performance of the cross-layer mixed-bias scheduling techniques. These techniques are based on the work of [54]; however, we generalize the idea proposed in the original work. In the original work, only one network characteristic (distance) is biased against. In our work we propose to bias against any characteristic which may affect the throughput or delay in a negative manner. Additionally, we propose that a combined approach can then use mixed-biasing against all of these undesirable characteristics, with a defined weighting for each technique, so that only the most desirable flows are given preference to transmit at a given time. What constitutes a desirable flow can depend on the application of the network and can be adjusted based on which characteristics the network administrator chooses to bias against. In our performance evaluation we evaluate the standard 802.11 ad-hoc MAC layer against three mixed-bias techniques. The first technique is mixed-bias against distance (hops from the gateway), which is comparable to the approach given by [54]. The second technique is mixed-bias against queue length, and the third is a combined mixed-bias approach which biases against both distance and queue length. The routing algorithm used in each case is OLSR, one of the only routing algorithms implemented in NS3 that supports wireless multi-hop networks at the time of writing.


5.4.1. Simulation environment

The simulation experiments on the cross-layer mixed-bias scheduling were performed using two different simulation environments: the NS3 simulation environment and an extended version of the C/C++ simulation environment used in Section 5.3.1. There are several reasons for this choice. The C/C++ simulation environment allows us to compare the mixed-bias approach with the fair scheduling approach. It also allows us to use the mixed-bias technique in an STDMA MAC layer model. In this model, the scheduling is done via the hybrid approach: the gateway decides on the final schedule provided to the MRs based on distributed information provided to it by the MRs it is serving. In NS3, we are able to evaluate the mixed-bias approach in a more realistic 802.11 MAC layer model. This environment uses the fully distributed model of scheduling: each MR itself decides whether it should transmit, or hold off and let another MR transmit, based on the values of the characteristics it is using to evaluate its own state. Since the MAC, interference and other models in the NS3 environment are more realistic, we can focus more on how the approaches would behave in a real network. At the same time, the C/C++ environment also provides valuable insight since it allows us to evaluate the fair scheduling approaches alongside the mixed-bias approaches.

5.4.2. Performance metrics

In the mixed-bias experiments we evaluate each approach based on two performance metrics. More detailed information on these two metrics is available in Section 5.2; here we focus on how the metrics are used specifically for the mixed-bias experiments. The first metric is average packet delivery ratio. This metric allows us to determine how many packets are successfully being delivered end-to-end in the network. It is important to know this ratio because when packets are not being delivered successfully, the retransmit operation is extremely costly. This is especially true if the packet must be retransmitted across several hops, as each hop adds delay while the packet traverses the network on its way from source to gateway. Additionally, we evaluate the performance of the approaches based on the average end-to-end delay for each packet. Like the previous metric, this metric is important as it gives an idea of how long a packet takes to traverse the network. In networks where delay is important, such as those being used for voice, video games and other multimedia applications, this metric is extremely useful; the Internet is increasingly being used for these types of applications, so this metric is especially important.

5.4.3. Simulation parameters

In each scenario we compare the approaches with a varying number of mesh routers. This is an important parameter because it indicates whether a given approach is scalable, which has been identified in [16] as one of the main goals of WMN protocols since scalability is often a severe limitation. Similarly, we evaluate the approaches against the number of flows in the network. For this experiment we define a flow as the traffic between a given MR and the GW. Since we have no concept of MCs, a flow could consist of traffic from many clients connected to a given MR that is all being routed to the GW at a given time. If an MR does not have a flow originating from it at a given time, it is only used as a forwarding node. In other words, the more flows in a scenario, the greater the utilization of the network. This parameter is also important in determining how scalable a protocol is, and it gives an idea of how the network performs as it becomes congested. We keep the number of gateways constant in these simulations since the goal of this experiment is not to verify the benefits of using multiple gateways; we assume that, as in the previous multiple-gateway experiment, increased performance could be obtained with multiple gateways. A summary of the main NS3 parameters is shown in Table 5. The simulation parameters for the C/C++ simulation are similar to those in Table 4 from Section 5.3.3.

TABLE 5. MIXED-BIAS NS3 SIMULATION PARAMETERS

Parameter               | Value
------------------------|-----------------
Environment Dimensions  | 1000 m x 1000 m
Node Range              | 150 m
Number of Mesh Routers  | 10 to 30
Number of Traffic Flows | 2 to 5
Mean Packet Arrival     | 0.01 s

5.4.4. Effects of number of mesh routers

Figure 9 shows the effect of the number of MRs on the packet delivery ratio in a WMN with two traffic flows towards a single GW. In these results all of the mixed-bias techniques perform fairly similarly. One important result is that all of the mixed-bias approaches perform better than the 802.11 MAC distributed coordination function. Another is that in most cases mixed-bias based on queue size performed the best. This result was somewhat surprising and may be due to the biasing parameters selected for the experiments. Since the mixed-bias distance technique often performed the worst of the mixed-bias approaches, it likely dragged down the performance of the combined approach. This suggests that a different weighting between the two approaches may yield greater combined performance. In all of the results presented, the resources assigned to each scheme in the combined approach were split evenly. Tuning the biasing parameters is expected to improve the performance of the combined mixed-biasing scheme; however, these experiments are beyond the scope of this thesis. The main purpose of these results was to verify that the combined mixed-biasing approach performs at least as well as or better than the other mixed-biasing schemes, and in some situations this was the case.

Figure 9. Packet Delivery Ratio, Two Flows (x-axis: Number of MRs; y-axis: Packet Delivery Ratio; series: M-B Distance, Combined M-B, M-B Queue Size, 802.11).

In Figure 10, the effect of varying the number of MRs on the average end-to-end delay is evaluated. In this experiment the network had two flows and a single GW. The highest delay is the worst performing, and in this case the best performing approach was the combined mixed-bias approach. This is an important result because, while the combined approach may not always result in the highest packet delivery ratio (Figure 9), it does give a low delay. This tradeoff may be worthwhile; however, the time taken for retransmissions is also important in the case where packets are dropped. The difference in delay between some solutions is significant: for example, with 25 MRs the 802.11 approach has double the delay of the combined mixed-bias approach.

Figure 10. Average End-To-End Delay, Two Flows (x-axis: Number of MRs; y-axis: Average End-To-End Delay; series: M-B Distance, Combined M-B, M-B Queue Size, 802.11).

Figure 11 shows the effect of varying the number of MRs on the average end-to-end delay. In this experiment there are five flows and a single gateway. One interesting result concerns the mixed-bias distance approach [54]: when the network grows large (30 MRs), its delay increases dramatically compared to the other solutions. This means that for larger networks the approach in [54] alone may not be suitable, and this is where the combined mixed-bias approach becomes an attractive option. The extremely poor delay result for mixed-bias distance with 30 MRs also suggests that in this situation the combined mixed-bias approach should assign more resources to the queue-size bias rather than an even split. With the combined approach we still obtain comparable packet delivery ratios while retaining low delay.

Figure 11. Average End-To-End Delay, Five Flows (x-axis: Number of MRs; y-axis: Average End-To-End Delay; series: M-B Distance, Combined M-B, M-B Queue Size, 802.11).

5.4.5. Effects of number of gateways

In Figure 12, the effect of varying the number of gateways on the packet delivery ratio is shown. These results were obtained using the C/C++ simulation environment, so they differ slightly from those in the previous figures, which used NS3. Additionally, the results use the fair scheduling approach from Section 5.3 as a baseline rather than the 802.11 approach, since the latter was not implemented in the C/C++ environment. In these results we can see that the mixed-bias approaches again perform the best, although not by very much. The fair scheduling approach performs similarly to the mixed-bias approaches but delivers slightly fewer packets. When there are fewer gateways, the combined mixed-bias approach performs slightly worse than the other two mixed-bias approaches, which suggests a different combination of weightings may be more effective. For example, with one gateway the best performing mixed-bias approach was queue size, so in this situation it may be best to give more resources to that bias rather than an equal split.

Figure 12. Average Packet Delivery Ratio, 250 Clients, 50 MRs (x-axis: Number of GWs; y-axis: Packet Delivery Ratio; series: M-B Distance, Combined M-B, M-B Queue Size, Fair Scheduling).

In Figure 13 we examine the effect of varying the number of gateways on the average end-to-end delay. Again, this result was obtained using the C/C++ simulation. One interesting result is that the combined mixed-bias approach performs poorly compared to the others when there is only one GW. This suggests that the combined mixed-bias approach is affected by the bottleneck introduced when using a single GW, while the other approaches are not affected as much.


Figure 13. Average End-To-End Delay, 250 Clients, 50 MRs (x-axis: Number of GWs; y-axis: Average End-To-End Delay; series: M-B Distance, Combined M-B, M-B Queue Size, Fair Scheduling).

5.4.6. Effects of number of traffic flows

In Figure 14 we compare the effect of varying the number of traffic flows on the average PDR. These results were obtained with a network size of 30 MRs and a single GW. As the number of flows increases, the performance of the network drops significantly in all cases. In almost all cases the mixed-bias approaches perform as well as or better than the 802.11 approach. The exception is with four flows, where the 802.11 approach performs slightly better than the mixed-bias approaches. When the number of flows is increased to five, however, the PDR of the 802.11 approach drops significantly, which indicates that it may have reached its maximum capacity around four flows.


Figure 14. Packet Delivery Ratio, Varying Flows (x-axis: Number of Flows; y-axis: Packet Delivery Ratio; series: M-B Distance, Combined M-B, M-B Queue Size, 802.11).

5.4.7. Summary of cross-layer mixed-bias results

In summary, the mixed-bias approaches we proposed performed well with respect to the performance metrics. The mixed-bias approaches often performed as well as or better than the IEEE 802.11 MAC in terms of packet delivery ratio, and in terms of average end-to-end delay they consistently performed better. Another interesting result is that the fair scheduling approach from Section 5.3 was outperformed by the mixed-bias approaches. These results validate the mixed-bias approach as a promising direction to explore in greater depth, particularly the combined mixed-bias approach, which performed as well as the other mixed-bias approaches. With further tuning of the biasing parameters, even better performance is expected.


Chapter 6 Conclusions and future work

This chapter gives concluding statements regarding the research presented and outlines possible future extensions of this work.

6.1. Conclusions

In this thesis, two scheduling schemes were proposed for wireless mesh networks. The first was a fair scheduling approach with multiple gateways. In this scheme we proposed distributed requirement and routing tables along with a requirement propagation algorithm which allowed scheduling information to be collected at multiple gateways. The second was a collection of cross-layer mixed-bias scheduling techniques. In these approaches, mixed-biasing against various characteristics was evaluated along with a novel combined scheme which made use of multiple characteristics at once.

For both schemes, performance evaluation was carried out using simulation and each scheme was compared against existing approaches. The fair scheduling approach was compared against existing work with a single gateway, and the results showed that using multiple gateways does indeed perform better than a single gateway in terms of packet delivery ratio and end-to-end delay. When compared against a solution without fair scheduling, the fair scheduling approach also performed better in terms of packet delivery ratio; however, in terms of delay it performed worse. This was likely due to the delay introduced by waiting to transmit only when no interference will be encountered. Moreover, since the non-fair scheduling approach had a lower packet delivery ratio, more of its packets were dropped, and dropped packets do not contribute to the delay metric, lowering its measured delay overall.

The mixed-bias approaches we proposed were compared against the IEEE 802.11 MAC in order to show that the mixed-bias approach is indeed a worthwhile scheduling approach for wireless mesh networks. Furthermore, we compared the original mixed-bias distance approach against the 802.11 MAC and our own approaches. All the mixed-bias experiments were evaluated in terms of packet delivery ratio and average end-to-end delay. The results provided verification for the original work, since the mixed-bias distance approach performed better than the 802.11 MAC in almost every case. Our proposed mixed-bias queue size approach performed better than the mixed-bias distance approach in most of the comparison scenarios. Additionally, the new combined mixed-bias approach performed at least as well as the other mixed-bias approaches in almost every case, which showed that it is a promising technique that may be explored further to increase performance.

6.2. Future work

Since scheduling in multi-hop networks such as wireless mesh networks is a challenging problem, there are a number of possibilities for future research extending the work presented in this thesis. The first is studying and experimenting with different biasing parameters. This is beyond the scope of this thesis, since our aim was to show that combined mixed-biasing is a promising framework for scheduling in WMNs. However, the performance of the scheduling could be improved or degraded depending on the application of the network, the environment and the selection of the biasing parameters. Further study in this area could help to develop a more comprehensive understanding of how the values of these biasing parameters affect the network and how they can be adjusted to cope with changing environmental conditions and application scenarios.

The second area for future work is to experiment with the approaches presented in this thesis in a test-bed environment. There are many examples of this type of research in the literature [2,50]. It is often difficult to predict how a protocol or algorithm will perform on real hardware, even with the most sophisticated network simulation tools. At the same time, simulation studies are an effective way to obtain a starting point for test-bed work, as they are flexible and allow simulation parameters to be changed easily during experiments.

The third possible area for future research is designing a scalable communication architecture and protocols supporting full mobility for all entities, including mesh clients, mesh routers and gateways. This would make the wireless mesh network more flexible and allow for broader deployments and applications. At the same time, certain features, such as MRs with greater resources to exploit, would still give it advantages over mobile ad hoc networks.


REFERENCES

[1] A. Acharya, A. Misra, S. Bansal, "High-performance architectures for IP-based multihop 802.11 networks," in IEEE Wireless Communications, vol. 10, pp. 22-28.
[2] A. Zimmermann, M. Gunes, M. Wenig, U. Meis, J. Ritzerfeld, "How to study wireless mesh networks: a hybrid testbed approach," in Proc. of 21st IEEE Int. Conf. on Advanced Information Networking and Applications (AINA 2007), pp. 853-860, May 2007.
[3] B. Aoun, R. Boutaba, G. Kenward, "Analysis of capacity improvements in multi-radio mesh networks," in Proc. of 63rd IEEE Vehicular Technology Conf. (VTC 2006), vol. 2, pp. 543-547, May 2006.
[4] C.E. Perkins, E.M. Royer, "Ad-hoc on-demand distance vector routing," in Proc. of 2nd IEEE Workshop on Mobile Computing Systems and Applications (WMCSA 1999), pp. 90-100, February 1999.
[5] C.E. Perkins, P. Bhagwat, "Highly dynamic destination-sequenced distance-vector routing (DSDV) for mobile computers," in ACM SIGCOMM Computer Communication Review, vol. 24, pp. 234-244, October 1994.
[6] C. Zhu, M.J. Lee, T. Saadawi, "On the route discovery latency of wireless mesh networks," in IEEE Consumer Communications and Networking Conference (CCNC 2005), pp. 19-23, January 2005.
[7] D. Chafekar, V.S.A. Kumar, M.V. Marathe, S. Parthesarathy, A. Srinivasan, "Cross-layer latency minimization in wireless networks with SINR constraints," in Proc. of 8th Int. Symposium on Mobile Ad Hoc Networking and Computing, pp. 110-119, 2007.
[8] D. Guo, J. Li, M. Song, J. Song, "A novel cross-layer routing algorithm in wireless mesh networks," in Proc. of 2nd Int. Conf. on Pervasive Computing and Applications (ICPCA 2007), pp. 262-266, July 2007.
[9] D. Koutsonikolas, S.M. Das, Y.C. Hu, "An interference-aware fair scheduling for multicast in wireless mesh networks," in Journal of Parallel and Distributed Computing, vol. 68, pp. 372-386, March 2008.
[10] D.S.J. De Couto, D. Aguayo, J. Bicket, R. Morris, "A high-throughput path metric for multi-hop wireless routing," in Proc. of 9th Int. Conf. on Mobile Computing and Networking, pp. 134-146, 2003.
[11] F. Foukalas, V. Gazis, N. Alonistioti, "Cross-layer design proposals for wireless mobile networks: a survey and taxonomy," in IEEE Communications Surveys and Tutorials, vol. 10, pp. 70-85, 2008.
[12] G. Thamilarasu, R. Sridhar, "Exploring cross-layer techniques for security: challenges and opportunities in wireless networks," in IEEE Military Communications Conference (MILCOM 2007), pp. 1-6, October 2007.
[13] H.T. Cheng, H. Jiang, W. Zhuang, "Distributed medium access control for wireless mesh networks," Wireless Communications and Mobile Computing, vol. 6, pp. 845-864, 2006.
[14] H.-Y. Wei, S. Ganguly, R. Izmailov, Z.J. Haas, "Interference-aware IEEE 802.16 WiMax mesh networks," in Proc. of IEEE 61st Vehicular Technology Conf. (VTC 2005), vol. 5, pp. 3102-3106, June 2005.
[15] I.F. Akyildiz, X. Wang, "Cross-layer design in wireless mesh networks," in IEEE Transactions on Vehicular Technology, vol. 7, pp. 1061-1076, March 2008.
[16] I.F. Akyildiz, X. Wang, W. Wang, "Wireless mesh networks: a survey," Computer Networks, vol. 47, pp. 445-487, 2005.
[17] J. Bicket, D. Aguayo, S. Biswas, R. Morris, "Architecture and evaluation of an unplanned 802.11b mesh network," in Proc. of 11th Int. Conf. on Mobile Computing and Networking, pp. 31-42, 2005.
[18] J. Eriksson, S. Agrawal, P. Bahl, J. Padhye, "Feasibility study of mesh networks for all-wireless offices," in Proc. of 4th Int. Conf. on Mobile Systems, Applications and Services (MobiSys 2006), pp. 69-82, 2006.
[19] J.B. Ernst, M.K. Denko, "Fair scheduling for wireless mesh networks with multiple gateways," in Proc. of 23rd IEEE Int. Conf. on Advanced Information Networking and Applications (AINA 2009), May 2009.
[20] J.B. Ernst, M.K. Denko, "A review of cross-layer design for scheduling in wireless mesh networks," in Proc. of IEEE Int. Conf. on Science and Technology for Humanity (STH 2009), September 2009.
[21] J.H. Li, W. Peng, R. Levy, A. Staikos, M. Chiang, "On systemic cross-layer design for ad-hoc networks," in Military Communications Conference (MILCOM 2008), pp. 1-7, November 2008.
[22] J. Tang, G. Xue, C. Chandler, W. Zhang, "Link scheduling with power control for throughput enhancement in multihop wireless networks," in IEEE Transactions on Vehicular Technology, vol. 55, pp. 733-742, May 2006.
[23] J. Tang, G. Xue, W. Zhang, "Cross-layer design for end-to-end throughput and fairness enhancement in multi-channel wireless mesh networks," in IEEE Transactions on Wireless Communications, vol. 6, no. 10, pp. 3482-3486, October 2007.
[24] J. Thomas, "Cross-layer scheduling and routing for unstructured and quasi-structured wireless networks," in Proc. of IEEE Military Communications Conf. (MILCOM 2005), vol. 3, pp. 1602-1608, 2005.
[25] J. Zheng, M.J. Lee, "A resource efficient and scalable wireless mesh routing protocol," in Ad Hoc Networks, vol. 5, pp. 704-718, 2007.
[26] K. Jain, J. Padhye, V.N. Padmanabhan, L. Qiu, "Impact of interference on multi-hop wireless network performance," in Proc. of the 9th Int. Conf. on Mobile Computing and Networking, pp. 66-80, 2003.
[27] K. Karakayali, J.H. Kang, M. Kodialam, K. Balachandran, "Cross-layer optimization for OFDMA-based wireless mesh backhaul networks," in Proc. of WCNC 2007, 2007.
[28] K. Li, X. Wang, "Cross-layer design of wireless mesh networks with network coding," in IEEE Transactions on Mobile Computing, April 2008.
[29] K.N. Ramachandran, M.M. Buddhikot, G. Chandranmenon, S. Miller, E.M. Belding-Royer, K.C. Almeroth, "On the design and implementation of infrastructure mesh networks," in Proc. of the 1st IEEE Workshop on Wireless Mesh Networks (WiMesh 2005), September 2005.
[30] K. Wang, C.F. Chiasserini, R.R. Rao, J.G. Proakis, "A distributed joint scheduling and power control algorithm for multicasting in wireless ad hoc networks," in Proc. of IEEE Int. Conf. on Communications, pp. 725-731, 2003.
[31] L. Erwu, J. Shan, S. Gang, G. Luoning, "Fair scheduling in wireless multi-hop self-backhaul networks," in Proc. of the Advanced Int. Conf. on Telecommunications and Int. Conf. on Internet and Web Applications and Services (AICT/ICIW 2006), p. 96, February 2006.
[32] L. Huang, T.-H. Lai, "On the scalability of IEEE 802.11 ad hoc networks," in Proc. of 3rd ACM Int. Symposium on Mobile Ad Hoc Networking and Computing, pp. 173-182, 2002.
[33] L. Iannone, K. Kabassanov, S. Fdida, "Evaluating a cross-layer approach for routing in wireless mesh networks," in Telecommunication Systems Journal, vol. 31, pp. 173-193, March 2006.
[34] L. Iannone, K. Kabassanov, S. Fdida, "The real gain of cross-layer routing in wireless mesh networks," in Int. Symposium on Mobile and Ad Hoc Networking and Computing, pp. 15-22, May 2006.
[35] L. Ma, M.K. Denko, "A routing metric for load-balancing in wireless mesh networks," in Proc. of 21st Int. Conf. on Advanced Information Networking and Applications Workshops (AINAW 2007), vol. 2, pp. 409-414, May 2007.
[36] L. Popa, A. Rostamizadeh, R.M. Karp, C. Papadimitriou, I. Stoica, "Balancing traffic load in wireless networks with curveball routing," in The 8th Annual Int. Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc 2007), pp. 170-179, September 2007.
[37] M. Cao, V. Raghunathan, P.R. Kumar, "A tractable algorithm for fair and efficient uplink scheduling of multi-hop WiMAX mesh networks," in Proc. of 2nd IEEE Workshop on Wireless Mesh Networks (WiMesh 2006), September 2006.
[38] M. Conti, G. Maselli, G. Turi, S. Giordano, "Cross-layering in mobile ad hoc network design," Computer, vol. 37, pp. 48-51, 2004.
[39] M.J. Neely, R. Urgaonkar, "Cross-layer adaptive control for wireless mesh networks," in Ad Hoc Networks, vol. 5, pp. 719-743, 2007.
[40] M.K. Denko, L. Ma, "Uplink scheduling in wireless mesh networks," in Proc. of GLOBECOM Workshops, pp. 1-6, November 2008.
[41] M.S. Kuran, G. Gur, T. Tugcu, F. Alagoz, "Cross-layer routing-scheduling in IEEE 802.16 mesh networks," in Proc. of Mobilware 2008, February 2008.
[42] N.B. Salem, J.-P. Hubaux, "A fair scheduling for wireless mesh networks," in Proc. of 1st IEEE Workshop on Wireless Mesh Networks (WiMesh 2005), 2005.
[43] Network Simulator 2 [Online] http://www.isi.edu/nsnam/ns/
[44] Network Simulator 3 [Online] http://www.nsnam.org/
[45] N.H. Vaidya, P. Bahl, S. Gupta, "Distributed fair scheduling in a wireless LAN," in Proc. of 6th Int. ACM Conf. on Mobile Computing and Networking (MobiCom 2000), pp. 167-178, August 2000.
[46] N.S.P. Nandiraju, H. Gossain, D. Cavalcanti, K.R. Chowdhury, D.P. Agrawal, "Achieving fairness in wireless LANs by enhanced IEEE 802.11 DCF," in Proc. of IEEE Int. Conf. on Wireless and Mobile Computing, Networking and Communications (WiMob 2006), pp. 132-139, 2006.
[47] P. Gupta, Y. Sankarasubramaniam, A. Stolyar, "Random-access scheduling with service differentiation in wireless networks," in Proc. of the 24th Joint Conf. of IEEE Computer and Comm. Societies (INFOCOM 2005), vol. 3, pp. 1815-1825, March 2005.
[48] P. Jacquet, P. Muhlethaler, T. Clausen, A. Laouiti, A. Qayyum, L. Viennot, "Optimized link state routing protocol for ad hoc networks," in Proc. of IEEE Technology for the 21st Century, pp. 62-68, 2001.
[49] R. Draves, J. Padhye, B. Zill, "Routing in multi-radio, multi-hop wireless mesh networks," in Proc. of 10th Conf. on Mobile Computing and Networking, pp. 114-128, 2004.
[50] Roofnet [Online] http://pdos.csail.mit.edu/roofnet/doku.php
[51] S. Dominiak, N. Bayer, J. Habermann, V. Rakocevic, X. Bangnan, "Reliability analysis of IEEE 802.16 mesh networks," in Proc. of 2nd Int. Workshop on Broadband Convergence Networks, pp. 1-12, May 2007.
[52] S. Khan, A.A. Pirzada, M. Portmann, "Performance comparison of reactive routing protocols for hybrid wireless mesh networks," in Proc. of 2nd Int. Conf. on Wireless Broadband and Ultra Wideband Communications, pp. 1-6, 2007.
[53] S.M. Das, H. Pucha, Y.C. Hu, "Mitigating the gateway bottleneck via transparent cooperative caching in wireless mesh networks," in Ad Hoc Networks, vol. 5, pp. 680-703, August 2007.
[54] S. Singh, U. Madhow, E.M. Belding, "Beyond proportional fairness: a resource biasing framework for shaping throughput profiles in multihop wireless networks," in Proc. of the 27th IEEE Conference on Computer Communications (INFOCOM 2008), pp. 2396-2404, April 2008.
[55] S. Nelson, L. Kleinrock, "Spatial TDMA: a collision-free multihop channel access protocol," IEEE Transactions on Communications, vol. 33, pp. 934-944, September 1985.
[56] T.B. Sorensen, M.R. Pons, "Performance evaluation of proportional fair scheduling algorithms with measured channels," in Proc. of the 62nd IEEE Vehicular Technology Conference (VTC 2005), pp. 2580-2585, September 2005.
[57] V. Kawadia, P.R. Kumar, "A cautionary perspective on cross-layer design," in IEEE Wireless Communications, pp. 3-11, 2005.
[58] V. Srivastava, M. Motani, "Cross-layer design: a survey and the road ahead," in IEEE Communications Magazine, vol. 43, pp. 112-119, December 2005.
[59] X. Lin, N.B. Shroff, R. Srikant, "A tutorial on cross-layer optimization in wireless networks," IEEE Journal on Selected Areas in Communications, vol. 24, pp. 1452-1463, August 2006.
[60] X. Wang, K. Kar, "Cross-layer rate control for end-to-end proportional fairness in wireless networks with random access," in Proc. of MobiHoc 2005, 2005.
[61] Y. Bejerano, S.J. Han, A. Kumar, "Efficient load-balancing routing for wireless mesh networks," in The Int. Journal of Computer and Telecommunications Networking, vol. 51, pp. 2450-2466, July 2007.
[62] Y. Bejerano, S.J. Han, L. Li, "Fairness and load balancing in wireless LANs using association control," in Proc. of the 10th Int. Conf. on Mobile Computing and Networking (MobiCom 2004), pp. 315-329, 2004.