Ad Hoc Networks 3 (2005) 27–50 www.elsevier.com/locate/adhoc

A distributed laxity-based priority scheduling scheme for time-sensitive traffic in mobile ad hoc networks

I. Karthigeyan, B.S. Manoj, C. Siva Ram Murthy *

Department of Computer Science and Engineering, Indian Institute of Technology, Madras 600036, India

Received 11 April 2003; received in revised form 8 August 2003; accepted 6 September 2003. Available online 26 November 2003.

Abstract

Characteristics of mobile ad hoc networks, such as the shared broadcast channel, bandwidth and battery power limitations, highly dynamic topology, and location-dependent errors, make provisioning of quality of service (QoS) in such networks very difficult. The Medium Access Control (MAC) layer plays a very important role as far as QoS is concerned: it is responsible for selecting the next packet to be transmitted and the timing of its transmission. We propose a new MAC layer protocol that includes a laxity-based priority scheduling scheme and an associated back-off scheme for supporting time-sensitive traffic. In the proposed scheduling scheme, the next packet to be transmitted is selected based on its priority value, which takes into consideration the uniform laxity budget of the packet, the current packet delivery ratio of the flow to which the packet belongs, and the packet delivery ratio desired by the user. The back-off mechanism grants a node access to the channel based on the rank of its highest priority packet in comparison to other such packets queued at nodes in the neighborhood of the current node. We have studied the performance of our protocol, which combines a packet scheduling scheme and a channel access scheme, through simulation experiments; the results show that our protocol exhibits a significant improvement in packet delivery ratio under bounded end-to-end delay requirements, compared to the existing 802.11 DCF and the recently proposed Distributed Priority Scheduling scheme [ACM Wireless Networks Journal 8 (5) (2002) 455–466; Proceedings of ACM MOBICOM '01, July 2001, pp. 200–209].
© 2003 Elsevier B.V. All rights reserved.

Keywords: Mobile ad hoc networks; Medium access control; Distributed packet scheduling; Laxity; Real-time traffic; Time-sensitive traffic; Priority-based scheduling

1. Introduction

The paradigm of anytime, anywhere access to information has resulted in a rapid increase in the

* Corresponding author. Tel.: +91-44-22578340; fax: +91-44-22578352. E-mail addresses: [email protected] (I. Karthigeyan), [email protected] (B.S. Manoj), [email protected] (C. Siva Ram Murthy).

use of wireless technologies over the past few years. Convergence of the Internet and wireless networking has resulted in the emergence of several types of wireless networks, Mobile Ad hoc Networks (MANETs) being one of them. A MANET is an autonomous system of mobile nodes connected through wireless links. It does not have any fixed infrastructure. Nodes in a MANET keep moving randomly at varying speeds, resulting in

1570-8705/$ - see front matter. © 2003 Elsevier B.V. All rights reserved.
doi:10.1016/j.adhoc.2003.09.007


continuously changing network topologies. Each node in a MANET, apart from being a source or a destination, is also expected to route data packets for other nodes in the network that want to reach destination nodes located beyond their own direct transmission ranges. A MANET may operate either in isolation, or may be connected to the Internet through gateway routers. Unlike their wired network counterparts, resources such as battery power and bandwidth are limited in MANETs. MANETs find varied applications in real-life environments, such as communication in battlefields, communication within groups of people with laptops and other hand-held devices attending conferences, communication among rescue personnel in disaster-affected areas, law enforcement agencies, and wireless sensor networks.

Routing in MANETs is much more complex compared to that in conventional wired networks. MANETs pose unique challenges such as mobility of the nodes, an error-prone shared broadcast channel, hidden and exposed terminal problems, and constraints on resources such as bandwidth and battery power. Due to these factors, providing end-to-end packet delivery and delay guarantees in MANETs is a tough proposition. Packet scheduling in the Medium Access Control (MAC) layer is one of the approaches adopted for trying to provide delay and packet delivery guarantees. Several schemes exist for packet scheduling in wired networks, but the same factors make it inefficient and costly to use such schemes in MANETs. Since transmissions by a node in a wireless network are broadcast in nature, MAC layer algorithms have been developed to control access to the shared broadcast channel. Packet scheduling in the MAC layer is used to choose the next packet to be transmitted, such that a sincere attempt is made to satisfy the overall end-to-end delay and packet delivery guarantees given to the user. Wireless scheduling algorithms differ significantly from their wired network counterparts.
In a multi-hop wired network, when a node has data packets for transmission, it needs to worry about the packets in its own transmission queue only. But this is not the case with a wireless node. Since the channel is broadcast in nature, multiple nodes contend for the channel simultaneously, resulting

in transmission errors. Hence, a node must also be aware of the nature of traffic at nodes in its locality. Wireless packet scheduling algorithms can be broadly classified into two major categories: centralized scheduling algorithms and distributed scheduling algorithms.

In centralized scheduling algorithms, a particular node is chosen to serve as the coordinator/scheduler. This node coordinates access to the channel. A node that wants to transmit must first get permission to do so from the coordinator, and only then can it start transmitting. There exist several schemes by means of which the scheduler grants nodes access to the channel. The IEEE 802.11 Point Coordination Function (PCF) is one such centralized scheduling scheme, where a coordinating access point periodically polls each node in its coverage area in order to grant the nodes access to the shared channel.

Distributed scheduling schemes have no centralized coordinator. Each node listens to the channel and decides whether or not to transmit based on its observations. Hence, the nodes in such a network work independently. The 802.11 Distributed Coordination Function (DCF) is a distributed scheduling scheme. Here, nodes make use of control packets such as Request to Send (RTS) and Clear to Send (CTS). When a node has a packet for transmission, it transmits an RTS message first. Nodes in the vicinity of the transmitting node, on hearing this packet, update their Network Allocation Vectors (NAVs). Similar updates are done when the CTS transmitted by the receiver node is heard by the neighbor nodes. The NAV at a node reflects the state of the channel in the immediate future, thereby guiding the node in deciding when to transmit its ready packet. Thus the information carried by the RTS and CTS packets helps in reducing contention for the channel, and thereby aids in bringing down the number of collisions in the channel. Refs.
[1,2] are two schemes proposed for networks with centralized scheduling. In [1], the authors identify four key properties that a packet fair queuing scheme needs to possess in order to work well in a wireless network: delay and throughput guarantees for error-free sessions, long-term fairness for error sessions, short-term fairness


for error-free sessions, and graceful degradation of QoS for sessions that have received excess service. They go on to propose the Channel-condition Independent Fair Queuing (CIF-Q) scheme, which they claim possesses all four properties. In [2], a queuing algorithm called the Idealized Weighted Fair Queuing (IWFQ) algorithm is proposed. It is adapted from the wireless fluid fair queuing model in order to handle location-dependent channel errors. Flows are allocated slots, and the algorithm swaps bad slots of a flow resulting from channel error with good slots of another backlogged flow. It also tries to keep the leads and lags of flows bounded. The above centralized schemes have been proposed for wireless packet cellular networks with base stations. A centralized scheduling algorithm has several drawbacks:

• Overhead at the scheduler/access point is very high, as it needs to maintain the state information of all nodes within its coverage area.
• A single point of failure at the scheduler brings the entire coordination system down.
• If a node moves out of the direct transmission range of the scheduler because of mobility, the node can never transmit data packets.
• It is not scalable: when the number of nodes within the coverage area of the scheduler increases, the overhead on the scheduler increases and the overall system throughput comes down.

As MANETs do not have the luxury of centralized base stations that can coordinate access to the channel, they require a different approach for packet scheduling. Several distributed packet scheduling schemes have been proposed in [3–5]. The Distributed Fair Scheduling (DFS) algorithm proposed in [3] tries to emulate the Self-Clocked Fair Queuing (SCFQ) scheme in a distributed manner. The authors also propose several mapping schemes for the back-off function. In [4], a set of localized distributed fair queuing models has been proposed.
A packet scheduling model that tries to balance the trade-off between maximizing the channel utilization and achieving fairness is proposed in [5]. The main objective of the above schemes is to achieve maximum fairness.


But the main goal of our work is to provide improved packet delivery with bounded end-to-end delay guarantees in MANETs. Refs. [6–9] are some of the schemes proposed for achieving this goal. The Global Time (GT) algorithm proposed in [6] provides a near-FCFS packet transmission schedule. A scheme called Coordinated Earliest Deadline First (CEDF) proposed in [7] uses randomization and an earliest-deadline-first scheme for scheduling packets. The scheme proposed in [8,9] modifies the 802.11 DCF to provide end-to-end delay guarantees. In the Distributed Priority Scheduling scheme proposed in [8,9], a node piggy-backs its current packet's priority index on the RTS packets it sends, and its head-of-line packet's priority index (the head-of-line packet of a node is the packet to be transmitted next, following the current packet's transmission) on the DATA packets it sends. The receiver of the RTS and DATA packets copies the priority index from the incoming packet and piggy-backs the same on the outgoing CTS and ACK packets, respectively. The priority index of a packet expresses the priority of the packet with respect to other packets; the highest priority packet is the packet with the lowest priority index. A Scheduling Table maintained at each node holds the priority indices of packets in its queue, along with those of packets obtained by overhearing control and DATA packets. Thus, provided all packet transmissions in its neighborhood are heard properly without errors, the node gets to know the priority of its highest priority packet with respect to the other highest priority packets queued at nodes in its neighborhood. Based on this comparative rank of the packet, the back-off interval of the node is varied. Considering the advantages of distributed scheduling over centralized scheduling, we adopt a distributed scheduling approach for our scheme.
The schemes briefed above do not take into consideration individual flow characteristics such as the end-to-end delay requirement of the flow and its current packet delivery ratio. In this paper, we propose a distributed scheduling scheme in which we try to meet the packet delivery guarantees given to the user with bounded delays.


1.1. Motivation

Advances in communication technologies and the growth of many new multimedia applications make it necessary for MANETs to support various types of traffic, such as video and voice, that require firm deadline and bandwidth guarantees for any meaningful use of such applications. As described above, most of the existing packet scheduling schemes concentrate more on achieving fairness in scheduling packets belonging to different flows at a node, and not on achieving QoS guarantees such as guaranteed packet delivery and an end-to-end delay bound. Even in schemes that attempt to provide QoS guarantees, the scheduling is localized, i.e., the scheduling process at a node is in no way affected by the state of any neighbor node. Also, the existing schemes do not have provisions to reflect the current state of the flows (say, the current packet delivery ratio of a flow) in the scheduling decisions. Hence, the need arises for a scheme that overcomes these shortcomings. We propose a distributed scheduling scheme in this paper, where scheduling decisions are made not only taking into consideration the states of neighboring nodes, but also based on feedback from the destination nodes regarding packet losses. Our scheme reorders packets utilizing the laxity information of the packets and the packet delivery ratios of the flows.

The rest of the paper is organized as follows. We describe the network model assumed, the data structures used, and the packet formats in Section 2. We give a detailed description of our protocol in Section 3. Section 4 describes the performance studies we carried out on our protocol, and the comparison studies done with respect to the 802.11 DCF scheme and the Distributed Priority Scheduling scheme [8,9]. Finally, in Section 5, we conclude this paper and give directions for future research.

2. Network model, data structures, and packet formats

2.1. Network model

We consider a MANET, which is a packet-switched multi-hop wireless network, where nodes

move randomly at varying speeds, continuously. The nodes share a single broadcast wireless channel for communication. Each node can be identified by a unique node identifier (id). Packets transmitted by a node can be received by all nodes within the direct transmission range of that node. We consider all such nodes to be in the neighborhood/locality/broadcast region of the transmitting node. Each node maintains two tables, namely the Scheduling Table and the Packet Delivery Ratio Table, which are described later in this section. Our protocol assumes promiscuous mode support at the MAC layer. Hence, a node is able to hear and access packets transmitted by all nodes within its direct transmission range, even though the node might not be the intended recipient of those packets.

2.2. Data structures and packet formats

2.2.1. Scheduling table (ST)

The Scheduling Table (ST) maintained by a node contains information about the packets to be transmitted by the node and the packets overheard by the node, sorted according to their priority values. Each entry of the ST is associated with a unique packet, which may either be present in the node's transmission queue, or whose information may have been overheard by the node. The highest priority packet is the packet having the lowest priority value, occupying the top-most position in the ST. Each ST entry consists of the following fields: fromNode, deadline, remHops, actualSrc, flowId, and priority. fromNode is the id of the previous node from which the packet was received. deadline refers to the end-to-end delay target of the packet; it is set by the original source of the packet. remHops gives the number of hops remaining for the packet to reach its destination. The deadline and remHops fields are used to calculate and update the priority of the packet at periodic intervals as long as an entry for the packet remains in the node's ST. actualSrc is the id of the original source of the packet, and flowId is the id of the flow to which the packet belongs. actualSrc and flowId together uniquely identify a flow emanating from a source. The priority field gives the priority value/priority index of the packet


(mechanisms for obtaining the values of priority and remHops are described later in Section 3). Entries in the ST are arranged in increasing order of the priority field. Whenever a node receives a packet (RTS, CTS, DATA, or ACK) or overhears any packet (in the promiscuous mode), the node extracts the piggy-backed information from the packet and updates its ST. We have chosen the size of the ST as 20, but the optimum size of the ST depends on various factors such as node density and packet arrival rate. The format of the ST entry is shown in Fig. 1.

2.2.2. Packet delivery ratio table (PDT)

This table, maintained at every node, consists of the following fields: actualSrc, flowId, pktsSent, and acksRcvd. The number of entries in the PDT maintained by a node is equal to the sum of the number of flows passing through that node and the number of other flows the node is able to hear in the promiscuous mode, i.e., each flow for which the node is the source/destination, or for which the node serves as an intermediate node, or which the node is able to hear, has a unique entry in the node's PDT. As in the ST, the actualSrc and flowId fields together uniquely identify a flow. The pktsSent field of a PDT entry gives the count of packets belonging to the corresponding flow that have been sent by the node (if it is a source node or an intermediate node), the count of packets received by the node (if it is the destination for that flow), or the number of packets belonging to that flow heard transmitted (if the node is located in the broadcast region of the transmitting node), so far. Similarly, the acksRcvd field gives the number of acknowledgment (ACK) packets for that flow received so far by the node. The acksRcvd field is NULL for the destination node. A node uses the PDT entries

Fig. 2. Format of packet delivery ratio table entry: actualSrc | flowId | pktsSent | acksRcvd.

pertaining to a particular flow to calculate the current packet delivery ratio of that flow. The format of the PDT entry is shown in Fig. 2.

2.2.3. RTS frame

In addition to the regular 802.11 fields, the additional fields in the RTS frame used by our protocol are deadline, remHops, flowId, and actualSrc. Before sending the RTS, the node searches its ST to find the highest priority packet in its transmission queue. Details such as the deadline target of this packet, the remaining number of hops to be traversed, the id of the flow to which this packet belongs, and the id of the original source that transmitted this packet are inserted into the corresponding fields of the RTS packet. These four fields constitute the piggy-backed information of the RTS packet. Nodes that hear this RTS packet (including those that hear it in the promiscuous mode) retrieve the piggy-backed information and make entries in their corresponding STs. The fields newly added to the RTS frame are shown in Fig. 3.

2.2.4. CTS frame

The CTS frame has the same four piggy-backed information fields as the RTS frame (Fig. 3). Here again, the node transmitting the CTS frame piggy-backs its highest priority packet information on the CTS frame, and nodes that hear this CTS frame update their STs accordingly.

2.2.5. DATA frame

Apart from the regular 802.11 DATA frame fields, the DATA frame that we use in our scheme has the following additional fields:

Fig. 1. Format of scheduling table entry: fromNode | deadline | remHops | actualSrc | flowId | priority.

Fig. 3. New fields added to RTS and CTS frames: deadline | remHops | flowId | actualSrc.
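The two tables just described can be modelled as follows. This is a minimal sketch, not the paper's implementation: the class names are our own, the field names follow the paper, and the ST is capped at 20 entries as in our studies.

```python
from dataclasses import dataclass

ST_MAX_ENTRIES = 20  # ST size used in our studies; the optimum depends on
                     # node density and packet arrival rate

@dataclass
class STEntry:
    fromNode: int    # previous-hop node id
    deadline: float  # end-to-end delay target set by the original source
    remHops: int     # hops remaining to the destination
    actualSrc: int   # (actualSrc, flowId) uniquely identifies a flow
    flowId: int
    priority: float  # priority index; lower value = higher priority

class SchedulingTable:
    def __init__(self):
        self.entries = []

    def insert(self, entry):
        # Keep entries in increasing order of priority index; when full,
        # evict the entries with the largest (lowest-priority) indices.
        self.entries.append(entry)
        self.entries.sort(key=lambda e: e.priority)
        del self.entries[ST_MAX_ENTRIES:]

    def highest_priority(self):
        # Top-most entry: lowest priority index, i.e. highest priority.
        return self.entries[0]

@dataclass
class PDTEntry:
    actualSrc: int
    flowId: int
    pktsSent: int = 0   # packets sent (or received, at the destination)
    acksRcvd: int = 0   # destination's receive count as last heard via ACKs
```

A table overheard entry and a local-queue entry are treated identically here; eviction of the lowest-priority entry when the table is full is our assumption, as the paper does not specify the replacement policy.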


deadline, remHops, flowId, actualSrc, pbDeadline, pbRemHops, pbFlowId, and pbActualSrc. Here, deadline is the end-to-end deadline/delay target of the current packet, remHops the remaining number of hops to be traversed by the packet, flowId the flow id of the current packet, and actualSrc the id of the packet's original source. In addition to these fields, each DATA packet also carries piggy-backed information. Note that the piggy-backed information transmitted through the RTS and CTS frames corresponds to the highest priority packet in the node's transmission queue, i.e., the highest priority packet to be transmitted by the node. The piggy-backed information in the DATA frame, in contrast, corresponds to the highest priority packet in the node's ST. Since a node's ST contains information about the node's own packets as well as information about packets obtained from the piggy-backed fields of overheard RTS, CTS, DATA, and ACK packets, under ideal network conditions the highest priority packet of the ST corresponds to the highest priority packet in the neighborhood of that node. The piggy-backed information fields in the DATA frame are pbDeadline, the deadline target of the highest priority packet in the ST of the node; pbRemHops, its remaining number of hops; pbFlowId, the id of its flow; and pbActualSrc, the id of the node that originated it. The new fields added to the DATA frame as per our scheme are shown in Fig. 4.

2.2.6. ACK frame

An ACK frame transmitted by a node, just like the DATA frame, carries piggy-backed information about the highest priority packet in the ST of the node. It also carries feedback information for that flow. The PDT of a node needs to be updated with the number of packets received successfully by the destination till the current point of time.

Fig. 4. New fields added to the DATA frame: deadline | remHops | flowId | actualSrc | pbDeadline | pbRemHops | pbFlowId | pbActualSrc.

Fig. 5. New fields added to the ACK frame: deadline | remHops | flowId | actualSrc | fbFlowId | fbSrc | fbCount.

This is very important because the packet delivery ratio is an important parameter used for calculating the priority of a packet and for calculating the back-off period. The destination node, on receiving a DATA packet, updates the count of packets received so far in the pktsSent field of its PDT, and piggy-backs the same value in the fbCount field of the very next ACK frame it sends. The immediate sender node, on receiving this ACK frame, extracts the value from the fbCount field and updates the acksRcvd field of its PDT. The other two piggy-backed fields carrying feedback information, namely fbFlowId and fbSrc, are used to determine the entry in the PDT to be updated. If a node which is not the intended destination of an ACK packet overhears the ACK packet in the promiscuous mode, it updates the corresponding entry in its PDT, provided it has an entry for the same flow; otherwise it discards the packet after extracting the piggy-backed information from it. The fields of the ACK frame newly added in our scheme are shown in Fig. 5.
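The split between a DATA frame's own fields and the pb* fields it piggy-backs might be sketched as follows (a hypothetical helper, not from the paper; plain dicts stand in for 802.11 frames and scheduling-table entries):

```python
def data_frame_extras(packet, st_entries):
    """Build the eight extra DATA-frame fields of the scheme.

    packet: the DATA packet being transmitted (its own four fields).
    st_entries: the node's Scheduling Table, sorted by priority index
    (lowest first), so st_entries[0] is, under ideal conditions, the
    highest priority packet known in the neighborhood.
    """
    top = st_entries[0]
    return {
        # fields describing the current packet itself
        "deadline": packet["deadline"],
        "remHops": packet["remHops"],
        "flowId": packet["flowId"],
        "actualSrc": packet["actualSrc"],
        # piggy-backed fields for the ST's highest priority packet
        "pbDeadline": top["deadline"],
        "pbRemHops": top["remHops"],
        "pbFlowId": top["flowId"],
        "pbActualSrc": top["actualSrc"],
    }
```

An RTS or CTS frame would carry only the first four fields, taken from the highest priority packet in the node's own transmission queue rather than from the ST.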

3. Description of our protocol In this section we give a detailed description of our MAC layer protocol. The routing protocol we used is Dynamic Source Routing (DSR) [10]. The MAC layer for its operation requires certain information regarding the packets (such as the number of hops to the destination) it receives from the network layer. To obtain this information from the network layer, we assume the presence of an interface function between the MAC and network layers through which the MAC layer requests and receives the required information. Our protocol, like other protocols (for example GT


algorithm [6], DPS [8,9], etc.), also assumes the availability of a globally synchronous clock at each node.

3.1. Feedback mechanism

The current packet delivery ratio of a flow for which an entry is present in the node's ST is an important parameter used for calculating the priority of a packet belonging to that flow with respect to packets belonging to other flows that have entries in the ST of the node. Hence, a node needs to keep track of the packet delivery ratios of all flows that have entries in its ST. This is achieved by means of a feedback mechanism, whose overall functioning is depicted in Fig. 6.

Incoming packets to a node are first queued in the node's input queue according to their arrival times. The scheduler sorts them according to their priority values and inserts them into the transmission queue. The highest priority packet is selected from the transmission queue and transmitted. The node, after transmitting a packet, updates the information maintained in its PDT regarding the number of packets transmitted so far. The destination node of a flow, on receiving data packets, initiates feedback by means of which information about the number of packets received by it is conveyed to the source. These two pieces of information, denoted by Si in Fig. 6, are received by the feedback information handler. The feedback information handler, in parallel, also sends the previous state information Si−1 to the priority function module. The laxity budgets of all packets which have entries in the ST are available at the node (the mechanism for obtaining the laxity budget of a packet is explained later in this section). This information is also sent to the priority function module, which uses the information fed to it to calculate the priority indices of packets in the ST of the node. The scheduler uses the priority indices obtained from the priority function module to order packets in the node's input queue.

Each node, after transmitting a packet, increments the packets-sent count (the pktsSent field) in its PDT. When a packet is successfully received at the destination, the destination node updates the count of packets received so far in the pktsSent field of its PDT. It then piggy-backs this count value, the flow id, and the id of the actual source of the received packet on the ACK packet sent to the received packet's immediate sender. The immediate sender node, on receiving the ACK packet, retrieves the count of packets received at the destination, which is part of the piggy-backed information, and updates the acksRcvd field of the corresponding entry in its PDT. When this node receives the next DATA packet belonging to the same flow, the latest count of ACK packets received so far, previously updated in its PDT, is piggy-backed on the new ACK packet transmitted in response to this new DATA packet. When a packet is transmitted by a node, other nodes in the broadcast region of the transmitting node that hear the packet in the promiscuous mode also update the pktsSent field for the flow in their PDTs.

Fig. 6. Feedback mechanism: flows enter an FCFS input queue; the scheduler, driven by the priority function module, orders packets into the transmission queue; the feedback information handler receives state Si from incoming ACK packets and passes Si−1, together with laxity information, to the priority function module.
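The PDT bookkeeping in this feedback loop can be sketched as follows (hypothetical helper functions of our own; the PDT is modelled as a dict keyed by the (actualSrc, flowId) pair, which the paper says uniquely identifies a flow):

```python
pdt = {}  # (actualSrc, flowId) -> {"pktsSent": ..., "acksRcvd": ...}

def on_data_transmitted(actual_src, flow_id):
    # Called at the transmitting node, and at overhearing nodes in its
    # broadcast region: both increment pktsSent for the flow.
    e = pdt.setdefault((actual_src, flow_id), {"pktsSent": 0, "acksRcvd": 0})
    e["pktsSent"] += 1

def on_ack_heard(fb_src, fb_flow_id, fb_count):
    # fbCount carries the destination's cumulative receive count. Any node
    # holding a PDT entry for the flow (the intended receiver or an
    # overhearer) overwrites acksRcvd with it; otherwise the ACK is
    # discarded after the piggy-backed information is extracted.
    key = (fb_src, fb_flow_id)
    if key in pdt:
        pdt[key]["acksRcvd"] = fb_count
```

Because fbCount is a cumulative count rather than an increment, a lost or unheard ACK is harmless: the next ACK heard brings acksRcvd fully up to date.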


When such nodes hear the corresponding ACK packet, they again update the acksRcvd field of the corresponding entry in their PDTs. Consider the simple example path 1–2–3–4 shown in Fig. 7, where T denotes the pktsSent field and A denotes the acksRcvd field in the PDT of a node. Here, node 1 is the source and node 4 is the destination of a series of DATA packets whose flow id is, say, 1. Node 1 starts transmitting packets as shown in Step 1 of Fig. 7. When node 2 receives the first packet from node 1, it processes it, forwards the packet to node 3, and increments T to one (Step 2). In its first ACK packet to the immediate sender, node 1, it sets fbCount, the number-of-ACKs-received field for flow 1 in the ACK packet, to zero. In Step 3, node 3, on receiving the packet, repeats the same procedure that node 2 followed on receiving a packet. It first sends an ACK packet to node 2; note that here again, the fbCount field in this ACK packet is set to zero. It then forwards the received DATA packet to node 4 after updating T to 1. Now assume that node 4, the destination node, also receives this packet

Fig. 7. An example path 1–2–3–4, showing the pktsSent (T) and acksRcvd (A) values at each node as they evolve over Steps 1–5.

without any error. Node 4 immediately increments the count of packets received, updating the value of T to 1. When node 4 sends an ACK for this received packet, this updated count of packets received (which is 1 when the first packet is received), along with the actual source id and flow id, are inserted in the fbCount, fbSrc, and fbFlowId fields, respectively, of the ACK packet, and the packet is transmitted. Node 3, on receiving this ACK packet, extracts the fbCount value, which is 1 here, and updates its A value. In Step 4, node 3 piggy-backs this same value on the next ACK packet it sends, for the third DATA packet it receives from node 2. Finally, in Step 5, when node 2 transmits an ACK packet for the fifth DATA packet it receives from node 1, this value of A (= 1) is piggy-backed on it. Node 1, on receiving this packet, updates its A value to 1. Thus, after a small initial delay, the count of packets received at the destination node reaches the source. With every new ACK packet it receives, the source node also gets the updated count.

The main objective of the feedback process is to obtain the current packet delivery ratio of each flow that has an entry in the node's ST. Since the PDT of a node contains the flow id (flowId), actual source id (actualSrc), packets sent count (pktsSent), and count of ACK packets (acksRcvd) for each active flow it is aware of, the corresponding packet delivery ratio (PDR) of a flow can be computed as

PDR = acksRcvd / pktsSent.    (1)
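The hop-by-hop propagation of the destination's count can be illustrated with a small synchronous simulation. This is a deliberately simplified model, not the protocol itself: each node forwards at most one packet per round, and every DATA reception triggers an ACK carrying the receiver's latest known destination count.

```python
def simulate_feedback(num_packets, n=4):
    """Chain of n nodes (node 0 is the source, node n-1 the destination).
    T[i] mirrors pktsSent (packets received, at the destination);
    A[i] is the destination's count as last heard at node i via ACKs."""
    T, A = [0] * n, [0] * n
    queues = [0] * n
    queues[0] = num_packets
    for _ in range(num_packets + n):          # enough rounds to drain the path
        for i in range(n - 2, -1, -1):        # downstream nodes forward first
            if queues[i]:
                queues[i] -= 1
                T[i] += 1                     # sender counts the transmission
                if i + 1 == n - 1:
                    T[n - 1] += 1             # destination counts the delivery
                    A[i] = T[n - 1]           # its ACK carries the fresh count
                else:
                    queues[i + 1] += 1
                    A[i] = A[i + 1]           # ACK piggy-backs receiver's last count
    return T, A
```

Running `simulate_feedback(5)` delivers all five packets (every T reaches 5) while the source's A lags the destination's true count by a few hops' worth of ACKs, mirroring the small initial delay discussed above.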

Note that the acksRcvd entry in the PDT of a node at any point of time denotes the number of DATA packets received by the destination till that time. When a path break occurs or when a connection is closed, the corresponding entries for the flow in the PDT become stale. To remove such entries, a timer called the PDTExistenceTimer is used for each flow passing through the node. The timer is reset each time an ACK packet for that flow is received. If no ACK packets are received within the timeout period, entries in the PDT pertaining to the corresponding flow are purged. The PDTExistenceTimer timeout period in our studies is taken to be 500 ms.
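The purge driven by the PDTExistenceTimer might be sketched as follows (an illustrative implementation using explicit last-ACK timestamps rather than per-flow timer objects; the function and variable names are our own):

```python
PDT_TIMEOUT = 0.5  # PDTExistenceTimer period: 500 ms in our studies

def purge_stale_flows(pdt, last_ack_time, now):
    """Remove PDT entries for flows with no ACK heard within the timeout,
    e.g. after a path break or a closed connection.

    pdt: (actualSrc, flowId) -> PDT entry
    last_ack_time: (actualSrc, flowId) -> time the last ACK was heard
    """
    stale = [k for k, t in last_ack_time.items() if now - t > PDT_TIMEOUT]
    for key in stale:
        del last_ack_time[key]
        pdt.pop(key, None)
```

Resetting the timer on each ACK corresponds here to overwriting the flow's entry in last_ack_time with the current time.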


When a path break occurs, the source node reroutes the flow on a new path. Before transmitting the next packet on this new path, the source node starts a timer called the NewPathTimer. While transmitting the next packet belonging to the rerouted flow, the source node piggy-backs on it the count of packets belonging to that flow transmitted so far. This piggy-backing operation continues till the NewPathTimer expires; once the timer expires, the piggy-backing stops and new data packets are transmitted as before. Based on a flag field in the data packet, the nodes on the new path are able to retrieve the piggy-backed information consisting of the count of packets belonging to that flow transmitted so far. As mentioned above, the destination node piggy-backs the count of data packets received successfully so far on each ACK packet it transmits. After a small initial delay this information also becomes available at the intermediate nodes on the new path. Hence, all new intermediate nodes on the rerouted path are able to obtain the current packet delivery ratio of the flow after a small initial delay.

The feedback mechanism involves minimal overhead. Since the count information is piggy-backed on the ACK packets themselves, no additional control packets are required, though the size of the ACK packet increases slightly. Also, the higher layers in the protocol stack need not be modified; this makes our feedback scheme portable, so it can be used with any of the higher layer protocols.

3.2. Priority index calculation

The priority index of a packet is an indicator representing the priority of the packet; the lower the priority index, the higher the packet's priority. The priority index of a packet is defined as

PI = (PDR / M) × ULB    (2)

where PI is the priority index of the packet,

ULB = (deadline − currentTime) / remHops

is the uniform laxity budget of the packet, and M is a user-defined parameter representing the desired packet delivery ratio for the flow.
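Eqs. (1) and (2) translate directly into code. This is a sketch; the zero-division guard for a flow with nothing sent yet is our assumption, not something the paper specifies.

```python
def packet_delivery_ratio(pkts_sent, acks_rcvd):
    # Eq. (1): PDR = acksRcvd / pktsSent; treat a flow with nothing sent
    # yet as fully delivered (an assumed guard).
    return acks_rcvd / pkts_sent if pkts_sent else 1.0

def uniform_laxity_budget(deadline, current_time, rem_hops):
    # ULB = (deadline - currentTime) / remHops: the per-hop slack remaining.
    return (deadline - current_time) / rem_hops

def priority_index(pdr, m, deadline, current_time, rem_hops):
    # Eq. (2): PI = (PDR / M) * ULB; a lower index means higher priority.
    return (pdr / m) * uniform_laxity_budget(deadline, current_time, rem_hops)
```

For example, with M = 0.9, a flow delivering only half its packets gets a lower index (higher priority) than one already meeting its target, and a packet with many hops still to travel is likewise promoted over one near its destination.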


The priority indices of packets form the basis for scheduling the next packet for transmission. As given above, the expression for the priority index consists of two significant terms, PDR/M and ULB. When most packets belonging to a flow meet their delay targets, the term PDR/M has a high value; hence, for packets belonging to that flow, the priority index is higher and the actual priority of the packets is lowered. When very few packets of a flow meet their delay targets, the value of PDR/M is very small, thereby lowering the priority index and increasing the priority of packets belonging to that flow. ULB plays an equally important role. Since remHops, the number of hops remaining to be traversed, is in the denominator of the expression for ULB, when a packet is near its source and needs to travel several hops to reach its destination, its priority index value is lowered, thereby increasing the priority of the packet. As the packet nears its destination, the fewer remaining hops tend to increase the priority index, thereby lowering the priority of the packet. The value of remHops is obtained at the MAC layer of the source node in the following manner. When a DATA packet generated at the application layer of the source node reaches the MAC layer, the MAC layer uses a special interface function to request from the network layer the remaining number of hops to be traversed by the packet. Once it obtains this information, the value is entered in the remHops field of the DATA packet. At the MAC layer of each subsequent node that receives this DATA packet, the remHops field is extracted, decremented by one, and written back before the packet is forwarded to the next node on the path to its destination. Thus the MAC layers of all nodes that handle the DATA packet know the remaining number of hops to be traversed by the packet.
Hence the ULB of the packet can now be calculated.

3.3. Scheduling table updates

An attempt is made by a node to update its ST every time a packet is received by the node at its radio layer.


RTS and CTS packets transmitted by a node carry piggy-backed information related to the highest priority packet queued at the node. Before transmitting an RTS packet, the node determines the highest priority packet (the one with the lowest priority index) among all packets in its transmission queue. When it transmits the RTS packet, it piggy-backs on it the priority information (consisting of the delay target, remaining number of hops, flow id, and actual source id) corresponding to this highest priority packet. Since nodes in the network operate in promiscuous mode, all nodes within the direct transmission range of the sender hear this RTS packet. Each node that hears the RTS first retrieves the piggy-backed priority information, then calculates the priority index of the corresponding packet (which is queued at the sender of the RTS), and finally adds a corresponding entry to its ST. Once the intended receiver of the RTS packet receives the RTS without any error, it responds with a CTS if it is ready to receive the packet. But before sending the CTS, this node too searches for the highest priority packet in its transmission queue, and it piggy-backs the priority information related to this packet on the CTS. Each node receiving the CTS follows the same procedure for updating its ST as is followed when an RTS is received. DATA and ACK packets transmitted by a node carry piggy-backed information corresponding to the highest priority packet entry in the ST of the node. A DATA packet carries two sets of information. The first is about the DATA packet itself: the end-to-end delay target (deadline), remaining number of hops (remHops), actual source id (actualSrc), and flow id (flowId) constitute this information. When a node receives a DATA packet, the uniform laxity budget (ULB) of the packet can be calculated using the above information.
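The RTS/CTS piggy-backing described above can be sketched as follows (illustrative names and record formats; the paper does not give the implementation):

```python
# Sketch of selecting the highest priority queued packet and building
# the piggy-backed fields (pdl, prh, pfid, pasrc) of an RTS or CTS.

def highest_priority_packet(tx_queue, compute_pi):
    """Return the queued packet with the lowest priority index."""
    return min(tx_queue, key=compute_pi)

def build_rts(tx_queue, compute_pi):
    best = highest_priority_packet(tx_queue, compute_pi)
    # Piggy-backed fields: delay target, remaining hops, flow id,
    # and actual source id of the highest priority queued packet.
    return {"pdl": best["deadline"], "prh": best["remHops"],
            "pfid": best["flowId"], "pasrc": best["actualSrc"]}
```

A node hearing the RTS would recompute the priority index from these fields and insert a corresponding entry into its ST.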
The packet delivery ratio (PDR) of the flow to which the received packet belongs can be obtained using the count of DATA packets sent (pktsSent) and the count of ACK packets received (acksRcvd) for the flow, available at the PDT of the node. So now the priority index of the packet (PI), as given in Eq. (2), can be calculated. If the node is the intended receiver, before the packet is inserted

into the transmission queue of the node, its priority index is calculated and an entry is made in the ST, provided a duplicate entry for the same packet does not already exist. The ST contains entries for packets in increasing order of their priority indices, and the new entry is inserted accordingly. The second piece of information piggy-backed on the packet regards the highest priority entry in the immediate sender node's ST. The sender of the packet piggy-backs information corresponding to the top-most entry in its ST (the one with the lowest priority index). Note that if an entry corresponding to a packet queued at the sender's queue also turns out to be the highest priority entry in its ST, then the data packet information and the piggy-backed information are the same. The piggy-backed information is retrieved from the DATA packet by all nodes within the direct transmission range of the sender that hear the packet. The priority index is then calculated, and an entry is made in the ST of the node. Once this is done, if the current node is the intended receiver, the received packet is forwarded to the upper layer; otherwise the packet is discarded and no further processing is done. When a node receives a DATA packet without any error, it responds by sending an ACK packet to the sender. The ACK packet carries piggy-backed information similar to that carried by the DATA packet. Here again, the node sending the ACK packet piggy-backs information about the highest priority entry in its ST, and a node receiving this ACK packet calculates the corresponding priority index and updates its ST. It then processes the packet if it is the intended receiver; otherwise it discards the packet.

3.3.1. Refreshing the scheduling table

The rank of a node's highest priority packet in the node's ST plays a vital role in determining the back-off period for which the node needs to wait before transmitting.
Since the priority index of a packet keeps changing with time, it needs to be updated constantly. Each node, before calculating its back-off period and before inserting a new entry into its ST, re-calculates and updates the priority index of every entry in its ST.
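The refresh pass can be sketched as follows, recomputing each entry's priority index and, in the same pass, dropping entries whose deadline targets have already been missed (entry fields and helper names are illustrative):

```python
# Sketch of refreshing a scheduling table (ST): recompute the priority
# index of every entry and purge entries whose deadlines have passed.

def refresh_st(st, now, flow_pdr, desired_pdr):
    """st: list of entries with deadline, remHops, and flowId fields.
    flow_pdr: current packet delivery ratio per flow id."""
    fresh = []
    for e in st:
        if e["deadline"] <= now:      # deadline already missed: stale entry
            continue
        ulb = (e["deadline"] - now) / e["remHops"]
        e["pi"] = (flow_pdr[e["flowId"]] / desired_pdr) * ulb
        fresh.append(e)
    fresh.sort(key=lambda e: e["pi"])  # lowest PI (highest priority) first
    return fresh
```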


When a node receives or hears a DATA packet, if an entry for the corresponding packet exists in its ST, that entry is deleted. The sender node, however, deletes its entry from the ST only when an ACK packet for the transmitted DATA packet is received. It may happen that a transmitted DATA packet is not heard by a node which had previously been located within the transmission range of a sender node holding the highest priority packet in its locality. This might be because that node has moved from its original position, or because of location-dependent errors that arise due to the broadcast nature of the channel. In such cases the stale entries might affect the desired scheduling of packets. Another source of stale entries in the ST is that, when the network load is high, some packets miss their deadline targets while still waiting in the node's queue; such packets will never be transmitted. In order to remove stale entries, whenever table updates are performed, entries whose deadline targets have already been missed (i.e., the current time is greater than the deadline target of the entry) are deleted.

3.3.2. Incorrect information in the scheduling table

Under certain circumstances incorrect or invalid information might be present in the scheduling table. The node for which an entry had been made in the ST might have moved away. Since the current node does not hear the corresponding DATA packet, the entry is not removed, and the no-longer-valid entry simply stays in the ST. Another possible situation is that the DATA packet suffered a collision at the current node, which holds an entry in its ST for the collided DATA packet. This entry, too, would simply remain in the ST. Such incorrect entries might affect the correct scheduling of packets. To remove them from the ST, a timer called STExistenceTimer is used for each entry in the ST. This timer is reset each time the node receives piggy-backed priority information regarding the same DATA packet.
Once this timer expires, if the corresponding entry is still present in the ST, it is removed.

3.3.3. An example

An example to illustrate the working of the ST updation mechanism is shown in Fig. 8. Here Ni denotes the ith ranked packet in the transmission


queue of node N. The topology is such that a packet transmitted by any node can directly reach the other two nodes. Assume packet A1 has the highest priority in the broadcast region of node A (which covers both nodes B and C), followed by packet B1. The RTS packet transmitted by node A carries piggy-backed information regarding packet A1. Nodes B and C, on receiving this RTS packet, add an entry for A1 at the top-most position of their STs. Node B then piggy-backs information about packet B1, its highest ranked packet, on the CTS it transmits. On receiving this CTS packet, nodes A and C update their STs. Note that since B1 has a lower priority than A1, its rank in the ST is lower than that of A1. Node A now transmits packet A1, piggy-backing information pertaining to the highest ranked entry in its ST, which again is packet A1 in this case. Nodes B and C, since they already contain an entry corresponding to A1 in their STs, do not make any new entries. Instead, since A1 has been received successfully, they remove the entries corresponding to A1 from their STs. Both their STs now have B1 as the top-most entry. Finally, node B transmits an ACK packet, piggy-backing information regarding the highest ranked entry in its ST, which is B1 here. Nodes A and C, on receiving the ACK, do not make fresh entries into their STs, since they already have an entry corresponding to B1. Node A removes the entry corresponding to A1 from its ST, as A1 has been successfully received at node B. At this stage packet B1, which is the highest ranked packet in the neighborhood of node B, occupies the top-most position in the STs of all three nodes.

3.4. Back-off mechanism

The objective of the modified back-off procedure is to make the back-off period taken by a node reflect the priority of the node's highest priority packet with respect to the highest priority packets queued at nodes in the neighborhood of the current node.
If r is the rank, in the ST of the node, of the current packet to be transmitted, n is the number of retransmission attempts made for the packet, and nmax is the maximum number of retransmission attempts permitted, then the back-off interval is given by


[Fig. 8. ST updation example. The figure shows the transmission queues and scheduling table states of nodes A, B, and C at five stages: (i) initial state, (ii) RTS transmitted by node A is received at node B, (iii) CTS transmitted by node B is received at node A, (iv) DATA packet transmitted by node A is received at node B, and (v) ACK transmitted by node B is received at node A. The RTS and CTS carry the fields (pdl, prh, pfid, pasrc); the DATA packet carries (dl, rh, fid, asrc, pdl, prh, pfid, pasrc); the ACK carries (pdl, prh, pfid, pasrc, ffid, fsrc, fcnt). Here dl = deadline, rh = remaining number of hops, fid = flow id, and asrc = actual source; the p-prefixed fields are the corresponding piggy-backed values; ffid, fsrc, and fcnt are the flow id, actual source, and received-packet count of the flow for which the count information is fed back.]

back-off =
  Uniform[0, (2^n × CWmin) − 1]                  if r = 1 and n < nmax,  (3)
  (PDR/M) × CWmin + Uniform[0, CWmin − 1]        if r > 1 and n = 0,     (4)
  ULB × CWmin + Uniform[0, (2^n × CWmin) − 1]    otherwise,              (5)
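Spelled out in code, the three cases above might look like the following sketch (illustrative; Uniform[a, b] is modeled with a uniform random integer draw):

```python
import random

# Sketch of the back-off distribution of Eqs. (3)-(5); parameter names
# mirror the text, but the function itself is illustrative.

def backoff(r, n, n_max, cw_min, pdr, desired_pdr, ulb):
    uniform = lambda lo, hi: random.randint(lo, hi)
    if r == 1 and n < n_max:
        # Eq. (3): highest ranked packet in the broadcast region.
        return uniform(0, (2 ** n) * cw_min - 1)
    if r > 1 and n == 0:
        # Eq. (4): first transmission attempt of a lower ranked packet.
        return (pdr / desired_pdr) * cw_min + uniform(0, cw_min - 1)
    # Eq. (5): retransmission of a lower ranked packet.
    return ulb * cw_min + uniform(0, (2 ** n) * cw_min - 1)
```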

where CWmin is the minimum size of the contention window, PDR is the packet delivery ratio of the flow to which the packet belongs, M is the desired packet delivery ratio, and ULB is the uniform laxity budget of the packet. In practice this means that, if the packet is the highest ranked packet in the broadcast region of the node, it has the lowest back-off period, as per Eq. (3). If it is not the highest ranked packet, but it is the first time the packet is being transmitted, then the back-off follows the second case, Eq. (4), where the back-off is longer than in the first case. Finally, if the packet fits neither of these two categories, then the back-off value is given by the third case, Eq. (5), and is the longest of the three. Thus the highest priority packet in the locality faces very little contention, as almost all other nodes will be in their extended back-off periods. In the second case (Eq. (4)), the current PDR of the flow affects the back-off period: if the PDR is very low, the first term is small, and if it is high, the first term is large and the node has to wait longer. In the third case (Eq. (5)), the ULB of the packet is used: the higher the value of ULB, the longer the back-off period.

3.4.1. An example

We use Fig. 9 to illustrate the channel access procedure. Consider three active flows over the paths F–E–B–I, G–D–A–B–I, and H–D–C–I, and suppose nodes A–E hold back-logged packets of these flows. Ni denotes the ith ranked packet at node N. Assume that the highest priority packet in the neighborhood of node A (denoted by the dotted circle) is A1, and the next highest priority packet is B1. Step 1 shows the contents of the STs of the nodes at a particular point of time. Now say node A sends an RTS packet to node B. It piggy-backs information regarding the highest priority packet in its transmission queue, i.e., packet A1, on the RTS packet. Nodes B–E, which receive this RTS packet, update their STs. Step 2 in Fig. 9 shows the updated contents of the STs. Note that an entry corresponding to packet A1 occupies the top-most position in the STs of all neighbor nodes of node A. Node B, on receiving the RTS, sends a CTS packet containing piggy-backed information about its highest priority packet, i.e., packet B1. Nodes A, C, E, and I, which hear this packet, update their STs, as shown in Step 3 of Fig. 9. At this point of time, an entry for packet A1 has the highest rank at all neighboring nodes of node A.
Hence all nodes, except node A, wait for longer periods as determined by the second (Eq. (4)) and third (Eq. (5)) cases in the back-off distribution scheme. Since node A satisfies the first condition in the back-off scheme (Eq. (3)), it has the lowest


waiting time, and also no contending nodes when it tries to access the channel. On receiving the CTS, node A now transmits a DATA packet, A1, which contains piggy-backed information about the highest priority packet in the ST of node A, which is again packet A1 itself in this case. Nodes B–E hear this packet and remove the entries corresponding to packet A1 from their STs. The new contents of the STs are as shown in Step 4 of Fig. 9. Node B, on receiving DATA packet A1, sends an ACK packet containing piggy-backed information about the highest ranked packet in its ST, which is packet B1 in this example. Nodes C, E, and I, which hear this ACK packet, do not update their STs, since they already have an entry for B1. Node A also does not update the information regarding B1 in its ST, but since this ACK packet acknowledges the DATA packet A1 sent previously by node A, the entry corresponding to packet A1 is deleted from its ST. At this point of time, an entry for packet B1 has the highest rank in the STs of all neighbor nodes of node B. Hence node B has the lowest back-off period, and it transmits before the other nodes in its neighborhood. The other nodes follow the back-off distribution according to either Eq. (4) or Eq. (5). Hence the probability that they contend for the channel when node B is trying to gain access to it is very low.

3.5. Uniqueness of our scheme

We compare the results of simulation studies of our scheme with those of the DPS scheme [8,9]. The major differences between our Distributed Laxity-based Priority Scheduling (DLPS) scheme and the DPS scheme are as follows. Use of laxity information: In DPS, only the delay target/deadline of the packet is taken into consideration in the scheduling decisions. The number of hops to be traversed by the packet is also a very important factor. In DLPS, the uniform laxity budget (ULB) of a packet plays a very important role in determining the priority index of that packet.
The remaining number of hops to be traversed by a packet is in the denominator of the expression for its ULB (given with Eq. (2)). This brings in the concept of multi-hop scheduling into


[Fig. 9. An example network. Nodes A–I, with Flow 1 on path F–E–B–I, Flow 2 on path G–D–A–B–I, and Flow 3 on path H–D–C–I. The figure shows the scheduling tables at nodes A–E over five steps as entries such as A1 and B1 are added and removed during the RTS/CTS/DATA/ACK exchange described in the text.]

our scheme. The more hops a packet has yet to traverse, the lower its priority index and the higher its priority. Because of this, the scheduling decisions in our scheme are more appropriate than those in DPS. Feedback mechanism: The DPS scheme does not use any mechanism by which the current state of a flow can be reflected in the scheduling decisions. In our scheme, we use a unique

feedback mechanism by means of which information regarding the number of packets received at the destination up to that point of time is made available to all nodes on the path. This information, which reflects the current state of the flow, is used in the scheduling decisions. Piggy-backed priority information carried by the packets: In DPS, the RTS and CTS packets carry the priority index value of the current packet, and


the DATA and ACK packets carry the priority index of the head-of-line packet. A node distributes priority information about its own packets only. But in DLPS, the RTS and CTS packets carry piggy-backed information about the top-most packet in the node's transmission queue, while the DATA and ACK packets carry piggy-backed information regarding the highest priority entry in the ST of the node. A node thus also distributes priority information about high-priority packets queued at its neighbor nodes. Priority index calculation: In the DPS scheme, the priority index of a packet is directly piggy-backed on the outgoing frames. Packet ordering at a neighbor node, based on this information, may not remain valid and accurate after a certain period of time. But in our DLPS scheme, the current packet delivery ratio (PDR) of the flow and the uniform laxity budget of the packet are the important parameters used in calculating the priority index of the packet. For a neighbor node's packet entry in the ST of a node, since the values of these parameters are available, the node can independently re-calculate the priority index for that entry at any point of time. This gives an accurate estimate of the ranks of packets queued at a node. For the above reasons, our scheme is very different from the DPS scheme, which we use for comparison in our simulation studies in the following section.

4. Performance study

4.1. Simulation environment

We evaluated the performance of our proposed protocol by carrying out extensive simulation studies. The simulator used is GloMoSim, developed at the University of California, Los Angeles, using the PARSEC language [11]. We assumed the free-space propagation model, and the radio model used was radio capture. The Dynamic Source Routing (DSR) protocol [10] was used for routing in the simulations. The mobility model considered is random way point (WP), with a pause time of 30 s. The radio transmission range is taken as 250 m, and the channel capacity is assumed to be 2 Mbps. The Constant Bit Rate (CBR) model is used for data flows, with a data packet size of 512 bytes. The various parameters are shown in Table 1. The value of the parameter M used for all flows was 0.7 (the protocol can also support flows each having a different value of M). We compare the simulation results of our Distributed Laxity-based Priority Scheduling (DLPS) scheme with those of the IEEE 802.11 DCF scheme (which we henceforth refer to as 802.11) and the Distributed Priority Scheduling scheme proposed in [8,9] (which we henceforth refer to as DPS).

4.2. Metrics

The performance evaluation metrics we have used in our simulation studies are:

• Data packet delivery ratio: The packet delivery ratio of a flow is defined as the ratio of the number of packets received at the destination to the number of packets originated by the source. The packet delivery ratio of the system is the average of the packet delivery ratios of all flows in the network.
• Average end-to-end delay: The end-to-end delay for a data packet transmitted from a source node is the total time taken by the packet to reach its destination. This time is

Table 1
Simulation parameters

Routing protocol     DSR             Packet size        512 bytes
Mobility model       Way point       Session duration   200 s
Radio model          Radio capture   Simulation time    8 min
Propagation model    Free space      Channel capacity   2 Mbps
Transmission range   250 m           Queue size         100


4.3. Simulation results

4.3.1. Impact of network load

The system load was varied in this experiment by varying the number of simultaneous connections from 2 to 30. The maximum speed of nodes was taken as 10 m/s and the packet inter-arrival time as 20 ms. Two sets of experiments were carried out. In the first, the end-to-end delay/deadline target was taken to be 500 ms, and the number of nodes in the network was 40. Fig. 10 shows the percentage of packets meeting their deadline targets when the terrain dimensions are 200 m × 200 m, and Fig. 11 shows the corresponding curves when the terrain dimensions are 1000 m × 1000 m, for the DLPS and DPS schemes. The curves in Fig. 11 are lower than those in Fig. 10 because, when the terrain area is increased, the nodes are more spread out, and so the average hop-length increases, which in turn results in increased packet losses. The comparison between the DLPS and DPS schemes of the variation in average end-to-end delay of the packets with network load, for terrain dimensions 200 m × 200 m, is shown in

[Fig. 10. Packet delivery ratio vs load (terrain area: 200 m × 200 m).]

the sum of the propagation times of the packet and the queuing delays at the intermediate nodes on the path to the destination node. The average end-to-end delay is the average delay of all packets received without error at their destinations.
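The two metrics can be expressed directly in code (a sketch; the per-flow and per-packet record formats are illustrative, not taken from the paper's simulator):

```python
# Sketch of the two evaluation metrics used in the simulation study.

def system_packet_delivery_ratio(flows):
    """flows: list of (packets_received, packets_originated) per flow.
    The system PDR is the average of the per-flow delivery ratios."""
    ratios = [rcvd / sent for rcvd, sent in flows]
    return sum(ratios) / len(ratios)

def average_end_to_end_delay(deliveries):
    """deliveries: list of (send_time, receive_time) pairs for packets
    received without error at their destinations."""
    delays = [rx - tx for tx, rx in deliveries]
    return sum(delays) / len(delays)
```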

[Fig. 11. Packet delivery ratio vs load (terrain area: 1000 m × 1000 m).]

[Fig. 12. Average end-to-end delay vs load (terrain area: 200 m × 200 m).]

Fig. 12. Since our scheduling mechanism takes into consideration the timeliness of packets (through the ULB factor used in the priority calculation and in the back-off mechanism), the performance of our scheme, in terms of both the percentage of packets delivered within their end-to-end delay targets and the average end-to-end delay, is better than that of the DPS scheme. In the second set of experiments, the end-to-end delay target was fixed at 1000 ms. Figs. 13 and 14 show the corresponding curves for terrain dimensions 200 m × 200 m and 1000 m × 1000 m, respectively, when the end-to-end delay target is 1000 ms. Graphs showing the average end-to-end delay comparison between our scheme and the DPS scheme, for terrain dimensions 200 m × 200 m,

[Fig. 13. Packet delivery ratio vs load (terrain area: 200 m × 200 m).]

for the 1000 ms delay target case, are shown in Fig. 15. It can be seen from Fig. 15 that when the network load is low, the average end-to-end delay for the DPS scheme is lower, but at high loads (above 2.25 Mbps) DLPS performs better than DPS. The curves for the 1000 ms case show a trend similar to that of the curves corresponding to the 500 ms delay target experiments. Figs. 16 and 17 show packet delivery comparison curves for our scheme and the IEEE 802.11 DCF for the two terrain dimensions 200 m × 200 m and 1000 m × 1000 m, respectively, when the delay target is 500 ms. In Fig. 17, it can be seen that our scheme is more effective at high load than at low load. When the load is very low and the number of hops is large, the

[Fig. 14. Packet delivery ratio vs load (terrain area: 1000 m × 1000 m).]

[Fig. 15. Average end-to-end delay vs load (terrain area: 200 m × 200 m).]

[Fig. 16. Packet delivery ratio vs load (terrain area: 200 m × 200 m).]

[Fig. 17. Packet delivery ratio vs load (terrain area: 1000 m × 1000 m).]

[Fig. 18. Average end-to-end delay vs load (terrain area: 200 m × 200 m).]

[Fig. 20. Average end-to-end delay vs packet inter-arrival time (terrain area: 200 m × 200 m).]

802.11 DCF, which has no additional processing overhead, performs better than ours. But as the load is increased further (above 2.6 Mbps), the benefits of our scheduling scheme become visible, and hence it achieves better packet delivery than the 802.11 scheme. Fig. 18 shows the average end-to-end delay comparison of our scheme with the 802.11 DCF scheme, for terrain dimensions 200 m × 200 m. Here again, with increasing load our scheme outperforms 802.11 DCF. The load can also be varied by varying the inter-arrival time of packets: the smaller the inter-arrival time, the higher the load on the network, and vice versa. The number of nodes here is 60, the number of connections is 20, the delay target is 500

ms, and the nodes move about in a 200 m × 200 m terrain area. Fig. 19 shows the variation in the fraction of packets delivered within their deadlines, with varying packet inter-arrival time, for the DPS and DLPS schemes. Fig. 20 gives the corresponding end-to-end delay variation curves. The packet delivery of our scheme compared to the 802.11 DCF is shown in Fig. 21, while the corresponding delay comparison curves for varying packet inter-arrival time are shown in Fig. 22. Our back-off scheme ensures that the higher the priority of the packet being transmitted, the lower the contention for the shared channel. This minimizes collisions when the load is high, and hence results in the better performance of our scheme. In

[Fig. 19. Packet delivery ratio vs packet inter-arrival time (terrain area: 200 m × 200 m).]

[Fig. 21. Packet delivery ratio vs packet inter-arrival time (terrain area: 200 m × 200 m).]

[Fig. 22. Average end-to-end delay vs packet inter-arrival time (terrain area: 200 m × 200 m).]

both Figs. 19 and 21, when the inter-arrival time of packets is low, the network load is high, resulting in a lower number of packets meeting their deadlines. But as the inter-arrival time is increased, resulting in lower network load, the curves show an upward trend, with almost all transmitted packets meeting their deadlines at inter-arrival times of 60 ms (corresponding to a maximum network load of 1.36 Mbps) and higher.

4.3.2. Impact of mobility

In this experiment, we keep the number of connections (load) constant at 14 and vary the maximum speed of the nodes from 2 to 30 m/s. The

[Fig. 24. Packet delivery ratio vs mobility (terrain area: 1000 m × 1000 m).]

deadline is fixed at 500 ms and the number of nodes is 60. The corresponding curves for our DLPS scheme and the DPS scheme, for terrain dimensions 200 m × 200 m and 1000 m × 1000 m, are shown in Figs. 23 and 24, respectively. The variation in average end-to-end delay with mobility, for a 200 m × 200 m terrain area, is shown in Fig. 25. The increased hop-length for the larger terrain area results in increased packet losses, and hence the curves in Fig. 24 are lower than those for the smaller terrain area in Fig. 23. Figs. 26 and 27 show the performance comparison of our scheme with the 802.11 DCF, under varying mobility, when the delay target is 500 ms and the terrain dimensions are 200 m × 200 m and 1000 m × 1000 m,

2

Average End-to-End Delay (secs)

100 90 DLPS DPS

80 70 60 50 40 30 20 10 0 0

15

Mobility (m/s)

Packet Inter-arrival Time (ms)

Percentage of Pkts Delivered within Deadline

45

5

10

15

20

25

30

Mobility (m/s)

Fig. 23. Packet delivery ratio vs mobility (terrain area: 200 m · 200 m).

DLPS DPS

1

0 0

5

10

15

20

25

30

Moblity (m/s)

Fig. 25. Average end-to-end delay vs mobility (terrain area: 200 m · 200 m).

46

I. Karthigeyan et al. / Ad Hoc Networks 3 (2005) 27–50 100


Fig. 26. Packet delivery ratio vs mobility (terrain area: 200 m · 200 m).

Fig. 27. Packet delivery ratio vs mobility (terrain area: 1000 m · 1000 m).


4.3.3. Impact of node density

In this set of experiments, the node density is varied by varying the number of nodes, keeping the terrain area unchanged. The terrain dimensions chosen here are 200 m · 200 m, the end-to-end deadline target is 500 ms, the number of connections is 10, and the maximum node speed is 10 m/s. Fig. 29 shows the variation in the percentage of packets delivered within their deadline targets, with varying number of nodes, for our scheme and the DPS scheme. The corresponding variation in average end-to-end delay with the number of nodes is shown in Fig. 30. Figs. 31 and 32 compare the variation in packet delivery ratio and average end-to-end delay, respectively, of our scheme with the 802.11 DCF. From these four figures it can again be seen that our scheme performs better than both the DPS and the 802.11 DCF schemes. As the node density increases,


The average end-to-end delay curves for the DPS scheme and the 802.11 DCF, when the nodes move about in a 200 m · 200 m terrain, are shown in Fig. 28. Here again, the graphs show that our scheme performs better than the other two schemes, for both terrain dimensions. The assignment of priorities to packets based on flow characteristics, and a complementary back-off mechanism that ensures channel contention based on these priorities, enable our scheme to perform better.
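The priority assignment and the rank-based back-off described above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact formulation: the function names, the reading of the ULB as remaining slack shared uniformly over the remaining hops, the delivery-ratio weighting, and the linear back-off scaling are all assumptions made here for illustration.

```python
def uniform_laxity_budget(deadline, elapsed, hops_remaining):
    """Remaining end-to-end slack divided uniformly over the remaining
    hops (one plausible reading of the ULB; the paper defines it
    precisely in its scheme description)."""
    return max(deadline - elapsed, 0.0) / max(hops_remaining, 1)

def packet_priority(ulb, current_pdr, desired_pdr):
    """Lower value = higher priority: packets with little laxity, on
    flows falling short of their desired packet delivery ratio, are
    favored. The weighting below is an assumed illustration."""
    shortfall = max(desired_pdr - current_pdr, 0.0)
    return ulb / (1.0 + shortfall)

def backoff_slots(my_priority, overheard_priorities, base_slots=4):
    """A node's back-off grows with the rank of its best packet among
    the highest-priority packets overheard from its neighbors; rank 0
    (most urgent in the neighborhood) contends for the channel first."""
    rank = sum(1 for p in overheard_priorities if p < my_priority)
    return base_slots * (rank + 1)

# A packet 3 hops from its destination with 300 ms of slack left, on a
# flow currently delivering 70% against a 90% target:
prio = packet_priority(uniform_laxity_budget(0.5, 0.2, 3), 0.7, 0.9)
print(round(prio, 4))                           # -> 0.0833
print(backoff_slots(prio, [0.05, 0.12, 0.30]))  # -> 8 (rank 1)
```

The key design point this captures is that back-off duration depends on a packet's standing relative to neighbors' packets, rather than on a uniformly random draw as in 802.11 DCF.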


Fig. 28. Average end-to-end delay vs mobility (terrain area: 200 m · 200 m).


Fig. 29. Packet delivery ratio vs number of nodes (terrain area: 200 m · 200 m).


Fig. 30. Average end-to-end delay vs number of nodes (terrain area: 200 m · 200 m).


the contention for the channel also increases. Hence, nodes need to wait for longer durations before transmitting their DATA packets. This causes an increase in the average end-to-end delay of DATA packets in the DPS and 802.11 DCF schemes. But since the back-off mechanism in DLPS takes into consideration the uniform laxity budget (ULB), and hence the timeliness of the packets, the average end-to-end delay in our scheme remains low and almost constant as the node density increases. Thus, even when the node density is varied, our scheduling scheme performs consistently well, resulting in better overall performance.

4.3.4. Impact of terrain area

Here, we vary the terrain dimensions from 50 m · 50 m up to 1000 m · 1000 m. The number of nodes is 60, the number of connections is 20, the maximum node mobility is 10 m/s, and the end-to-end delay target is 500 ms. Fig. 33 shows the percentage of packets delivered within the deadline in our scheme, compared to the DPS scheme. Fig. 34 shows similar performance comparison curves for our scheme against the 802.11 DCF. As the terrain area increases, the nodes become more spread out, and hence the average hop-length of a path between any source and destination increases. This can be seen in Fig. 35, where the variation in average hop-length has been plotted against increasing terrain area. This increase in average hop-length results in


Fig. 31. Packet delivery ratio vs number of nodes (terrain area: 200 m · 200 m).


Fig. 32. Average end-to-end delay vs number of nodes (terrain area: 200 m · 200 m).


Fig. 33. Packet delivery ratio vs terrain area.


Fig. 36. Packet delivery ratio vs average hop-length.

Fig. 34. Packet delivery ratio vs terrain area.


Fig. 37. Packet delivery ratio vs average hop-length.

Fig. 35. Average hop-length vs terrain area.

an increase in packet losses. Hence, when the terrain area increases, the overall packet delivery ratio comes down. In Fig. 36, we have plotted the variation in packet delivery ratio with hop-length for our scheme against the DPS scheme; Fig. 37 shows the corresponding plots for our scheme against the 802.11 scheme. We can observe from these two figures that an increase in hop-length, when the load is fairly high, brings about a significant decrease in the overall packet delivery ratio. From the above graphs it can be seen that increasing hop-length degrades our scheduling scheme far less severely, and hence it performs better than both the other schemes.

5. Conclusion and future work

In this paper, we have presented a distributed laxity-based priority scheduling scheme for time-sensitive traffic at the MAC layer of MANETs. The scheme selects the packet to be transmitted next based on a priority calculation function that takes into consideration the uniform laxity budget of the flow, the current packet delivery ratio of the flow, and the desired packet delivery ratio. We have also presented a back-off scheme, in which a node varies its back-off period depending on the rank of its highest-priority packet with respect to other such high-priority packets queued at nodes in its neighborhood. By means of simulation studies, we have compared the performance of our scheme


with that of the DPS scheme and the IEEE 802.11 DCF. The simulation results show that our scheme performs better than the other two schemes. Under very high network load conditions, in the absence of any admission control policy, it is very difficult for any scheduling scheme to provide the required QoS guarantees. Hence, an admission control scheme is required, in which new calls would be accepted only if they do not degrade the performance of the network below acceptable limits.

Acknowledgements

This work was supported by the Department of Science and Technology, New Delhi, India.

References

[1] T.S.E. Ng, I. Stoica, H. Zhang, Packet fair queueing algorithms for wireless networks with location-dependent errors, in: Proceedings of IEEE INFOCOM '98, March 1998, pp. 1103–1111.
[2] S. Lu, V. Bharghavan, R. Srikant, Fair scheduling in wireless packet networks, in: Proceedings of ACM SIGCOMM '97, August 1997, pp. 63–74.
[3] N.H. Vaidya, P. Bahl, S. Gupta, Distributed fair scheduling in a wireless LAN, in: Proceedings of ACM MOBICOM '00, August 2000, pp. 167–178.
[4] H. Luo, P. Medvedev, J. Cheng, S. Lu, A self-coordinating approach to distributed fair queueing in ad hoc wireless networks, in: Proceedings of IEEE INFOCOM '01, April 2001, pp. 1370–1379.
[5] H. Luo, S. Lu, V. Bharghavan, A new model for packet scheduling in multihop wireless networks, in: Proceedings of ACM MOBICOM '00, August 2000, pp. 76–86.
[6] C. Barrack, K.Y. Siu, A distributed scheduling algorithm for quality of service support in multiaccess networks, in: Proceedings of IEEE ICNP '99, October 1999, pp. 245–253.
[7] M. Andrews, L. Zhang, Minimizing end-to-end delay in high-speed networks with a simple coordinated schedule, in: Proceedings of IEEE INFOCOM '99, March 1999, pp. 380–388.
[8] V. Kanodia, C. Li, A. Sabharwal, B. Sadeghi, E. Knightly, Distributed priority scheduling and medium access in ad hoc networks, ACM/Kluwer Wireless Networks 8 (5) (2002) 455–466.
[9] V. Kanodia, C. Li, A. Sabharwal, B. Sadeghi, E. Knightly, Distributed multi-hop scheduling and medium access with delay and throughput constraints, in: Proceedings of ACM MOBICOM '01, July 2001, pp. 200–209.

49

[10] D.B. Johnson, D.A. Maltz, Dynamic source routing in ad hoc wireless networks, in: T. Imielinski, H. Korth (Eds.), Mobile Computing, Kluwer Academic Publishers, Boston, 1996, pp. 153–181.
[11] UCLA Parallel Computing Laboratory and Wireless Adaptive Mobility Laboratory, GloMoSim: a scalable simulation environment for wireless and wired network systems, Available from .

I. Karthigeyan obtained his B.E. degree in Computer Science and Engineering from the University of Madras, Tamilnadu, India, in 2000. He is currently pursuing his M.S. (by Research) degree in Computer Science and Engineering at the Indian Institute of Technology (IIT), Madras, India. His research interests include Wireless Networks and Optical Networks.

B.S. Manoj completed his graduation in 1995 and post-graduation in 1998, both in Electronics and Communication Engineering, from the Institution of Engineers (India) and Pondicherry Central University, Pondicherry, respectively. He worked as a Senior Engineer with Banyan Networks Pvt. Ltd., Chennai, from 1998 to 2000. He is currently a doctoral student in the Department of Computer Science and Engineering at the Indian Institute of Technology (IIT), Madras, India.

C. Siva Ram Murthy obtained the B.Tech. degree in Electronics and Communications Engineering from Regional Engineering College (now National Institute of Technology), Warangal, India, in 1982, the M.Tech. degree in Computer Engineering from the Indian Institute of Technology (IIT), Kharagpur, India, in 1984, and the Ph.D. degree in Computer Science from the Indian Institute of Science, Bangalore, India, in 1988. He joined the Department of Computer Science and Engineering at IIT, Madras as a Lecturer in September 1988, became an Assistant Professor in August 1989 and an Associate Professor in May 1995, and has been a Professor with the same department since September 2000. He has to his credit over 180 research papers in international journals and conferences. He is a co-author of the textbooks Parallel Computers: Architecture and Programming (Prentice-Hall of India, New Delhi, 2000), New Parallel Algorithms for Direct Solution of Linear Equations (John Wiley & Sons, Inc., USA, 2000), Resource Management in Real-time Systems and Networks (MIT Press, USA, 2001), and WDM Optical Networks: Concepts, Design, and Algorithms (Prentice-Hall, USA, 2001). He is a recipient of the Best Ph.D. Thesis Award and of the Indian National Science Academy Medal for Young Scientists. He is a co-recipient of Best Paper Awards from the 5th IEEE International Workshop on Parallel and


Distributed Real-Time Systems and the 6th International Conference on High Performance Computing. He is a Fellow of the Indian National Academy of Engineering. He has held visiting positions at the German National Research Centre for Information Technology (GMD), Sankt Augustin, Germany, the University of Washington, Seattle, USA, the University of Stuttgart, Germany, EPFL, Switzerland, and the University of Freiburg, Germany. His research interests include Parallel and Distributed Computing, Real-time Systems, Lightwave Networks, and Wireless Networks.
