Uplink Scheduler and Admission Control for the IEEE 802.16 Standard

Juliana Freitag Borin and Nelson L. S. da Fonseca
Institute of Computing, University of Campinas
Campinas 13089-971, SP, Brazil
Email: {juliana,nfonseca}@ic.unicamp.br

Abstract—Although the IEEE 802.16 standard defines five service levels to support real-time and bandwidth-demanding applications, admission control and scheduling mechanisms are not specified in the standard. This paper introduces an uplink scheduling algorithm and an admission control policy for IEEE 802.16 networks. The proposed solution is fully standard-compliant and can be easily implemented in the base station. Simulation results show that this solution is able to meet the QoS requirements of multimedia applications.

Index Terms—802.16, QoS, Scheduling, Admission Control.

I. INTRODUCTION

To support the diversity of multimedia applications available on the Internet, the IEEE 802.16 standard [1] defines four types of service flows and mechanisms for bandwidth requests by connections in the uplink direction. The first service flow, Unsolicited Grant Service (UGS), periodically receives fixed-size grants without the need to request them. The second, real-time Polling Service (rtPS), provides periodic unicast bandwidth request opportunities to subscribers; these opportunities ensure latency bounds and minimum traffic rate guarantees. The non-real-time Polling Service (nrtPS) offers periodic unicast bandwidth request opportunities, but at more widely spaced intervals than rtPS, as well as a minimum traffic rate guarantee. The fourth type of service flow, Best Effort (BE), shares contention bandwidth request opportunities with the nrtPS service flow. The IEEE 802.16e-2005 amendment [2] adds a new service flow called extended real-time Polling Service (ertPS). This service uses a grant mechanism similar to the one used to support UGS connections, except that the periodically allocated grants can be used to send bandwidth requests informing the required grant size.

Although these five types of service flows furnish the basis for QoS provisioning, the core of the task resides in the resource allocation done by the scheduling mechanism. The IEEE 802.16 standard states that both the downlink and the uplink schedulers at the base station (BS), as well as the scheduler at the subscriber station (SS), should be defined by proprietary implementations. The major challenges, however, are related to scheduling in the uplink direction at the BS. Scheduling in the downlink direction and at the SSs is less complex, since information about queue status is locally available.
(This research was partially sponsored by UOL, through its UOL Bolsa Pesquisa program, process number 20071211114200, and by CNPq.)

To develop a standard-compliant uplink scheduler, the following aspects should be taken into account:

1) Based on the QoS parameters of the connections and on the bandwidth requests of each SS, the BS should distribute the uplink bandwidth such that the QoS requirements of each connection are satisfied.
2) The BS should allocate bandwidth not only for data transmission but also for the transmission of bandwidth requests, according to the request mechanism defined for each type of service flow.
3) The uplink scheduler should support all QoS parameters defined by the standard.

To address the first and the third points, however, the number of simultaneous connections in the system should be restricted in order to avoid saturation of the wireless channel. This means that an admission control mechanism should work jointly with the uplink scheduler to support Quality of Service. Although the IEEE 802.16 standard mentions the necessity of an admission control mechanism, it does not define any policy.

This paper introduces a standard-compliant mechanism for uplink scheduling and admission control that deals with the aspects listed above. The admission control regulates the traffic load in the network such that the scheduler is able to provide the QoS level required by the accepted connections. The admission of new connections is based on the minimum reserved traffic rate parameter: a connection is accepted only if this requirement can be guaranteed for the new connection as well as for the already accepted ones. The scheduler allocates grants with sizes up to the maximum traffic burst value, so that connections transmit at rates varying between the minimum reserved traffic rate and the maximum sustained traffic rate. Moreover, the maximum latency requirements of real-time connections are guaranteed for the traffic flows that do not exceed their minimum reserved traffic rates, as specified by the standard. UGS and ertPS connections receive grants at intervals defined by the unsolicited grant interval parameter.
Periodic grants for bandwidth requests by rtPS and nrtPS connections are allocated according to the unsolicited polling interval parameter.

The proposed scheduler extends previous work by the authors. The version presented in [9] does not include maximum sustained traffic rate and maximum traffic burst policing. Moreover, that scheduler tries to serve all rtPS packets before the expiration of their deadlines, which demands complex admission control schemes in order to guarantee a maximum latency value while achieving high levels of utilization. In the version presented in [10], the maximum sustained traffic rate policing uses a sliding window technique, while the maximum traffic burst is controlled by checking the size of the allocated grants. In the scheduler introduced in this paper, the policing of both parameters is performed by a dual leaky bucket regulator. This technique simplifies the algorithm and its configuration, since it is not necessary to investigate the window size that leads to the best performance for the different types of traffic. Both the scheduler presented in [10] and the one presented here guarantee maximum latency to the flows that do not exceed their minimum reserved traffic rates; in this way, standard compliance is achieved. Moreover, the proposed approach yields a simple rate-oriented admission control algorithm which controls the load so that sufficient resources are available to provide the minimum reserved traffic rates. The scheduler can therefore prioritize the real-time connections with packets with expiring deadlines that have not yet reached the minimum rate. Simulation results show that the proposed mechanisms are capable of providing the connection-level QoS support defined by the IEEE 802.16 standard.

The rest of this paper is organized as follows. Section II discusses related work. Sections III and IV present the admission control and the scheduling mechanisms, respectively. Section V describes the simulation environment used to test the mechanisms. Section VI presents numerical results. Finally, Section VII concludes the paper.

978-1-4244-4148-8/09/$25.00 ©2009 This full text paper was peer reviewed at the direction of IEEE Communications Society subject matter experts for publication in the IEEE "GLOBECOM" 2009 proceedings.

II. RELATED WORK

Many solutions proposed for the IEEE 802.16 uplink scheduler [5], [8], [12] combine classic scheduling policies developed for wired networks, such as Strict Priority, Weighted Fair Queuing (WFQ), and Earliest Deadline First (EDF), resulting in complex scheduling schemes. More recent work [4], [17], as well as the scheduler proposed in this paper, has invested in simple, implementable ideas.
Given that the uplink scheduler at the BS is executed at each frame and there can be as many as 400 frames per second, simpler solutions become more attractive. Unlike the schedulers proposed by Sayenko et al. [4] and Bai et al. [17], the scheduler introduced here is fully standard-compliant. The scheduler proposed in [4] does not provide maximum latency guarantees, while the one proposed in [17] uses a priority value computed by the SSs to provide rate and latency guarantees. Although this approach leads to a less complex scheduler at the BS, it restricts interoperability between equipment from different vendors, since all the SSs must be able to calculate the priority values.

The admission control scheme proposed in this paper, as well as other solutions available in the literature [7], [8], uses only the minimum rate requirement in the decision process, which is sufficient to be standard-compliant (even for the real-time services, the maximum latency needs to be guaranteed only for the traffic not exceeding the connection's minimum rate requirement). Admission control algorithms which include latency guarantees for all real-time traffic tend to be more complex, such as the proposal presented in [6]. The scheme proposed here differs from the others which consider only the minimum rate requirement in that it takes into account the overhead generated by the type of service being solicited (such as the overhead incurred by the bandwidth request mechanism). Chandra and Sahoo [16] also considered the overhead of different types of service; however, the ertPS service is not included in their mechanism. Additionally, the authors do not detail the algorithm used by the search routine, which is an important part of their proposed mechanism.

III. ADMISSION CONTROL ALGORITHM

The proposed admission control scheme accepts a new connection when the following condition is satisfied:

C_reserved + TR_i^service ≤ C    (1)

TR_i^service is the traffic rate that should be guaranteed to the new connection i of service type "service"; C_reserved is the capacity reserved for the connections already admitted in the system, which is equal to the sum of TR_i^service over the n admitted connections; and C is the capacity available to the uplink scheduler, i.e., the amount of uplink bandwidth that the scheduler can allocate for data transmission and unicast polling.

A connection requesting admission provides, among other parameters, the minimum reserved traffic rate (minTR_i). However, this rate alone is not sufficient when taking into account the overhead generated by the type of service flow being solicited. Therefore, instead of considering only the minimum reserved traffic rate requirement, the TR_i^service value also includes an estimate of the overhead generated by the type of service flow.

UGS and ertPS connections receive grants at intervals defined by the unsolicited grant interval (ugi) parameter. The grant size for UGS connections is fixed, based on the maximum sustained traffic rate of the service flow (which is equal to the minimum reserved traffic rate for this service). Although the size of the grants allocated to ertPS connections varies according to the amount of requested bandwidth, the minimum size that should be guaranteed is based on the minimum reserved traffic rate. In this way, for both types of service flows, the BS should guarantee grants with size equal to the value provided by the following equation:

grantSize_i = minTR_i × ugi_i    (2)

Equation 2 gives the grant size in bytes. However, in the UL-MAP, grants are described in terms of slots; thus, the number of slots assigned to a connection i is approximated by the smallest integer larger than or equal to grantSize_i / slotSize. This normalization may result in bandwidth waste when grantSize_i is not a multiple of the number of bytes that can be transmitted in one time slot (slotSize). Consequently, the rate used by a UGS (ertPS) connection can be larger than the maximum sustained traffic rate (minimum reserved traffic rate) requirement. To account for this overhead, the value of TR_i^service for UGS and ertPS connections is computed as follows:

TR_i^{UGS,ertPS} = (⌈grantSize_i / slotSize⌉ × slotSize) / (ugi_i + tj_i)    (3)

In Equation 3, ⌈grantSize_i / slotSize⌉ gives the grant size in slots. Grants should be provided at intervals of duration ugi_i with maximum jitter tj_i (defined by the tolerated jitter parameter).
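As a concrete illustration, the admission test of Equation (1) and the slot-rounded rate estimate of Equations (2)-(3) can be sketched as follows. This is a minimal sketch, not the paper's implementation; the function and variable names (e.g., `ugs_ertps_rate`) are illustrative.

```python
import math

def ugs_ertps_rate(min_tr, ugi, tj, slot_size):
    """Estimate TR_i for a UGS/ertPS connection (Equations 2-3).

    min_tr    -- minimum reserved traffic rate, in bytes/s
    ugi       -- unsolicited grant interval, in seconds
    tj        -- tolerated jitter, in seconds
    slot_size -- bytes carried by one uplink slot
    """
    grant_bytes = min_tr * ugi                  # Equation (2): grant size in bytes
    slots = math.ceil(grant_bytes / slot_size)  # round up to whole slots
    return slots * slot_size / (ugi + tj)       # Equation (3): effective rate

def admit(c_reserved, tr_new, capacity):
    """Admission test (Equation 1): accept iff C_reserved + TR_i <= C."""
    return c_reserved + tr_new <= capacity
```

Note how the slot rounding inflates the guaranteed rate: a 3250-byte/s flow polled every 20 ms needs 65 bytes per grant, but with 48-byte slots it consumes two slots (96 bytes), i.e., an effective reservation of 4800 bytes/s.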


Multiplying the grant size in slots by the size of one slot in bytes and dividing this value by the sum ugi_i + tj_i provides the minimum rate that should be guaranteed for either UGS or ertPS connections.

Besides the minimum reserved traffic rate, rtPS and nrtPS connections also need periodic grants to request bandwidth. Therefore, the rate that should be guaranteed for these connections is equal to the sum of the minTR_i requirement and the rate used by unicast polling:

TR_i^{rtPS,nrtPS} = minTR_i + (upSlots × slotSize) / upi_i    (4)

where upSlots is the number of slots used by unicast polling and upi_i is the value provided by the unsolicited polling interval parameter. nrtPS connections do not provide the upi_i parameter; however, the standard states that this interval can be chosen by the BS. Best Effort connections are always accepted, since they do not have QoS requirements.

IV. SCHEDULING MECHANISM

The proposed uplink scheduler uses three queues: a low, an intermediate, and a high priority queue. The scheduler serves the queues according to their level of priority. The low priority queue stores the BE bandwidth requests. The intermediate queue stores the bandwidth requests sent by both rtPS and nrtPS connections. These requests can migrate to the high priority queue to guarantee that their QoS requirements are met. In addition to the requests migrating from the intermediate queue, the high priority queue stores periodic grants and unicast request opportunities that must be scheduled in the following frame.

The BS executes the uplink scheduler at every frame and broadcasts the scheduling agenda to the SSs in the UL-MAP message. At each frame, the scheduler generates periodic grants and inserts them into the high priority queue at predefined intervals. In this way, UGS and ertPS grants are guaranteed, and rtPS and nrtPS unicast request opportunities are provided as specified by the standard.

To guarantee the maximum latency requirement, the BS assigns a deadline to each rtPS bandwidth request in the intermediate queue. Each time the scheduler is executed, requests with deadlines expiring two frames ahead and associated with connections which have not received the minimum reserved traffic rate in a window of duration T migrate from the intermediate queue to the high priority queue. In this way, the scheduler guarantees the maximum latency for the flows that do not exceed the minimum reserved traffic rate, as specified in the standard.

It is necessary to know the arrival time of the packets at the SS queues to determine the deadlines of the requests. Since the BS has no access to this information, it considers that the packet arrived at the queue immediately after the last bandwidth request sent by the connection. Hence, the deadline of a request is equal to the arrival time of the last request sent by the corresponding connection plus the maximum latency requirement of that connection.

The scheduler guarantees the minimum reserved traffic rate to the rtPS and nrtPS services in windows of duration T. Every time the scheduler is executed, it computes a priority value for each request in the intermediate queue, considering, per connection, the minimum reserved traffic rate, the backlogged requests, and the traffic rate received in the current window. Low priority values are assigned to the requests of connections which have already received the minimum reserved traffic rate in the current window. For the remaining requests, the lower the rate received by the connection, the higher the priority value associated with them.

Since all the allocations for a single SS are placed in one grant in the UL-MAP, the scheduler guarantees that the sum of the bandwidth allocated to a single connection is less than or equal to the maximum traffic burst requirement. Moreover, the scheduler does not allocate bandwidth for a connection if doing so would violate the maximum sustained traffic rate. A dual leaky bucket (Figure 1) is used for maximum burst and maximum rate policing. The capacity of the first bucket (bucket1) is infinite and its leak rate is equal to the maximum sustained traffic rate. The second bucket (bucket2) has capacity equal to the maximum traffic burst. Therefore, before allocating a grant, the scheduler checks whether the grant size is less than or equal to the bucket2 level.
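The deadline assignment and the migration rule can be sketched as follows. This is an illustrative sketch, not the paper's code; the test `frames_left == 3` mirrors line 6 of the procedures below, corresponding to a deadline expiring two frames ahead of the one being scheduled.

```python
def request_deadline(last_request_time, max_latency):
    # The BS cannot see the SS queues, so it assumes the packet arrived
    # immediately after the previous bandwidth request of the connection.
    return last_request_time + max_latency

def should_migrate(deadline, now, frame_duration, twnd_tr, min_tr):
    """True when an rtPS request must move to the high priority queue:
    its deadline is expiring two frames ahead and the connection is still
    below its minimum reserved traffic rate in the current window T."""
    frames_left = int((deadline - now) // frame_duration)
    return frames_left == 3 and twnd_tr < min_tr
```

With 5-ms frames, a request whose deadline is 16 ms away (3 whole frames) migrates only if its connection has not yet reached minTR in the current window.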

Fig. 1. Dual leaky bucket (bucket1 leaks at the maximum sustained traffic rate; bucket2 has capacity equal to the maximum traffic burst)
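One common way to realize this policing is as a token-bucket regulator in which the burst allowance (bucket2) is replenished at the sustained rate (bucket1's leak). The sketch below follows that reading of Figure 1; the class and method names are illustrative, not from the paper.

```python
class DualLeakyBucket:
    """Policing sketch: bucket2 holds the remaining burst allowance
    (capacity = maximum traffic burst) and is refilled at the maximum
    sustained traffic rate, i.e., bucket1's leak rate."""

    def __init__(self, max_sustained_rate, max_burst):
        self.rate = max_sustained_rate   # bytes/s fed back into bucket2
        self.max_burst = max_burst       # capacity of bucket2 (bytes)
        self.level = max_burst           # current allowance in bucket2
        self.last = 0.0                  # time of the last refill

    def refill(self, now):
        self.level = min(self.max_burst,
                         self.level + self.rate * (now - self.last))
        self.last = now

    def try_grant(self, size, now):
        """Allocate a grant only if it fits in bucket2's current level."""
        self.refill(now)
        if size <= self.level:
            self.level -= size
            return True
        return False
```

A grant that would exceed the remaining allowance is refused until enough allowance has leaked back in, which bounds both the instantaneous burst and the long-term rate.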

The Scheduling algorithm below presents the proposed scheme. After inserting periodic grants in the high priority queue, the algorithm checks which rtPS and nrtPS requests should migrate from the intermediate queue to the high priority queue (lines 2 and 3). In line 4, the scheduler distributes the non-allocated bandwidth among the BE connections. At the final step, it serves all the requests in the high priority queue.

ALGORITHM Scheduling
1. insert, in the high priority queue, the periodic data grants and unicast request opportunities that must be scheduled in the next frame
2. CheckDeadline
3. CheckMinimumBandwidth
4. DistributeFreeResources
5. schedule the requests in the high priority queue starting from the head of the queue

In the CheckDeadline procedure, the scheduler tries to migrate rtPS requests from the intermediate queue to the high priority queue using the MigrateBWRequest procedure whenever the following conditions hold: i) there is available bandwidth; ii) the deadline of an rtPS request expires during the frame following the next one; and iii) the corresponding connection has not received the minimum reserved traffic rate (minTR[CID]) in the current window.

PROCEDURES
CheckDeadline:
1.  for each request i at the intermediate queue do
2.    if availableBw = 0 then
3.      break
4.    if service[CID] = rtPS then
5.      frame[i] = (deadline[i] − currentTime) ÷ frameDuration
6.      if frame[i] = 3 and TwndTR[CID] < minTR[CID] then
7.        MigrateBWRequest(i)

CheckMinimumBandwidth:
8.  for each connection of type rtPS or nrtPS do
9.    backlogged_tmp[CID] = backlogged[CID]
10.   TwndTR_tmp[CID] = TwndTR[CID]
11.   bucket2_tmp[CID] = bucket2[CID]
12. for each request i at the intermediate queue do
13.   if minTR[CID] ≤ TwndTR_tmp[CID] or bucket2_tmp[CID] = 0 then
14.     priority[i] = 0
15.   else
16.     priority[i] = backlogged_tmp[CID] − (TwndTR_tmp[CID] − minTR[CID])
17.   TwndTR_tmp[CID] = TwndTR_tmp[CID] + BR[i]
18.   bucket2_tmp[CID] = bucket2_tmp[CID] − BR[i]
19.   backlogged_tmp[CID] = backlogged_tmp[CID] − BR[i]
20. sort the intermediate queue
21. for each request i at the intermediate queue do
22.   if availableBw = 0 then
23.     break
24.   MigrateBWRequest(i)

DistributeFreeResources:
25. for each request i at the low priority queue do
26.   if availableBw = 0 then
27.     break
28.   MigrateBWRequest(i)

MigrateBWRequest(i):
29. if BR[i] > availableBw then
30.   grantSize = availableBw
31. else
32.   grantSize = BR[i]
33. if grantSize > bucket2[CID] then
34.   grantSize = bucket2[CID]
35. if 0 < grantSize < BR[i] then
36.   create a new request j for connection CID with BR[j] = BR[i] − grantSize
37.   insert request j at the end of the intermediate queue
38.   BR[i] = grantSize
39. move request i to the high priority queue
40. TwndTR[CID] = TwndTR[CID] + grantSize
41. bucket2[CID] = bucket2[CID] − grantSize
42. backlogged[CID] = backlogged[CID] − grantSize
43. availableBw = availableBw − grantSize

The MigrateBWRequest procedure checks whether the amount of bandwidth solicited by the migrating request (BR[i]) is available in the uplink subframe. Whenever the available bandwidth is less than the requested amount, the available bandwidth is allocated and the allocation of the remainder is deferred. Part of the requested bandwidth is also deferred when its allocation would violate the maximum traffic burst value. Once the amount of bandwidth that can be allocated to the request is defined (grantSize), if this value is less than BR[i], a new request j is created with BR[j] equal to BR[i] − grantSize and inserted at the end of the intermediate queue. The bandwidth solicited by the ith request is updated to grantSize, and the request is migrated to the high priority queue. Finally, the procedure updates: i) the traffic rate received by the corresponding connection during a window of size T (TwndTR[CID]); ii) the level of bucket2 of the corresponding connection (bucket2[CID]); iii) the number of bytes requested by the backlogged requests sent by the corresponding connection (backlogged[CID]); and iv) the number of available bytes in the uplink subframe (availableBw).

The CheckMinimumBandwidth procedure first calculates a priority value for each request in the intermediate queue (lines 8-19). Then, it sorts the intermediate queue according to the priority values (line 20). Finally, the scheduler tries to migrate requests to the high priority queue using the MigrateBWRequest procedure. The DistributeFreeResources procedure distributes the available bandwidth among the BE requests by migrating some of them from the low priority queue to the high priority queue.
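The core MigrateBWRequest bookkeeping can be transcribed as follows. This is an illustrative transcription, not the authors' code; it assumes requests and per-connection state are plain dictionaries (`req['size']` standing for BR[i]), which is a representation chosen here for brevity.

```python
def migrate_bw_request(req, conn, queues, state):
    """Move request `req` to the high priority queue, granting at most
    what the uplink subframe and the burst policing (bucket2) allow;
    any remainder is re-enqueued as a new intermediate-queue request."""
    grant = min(req['size'], state['available_bw'])  # cap by free bandwidth
    grant = min(grant, conn['bucket2'])              # maximum traffic burst policing
    if 0 < grant < req['size']:
        # defer the remainder: create request j with the unserved bytes
        queues['intermediate'].append({'cid': req['cid'],
                                       'size': req['size'] - grant})
        req['size'] = grant
    queues['high'].append(req)                       # migrate to high priority queue
    conn['twnd_tr'] += grant                         # rate received in window T
    conn['bucket2'] -= grant                         # consume burst allowance
    conn['backlogged'] -= grant                      # outstanding requested bytes
    state['available_bw'] -= grant                   # free bytes in uplink subframe
```

For example, a 500-byte request against 1000 free bytes but only 300 bytes of burst allowance is split: 300 bytes migrate, and a 200-byte request goes back to the intermediate queue.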
V. SIMULATION EXPERIMENTS

To conduct this study, an ns-2 module for IEEE 802.16 networks was developed [11]. The simulated network consists of a BS with the SSs uniformly distributed around it. The frame duration is set to 5 ms, and the capacity of the channel is assumed to be 40 Mbps with a 1:1 downlink-to-uplink TDD split. To eliminate the impact of packet scheduling at the SSs on uplink scheduling, each SS has only one service flow. We consider five different types of traffic: voice, voice with silence suppression, video, FTP, and WEB, which are associated with the UGS, ertPS, rtPS, nrtPS, and BE services, respectively. The voice model used was an exponential "on/off" model with mean durations of the "on" and "off" periods equal to 1.2 s and 1.8 s, respectively. During the "on" periods, 66-byte packets are generated every 20 ms [13]. The voice with silence suppression model used the Enhanced Variable Rate Codec (EVRC) [3], with packets generated every 20 ms employing Rate 1 (171 bits/packet), Rate 1/2 (80 bits/packet), Rate 1/4 (40 bits/packet), or Rate 1/8 (16 bits/packet). Video traffic was generated from real MPEG traces [15]. The WEB traffic was modeled by a hybrid Lognormal/Pareto distribution, with the body of the distribution, corresponding to an area of 0.88, modeled by a Lognormal distribution with a mean of 7247 bytes and the tail modeled by a Pareto distribution with a mean of 10558 bytes [14]. FTP traffic was generated using an exponential distribution with a mean of 512 KBytes.

The unsolicited grant interval for the UGS and the ertPS services is 20 ms. The unsolicited polling interval is 20 ms for the rtPS service and 1 s for the nrtPS service. The maximum latency requirement of the rtPS service is 100 ms, and each rtPS connection has its own minimum reserved traffic rate and maximum sustained traffic rate requirements (which vary according to the mean rate of the transmitted video). The nrtPS service has a minimum reserved traffic rate requirement of 200 Kbps and a maximum sustained traffic rate requirement of 300 Kbps. The BE service does not have any QoS requirement.

The lifetimes of connections are exponentially distributed, with an average of 600 s for the rtPS connections and 300 s for the UGS, ertPS, and nrtPS connections. The connection interarrival times are exponentially distributed, with the mean varying from 60 s to 2 s for each type of service. In this way, under the lowest load, on average one connection of each type of service arrives per minute, while under the highest load an average of 30 connections of each type of service arrive per minute. The number of BE connections is fixed at 20 in all the simulated scenarios: since each simulation lasts 3600 s and BE connections are always admitted, the constant admission of new BE connections would result in an excessive number of connections in the network. Each result was produced by running the simulation ten times with different seeds. The mean values and the 95% confidence intervals are shown in the figures.

The following results compare the service parameters measured in the simulations with the corresponding QoS requirements defined for each service flow. The aim is to verify the capability of the proposed admission control and scheduler to support QoS at the connection level. Specifically, it is expected that: 1) UGS and ertPS connections receive periodic data grants according to the unsolicited grant interval; 2) traffic not exceeding the minimum reserved traffic rate of an rtPS connection is delayed less than the maximum latency constraint of the connection; and 3) the average throughput of any rtPS or nrtPS connection is no less than its minimum reserved traffic rate and no more than its maximum sustained traffic rate.

VI. NUMERICAL RESULTS

Fig. 2. Blocking probability (UGS, ertPS, rtPS, and nrtPS)

The first result (Figure 2) shows the influence of the increase of the connection arrival rate on the blocking probability of the UGS, ertPS, rtPS, and nrtPS services. It also gives an overall view of the status of the network, in terms of the number of connections admitted for each type of service flow under different load conditions, which helps in the analysis of the results presented next. Except for the lowest connection arrival rate, under which all types of service have a blocking probability equal to 0, the UGS service has the lowest blocking probability, followed by the ertPS, the nrtPS, and the rtPS services. Moreover, the higher the connection arrival rate, the larger the distance between the UGS curve and the curves of the other services. This is an effect of the different minimum rate values requested by the four types of service. While the UGS connections request about 26 Kbps, the ertPS and the nrtPS connections request, respectively, 80 Kbps and 200 Kbps, and the rtPS connections request between 200 Kbps and 900 Kbps. If the service provider wants to support similar blocking probabilities for the four types of service, the admission control mechanism can be extended to include the utilization level of each type of service, so as to avoid favoring connections with low minimum rate requirements.
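The exponential on/off voice source described in Section V can be sketched as follows. This is a minimal illustrative generator, not the actual ns-2 module [11]; names and the return format are assumptions made here.

```python
import random

def onoff_voice_trace(duration, mean_on=1.2, mean_off=1.8,
                      pkt_bytes=66, interval=0.02, seed=1):
    """Generate (timestamp, size) pairs for an exponential on/off source:
    66-byte packets every 20 ms during 'on' periods; 'on' and 'off'
    durations are exponentially distributed (means 1.2 s and 1.8 s)."""
    rng = random.Random(seed)
    t, packets = 0.0, []
    while t < duration:
        on_end = t + rng.expovariate(1.0 / mean_on)   # length of this talk spurt
        while t < min(on_end, duration):
            packets.append((t, pkt_bytes))            # one packet every 20 ms
            t += interval
        t = on_end + rng.expovariate(1.0 / mean_off)  # silence period
    return packets
```

During a talk spurt the source emits 66 bytes every 20 ms, i.e., a 26.4 Kbps peak rate, consistent with the roughly 26 Kbps requested by the UGS connections above.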

Fig. 3. Average latency of UGS, ertPS and rtPS services

Figure 3 shows the average latency for the UGS, the ertPS, and the rtPS connections. The latency values of the UGS and ertPS connections were not affected by the increase of the connection arrival rates, which shows that the uplink scheduler is able to provide data grants at fixed intervals, as required by the standard. The latency of the rtPS connections did not surpass the required value of 100 ms. The highest latency values are observed when the connection arrival rate is equal to 5 connections of each type per minute, the load under which the number of admitted rtPS connections is highest. When the connection arrival rate is higher than 5 connections of each type of service per minute, the rtPS connections are more likely to be rejected than the connections of the other types of service, as previously explained. The utilization of the intermediate queue is high when the number of accepted rtPS connections is high, since the rtPS connections send bandwidth requests every 20 ms while the nrtPS connections send bandwidth requests every 1 s, which yields high latency values.

Figure 4 shows the average throughput of the rtPS connections with minimum reserved traffic rates of 200 Kbps, 500 Kbps, and 900 Kbps, and Figure 5 shows the average throughput of the nrtPS connections. In all the simulated scenarios, the average throughput values of the rtPS and of the nrtPS connections were within the range defined by the minimum and maximum rate requirements, as specified by the standard.

Fig. 4. Average throughput of rtPS connections with minimum rate requirements of 200 Kbps, 500 Kbps, and 900 Kbps, respectively (video traces: Ice Age, Lady and the Tramp, Dreamcatcher; Friends; Lecture, Baseball, Aladdin)

Fig. 5. Average throughput of nrtPS connections

The throughput of the nrtPS connections, measured at the MAC layer, was slightly higher than the offered load, measured at the transport layer, due to the additional load incurred by MAC headers. Although the nrtPS flows were configured to generate an average rate equal to the maximum sustained traffic rate, the joint effect of the maximum rate control performed by the scheduler and of the TCP congestion control resulted in an offered load lower than 300 Kbps. Consequently, all the nrtPS traffic generated by the upper layers was served at the MAC layer in all the simulated scenarios. The interaction between the scheduling mechanism and the TCP congestion control mechanism shall be investigated in the future to counteract this resource underutilization side effect. Although not shown here due to space limitations, in all the simulated scenarios the BE connections were able to transmit in the slots not used by the higher priority service flows.

VII. CONCLUSIONS

This paper introduced a new standard-compliant scheduling and admission control solution for the uplink traffic in IEEE 802.16 networks. The admission of new connections considers the minimum reserved traffic rate requirement as well as the overhead incurred by the type of service being solicited. The proposed scheduler provides maximum latency and minimum rate guarantees without violating the maximum sustained traffic rate and the maximum traffic burst values. Simulation results show that the proposed solution is able to provide QoS for the different types of service defined by the IEEE 802.16 standard while remaining standard-compliant.

REFERENCES

[1] IEEE Standard 802.16-2004, Part 16: Air Interface for Fixed Broadband Wireless Access Systems, 2004.
[2] IEEE Standard 802.16e-2005, Part 16: Air Interface for Fixed and Mobile Broadband Wireless Access Systems, 2005.
[3] 3GPP2 C.S0014-0, Enhanced Variable Rate Codec (EVRC).
[4] A. Sayenko, O. Alanen, and T. Hamalainen, Scheduling solution for the IEEE 802.16 base station. Computer Networks, vol. 52, no. 1, pp. 96-115, 2008.
[5] D. Tarchi, R. Fantacci, and M. Bardazzi, Quality of Service management in IEEE 802.16 wireless metropolitan area networks. In Proceedings of the IEEE International Conference on Communications, pp. 1789-1794.
[6] D. Niyato and E. Hossain, QoS-aware bandwidth allocation and admission control in IEEE 802.16 broadband wireless access networks: A non-cooperative game theoretic approach. Computer Networks, vol. 55, no. 11, pp. 3305-3321.
[7] H. Wang, B. He, and D. P. Agrawal, Above packet layer level admission control and bandwidth allocation for IEEE 802.16 wireless MAN. Simulation Modeling Practice and Theory, vol. 15, no. 14, pp. 266-382.
[8] J. Chen, W. Jiao, and H. Wang, A Service Flow Management Strategy for IEEE 802.16 Broadband Wireless Access Systems in TDD Mode. In Proceedings of the IEEE ICC, pp. 3422-3426, 2005.
[9] J. F. Borin and N. L. S. Fonseca, Scheduler for IEEE 802.16 networks. IEEE Communications Letters, vol. 12, no. 4, pp. 274-276, 2008.
[10] J. F. Borin and N. L. S. Fonseca, A Standard-Compliant Scheduler for WiMAX Networks. In Proceedings of the 4th IEEE Broadband Wireless Access Workshop, pp. 1-5, 2008.
[11] J. F. Borin and N. L. S. Fonseca, Simulator for WiMAX networks. Simulation Modeling Practice and Theory, vol. 16, no. 7, pp. 817-833, 2008.
[12] K. Wongthavarawat and A. Ganz, IEEE 802.16 based last mile broadband wireless military networks with Quality of Service support. In Proceedings of the IEEE MILCOM'03, pp. 779-784, 2003.
[13] P. Brady, A model for generating On-Off speech patterns in Two-Way conversations. Bell System Tech. Journal, vol. 48, pp. 2445-2472, 1969.
[14] P. Barford et al., Changes in web client access patterns: Characteristics and caching implications. Tech. Report 1998-023, Boston Univ., 1998.
[15] P. Seeling, M. Reisslein, and B. Kulapala, Network Performance Evaluation Using Frame Size and Quality Traces of Single-Layer and Two-Layer Video: A Tutorial. IEEE Communications Surveys and Tutorials, vol. 6, no. 2, pp. 58-78, 2004.
[16] S. Chandra and A. Sahoo, An efficient call admission control for IEEE 802.16 networks. In Proceedings of the 15th IEEE Workshop on Local & Metropolitan Area Networks, pp. 188-193.
[17] X. Bai, A. Shami, and Y. Ye, Robust QoS control for single carrier PMP mode IEEE 802.16 systems. IEEE Transactions on Mobile Computing, vol. 7, no. 4, pp. 416-429.
