Wireless Call Admission Control Using Threshold Access Sharing

Jay R. Moorman, University of Illinois
John W. Lockwood, Washington University

Abstract— This paper presents a call admission control (CAC) algorithm that aids in providing Quality of Service (QoS) support at a wireless network base station. The Threshold Access Sharing (TAS) scheme provides improved performance over the generalized methods of current CAC algorithms. The TAS algorithm relies on a prioritization of admitted traffic through bandwidth thresholds. This is particularly vital for high priority real-time calls and preapproved QoS handover calls. In addition, the CAC algorithm must rely on advanced channel estimation to most effectively use the limited channel resources.

I. INTRODUCTION

The CAC policy is used to admit flows into the system in order to meet QoS guarantees. It must rely on specific knowledge of the individual traffic requirements as well as broader knowledge of the current channel conditions. Guarantees on QoS can then be met by the base station scheduler even though each flow has different traffic requirements. Most work in this area has either not addressed wireless-specific issues or has focused only on voice traffic. Our work attempts to make the most efficient use of the wireless channel for a wide range of traffic types and requirements.

Work by Ho and Lea [1] attempts to maximize the normalized channel throughput under defined constraints on new-call and handoff blocking probabilities. It focuses on a few calls with a single type of handoff in a mobile network.

An adaptive reallocation algorithm for call admission of multimedia traffic under variable channel conditions is presented by Kwon et al. in [2]. The scheme adaptively redistributes bandwidth but focuses on only a single type of traffic, and can cause all currently connected traffic to receive reduced service. Kakani et al. present a wireless network CAC framework in [3] that supports QoS requirements for two different levels of traffic. The work focuses mostly on traffic scheduling and can produce overall performance degradation by allowing reduced throughput.

A hybrid cutoff priority scheme for call admission is proposed in [4] by Li et al. Here multiple classes of traffic are considered, where each level has its own cutoff threshold at which point only handoff calls are accepted and new calls are blocked. However, calls are allowed to be queued while waiting for access. Despite the differences in focus of much of this work, many of the applied methods can be used in our situation. However, we will show that for diverse traffic in a wireless system it is best to apply traffic-specific admission requirements in order to best utilize the channel capacity.

II. THRESHOLD ACCESS SHARING

This call admission policy is the proposed policy for prioritized real-time access. Since the admission of flows is based not only on the current state of the channel, but also on the type of traffic requesting admission, it is better able to adapt to the limited wireless link.

A. Algorithm description

In the TAS policy three levels of call admission are enabled. A diagram of this CAC policy is shown in Figure 1. At the lowest level, when the wireless channel is highly underutilized, all incoming calls that can be satisfied are admitted. After a certain threshold (T_lo), when channel allocation becomes more highly utilized, all calls are still admitted, but lower priority calls are admitted conditionally. These calls are aware that the channel is heavily used, and know that they may lose their connection or have their service reduced if more traffic arrives. Exceeding the upper threshold (T_hi) puts the CAC policy into a state of prioritized admission. The lower priority traffic will then receive reduced service, targeted at the conditionally admitted flows. In effect, the higher priority traffic will be given limited bandwidth at the expense of additional delay in the lower traffic classes. The channel will be in a state of complete utilization, or even over-utilization. Only new high priority calls will be admitted, while lower priority calls already admitted will receive reduced service. Note that the amount of traffic over-admitted into the system must be bounded in order to provide a stable system over time. This bound is placed on the amount of high priority traffic that can be admitted during the prioritized state.

Fig. 1. Threshold Access Sharing. (Utilization vs. time: Equal Access below T_low, Conditional Access between T_low and T_high, and Priority Access between T_high and the utilization bound above capacity.)

The goals of the CAC policy were first introduced in [5]. These are shown below along with the corresponding rules for

0-7803-7206-9/01/$17.00 © 2001 IEEE


threshold access sharing.

1. Goal: Maximize channel utilization while providing fair access to traffic flows.
   Rule: FCFS channel access while (Util <= T_lo). Conditional access while (T_lo < Util < T_hi). Prioritized access while (Util >= T_hi).
2. Goal: Minimize the call dropping probability.
   Rule: P_drop = 0. (No calls are dropped, even when T_hi has been exceeded.)
3. Goal: Minimize any reduction of service for calls.
   Rule: Low priority (nrt) calls may receive reduced service if they were conditionally admitted.
4. Goal: Minimize the blocking probability of new calls.
   Rule: P_block = 0 for high priority traffic. P_block increases toward 1 for low priority traffic under high traffic conditions.

The threshold access sharing state transition diagram is shown in Figure 2. The initial state (state 0), shown on the left, is the equal access state. Here all flows are admitted if there are sufficient resources. As the utilization increases above the lower threshold (T_lo), the system transitions into state 1, the conditional access state. The higher priority real-time and handover calls are then admitted if resources are available. Lower priority non-real-time calls are admitted conditionally. These calls must be tagged through a tracking queue that contains the conditional flow information. Once utilization exceeds the high threshold (T_hi), the system transitions to state 2, the priority access state. High priority calls are still admitted if resources are available, so long as a predetermined bound is not exceeded. Low priority incoming calls will be blocked. Finally, low priority calls in the conditional queue
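The threshold rules above can be sketched as a simple state mapping. This is an illustrative reading of the policy, not the paper's implementation; the names (AccessState, t_lo, t_hi) are hypothetical.

```python
# Sketch of the three TAS admission states driven by the utilization
# thresholds in the rules above. Names are illustrative only.
from enum import Enum

class AccessState(Enum):
    EQUAL = 0        # Util <= T_lo: FCFS admission for all flows
    CONDITIONAL = 1  # T_lo < Util < T_hi: low priority admitted conditionally
    PRIORITY = 2     # Util >= T_hi: only high priority calls admitted

def tas_state(util: float, t_lo: float, t_hi: float) -> AccessState:
    """Map the current channel utilization to a TAS admission state."""
    if util <= t_lo:
        return AccessState.EQUAL
    if util < t_hi:
        return AccessState.CONDITIONAL
    return AccessState.PRIORITY
```

With T_lo = 0.8 and T_hi = 1.0 (normalized to capacity), a utilization of 0.9 would place the base station in the conditional access state.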

will begin to receive reduced service through added packet delays. These flows will be forced to give up scheduled slots to the higher priority packets. However, this reduction in service can be tracked by the scheduler. When the higher priority calls have completed, or additional bandwidth is available, the traffic flows that are behind in their service will receive additional compensation. These over-admitted calls are tracked in a priority flow queue, as shown on the right of Figure 2. This enables the conditional flow slots to be swapped with the priority flow slots. If at any time the network utilization falls below the respective threshold level, the system will transition back to conditional access (state 1) or equal access (state 0), respectively.

Fig. 2. Call Admission State Diagram.

To provide a better idea of the base station call admission decision process, a flow chart is shown in Figure 3. The decision flow is initiated when a new call requests admission into the system. The base station must first determine the current state of operation: state 0 (Equal Access), state 1 (Conditional Access), or state 2 (Prioritized Access). For requests during state 0, the base station determines whether current resources allow the call to be accepted. These checks would typically focus on bandwidth levels and delay bounds, but might also include items such as jitter and cell loss. If the system resources are not available, the call must be blocked. Otherwise the call can be admitted, system resources updated, and any necessary state transition performed. For requests made during state 1, the decision process is separated into high priority and low priority flows. High priority calls follow the same resource checking path as in the equal access state. If it is a low priority call and resources are available, the call is admitted and entered into the conditional flow queue. System resources are then updated and any necessary state transitions performed. Finally, if the admission request is made during state 2, low priority calls are immediately denied access. High priority calls force a resource check to determine whether the call can be admitted under a predefined upper bound. If the resources are available, the call is admitted and entered into a priority flow queue. System resources are again updated as needed. The base station call admission decision, triggered by a new call admission request, returns either an admission accept or an admission deny for the particular flow.

Fig. 3. Call Admission Flow Chart.
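The Figure 3 decision flow can be sketched as a single function. This is a simplification under assumed names: resources are reduced to a single free-bandwidth value, whereas the paper's base station would also check delay, jitter, and loss bounds.

```python
# Hedged sketch of the Figure 3 admission decision. The single
# bandwidth pool and the plain-list queues are illustrative assumptions.

def admit(call_bw, high_priority, state, free_bw, over_bound,
          conditional_queue, priority_queue):
    """Return True (admission accept) or False (admission deny)."""
    if state == 0:  # Equal Access: admit any call that fits
        return call_bw <= free_bw
    if state == 1:  # Conditional Access
        if call_bw > free_bw:
            return False
        if not high_priority:
            # tag the flow so its service can later be reduced/compensated
            conditional_queue.append(call_bw)
        return True
    # state == 2, Prioritized Access: low priority blocked outright;
    # high priority may over-admit up to the predetermined bound
    if not high_priority:
        return False
    if call_bw <= free_bw + over_bound:
        priority_queue.append(call_bw)  # track the over-admitted flow
        return True
    return False
```

Note how the over-admission bound only comes into play in state 2, matching the requirement that over-admitted traffic be bounded for stability.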

B. Parameter determination

The threshold access sharing policy relies on a number of parameters that must be set in the implementation. These include the thresholds themselves as well as the prioritized bound in the system. These values were originally derived and presented in [6]. The first parameter chosen was the high threshold T_hi. The optimal value is equal to the channel capacity (T_hi = Capacity). Priority admitted flows receive additional service


at the expense of lower priority flows, without any bandwidth being wasted in an underutilized system. The second parameter chosen was the lower threshold value T_lo. This parameter is more arbitrary than the upper threshold, and may well depend on the expected traffic types and loads for a particular network. As an initial target, a threshold of twice the excess bound traffic below capacity is suggested. This is shown in Equation 1.

    T_lo = Capacity - 2(Bound - Capacity) = 3*Capacity - 2*Bound    (1)

Fig. 4. Utilization Bound from Low Priority Delay. (nrtVBR delay vs. cells sent for runs 1-7, utilization 92%-116%.)

The final parameter value chosen was the upper bound on the amount of prioritized traffic over-admitted into the system. Similar to the T_lo parameter, this value will be highly dependent on the traffic profile of the particular network in question. However, some initial results were simulated in order to gain a better understanding of the approximate location of the bound as the utilization increased from 92% to 116% for a system with four rtVBR flows and one nrtVBR flow. Delay results from this simulation are shown in Figure 4 for the nrtVBR flow. From the curves it can be seen that as the amount of over-utilization increases, the delay bounds for the nrtVBR flow increase significantly. This is expected according to the design of the threshold access sharing policy. For the first four runs, the delay can be seen to drop to zero as the flow clears its buffer. These scenarios handle the utilization allocation without incident. As the utilization increases further in runs 5 and 6, the delay does not return to zero. However, the delay does stabilize around a plateau level. The high priority calls continued to send during the entire simulation, so the low priority traffic did not have a chance to be compensated after over-admitted calls left the system. Since these runs did stabilize at a certain level, the system can maintain this scheduling for a short period of time. In the last simulation, run 7, the delay is seen to increase in an unstable and unbounded manner. This leads to an approximate rule of thumb for choosing the upper bound parameter on the system utilization. Bursts of over-utilization around 5% are tolerable, and even bursts as high as 10% remain stable and recoverable. However, utilization beyond this amount will tend to overrun the system and cause severe backlog and dropping of packets. This amount of over-utilization might also be adjusted according to the dynamic nature and expected length of calls in the system.
The complete results for parameter determination are shown in Table I. Notice that the values for both T_lo and Bound are only approximate. These are best used as initial estimates and adjusted as needed according to the specific traffic profile of the network.

III. CAC SIMULATIONS

Simulations were completed with varying input rates for each traffic class in order to evaluate the performance of the call admission policy. This evaluation was accomplished using a steady-state simulator in UltraSAN and a custom-written

TABLE I
THRESHOLD ACCESS SHARING PARAMETERS

Parameter | Value
T_hi      | Capacity
T_lo      | 3*Capacity - 2*Bound
Bound     | 5%-10% over-utilization

packet simulator. The incoming traffic intensities and service times were chosen to approximate a typical wireless LAN scenario. The first generalized policy allows equal access to the channel for all flows in the system. Once the channel is fully utilized, all flows are blocked. The second policy allows equal access admission until a defined utilization level is reached below full capacity. This reserves some bandwidth for high priority or emergency calls. The third generalized policy has a similar equal access admission for all flows until full capacity. It then allows high priority calls into the system by dropping low priority calls already admitted. Finally, the fourth policy in the simulation is TAS.

A. CAC simulation comparisons

The initial simulation comparison looked at all CAC policies in the context of the original goals, including channel utilization, probability of call blocking, probability of call dropping, and probability of service reduction. A few of these results, presented in [5], are shown here. One of the most important goals in the call admission policy should be the channel utilization that can be achieved. In Figure 5, the normalized utilizations are given as the incoming non-real-time rate λ_nrt was varied. It can be seen that both policy 1 and policy 3 have nearly 100% channel utilization. The utilization of policy 2 suffers due to the reserved bandwidth. The best channel utilization occurs with policy 4, since the channel can actually be over-utilized. This over-utilization is only slightly over 100% until the incoming CBR rate exceeds the CBR service rate at 0.05 and the system becomes unstable. The number of CBR calls will then accumulate, as they are not being serviced as quickly as they are arriving. Recall that when the channel is being over-utilized


the lower priority traffic will suffer through additional delay and reduced service.

Fig. 5. Utilization vs. λ_nrt. (Normalized utilization for CAC policies 1-4 as the incoming nrt rate varies.)

Fig. 6. Low Priority Rate with Over-Utilization. (Packet number vs. service time for runs 1-7, utilization 92%-116%.)

Minimizing the probability of dropping currently connected calls is also a key goal in the CAC evaluation. It is much better to deny the admission of new calls than to drop a connected call with which a traffic contract has already been agreed upon. This is the key component leading to the TAS algorithm. The reduction of service to lower priority traffic is a much more desirable outcome than the outright dropping of those calls. When considering the CAC goals, TAS is the best candidate for a policy with QoS support. The advantages in utilization, blocking probability, and dropping probability far outweigh the tradeoff of reduced service to lower traffic classes. This is a key reason why a call admission policy that treats individual traffic classes uniquely is an important step toward ubiquitous wireless computing.

B. TAS delay simulation

Simulation results were also obtained to analyze the effect of the TAS algorithm on packet delay. The system was simulated with an over-allocation of traffic in order to determine the effects on delay for both high priority real-time flows and low priority non-real-time flows. The simulation was run under seven different scenarios with increasing channel utilization (92%-116%). The graph in Figure 6 shows the rate of service for the low priority flow during the seven simulation runs. The flow's packet numbers are on the y-axis, and the time at which each packet was serviced is on the x-axis. There is not a significant reduction in the rate as the channel becomes over-utilized; however, there is a clear and pronounced shift in the curve as the channel becomes more highly used. This shift displays the reduced rate and additional delay that the low priority flow receives as the rates of the higher priority flows increase.

Delay values from a few of the simulation runs above are shown in Table II. Only two rtVBR flows and the nrtVBR flow have been included. Note that the maximum delay values seen by the real-time traffic are not affected by the additional traffic utilization. Even the average delays see only a slight increase due to the increased rate in the rtVBR flows. The low priority non-real-time flow takes the brunt of the additional utilization. Delay values increase significantly, and the maximum delay grows exponentially. The maximum delay values only reflect results from traffic removed from the queues and transmitted on the wireless link. Packets that remained in the queue were not counted against the flow. This tends to bias the results toward a somewhat smaller delay than would actually be observed over a longer time.

TABLE II
THRESHOLD ACCESS SHARING DELAY RESULTS

Flow / Delay              Run 1   Run 3   Run 5    Run 7
rtVBR1 Maximum            2       2       3        3
rtVBR1 Average            0.04    0.09    0.13     0.15
rtVBR2 Maximum            4       4       4        4
rtVBR2 Average            0.10    0.14    0.19     0.21
nrtVBR Maximum            72      357     1596*    3593*
nrtVBR Average            16.8    97.9    775.4    2039.6
Low Priority Buffer Size  64      286     1147     2775
Channel Utilization       92%     100%    108%     116%

*Values are from sent packets not stuck in the packet queue

The next set of graphs, in Figure 7, shows the cumulative delay distribution for four of the simulation runs as the utilization increases beyond capacity. Utilization values increase from left to right in the graphs. The leftmost graph is for the 92% utilization scenario. The next graph is for 100%, full capacity utilization. Note that the delay of the rtVBR flows is barely affected even though there was a rate increase. The nrtVBR flow experiences additional delay, as seen by the downward shift in the distribution curve. Moving right to the next graph, with 108% utilization (8% over-utilization), the nrtVBR delay spreads out even more. The delay of the rtVBR flows again changes little; however, a significant portion of the nrtVBR traffic experiences large delays, including nearly 50% that experiences delays greater than 1000 packet slots. The final graph, shown on the right of Figure 7, is the distribution for the case with 116% channel utilization. Even though the rtVBR flows experience almost no change in the distribution, the nrtVBR traffic has reached a point where very little can be sent in a reasonable manner.

Fig. 7. Cumulative Delay Distribution as Over-Utilization Increases. (Probability of delay vs. delay time in packets for the 92%, 100%, 108%, and 116% utilization runs; curves for rtVBR1-4 and nrtVBR.)

In the simulation results shown, traffic was not dropped due to buffering constraints. This forces additional delay, which might be alleviated by dropping packets that have been in the queues for a long time. However, since the non-real-time traffic tends to be loss sensitive, this is not a reasonable approach for providing reduced delay. Instead, the upper utilization bound must be set to avoid the scenario shown in the rightmost figure.

IV. DYNAMIC CHANNEL ESTIMATION

In addition to managing the admission of flows, the CAC algorithm must also track the channel capacity. This global channel knowledge is fundamental to achieving the most effective use of the limited bandwidth. For wireless communication, the bandwidth may vary significantly, even over small time intervals. If the base station is unaware of this variation, it may over-admit traffic into the scheduler, and thus agree to contracts that it is incapable of fulfilling. In order to admit new calls with guarantees, the base station must have an approximation of the amount of traffic it is capable of sending. In our scheme, this is done through traffic estimates over time. The estimate itself can be based on the amount of traffic sent during a time interval, the amount of traffic received during an interval, or more complex means such as the statistical multiplexing algorithm in [7], based on a target cell loss ratio and the queue-length decay of the buffer size. Our current method employs a simple measurement scheme based on both sent and received traffic. This allows measurements to be based not only on valid packets, but also to take into account the lost packets that occur on the wireless channel. Since the base station acts as a central control, this also enables accurate estimation and scheduling of both the uplink and downlink directions.
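A measurement scheme based on both sent and received traffic can be sketched as follows. The counter names and the single-interval structure are assumptions for illustration; the paper does not specify this exact form.

```python
# Minimal sketch of a measurement-based channel estimate that uses both
# sent and acknowledged (received) traffic over one interval, so that
# packets lost on the wireless link count against the usable capacity.

def estimate_capacity(sent_pkts, acked_pkts, pkt_size, interval_s):
    """Estimate delivered channel rate (bytes/s) and loss rate."""
    lost = sent_pkts - acked_pkts
    # goodput actually delivered across the wireless channel
    delivered = acked_pkts * pkt_size
    loss_rate = lost / sent_pkts if sent_pkts else 0.0
    return delivered / interval_s, loss_rate
```

If 100 packets of 1000 bytes are sent in one second and 90 are received, the estimate is 90 kB/s usable capacity with a 10% loss rate, rather than the 100 kB/s a send-only measurement would report.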
The key to maintaining an accurate bandwidth estimate without excessive overhead is to make the estimates over a changing time window. A dynamic window is used to calculate a moving average of the channel bandwidth. The size of the window is based on a dimensionless parameter (α) which allows the estimate size to grow or shrink over time. This parameter depends on the interarrival times of connected calls. As more fluctuation in the schedule occurs, the time window will shrink, forcing more instantaneous traffic estimates that follow the dynamics of the admission policy. As the flows in the system stabilize to less active changes, the estimate will grow to encompass a larger time

window and a more averaged traffic estimate, with less overhead involved. The window size is limited to a maximum (W_x), so that the average estimate does not become too decoupled from the actual traffic being sent. The calculation for this window estimate is given in Equation 2.

    W_i = min{ W_x, ceil( α(W_{i-1} + 1) ) }    (2)

The value of α in the moving-window bandwidth estimate of Equation 2 is calculated from Equation 3. This value of α is based on the volatility of the cell interarrival times over the time period (ΔT) since the window was last computed. An example of this time period for the window W_{i-1} is shown in Figure 8, from which the parameters used in the calculation of α are more readily seen.

Fig. 8. Example of Interarrival Times. (The window W_{i-1} spans the period from T_{i-1} to T_i and contains the individual interarrival times t_1, t_2, ..., t_k.)

The time period ΔT runs from the beginning of the window at T_{i-1} to the end of the window at T_i. During this time period, the average interarrival time (t̄_i) is computed by summing the individual interarrival times t_1, t_2, ..., t_k and dividing by the number of measurements k that were taken. This can then be used in conjunction with the previous window's average interarrival time t̄_{i-1} to compute the absolute value of the change in the average interarrival time, |Δt̄_i|. In addition, the time period T_x over the maximum window size W_x is used to keep the parameter dimensionless and normalized. The value of α itself can then be found from these parameters of the previous window, along with the current number of connected calls (N). The result is a new value of α used to compute the size of the window in the next iteration of the estimation.


    α = T_x / ( N · |Δt̄_i| · ΔT + T_x )    (3)

where

    ΔT = (T_i - T_{i-1})
    Δt̄_i = t̄_{i-1} - t̄_i
    t̄_i = (1/k) Σ_{j=1}^{k} t_j
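Equations 2 and 3 together give the window update. The sketch below follows the notation as reconstructed here (α for the volatility factor, N for connected calls), which is itself an interpretation of the garbled source; the function names are illustrative.

```python
# Sketch of the dynamic-window update of Equations 2 and 3.
# alpha() computes the volatility factor; next_window() applies it.
import math

def alpha(t_bar_prev, t_bar_cur, n_calls, T_prev, T_cur, T_max):
    """Equation 3: volatility factor in (0, 1]; 1 means no change."""
    dT = T_cur - T_prev                     # window time period
    d_tbar = abs(t_bar_prev - t_bar_cur)    # change in avg interarrival
    return T_max / (n_calls * d_tbar * dT + T_max)

def next_window(W_prev, a, W_max):
    """Equation 2: grow linearly when stable, shrink fractionally otherwise."""
    return min(W_max, math.ceil(a * (W_prev + 1)))
```

With no change in the interarrival average, α = 1 and the window grows by one; a change of 0.5 s with 10 connected calls over a 1 s window (T_x = 1 s) gives α = 1/6, collapsing a window of 10 down to 2.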

As more activity occurs, α asymptotically decreases toward 0, while no activity results in α = 1. In Equation 2 the maximum window size is limited. If the window is below W_x, it will either be fractionally reduced by α in the case of traffic changes, or will increase linearly in the case of no changes. Note that α decreases more quickly with larger window sizes. The value of α also decreases more quickly when more calls are currently in the system. This is based on some key assumptions. First, a small change in a system with a large number of calls is more drastic than in a system with a small number of calls. There must also be many changes in the interarrival times to change an average over a larger window size. When the change in the interarrival average occurs under these conditions, it is important to cause a quick reduction in the window size to reevaluate the bandwidth estimate. Second, it is assumed that larger changes in the interarrival average under small windows and/or a small number of connected calls are more likely to occur simply due to variations in the arriving traffic. These expected changes need not force a small window with the increased overhead of additional bandwidth estimates.

The combined formula for the window size calculation is shown in Equation 4, obtained by substituting Equation 3 into Equation 2. The calculation relies on certain state variables being maintained by the base station during scheduling and call admission. Information on the current window must be maintained: the window size itself (W_{i-1}), the average interarrival time from the previous window estimation (t̄_{i-1}), and the actual start and finish times of the window measurement (T_{i-1} and T_i, respectively). The scheduler must also maintain a running calculation of the current average interarrival time (t̄_i), and the largest window time period T_x must be known. Finally, the base station must keep the total number of connected calls in the system (N), and the call admission algorithm must update this value as calls are admitted or terminated.

    W_i = min{ W_x, ceil( T_x(W_{i-1} + 1) / ( N · |t̄_{i-1} - (1/k) Σ_{j=1}^{k} t_j| · (T_i - T_{i-1}) + T_x ) ) }    (4)

The new window calculation has been performed and plotted in Figure 9. The change in the average interarrival time was varied from 0 to 5 msec. The various curves represent newly calculated window sizes for scenarios with varying window size and number of connected calls. All curves were plotted with an assumed value of 1 sec for T_x. Notice that the new window size does not vary greatly for situations with a small number of connected calls. The fractional value of α can also be seen to reduce the size of the window much more quickly for larger window sizes. The values presented are not the actual window size variation as time increases, but the new window size calculation as the change in the interarrival average increases. This is important to note since each new window calculation will actually move to an entirely new window size curve.

Fig. 9. Variation in Window Size vs. Δt̄_i. (New window size in packets vs. change in average interarrival time in ms, for Calls = 1, 10, 100 and W = 10, 50, 100.)

The dynamic window estimation increases the window linearly while the system remains relatively stable. The window is exponentially reduced with increases in the number of calls, the change in average interarrival time, or the window size. This is based on the assumption that traffic changes in a network with a small number of calls will cause greater overall turbulence to the system than changes in a heavily loaded network with a large number of calls.

V. CONCLUSION

In this paper the Threshold Access Sharing (TAS) call admission algorithm was developed to provide more efficient use of the wireless channel during the call admission phase. The admission algorithm operates by a method of thresholds according to the current level of channel utilization. This approach allows the admission of calls to be made based on the requirements of the call itself. Through simulations, it is shown that this scheme provides enhanced network utilization and the ability to admit more calls into the system. A new method of channel estimation is also provided to aid the performance of the call admission algorithm.

REFERENCES

[1] Chi-Jui Ho and Chin-Tau Lea, "Improving call admission policies in wireless networks," Wireless Networks, vol. 5, no. 4, pp. 257-265, Aug. 1999.
[2] Taekyoung Kwon, Yanghee Choi, Chatschik Bisdikian, and Mahmoud Naghshineh, "Call admission control for adaptive multimedia in wireless/mobile networks," in WOWMOM '98, Dallas, TX, Aug. 1998, pp. 111-116.
[3] Naveen K. Kakani, Sajal K. Das, Sanjoy K. Sen, and Mathew Kaippallimalil, "A framework for call admission control in next generation wireless networks," in WOWMOM '98, Dallas, TX, Aug. 1998, pp. 101-110.
[4] Bo Li, Chuang Lin, and Samuel T. Chanson, "Analysis of a hybrid cutoff priority scheme for multiple classes of traffic in multimedia wireless networks," Wireless Networks, vol. 4, no. 4, pp. 279-290, July 1998.
[5] J. R. Moorman, J. W. Lockwood, and Steve Kang, "Real-time prioritized call admission control in a base station scheduler," in WOWMOM '00, Boston, MA, Aug. 2000, pp. 28-37.
[6] Jay R. Moorman, Quality of Service Support for Heterogeneous Traffic Across Hybrid Wired and Wireless Networks, Ph.D. thesis, University of Illinois, Mar. 2001.
[7] Andras Racz, "Cell loss analysis for some alternative priority queues," in RTAS '00, Washington DC, June 2000, pp. 248-257.

