Video Streaming in 3.5G: On Throughput-Delay Performance of Proportional Fair Scheduling

Vladimir Vukadinović and Gunnar Karlsson
Laboratory for Communication Networks, KTH, Royal Institute of Technology, 100 44 Stockholm, Sweden
{vvuk, gk}@s3.kth.se
Abstract—In this paper, we study the performance of the proportional fair scheduler, which has been proposed for the emerging 3.5G radio access technologies. It maximizes the system's spectral efficiency, which is a strong incentive for network providers to use it. Our goal is to investigate whether the proportional fair scheduler also provides benefits to the users. The focus is on the throughput-delay trade-offs associated with video streaming over HSDPA. Special attention is devoted to defining appropriate performance measures and creating a realistic simulation environment. Our results indicate that opportunistic scheduling faces major difficulties in ensuring user-level performance in cases where streaming flows constitute a significant share of the traffic load.
Index Terms—video streaming, proportional fair scheduling, opportunistic scheduling, HSDPA, multi-user diversity.
I. INTRODUCTION

Mobile video streaming is a service that is becoming increasingly popular. With the deployment of advanced mobile systems, operators have the possibility to provide multimedia services with improved quality. Examples of such systems include the CDMA 2000 1xEV-DO [1] and the UMTS High-Speed Downlink Packet Access (HSDPA) [2]. These emerging technologies are capable of supporting downlink data services with significantly higher transmission rates. In order to improve the spectral efficiency, they employ shared channels where multiple users are multiplexed in a time-slotted fashion. Therefore, suitable scheduling policies are required to achieve the desired performance. These systems also employ a set of advanced features, such as adaptive modulation and coding (AMC) and a short time-slot duration, which enable the introduction of opportunistic schedulers capable of exploiting multi-user diversity. Multi-user diversity is inherent in wireless networks, where channel conditions vary over time due to fading and shadowing. Different users experience different channel conditions at a given time. Hence, at any time, there is a high probability that some of the users have a strong channel. By scheduling only those users to transmit, the shared channel is used in the most efficient manner and the total system throughput is maximized. Such scheduling mechanisms are called opportunistic because they take advantage of favorable channel conditions when assigning time slots to the users. If the service requirements of all the users are flexible, the opportunistic scheduling mechanisms can
result in higher spectrum utilization and increased system throughput. Nevertheless, to implement opportunistic scheduling in a real system, several issues need to be addressed. In reality, the channel statistics of different users are not identical and, therefore, a scheme designed only to maximize the overall throughput could be very biased. For example, allowing only users close to the base station to transmit may result in very high throughput, but would sacrifice the transmissions of other users. Therefore, opportunistic scheduling algorithms should provide fairness among users. Also, they should not be concerned only with maximizing the long-term average throughputs because, in practice, applications may have different utilities and service constraints. For instance, a major concern for real-time applications is latency: if channel variations are too slow, a user might have to wait for a long time before it gets a chance to transmit. A major challenge when designing a scheduling algorithm is to address these issues while exploiting the multi-user diversity at the same time. An algorithm that attempts to address this problem is the proportional fair (PF) scheduler [3]. It has been implemented in 1xEV-DO/IS-856 and proposed in the HSDPA standard. The PF scheduler has received considerable attention in the research community due to some interesting properties [4][5]. It has been shown that, among all scheduling strategies, PF maximizes the product of the throughputs provided to different users (the set of achieved throughputs is proportionally fair). Hence, PF is not an ad-hoc algorithm; it corresponds to the solution of a concrete optimization problem [4]. The problem that we address in this paper is the throughput-delay trade-off that the PF scheduler has to deal with when the offered traffic contains a mix of elastic and non-elastic (delay-sensitive) applications.
Namely, opportunistic schedulers exploit the channel fluctuations by delaying the transmission of a user’s data until its channel reaches a peak. If the channel fluctuations are slow compared to the latency requirements of an application, the scheduler is faced with two choices: either to schedule the user for transmission even though its channel is weak, thus reducing the throughput gain that could be achieved by scheduling another, stronger user; or to violate the latency requirements of the application. Therefore, there is a trade-off between the
throughput gain (spectral efficiency) and the latency requirements of delay-sensitive applications. We consider a specific case where the offered traffic is a mix of web browsing (elastic) and video streaming (non-elastic) sessions. Streaming applications typically do not require as strict delay bounds as, for example, conversational services. Playout buffering at the receiver makes streaming applications, to a certain degree, resistant to latency and jitter. However, it also makes the streaming performance dependent on the detailed rate statistics and traffic characteristics of the system, which makes an exact analysis rather involved. In this paper, we resort to detailed simulations to study the video streaming performance in an HSDPA system that employs PF scheduling. Our simulations are based on an enhanced UMTS/HSDPA simulator [6] and detailed channel and mobility models. Our goal is to investigate whether the PF scheduler, apart from increasing the system's spectral efficiency, provides significant benefits for the users. The remainder of this paper is organized as follows: a system description in Section 2 provides a brief overview of the PF scheduler and the HSDPA service. In Section 3, we describe the metrics that we use to characterize the video streaming and overall system performance. Finally, the simulation setup and results are presented in Sections 4 and 5, respectively.

II. SYSTEM DESCRIPTION

Our system under study assumes video streaming over an HSDPA radio access network that employs opportunistic PF scheduling. In this Section we give a brief overview of the PF scheduler and the HSDPA architecture.

A. Proportional fair scheduler

Suppose that there are $N$ users in the cell and $R_i(k)$ is the achievable rate for user $i$ at transmission interval $k$, which depends on the user's current channel conditions. Suppose that the scheduler keeps track of the running average of the achieved rate $T_i(k)$ for each user.
Then, according to the proportional fair scheduling policy, user $J_k \in \{1, \ldots, N\}$ is chosen for transmission in time-slot $k$ if:

$$J_k = \arg\max_{1 \le i \le N} \frac{R_i(k)}{T_i(k)}, \quad k = 1, 2, \ldots \qquad (1)$$

The running averages of the achieved rates $T_i(k)$ are updated at each time-slot as:

$$T_i(k+1) = \begin{cases} \left(1 - \dfrac{1}{t_c}\right) T_i(k) + \dfrac{1}{t_c} R_i(k), & i = J_k \\[2mm] \left(1 - \dfrac{1}{t_c}\right) T_i(k), & i \ne J_k \end{cases} \qquad (2)$$

where $t_c$ is the memory of the averaging filter. Thus, the algorithm schedules a user when its instantaneous channel
quality is relatively high compared to its own average channel conditions over the time-scale $t_c$. The constant $t_c$ is related to the maximum time that a user can be starved and it is often referred to as a latency window. It is also the time-scale over which the scheduler aims to provide proportionally fair bandwidth allocation. When $t_c$ is sufficiently large, the PF scheduler behaves as a max C/I scheduler: at every time instance, the user with the strongest channel is scheduled for transmission. On the contrary, when $t_c \to 0$, PF neglects the users' channel conditions and performs round-robin scheduling. Hence, by choosing the latency window $t_c$, we are able to tune the PF scheduler to exploit the multi-user diversity more or less aggressively and, therefore, trade cell throughput for shorter delays and vice versa.

B. High-speed downlink packet access

In this paper, we consider PF scheduling in HSDPA, but the results hold for any similar radio access technology that employs channel quality feedback and fast scheduling, such as 1xEV-DO/IS-856. HSDPA is a high-speed data service introduced in Release 5 of the UMTS standard, which aims to increase user peak data rates and improve spectral efficiency. This is accomplished by introducing a fast and sophisticated channel adaptation mechanism instead of power control, which is disabled. The link adaptation is based on channel quality indicator (CQI) feedback from the mobile terminals. The CQI specifies the modulation, channel code rate, and number of codes that a mobile terminal is capable of supporting with a block error rate below 10%. Hence, at each transmission interval, the CQI corresponds to the current achievable rate $R_i(k)$ for the user. This mechanism is known as adaptive modulation and coding (AMC) and its objective is to optimize data rates for the actual channel conditions. To facilitate fast scheduling on a very short time-scale of 2 ms, the HSDPA-related MAC functionality is moved to the Node-B.
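As a minimal illustration of the selection rule (1) and the averaging update (2), the per-slot PF decision can be sketched in Python. The rate model (i.i.d. uniform rates with per-user scale factors) and the latency window value are hypothetical; in a real system the rates would come from CQI feedback:

```python
import random

def pf_schedule(rates, avg_rates):
    """Select the user maximizing R_i(k) / T_i(k), cf. rule (1)."""
    return max(range(len(rates)), key=lambda i: rates[i] / avg_rates[i])

def update_averages(avg_rates, rates, scheduled, t_c):
    """Exponential moving average of achieved rates, cf. rule (2)."""
    return [
        (1 - 1 / t_c) * t + (rates[i] / t_c if i == scheduled else 0.0)
        for i, t in enumerate(avg_rates)
    ]

# Toy simulation: 3 users with i.i.d. random achievable rates scaled by
# different factors (an assumption for illustration, not CQI-driven rates).
random.seed(1)
t_c = 100.0                 # latency window in slots (assumed value)
avg = [1.0, 1.0, 1.0]       # initial average rates (avoid division by zero)
slots = [0, 0, 0]           # slots granted to each user
for k in range(10000):
    rates = [random.uniform(0.1, 1.0) * (i + 1) for i in range(3)]
    j = pf_schedule(rates, avg)
    slots[j] += 1
    avg = update_averages(avg, rates, j, t_c)

print(slots)
```

Because each user's metric is normalized by its own average rate, the slot shares come out roughly equal even though the mean achievable rates differ by a factor of three; this is the sense in which PF trades peak throughput for fairness.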
Thus, much of the packet processing previously performed in the radio network controller (RNC) has been moved closer to the air interface to allow a timely reaction to varying channel characteristics, which gives rise to opportunistic scheduling.

III. PERFORMANCE METRICS

The high-speed downlink technologies in 3.5G primarily aim at enhancing the system's spectral efficiency. Therefore, the focus is on maximizing the system's utilization rather than on providing the desired performance to network users. In this Section, we define a utility measure that describes the user-level performance in the case where the offered traffic mix is comprised of elastic and non-elastic flows. While elastic flows do not have strict delay requirements and the emphasis is laid on the achieved throughput, the performance of non-elastic flows depends greatly on network delay and delay jitter. In order to cope with these,
streaming clients employ playout buffers where video frames are stored before being displayed. The playout of a video starts after an initial buffering period, which is supposed to compensate for the expected delay jitter. However, if the transmission rate that the network provides to the streaming session is lower than the decoding rate at the client, the playout buffer may underflow. In that case, the playout of the video is interrupted and rebuffering occurs. The playout resumes after the number of video frames in the playout buffer reaches a predefined threshold. We define the rebuffering delay $D_{rb}$ as the delay that the playout suffers due to rebuffering. It is expressed as a percentage of the total playout time. Since gaps during the playout represent a major annoyance for users, we argue that the rebuffering delay is a good measure of streaming quality. It captures both the delay and the throughput requirements of a video session. In a system with opportunistic scheduling, there is a fundamental trade-off between throughput and delay. A utility measure that is often used to reflect such a trade-off is power: the ratio of throughput to packet delay [7]. The power is usually defined (slight variations can be found throughout the literature) as:
$$P = \frac{T^{\beta}}{d}, \qquad (3)$$
where the exponent $\beta$ is chosen based on the relative emphasis placed on throughput versus delay. The operating point where the power is maximized is a "knee" in the power curve. It is desirable that the operating point of a system be driven towards the knee. In order to describe the system performance in the presence of streaming traffic, we modified (3) to include the rebuffering delay of video sessions instead of the packet delay, which is a rather vague indicator of streaming quality. We define a new utility measure as:
$$U = \frac{T^{1-\alpha}}{D_{rb}^{\alpha}}, \qquad (4)$$
where $\alpha$ is the share of streaming traffic in the offered traffic mix. In order to develop an intuition for the dependence between the utility (4) and the throughput-delay trade-off introduced by the PF scheduler, observe that at the point of maximum utility (i.e., at the "knee"):
$$dU = 0 \;\Rightarrow\; (1-\alpha)\,\frac{dT}{T} = \alpha\,\frac{dD_{rb}}{D_{rb}}. \qquad (5)$$
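To illustrate condition (5), a small numerical sketch can locate the knee of the utility (4) on monotone throughput and rebuffering-delay curves parameterized by the latency window $t_c$. Both curve shapes below are assumptions chosen only for illustration (larger $t_c$ buys throughput at the cost of rebuffering delay); they are not measured HSDPA data:

```python
import math

alpha = 0.5                              # share of streaming traffic (assumed)

# Hypothetical monotone trade-off curves in the latency window t_c.
def throughput(t_c):
    return 1.0 + math.log(t_c)           # Mbit/s, illustrative shape

def rebuffering_delay(t_c):
    return 0.5 + 0.01 * t_c ** 1.5       # % of playout time, illustrative

def utility(t_c):
    """Utility (4): U = T^(1-alpha) / D_rb^alpha."""
    return throughput(t_c) ** (1 - alpha) / rebuffering_delay(t_c) ** alpha

# Scan t_c on a grid and pick the knee (maximum utility).
grid = [1 + 0.1 * i for i in range(2000)]
knee = max(grid, key=utility)

# At the knee, the weighted relative changes balance, cf. (5).
h = 1e-6
dT = (throughput(knee + h) - throughput(knee)) / h
dD = (rebuffering_delay(knee + h) - rebuffering_delay(knee)) / h
lhs = (1 - alpha) * dT / throughput(knee)
rhs = alpha * dD / rebuffering_delay(knee)
print(knee, lhs, rhs)  # lhs and rhs agree at the utility maximum
```

The two sides of (5) coincide (up to grid resolution) exactly at the $t_c$ that maximizes $U$, which is the sense in which the knee defines the optimal latency window.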
Hence, the weighted relative increase in throughput is equal to the weighted relative increase in rebuffering delay. The "knee" of the utility function corresponds to the optimal latency window $t_c$ of the PF scheduler. By choosing a constant $t_c$ that is larger than the optimal one, we obtain points on the utility curve for which:
(1 − α )
dDrb dT ,