A Tuning System for Distributed Multimedia Applications

Klara Nahrstedt and Lintian Qiao
University of Illinois
e-mail: klara,[email protected]

Abstract

Distributed multimedia applications such as Video-On-Demand (VOD) will need user-friendly graphical interfaces to specify and control compressed continuous media and their service behavior according to the quality perceived by the human user. This requires a new extension of the Quality of Service (QoS) concept to incorporate the perceptual quality. In this paper, we investigate services for supporting perceptual quality and propose a user-friendly and robust tuning system. Our tuning system extends adaptive and tuning services currently used at the application subsystem level and enables the specification, monitoring and user-controlled adaptation of perceptual QoS. Furthermore, all services in the tuning system have the goals of (1) modularity, so that they can be plugged and played in a distributed multimedia application, and (2) graceful degradation of perceptual QoS. The paper discusses in detail the QoS concepts, services and protocols which are the basis of our tuning system design and implementation, as well as architectural issues. Two algorithms are presented: the Probe-based Algorithm, used at the beginning of the call set up phase for QoS specification, and the Adaptation Algorithm, used to adjust to user-specified parameter changes while taking into account dynamics and non-determinism caused by the OS and underlying network protocols. Our experimental results show the effectiveness of the probe and adaptation mechanisms, good results for perceptual QoS provision, fast response to user requests, and the feasibility of our services to accommodate user-specified changes with respect to the non-deterministic behavior of the underlying system.

1 Introduction

Audio/video information in various multimedia applications, such as Video-on-Demand (Figure 1), is evaluated by the user based on the perceptual quality^1 of the presented media. The perceptual quality of visual information very much depends on the services provided by the underlying system. We investigate services for the support of perceptual quality in the application subsystem for current underlying OS and network systems which do not provide guaranteed services. In this environment, the user will need a user-friendly and robust tuning system to specify, control and adapt the temporal behavior of continuous media streams (audio, video) to the non-deterministic underlying

This work was supported by the Research Board, University of Illinois, Urbana-Champaign under Agreement RES BRD 1-2-68115.
^1 Perceptual quality specifies the quality of audio/visual media which the human user can hear or see (e.g., television quality of video, telephone quality of audio).


[Figure 1 diagram: a multimedia storage server (application server) and multimedia workstations/PCs (application clients) connected by a local area network; each host's system architecture shows the application layer, the system layer (operating system and communication protocols) and the network adapter, with end-to-end QoS guarantees spanning client and server.]
Figure 1: A Video-On-Demand system consists of multimedia workstations/PCs (VOD clients) and a multimedia storage server (VOD server) connected by a local area network. The internal system architecture of clients and servers includes three logical layers: application, system and network layers. The application layer comprises the client/server software of the VOD application, the system layer includes the operating and communication system support, and the network layer involves low level software/hardware support for networking.

OS and network services. There are several objectives of such a system: (1) provide the user with the best possible and most realistic perceptual QoS which the VOD architecture and platform can sustain, (2) give the user control to adapt and change the perceptual QoS, (3) provide graceful degradation of perceptual QoS, and (4) design the services in a plug-and-play fashion. To achieve these objectives we have designed and implemented a tuning system consisting of a Graphical User Interface (GUI) with perceptual QoS representation and underlying run-time system services in the application subsystem. The tuning system allows the specification, monitoring and adaptation of perceptual and application QoS. Such adaptive applications are needed because guaranteed services in Internet protocols and UNIX platforms are not, and will not be, commonly available any time soon. There is currently a good deal of research being conducted in different directions for providing guaranteed services and good perceptual QoS:

• Services such as admission, negotiation, renegotiation, scheduling, and buffer management provide network QoS guarantees within real-time communication systems (e.g., [Fer90, FV90, CCH93, KS95, CSZ92]) and real-time operating systems (e.g., [TM89, TNR90, LLSY91, OT93, CCR+95]). These services are integrated in QoS architectures (e.g., [ACH96, NS96, BFM+96]) to provide end-to-end QoS guarantees.
These services, however, are still not common in general purpose networks and workstations.

• Adaptive services are considered at the end points and in the networks to correct the variance of the network bandwidth. At the network level, QoS adaptation is mostly considered as a congestion control mechanism. The source starts to send traffic and the network monitors it. If congestion is observed at a switch, feedback information is sent via different protocols to

the source to slow the traffic [KMR93, KBC95]. Congestion at the network level, and hence QoS adaptation, can be avoided if sources negotiate a rate contract with the network and obey the negotiated contract using traffic shaping mechanisms [SK94]. At the application level, the existing QoS adaptation services/protocols monitor aggregate bandwidth and adapt to network and CPU aggregate bandwidth variations. Examples of such protocols are the Video Datagram Protocol in Vosaic [CTCL95] and the audio protocol vat [JM92]. Through a closed feedback loop the application clients request that the server slow down.

• Tuning services, which allow the user at the GUI to adjust the perceptual quality of video or audio, are provided for uncompressed continuous media. Examples can be found in many workstations that support multimedia devices and provide GUIs that allow the user to adjust and control individual multimedia device properties, such as the volume of the audio signal (e.g., HP, SUN workstations).

Our tuning system extends adaptive and tuning services currently used at the application subsystem level to a complex tuning system which offers the user the opportunity to control continuous streams and adjust them as the importance of each individual stream increases or decreases in the multimedia application. Our tuning system also includes refined QoS specification, where through our probe-based algorithm we determine the actual and realistic QoS parameters possible given the client/server configuration. We monitor not only the aggregate bandwidth at the client side of the VOD application, but also the adjustment requirements from the user (GUI), and apply our adaptive algorithm. This algorithm utilizes the compressed structure of the streams to adapt and propagate the change in an end-to-end fashion.
Our experiments show good results for perceptual QoS provision and fast response time to user requests for change, and hence the feasibility of our services to accommodate user changes with respect to the non-determinism that arises from general purpose OS and network services/protocols. The paper is organized as follows: Section 2 discusses the tuning system design, outlines the basic concepts regarding QoS and the architectural issues of the tuning system, and describes the individual services and protocols used. Section 3 describes the implementation software structure and the methodology of our experimental analysis. Section 4 shows the results of our prototype. The implications of design choices and achieved results are presented in the conclusion, Section 5.

2 Tuning System Design

The objective of the tuning system is to specify realistic quality of continuous media, such as MPEG compressed video, and to respond quickly to user-specified changes regarding the perceptual quality of the media during their presentation. In this section we first present the concept of quality of service and the architectural basis of our tuning system. Second, we describe the individual services and their design embedded in the tuning system.

2.1 QoS Concept

User/application requirements on Multimedia Distributed Systems (MDS) are mapped onto computing and communication services which satisfy the requirements. Various distributed multimedia applications have different requirements, so the services in the multimedia systems must be parameterized. Parameterization allows for flexibility and customization of services, so that each new

application does not result in the implementation of a new set of services. Service parameterization is defined in ISO (International Organization for Standardization) standards through the notion of Quality of Service (QoS). Traditional QoS (as in the ISO standards) referred to measures at the network layer of the communication system. QoS enhancement was achieved by introducing QoS into transport services. For MDS, however, the notion of QoS must be further extended, as many other services contribute to the end-to-end service quality. To discuss QoS, then, we need a layered model of the MDS with respect to QoS [NS95]. We assume throughout this section the model shown in Figure 2. The internal architecture of the MDS consists of three layers: the application, system (including communication services and operating system services), and device (network and multimedia devices) layers. Above the application at the client side resides a human user. This layered structure implies the introduction of perceptual QoS, application QoS, system QoS, network QoS and device QoS. Device QoS parameters typically specify timing and throughput demands for media data units (application QoS), hence we will not discuss them in detail.

[Figure 2 diagram: the User (Perceptual QoS) sits above the Application (Application QoS), which sits above the System (Operating and Communication System) (System QoS), which sits above the MM Devices (Device QoS) and the Network (Network QoS).]

Figure 2: QoS-layered model comprises the extension of the system architecture in Figure 1 towards the user and the perceptual QoS.

2.1.1 Perceptual QoS Parameters

Perceptual QoS parameters specify the service quality which the user sees or hears (e.g., TV quality of video, telephone quality of audio). Perceptual quality is hard to quantify and its evaluation is subjective and user-dependent. The perceptual QoS parameters can be characterized according to the temporal perceptual quality, where the user perceives (listens to/views) the playback rate of the audio/visual media, and the spatial perceptual quality, where the user perceives the spatial details of the individual audio sample or image frame. We represent perceptual QoS with the Graphical User Interface (GUI), playing visual clips and providing slide bars to allow the user to tune the temporal perceptual quality. Our tuning system GUI is shown in Figure 3. The video quality is shown in the video window. The tuning knob for the user's adjustment of the frame rate is the user rate slide bar. The GUI also has other slide bars showing the system rate and frame drop rates which result from the user-initiated rate change (user rate slide bar in Figure 3).

Figure 3: GUI for Representation of Perceptual QoS Parameters. The video window shows the movie; the slide bars provide either control for the user (user rate) or report the status of the actual QoS (system rate, frame drop rates).

2.1.2 Application QoS

The application QoS parameters describe requirements for application services, possibly specified in terms of media quality, which includes the media characteristics and their transmission characteristics, and media relations, which specify the relations among media. The media quality consists of a stream specification and a component specification. The stream specification gives the media characteristics of a homogeneous media stream, such as sample size, sample rate and priority/criticality (importance). If the individual samples in the stream differ in quality, component specification must occur, meaning that each subsample is specified by the user/application in the stream structure using the component specification. The parameterization also includes an application-oriented specification of the required transmission characteristics for end-to-end delivery (e.g., end-to-end delay bounds). Table 1 shows examples of possible QoS parameters common for MPEG compressed video characteristics [NS95]. The media relations specify relations among the media streams. Synchronization skew represents an upper bound on the time offset between two streams in a single direction. This information can be used for a finer granularity scheduling decision of multimedia streams than the sample rate information of periodic streams provides. If no skew is specified, the system uses the sample rate of each stream for schedulability decisions. Precedence relation specifies a time offset between two streams in different directions. For example, in a remote telesurgery application, there may exist a precedence relation between the sensory stream carrying position information from an operator (doctor) to a robot, and the sensory stream carrying feedback information from the robot to the operator (doctor).
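As an illustration, the media quality portion of an application QoS specification (a stream specification plus transmission characteristics, with an optional per-subsample component specification) might be represented as follows. This is a sketch: all class and field names are illustrative, and the example values are taken from Table 1.

```python
from dataclasses import dataclass, field

# Illustrative containers for the "media quality" part of the application QoS:
# a stream specification plus end-to-end transmission characteristics.
@dataclass
class StreamSpec:
    sample_size_bits: int      # e.g., bits per video frame or audio sample
    sample_rate_hz: float      # e.g., 30.0 for 30 fps video
    priority: int              # importance/criticality of the stream

@dataclass
class TransmissionSpec:
    end_to_end_delay_ms: float # upper bound on end-to-end delay
    packet_loss_rate: float    # tolerable loss rate

@dataclass
class MediaQuality:
    stream: StreamSpec
    transmission: TransmissionSpec
    # Optional component specification, used when individual samples in the
    # stream differ in quality (e.g., per MPEG frame type).
    components: dict = field(default_factory=dict)

video = MediaQuality(
    stream=StreamSpec(sample_size_bits=376_832, sample_rate_hz=30.0, priority=1),
    transmission=TransmissionSpec(end_to_end_delay_ms=250.0, packet_loss_rate=1e-11),
)
```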

Medium Type              QoS Parameter      Range             Characterization of Quality
Video (app. QoS)         Frame Rate         30 fps            NTSC format
MPEG compressed video    Frame Width        <= 720 pixels     video signal
                         Frame Height       <= 576 pixels     vertical size
                         Color Resolution   8 bits/pixel      grey-scale resolution
                                            16 bits/pixel     65536 possible colors
                         Aspect Ratio       4:3               NTSC format
                         Decoder Buffer     <= 376832 bits    MPEG parameters
                         Bandwidth          <= 1.86 Mbit/s    MPEG encoded video
                         Packet Loss        <= 10^-11
                         End-to-End Delay   <= 250 ms

Table 1: Examples of Video Media Quality Parameters.

Communication relation defines the communication topology, such as unicast (peer-to-peer), multicast (peer-to-group), or broadcast (peer-to-all). Conversion relation specifies transformations of a

medium (e.g., conversion from audio to text in a speech recognition application). Table 2 shows examples of media relations [NS95].

Medium Type     QoS Parameter   Range        Description
Audio/Video     Sync Skew       +/- 80 ms    lip synchronization
Audio/Pointer   Sync Skew       +750 ms      (+) audio ahead of pointer
                                -500 ms      (-) pointer ahead of audio

Table 2: Examples of Media Relations QoS Parameters.

2.1.3 System and Network QoS Parameters

We describe these parameters only briefly, for completeness. They are not controlled and manipulated in the underlying system, hence they are not explicitly part of our algorithms. However, they implicitly influence the performance and decisions within our specification, monitoring and adaptation services in a major way. A more detailed description can be found in [NS96]. System QoS parameters describe requirements on the communication services and OS services resulting from the application QoS. They may be specified in terms of both quantitative (e.g., number of errors) and qualitative criteria (e.g., inter-stream synchronization). Network QoS parameters describe requirements on network services. They may be specified in terms of: (1) throughput specification (e.g., burstiness); (2) flow specification (e.g., intermediate delay-jitter); and (3) performance specification (e.g., ordering). An example of a network QoS data structure is shown in Figure 4. The network QoS parameter structure describes the QoS of data over a single network connection.

[Figure 4 diagram: Network QoS for a Connection, grouped into a Throughput Spec (Packet Size, Throughput, Burstiness), a Flow Spec (Connection Id, Intermediate Delay, Packet Service Time, Packet Loss Rate, Communication Type, Packet End-to-End Delay), and a Performance Spec (Ordering, Error Correction, Fragment/Reassembly, Cost, Priority).]

Figure 4: Network QoS Parameters.
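The per-connection structure of Figure 4 might be represented as follows. This is a sketch: the field names, types and the exact grouping of the leaves under the three specifications are illustrative, not a concrete API.

```python
from dataclasses import dataclass

# A sketch of the per-connection network QoS structure of Figure 4.
@dataclass
class ThroughputSpec:
    packet_size: int             # bytes per packet
    throughput: float            # e.g., bits/s
    burstiness: int              # e.g., maximum burst length in packets

@dataclass
class FlowSpec:
    connection_id: int
    intermediate_delay_ms: float
    packet_service_time_ms: float
    packet_end_to_end_delay_ms: float
    packet_loss_rate: float
    communication_type: str      # unicast / multicast / broadcast

@dataclass
class PerformanceSpec:
    ordering: bool
    error_correction: bool
    fragment_reassembly: bool
    cost: float
    priority: int

@dataclass
class NetworkQoS:
    throughput: ThroughputSpec
    flow: FlowSpec
    performance: PerformanceSpec

qos = NetworkQoS(
    throughput=ThroughputSpec(packet_size=1024, throughput=1.86e6, burstiness=4),
    flow=FlowSpec(connection_id=1, intermediate_delay_ms=10.0,
                  packet_service_time_ms=2.0, packet_end_to_end_delay_ms=250.0,
                  packet_loss_rate=1e-11, communication_type="unicast"),
    performance=PerformanceSpec(ordering=True, error_correction=False,
                                fragment_reassembly=True, cost=1.0, priority=1),
)
```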

2.1.4 QoS Parameter Values and Types of Services

The specification of QoS parameter values determines the type of service, of which there are at least three distinct types: guaranteed, predictive and best-effort services. Note that various systems may provide different classifications of services. Guaranteed services provide QoS guarantees, as specified through the QoS parameter values (bounds) in either a deterministic or statistical representation. The deterministic bounds can be given through a single value (e.g., average value, contractual value, threshold value, target value), a pair of values (e.g., minimum and average value, lowest quality and target quality) or an interval of values (the lower bound is the minimum value and the upper bound is the maximum value). For example, delay parameters are specified as a range: <expected value, worst value>. Jitter can be accommodated by specifying task processing times as a triple: <best processing time, average processing time, worst acceptable processing time>. Thus, a task processing time will be accepted if it is in the interval bounded by the pair of values <best value, worst value>. Guaranteed services may also deal with statistical bounds of QoS parameters [FV90], such as a statistical bound on the error rate. In our tuning system we aim towards guaranteed services working with first order moments (average values) of frame service times. A predictive service (historical service) is based on past network behavior; hence, the QoS parameters are estimates of past behavior which the service tries to match [CSZ92]. Best-effort services are based on either no guarantees or on partial guarantees. There is either no specification of QoS parameters required, or some bounds in deterministic or statistical forms are given. Most current network protocols have best-effort services.
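The interval acceptance rule for the triple specification can be sketched as follows; this is an illustrative check, not part of the tuning system itself.

```python
# Illustrative check of the triple <best, average, worst> specification of
# task processing times: a measured time is accepted if it falls within the
# interval bounded by <best, worst>.
def accept_processing_time(measured_ms, best_ms, worst_ms):
    return best_ms <= measured_ms <= worst_ms

# A 250 ms frame-service time against a <200, 230, 300> ms triple is accepted;
# a 350 ms time exceeds the worst acceptable bound and is rejected.
assert accept_processing_time(250, 200, 300)
assert not accept_processing_time(350, 200, 300)
```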

2.2 Architectural Issues

A VOD application is a distributed application (VOD client and VOD server); hence it is embedded in the application layer of the communication architecture. This architectural issue influences the tuning system design as follows:

• Our tuning system includes services and protocols for the manipulation of application and perceptual QoS at the application and user level (see Figure 2)^2.

• Operation of our tuning system is divided into two phases, according to the connection-oriented paradigm for the provision of end-to-end QoS guarantees in multimedia distributed applications: the call setup phase and the transmission phase.

- Call Setup Phase

During the call set up phase, the tuning system runs a probe (a test video clip which can be, in practice, a couple of seconds at the beginning of the movie) to determine the realistic video frame rate without any loss (drop) of data. The specified application QoS parameter is presented to the user in terms of a slide bar position in the GUI and playback of the video clip at that frame rate. This frame rate is then negotiated between the client and the server.

- Transmission Phase

During the transmission phase, the tuning system monitors the display frame rate at the client side, depending on the load of the OS, the network, and user requests for frame rate changes. Once a change is detected, the algorithm begins adaptation by dropping frames in the order of B frames first, P frames second, and I frames last (see Section 2.5). Note that the user-initiated change can not only decrease the frame rate, but also increase it. If a request to speed up comes from the user, i.e., the user wants a higher frame rate^3 than the maximal frame rate the system can provide with no loss, then the adaptation algorithm determines which frames to drop first at the client side and informs the server through the renegotiation protocol which frames to drop at the server side to achieve the 'higher rate'. The GUI (see Figure 3) is active during both phases, so the user can specify/control and change the QoS parameter (video frame rate) during the call set up and transmission phases. In the following subsections 2.3, 2.4 and 2.5 we describe the QoS algorithms, mechanisms and protocols used in our tuning system.
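The frame-dropping order used during adaptation (B frames first, P frames second, I frames last) can be sketched as follows. The function name and the representation of a group as a list of frame-type tags are illustrative.

```python
# A sketch of the frame-dropping priority used during adaptation:
# B frames are dropped first, P frames second, and I frames last.
DROP_ORDER = {'B': 0, 'P': 1, 'I': 2}

def frames_to_drop(frames, n_drop):
    """Pick n_drop frame indices, preferring B over P over I."""
    # Sort frame indices by drop priority (B first); Python's sort is stable,
    # so stream order is kept among frames of the same type.
    candidates = sorted(range(len(frames)), key=lambda i: DROP_ORDER[frames[i]])
    return sorted(candidates[:n_drop])

gop = ['I', 'B', 'B', 'P', 'B', 'B', 'P', 'B', 'B']
# Dropping 7 frames removes all six B frames and one P frame, never the I frame.
dropped = frames_to_drop(gop, 7)
assert all(gop[i] != 'I' for i in dropped)
```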

2.3 QoS Specification

Current audio/video applications have services and protocols with QoS negotiation and renegotiation capabilities which assume that the user knows the actual QoS parameters. This QoS can be acquired from the device specification or from off-line testing. We first briefly describe these two known methods for QoS specification and then introduce the probe-based algorithm as a method for overcoming their shortcomings.

^2 The tuning system currently assumes no QoS support at the underlying system, device and network levels (we deal with general purpose UNIX and Internet environments), and multimedia distributed applications are adaptive.
^3 A higher frame rate is meant in the sense of the number of displayed frames plus the number of dropped frames in a certain time interval, versus the number of displayed frames with no dropped frames in the same time interval. The system cannot display more frames than specified at the call setup, but when an increase request is issued, certain frames can be dropped, and this behavior gives the user the perception of a higher rate.


2.3.1 Device Specification

A video card and its accompanying software provide a description of the possible frame rates and frame sizes which the card and the driver can support (e.g., the XVideo 700 Parallax video card provides 640x482 pixel frames at 30 frames/second). The VOD application may take these parameters as the QoS specification. However, as many experiments show, these parameters might not be realistic QoS parameters that the VOD service can sustain, because the end-to-end QoS depends on many different factors: (1) the application software running on the client/server sides, (2) the transport protocol stack used by the VOD service, (3) CPU utilization by other applications during runtime of the actual application, and (4) the underlying network.

2.3.2 Off-line Testing

Another approach to QoS specification is to run extensive off-line tests to pre-determine the QoS parameters [Nah95]. The QoS parameters are then stored in configuration files and retrieved when the negotiation phase begins. The problem with this approach is that it does not take into account the actual load when the VOD service runs. The load of the system layer might change dynamically; hence, off-line testing may not provide a realistic estimate of the measured QoS parameter.

2.3.3 Probe-based Algorithm

In the environment of adaptive applications on top of shared OS and networks, the two QoS specification methods mentioned above do not provide a good estimate of a realistic and achievable QoS for the application negotiation protocols. Hence, we introduce a probe-based algorithm, which is an on-line algorithm for QoS specification. This algorithm is based on probes done at the beginning of the call set up phase and (1) determines the application QoS of a continuous medium as a statistical guarantee, (2) determines the degradation point at the client side when system performance starts to severely degrade due to buffer problems and mismatched rates between the server and the client, and (3) provides QoS suggestions for the negotiation and transmission phase to avoid severe degradation. A more formal description of the algorithm is as follows: Let X1, X2, ..., Xk-1, Xk be measured QoS parameters. Let δ be the difference between the new measured value Xk and a past measured value. Δ is the maximum allowable difference of degradation in QoS. δa is the accumulated difference between the first and the last measured values. Note that depending on the measured QoS parameter, δ, δa and Δ can be positive or negative. For example, given a measured display frame rate Xk, if the frame rate decreases (Xk < Xk-1), then there is performance degradation, (δ, Δ, δa) have negative values and |δ| < |Δ|. On the other hand, if Xk represents a measured task processing time, then Xk > Xk-1 represents a degradation in performance and δ < Δ. Let c be the counter of occurrences of δ = Δ. Let T be the time interval upper bound of the probe, i.e., the probe runs in the interval I = (0, T). Let i be the counter of previous measurements where Xk - Xk-i = δ. We will store each measured value in the background until the degradation point is found or the probe interval I expires. In the algorithm described below we show only the storage of measured values which contribute to finding the degradation point.
Note that our algorithm aims to bound the difference between measured values, which is applicable to time series [Cha89]. For example, when a video stream shows a frame rate of 10-12

frames/second, we want to bound the frame rate difference (degradation) to 5 frames/second, because degradation between 12 frames/second and 6 frames/second is noticeable and may disturb the viewer (Figure 5).

[Figure 5 diagram: measured application QoS values X0, X1, X2, X5, ..., X9, X11 plotted against time t over the probe interval I, with the degradation point marked.]
Figure 5: Example of the Probe-based Algorithm.

A simpler algorithm, which is often used in network monitoring and congestion control algorithms, would use a minimum allowable value of a QoS parameter (Xmin), and the service would compare each measured value Xk against Xmin. If Xk degrades below Xmin, feedback is triggered to slow down the source. In the probe-based algorithm described below, we assume that Xk > Xk-1 means degradation and Xk < Xk-1 means improvement. Our probe-based algorithm is as follows:

• Initialization step: measure X0; set k = 1; set a timer which runs until current time t = T; c = 0; i = 1; δ = 0; δa = 0; specify Δ.

• Execution step, within t in I:
  1. measure Xk; compute δ = Xk - Xk-1; δa := δa + δ;
  2. if (δ < Δ) and (δa < Δ) then store Xk; k := k + 1; { if (δ <= 0) then i := i + 1; else i := 1 }; goto step 1. /* no QoS degradation or only a small degradation */
  3. if (δ == Δ) or (δa == Δ) then
     (a) if (c == 0) then c := c + 1; store (Xk, Xk-i); k := k + 1; goto step 1;
     (b) if (c > 0) and (δ == Δ) then i := i + 1; c := c + 1; compute δ = Xk - Xk-i;
        i. if (δ < Δ) then store (Xk, Xk-1); Xk-i is already in the list; k := k + 1; goto step 1;
        ii. if (δ == Δ) then Xk-1 = Xk-i; store Xk; k := k + 1; goto step 1;
        iii. if (δ > Δ) then Xk-i is the degradation point; return (Xk-i, k, i, t).
  4. if (δ > Δ) or (δa > Δ) then Xk-i is the degradation point; return (Xk-i, k, i, t).

• If the timer expired after interval I and the loop did not end with a degradation point, then return (Xk-1, k, i = 1). In this case we do not have a degradation point.

The runtime of the algorithm is O(k), where k is the finite number of measurements in the interval I. The QoS specification is computed after the algorithm ends. The specified QoS parameter is the expected value X = (1/(k-i+1)) * sum_{j=0..k-i} Xj. The system level can sustain this value as a QoS and the application can use it for negotiation of network guarantees. The output of the degradation point, with the specification of k and t, is needed for QoS negotiation and adaptation. Note that the degradation point can be translated internally to other parameters (e.g., distance between ring buffer pointers indicating head and tail) to control the QoS.
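The probe loop can be sketched as follows, under the convention stated above that Xk > Xk-1 means degradation. This is a simplified sketch: the plateau bookkeeping with the counters c and i is omitted (so the degradation point reduces to Xk-1), a measurement count n_max stands in for the timer T, and `measure` is a hypothetical callable returning one QoS measurement.

```python
# Simplified sketch of the probe-based algorithm: accumulate the per-step
# difference delta and the accumulated difference delta_a, and report the
# degradation point once either exceeds the allowed maximum Delta.
def probe(measure, Delta, n_max):
    xs = [measure()]            # X_0
    delta_a = 0.0
    for k in range(1, n_max):
        xs.append(measure())    # X_k
        delta = xs[k] - xs[k - 1]
        delta_a += delta
        if delta > Delta or delta_a > Delta:
            # X_{k-1} is the degradation point; the specified QoS is the
            # expected value of the measurements up to and including it.
            expected = sum(xs[:k]) / k
            return xs[k - 1], expected
    # Probe interval expired without a degradation point.
    return None, sum(xs) / len(xs)

# Task processing times (ms) that jump past the allowed degradation of 5 ms:
samples = iter([30.0, 31.0, 32.0, 40.0])
point, expected = probe(lambda: next(samples), Delta=5.0, n_max=10)
assert point == 32.0 and expected == 31.0
```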

2.4 QoS Monitoring

QoS monitoring is an important part of our tuning system. Since monitoring can add overhead during multimedia transmission, it is preferable that it be flexible. This flexibility means that most of the monitoring variables should be optional and monitoring should be able to be turned on and off [WH94]. In general, QoS monitoring consists of two modes: a query mode and a report mode. The former requests a status report about resource utilization and QoS guarantees; the latter regularly reports the QoS and resource status. In our tuning system, we use the report mode. The human user gets the status report (slide bars for frame drop rates; see Figure 3) and the monitoring service at the client application level regularly reports the status. Monitoring at the client side includes a supervisor function to continuously observe that the processed QoS parameters do not exceed their negotiated values. If a change is detected, the QoS adaptation service takes place.
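A report-mode monitor of the kind described above might be sketched as follows; the class name, callback and threshold check are illustrative, not the implementation described in this paper.

```python
# Sketch of a report-mode QoS monitor: it can be switched off entirely,
# regularly reports status (e.g., to the GUI slide bars), and includes a
# supervisor check that the observed rate does not fall below the
# negotiated value.
class QoSMonitor:
    def __init__(self, negotiated_rate_fps, report):
        self.negotiated = negotiated_rate_fps
        self.report = report        # callback updating the GUI slide bars
        self.enabled = True         # monitoring can be turned on and off

    def observe(self, displayed_fps, dropped_fps):
        """Report status; return True when adaptation should be triggered."""
        if not self.enabled:
            return False
        self.report(displayed_fps, dropped_fps)
        # Supervisor function: detect deviation from the negotiated QoS.
        return displayed_fps < self.negotiated

reports = []
mon = QoSMonitor(negotiated_rate_fps=24, report=lambda d, x: reports.append((d, x)))
assert mon.observe(24, 0) is False      # within negotiated QoS
assert mon.observe(18, 6) is True       # degradation detected, adapt
```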

2.5 QoS Adaptation

In continuous media communication, it is important to support a framework capable of dynamically changing the QoS of each session, particularly for an environment which includes system components with non-deterministic behavior. There are two important factors which must be present to achieve this goal: (1) notification (feedback) and renegotiation of QoS parameters, i.e., a protocol for reporting QoS changes; and (2) adaptive schemes to respond to and accommodate the changes coming from the user, host system or network. Renegotiation is a process of QoS negotiation when a call is already set up. In general, the renegotiation and adaptation request can come either from the user, who wants to change the quality of service, from the host system, due to workstation overload (multi-user, multi-process environment), or from the network, due to overload and congestion. In our tuning system we consider only an explicit request for change from the user. The host/network system changes are included implicitly in the frame service times. Our QoS adaptation service and renegotiation protocols reside within the application subsystem layer on top of network protocols. Note that if a protocol such as the Resource Reservation Protocol (RSVP) exists in the system layer (Figure 2), then the integration of our approach with the underlying network protocols might provide the desired application-to-application QoS guarantees. Furthermore, our VOD application QoS services can utilize the underlying multicasting support (multiple sources to multiple targets) if it exists. However, the tuning service itself is meant to serve only one client.

2.5.1 Utilization of Degradation Point for QoS Adaptation

The degradation point derived from the probe-based algorithm as the specified QoS can be utilized in two ways. First, we do not negotiate the degradation point with the server but use the degradation point at the client as a mark, that is, an adaptation point for feedback to the server to slow down. We have tested this approach and found we can sustain stable and good performance at the client when adaptation feedback is sent within a specified interval before the degradation point is approached. The goal of this approach is to give the user statistically guaranteed video quality without user-initiated changes. This means that the client side adapts to the system load caused by the uncontrolled server. Sending adaptive feedback from the client to the server at different time intervals around the degradation point is discussed in [NHK96]. The results show that the server load fluctuates heavily under adaptation and the adaptation overhead is high. Hence, this is not a good scheme to incorporate into a tuning system when the tuning system should accommodate user-initiated adaptation changes of QoS. In this situation, a second approach is utilized. Namely, we negotiate the degradation point (QoS specification; frame rate) with the server, and the server behaves according to this QoS unless a renegotiation request comes from the user. Once the renegotiation request arrives, the adaptation algorithm discussed in the next section is applied.

2.5.2 Adaptation Service and Renegotiation Protocol Our adaptation algorithm (adaptation service and client/server application renegotiation pro-

tocol) adjusts to the user-speci ed changes and takes into account dynamics and non-determinism caused by the UNIX and underlying Internet communication protocols. The adaptation service and renegotiation protocol utilize the knowledge of the application about the compressed video stream structure, i.e., the video application can access and process the intraframe structures of MPEG or MJPEG (Motion JPEG) video streams which consist of I, P, B frames and I frames, respectively 4 . We divide the media stream into groups which consist of a mixture of I, P, B frames. The number of frames in one group varies and depends on the IPB pattern in video clips and the host software/hardware setup5 . In general, for MPEG video one group starts with an I frame and ends before the next I frame, which is the same as the concept of GOP (Group Of Pictures) in the MPEG de nition. Although each group starts with an I frame, more than one I frame can be included, especially in the case of MJPEG where only I frames are available. This also means that we can include more than one MPEG-1 GROUP in our de nition of group, allowing us to easily extend our algorithm to MPEG-2 format because MPEG-2 does not include the concept of GOP. To describe our algorithm in more detail we use the notation shown in Table 3. We assume that the transmission protocol in the application layer at the client side of the VOD application schedules the following tasks: Display Video Frame, Decode Compressed Video Frame, and Receive (Reassembly) Video Frame. The transmission protocol at the server side schedules appropriately the following tasks: Retrieve Compressed Video Frame and Send (Segment) Video Frame. We measure and monitor the processing times of these tasks for each frame (tjdis ; tjdec ; tjrecv ). rs is the rate which the OS (at the client and server side) and network can support under the current A more detailed description of MPEG compression can be found, for example, in [SN95]. 
(5) For example, the UDP queue buffer size on the client host is an important factor in determining the number of frames in one group.


Notation    Meaning
r_s         system rate
r_u         user rate
t_w         waiting time
t^j_dis     display time of frame j (processing time of the Display Video Frame task)
t^j_dec     decoding time of frame j
t^j_recv    receiving (reassembling) time of frame j from transport packets
t_I         average of task processing times for I frames at the client side
t_P         average of task processing times for P frames at the client side
t_B         average of task processing times for B frames at the client side
n_I         number of I frames in one group of pictures
n_P         number of P frames in one group of pictures
n_B         number of B frames in one group of pictures
n_g         number of frames in one group of pictures

Table 3: Basic Notation of Variables within the Adaptive Algorithm.

load without any loss (r_s = 1000 * k / Σ_{j=1..k} (t^j_dis + t^j_dec + t^j_recv), where k is the total number of frames displayed so far, and processing times are measured in ms). r_u is the rate which the user specifies through the GUI perceptual QoS specification. t_I, t_P, t_B are calculated dynamically because they are functions of the client load, the frame size and the processing times of the decoding and display tasks, i.e., t_I = f(ClientLoad, FrameSize, t_dec, t_dis, t_recv). We currently consider f as follows:

t_I = (1/k) * Σ_{j=1..k} (t^{j(I)}_dis + t^{j(I)}_dec + t^{j(I)}_recv)

where k is the total number of I frames decoded and displayed when t_I is calculated. The processing times for frame j(I) (i.e., frame j is an I frame), t_dec, t_dis, t_recv, implicitly include the parameters ClientLoad and FrameSize (e.g., t_dec = g(ClientLoad, FrameSize)). In Section 4.2 we show experimental results of this dependency, because it is difficult to quantify these relations (the g function) analytically due to the non-deterministic behavior of the CPU and network utilization. The processing times for decoding, displaying and receiving video frames are monitored, and the average time is adapted after certain synchronization points (after each frame). The times t_P and t_B are computed similarly to t_I, and a similar discussion applies to these processing times with respect to the client load as well as the sizes of P and B frames.

The adaptation algorithm needs two steps: the first step is performed during the negotiation phase, the second step during the adaptation phase. Both steps (protocols and service computations) are shown in Tables 4 and 5, respectively. Table 4 shows the negotiation protocol, which distributes during the call setup the information necessary for adaptation. Table 5 shows the adaptation service at the client side and the renegotiation protocol between client and server. The adaptation service has two parts, based on the calculation of the variable t_w (wait time). t_w represents the difference between the time required to display one group (n_g frames) when the user rate (r_u) is specified, and the time required to display one group when the system rate (r_s) is specified.
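The monitoring just described can be sketched in a few lines (a minimal illustration under the notation of Table 3; the class and method names are our own, not the paper's implementation):

```python
class QoSMonitor:
    """Tracks per-frame processing times and derives r_s, t_I, t_P, t_B."""

    def __init__(self):
        self.total_ms = 0.0          # sum of (t_dis + t_dec + t_recv) over all frames
        self.frames = 0              # k: frames displayed so far
        self.sums = {'I': 0.0, 'P': 0.0, 'B': 0.0}
        self.counts = {'I': 0, 'P': 0, 'B': 0}

    def record(self, frame_type, t_dis, t_dec, t_recv):
        """Synchronization point: update the averages after each frame (times in ms)."""
        total = t_dis + t_dec + t_recv
        self.total_ms += total
        self.frames += 1
        self.sums[frame_type] += total
        self.counts[frame_type] += 1

    def system_rate(self):
        """r_s = 1000 * k / sum of per-frame processing times (frames per second)."""
        return 1000.0 * self.frames / self.total_ms

    def avg_time(self, frame_type):
        """t_I, t_P or t_B: average processing time (ms) for one frame type."""
        return self.sums[frame_type] / self.counts[frame_type]

m = QoSMonitor()
m.record('I', 20.0, 35.0, 5.0)   # hypothetical measurements, in ms
m.record('B', 5.0, 12.0, 3.0)
print(round(m.system_rate(), 1))  # → 25.0 (frames/s sustainable under the current load)
```

The averages adapt continuously, so a load spike on the client (longer decode times) immediately lowers the reported r_s.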

Server                                    Client
                                          Open TCP Connection for Feedback Info
<-- Send File List -->                    Receive File List
Receive File Name                         Send File Name
<-- Send n_g, n_I, n_P, n_B -->           Receive n_g, n_I, n_P, n_B
Send first group -->

Table 4: Negotiation during Call Establishment.

• t_w ≥ 0

In this case, the user requests lower quality than the system can actually provide, meaning that the system has enough power/bandwidth to process the user-specified frame rate (r_u). The parameter t_w is sent to the server. The server sends frames without dropping any of them and waits the specified time between sending out two consecutive frames. The waiting period is decided as shown in Table 5.

• t_w < 0

In this case, the user requests a 'higher rate' (r_u) than the system can provide (r_s) and adaptation is enforced. As we pointed out in Section 2.2, a perceptually higher rate can be achieved by dropping certain frames in the compressed stream. The adaptation is executed by selective dropping of individual frames. This type of adaptation provides the tradeoff between achieving a 'higher frame rate', hence an improvement of the temporal perceptual quality, and achieving a graceful degradation of the spatial perceptual quality with respect to individual image detail. At first the client program paces up to the specified user rate by dropping some B frames. The variable diff specifies how much time the client is allowed to spend displaying B frames in one group, in addition to displaying all the I and P frames. The quotient n_B(allowed) = diff/t_B represents the number of B frames that can be displayed. The skip factor ⌊n_B/n_B(allowed)⌋ indicates how to drop B frames as evenly as possible, i.e., send out B frames at the rate of one every ⌊n_B/n_B(allowed)⌋ frames. If diff < 0, then no B frames can be displayed; all B frames are dropped, and the same strategy is applied to determine how many P frames can be displayed (or how many P frames must be dropped). If dropping B and P frames is not sufficient to achieve the 'higher rate' specified by the user in the case of MPEG, or if a 'higher rate' is specified in the case of MJPEG, I frames are dropped as well. Because each group usually contains only one or two I frames, we combine several groups into a large group and apply a similar strategy as described above; otherwise all I frames would be dropped. The number of groups combined together is defined as n_g/n_I. In the case of MJPEG, this number is 1 because n_g = n_I.
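The two cases above, together with the drop cascade of Table 5, can be condensed into a small decision routine (a sketch only; the parameter values in the example are hypothetical, and the I-frame skip factor follows our reading of the combined-group strategy):

```python
from math import floor

def adapt(ru, rs, ng, nI, nP, nB, tI, tP, tB):
    """Decide the per-group sending policy for user rate ru.

    Rates are in frames/s; tI, tP, tB are average per-frame processing
    times in seconds. Returns (action, value): either a wait time between
    frames, or a 'drop' decision with the even-skip factor.
    """
    tw = ng / ru - ng / rs                  # slack per group
    if tw >= 0:
        # System keeps up: send every frame, pacing by tw/ng between frames.
        return ('wait', tw / ng)

    budget = ng / ru                        # time available per group at rate ru
    diff = budget - tI * nI - tP * nP       # time left for B frames
    if diff >= 0:
        nB_allowed = diff / tB
        return ('drop_B', floor(nB / nB_allowed))   # show every k-th B frame
    diff = budget - tI * nI                 # drop all B; time left for P frames
    if diff >= 0:
        nP_allowed = diff / tP
        return ('drop_P', floor(nP / nP_allowed))
    # Drop all B and P; thin out I frames across ng/nI combined groups.
    return ('drop_I', floor(nI * tI * ru / ng))

# Hypothetical MPEG group: 1 I, 4 P, 10 B frames (15 frames per group).
print(adapt(ru=40.0, rs=20.0, ng=15, nI=1, nP=4, nB=10,
            tI=0.06, tP=0.04, tB=0.04))   # → ('drop_B', 2)
```

In the example the user asks for twice the sustainable rate, and the routine answers that displaying only every second B frame keeps the group within the user-rate time budget.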

3 System Architecture and Implementation

A VOD application, including the tuning system services, consists of a client/server architecture. In our experimental testbed, both components are supported on two platforms, SGI Indys and HP

Client:
    t_w = n_g/r_u - n_g/r_s
    if t_w ≥ 0 then send info: t_w to the server

Server (case t_w ≥ 0):
    receive t_w
    for (every frame l in one group)
        retrieve frame l
        send frame l
        wait(t_w/n_g - t^l_send)
        l++

Client (case t_w < 0):
    if ((diff = n_g/r_u - t_I * n_I - t_P * n_P) ≥ 0) {
        n_B(allowed) = diff/t_B
        display every ⌊n_B/n_B(allowed)⌋-th B frame
        send request for sending only every ⌊n_B/n_B(allowed)⌋-th B frame
    } else if ((diff = n_g/r_u - t_I * n_I) ≥ 0) {
        n_P(allowed) = diff/t_P
        drop all B frames
        display every ⌊n_P/n_P(allowed)⌋-th P frame
        send request for sending every ⌊n_P/n_P(allowed)⌋-th P frame
    } else {
        drop all B and P frames in n_g/n_I combined groups
        display every ⌊n_I * t_I * r_u / n_g⌋-th I frame
        send request for sending I frames (now involving n_g/n_I groups)
    }

Server (case t_w < 0):
    receive nBskip = ⌊n_B/n_B(allowed)⌋ (or nPskip = ⌊n_P/n_P(allowed)⌋)
    retrieve frame l
    send frame l
    l = l + nBskip, also sending all I and P frames
    (analogously for nPskip, sending all I frames)

Table 5: Adaptation Service/Renegotiation Protocol.


-725. The SGI platform supports MPEG compressed video streams; the HP platform supports MPEG and MJPEG compressed video streams. MPEG streams are decoded using the software MPEG decoder from Berkeley's MPEG Player. MJPEG streams are displayed using the Parallax Video MJPEG Compression Board. Both platforms are connected via 10 Mbps Ethernet. The VOD client system utilizes video stream buffers, a video decoder (software decoder), a graphical user interface (see Figure 3) for user control, and a display device. The video server system uses a RAID for the retrieval of video clips. The flow of communication is shown in Figure 6.

[Figure: the video server sends video stream packets over the network into the client's frame buffer, which feeds the decoder and the display; the client's tuning control panel exchanges control information and feedback with the server.]

Figure 6: Communication Flow.

3.1 Software Implementation

The implementation of the tuning system GUI, services and protocols is plugged into the call setup code and transmission protocol code. The system software structure is shown in Figure 7. Details are discussed below.

3.1.1 Graphical User Interface

With the user control panel (see Figure 3), the user has the following options:

- Select different video formats, such as MPEG or MJPEG. (We have tested our tuning system on these two formats; however, the algorithm can be easily extended to other formats.)
- Choose videos available on the server by retrieving the file list information from the server.
- Control the video display rate behavior by dynamically setting the user rate (using the slide bar for User Rate).

As feedback to the user, the tuning system provides the actual system performance information in the form of slide bars: the best rate the system can achieve (System Rate r_s) and the B, P, or I frame drop rate. The slide bars give the user the perceptual QoS feedback of the analytical application QoS. The video window gives the user the perceptual QoS feedback of the display quality. The user interface is implemented using Motif and Xlib functions, which provide us with the necessary window management, image handling, and display functionalities.

[Figure: software structure of the VOD server and client. Call set up phase: the server waits for a request, sets up a new channel, spawns a new process and sends the file list, negotiation information and initial probe data; the client selects a video and performs the QoS specification probe. Transmission phase: the server retrieves frames, adapts QoS and sends frames; the client receives and buffers data, decodes and displays frames, and the tuning system (QoS monitor, QoS adaptation, GUI input, control feedback) closes the loop.]

Figure 7: System Software Structure.

3.1.2 Call Setup and Transmission

During the call set up, the user must specify the server address in order to set up the necessary communication channel. During this phase, when the user chooses the desired video, the beginning of the movie is used as a probe (using our probe algorithm) and the system rate r_s is determined: the server starts to send video frames at its maximal speed and the client receives the video. As Figure 8 shows, the performance starts to degrade after a certain point (the degradation point; see Section 2.3.3, Figure 5). The degradation point at the client side is then

Figure 8: Performance of client/server during the probe-based algorithm (left side: MJPEG probe; right side: MPEG probe; x-axis: time in seconds, y-axis: frames per second). The graphs show that the degradation point of the frame rate occurs at time 140 seconds in the MJPEG case and 35 seconds in the MPEG case.

taken as the QoS specification, and the negotiation protocol sends the determined QoS frame rate to the server. For the MJPEG streams the negotiated rate is 18 fps; for the MPEG stream it is 8 fps. The server uses the system rate as the starting point for the transmission phase. Note that at the beginning of the setup phase, the setup parameters n_g, n_I, n_P, n_B for the grouping of video frames during the transmission are exchanged (see Table 4).
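A possible way to locate the degradation point from the probe measurements is sketched below (an illustration only; the tolerance threshold and the hold count are our assumptions, not the paper's method):

```python
def degradation_point(server_fps, client_fps, tolerance=0.9, hold=3):
    """Return (time_index, negotiated_rate) at the first sustained point
    where the client rate drops below tolerance * server rate for at
    least `hold` consecutive samples; None if the client always keeps up."""
    run = 0
    for i, (s, c) in enumerate(zip(server_fps, client_fps)):
        if c < tolerance * s:
            run += 1
            if run == hold:
                start = i - hold + 1
                return start, client_fps[start]
        else:
            run = 0
    return None

# Hypothetical probe trace, one sample per second: the server keeps
# pushing ~20 fps while the client falls behind after the 4th second.
server = [20, 20, 20, 20, 20, 20, 20, 20]
client = [20, 20, 20, 20, 15, 14, 13, 12]
print(degradation_point(server, client))  # → (4, 15)
```

Requiring a few consecutive below-threshold samples avoids mistaking a transient OS scheduling hiccup for the actual degradation point.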

3.1.3 Transmission

During transmission the client receives data from the server. The video data is first put into a local buffer (prefetched data). The use of the local buffer allows the system to balance the network jitter and delay. The length of the necessary initial buffer is determined by the length of the frame group, which normally consists of one or two cycles of I-to-I frames (in the case of all-I-frame video, i.e., MJPEG, the group length varies and depends on the average size of the I frames and the available system software/hardware support). The software-based MPEG decoder relies heavily on the MPEG Player developed at Berkeley; it takes the data stream from the local buffer and sends the decoded frames to the display device.

Name              Size     Ave Frame Size  I:P:B frames  Pattern
bus.mpg           352x240  4790 bytes      10:40:100     IBBPBBPBBPBBPBBI...
flower.mpg        320x240  4644 bytes      10:40:100     IBBPBBPBBPBBPBBI...
coaster.mpg       288x192  5423 bytes      80:40:160     IPBBIBBIPBBIBBI...
orincsa.mpg       320x240  2311 bytes      240:120:480   IPBBIBBIPBBIBBI...
spaceshuttle.mpg  160x128  1471 bytes      647:0:0       III...I...
qinyong.mpg       160x128  3517 bytes      1750:0:0      III...I...
water.mpg         160x128  4246 bytes      111:0:0       III...I...

Table 6: Characteristics of tested video clips.

The video data is transferred using UDP sockets on a per-frame basis. Each frame is further divided into fixed-length packets (the length depends on the machine and OS setup). The received data is put into the local buffer and reassembled into one compressed frame. The QoS control information (negotiation/renegotiation information) is sent to the video server through a TCP connection. At the server side, the server creates a new process for each new VOD client request and selectively loads the requested video frames from the server file system. The selection depends on the negotiated QoS and its adapted value according to the adaptation algorithm (see Section 2.5). The server then sends the frames to the client, with each frame fragmented into fixed-length packets. After sending out one frame, the server checks whether there is new feedback. If so, the server adapts its QoS and uses the updated QoS (frame rate) to send the following frames.
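Per-frame fragmentation and reassembly can be sketched as follows (a simplified illustration; the 12-byte header layout and the 1024-byte payload size are our assumptions, not the application's wire format):

```python
import struct

PACKET_PAYLOAD = 1024  # fixed payload length; machine/OS dependent in practice

def fragment(frame_no, data):
    """Split one compressed frame into fixed-length packets.
    Header: frame number, packet index, packet count (network byte order)."""
    chunks = [data[i:i + PACKET_PAYLOAD]
              for i in range(0, len(data), PACKET_PAYLOAD)] or [b'']
    return [struct.pack('!III', frame_no, idx, len(chunks)) + c
            for idx, c in enumerate(chunks)]

def reassemble(packets):
    """Rebuild the compressed frame; returns None while packets are missing
    (with UDP, a lost packet means the frame must be dropped or patched)."""
    parts, count = {}, None
    for p in packets:
        frame_no, idx, count = struct.unpack('!III', p[:12])
        parts[idx] = p[12:]
    if count is None or len(parts) != count:
        return None
    return b''.join(parts[i] for i in range(count))

frame = bytes(2500)                       # a 2500-byte compressed frame
pkts = fragment(7, frame)
print(len(pkts))                          # → 3 packets
assert reassemble(pkts) == frame          # round-trip succeeds
assert reassemble(pkts[:-1]) is None      # a lost packet is detected
```

Carrying the packet count in every header lets the receiver detect an incomplete frame without any extra control message, which matters because UDP gives no delivery guarantee.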

3.2 Methodology

In investigating the user-initiated tuning behavior of our system, the question is how a user-initiated QoS change impacts the QoS degradation, i.e., how does the frame drop rate change under different user requests and system loads, and how does it influence the perceptual QoS. The measured frame drop rates reflect the frame rate resulting from our adaptation algorithm. We tested two classes of video clips. The first class consists of clips with all I frames (MJPEG videos), and the second class includes clips consisting of I, P, B frames (MPEG videos). The tested video clips and their characteristics are shown in Table 6. For each class of clips we experimented with two scenarios: (1) a single client and a single server, and (2) multiple clients and a single server. Each scenario ran under various host loads: (1) a single stream displayed at the host (client side), (2) multiple streams displayed at the host (client side), and (3) additional local load (computation of a factorial) added to the host (client side). The user at the host (client side) watched one fixed movie under the various scenarios and loads, and changed the user frame rate during playback. The measured variable of interest (which is quantifiable) was the frame drop rate of the fixed video stream. The ultimate perceptual QoS of the display rate is a very subjective variable and difficult to quantify; hence our statements about the perceptual quality are based on our own perception. Figure 9 shows Scenario 2 with additional loads.

Figure 9: Scenario 2. It shows an additional load for the measured client 1, such as a second VOD client or a local factorial program. The clients have their equivalent server processes running either at the same host or at different hosts.

4 Experimental Results

In this section, we describe a set of experiments designed to evaluate the performance and behavior of our adaptation algorithm. Two major video clips are used in all our experiments (under various loads in the two scenarios): the MPEG video stream bus.mpg and the MJPEG video stream spaceshuttle.mpg (see Table 6). Scenario 1 with a single stream load was tested for all video clips in Table 6 to compare our main tested videos with other samples. We selected average-complexity representatives with respect to frame size and duration (number of frames) from the two classes of video clip sets.

4.1 Single Stream Load under Scenario 1

We ran a set of experiments at each of the user-specified frame rates (e.g., 18 fps, 24 fps, 30 fps). Figure 10 shows the results for the MJPEG and MPEG movies, respectively. For the MJPEG movie (spaceshuttle.mpg), the system can provide 18 fps with no frame loss and loses an average of 23% of frames when the user wants playback at 30 fps. The interval between 18 and 30 fps in Figure 10 (left side) shows the MJPEG case, where I frames start to get dropped to achieve a 'higher user rate' (see the case t_w < 0 in Section 2.5.2). In the case of MPEG (bus.mpg), the growth of the frame loss rate slows down at 20 fps. The reason is that around this point the system starts to drop P frames, and there are fewer P frames than B frames in the compressed stream, hence the growth of the total number of dropped frames decreases. Note that dropping P frames frees more computational power because (1) a P frame is larger than a B frame, and (2) P frame decoding requires more processing time than B frame decoding. Figure 10 (right side)

Figure 10: Single stream load under Scenario 1 (left side: MJPEG movie; right side: MPEG movie). Both graphs show the fluctuation of the measured frame drop rate between the upper and lower bound over several experiments. The curve of interest is the average case, which is used for comparison with other movies.

also shows the case of t_w ≥ 0 from Section 2.5.2 within the interval between 0 and 8 fps. The interval between 8 and 20 fps in Figure 10 (right side) shows the behavior of dropped B frames (t_w < 0 from Section 2.5.2). Figure 11 shows the results of running different video clips for MJPEG and MPEG under Scenario 1, respectively (see Table 6). The resulting curves have the same shape for MJPEG and

Figure 11: Comparison of different videos under Scenario 1 (left side: MJPEG movies; right side: MPEG movies). Both graphs show a similar shape during adaptation; however, note the different influence of the additional load: the factorial program causes a lower frame drop rate than an additional VOD client for the MJPEG movie, but a higher frame drop rate for the MPEG movie.

MPEG streams, respectively, which indicates that our adaptive algorithm behaves consistently and enforces similar behavior on any of the compressed video streams. The difference that exists among the various videos is caused by the different frame sizes and frame patterns (combinations of I, P, B frames). Hence, the various curves show the dependencies between the task processing times (t_dec, t_dis, t_recv) and the frame sizes (the g function in Section 2.5.2). Note that in the case of MPEG, the

      Factorial            Two Clients          Single Client
fps   Total%   B    P      Total%   B    P      Total%   B    P
3     0        0    0      0        0    0      0        0    0
5     6        10   0      0        0    0      0        0    0
6     20       30   0      3        5    0      0        0    0
8     33       50   0      28       43   0      0        0    0
10    57.3     86   0      40       60   0      24       36   0
12    67.2     100  2      51.3     77   0      45       67   0
15    73       100  24     60       96   0      55       82   0
16    75.7     100  34     68.3     100  6      62       93   0
20    79.2     100  47     73.9     100  27     72       100  20
25    83       100  61     80       100  53     75       100  31
30    86.4     100  74     85       100  68     81       100  54

Table 7: Detailed information on the B and P frame loss rates for the MPEG movie bus.mpg.

video clips with the same frame pattern (IPB pattern) have similar frame loss behavior even when they have different frame sizes (coaster.mpg versus orincsa.mpg). This also shows that the adaptive algorithm applies in the same fashion.

4.2 Scenario 2

Figure 12 shows the comparison of the frame drop rate parameter for the observed movie under additional loads (second client and factorial program), and Table 7 shows a more detailed view of one MPEG video clip under various loads, especially the boundaries where B and P frames start to be dropped to achieve the user's perceptually higher frame rate.

Figure 12: Various loads under Scenario 2 (the viewed movie is, on the left side, the MJPEG movie spaceshuttle.mpg; on the right side, the MPEG movie bus.mpg). The additional load at the client side, besides the viewed movie, is an additional client/server application or a local factorial program.

First, we ran two clients at the same host. Both clients play the same movie clip (bus.mpg or spaceshuttle.mpg). We then monitored the performance (frame loss rate) of one of the clients. Figure 12 shows the frame loss rate comparison between the single client in Scenario 1 and two clients running at the same host. The video streams come from a single server, which can serve multiple requests. We also tested multiple servers sending streams to two clients at the same host, and the results are very similar (shown in Figure 12). Note that the frame drop rate increases rapidly (by 40%) for MJPEG when two clients run simultaneously at the same host. This is a much higher increase of the drop rate than in the MPEG case. The reason is that in the MJPEG case both clients, with I frames only, require the full CPU bandwidth all the time, whereas in the MPEG case the B and P frames dropped by one client free CPU load for the other client, hence the CPU load is more balanced.

Second, we ran a factorial program as an additional CPU load at the same host where the VOD client resides, and monitored the frame loss rate of the VOD client. The comparison of the frame loss rate between a single client and the aggregate load with the factorial program (Figure 12) is similar to the previous case, i.e., a higher frame loss rate for the observed movie clip with the factorial load in the background. The factorial function has more effect on MPEG than on MJPEG: the system starts to drop P frames at 12 fps, compared to 16 fps for two clients and 20 fps for a single client (see Figure 12, right side, and Table 7). For MPEG streams it is important to point out that in Scenario 2 the utilization by other applications of the computational power freed when dropping P frames can be observed: Figure 11 and Table 7 show that when additional load is applied, the curves come closer together after passing the 20 fps bound.

4.3 Evaluation of Perceptual QoS

All experiments show that there is a clear tradeoff between temporal and spatial perceptual quality at a high requested user rate. We evaluate both perceptual qualities from our subjective point of view. We test only user rates up to 30 fps, because this is the TV frame rate to which a human user is accustomed. For user frame rates above the system rate r_s, there are several zones where the spatial perceptual quality degradation is (1) not noticeable, (2) noticeable, but not annoying, and (3) annoying. For the MJPEG movie, between 18 and 24 fps user rate, the degradation of the spatial perceptual quality is not noticeable. Between 24 and 28 fps the degradation of the spatial perceptual quality is noticeable, but not annoying. Between 28 and 30 fps the degradation starts to be annoying, but is still acceptable. One explanation of the high acceptance of spatial degradation is that the MJPEG movies have a small frame size (160 x 128 pixels). In small videos, the details are not important to the human eye; the user concentrates on large objects, and the temporal quality becomes much more important than the spatial quality.

For the MPEG movie (bus.mpg), between 8 and 11 fps user rate, the degradation of the spatial perceptual quality was not noticeable. In this case the adaptive algorithm drops < 50% of the B frames. Between 12 and 20 fps the degradation is noticeable and annoying. The behavior we experience is smooth - jump - smooth - jump: during certain time intervals the display rate is smooth, and during other intervals certain frames are noticeably missing and cause a jump effect. This situation happens when 50 to 100% of the B frames are dropped. Similarly, between 21 and 25 fps the degradation is annoying; here the adaptation algorithm drops P frames. However, an interesting result occurs between 25 and 30 fps, where the perceptual QoS degradation is noticeable, but not annoying, hence

acceptable. For the detailed frame drop rates of this video, see Table 7. For the MPEG movie orincsa.mpg, between 8 and 15 fps user rate, the degradation was not noticeable (0-8% of B frames dropped). Between 15 and 25 fps the degradation is noticeable, annoying and not acceptable (30% of P frames dropped at 20 fps). The behavior is smooth - jump - smooth - jump. Between 25 and 30 fps the degradation is also not acceptable; I frames start to be dropped (4-21%), which causes the jump behavior. Overall, MPEG movies have a much lower spatial quality acceptance in comparison to MJPEG movies because (1) the frame size is larger (352 x 240 pixels) than in the MJPEG case, and (2) the overall system rate is much lower than in the MJPEG case (longer decoding). The user pays more attention to image detail and recognizes losses in spatial resolution much faster during playback. When comparing MPEG movies, their spatial quality acceptance varies. It depends on the IPB compression patterns which, once frames start to be dropped, cause the annoying smooth - jump - smooth - jump behavior during the viewing process.

5 Summary and Conclusion

This paper presents the design of our tuning system, its implementation and the results achieved. The system is currently embedded in a VOD application, and we assume that the underlying OS and network behave in a non-deterministic fashion. With services such as QoS specification, QoS monitoring and QoS adaptation, it aims towards a plug-and-play service structure and the provision of graceful degradation for visual media streams. The QoS specification service is based on probes to determine a realistic system rate for VOD streams, which can be utilized for the negotiation of frame rates between servers and clients. The QoS monitoring and adaptation services are oriented towards accommodating user-specified frame rate changes (perceptual QoS changes) at the graphical user interface. Our adaptation algorithm with its renegotiation protocol reacts quickly to user requests for frame rate changes. It also provides the user with the control necessary to gracefully adapt the perceptual QoS, because frames are dropped incrementally from the least important (B frames) to the most important (I frames). This approach guarantees graceful QoS degradation, an important system feature in VOD applications.

References

[ACH96] C. Aurrecoechea, A. Campbell, and L. Hauw. A review of quality of service architectures. Multimedia Systems Journal, to appear, 1996.
[BFM+96] A. Banerjea, D. Ferrari, B.A. Mah, M. Moran, D.C. Verma, and H. Zhang. The Tenet Real-Time Protocol Suite: Design, Implementation and Experiences. ACM Transactions on Networking, 4(1):1-10, February 1996.
[CCH93] A. Campbell, G. Coulson, and D. Hutchison. A Multimedia Enhanced Transport Service in a Quality of Service Architecture. In Workshop on Network and Operating System Support for Digital Audio and Video '93, Lancaster, England, November 1993.
[CCR+95] G. Coulson, A. Campbell, P. Robin, G. Blair, M. Papathomas, and D. Shepherd. The Design of a QoS-Controlled ATM-Based Communications System in Chorus. IEEE JSAC, 13(4):686-699, May 1995.
[Cha89] C. Chatfield. The Analysis of Time Series: An Introduction. Chapman and Hall, fourth edition, 1989.
[CSZ92] D.D. Clark, S. Shenker, and L. Zhang. Supporting Real-Time Applications in an Integrated Services Packet Network: Architecture and Mechanism. In SIGCOMM '92, pages 14-22, Baltimore, MD, August 1992.
[CTCL95] Z. Chen, S-M. Tan, R.H. Campbell, and Y. Li. Real Time Video and Audio in the World Wide Web. In WWW '95, 1995.
[Fer90] D. Ferrari. Client Requirements for Real-Time Communication Services. Technical Report 90-007, International Computer Science Institute, Berkeley, CA, March 1990.
[FV90] D. Ferrari and D. C. Verma. A Scheme for Real-Time Channel Establishment in Wide-Area Networks. IEEE JSAC, 8(3):368-379, April 1990.
[JM92] V. Jacobson and S. McCanne. vat, Video Audio Tool. UNIX manual page, 1992.
[KBC95] H.T. Kung, T. Blackwell, and A. Chapman. Credit Update Protocol for Flow-Controlled ATM Networks: Statistical Multiplexing and Adaptive Credit Allocation. In ACM SIGCOMM, pages 101-115, London, UK, 1995.
[KMR93] H. Kanakia, P. P. Mishra, and A. Reibman. An Adaptive Congestion Control Scheme for Real-Time Packet Video Transport. In Proceedings of SIGCOMM '93, Baltimore, MD, August 1993.
[KS95] S. Keshav and H. Saran. Semantics and Implementation of a Native-Mode ATM Protocol Stack. Internal technical memo, AT&T Bell Laboratories, Murray Hill, NJ, January 1995.
[LLSY91] J. W. S. Liu, K.-J. Lin, W.-K. Shih, and A. C. Yu. Algorithms for Scheduling Imprecise Computations. IEEE Computer, pages 58-68, May 1991.
[Nah95] K. Nahrstedt. An Architecture for End-to-End Quality of Service Provision and its Experimental Validation. PhD thesis, University of Pennsylvania, August 1995.
[NHK96] K. Nahrstedt, A. Hossain, and S. Kang. A Probe-Based Algorithm for QoS Specification and Adaptation. In Proceedings of the 4th IFIP Workshop on Quality of Service, pages 89-100, Paris, France, March 1996.
[NS95] K. Nahrstedt and R. Steinmetz. Resource Management in Networked Multimedia Systems. IEEE Computer, pages 52-63, May 1995.
[NS96] K. Nahrstedt and J. M. Smith. Design, Implementation and Experiences of the OMEGA End-Point Architecture. IEEE Journal on Selected Areas in Communications, Special Issue on Distributed Multimedia Systems and Technology, to appear, 1996.

[OT93] S. Oikawa and H. Tokuda. User-Level Real-Time Threads: An Approach towards High Performance Multimedia Threads. In Proceedings of the 4th International Workshop on Network and Operating System Support for Digital Audio and Video, pages 61-71, November 1993.
[SK94] R. Sharma and S. Keshav. Signalling and Operating System Support for Native-Mode ATM Applications. In ACM SIGCOMM, pages 149-157, London, UK, September 1994.
[SN95] R. Steinmetz and K. Nahrstedt. Multimedia: Computing, Communications, and Applications. Prentice Hall, 1995.
[TM89] H. Tokuda and C. W. Mercer. ARTS: A Distributed Real-Time Kernel. ACM Operating Systems Review, 23(3):29-53, July 1989.
[TNR90] H. Tokuda, T. Nakajima, and P. Rao. Real-Time Mach: Towards a Predictable Real-Time System. In Proceedings of the USENIX Mach Workshop, October 1990.
[WH94] L. Wolf and R. G. Herrtwich. The System Architecture of the Heidelberg Transport System. ACM Operating Systems Review, 28(2), April 1994.

