An Adaptive Distributed Multimedia Streaming Server in Internet Settings

Roland Tusch, Christian Spielvogel, Markus Kröpfl, László Böszörményi
Institute of Information Technology, Klagenfurt University, Austria
E-mail: {laszlo, roland}@itec.uni-klu.ac.at, {cspielvo, mkroepfl}@edu.uni-klu.ac.at

ABSTRACT

We present an adaptive distributed multimedia server architecture (ADMS) that builds upon the idea of offensive adaptivity, in which the server proactively controls its layout through the replication or migration of server components to recommended hosts. Proactive actions are taken when network or server resources become critical for fulfilling client demands. Recommendations are provided by a so-called "host recommender", which is an integral part of Vagabond2, the middleware used for component distribution. Recommendations are based on measured or estimated server and network resource availabilities. Network distance and host resource metrics, obtained from the network and host resource services respectively, may be communicated as MPEG-21 DIA descriptors. Finally, we evaluate our architecture in a real-world streaming scenario.

Keywords: offensive server adaptation, component-based streaming server, resource aware applications, resource capacity measurement and estimation, MPEG-7, MPEG-21

1. INTRODUCTION

In general, adaptivity is the capability to respond to changes in the environment by changing some of one's own characteristics, without losing one's identity. The identity of a multimedia streaming server is strongly influenced by its popularity, that is, by the number of clients it is able to serve. Most existing distributed multimedia servers are monolithic and performance-optimized to cope with thousands of simultaneous client requests. Typically, they are designed for local area networks and incorporate special hardware devices like multiprocessor engines, RAID systems, and high-speed inter-host connections. These systems work fine as long as most clients reside in the same LAN. However, as more client requests originate outside the LAN while expecting the same presentation quality as local clients, existing servers begin to "lose their identity": clients do not perceive the expected presentation quality and hence start to renege, or stop requesting streams from these servers altogether. Stream-level adaptation can certainly help the server preserve its identity by reducing the stream's temporal or spatial dimensions in order to serve clients with lower quality demands. But in the case of high quality demands it is not the server, but the network that becomes the bottleneck. In this case, existing servers have to reject the client requests and lose identity. One solution to this problem is the use of proxy servers which have better link bandwidths to the clients. However, this approach has only limited power, as the servers do not have explicit control over the locations of client-side proxies.

ADMS tries to keep its identity in these cases as well. It is composed of a variable number of server components, whose number and locations may change depending on client demands. Built on top of an underlying middleware enabling proactive server adaptations (Vagabond2)1, it defines a strictly limited number of so-called adaptive server applications (ASAs) which represent its building components.2 The combination of the components defines a virtual streaming server, whose size may grow and shrink on demand. The optimum size of the server is determined by a central component which has knowledge about the distribution of media streams, the number and locations of clients to serve, network distance metrics between all available server hosts, and resource usage metrics on all server hosts. While the former two parameters are managed by the central component itself, the latter two are obtained from the middleware, which also provides recommendations regarding a proper placement of adaptive servers. Component adaptation is finally achieved by using Vagabond2's management services.

2. RELATED WORK

It has been shown that distributed multimedia servers have benefits over single-server architectures regarding scalability and server-level fault tolerance.3 We argued in earlier papers2, 4 that existing distributed server architectures such as the Berkeley Distributed VoD System,5 the Tiger Video Fileserver,6 or the EURECOM VoD Server7 are monolithic and performance-optimized towards one main goal: serving thousands of simultaneous client requests. However, in heterogeneous environments it is usually not the server, but the network that becomes a bottleneck. This is especially the case if a certain level of quality of service is to be guaranteed in terms of minimum available bandwidth and maximum packet delay, jitter, and packet loss.

An offensive server architecture requires a QoS-aware middleware providing active support for adaptation steps. Extensive work has been done in the area of QoS-aware middleware for ATM-based networks and for systems using RSVP.8-10 These systems are hardly appropriate for the Internet, where resource reservation is still more theory than practice. On the other hand, considerable work has been done on end-to-end distance monitoring and estimation in the Internet.11-15 Such bandwidth and delay measurement and estimation algorithms can be used to approximate the QoS awareness a middleware needs for ADMS. Although certain middleware systems, like Jini16 or Symphony,17 support dynamic replication or migration of services and components, they do not provide measurements and estimations of network distances and server resources.

3. VAGABOND2 - A MIDDLEWARE FOR ADAPTIVE SERVERS

Vagabond2 is a CORBA-based mobile agent system enabling the implementation of adaptive servers in Internet settings.1 An adaptive server can either be a single service, such as an adaptive web server that is moved between server hosts, or a distributed service consisting of a dynamic number of components, as in the case of ADMS. In each case an adaptive server represents a kind of virtual server whose identity is not bound to a fixed physical address. Moreover, in the case of a distributed service the server may span a variable number of server hosts, depending on client demands. Although the concept of virtual servers was already addressed by other systems like Symphony,17 there are major differences between Vagabond2 and Symphony, since Symphony was not designed for soft-real-time requirements. Figure 1 illustrates Vagabond2's system architecture, both from the view of service distribution and of service layering. The major part of Vagabond2 is written in Java in order to facilitate code movement and remote execution. Operating-system-dependent operations like low-level resource monitoring and measurements are implemented in C and C++.

Figure 1. The Vagabond2 System Architecture

3.1. Host Service

The basic service of Vagabond2 is the host service. It is part of the central service Center and allows harbours to be registered and deregistered. A harbour represents a runtime environment for adaptive server applications (ASAs) on a certain host. Once a harbour is started on a host, it registers itself as a CORBA object with the host service to become a member of the Vagabond2 system. A harbour provides operations to load an ASA onto it and to evacuate an ASA from it. To support the loading of an ASA, Vagabond2 defines an ApplicationInfo interface (see figure 2), which serves as a meta information descriptor.
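To make the host service contract concrete, the following is a minimal Java rendering of the operations just described. The authoritative definitions are the CORBA IDL interfaces shown in figure 2; all names and signatures below are illustrative assumptions only.

```java
// Hypothetical Java view of the host-service contract; the real definitions
// are given in CORBA IDL (see figure 2).
interface HostService {
    void registerHarbour(Harbour harbour);   // called by a harbour on startup
    void deregisterHarbour(Harbour harbour); // called on harbour shutdown
}

// A harbour is a runtime environment for ASAs on one host.
interface Harbour {
    void load(ApplicationInfo asa);     // load an ASA onto this harbour
    void evacuate(ApplicationInfo asa); // remove a running ASA from this harbour
}

// Meta information descriptor for an ASA (the role of the ApplicationInfo
// interface mentioned in the text; the fields are assumptions).
interface ApplicationInfo {
    String name();     // logical ASA name
    String codeBase(); // location from which the Java code can be fetched
}
```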

Figure 2. Core Structures and Interfaces of Vagabond2 (in CORBA IDL)

The number of registered harbours defines the maximum possible expansion of a distributed server application, and thus the maximum size of a virtual server.

3.2. Application Service

The application service is a central service for managing ASAs on Vagabond2 harbours. It can load an ASA onto a host registered with the host service, evacuate an ASA from such a host, and locate all hosts on which a certain ASA is running. Consequently, the application service also enables ASA migration and replication. Since ASAs are themselves CORBA objects, the application service may also provide direct access to an ASA on a remote host, if necessary.
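Migration and replication can be expressed in terms of the three primitives just named. The sketch below, reusing the hypothetical types from the previous listing, shows one plausible shape of such a service; it is an assumption, not the actual Vagabond2 IDL.

```java
import java.util.List;

// Hypothetical application-service contract built from the operations named in
// the text: load, evacuate, and locate.
interface ApplicationService {
    void load(ApplicationInfo asa, Harbour target);     // load ASA onto a harbour
    void evacuate(ApplicationInfo asa, Harbour source); // remove ASA from a harbour
    List<Harbour> locate(ApplicationInfo asa);          // all harbours running the ASA

    // Migration = replication to the target followed by evacuation at the source.
    default void migrate(ApplicationInfo asa, Harbour from, Harbour to) {
        load(asa, to);
        evacuate(asa, from);
    }

    default void replicate(ApplicationInfo asa, Harbour to) {
        load(asa, to);
    }
}
```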

3.3. Resource Broker

The resource broker is a service that can be used by resource-aware server applications (like ADMS) to determine the network conditions between all harbours, as well as the resource usage conditions on all harbours. It comprises two services: the network resource service and the host resource service.

3.3.1. Network Resource Service

The network resource service performs active measurements of network distance metrics between Vagabond2 harbours and stores the results in a database. It implements a four-layer architecture consisting of a discovery layer, a data collector layer, an estimation layer, and a propagation layer, from bottom to top.

The discovery layer calculates the network path from the source to the destination harbour by exploiting the Internet Control Message Protocol (ICMP). The discovery is realized by repeatedly sending UDP packets with consecutively increased TTL values to the destination host. Based on the resulting ICMP error messages, a path is computed and divided into network areas, which are bounded by the source host, the destination host, or subnet gateways. If a number of harbours are connected by the same network paths, network areas from previous discoveries can be reused, and distance metric measurements for these areas are omitted.

The data collector layer measures network distance metrics (bottleneck bandwidth, available bandwidth, delay, jitter, packet loss rate) on the routes provided by the discovery layer. While delay, jitter, and packet loss measurements are simple timestamp computations on ICMP echo replies, bandwidth is more difficult to compute, since it is influenced by both network bottlenecks and competing network traffic. The bottleneck bandwidth is defined as the base bandwidth reduced by the hop with the largest processing time (the bottleneck). To determine it, two packets with the smallest possible delivery gap are sent into the network. The arrival of the packets at the bottleneck results in a processing gap, which can be measured when the second packet finally returns to the sending host.18 The available bandwidth is determined by sending a series of n packets of 64 KB each (with n >> 2) into the network and calculating the gap between the arrival of the first and the last packet of the series. Packet fragmentation and error handling are also implemented, since the measurements are based directly on the IP layer.

Forecasting future network metrics on Vagabond2 routes is the task of the estimation layer. Since Vagabond2 enables proactive server adaptations, it provides this layer to predict future network conditions between Vagabond2 harbours. Predictions are made by applying time series models to the data collected by the previous layer. The models used are similar to those used by the network weather service (NWS)15 (e.g. running mean, exponential smoothing, and various median windows). To determine the best model for the next forecast, all models are applied to the target data set and the one with the least mean squared error is used. In contrast to NWS, the forecast is based on a confidence interval, and not on a single value.

Finally, the propagation layer provides access to measured and forecasted network metrics by implementing the network service interface of Vagabond2. Network metrics for routes and areas are communicated using MPEG-21 DIA descriptors.19 Figure 3 shows the structures of Vagabond2's network module which carry such descriptors. An MPEG21RouteCharacteristic structure describes the end-to-end network capability and network condition between two Vagabond2 harbours. For each subnet between the harbours, an area descriptor (MPEG21AreaCharacteristic) may be specified, which contains the network conditions within this subnet. A sample network characteristic descriptor between two harbours connected by a 100 Mbit/sec LAN is presented in figure 4.
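The least-MSE model selection performed by the estimation layer can be sketched compactly. The following Java fragment is a minimal illustration under assumed model parameters (window sizes, smoothing factor): it replays each candidate model over the measurement history and keeps the one with the smallest mean squared one-step-ahead error. The actual Vagabond2 layer additionally produces a confidence interval rather than a single value, which this sketch omits.

```java
import java.util.List;

// Minimal sketch of least-MSE forecast model selection, in the spirit of the
// estimation layer described above. Model parameters are assumptions.
final class ForecastSelector {

    interface Model {
        double predict(List<Double> history); // one-step-ahead forecast
    }

    // Running mean over the whole history.
    static final Model RUNNING_MEAN = h ->
            h.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);

    // Exponential smoothing with an assumed smoothing factor of 0.5.
    static final Model EXP_SMOOTHING = h -> {
        double s = h.get(0);
        for (double v : h) s = 0.5 * v + 0.5 * s;
        return s;
    };

    // Replays every candidate over the history and returns the model whose
    // one-step-ahead predictions had the least mean squared error.
    static Model selectBest(List<Double> history, List<Model> candidates) {
        Model best = candidates.get(0);
        double bestMse = Double.POSITIVE_INFINITY;
        for (Model m : candidates) {
            double squaredError = 0.0;
            int n = 0;
            for (int t = 2; t < history.size(); t++) {
                double err = m.predict(history.subList(0, t)) - history.get(t);
                squaredError += err * err;
                n++;
            }
            double mse = n > 0 ? squaredError / n : Double.POSITIVE_INFINITY;
            if (mse < bestMse) {
                bestMse = mse;
                best = m;
            }
        }
        return best;
    }
}
```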

Figure 3. CORBA Structures for Communicating Network Characteristics using MPEG-21 DIA Descriptors

3.3.2. Host Resource Service

The host resource service actively measures host resource metrics on Vagabond2 harbours and also stores the results in a database. Similar to the network resource service (see section 3.3.1), it implements a layered architecture, but without the discovery layer.

Figure 4. MPEG-21 DIA Descriptor for a Vagabond2 Route

The data collector layer periodically measures host resource metrics (available CPU, memory, disk capacity, and disk bandwidth) on each Vagabond2 harbour. Resource monitors are implemented for both Linux and Windows, which read the corresponding counters from the /proc filesystem and the registry, respectively. Additionally, at harbour startup it runs CPU benchmarks whose results reflect the computing power of the harbour. Furthermore, it allows ASAs to be benchmarked by executing them on a harbour and measuring the maximum number of clients they can serve per second, using read/send data rates of 500 kbit/sec by default. The resulting number gives an ASA performance index for that host.

The estimation layer tries to forecast resource availabilities on a given harbour based on the data collected by the previous layer. The same time series models are used as in the estimation layer of the network service. Additionally, it allows ASA benchmarks to be estimated for harbours on which the ASA has not been executed yet. Benchmarking an ASA on every Vagabond2 harbour is a time-consuming and expensive task and hence should be avoided. Estimation takes the following values as input: the ASA benchmark on the reference host, the host resource information of the reference host before executing the ASA, and the host resource information of the target host. Evaluations of the estimation algorithms are currently in progress.

Finally, the propagation layer allows measured and estimated host resource and application benchmark metrics to be queried by implementing Vagabond2's host resource service interface. Similar to Vagabond2's network module, the host module defines structures including MPEG-21 DIA descriptors for terminal characteristics. However, the MPEG-21 DIA schema had to be extended with customized types to fully reflect the harbour metrics of interest. An example of a terminal characteristic for a Vagabond2 harbour is shown in figure 5.
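Since the text specifies only the inputs of the benchmark estimation (the evaluation of concrete algorithms being in progress), the following is one plausible heuristic, sketched purely for illustration: scale the reference host's performance index by the resource ratio between the target and reference hosts. The scaling rule and all names are assumptions, not the paper's algorithm.

```java
// Illustrative benchmark-estimation heuristic; NOT the paper's algorithm.
final class AsaBenchmarkEstimator {

    // Subset of the host resource metrics named in the text.
    record HostResources(double cpuBenchmark, double diskBandwidthMbps) {}

    /**
     * Estimates the ASA performance index (clients servable per second) on a
     * target harbour from a measured index on a reference harbour.
     */
    static double estimate(double indexOnReference,
                           HostResources reference, HostResources target) {
        double cpuRatio  = target.cpuBenchmark() / reference.cpuBenchmark();
        double diskRatio = target.diskBandwidthMbps() / reference.diskBandwidthMbps();
        // Assume the ASA is limited by its scarcest resource on the target:
        // a fast CPU cannot mask a slow disk, and vice versa.
        return indexOnReference * Math.min(cpuRatio, diskRatio);
    }
}
```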

Figure 5. Extended MPEG-21 DIA Descriptor for a Vagabond2 Terminal

3.4. Adaptation Service

The adaptation service represents the top-most service layer of Vagabond2. Using historical and/or predicted network and host resource metrics from the resource broker, it suggests the optimum set of harbours for running a certain ASA, also taking into account the clients' request parameters (see structure RequestInfo in figure 2). Recommendations are based on a certain adaptation policy, which can be a) minimum load imbalance, b) minimum network delay, or c) minimum network consumption. Each adaptation policy aims to maximize the number of clients servable by the ASAs. Since ASAs may have different network requirements (e.g. an adaptive web server vs. an adaptive streaming server), different policies may be used for different ASAs. In case a), all harbours running the same ASA are kept load-balanced as far as possible. This policy is suitable for ASAs without real-time requirements, i.e. those that are delay and bandwidth insensitive. In case b), the total packet delay from all serving harbours to all requesting clients is minimized. This strategy is especially suitable for adaptive audio conferencing servers, where delay strongly affects the perceived presentation quality. Finally, case c) tries to minimize the overall network load between harbours running the same ASA or the same category of ASA (as in the case of a distributed server like ADMS). This strategy is especially suitable for bandwidth-sensitive server applications such as VoD servers. The adaptation policies and the adaptation service are part of Vagabond2's adaptation module, which is illustrated in figure 6.

Figure 6. The Adaptation Module of Vagabond2

The adaptation service provides access to a so-called host recommender, which recommends optimum configurations of Vagabond2 harbours. The adaptation service can also be configured to automatically adapt an ASA based on the configured policy. This is achieved by using the application service for loading, evacuating, migrating, or replicating the corresponding ASA. Thus, the adaptation service allows for recommended or automatic offensive adaptation by distributing ASAs among the registered Vagabond2 harbours.
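A minimal sketch of how an ASA owner might drive this service is given below. The three policy names come from the text; the Java types and method signatures stand in for the CORBA interfaces and are assumptions.

```java
import java.util.List;

// Hypothetical client-side view of the adaptation service and host recommender.
final class AdaptationSketch {

    interface Harbour {}
    interface ApplicationInfo {}
    interface RequestInfo {} // clients' request parameters (cf. figure 2)

    // The three adaptation policies described in the text.
    enum Policy { MIN_LOAD_IMBALANCE, MIN_NETWORK_DELAY, MIN_NETWORK_CONSUMPTION }

    interface HostRecommender {
        // Optimum set of harbours for running the ASA under the given policy.
        List<Harbour> recommend(ApplicationInfo asa, Policy policy,
                                List<RequestInfo> requests);
    }

    interface AdaptationService {
        HostRecommender hostRecommender();
        // When enabled, the service adapts the ASA automatically by loading,
        // evacuating, migrating, or replicating it via the application service.
        void enableAutomaticAdaptation(ApplicationInfo asa, Policy policy);
    }
}
```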

4. ADMS - AN ADAPTIVE DISTRIBUTED MULTIMEDIA STREAMING SERVER

On top of Vagabond2 we have developed a distributed multimedia streaming server as a special ASA. It takes advantage of Vagabond2's ability to migrate and replicate ASAs on demand, resulting in an adaptive distributed multimedia streaming server (ADMS). During the design of ADMS, four basic services were identified as necessary for the composition of a dynamically reconfigurable streaming server. Each service is derived from Vagabond2's AdaptiveApplication interface and represents a service-based component that can be reused and dynamically combined with other ADMS components.2 Consequently, the combination of ADMS components results in a virtual streaming server which can grow and shrink depending on client demands. In figure 7, a sample combination of ADMS components is given, including the protocols used to exchange information between them. Each component fulfills one substantial logical task in the media food chain. These substantial tasks are data acquisition, data streaming, data storage management, and global control of stream and ASA distribution. Following these tasks, the four basic components of ADMS are data distributors (DDs), data managers (DMs), data collectors (DCs), and cluster managers (CMs), respectively.

Figure 7. Components and Protocols Used in an ADMS Environment

The components may be implemented using different technologies, since interoperability is achieved by using CORBA as the communication infrastructure.

4.1. Data Distributor

A data distributor is responsible for striping media data received from a production client onto a selected set of data managers. There can be n data distributor instances in an ADMS environment, with n ≥ 0. In the case of n = 0, the data distributor functionality must be built into the production client itself. While a distributor obtains input media data via RTSP/RTP, HTTP, or direct CORBA method calls, it always distributes data by invoking CORBA methods of the data managers. The references to the target data managers can be given as input or recommended by the cluster manager (see section 4.4).

Data distribution is performed on the level of elementary streams (and not system streams) for three main reasons. First, it enables various combinations of elementary streams during stream playback. For example, one video stream with both an English and a German audio stream can easily be recomposed to video and English audio only. Second, it reduces the amount of media data to retrieve if only a subset of the elementary streams is needed (e.g. only an audio stream). And third, reassembling system-layer streams does not result in a correct system stream in the case of segment-based stream retrieval. Thus, a data distributor is a coding-aware component that needs to demultiplex system-layer streams into their elementary streams. In the current implementation only MPEG-1 system streams are supported.

The unit of distribution is a so-called stripe unit, which can be either of constant data length (CDL) or of constant time length (CTL). Each elementary stream can be distributed using either CDL or CTL mode. In CDL mode, data units of a configurable fixed size are generated and distributed using a distribution strategy like round robin. As in RAID level 5 systems, parity units may also be generated to cope with data manager failures. In CTL mode, the distributor parses the elementary streams, extracts their access units, and distributes the access units representing a configurable fixed amount of time (which defaults to one second). In this mode, parity unit generation is not supported, due to the unequal lengths of the stripe units.

In the case of non-live media input, the distribution process can also be driven by MPEG-7 metadata20 which describes a temporal decomposition of the media stream. Using an MPEG-7 descriptor, the accompanying stream can be decomposed into a number of segments, organized in an arbitrary number of levels. In figure 8, a sample MPEG-7 descriptor is shown which temporally decomposes a video stream into two segments.
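To make the CDL mode concrete, the following sketch stripes a byte stream into fixed-size units, assigns them round-robin to data managers, and appends one XOR parity unit per group of units, in the spirit of RAID level 5. Unit size, group layout, and all names are assumptions; the actual distributor additionally performs elementary-stream demultiplexing, which is omitted here.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of CDL striping with XOR parity in the spirit of RAID level 5
// (parity rotation is omitted for brevity, so parity lands RAID-4-style).
final class CdlStriper {

    /**
     * Splits a stream into fixed-size stripe units, assigns them round-robin to
     * dataManagers targets, and appends one XOR parity unit per group of
     * (dataManagers - 1) data units. A trailing partial group is left without
     * parity in this sketch.
     */
    static List<List<byte[]>> stripe(byte[] stream, int unitSize, int dataManagers) {
        List<List<byte[]>> perManager = new ArrayList<>();
        for (int i = 0; i < dataManagers; i++) perManager.add(new ArrayList<>());

        int groupSize = dataManagers - 1; // data units per parity group
        byte[] parity = new byte[unitSize];
        int inGroup = 0, target = 0;

        for (int off = 0; off < stream.length; off += unitSize) {
            byte[] unit = new byte[unitSize]; // last unit is zero-padded
            System.arraycopy(stream, off, unit, 0,
                             Math.min(unitSize, stream.length - off));
            for (int i = 0; i < unitSize; i++) parity[i] ^= unit[i];

            perManager.get(target).add(unit);
            target = (target + 1) % dataManagers; // round-robin placement

            if (++inGroup == groupSize) {         // close the parity group
                perManager.get(target).add(parity);
                target = (target + 1) % dataManagers;
                parity = new byte[unitSize];
                inGroup = 0;
            }
        }
        return perManager;
    }
}
```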

Figure 8. Temporal Decomposition of a Video Stream using MPEG-7

Temporal decompositions are taken into account during stripe unit distribution by performing a per-segment distribution instead of a per-stream distribution. Leaf segment definitions are used to distribute the stripe units of these segments. Non-leaf segment definitions are used to compose a segment tree with the target data managers (see section 4.2).

4.2. Data Manager

Data managers provide a CORBA interface for the efficient storage and retrieval of stripe units of elementary streams or elementary stream segments. In an ADMS environment there can exist n data managers, with n ≥ 1. Thus, a data manager cannot be built into a client application, since it is a pure servant. Recalling that a data distributor stripes an elementary stream (or segments of it) among a number of data managers, each data manager only stores a portion of the stream. If the distributor uses the CDL striping approach, the target data managers remain coding-unaware regarding the distributed media stream. In the case of CTL network striping, they become coding-aware, since one stripe unit contains the access units for exactly the configured time length. While CDL stripe units have the advantage of keeping the affected data managers load-balanced, retrieving data collectors have to deal with access unit fragmentation. The CTL striping approach may cause unequal load distribution among data managers, but facilitates media stream buffering and play-out for data collectors. Figure 9 shows how a data manager internally organizes its stored media streams.

Figure 9. Internal Storage Organization of a Data Manager

Basically, a set of partial media streams is managed, which themselves consist of a set of leaf and compound media segments. Leaf media segments store stripe unit data in segment files. Compound media segments represent logical combinations of leaf and/or compound media segments. With compound segments, a data manager can organize segments in a segment tree without duplicating segment data. In general, stream segmentation is supported to allow more efficient media data buffering in the case of per-segment access to stream data, as well as to enable segment-based media stream migration or replication.
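This leaf/compound organization is a classic composite structure. A minimal sketch under assumed names follows; the real interface also covers stripe-unit storage and retrieval, which is omitted here.

```java
import java.util.List;

// Composite sketch of the data manager's segment organization: leaf segments
// hold stripe-unit data in segment files, compound segments combine children
// logically, so no segment data is ever duplicated. Names are illustrative.
sealed interface MediaSegment permits LeafSegment, CompoundSegment {
    long durationMs();
}

// A leaf segment stores stripe-unit data in a segment file on disk.
record LeafSegment(String segmentFile, long durationMs) implements MediaSegment {}

// A compound segment is a logical combination of child segments; its duration
// is derived from the children rather than stored.
record CompoundSegment(List<MediaSegment> children) implements MediaSegment {
    public long durationMs() {
        return children.stream().mapToLong(MediaSegment::durationMs).sum();
    }
}
```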

4.3. Data Collector

The data collector performs the inverse operations to a data distributor. Given a set of elementary stream identifiers (and optionally segment identifiers), it collects stripe units from a given set of data managers for the corresponding elementary streams or segments. After a period of stripe unit pre-buffering, the collector re-sequences the units for each elementary stream or segment, synchronizes the streams, and sends them to the requesting client via RTP21 connections. The client may control the playback using the RTSP22 control protocol. In the case of CDL striping of an elementary stream, the collector provides server-level fault tolerance by exploiting parity units when data managers are unavailable. The collector may also incorporate a caching component for reduced client startup latencies and bandwidth consumption. In particular, it can play the role of a proxy server which is dynamically assigned to serve a group of clients. The assignment is performed by the cluster manager (see section 4.4). This is the major difference to usual proxies, which are selected by the clients themselves. There can exist n data collectors in an ADMS environment, with n ≥ 0. Again, in the case of n = 0, the data collector functionality must be built into the retrieval client (implementing the proxy-at-client model following Lee's framework3). Typically, n ≥ 1 holds and ADMS implements the proxy-at-server architecture.
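The parity-based fault tolerance mentioned above reduces to a simple XOR property: the missing stripe unit of a parity group is the XOR of all surviving data units plus the parity unit. A minimal sketch, with assumed names, matching the CDL striping sketch in section 4.1:

```java
// Sketch of the XOR reconstruction a data collector can perform when one data
// manager of a CDL parity group is unavailable. Names are assumptions.
final class ParityRecovery {

    /**
     * survivors holds all remaining units of one parity group (the surviving
     * data units plus the parity unit), each of the same CDL unit size; the
     * XOR of all of them reconstructs the single missing unit.
     */
    static byte[] reconstructMissingUnit(byte[][] survivors) {
        byte[] missing = new byte[survivors[0].length];
        for (byte[] unit : survivors)
            for (int i = 0; i < unit.length; i++)
                missing[i] ^= unit[i];
        return missing;
    }
}
```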

4.4. Cluster Manager

The cluster manager is the central component in the ADMS architecture. Usually, exactly one cluster manager instance exists in an ADMS environment. It is named cluster manager since it manages the number and locations of data distributors, data managers, and data collectors, resulting in a dynamic ADMS server application cluster. Furthermore, it represents the central point for handling client connections, although it does not serve client requests by itself. Instead, it redirects a request to an appropriate distributor or collector, respectively. Consider figure 10, where the request for an MPEG-1 system stream (Sample.mpg) is redirected to data collector dc1, which itself retrieves elementary stream units from the data managers dm1 and dm2. In this scenario, mdm represents a metadata manager which stores and retrieves MPEG-7 metadata from an MPEG-7 database. A client uses it as a directory service for retrieving a list of the available media streams stored in an ADMS, and for querying metadata for a certain media stream. A client may additionally send an MPEG-21-based usage environment descriptor which describes its terminal capabilities and network conditions (similar to the descriptors shown in figures 4 and 5). This descriptor is carried with the RTSP describe message to the cluster manager cm. The cluster manager then checks which data managers store stripe units of the requested stream, translates the request into network constraints (bandwidth, delay, loss rate), and selects the optimum data collector to serve the request. Hence, it solves the dynamic server selection problem if there are existing data collectors which meet the network constraints. If there is no such data collector, it has to solve the capacitated facility location problem and check whether a new data collector can be opened on an idle Vagabond2 harbour in order to serve the request. This is achieved by using Vagabond2's adaptation service, in particular the host recommender.23
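The selection step can be sketched as a filter over the constraint-satisfying collectors followed by a cost minimization. The delay-based cost function and all names below are assumptions; the fallback to the facility-location step is indicated in a comment.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Sketch of the cluster manager's collector selection: filter the data
// collectors whose path to the client satisfies the translated network
// constraints, then pick the cheapest one.
final class CollectorSelection {

    record NetworkConstraints(double minBandwidthKbps, double maxDelayMs,
                              double maxLossRate) {}

    record CollectorPath(String collectorId, double bandwidthKbps,
                         double delayMs, double lossRate) {}

    static Optional<CollectorPath> select(List<CollectorPath> candidates,
                                          NetworkConstraints c) {
        // Dynamic server selection: among the collectors meeting the
        // constraints, the one with the lowest delay to the client is chosen.
        return candidates.stream()
                .filter(p -> p.bandwidthKbps() >= c.minBandwidthKbps()
                          && p.delayMs() <= c.maxDelayMs()
                          && p.lossRate() <= c.maxLossRate())
                .min(Comparator.comparingDouble(CollectorPath::delayMs));
        // If the result is empty, the cluster manager falls back to the
        // capacitated facility location step: ask the host recommender whether
        // a new data collector can be opened on an idle Vagabond2 harbour.
    }
}
```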

5. SYSTEM EVALUATION

We have evaluated a prototype implementation of our offensive server in the ADMS test bed illustrated in figure 11. In particular, the effects of replicating data managers (including the requested media streams) closer to data collectors have been measured. The test bed consists of two LANs, one in Klagenfurt/Austria (I-LAN) and one in Budapest/Hungary (B-LAN), connected by the Internet. The geographical distance of 500 km between the two LANs ensured a real Internet setting. The retrieval client, located in the I-LAN together with the serving data collector, ran five repeated retrieval performance tests. In each run, a sample media stream of a certain size was retrieved from the data manager instances. In test run 0, all DMs ran in the remote B-LAN. In run 1, the DM from host 8 (including the requested media stream) was replicated to host 4, meaning that it was moved "closer" to the data collector. In each further test run, one additional data manager was replicated from the remote B-LAN to the local I-LAN.

Figure 10. A Stream Retrieval Scenario from an ADMS

In figure 12(a), a head-to-head comparison of throughput is given between replicating four data managers from the B-LAN to the I-LAN and replicating them inside the I-LAN. Obviously, replication inside a LAN does not make much difference, but a complete replication from the B-LAN to the I-LAN results in a total throughput gain of a factor of 20. The figure also shows that the throughput gain does not grow linearly, due to unequal stripe unit distributions. Figure 12(b) models this behavior and estimates the relative gain in throughput if a certain fraction of the stripe units is replicated from one LAN to the other. It shows that if, for example, the host recommender of Vagabond2's adaptation service wants to reach a 50% throughput gain between the two LANs, it has to replicate 95% of the stripe units (thus all data managers) to the LAN of the target data collector. The relative throughput gain rtg for a given unit distribution x and throughput ratio t_r is defined by equation 1; the ratio t_r = T_I-LAN / T_B-LAN parameterizes the curves.

rtg(x, t_r) = 1 / ((1 − x) ∗ t_r + x)    (1)
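As a quick sanity check of equation (1) against the numbers in the text, the following snippet evaluates it for the measured factor-20 throughput gap between the LANs (taken here as an assumed parameterization t_r = 20):

```java
// Evaluates equation (1): relative throughput gain for unit distribution x and
// throughput ratio t_r. With the assumed t_r = 20, x = 0.95 yields about 0.51,
// matching the statement that replicating 95% of the stripe units reaches
// roughly 50% throughput gain.
final class ThroughputGain {
    static double rtg(double x, double tr) {
        return 1.0 / ((1.0 - x) * tr + x);
    }

    public static void main(String[] args) {
        System.out.println(rtg(0.95, 20.0)); // prints ~0.5128
    }
}
```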

Evaluations of algorithms for dynamic server selection and component placement are currently in progress, and initial results have already been published.4, 23

6. CONCLUSION

An adaptive distributed multimedia streaming server (ADMS) in Internet settings, enabling offensive, location-based adaptation of its components, has been presented. We introduced the four basic ADMS components, data distributor, data manager, data collector, and cluster manager, which each fulfill one substantial logical task during media acquisition or media retrieval. Each component has to be derived from Vagabond2's AdaptiveApplication interface, which allows the component to be migrated or replicated on demand.

Figure 11. ADMS Test Bed and Test Scenario

(a) Head-to-head Throughput

(b) Relative Throughput Gain

Figure 12. Measured Throughput in Comparison to Estimated Throughput Gains

Moreover, this enables ADMS to be composed of a dynamic number of components, resulting in a virtual streaming server. Vagabond2 is used as the underlying middleware for component management and adaptation. It enables the implementation of adaptive servers with or without soft-real-time requirements by offering layered services to its clients. The central services used by the ADMS cluster manager are the application service and the adaptation service. With the application service, ADMS components are loaded onto or evacuated from Vagabond2 harbours. The adaptation service is used to query an optimum distribution of a certain ADMS component under a given adaptation policy. The adaptation service thereby uses the resource broker, which provides measured and estimated network distance metrics for routes between Vagabond2 harbours, as well as host resource metrics for the harbours themselves. Finally, we have evaluated our ADMS prototype implementation in a test bed consisting of servers connected by the Internet.

Our future work focuses on evaluating adaptation algorithms that also take into account host resource metrics of clients and network distance metrics between data collectors and clients. We also plan to compare the results of our active measurements with streaming feedback received from the clients by exploiting the RTCP protocol. Finally, we are investigating different stream distribution and re-distribution strategies which may be applied to media streams that are available in different qualities.

REFERENCES

1. B. Goldschmidt, R. Tusch, and L. Böszörményi, "A Mobile Agent-based Infrastructure for an Adaptive Multimedia Server," Parallel and Distributed Computing Practices, Special Issue on DAPSYS 2002, 2003. Also available as technical report TR/ITEC/03/2.05.
2. R. Tusch, "Towards an Adaptive Distributed Multimedia Streaming Server Architecture Based on Service-oriented Components," in Joint Modular Languages Conference (JMLC), August 2003.
3. J. Y. Lee, "Parallel Video Servers: A Tutorial," IEEE Multimedia 5(2), pp. 20–28, 1998.
4. R. Tusch, L. Böszörményi, B. Goldschmidt, H. Hellwagner, and P. Schojer, "Offensive and Defensive Adaptation in Distributed Multimedia Systems," Tech. Rep. TR/ITEC/03/2.03, Institute of Information Technology, Klagenfurt University, February 2003.
5. D. W. Brubeck and L. A. Rowe, "Hierarchical Storage Management in a Distributed VOD System," IEEE Multimedia 3(3), pp. 37–47, 1996.
6. W. Bolosky, J. Barrera, R. Draves, R. Fitzgerald, G. Gibson, M. Jones, S. Levi, N. Myhrvold, and R. Rashid, "The Tiger Video Fileserver," in 6th International Workshop on Network and Operating System Support for Digital Audio and Video (NOSSDAV), pp. 97–104, 1996.
7. J. Gafsi, U. Walther, and E. W. Biersack, "Design and Implementation of a Scalable, Reliable, and Distributed VOD Server," in Proceedings of the 5th Joint IFIP-TC6 and ICCC Conference on Computer Communications, 1998.
8. J. Zinky, D. Bakken, and R. Schantz, "Architecture Support for Quality of Service for CORBA Objects," Theory and Practice of Object Systems 3(1), 1997.
9. D. C. Schmidt, D. L. Levine, and S. Mungee, "The Design of the TAO Real-Time Object Request Broker," Computer Communications, Elsevier Science 21(4), 1998.
10. A. Hafid and G. Bochmann, "An Approach to QoS Management in Distributed Multimedia Applications: Design and Implementation," Multimedia Tools and Applications 9(2), 1999.
11. P. Francis, S. Jamin, V. Paxson, L. Zhang, D. F. Gryniewicz, and Y. Jin, "An Architecture for a Global Internet Host Distance Estimation Service," in IEEE INFOCOM, pp. 210–217, 1999.
12. W. Theilmann and K. Rothermel, "Dynamic Distance Maps of the Internet," in IEEE INFOCOM, 2000.
13. D. Andersen, H. Balakrishnan, M. Kaashoek, and R. Morris, "Resilient Overlay Networks," in Proceedings of the 18th ACM Symposium on Operating Systems Principles, pp. 131–145, 2001.
14. T. S. E. Ng and H. Zhang, "Predicting Internet Network Distance with Coordinates-based Approaches," in IEEE INFOCOM, pp. 170–179, 2002.
15. R. Wolski, N. Spring, and J. Hayes, "The Network Weather Service: A Distributed Resource Performance Forecasting Service for Metacomputing," Future Generation Computer Systems 15(5-6), pp. 757–768, 1999.
16. J. Waldo, "The Jini Architecture for Network-centric Computing," Communications of the ACM 42(7), pp. 76–82, 1999.
17. R. Friedman, E. Biham, A. Itzkovitz, and A. Schuster, "Symphony: An Infrastructure for Managing Virtual Servers," Cluster Computing 4(3), pp. 221–233, 2001.
18. R. L. Carter and M. E. Crovella, "Measuring Bottleneck Link Speed in Packet-switched Networks," Performance Evaluation 27–28, pp. 297–318, 1996.
19. A. Vetro, A. Perkis, and C. Timmerer, "Text of ISO/IEC 21000-7 FCD - Digital Item Adaptation," ISO/IEC JTC1/SC29/WG11/N5845, July 2003.
20. P. van Beek, A. B. Benitez, J. Heuer, J. Martinez, P. Salembier, Y. Shibata, J. R. Smith, and T. Walker, "MPEG-7 Multimedia Description Schemes XM (Version 7.0)," ISO/IEC JTC1/SC29/WG11/N3964, March 2001.
21. H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson, "RTP: A Transport Protocol for Real-Time Applications," IETF RFC 1889, January 1996.
22. H. Schulzrinne, A. Rao, and R. Lanphier, "Real Time Streaming Protocol (RTSP)," IETF RFC 2326, April 1998.
23. B. Goldschmidt and Z. László, "A Proxy Placement Algorithm for the Adaptive Multimedia Server," in Euro-Par 2003 - International Conference on Parallel and Distributed Computing, August 2003.