International Journal of Applied Engineering Research ISSN 0973-4562 Volume 10, Number 2 (2015) pp. 4973-4990 © Research India Publications http://www.ripublication.com
A Study on Proxy Caching Methods for Multimedia Streaming Ponnusamy S. P. Professor, Department of Computer Applications, Adhiparasakthi Engineering College, Melmaruvathur, Tamilnadu, 603319, India
[email protected]
Abstract In recent years, Internet users have shown great interest in watching videos on news, sports, education, agriculture, games, and similar content. Hence, it is essential to focus on multimedia streaming. Streaming multimedia is a major challenge because the objects are very large and require high bandwidth between the origin server and local clients to provide the best Quality of Service (QoS). To avoid excessive bandwidth consumption and reduce network traffic, proxy caching schemes have been developed to deliver cached content directly to the client while fetching noncached content from the origin server at the same time. To create more research thrust in this area, we survey various proxy caching methodologies for multimedia streaming. This paper discusses different categories of proxy caching methods, such as scalable, nonscalable, cooperative, and random seek support caching, based on the environment, QoS, latency, and dynamic seek. Index Terms— Multimedia Streaming, Proxy Caching, Cooperative Caching, Quality of Service, Cache Replacement.
Introduction With the explosive growth of the Internet, the use of multimedia applications such as Video on Demand (VoD), live games, movies, online tutorials, webinars, and live sports is also increasing. This kind of multimedia streaming is challenging for network service providers because it requires high bit rate data transfer and long playback with good QoS. High bit rate multimedia streaming has increased Internet traffic, bandwidth consumption, and the latency between clients and servers.
Streaming media can be delivered to users in different ways: using streaming protocols, using Content Delivery Networks (CDNs), or using proxy caching. CDNs replicate media and place it close to users. CDNs are complex environments, with many distributed components collaborating to deliver video content across different network nodes. Implementing a CDN is highly expensive, and it is difficult to replicate the content in all locations. Replicated CDN servers must be coordinated and synchronized to provide the same content and QoS to every client. Thus, research has focused on proxy caching to deliver VoD to clients. The major problems of CDNs have led to proxy caching schemes that deliver multimedia streams to clients with minimal delay. An effective means to reduce client-perceived access latency as well as server and network load is to cache frequently used data on a proxy server closer to the clients. Caching schemes have been developed and implemented to reduce traffic congestion and latency. Caches reduce latency by answering user requests from a location closer than the origin server. They also reduce network traffic, since each object is requested from the origin server only once (during the lifetime of the object), after which the cache satisfies all future requests for the same object. Another advantage of proxy caching is the rapid deployment of content: as proxy servers become widely deployed, content providers distribute their information through these caches. Caching can be divided into three categories according to the location of the caches: client side, server side, and proxy caching. Client side caches store the most popular content on the client's computer, normally managed by the browser. Server side caching is used to reduce the load on the server; here, an accelerator cache is located in front of one or more web servers.
If the requested object is found in the cache, the accelerator returns the object; otherwise the request is routed to a back end (origin) server. Both client side and server side caching provide little help in alleviating Internet congestion and server load. For this reason, a third type of caching, proxy caching, was introduced. A proxy server is deployed by content providers, enterprises, or Internet Service Providers (ISPs) to reduce network bandwidth consumption without degrading performance. Initial proxy servers acted as mirrors of the origin servers, where the complete content was replicated. Nowadays, proxy servers cache popular objects partially. To provide adequate quality in media streaming, a proxy server should satisfy requirements such as fast access, transparency, scalability, efficiency, adaptability, and simplicity.
Proxy Caching Environment and Architecture The proxy caching environment is classified into two types based on the client types and network connection: the homogeneous or nonscalable environment, and the heterogeneous or scalable environment.
Homogeneous Environment In a homogeneous network environment, clients of the same type or equivalent capacity are connected to the network, as shown in Fig. 1(a). The media server therefore keeps a single quality of video with the same frame size and frame rate, called constant bit rate streaming, which is served to all client requests through proxy caching. The main requirement for this environment is that all clients must have sufficient bandwidth and processing power. The advantages of this environment are lower complexity in processing the videos, less storage space, and simple media delivery to clients. Heterogeneous Environment In a heterogeneous network environment, different types of clients are connected to the network, as shown in Fig. 1(b), differing in display size, processing power, and bandwidth between proxies and clients. The media server therefore keeps videos of different qualities, in terms of video size, quality, and frame rate, to meet the different client requests. The multiple versions of a single video need to be stored individually on the media server and also cached in the proxy server, which consumes more storage in both places. To avoid this, layered encoded video schemes and transcoding enabled proxy caching schemes have been developed.
Figure 1. Proxy Caching Environment: (a) homogeneous environment, where clients have equivalent display size, processing power, and bandwidth; (b) heterogeneous environment, where clients range from low display size, low processing power, and low bandwidth to high display size, high processing power, and good bandwidth
In layered encoding, a video is prepared as several layers to match different client capacities. The proxy server identifies the client capacity and delivers the matching quality of video. In a transcoding enabled proxy, the proxy transcodes the requested (and possibly cached) video into an appropriate format and delivers it to the user. The advantage of these schemes is that clients always receive video quality matched to their capabilities. On the other hand, the schemes present many challenges, such as complexity of processing the videos, more storage space, and complexity in media delivery to clients. Proxy Cache Architecture Types To realize many of the properties of a proxy server, proxy caching designs usually allow proxy servers to interconnect. The different architectures are determined by how information is exchanged between the nodes and how the nodes are laid out in the network. Proxy caching architectures are classified as follows: 1) A standalone proxy caching architecture caches videos independently and serves nearby clients. It contacts the media server directly when the requested content is not cached. This system is very simple to implement and still reduces the network load considerably. 2) A hierarchical proxy caching architecture divides the caching into several levels. A request traverses up the hierarchy until the requested content is found; the response then traverses back down the hierarchy, leaving a copy of the content at each level along the path. The drawbacks of this scheme are that many replicas of objects are left throughout the hierarchy, that higher level caches become key access points that might fail or become bottlenecks, and that processing delay is added at each level of the hierarchy. 3) Distributed or cooperative caching operates with only one level of caches.
These caches distribute meta information on where to fetch cache misses. The distribution of the meta information may be hierarchical, but a cached object is only fetched within the same level. This model may suffer from high connection times and high bandwidth usage at large scale. System Architectures Proxies have their own storage space, keep copies of recently requested objects, and have a higher bandwidth connection to the origin server. A proxy server should contain units such as cache storage, a cache analyzer, and a stream fetcher to run the caching scheme, transfer the media to clients, and fetch the uncached parts of the media from the origin server. Fig. 2 shows the internal architecture of a proxy server. The cache storage unit keeps portions of the cached objects. The cache analyzer unit determines which portion of an object to cache, based on the proxy caching mechanism used. It also checks whether the part requested by the user is available in the cache. If the part is available, it is sent to the client; otherwise the cache analyzer informs the stream fetcher unit of the unavailable portion. The stream fetcher unit fetches
the uncached portion from the media server and sends it to cache storage, or directly to clients via the cache analyzer unit. When a client requests an object and the proxy server finds the object in the cache (a Hit), the proxy returns the cached object to the client. If the object is in the cache but is no longer valid, the proxy fetches the object from the origin server and sends it to the client. If the object required by the client is not cached (a Miss), the proxy server has to look for it on the origin server, fetch it, and deliver it.
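The hit/miss flow described above can be sketched as follows. The class and method names are hypothetical and serve only to illustrate the role of the cache analyzer and stream fetcher:

```python
# Minimal sketch of the proxy hit/miss flow described above.
# Names (ProxyCache, handle_request) are illustrative, not from a real system.
class ProxyCache:
    def __init__(self):
        self.store = {}  # object id -> (data, is_valid)

    def handle_request(self, obj_id, fetch_from_origin):
        entry = self.store.get(obj_id)
        if entry is not None and entry[1]:   # Hit: cached and still valid
            return entry[0]
        data = fetch_from_origin(obj_id)     # Miss (or stale copy): go to origin
        self.store[obj_id] = (data, True)    # keep a copy for future requests
        return data
```

Note that the origin server is contacted only on the first request; all subsequent requests for the same object are satisfied from the cache, which is exactly the traffic reduction argued for above.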
Figure 2. System Architecture Streaming media can also gain significant performance improvements from proxy caching, given its static content and highly localized access interests. Some important and unique features of streaming media, with their implications for proxy cache design, are listed in [1]: 1) huge media size; 2) reduced client latency; 3) intensive bandwidth use, mitigated by traffic reduction; 4) high interactivity, such as VCR functions; 5) reduced load on the server; 6) reduced cost to Internet Service Providers. A media object has a high data rate and a long playback duration, which combine to yield a huge data volume. Caching an object entirely at a proxy is clearly impractical, as several such large streams would exhaust the capacity of the cache. One solution is to cache only portions of an object. Consequently, which portions of which objects to cache must be carefully managed, so that the benefit of caching outweighs the synchronization overhead of the joint delivery. Fig. 3(a) illustrates the higher bandwidth consumption between web clients and media servers when no proxy server is used. Fig. 3(b) illustrates how bandwidth consumption is reduced by placing a proxy server between web clients and media servers.
(a). Bandwidth consumption without proxy server
(b). Bandwidth consumption with proxy server Figure 3. Bandwidth consumption between web clients and media servers Performance Metrics The performance of proxy caching is determined by several primary metrics: startup latency, proxy jitter, cache space consumption, and byte hit ratio. 1) Startup latency is the time between sending a request and the start of playback, including fetching the requested video objects from the server. 2) Byte hit ratio is the percentage of requested bytes that can be served from the proxy cache. 3) Cache space consumption is the amount of cache space used for an object in the proxy server. 4) Random seek means that the user can jump from any location to any other location in the media player at any time while watching the video. 5) Cache space optimization means freeing cache space for new objects, using cache replacement policies, when the cache space is exhausted. In addition to the primary metrics, additional aspects such as random seek support and cache replacement policies have to be considered to increase the
performance of the proxy caching. Normally a proxy server caches parts of objects to reduce the startup latency, because the proxy server is closer to the clients. If random seeks are allowed, the user can jump to any location, and the proxy server must immediately deliver the content from that location to the client. It is therefore essential to predict the user access pattern and apply the caching scheme accordingly. To satisfy all user random seeks, the proxy server has to cache more parts of each object. This reduces the startup latency and increases the byte hit ratio, but it consumes more cache space, which limits the number of objects the proxy server can hold. It is therefore necessary to find an optimal caching mechanism that consumes less cache space. This can be achieved by popularity based segment caching, although estimating popularity is challenging for new objects. However, admitting more objects into the proxy cache drains the cache space even when an optimal caching mechanism is adopted. To overcome this limitation, cache replacement techniques are applied to remove unwanted cached video objects from the proxy server. Proxy caching schemes for multimedia objects can be classified into two broad categories: the scalable proxy caching approach and the nonscalable proxy caching approach. The selection of the appropriate proxy caching scheme is based on the clients' properties, such as display size, processing power, available bandwidth, and current network traffic.
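The byte hit ratio defined above has a straightforward formulation; the following sketch (with hypothetical function names) shows how it would be computed from the proxy's counters:

```python
def byte_hit_ratio(bytes_from_cache, bytes_requested):
    # Fraction of all requested bytes that the proxy served from its cache;
    # the remainder had to be fetched from the origin server. A higher ratio
    # means lower origin bandwidth consumption.
    if bytes_requested == 0:
        return 0.0
    return bytes_from_cache / bytes_requested
```

For example, a proxy that served 750 MB out of 1000 MB of requested bytes from its cache achieves a byte hit ratio of 0.75.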
Scalable Proxy Caching The scalable proxy caching approach is developed to support heterogeneous environments. In a heterogeneous environment, the bandwidth of access networks, the clients' access model, the access pattern, and the network bandwidth of a given link vary greatly over time. The proxy server must therefore handle variable bit rate streams of the same multimedia object, according to the network bandwidth requirements. Many proxy caching schemes have been developed for heterogeneous environments. Scalable proxy caching is divided into three categories: QoS adaptive proxy caching, layered encoding proxy caching, and transcoding enabled proxy caching. QoS adaptive Caching This approach is developed to handle the characteristics and request patterns of different types of media, and the varying network conditions between client and proxy and between proxy and media server. In this type of caching, both the media server and the proxy server must keep multiple QoS versions of media objects, differing in frame rate, display size, and quality. The appropriate version of the object is selected and served based on the current network conditions of the clients. Reza Rejaie et al. developed Mocha [2], a quality adaptive multimedia proxy cache for Internet streaming built on top of Squid. Mocha caches popular streams and adaptively adjusts the quality of cached streams based on the stream popularity and the available bandwidth to the interested clients. Fang Yu et al. [3] proposed a QoS adaptive caching scheme for mixed media. The scheme enlarges the size of the cached part of
the media with a high hit ratio and shrinks the cached part with a high miss ratio at periodic intervals. Layered Encoded Caching Layered encoded caching is based on a layered encoding of the data; that is, the video object is split into several layers. The most significant layer, called the base layer, contains data representing the most important features of the object, whereas additional layers, called enhancement layers, contain data that refine the quality. The higher a layer is above the base, the more fine grained the quality refinement. Layered caching usually caches the lower layers, as they are relevant to all clients requesting the object. Bhofeng Liu et al. [4] and Reza Rejaie et al. [5] developed layered encoded video caching based on the popularity of segments. Niu Xianlong et al. [6] presented a cache scheduling scheme based on a layered coding VoD system. Data is organized as base layer units and enhancement layer units in the local cache, which are used as the units for caching and replacement. Transcoding Enabled Caching (TeC) TeC maintains a single video object and provides an appropriate video format to heterogeneous clients using a transcoding unit attached to the proxy server. The transcoding unit converts constant bit rate videos into variable bit rate videos. One potential advantage of TeC is that the origin server need not keep or generate different bit rate versions. Moreover, heterogeneous clients with various network conditions receive videos suited to their capabilities, as content adaptation is more appropriately done at the proxy server. Bo Shen et al. [7] developed caching strategies in transcoding enabled proxy systems for streaming media distribution networks. The scheme has three caching algorithms for TeC which take into account variants of the same video. The first two algorithms cache at most one version of a video object.
They operate differently when a user requests a video version coded at a lower bit rate than the one cached in the proxy. The third algorithm may cache multiple versions of the same video object to reduce the processing load at the transcoder. Yoohyun Park et al. [8] proposed hybrid segment based transcoding proxy caching of multimedia streams, assuming that an object has multiple versions at different bit rates.
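The central TeC decision, serving a request from a cached version of a possibly different bit rate, can be sketched as follows. This is an illustrative simplification, not the exact algorithms of Bo Shen et al. [7]:

```python
def serve_version(requested_kbps, cached_kbps):
    # TeC-style decision sketch: a cached higher-bit-rate version can be
    # transcoded down to the requested rate, but a cached lower-rate copy
    # cannot be upgraded, so that request must go to the origin server.
    if cached_kbps is None or cached_kbps < requested_kbps:
        return "fetch_from_origin"
    if cached_kbps == requested_kbps:
        return "serve_cached"
    return "transcode_down"
```

This asymmetry is why the first two TeC algorithms behave differently when the cached version is coded at a lower bit rate than the requested one.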
Non Scalable Proxy Caching The majority of current multimedia content on the Internet is coded in a nonscalable, single layered format, due to the implementation complexity of scalable and transcoding techniques on the proxy server. The nonscalable approach requires the proxy server to handle single bit rate or single layered streams. A nonscalable stream is encoded with a traditional video coder as a single layer; the receiver needs the whole single layer stream to decode it, since a nonscalable stream is not partially decodable. Nonscalable streams provide high coding efficiency, but offer limited support for receivers with heterogeneous processing, screen, and
bandwidth capacities. Nonscalable proxy caching is classified into sliding interval caching, prefix caching, segment caching, and popularity based proxy caching. Prefix Caching Prefix caching methods cache the start of an object, called the prefix. When serving a request, the proxy immediately delivers the prefix to the client and, meanwhile, fetches the rest of the object, called the suffix, from the server [9]. With prefix caching, the size of the prefix is chosen based on available resources and optimization objectives. Prefix caching can significantly reduce the startup latency of a stream, but it reduces the network and server load only to a limited extent, and it does not support random seek.
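The prefix serving idea can be sketched as a generator that emits the cached prefix first and then the suffix fetched from the origin. The names are hypothetical; a real proxy would fetch the suffix concurrently rather than sequentially:

```python
def serve_stream(video_id, prefix_cache, fetch_suffix):
    # Deliver the cached prefix at once so playback starts with low latency;
    # the suffix is then fetched from the origin server (sequentially here
    # for clarity; a real proxy overlaps the fetch with prefix delivery).
    prefix = prefix_cache[video_id]
    yield from prefix
    yield from fetch_suffix(video_id, start=len(prefix))
```

The client sees one continuous stream, while the origin server is contacted only for the suffix portion.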
Figure 4. Prefix/Suffix parts of a video segment [9] Dongliang Guan [10] developed an optimal prefix cache allocation, in which the allocation is approximately proportional to the square root of the video's total length. Li Zhu et al. [11] presented a prefix proxy caching scheme that uses bandwidth skimming to deliver video streams, where each client can receive data from at most two streams at the same time. Dakshayini M et al. [12-13] proposed a prefix caching scheme in which the size of the prefix cached at the proxy server is determined by the frequency of user requests for each video. Wei Tu [14] and Lian Shen [15] proposed a prefix segment mechanism for a flexible starting point. The size of the prefix is adjusted to the round trip time and the transmission capacity between the remote server and the proxy, and is always less than or equal to the suffix. Prefix algorithms use the traffic, popularity, and size of the videos to determine the size of the prefix cached for each video. Segment Caching Segment caching is a generalization of prefix caching. It partitions a media object into a series of segments and assigns a different caching value to each segment. Usually the first few segments are viewed more often than those at the end, so they may be prefetched. However, if the media object is popular enough, it may also be worthwhile to cache the later segments of a stream. If the popularity of an object decreases, the proxy may discard a large block of the object while possibly retaining the first segments, for example the prefix of the object. This approach yields roughly the same savings in startup latency and network load as prefix caching. Segmentation is classified into uniform segmentation, exponential segmentation, skyscraper segmentation, hybrid segmentation, and dynamic segmentation. In uniform segmentation, objects are segmented with a uniform length, as shown in Fig. 5.
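The boundaries produced by the uniform and exponential (pyramid-style) segmentation schemes can be sketched as follows; both functions are illustrative and return (start, end) offsets:

```python
def uniform_segments(total_len, seg_len):
    # Fixed-length segments, as in the fixed segment structure of Fig. 5.
    return [(s, min(s + seg_len, total_len)) for s in range(0, total_len, seg_len)]

def exponential_segments(total_len, base_len):
    # Each segment is twice the length of the previous one, so the early
    # part of the object is split finely (cheap to keep, good for startup)
    # while the tail is split coarsely.
    segments, start, length = [], 0, base_len
    while start < total_len:
        segments.append((start, min(start + length, total_len)))
        start += length
        length *= 2
    return segments
```

For a 10-unit object, uniform segmentation with length 4 yields three segments, while exponential segmentation with base length 1 yields segments of lengths 1, 2, 4, and 3 (the last one truncated).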
Figure 5. Fixed segment structure Kun-Lung Wu et al. [16] studied three media segmentation approaches to proxy caching: fixed, pyramid, and skyscraper. Songqing Chen et al. [17] developed segment based proxy caching that adopts an active prefetching method along with segment caching. Songqing Chen et al. [18] designed and implemented a segment based streaming media proxy called SProxy. James Z. Wang [19] developed Fragmental Proxy Caching (FPC) for streaming multimedia objects, dividing the objects into fragments much smaller than segments and caching the prefix part in a uniform manner. Ponnusamy SP et al. [20] developed the HPProxy caching method, in which objects are divided using hot points and the cached size of each sector is determined from the sublevel hot point values. The caching methods in [18-20] use uniform segmentation and constant caching. Wei Tu et al. [14], Xiaoling Li et al. [21], and Lei Guo et al. [22] proposed dynamic segment based caching algorithms that adjust the segment size according to its hit rate and the popularity of the objects. Songqing Chen et al. [23] proposed adaptive and lazy segmentation based proxy caching, which delays the segmentation as long as possible and determines the segment length from client access behaviour in real time. Popularity Based Proxy Caching This is a dynamic generalization of segment caching. The portion of each segment that is cached varies with the popularity of the segment. Popularity is determined by the number of times users have viewed a segment over a period of time. The most popular segments have a larger portion cached, and unpopular segments have only a minimal portion cached in the proxy. This caching scheme reduces startup latency more than prefix and segment caching. The challenge is maintaining popularity at segment level; moreover, this scheme is applicable only to older objects, because popularity can be measured only some period after the initial loading time.
Dakshayini M et al. [13], Jiang Yu [24-25], Beomgu Kang et al. [26], and GopalaKrishnan Nair T R et al. [27] proposed popularity caching algorithms based on the popularity of the segments. The popularity is calculated either separately for each segment or collectively for the entire object. The segment based approach keeps segments with high popularity in the cache for longer, but it is complex to track and count the popularity of each segment of an object. The collective popularity based algorithms cache a larger part of the object as its popularity increases. Popularity based caching algorithms either calculate the popularity from the
zero level and keep increasing it as more hits occur on the segments, or estimate the internal popularity of each video based on a k-transformed Zipf-like model [28-29] and choose the appropriate segments to cache. In summary, nonscalable proxy caching methods are very easy to implement at the proxy server due to the nature of single layered CBR videos. The proxy jitter is very low, and the complexity of the caching process is small. It is also easy to maintain the popularity of segments in the proxy server and in the origin server, and applying cache replacement policies to the objects is straightforward.
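One simple way to map collective popularity to a cached portion, in the spirit of the schemes above (the linear mapping and parameter names are illustrative assumptions), is:

```python
def cached_fraction(hits, max_hits, min_fraction=0.1):
    # Popular objects get a larger cached portion; every object keeps at
    # least a small prefix (min_fraction) so that brand-new objects, whose
    # popularity is still unknown, are not starved of cache space.
    if max_hits <= 0:
        return min_fraction
    return min_fraction + (1.0 - min_fraction) * min(hits / max_hits, 1.0)
```

An object with no recorded hits keeps only the minimal prefix, while the most popular object may be cached entirely; this reflects the trade-off between byte hit ratio and cache space consumption discussed earlier.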
Cooperative Proxy Caching Cooperative proxy caching schemes [9], [12], [30-33] were introduced to utilize cache space effectively and to reduce bandwidth consumption and latency. Media objects are segmented into equal sized segments and stored across multiple proxies, where they can be replaced at the granularity of a segment. Several local proxies, called home proxies, are responsible for answering client requests by locating and relaying the segments. Cooperative proxy caching schemes cache the segments either in the cooperating proxies alone or in both clients and proxies [33], as illustrated in Fig. 6. The major challenges in these schemes are allocating appropriate cache space in the proxies and finding the proxy that cached each segment. The client storage space is also used for caching in addition to the proxy caching system. These schemes place additional responsibility on the clients by extending the cache allocation and lookup process to both the client level and the proxy level.
Figure 6. Cooperative proxy caching without use of client storage [33] The overhead of a proxy cooperation system is determined by three separate cache functions: discovery, dissemination, and delivery. Discovery refers to how a proxy locates cached objects. Dissemination is the process of selecting which objects are to be cached and transferring those objects from the origin servers to the caches. Delivery defines how objects are delivered from the caching system to the
client at the time of the request. Alan T.S. Ip et al. [9], Zeng Zeng et al. [30] and Chen-Lung Chan et al. [31] provided prefix/suffix based cooperative proxy caching, whereas M. Dakshayini et al. [32] and Yoshiaki Taniguchi et al. [33] proposed QoS supported cooperative proxy caching. These schemes proved that well organized groups of proxies can achieve better performance than independent standalone proxies. Cooperating proxies must trust that the objects they accept from other caches have not been modified, and that cooperating partners will not act contrary to the common good. Cooperative caching that is not administered by a central authority would be difficult to implement; on the other hand, if the central administrative machine fails, the cooperative proxy caching model also fails. The implementation of cooperative proxy caching is therefore quite complex.
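The discovery function described above can be sketched with a deterministic segment-to-proxy mapping: every proxy computes the same hash, so any of them can locate the home proxy of a segment without a central directory. The plain modulo mapping here is an illustrative assumption; practical systems would typically use consistent hashing so that proxies can join or leave gracefully:

```python
import hashlib

def home_proxy(segment_key, proxies):
    # Hash the segment key to pick its home proxy. Because the mapping is
    # deterministic, discovery needs no lookup messages between proxies.
    digest = hashlib.md5(segment_key.encode()).hexdigest()
    return proxies[int(digest, 16) % len(proxies)]
```

A request for an uncached segment is then relayed to its home proxy instead of the origin server, which is how cooperating proxies share their combined cache space.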
Random Seek Supported Caching In streaming multimedia applications, some users want to perform VCR functions, such as fast forward, rewind, and random access, before finishing the playback of the entire object. During streaming, old media data is continuously drained out of the buffer and new media data is filled in. When a user clicks the play button or drags the playback progress cursor on the media player to start a streaming session or to seek to a position, the user usually has to wait for a period of time before playback begins. Distributing the cached parts across an object allows multimedia streaming in either the forward or the backward direction, which is essential for handling interactive VCR functions in proxy server environments. Wei Tu et al. [14], Lian Shen et al. [15], Songqing Chen et al. [17-18], [23], [37], James Z. Wang et al. [19], Ponnusamy SP et al. [20], Xiaoling Li et al. [21], Lei Guo et al. [22], and Yuan He [34] used segment based caching to support VCR functions by distributing the cached parts across the objects. From this study, it is understood that division into larger segments does not serve VCR functionality fairly, whereas division into smaller segments supports VCR functions easily, because every small segment will have some cached part.
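Handling a random seek reduces to mapping the seek position onto a segment index and checking whether that segment has a cached part. A minimal sketch, with hypothetical names:

```python
def locate_seek(position_s, segment_len_s, cached_segments):
    # Map a seek position (in seconds) to its segment index. If that segment
    # is cached, playback resumes from the proxy immediately; otherwise the
    # segment must first be fetched from the origin server.
    index = int(position_s // segment_len_s)
    return index, index in cached_segments
```

With small segments, more indices have some cached part, so more seek targets resolve to a cached segment; this is the fairness argument made above for fine-grained segmentation.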
Cache Replacement Continuous caching of video objects using any of the caching methods described in the sections above will exhaust the cache space of the proxy server. Hence, the proxy cache space must be freed to accommodate new objects by removing existing cached objects, either partially or fully. This optimization is done with various cache replacement policies. Most of them select an entire cached object, or a part of a cached object, as a victim and apply the policy to it. The victim objects are identified based on the cache requirement and the popularity of the existing objects. There are two types of cache replacement policies: single factor and multifactor replacement policies. Single factor replacement policies include LFU, LRU, FIFO, and RAND [5], [21], [35-36]. Multifactor replacement policies combine many factors, such as access frequency, recency, latency, and size, which
are called weighted functions [9], [14], [19], [22], [23], [37-43]. A different classification of cache replacement policies is given by Jin et al. [44]: 1) Recency based strategies incorporate recency (and size and/or cost) into the replacement process, using recency as the main factor. Most of them are more or less extensions of the well known Least Recently Used (LRU) strategy. LRU has been applied successfully in many different areas and is based on the locality of reference seen in request streams. Simple LRU variants do not combine recency and size in a useful, balanced way, and they do not consider frequency information, which could be an important indicator in VoD environments. 2) Frequency based strategies incorporate frequency (and size and/or cost) into the replacement process. They are based on the fact that different Web objects have different popularity values, and that these popularity values result in different frequency values. Frequency based strategies track these values and use them for future decisions. Least Frequently Used (LFU) based strategies require more complex cache management; LFU can be implemented, for example, with a priority queue. Many objects may have the same frequency count, in which case a tie breaker factor is needed. 3) Recency/frequency based strategies consider both recency and frequency under fixed or variable cost/size assumptions. Due to their special procedures, most of these strategies introduce additional complexity. 4) Function based strategies use a potentially general function to calculate the value of an object. They do not assume a fixed combination of factors or a fixed usage of data structures, and there is no built-in bias toward some objects. Through a proper choice of weighting parameters, one can try to optimize any performance metric, and a number of factors can be considered to handle different workload situations.
As a result, most cache replacement schemes have produced better results by applying function-based strategies at the proxy server, while the recently developed cache replacement algorithms have favored frequency-based replacement strategies.
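A function-based strategy of the kind the survey describes can be sketched as a weighted utility over the multifactor attributes listed earlier (frequency, recency, latency, size). The function, the weight values, and the entry-dictionary layout below are illustrative assumptions, not a formula from any cited paper.

```python
def utility(entry, now, weights):
    """Function-based value of a cached segment: higher means more worth
    keeping. Factors follow the survey's list: frequency, recency,
    fetch latency, and size."""
    w_freq, w_age, w_lat, w_size = weights
    age = now - entry["last_access"]
    return (w_freq * entry["freq"]
            + w_lat * entry["latency"]   # costly-to-refetch segments kept longer
            - w_age * age                # stale segments lose value
            - w_size * entry["size"])    # large segments are penalised

def choose_victims(cache, needed_bytes, now, weights=(1.0, 0.1, 0.5, 0.01)):
    """Evict lowest-utility segments until `needed_bytes` are freed."""
    victims, freed = [], 0
    for key in sorted(cache, key=lambda k: utility(cache[k], now, weights)):
        if freed >= needed_bytes:
            break
        victims.append(key)
        freed += cache[key]["size"]
    return victims
```

Tuning the weight vector is what lets a function-based policy optimize different performance metrics (hit ratio, byte hit ratio, delay saving) for different workloads.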
Conclusion This literature study has examined the various parameters involved in proxy caching: the type of proxy environment, measuring the startup latency, determining the size of the segments, choosing which part of a segment to cache, updating the popularity of the objects and adjusting the cached portion accordingly, adopting random seek support, and optimizing the cache space with various cache replacement policies. Table I summarizes the parameters that impact proxy caching. Scalable proxy caching is suitable for supporting clients in heterogeneous environments, but it compromises proxy caching performance in terms of startup latency, random seek support, popularity maintenance, and processing complexity at the proxy server. The additional time required to identify the current network conditions and to select the exact version, or to convert the content to the client's requirement, increases the startup latency, and maintaining popularity and user access patterns is tedious because different versions must be kept.

TABLE I. Comparison of various parameters on proxy caching
(Layered encoding, QoS Adaptive, and Transcoding are scalable proxy caching; Prefix/Suffix and Segment are nonscalable proxy caching.)

Properties                   | Layered encoding  | QoS Adaptive                 | Transcoding       | Prefix/Suffix     | Segment
Bandwidth consumption rate   | Reduced           | Reduced                      | Reduced           | Reduced           | Reduced
Startup latency              | High              | High                         | High              | Low               | Low
Cache space consumption rate | Low               | Low                          | Very Low          | Average           | Low
Proxy jitter                 | Very Low          | Average                      | Average           | Low               | Very Low
Bit rate supported           | Variable Bit Rate | Variable Bit Rate            | Variable Bit Rate | Constant Bit Rate | Constant Bit Rate
Number of versions           | Single version    | Single and multiple versions | Multiple versions | Single version    | Single version
Content adaptation           | Yes               | Yes                          | Yes               | No                | No
Cooperativeness              | Supported         | Supported                    | Supported         | Supported         | Supported
Cache processing complexity  | High              | Average                      | Average           | Low               | Low
Popularity support           | Complex           | Complex                      | Complex           | Low               | Easy
Random seek support          | Poor              | Poor                         | Poor              | Poor              | Easy
Replacement strategy support | Complex           | Complex                      | Complex           | Easy              | Easy
Nonscalable proxy caching is well supported on all the parameters shown in Table I, owing to the placement of the proxy server and the nature of the media cached in it. Compared with the prefix/suffix caching method, the segment method provides better support for popularity maintenance and random seek. Hence, our proxy caching method focuses on segment-based proxy caching for newly loaded objects with random seek support.
References
[1] Jiangchuan Liu, "Streaming Media Caching", in Springer Web Content Delivery, Web Information Systems Engineering and Internet Technologies Book Series, Vol. 2, pp. 197-214, 2005.
[2] Reza Rejaie and Jussi Kangasharju, "Mocha: A Quality Adaptive Multimedia Proxy Cache for Internet Streaming", in Proceedings of the 11th International Workshop on Network and Operating System Support for Digital Audio and Video, pp. 3-10, June 2001.
[3] Fang Yu, Qian Zhang, Wenwu Zhu and Ya-Qin Zhang, "QoS-Adaptive Proxy Caching for Multimedia Streaming Over the Internet", in IEEE Transactions on Circuits and Systems for Video Technology, Vol. 13, No. 3, pp. 257-269, March 2003.
[4] Baofeng Liu, Wenjun Zhang and Songyu Yu, "Proxy Caching Based on Segments for Layered Encoded Video over the Internet", in Proceedings of the IEEE 6th Circuits and Systems Symposium on Emerging Technologies: Frontiers of Mobile and Wireless Communication, Vol. 1, pp. 41-44, June 2004.
[5] Reza Rejaie, Mark Handley, Haobo Yu and Deborah Estrin, "Proxy Caching Mechanism for Multimedia Playback Streams in the Internet", in Proceedings of the 4th International Web Caching Workshop, March 1999.
[6] Niu Xianlong, Yang Shoubao, Wu Bin, Liu Xiaoqian and Guo Liangmin, "A Cache Scheduling Scheme Based on Layered Coding VOD System", in Proceedings of the 8th International Conference on Grid and Cooperative Computing, pp. 238-243, August 2009.
[7] Bo Shen, Sung-Ju Lee and Sujoy Basu, "Caching Strategies in Transcoding-enabled Proxy Systems for Streaming Media Distribution Networks", in IEEE Transactions on Multimedia, Vol. 6, Issue 2, pp. 375-386, April 2004.
[8] Yoohyun Park, Yongju Lee, Hagyoung Kim and Kyongsok Kim, "Hybrid Segment-based Transcoding Proxy Caching of Multimedia Streams", in Proceedings of the IEEE 8th International Conference on Computer and Information Technology Workshops, pp. 319-324, July 2008.
[9] Alan T. S. Ip, Jiangchuan Liu and John Chi-Shing Lui, "COPACC: An Architecture of Cooperative Proxy-Client Caching System for On-Demand Media Streaming", in IEEE Transactions on Parallel and Distributed Systems, Vol. 18, No. 1, pp. 70-83, January 2007.
[10] Dongliang Guan and Gang Xiong, "Optimal Prefix Cache Allocation Among Multiple Cooperative Local Proxies", in Proceedings of the 5th International Conference on Wireless Communications, Networking and Mobile Computing (WiCom '09), pp. 1-4, September 2009.
[11] Li Zhu, Gang Cheng, Nirwan Ansari, Zafer Sahinoglu, Anthony Vetro and Huifang Sun, "Proxy Caching for Video on Demand Systems in Multicasting Networks", in Proceedings of the Annual Conference on Information Sciences and Systems, The Johns Hopkins University, March 2003.
[12] M. Dakshayini and T. R. Gopalakrishnan Nair, "Client-to-Client Streaming Scheme for VoD Applications", in International Journal of Multimedia and its Applications, Vol. 2, No. 2, pp. 46-55, May 2010. http://arxiv.org/abs/1005.5436
[13] M. Dakshayini and T. R. Gopalakrishnan Nair, "An Optimal Prefix Replication Strategy for VoD Services", in Journal of Computing, Vol. 2, Issue 3, pp. 1-7, March 2010.
[14] Wei Tu, Eckehard Steinbach, Muhammad Muhammad and Xiaoling Li, "Proxy Caching for Video-on-Demand Using Flexible Starting Point Selection", in IEEE Transactions on Multimedia, Vol. 11, No. 4, pp. 716-729, June 2009.
[15] Lian Shen, Wei Tu and Eckehard Steinbach, "A Flexible Starting Point Based Partial Caching Algorithm for Video on Demand", in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME 2007), Beijing, pp. 76-79, July 2007.
[16] Kun-Lung Wu, Philip S. Yu and Joel L. Wolf, "Segmentation of Multimedia Streams for Proxy Caching", in IEEE Transactions on Multimedia, Vol. 6, No. 5, pp. 770-780, October 2004.
[17] Songqing Chen, Haining Wang, Xiaodong Zhang, Bo Shen and Susie Wee, "Segment-Based Proxy Caching for Internet Streaming Media Delivery", in IEEE Multimedia Magazine, Vol. 12, Issue 3, pp. 59-, September 2005. http://dx.doi.org/10.1109/MMUL.2005.56
[18] Songqing Chen, Bo Shen, Susie Wee and Xiaodong Zhang, "SProxy: A Caching Infrastructure to Support Internet Streaming", in IEEE Transactions on Multimedia, Vol. 9, No. 5, pp. 1062-1072, August 2007.
[19] James Z. Wang and Philip S. Yu, "Fragmental Proxy Caching for Streaming Multimedia Objects", in IEEE Transactions on Multimedia, Vol. 9, No. 1, pp. 147-156, January 2007.
[20] Ponnusamy S. P. and Kathikeyan E., "HPProxy: Hot-Point Proxy Caching with Multivariate Sectoring for Multimedia Streaming", in European Journal of Scientific Research, Vol. 68, No. 1, pp. 21-35, January 2012.
[21] Xiaoling Li, Wei Tu and Eckehard Steinbach, "Dynamic Segment Based Proxy Caching for Video on Demand", in Proceedings of the IEEE International Conference on Multimedia and Expo, Hannover, Germany, pp. 1181-1184, April 2008.
[22] Lei Guo, Songqing Chen, Zhen Xiao and Xiaodong Zhang, "DISC: Dynamic Interleaved Segment Caching for Interactive Streaming", in Proceedings of the 25th IEEE International Conference on Distributed Computing Systems (ICDCS 2005), Ohio, USA, pp. 763-772, June 2005.
[23] Songqing Chen, Bo Shen, Susie Wee and Xiaodong Zhang, "Adaptive and Lazy Segmentation Based Proxy Caching for Streaming Media Delivery", in Proceedings of NOSSDAV'03, pp. 22-31, June 2003.
[24] Jiang Yu, Chun Tung Chou, ZongKai Yang, Xu Du and Tai Wang, "A Dynamic Caching Algorithm Based on Internal Popularity Distribution of Streaming Media", in Springer-Verlag Transactions on Multimedia Systems, Vol. 12, Issue 2, pp. 135-149, October 2006.
[25] Jiang Yu, Chun Tung Chou, Xu Du and Tai Wang, "Internal popularity of streaming video and its implication on caching", in Proceedings of the 20th International Conference on Advanced Information Networking and Applications (AINA'06), Vol. 1, April 2006.
[26] Beomgu Kang, Eunjo Lee and Sungkwon Park, "Popularity-based Partial Caching Management Scheme for Streaming Multimedia on Proxy Servers over IP Networks", in Proceedings of the International Conference on Ultra Modern Telecommunications & Workshops (ICUMT '09), pp. 1-7, October 2009.
[27] T. R. Gopalakrishnan Nair and M. Dakshayini, "Stochastic Model Based Proxy Servers Architecture for VoD to Achieve Reduced Client Waiting Time", in International Journal of Computer Science Issues, Vol. 7, Issue 1, No. 3, pp. 73-80, January 2010.
[28] Zipf G. K., "Selected Studies of the Principle of Relative Frequency in Language", Cambridge, MA: Harvard University Press, 1932.
[29] Zipf G. K., "Human Behavior and the Principle of Least Effort", Cambridge, MA: Addison-Wesley, 1949.
[30] Zeng Zeng, Bharadwaj Veeravalli and Kenli Li, "A novel server-side proxy caching strategy for large-scale multimedia applications", in Journal of Parallel and Distributed Computing, Vol. 71, Issue 4, pp. 525-536, April 2011.
[31] Chen-Lung Chan, Te-Chou Su, Shih-Yu Huang and Jia-Shung Wang, "Cooperative proxy scheme for large-scale VoD systems", in Proceedings of the Ninth International Conference on Parallel and Distributed Systems (ICPADS'02), pp. 404-409, December 2002.
[32] M. Dakshayini and T. R. Gopalakrishnan Nair, "Cooperative Proxy Servers Architecture for VoD to Achieve High QoS with Reduced Transmission Time and Cost", in Proceedings of the Symposium on Multidisciplinary Research, pp. 104-115, 2008.
[33] Yoshiaki Taniguchi, Naoki Wakamiya and Masayuki Murata, "Quality-Aware Cooperative Proxy Caching for Video Streaming Services", in Journal of Networks, Vol. 3, No. 8, pp. 16-25, November 2008.
[34] Yuan He and Yunhao Liu, "VOVO: VCR-Oriented Video-on-Demand in Large-Scale Peer-to-Peer Networks", in IEEE Transactions on Parallel and Distributed Systems, Vol. 20, No. 4, pp. 528-539, April 2009. http://dx.doi.org/10.1109/TPDS.2008.102
[35] Cheng K. and Kambayashi Y., "A size-adjusted and popularity-aware LRU replacement algorithm for Web caching", in Proceedings of the 24th Annual International Computer Software and Applications Conference (COMPSAC 2000), Taipei, pp. 48-53, October 2000.
[36] Yeung K. H., Wong C. C. and Wong K. Y., "A Cache Replacement Policy for Transcoding Proxy Servers", in IEICE Transactions on Communications, Vol. E87-B, No. 13, pp. 209-211, 2004.
[37] Songqing Chen, Bo Shen, Susie Wee and Xiaodong Zhang, "Segment-Based Streaming Media Proxy: Modeling and Optimization", in IEEE Transactions on Multimedia, Vol. 8, No. 2, pp. 243-256, April 2006.
[38] C.-F. Kao and C.-N. Lee, "Aggregate Profit-Based Caching Replacement Algorithms for Streaming Media Transcoding Proxy Systems", in IEEE Transactions on Multimedia, Vol. 9, No. 2, pp. 221-230, February 2007.
[39] H.-P. Hung and M.-S. Chen, "On Designing a Shortest-Path-Based Cache Replacement in a Transcoding Proxy", in Springer Transactions of Multimedia Systems, Vol. 15, No. 2, pp. 46-62, April 2009. http://dx.doi.org/10.1007/s00530-008-0143-z
[40] K. Samiee, "A Replacement Algorithm Based on Weighting and Ranking Cache Objects", in International Journal of Hybrid Information Technology, Vol. 2, No. 2, pp. 93-104, April 2009.
[41] S. Podlipnig and L. Böszörmenyi, "A Survey of Web Cache Replacement Strategies", in ACM Computing Surveys, Vol. 35, Issue 4, pp. 374-398, December 2003.
[42] Ponnusamy S. P. and Kathikeyan E., "Cache Optimization on Hot-Point Proxy (HPProxy) using Weighted-Rank Cache Replacement Policy", in ETRI Journal, Korea, Vol. 35, No. 4, pp. 687-696, August 2013.
[43] Ponnusamy S. P. and Kathikeyan E., "Cache Optimization on Hot-Point Proxy (HPProxy) using Dual Cache Replacement Policy", in Proceedings of ICCSP'12, India, pp. 108-113, April 2012.
[44] Jin S. and Bestavros A., "GreedyDual*: Web caching algorithms exploiting the two sources of temporal locality in Web request streams", in Computer Communications, Vol. 24, Issue 2, pp. 174-183, February 2001.