An IP multicast - HTTP Gateway for Reliable Streaming in Heterogeneous Networks

Ion BICA, Stefan-Vladimir GHITA, Mihai TOGAN
Computer Science Department, Military Technical Academy, Bucharest, 050141, Romania

978-1-4244-6363-3/10/$26.00 © 2010 IEEE

Abstract—IP multicast has proved to be the best approach for large-scale multimedia communications, enhancing video transmission for applications like IPTV, video conferencing and other best-effort real-time audio/video applications. Unfortunately, IP multicast connectivity relies on certain network technologies and protocols and is present only in some areas of the Internet, while the majority of hosts lack such connectivity, do not have sufficient network resources, or are locked behind firewalls. In this paper we present an effort to design and implement an HTTP gateway for IP multicast communications. The main objective is to allow connections to IP multicast subsystems from different locations over the Internet by tunneling multicast traffic over the HTTP protocol. This way, the content of the multicast network is easily accessible and independent of the transport network. Another important goal of our gateway design is an optimization mechanism based on cache overriding to cope with network congestion. It is well known that TCP video streaming amplifies network congestion. Our proposed solution implements an intelligent buffering mechanism based on packet discarding rather than connection dropping.

Keywords: HTTP gateways, IP multicast, HTTP tunneling, cache overriding, buffer management

I. INTRODUCTION

For high-bandwidth applications like IPTV and video conferencing, MPEG video streaming requires a large portion of the total available network bandwidth. In such cases IP multicast delivery is the natural way to implement a scalable simultaneous delivery service [2], efficient for potentially large numbers of participants, with less load on the streaming side of the infrastructure and bandwidth conservation across the entire network involved in the application [5]. The idea of having multicast-enabled applications and MBone-connected networks remains a worthy desideratum, while current transport capabilities lack network resources (e.g. routing/switching protocols) or suffer from security limitations that leave multicast traffic locked behind firewalls and isolated in multicast islands. In this context, there are situations when a multicast-to-unicast gateway is needed to allow remote access to these applications or resources. Current efforts for tunneling multicast traffic over heterogeneous networks have moved multicast communication principles to the application layer, where communication between peers relies on unicast connections [9]. The purpose of the work presented in this paper was to design and implement a solution that permits accessing multicast network resources, aggregated into an HTTP delivery service that also combines methods for improving performance under bandwidth limitations and network congestion. The article presents the architecture of our gateway system. It describes the prototype implementation, the traffic conversion mechanism and the proposed buffer management optimizations. Finally, we present experimental results and conclusions.

II. BACKGROUND

A. IP Multicast

Multicast communication is based on a group of hosts interested in sending/receiving information streams across an IP network. IP multicast transmission mechanisms for multipoint distribution are available, and special mechanisms have been proposed to build distribution trees and define the forwarding path between the subnet of the content source and each subnet containing members of the multicast group. During the late 80s, intensive work was invested in defining the IP multicast service model, its core algorithms and protocols. Multicast technology involves complex mechanisms that are beyond the scope of this article. We only mention the IGMP protocol [1], used to signal, maintain and remove group memberships; traffic belonging to multicast groups is forwarded through best-effort IP/UDP transport.
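As a reference for the receiving side described above, a minimal Python sketch of joining a multicast group (so that the kernel emits the IGMP membership report [1]) might look as follows; the group and port values are illustrative only and the helper is not code from the prototype:

```python
import socket
import struct

def join_multicast_group(group: str, port: int) -> socket.socket:
    """Open a UDP socket bound to `port` and join `group`.

    Setting IP_ADD_MEMBERSHIP makes the kernel send an IGMP
    membership report for the group on the chosen interface.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # struct ip_mreq: 4-byte group address + 4-byte local interface
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

# Usage (blocking receive of one datagram):
#   sock = join_multicast_group("224.5.0.80", 1234)
#   data, addr = sock.recvfrom(2048)
#   sock.close()
```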
B. Basics of the multicast-HTTP gateway system

Access to the multicast network can be achieved via an HTTP [3] gateway. When receiving an HTTP GET/PUT request, the gateway identifies not only the method (receive/send) but also the multicast group, from the requested URL [4]. At this point the gateway creates the multicast socket and forwards traffic according to the HTTP request.

1. Client to Server:
GET /udp=224.5.0.80&port=1234 HTTP/1.0 CRLF

2. Server to Client:
HTTP/1.1 200 OK
Date: Fri, 07 Jul 2009 22:45:48 GMT
Content-Type: application/multicast_gateway
CRLF
[data]

Table 1. Example of elementary HTTP gateway messages
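The mapping from the request line of Table 1 to a multicast endpoint can be sketched as follows; `parse_gateway_request` is a hypothetical helper illustrating the URL scheme, not code from the prototype:

```python
from urllib.parse import parse_qsl

def parse_gateway_request(method: str, path: str):
    """Extract (group, port, direction) from a gateway request line.

    Per the URL scheme of Table 1, the multicast group and port are
    encoded as query-style pairs in the path; GET means the client
    receives from the group, PUT means it sends to it.
    """
    params = dict(parse_qsl(path.lstrip("/")))
    direction = "receive" if method == "GET" else "send"
    return params["udp"], int(params["port"]), direction

# The gateway would then open a multicast socket for (group, port)
# and forward datagrams into the HTTP response body.
```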
If the client application needs to remotely reach multiple multicast groups, it must open multiple HTTP connections to the gateway or send multiple requests over the same connection. In the latter case, the capability of differentiating data belonging to different multicast groups is based on information specified in the HTTP header. After the connection through the gateway is established, data between the remote client and the multicast socket is forwarded. In a trivial approach this data would simply be moved from socket to socket without any processing in the gateway.

III. PROBLEM FORMULATION

In real-world networking, large amounts of data, sometimes in real time, need to be tunneled. Resources are not always sufficient, and service alteration policies have to be implemented. HTTP tunneling of multicast datagrams can be an alternative to traditional (standardized) methods [8], which are not appropriate for current Internet deployments and have difficulties passing through security applications like firewalls, IDS, IPS, etc. HTTP tunneling techniques, however, are usually applied to small amounts of data, without real-time constraints and outside lossy network environments. Based on this observation, we mainly focus our gateway design on reproducing connectionless behavior over a connection-oriented communication. This is useful in environments like on-demand multimedia streaming, where some amount of data dropping is more acceptable than connection loss. To accomplish this, a special cache structure has to be created to allow large information streams to be forwarded in this context.

IV. MULTICAST-HTTP FORWARDING SERVICE

Our gateway architecture consists of three main elements: the multithreaded HTTP delivery server, a cache memory (SHM – shared memory) used for buffering multicast data and internal flags, and a multicast data acquisition and sender process (multicaster), as shown in Figure 1.

Figure 1. IP multicast-HTTP gateway data flow

The SHM cache is used by both the HTTP delivery module and the multicast process, to signal multicast group membership and to buffer data. Multiple clients asking for the same multicast session would result in copying source data packets to each client connection. Since every client has its own connection channel, with its own transmission speed and transport parameters, each would generate its own buffering requirements. This leads to the idea of data management per client, but not necessarily a separate buffer for every client. Following the premise of eliminating redundant data from buffer management, complemented with the application connection reliability principle, we propose a special caching mechanism.

V. CACHING POLICY – STRATEGY FOR RELIABILITY

It should be clear by now that the purpose of our gateway is to stream media over HTTP on demand, as a pure unicast server. In this context, retransmission-based schemes are generally considered inappropriate for video applications because of the network latency they introduce [7]. Caching mechanisms can greatly improve performance, but they also consume computational power and memory for cache building. In streaming applications, caching means keeping some amount of the data stream for some period of time [6]; contextually, buffered data can serve different purposes such as retransmission, stream timing calculation or simply attenuation of network delays. Because streaming through HTTP within heterogeneous networks involves the possibility of insufficient connection transport capacity, congested network environments or other anomalous behavior at both the network and application layer, a reliability policy should be defined and implemented to deal with such situations [10]. Even if streaming applications generally involve diverse client-server interactions and pose stringent demands on packet delay and jitter to ensure continuity of the presentation, temporarily broken consistency or degraded content is acceptable from the client's perspective. With this in mind, we designed a cache overriding mechanism that keeps the HTTP stream gateway simple and realistic.

A. Caching Model

Given the previously mentioned connection issues, the data flow from the multicast network towards the client may run into situations where reliable connection management policies have to be invoked. On the other hand, traffic received from HTTP and addressed to the multicast network is a trivial connectionless case and is not the topic of the present paper. Data received via the multicast connectors is cached and managed by the gateway until it reaches the client(s) or is overwritten by the cache engine according to the buffer dimensions. Our cache design is based on a single circular buffer for each multicast data source. This buffer is accessed through multiple pointers, one per requesting client, instead of keeping a separate copy for every client sending thread. For heterogeneous connections, each client-serving thread keeps a streaming pointer into the corresponding buffer and tries to send data as fast as possible until reaching the buffer storing index.
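The single-buffer / multiple-pointer idea can be sketched as follows. This is an illustrative simplification, not the prototype's shared-memory implementation: one writer appends, each client keeps its own read offset, and a lagging client is skipped forward rather than disconnected:

```python
class CircularStreamBuffer:
    """Single circular buffer per multicast source, shared by all clients.

    The writer (multicaster) appends at `head`; each client thread keeps
    its own read offset. On overflow the oldest data is silently
    overwritten (packet discarding) instead of dropping the connection.
    """

    def __init__(self, size: int):
        self.buf = bytearray(size)
        self.size = size
        self.head = 0  # total bytes ever written (absolute offset)

    def write(self, data: bytes) -> None:
        for b in data:
            self.buf[self.head % self.size] = b
            self.head += 1

    def read(self, offset: int, max_len: int):
        """Return (chunk, new_offset) for a client reading at `offset`.

        A client that fell more than one buffer length behind is skipped
        forward to the oldest valid byte -- the cache-overriding policy.
        """
        if self.head - offset > self.size:
            offset = self.head - self.size
        end = min(offset + max_len, self.head)
        chunk = bytes(self.buf[i % self.size] for i in range(offset, end))
        return chunk, end
```

A slow client thus loses the overwritten bytes but keeps its TCP connection, which is exactly the trade-off argued for above.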
Figure 2. Cache architecture

In the case of a temporary congestion, the cache engine overwrites the data excess by default, thanks to the circular buffer design, without overloading TCP buffers or dropping the client connection. This leaves the decision on the connection state to the client, based on its own evaluation of the quality of service. Another advantage of circular buffering over plain TCP streaming is the avoidance of congestion amplification. Besides optimal memory use, the efficiency of the proposed cache model consists in simplifying the buffer management of independent connections.

B. Cache dynamic dimensioning

Let C be the maximum cache size of the gateway. Having N multicast data channels to serve, each channel i with buffer size d_i, i = 1, ..., N, we require

    d_1 + d_2 + ... + d_N <= C.

A common technique for computing d_i is to multiply the bit rate by some amount of time that determines the buffer length. In practice this time varies between 200 ms and several seconds, depending on the bandwidth dynamics and the application needs. Unfortunately, the case where such buffer windows can cover all caching needs is an ideal situation. Streaming servers are usually limited in the number of clients by the available bandwidth and also by their forwarding capabilities. This can lead to a sum of the d_i greater than C, which requires adjusting the cache dimensions to the maximum available memory C. Decreasing all buffers uniformly is an undesirable option. A better approach is to apply a popularity profile p_m for deciding which multicast receiving buffer should be reduced. Computing p_m as the ratio between the number of requests for multicast m and the total number of requests, the buffer size of multicast m becomes

    d_m = p_m * C,   where p_1 + p_2 + ... + p_N = 1.

VI. EXPERIMENTAL RESULTS

This section presents experimental results obtained for relaying multicast multimedia streams through unicast HTTP connections. To measure the reliability of the proposed system we considered the following parameters: the configured maximal buffering length in seconds T, the input rate of the multicast traffic Rr, and the capacity rate of the output connection Sr. In a bandwidth-fluctuating network environment like the public Internet, the connection transport capacity varies around Sr even if its average is greater. Let A = Sr/Rr (%) be the ratio between the output rate and the input rate during congestion, Xs(t) = Sr * t the amount of data sent in time t at speed Sr, and Xr(t) = Rr * t the amount of buffered data at the input speed Rr. It follows from the gateway buffering architecture presented above that the buffer overflows when the two buffer pointers meet at the same position, which results in

    Xr(t) = B + Xs(t)                      (1)

where B is the buffer length, equal to Rr * T. Under these conditions, the time elapsed until the buffer overflows and the streaming index is updated is

    t = (B + Xs(t)) / Rr = Xr(t) / Rr      (2)

Solving the system (1) and (2) gives t = T / (1 - A) as the time during which buffered data, for an output bandwidth reduced to A(%) of the input rate, can be used to deal with network delays, packet jitter or transmission lag. To evaluate the efficiency of the reliability mechanism in the proposed solution, we deployed the gateway implementation to serve multiple MPEG-TS live CBR streams accessed from an IP multicast enabled network. In this experiment we used the pyshaper [13] software, installed on the same machine as the gateway, to emulate the bandwidth unavailability of real networks by limiting the transmission speed of the HTTP traffic. The HTTP client used for connecting to the gateway is VideoLAN [12]. Our experiment consists in measuring this time for certain environmental parameters, as shown in Table 2 and Figure 3.

    Rr (Mbps)   T (ms)   Buffer size (KB)   A (%)   t(s) calculated   t(s) measured
    0.5         200      12.8               1       0.2020            0.1878
    0.8         400      41                 5       0.5263            0.5016
    1           800      102.4              10      0.8888            0.7538
    1.5         1200     230.4              20      1.5               1.439

Table 2: Calculated and measured connection buffer overrun time
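The overrun-time formula and the buffer sizes listed in Table 2 can be checked numerically. The sanity script below assumes the buffer is dimensioned as B = Rr * T (which reproduces the buffer sizes in Table 2, taking 1 Mbit = 2^20 bits); it is a check on the formulas, not part of the gateway implementation:

```python
def overrun_time(T_s: float, A: float) -> float:
    """Time until the circular buffer overflows, t = T / (1 - A),
    for a buffer holding T seconds and output/input rate ratio A."""
    return T_s / (1.0 - A)

def buffer_size_kb(Rr_mbps: float, T_s: float) -> float:
    """Buffer length B = Rr * T, in kilobytes (1 Mbit = 2**20 bits)."""
    return Rr_mbps * (2 ** 20) * T_s / 8 / 1024
```

For the first and last rows of Table 2: overrun_time(0.2, 0.01) ≈ 0.2020 s with a 12.8 KB buffer, and overrun_time(1.2, 0.20) = 1.5 s with a 230.4 KB buffer, matching the calculated column.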
Figure 3: Calculated and measured overrun time

The measured time expresses the maximum amount of stream available for retransmission when the transmission capacity falls behind the input rate. In other words, this time value is an important system parameter: when the temporary output bandwidth fluctuates below the input rate, it expresses the amount of fluctuation the system can tolerate.

VII. CONCLUSIONS

The proposed caching scheme enables efficient buffer management and request-response retransmission control derived from the caching configuration, maximizing the effectiveness of the enhanced client QoE. The features presented in this paper add several improvements to a multicast distribution network: data access from remote heterogeneous networks, the possibility to authenticate and authorize remote access, and the possibility of passing through IP security appliances (firewalls, IDS/IPS). Input data is inserted efficiently, meaning that the buffering scheme enables insertion, transition and overwriting without packet sorting or searching. On the other hand, the solution keeps only a rudimentary client connection dropping management and a coarse maximal limitation of buffer growth. In contrast to multicast communications, where users cannot be accounted and security mechanisms are limited, conversion to HTTP instead of usual multicast tunnels, like Cisco GRE [11], may solve these issues by implementing access control and accounting mechanisms. Advanced security mechanisms, like strong traffic encryption, may also be added.

REFERENCES
[1] S. Deering, "Host Extensions for IP Multicasting", Internet RFC 1112, 1989.
[2] B. Cain, S. Deering, I. Kouvelas, B. Fenner, A. Thyagarajan, "Internet Group Management Protocol, Version 3", Internet RFC 3376, 2002.
[3] R. Fielding, J. Gettys, J. Mogul, H. Frystyk, L. Masinter, P. Leach, T. Berners-Lee, "Hypertext Transfer Protocol -- HTTP/1.1", Internet RFC 2616, 1999.
[4] T. Berners-Lee, L. Masinter, M. McCahill, "Uniform Resource Locators (URL)", Internet RFC 1738, 1994.
[5] B. Zhang, S. Jamin, L. Zhang, "Universal IP Multicast Delivery", In Proceedings of the International Workshop on Networked Group Communication (NGC), 2002.
[6] R. Sharman, S. S. Rammana, R. Rammesh, "Cache Architecture for On-Demand Streaming on the Web", ACM Transactions on the Web, Vol. 1, No. 3, Article 12, September 2007.
[7] J. Liu, X. Chu, J. Xu, "Proxy Cache Management for Fine-Grained Scalable Video Streaming", IEEE INFOCOM'04, Hong Kong, 2004.
[8] S. Deshpande, W. Zeng, "Scalable Streaming of JPEG2000 Images using Hypertext Transfer Protocol", In Proceedings of ACM Multimedia, 2001.
[9] S. Jin, A. Bestavros, "Cache-and-Relay Streaming Media Delivery for Asynchronous Clients", In Proceedings of the 4th International Workshop on Networked Group Communication (NGC), Boston, 2002.
[10] C. Krasic, J. Walpole, K. Li, A. Goel, "The Case for Streaming Multimedia with TCP", In 8th International Workshop on Interactive Distributed Multimedia Systems (iDMS), 2001.
[11] I. Pepelnjak, J. Apcar, J. Guichard, MPLS and VPN Architectures, Volume II, Chapter 3, "Using Multicast Domains".
[12] VideoLAN VLC media player, http://www.videolan.org/vlc/
[13] pyshaper, http://www.freenet.org.nz/python/pyshaper/