
IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, VOL. 62, NO. 8, OCTOBER 2013

A Novel Bandwidth Management System for Live Video Streaming on a Public-Shared Network

Hong-Yi Chang, Member, IEEE, Nen-Fu Huang, Senior Member, IEEE, Yuan-Wei Lin, and Yih-Jou Tzang

Abstract—This paper presents a novel concept for more thoroughly exploiting the sharable bandwidth of public-shared networks, such as FON networks, through the construction of an efficient, robust, and high-availability video delivery system. Through bandwidth expansion, only a small amount of bandwidth is spent at the video streaming source, even though the system can stream content to numerous clients simultaneously. Two algorithms were designed to optimize the use of public-shared bandwidth so that the needs of all clients are met with minimal consumption of system resources. In addition, a resource management scheme was developed for recycling and reusing resources to improve the continuity of streaming experienced by clients and to reduce the overall load on the devices involved. An implementation of the proposed system demonstrates the overall feasibility of the concept.

Index Terms—FON, live video streaming, public-shared network, resource management.

I. INTRODUCTION

IN RECENT years, smart handheld devices (with limited upload bandwidth and low computing power), such as smartphones, personal digital assistants, and mobile Internet devices, have connected an increasing number of people to the Internet. Many manufacturers of smart handheld devices are also dedicated to developing stylish user interfaces with additional functions. Features such as communication, reminders, personal assistants, and the Global Positioning System are already well developed. With the support of more stable networking technologies such as wired networks, Wi-Fi, and third-generation (3G) cellular networks, smart handheld devices will be able to support live video streaming more effectively, which is bound to increase the popularity of this service as a topic of research.

Traditional multimedia systems are primarily based on a client–server architecture. Each receiver is individually connected to the streaming source server; however, a rapid increase in the number of clients can easily overload the server, thus

Manuscript received February 4, 2012; revised December 4, 2012 and February 27, 2013; accepted April 1, 2013. Date of publication May 1, 2013; date of current version October 12, 2013. This work was supported in part by the National Science Council of Taiwan under Grant NSC-101-2218-E-415-001 and Grant NSC-101-2221-E-007-065, and in part by the Information and Communications Research Laboratory, Industrial Technology Research Institute, Taiwan. The review of this paper was coordinated by Dr. L. Cai.

H.-Y. Chang is with the Department of Management Information Systems, National Chiayi University, Chiayi 60054, Taiwan (e-mail: hychang@cs.nthu.edu.tw). N.-F. Huang is with the Department of Computer Science, National Tsing Hua University, Hsinchu 300, Taiwan. Y.-W. Lin is with the Industrial Technology Research Institute, Hsinchu 300, Taiwan. Y.-J. Tzang is with the Department of Information Management, Hsing Wu University of Science and Technology, New Taipei, Taiwan.
Digital Object Identifier 10.1109/TVT.2013.2261100

severely limiting the capacity of the system. Clients impose a heavy burden on the bandwidth of the source server, and Internet Protocol (IP) multicast [1]–[3] could be the most effective means of resolving this problem because it is specifically designed to deliver content for group-oriented applications efficiently. Nevertheless, IP multicast has certain limitations; as Setton and Girod [4] indicated in their book on peer-to-peer (P2P) video streaming, "Although this architecture is elegant as it places minimal burden on the network resources, multicast is not universally deployed and is not available outside proprietary networks or research networks such as Mbone." Problems such as stream encryption and digital rights management are also difficult to resolve at the router level. Consequently, previous researchers have developed P2P schemes on application-level overlay networks [5]–[10] for live media streaming applications. However, live video streaming is difficult under a P2P architecture on smart handheld devices because of frequent changes in the P2P network, such as the joining and leaving of peers, and limitations in upload bandwidth. Thus, it is necessary to develop an appropriate and stable video delivery platform that supports live video streaming on all types of online multimedia devices. This paper proposes a tree-based architecture for delivering live video streams, in which the smart handheld devices are leaves on delivery trees. Table I compares the related network architectures.

Recently, an idea has been proposed in which users construct the networking system themselves. By sharing their own bandwidth with the public, users gain the opportunity to access the Internet from anywhere by using the shared bandwidth of others. This concept is referred to as a "public-shared network." Currently, FON [11]–[13] is the only public-shared network system in the world. As shown in Fig. 1, the distribution of La Foneras [the official Wi-Fi access points (APs) of FON] in the area surrounding Madrid is represented by gray spots, each spot representing one La Fonera. The distribution of gray spots is concentrated in certain areas, and in two spots they completely overlap. With this form of distribution, only some La Foneras in an area are used, whereas the others remain idle. For example, if a building has a La Fonera on each floor, the Foneros (people who access the network using La Foneras) may use only the La Foneras located on the lower floors, leaving the La Foneras on the upper floors idle and their bandwidth wasted. Wasted bandwidth may occur in any public-shared network because a resource management system that solves this problem has yet to be developed.

Because the bandwidth available on the Internet changes frequently and dynamically, it is possible that clients may

0018-9545 © 2013 IEEE

CHANG et al.: BANDWIDTH MANAGEMENT SYSTEM FOR LIVE VIDEO STREAMING ON A PUBLIC-SHARED NETWORK


TABLE I COMPARISONS OF RELATED NETWORK ARCHITECTURES

Fig. 1. FON map of the area near Madrid (http://maps.fon.com).

be unable to receive video streaming because of insufficient bandwidth. To allow clients with different bandwidths to receive streaming video and to use Internet bandwidth more efficiently, scalable video coding (SVC) [14] is often considered in constructing video streaming delivery systems. Huang et al. [15] demonstrated that the problem of constructing an architecture on a public-shared network for delivering video streaming with SVC and a minimal amount of sharable bandwidth is NP-hard. SVC was therefore not used in this study, both because of the heavy load required for SVC encoding/decoding and because the handheld devices acting as clients in the proposed architecture cannot perform SVC operations. Instead, the split-and-merge (S-M) method was adopted to satisfy the various bit-rate requirements of clients. Furthermore, when streaming is delivered over a tree by using SVC, the bit rate of streaming at the child level can only be equal to or smaller than that at the parent level. In the proposed S-M architecture, split video streams can be delivered using multiple PSnet-Ns with lower sharable bandwidths, thereby providing handheld devices with high bit rates through stream merging.

In this paper, a resource management system that exploits the sharable bandwidth of public-shared networks, such as FON, was designed and implemented to construct an efficient, robust,

and high-availability video streaming delivery system (referred to as PSnet). Users traveling in nearly any type of vehicle need only open the Wi-Fi connections on their smart handheld devices to enjoy stable video streaming services on the PSnet system whenever they move into the signal range of any Wi-Fi AP in the PSnet system. The sharable bandwidth of an AP refers to the sharable uplink bandwidth of the wired, rather than wireless, connection. Streaming is delivered to the AP over its downlink, and the AP then uploads multiple copies of the stream to other APs. A sharable AP thus acts as an amplifier for the streaming video. Based on this concept of bandwidth expansion, the video streaming source requires only a small amount of bandwidth to deliver the video to the system, and numerous clients can receive the video stream simultaneously. Two optimized algorithms are proposed to arrange the publicly shared bandwidth so that all clients are served and minimal resources are consumed. The PSnet system was implemented to demonstrate the overall feasibility of the concept.

This paper is organized as follows: Section II presents the key concept of the proposed PSnet, including the optimization and recycling problems involved in arranging sharable APs, the optimized algorithms for these problems, and an analysis of their complexity. Section III describes the experimental results and explains the video streaming delivery system implemented in this study, which is based on the FON platform. Finally, Section IV presents the conclusion of this study.

II. PROPOSED PSNET SYSTEM

A. S-M Model

In the asynchronous multisource streaming (AMSS) model that Itaya et al. [16], [17] developed for P2P, multiple content peers transmit packets with multimedia content to each requesting leaf peer. Each content peer begins transmitting a sequence of packets to the leaf peer independently.
The content peers select disjoint sets of packets to transmit by exchanging control information with one another. Thus, the content peers avoid sending the same packets to a leaf peer.
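The disjoint-packet idea can be sketched in a few lines of Python. This is an illustration under our own assumptions (the peers agree in advance on a fixed round-robin partition; the function names are ours, not from the AMSS papers):

```python
def partition(packets, n_peers):
    """Assign packet k (0-indexed) to content peer k mod n_peers,
    so no two peers ever send the same packet."""
    return [packets[i::n_peers] for i in range(n_peers)]

def merge(substreams):
    """Re-interleave round-robin substreams at the leaf peer
    (assumes equal-length substreams)."""
    merged = []
    for group in zip(*substreams):
        merged.extend(group)
    return merged

packets = list(range(1, 9))            # frames 1..8
subs = partition(packets, 2)           # [[1, 3, 5, 7], [2, 4, 6, 8]]
assert merge(subs) == packets          # leaf peer recovers the original order
```

The same round-robin scheme generalizes to any number of content peers, which is what the S-M model below exploits.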


Fig. 2. S-M model.

Fig. 2 shows the concept of an S-M model based on the AMSS model. In the S-M model, the S-M process is implemented by placing the programs developed in this study on the streaming server and the client. The original stream within the buffer can be partitioned into n substreams and delivered over n subtrees. The client receives the n substreams and combines them in its buffer. The following scenarios introduce the S-M method, in which a delivery tree is organized as two binary subtrees to satisfy a bit-rate requirement of a (e.g., 512 kb/s), as shown in Fig. 2. The two binary subtrees share the video sequence, each at a bit rate of a/2. A control message (i, j, k) is sent to the sharable nodes of the delivery tree, instructing them to transmit frames beginning with sequence number i at interval j at an expected transmission rate of k. For instance, the streaming server sends messages (1, 2, 256) and (2, 2, 256) to subtrees T1 and T2. After receiving the control messages, T1 and T2 transmit video streams with frame sequences (1, 3, 5, 7, 9, ...) and (2, 4, 6, 8, 10, ...), respectively. Finally, the client receives the combination of these two substreams, which contains all frames of (1, 1, 512).

Another scenario involves organizing a delivery tree into four binary subtrees to satisfy a bit-rate requirement of a (e.g., 512 kb/s). The four binary subtrees, each at a bit rate of a/4, share the video sequence. For instance, the streaming server sends messages (1, 4, 128), (2, 4, 128), (3, 4, 128), and (4, 4, 128) to subtrees T1, T2, T3, and T4. After receiving the control messages, T1, T2, T3, and T4 transmit the video stream with frame sequences (1, 5, 9, 13, 17, ...), (2, 6, 10, 14, 18, ...), (3, 7, 11, 15, 19, ...), and (4, 8, 12, 16, 20, ...), respectively. Finally, the client receives the combination of these four substreams, which contains all of the frames of (1, 1, 512).

B. Infrastructure

Fig. 3 shows the infrastructure of the proposed PSnet, in which the solid lines represent streaming, the dotted lines represent control links, and the arrows indicate the direction of the stream. The infrastructure can be divided into several parts: a streaming source that provides video streaming; a server (denoted as PSnet-S) that manages the entire system; the streaming delivery architecture (denoted as PSnet-G), comprising several groups of organized sharable APs (denoted as PSnet-Ns, onto which the programs used in this study were ported to relay video streaming); the sharable AP pool for backup (denoted as PSnet-P); and the clients who wish to receive the video stream. Although La Fonera is designed to be permanently functional,

Fig. 3. PSnet infrastructure.

it may still become nonsharable because of power failure, network failure, or even heavy wireless access. Under such conditions, a PSnet-N in the PSnet-P pool is selected to replace the unworkable PSnet-N to ensure the smooth delivery of the streaming content. Based on requests from clients, the PSnet-S constructs the PSnet-G by organizing the sharable PSnet-Ns into groups. For management efficiency, the PSnet-Ns in each group are organized as complete-binary-tree structures. Because a PSnet-N (the terms PSnet-N and node are used interchangeably in this paper) is essentially an AP and usually remains online for a long time after initialization and power-up, the tree structure is usually stable. When a client requests video streaming, the PSnet-S delivers the stream to the root node of the tree, where it is "duplicated" (uploaded) to the child nodes of the root. Streaming is thus relayed through the internal nodes of the tree. Finally, the stream is delivered to the leaf nodes, which forward the stream to the clients. The degree of each node contributing to the stream (PSnet-N) depends on the amount of sharable bandwidth of that node and the rate of the video stream. For example, if a node connects to the Internet via an asymmetric digital subscriber line link with an upload speed of 512 kb/s, then for 256-kb/s video streaming (nearly the minimal bandwidth required for acceptable video quality), the node can provide (upload) two copies of the stream (512/256 = 2). Thus, the degree of this type of node is 2. Nodes connected by a 10-Mb/s fiber-to-the-home Fast Ethernet link (symmetric)


Fig. 4. Three kinds of PSnet-G arrangements. (a) All clients can be satisfied by two trees (14 nodes). (b) All clients can be satisfied by three trees (21 nodes). (c) All clients can be satisfied by five trees (35 nodes).

have a degree of 5 for 2-Mb/s high-quality video streaming (10/2 = 5). The PSnet-S selects appropriate leaf node(s) from the tree to deliver the stream to each client. When necessary, more than one node may be arranged to serve a client, with each node delivering part of the stream. In this case, the client must combine multiple substreams to obtain the original video program. For instance, consider a case in which eight clients request 256-kb/s streaming and eight clients request 512-kb/s streaming; Fig. 4 shows three arrangements that satisfy all of the clients. In Fig. 4(a), seven nodes, each with 512 kb/s of sharable upload bandwidth, are organized as the left binary tree to serve the eight clients with 256-kb/s streaming. Another seven nodes, each with 1024 kb/s of sharable upload bandwidth, are organized as the right binary tree to serve the eight clients with 512-kb/s streaming.

In Fig. 4(b), the left binary tree is organized in the same manner as that in Fig. 4(a); however, the right binary tree is organized as two subtrees, each of which is the same as the left binary tree. Thus, 14 nodes, each with 512 kb/s of sharable upload bandwidth, form the right binary tree serving the eight clients with 512-kb/s streaming. This situation may occur when sufficient sharable 512-kb/s nodes are available but not enough sharable 1024-kb/s nodes.

In this case, the original 512-kb/s stream is partitioned into two 256-kb/s substreams, which are delivered to the roots of the left and right binary trees, respectively. To obtain the original 512-kb/s stream, each client must receive and combine two 256-kb/s substreams, i.e., one from each of the two binary trees.

Another example is shown in Fig. 4(c), in which the left binary tree is organized in the same manner as that in Fig. 4(a); however, the right binary tree comprises four copies of the left binary tree. Therefore, 28 nodes, each with 256 kb/s of sharable upload bandwidth, are organized as the right binary tree to serve the eight clients with 512-kb/s streaming. Such a situation may occur when sufficient sharable 256-kb/s nodes are available but not enough sharable 1024- or 512-kb/s nodes. In this case, the original 512-kb/s stream is divided into four 128-kb/s substreams, which are delivered to the right binary tree. To acquire the 512-kb/s stream, each client must receive and combine the four 128-kb/s substreams, i.e., one from each of the four subtrees.

The shared bandwidth of the PSnet system is consumed in delivering video streaming to a wireless client according to the following rules.

1) The wireless client is connected to a general wireless AP that receives the streaming video from a leaf PSnet-N


of the PSnet system through a wired network. In this case, all wireless clients that are connected to the same general wireless AP are regarded as one wired client.

2) The wireless client is connected to a leaf PSnet-N of the PSnet system to receive the streaming video directly. The PSnet-N delivers the streaming video to wireless clients by broadcast. Therefore, regardless of how many wireless clients are connected to the PSnet-N to receive video streaming, the shared bandwidth needs to be counted only once.

C. Optimization Problems

Because the number of sharable nodes is always limited, the problem of how to arrange the nodes, i.e., how to establish the trees optimally, is worth investigating. Here, two optimization problems are considered. First, the following definitions are provided.

1) Q = {q_i} is the set of video quality levels that the streaming source provides. For example, Q = {q_1, q_2, q_3, q_4} = {128 kb/s, 256 kb/s, 512 kb/s, 1024 kb/s} indicates that the streaming source can offer streaming at four quality levels, i.e., q_1 = 128 kb/s, q_2 = 256 kb/s, q_3 = 512 kb/s, and q_4 = 1024 kb/s. For simplicity, q_{i+1} = 2q_i. |Q| is the number of quality levels of the streaming system.

2) F = {f_j} is the set of nodes in the PSnet system, where f_j is the sharable bandwidth of the jth node. For example, F = {256 kb/s, 256 kb/s, 256 kb/s, 512 kb/s, 512 kb/s} indicates that there are five nodes; the first three nodes can share 256 kb/s, whereas the final two can contribute 512 kb/s. |F| is the number of nodes. Let W_p be the number of nodes with a sharable bandwidth of q_p. For example, W_2 refers to the number of nodes with a sharable bandwidth of q_2 (256 kb/s). In the previous example, W_2 = 3 and W_3 = 2. Hence

   Σ_p W_p = |F|.   (1)

3) M_r = {r_k} is the set of requests, one from each client, where r_k indicates the quality level requested for streaming by the kth client.
For example, M_r = {1, 1, 1, 2, 2, 2, 2, 2} indicates that there are eight requests: the first three for 128-kb/s (q_1) streaming and the final five for 256-kb/s (q_2) streaming. Let R_b be the number of clients who request streaming at a bandwidth of q_b. For instance, R_2 refers to the number of clients who request streaming at a bandwidth of q_2 (256 kb/s). In the previous example, R_1 = 3 and R_2 = 5.

4) T = {t_{r,s}, r ≥ s ≥ 1} is the set of trees (groups) in PSnet-G, in which each tree may comprise several subtrees. Index r indicates that the tree delivers streaming at a bandwidth of q_r, and s denotes that the tree comprises subtrees that each carry streaming at a bandwidth of q_s. The number of subtrees is equal to q_r/q_s. For example, Fig. 5(a) and (b) shows the structure of two possible binary trees, t_{2,2} and t_{2,1}, for delivering streams at a bandwidth of q_2 (256 kb/s). The trees t_{2,2}

Fig. 5. (a) Tree t_{2,2}. (b) Tree t_{2,1}.

comprise seven nodes, each with a sharable bandwidth of q_3 (512 kb/s), to deliver streaming at a bandwidth of q_2 (256 kb/s). The number i within each node refers to a sharable bandwidth of q_i. The trees t_{2,1} are constructed from two subtrees (q_r/q_s = q_2/q_1 = 256 kb/s / 128 kb/s = 2). Each subtree comprises seven nodes, each with a sharable bandwidth of q_2 (256 kb/s), to deliver the stream at a bandwidth of q_1 (128 kb/s). For this tree, the original 256-kb/s stream is partitioned into two 128-kb/s streams, which are delivered to the roots of the two subtrees. The client receives the two 128-kb/s streams, one from each subtree, and combines them to obtain the original 256-kb/s stream. When there is more than one t_{r,s} in PSnet-G, they are marked as t^1_{r,s}, t^2_{r,s}, etc.

5) |T| is the number of trees in PSnet-G, and d(t_{r,s}) and h(t_{r,s}) are the degree and height of the subtrees in t_{r,s}, respectively. For simplicity, all of the subtrees in one tree are assumed to have the same degree and height.

6) n(t_{r,s}) is the number of nodes of t_{r,s}. Hence

   n(t_{r,s}) = (q_r/q_s) · (d(t_{r,s})^{h(t_{r,s})+1} − 1) / (d(t_{r,s}) − 1).   (2)

7) l(t_{r,s}) is the number of clients that t_{r,s} can serve. Note that this equals the number of clients that one subtree in t_{r,s} can serve. Therefore

   l(t_{r,s}) = d(t_{r,s})^{h(t_{r,s})+1} = (q_s/q_r) · n(t_{r,s}) · (d(t_{r,s}) − 1) + 1.   (3)

Hence, the "optimal arrangement problems" can be discussed.

Problem 1: Given Q, F, and M_r, find a way to construct the tree(s) such that all requests are satisfied and the number of constructed trees is minimized, that is,

   min |T|   such that   R_b ≤ Σ_s l(t_{b,s}), ∀b in M_r.

The reason for minimizing the number of constructed trees is that PSnet-S must deliver streaming to every root node of the trees; fewer trees reduce the uplink load on PSnet-S. For example, consider a circumstance in which

   Q = {q_i} = {128 kb/s, 256 kb/s, 512 kb/s, 1 Mb/s}
   F = {f_j}, |F| = 100, W_2 = 69, W_3 = 28, W_4 = 3
   M_r = {r_k} = {2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3}.
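As a quick sanity check, the quantities in (2) and (3) can be evaluated with a short sketch (the helper names are ours, and the formulas follow the definitions above):

```python
def n_tree(qr, qs, d, h):
    """Eq. (2): total nodes of t_{r,s}, i.e., q_r/q_s complete
    subtrees of degree d and height h."""
    return (qr // qs) * (d ** (h + 1) - 1) // (d - 1)

def l_tree(d, h):
    """Eq. (3): clients served, d^(h+1); a client takes one
    substream from each subtree."""
    return d ** (h + 1)

# Fig. 5: t_{2,2} has 7 nodes; t_{2,1} has two 7-node subtrees (14 nodes);
# both serve 8 clients at q_2 = 256 kb/s.
assert n_tree(256, 256, 2, 2) == 7
assert n_tree(256, 128, 2, 2) == 14
assert l_tree(2, 2) == 8
assert l_tree(2, 3) == 16   # a height-3 binary subtree serves 16 clients
```

The last assertion shows why a single deeper tree can absorb the 16 requests at q_2 in the example above.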


Fig. 6. PSnet-G with three trees.

Fig. 7. PSnet-G with two trees.

Therefore, R_2 = 16 and R_3 = 4. Fig. 6 shows a method for constructing three trees, t_{3,3}, t^1_{2,2}, and t^2_{2,2}, to serve all of the clients. Tree t_{3,3} serves the four clients at 512 kb/s, and trees t^1_{2,2} and t^2_{2,2} are established to serve the remaining 16 clients at 256 kb/s. Fig. 7 shows an improved method whereby only two trees, t_{3,3} and t_{2,2}, are organized to serve all of the clients. The first tree serves the four clients at 512 kb/s, and the second tree serves the 16 clients at 256 kb/s. Although the number of trees has decreased, the height of the second tree has increased. Fewer trees reduce the uplink load on PSnet-S, but a deeper tree introduces longer streaming latency; a tradeoff exists between these two factors. For this optimization problem, a greedy algorithm (Algorithm 1) is proposed to determine the optimal solution as follows.

Algorithm 1: GREEDY-PSNET-TREE(Q, F, M_r)
 0  Total_Tree ← 0, T ← ∅
 1  for each group of clients that requests streaming at q_b, considering q_b in descending order
 2      i ← b
 3      while (i > 0)
 4          do use W_{i+1} to construct t_{b,i}
 5          if l(t_{b,i}) ≥ R_b, where t_{b,i} is the minimum tree that satisfies R_b
 6              then Total_Tree ← Total_Tree + 1
 7                   T ← T ∪ {t_{b,i}}
 8                   i ← 0
 9              else R_b ← R_b − l(t_{b,i})
10                   Total_Tree ← Total_Tree + 1
11                   T ← T ∪ {t_{b,i}}
12                   i ← i − 1
13  if R_b > 0 return "Cannot satisfy R_b"
14  return Total_Tree, T

To satisfy all of the clients, nodes with greater sharable bandwidth are chosen first to construct the trees. The remaining nodes are then chosen in descending order of their sharable bandwidth. Thus, higher quality requests are processed first. Every time the algorithm constructs a new tree, the value of Total_Tree is increased by 1. Because the number of nodes is limited, serving all of the clients might be infeasible, in which case an error message is returned. If the trees can be constructed to satisfy all of the clients, then the value of Total_Tree is the minimal number of trees.

Theorem 1: The solution produced using the proposed GREEDY-PSNET-TREE algorithm is optimal.

Proof: The proof is by contradiction. Assume that an optimal solution S uses t_{b,i−1} instead of t_{b,i} and that a solution S′ uses t_{b,i}. Because the number of subtrees in t_{b,i−1} is twice that in t_{b,i} for satisfying R_b, a contradiction can be derived.

Theorem 2: The worst-case overhead of GREEDY-PSNET-TREE (Algorithm 1) is O(b^2 × log2 n(t_{b,i})).

Proof: Q is sorted at the beginning of the algorithm. Radix sort can be applied because the elements of Q are all small integers; its time complexity is Θ(n) to sort n elements in the range [0, n^d], where d is a constant. Therefore, the overhead of sorting is O(|Q|). To satisfy the client requests R_b with a minimal number of trees, t_{b,i} is constructed using W_{i+1}, where i is initially equal to b. There are two possibilities. 1) If R_b can be satisfied by l(t_{b,i}) of only one tree, then the number of trees consumed is minimal. 2) Otherwise, t_{b,i−1} is constructed using W_i to satisfy the clients who cannot be satisfied by t_{b,i}. Thus, trees are added one at a time. If R_b is still not satisfied, then t_{b,i−2} is constructed using W_{i−1} to satisfy the clients who cannot be satisfied by t_{b,i−1}. This procedure continues until all of the clients are satisfied.
The main overhead for constructing a binary tree is searching, with time complexity O(h), where h is the height of the tree. Therefore, in the proposed algorithm, the overhead of searching is O(log2 n(t_{b,i})). Because the number of binary trees is no larger than b(b + 1)/2, which is O(b^2), the worst-case overhead of the proposed algorithm, including the procedures for sorting and constructing binary trees, is O(|Q|) + b^2 × O(log2 n(t_{b,i})) = O(b^2 × log2 n(t_{b,i})).
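The core tree-sizing step of the greedy strategy can be illustrated with a short sketch. This is our own simplification, not the full Algorithm 1: the degree is fixed at 2, and the fallback to lower-bandwidth node classes (W_i, W_{i−1}, ...) is ignored.

```python
import math

def min_binary_tree(clients):
    """Smallest complete binary tree (degree d = 2) whose capacity
    l = 2^(h+1) covers `clients` requests; it uses 2^(h+1) - 1 nodes
    (Eqs. (2) and (3) with d = 2 and a single subtree)."""
    h = max(0, math.ceil(math.log2(clients)) - 1)
    return {"height": h, "nodes": 2 ** (h + 1) - 1, "serves": 2 ** (h + 1)}

# Running example: R_3 = 4 clients at 512 kb/s, R_2 = 16 at 256 kb/s.
t_512 = min_binary_tree(4)     # height 1, 3 nodes, serves 4
t_256 = min_binary_tree(16)    # height 3, 15 nodes, serves 16
assert (t_512["height"], t_256["height"]) == (1, 3)
```

As the assertions show, serving the 16 lower-rate clients with one tree forces a deeper (height-3) tree, which is the tree-count/latency tradeoff discussed above.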


Fig. 8. PSnet-G with 33 nodes. (a) Summary of PSnet-G. (b) Detailed structure of tree within PSnet-G.

Fig. 9. PSnet-G with 24 nodes. (a) Summary of PSnet-G. (b) Detailed structure of tree within PSnet-G.

Problem 2: Given Q, F, and M_r, find a way to construct the tree(s) such that all requests are satisfied and the number of used nodes is minimized, that is,

   min Σ n(t_{r,s})   such that   R_b ≤ Σ_s l(t_{b,s}), ∀b in M_r,

where the sum in the objective is taken over all trees t_{r,s} in T.

The reason for minimizing the number of used nodes is that the number of available sharable nodes is limited; they must therefore be allocated as efficiently as possible. For example, consider the case in which

   Q = {q_i} = {128 kb/s, 256 kb/s, 512 kb/s, 1 Mb/s}
   F = {f_j}, |F| = 100, W_2 = 69, W_3 = 28, W_4 = 3
   M_r = {r_k} = {2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3}.

Therefore, R_2 = 8 and R_3 = 12. Fig. 8 shows a way to construct three trees, t_{3,2}, t_{3,1}, and t_{2,2}, to serve all of the clients with a total of 33 nodes. The nodes with the number 3 (2) inside are from W_3 (W_2), with a sharable bandwidth of 512 kb/s (256 kb/s). Trees t_{3,2} and t_{3,1} are organized to serve the 12 clients at 512 kb/s, and tree t_{2,2} is built to serve the eight clients at 256 kb/s. Fig. 9 shows a more effective way to construct three trees, t_{3,3}, t_{3,2}, and t_{2,2}, to serve all of the clients with a total number

of 24 nodes. The nodes with the number 4 (3) inside are from W_4 (W_3), with a sharable bandwidth of 1 Mb/s (512 kb/s). Trees t_{3,3} and t_{3,2} are organized to serve the 12 clients at 512 kb/s, and tree t_{2,2} is built to serve the eight clients at 256 kb/s. For this optimization problem, a greedy algorithm (Algorithm 2) is also proposed to determine the optimal solution as follows.

Algorithm 2: GREEDY-PSNET-NODE(Q, F, M_r, Server_limit)
 1  Total_Node ← 0, T ← ∅, T′ ← ∅
 2  T ← GREEDY-PSNET-TREE(Q, F, M_r)
 3  Used_Bandwidth ← aggregate q_r of all t_{r,s} in T
 4  Available_Bandwidth ← Server_limit − Used_Bandwidth
 5  if Available_Bandwidth < 0
 6      then return "Cannot satisfy all clients"
 7  while (Available_Bandwidth ≥ Min_Quality)
 8      T ← T − Min_Tree
 9      if (level > 1)
10          then
11              separate t_{r,s} into two level-1 trees (t^1_{r,s} and t^2_{r,s})
12              T ← T ∪ {t^1_{r,s}} ∪ {t^2_{r,s}}
13              Available_Bandwidth ← Available_Bandwidth − q_r


14      else T′ ← T′ ∪ {t_{r,s}}
15  T′ ← T′ ∪ T
16  Total_Node ← aggregate of all n(t_{r,s}) in T′
17  return Total_Node

Here, Server_limit represents the uplink bandwidth limitation of the server. First, this algorithm applies the GREEDY-PSNET-TREE algorithm (Algorithm 1) to construct a set T with the minimal number of trees. Let n(h) denote the number of nodes in a complete binary tree of height h. Because n(h) = 2n(h − 1) + 1, to keep the total number of nodes as low as possible, the algorithm attempts to separate a tree of height h in T into two trees of height h − 1. Nevertheless, because the server must deliver a copy of the stream to the root of each tree, each tree separation consumes more of the uplink bandwidth of the server. Here, Min_Quality is defined as the minimal q_r among all q_r in T, and Min_Tree as the t_{r,s} in T with minimal q_r. If more than one Min_Tree is found with the same Min_Quality, then the one with minimal q_s is chosen as Min_Tree. If the height h of Min_Tree is greater than 1 and the available server bandwidth is no less than Min_Quality, Min_Tree is separated into two trees (t^1_{r,s} and t^2_{r,s}) of height h − 1. These two trees are then placed into set T. Otherwise, Min_Tree is placed into T′. The tree-separating procedure is repeated until the available bandwidth of the server can serve no more trees. Finally, the remaining trees in T are placed into T′, and the minimal number of nodes can be calculated according to T′.

Theorem 3: Solutions produced using the proposed GREEDY-PSNET-NODE algorithm are optimal.

Proof: 1) The optimal solution must separate Min_Tree. Let t_{a,b} be the Min_Tree. Assume that an optimal solution S separates t_{c,d} instead of t_{a,b} and that a solution S′ separates t_{a,b}, where q_a ≤ q_c, q_b ≤ q_d, and q_a + q_b < q_c + q_d. The trees separated from t_{c,d} then consume more bandwidth than those separated from t_{a,b}.
Thus, S′ has more bandwidth available for at least one more separation. Consequently, the number of nodes in S′ is at least one less than that in S, and S is not the optimal solution. This is a contradiction. Therefore, the optimal solution must separate the Min_Tree.

2) The local optimal choice leads to a global optimal solution. Let t_{a,b} be the Min_Tree in T. If Min_Tree has been chosen and separated in an optimal solution S, then by the result obtained in 1), let the available bandwidth become Available_Bandwidth − q_a and S′ = T − {t_{a,b}}; the Min_Tree in the current set S′ must then be chosen. By repeating this process until the available bandwidth is insufficient for any tree to be separated, the solution can be proved optimal.

Theorem 4: The worst-case overhead of GREEDY-PSNET-NODE (Algorithm 2) is O(b^2 × log2 n(t_{b,i})).

Fig. 10. Stream delivery in PSnet-G. (a) Without a maintenance scheme. (b) With a maintenance scheme.

Proof: At the beginning of the algorithm, Algorithm 1 is applied to construct the minimal number of trees; its overhead is O(b² × log2 n(t_{b,i})), as proved in Theorem 1. Because n(h) = 2n(h − 1) + 1, the total number of nodes decreases by one each time t_{b,i} is separated into two trees one level lower (t1_{b,i} and t2_{b,i}). Therefore, the solution with the minimal number of nodes is obtained by continuing to separate the trees with minimal q_r until Available_Bandwidth ≤ Min-Quality. The overhead of each separation is O(1), because only the root must be removed, and the number of separations within one tree is at most log2 n(t_{b,i}). Because the number of trees is at most b(b + 1)/2, which is O(b²), the worst-case overhead of the proposed algorithm, including the procedures for constructing and separating the binary trees, is O(b² × log2 n(t_{b,i})) + O(1) × O(b² × log2 n(t_{b,i})) = O(b² × log2 n(t_{b,i})). □

III. RESOURCE MANAGEMENT

When all of the clients of a serving node close their media players and stop receiving the stream, the node wastes bandwidth by continuing to forward the stream. To prevent this inefficiency, a client informs the streaming server when it closes its media player. After receiving such a notification, the streaming server checks each of the sharable nodes along the path to this client by using parent/child information. The streaming server can thus stop delivering the stream to nodes that no longer serve any client. In Fig. 10(a), only four clients receive the stream, but all of the nodes are active (gray circles), which means that a number of these nodes are wasted. With the proposed maintenance scheme, the stream is not delivered to a node unless it has at least one receiving client or node; as shown in Fig. 10(b), three of the nodes are freed for further arrangement. This section proposes algorithms for recycling and reusing resources.
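The maintenance bookkeeping just described can be sketched as follows. The heap-style node numbering (node k feeds nodes 2k and 2k + 1) matches the numbering used throughout this section, while the function name and set representation are illustrative assumptions.

```python
def active_nodes(serving_leaves):
    # Nodes that must keep relaying: every ancestor of a leaf that still has
    # at least one connected client. All other nodes can stop forwarding.
    keep = set()
    for leaf in serving_leaves:
        node = leaf
        while node >= 1 and node not in keep:
            keep.add(node)
            node //= 2  # parent of node k is floor(k / 2)
    return keep
```

For instance, if only the leaves numbered 9 and 5 still serve clients, only nodes 1, 2, 4, 5, and 9 stay active; all other relays can be freed.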
1) Resource Recycling: All of the nodes and clients of the delivery tree are numbered from top to bottom and from left to right. The clients that closed their media players in this round and in the previous round are recorded in sets B′ and C, respectively, and set B is the union of B′ and C. In the resource-recycling algorithm (see Algorithm 3), set B is sorted, the first even number M is sought, and N is the count of continuous numbers starting from M (N = 1 means that the number after M is not continuous). The search continues over the even numbers in set B until an M with N larger than 1 is found. Next,


the root of the delivery subtree to be closed is found, and the node number P of that root is calculated using (4). If P is an integer, then all of the continuous numbers are in the same subtree, and video streaming would not be delivered to this subtree rooted by P, i.e.,

P = M / 2^(log2 N).    (4)

If P is not an integer, the continuous numbers would not be in the same subtree. In this case, 2^(log2 N) must be repeatedly divided by 2 until P becomes an integer, at which point node number P can be calculated using the following equation:

P = M / 2^(log2 N − i),   for i = 1, 2, 3, . . . , log2 N − 1.    (5)

The resource-recycling algorithm repeats this procedure and ends when no even numbers remain in set B. The elements of set B are then placed in set B′, and all of the P values are placed in set R. The main advantage of this algorithm is that the roots of the delivery subtrees that can be closed are identified directly, so the delivery of the video stream to these nodes is stopped quickly and effectively without affecting the other clients who are enjoying the stream.
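A sketch of how (4) and (5) can be applied to the sorted set B is shown below; the function name and the set/list representation are assumptions made for illustration.

```python
from math import log2

def subtree_roots(closed):
    # For each run of N >= 2 consecutive closed numbers starting at an even
    # number M, compute the subtree root P = M / 2^(log2 N), halving the
    # divisor until P is an integer, as in (4) and (5).
    b = sorted(closed)
    roots = []
    i = 0
    while i < len(b):
        if b[i] % 2 != 0:
            i += 1
            continue
        m = b[i]
        n = 1
        while i + n < len(b) and b[i + n] == m + n:
            n += 1
        if n > 1:
            k = int(log2(n))
            while k > 0 and m % (2 ** k) != 0:
                k -= 1  # equation (5): reduce the exponent until P is an integer
            roots.append(m // (2 ** k))
        i += n
    return roots
```

For example, if the nodes numbered 4 and 5 both stop serving clients, the subtree rooted at node 2 can be shut down.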

Algorithm 3:
Input: B′: the set of clients that closed the media player in this round.
       C: the set of clients that closed the media player in the last round.
Output: R: the set of roots of the delivery subtrees to be closed.
01 B ← B′ ∪ C, M′ ← 0, R ← φ
02 Sort B in increasing order
03 while (there are even numbers in set B)
04   do M ← the first even number after M′ in set B
05      N ← the count of continuous numbers starting from M
06      if (N > 1)
07        i ← 0
08        while (P is not an integer)
09          P ← M / 2^(log2 N − i)
10          i ← i + 1
11        M′ ← M, R ← R ∪ {P}
12 B′ ← B
13 return R

Theorem 5: The worst-case overhead of the resource-recycling algorithm (see Algorithm 3) is O(l(t_{r,s}) × log2 l(t_{r,s})).

Proof: Radix sorting can be adopted in the proposed algorithm because the elements are small integers, and its overhead is O(l(t_{r,s})). The number of elements in set B is at most twice the number of tree leaves, i.e., 2 × l(t_{r,s}). However, the total overhead required for this operation in the proposed algorithm is (2 × l(t_{r,s}))/2 = l(t_{r,s}), because only elements with an even value are selected. The time required to calculate a root P is log2 N, where N is at most 2 × l(t_{r,s}), so this step has time complexity O(log2 l(t_{r,s})). Therefore, the worst-case overhead of the proposed algorithm, including the procedures to sort set B and find all of the roots P, is O(l(t_{r,s})) + O(l(t_{r,s})) × O(log2 l(t_{r,s})) = O(l(t_{r,s}) × log2 l(t_{r,s})). □

Algorithm 4:
Input: F: the set of closed sharable leaf nodes.
       Client_Back: the number of clients that have restarted their media players in this round.
Output: A: the sequence in which the sharable nodes are reused.
01 E ← 0, A ← φ, A′ ← φ, i ← 0
02 E ← the capability of the delivery tree to serve the clients
03 if (Client_Back > E)
04   for each number i in set F, in decreasing order
05     while (the sharable node numbered ⌈i/2⌉ is not open)
06       i ← ⌈i/2⌉
07     A′ ← A′ ∪ {i}
08 Sort A′ in decreasing order
09 A ← all of the nodes in the paths that traverse each root in A′ by depth-first search (DFS)
10 return A

2) Resource Reuse: If the number of clients that have restarted their media players in this round is not greater than E, which is the number of clients that the delivery tree leaves can still serve, the clients are directly served by the available leaves. Otherwise, the resource-reuse algorithm (see Algorithm 4) is executed to reuse the fewest sharable nodes without affecting existing clients. Set F includes the leaves that have stopped delivering video streaming; their roots are calculated and placed into set A′ in decreasing order. For each element in A′, the subtree rooted at it is traversed by the DFS, and the nodes along the path are placed into set A. This means that if the root sharable node is not a leaf, then the sharable nodes in its left subtree are reused first, followed by those in its right subtree, recursively. For example, as shown in Fig. 10, attempting to reuse the node numbered 3, which is not a leaf, requires first reusing the node numbered 6 instead of 7. The node sequence in set A is the optimal order in which to reuse the sharable nodes, because it saves the most resources. The main advantage of this algorithm is that it obtains the fewest necessary sharable nodes from the closed sharable nodes and uses them to satisfy all of the clients that
restart their media players, without affecting the clients who are enjoying their streaming.

Theorem 6: The worst-case overhead of the resource-reuse algorithm (see Algorithm 4) is O(l(t_{r,s}) × log2 l(t_{r,s})).

Proof: Radix sorting can be adopted for the proposed algorithm because the elements are small integers, and its overhead is O(l(t_{r,s})). To calculate the capability of the delivery tree to serve the clients, it suffices to determine whether two clients are connected to each leaf, which takes O(l(t_{r,s})) time. The time complexity of the DFS is O(V + E); when it is operated on a tree, in which E = V − 1, the time complexity becomes O(V). The algorithm operates the DFS on set A′, and the overhead is O(n1(t_{r,s})) + O(n2(t_{r,s})) + · · · + O(nk(t_{r,s})); because every node in each tree is visited by at most one DFS operation, n1(t_{r,s}) + n2(t_{r,s}) + · · · + nk(t_{r,s}) ≤ n(t_{r,s}), and the time complexity is O(n(t_{r,s})). The time complexity of all of the other operations on set A′ is O(l(t_{r,s}) × log2 l(t_{r,s})). Therefore, the worst-case overhead of the proposed algorithm, including the operations on set A′ and the DFS, is O(l(t_{r,s}) × log2 l(t_{r,s})) + O(n(t_{r,s})) = O(l(t_{r,s}) × log2 l(t_{r,s})). □
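The left-before-right reuse order produced by the DFS step of Algorithm 4 can be sketched as follows (a minimal illustration; the heap-style numbering and the function signature are assumptions):

```python
def reuse_order(roots, total_nodes):
    # Enumerate the nodes of each recycled subtree in the order they should
    # be reused: depth-first, visiting the left subtree before the right one.
    order = []

    def dfs(node):
        if node > total_nodes:
            return
        order.append(node)
        dfs(2 * node)      # left child first ...
        dfs(2 * node + 1)  # ... then the right child

    for root in sorted(roots, reverse=True):  # as in Algorithm 4, line 08
        dfs(root)
    return order
```

Consistent with the example in the text, reusing the subtree rooted at the non-leaf node 3 (in a 7-node tree) yields node 6 before node 7.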


Fig. 11. PSnet system deployment diagram.

TABLE II
PARAMETERS FOR THE EXPERIMENT

IV. EVALUATION AND IMPLEMENTATION

The experimental results are presented here to demonstrate the effectiveness of the proposed algorithms. First, the construction of the experimental environment is presented, followed by the results of the evaluation of system performance. This section concludes with an evaluation of video quality.

A. Experimental Environment Setup

1) Constructing the Experimental Environment: The experiment was conducted on a real network, with 15 PSnet-Ns deployed at four campuses, namely, Minghsin University of Science and Technology (MUST), National Tsing Hua University (NTHU), National Chiao Tung University (NCTU), and Chung Hua University (CHU), all in Taiwan. One PSnet-N at MUST is the tree root, and the tree is divided into two subtrees. The left subtree contains three parts, i.e., one PSnet-N at MUST, three PSnet-Ns at NCTU, and three PSnet-Ns at NTHU. The right subtree contains two parts, i.e., four PSnet-Ns at MUST and three PSnet-Ns at CHU. Fig. 11 shows the PSnet system deployment diagram. The four campuses were connected by TANET, and 16 clients could test the PSnet system from the test center through wired network connections. The test center was located at NTHU, but it belonged to a different subnet from the PSnet-Ns at NTHU; these two subnets were connected by Fast Ethernet (maximum 100 Mb/s). Hypertext Transfer Protocol (HTTP) video streaming was adopted as the network service in this study, and experimental data were recorded to evaluate the overall performance. Before beginning the experiment, the quality of the network was preevaluated using IxChariot [18]. The parameters listed in Table II were used to preevaluate the experimental environment.

Table III shows the preevaluation results; packets of 1500 bytes were sent over 90 min through the normal networking environment. The next evaluation item was the transmission of data over 90 min by using the Transmission Control Protocol (TCP), so that the time required by network devices to handle exceptionally large packets could be obtained. Finally, the User Datagram Protocol was adopted as the third evaluation item, with the data rate set at 1024 kb/s and the streaming time at 90 min. As shown in Table III, for the throughput of multimedia data streaming, the test traffic file named IPTVv.scr (CISCO IP/TV, MPEG video stream) was used in IxChariot as the parameter to test the capability of the network devices and the packet loss rate. Table III summarizes the average preevaluation results.

2) Evaluating Streaming Quality and Producing the Test Video Program: According to Klaue et al. [19], the peak signal-to-noise ratio (PSNR) is usually adopted as the evaluation index of the objective quality-measurement approach. The PSNR is defined in (6), where YS and YD are the frames at the sender and the receiver, respectively, Vpeak is the peak pixel value, and Ncol and Nrow are the numbers of columns and rows in a frame. The larger the PSNR is, the smaller the difference between YS and YD, and vice versa. Thus

PSNR(n) dB = 20 log10 ( Vpeak / sqrt( (1/(Ncol × Nrow)) × Σ_{i=0..Ncol} Σ_{j=0..Nrow} [YS(n, i, j) − YD(n, i, j)]² ) ).    (6)
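Equation (6) can be transcribed directly in pure Python for checking individual frames; the nested-list frame representation and the helper name are assumptions made for illustration.

```python
import math

def psnr(ys, yd, vpeak=255.0):
    # Frame-level PSNR of (6): ys and yd are same-sized 2-D luminance
    # arrays (sender and receiver frames); vpeak is the peak pixel value.
    rows, cols = len(ys), len(ys[0])
    mse = sum((ys[i][j] - yd[i][j]) ** 2
              for i in range(rows) for j in range(cols)) / (rows * cols)
    if mse == 0:
        return float("inf")  # identical frames
    return 20 * math.log10(vpeak / math.sqrt(mse))
```

Identical frames give an infinite PSNR, while a maximally different frame pair gives 0 dB; real frames fall in between.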


TABLE III SUMMARY OF AVERAGE PREEVALUATED RESULTS

TABLE IV INFORMATION ABOUT THE TESTING VIDEO PROGRAM

In this paper, three standard video test programs with different levels of movement, established by the International Radio Consultative Committee (CCIR), were used. Table IV lists the details of these three video programs. "Akiyo" is a video with little movement: the apparent changes are mainly above Akiyo's neck, particularly her facial expression, with almost no changes in the background. "News" has more movement but is smoother than "Akiyo"; the apparent changes are mainly the two anchors and the dancers in the background. "Football," which has the highest degree of movement, is a video of a football game characterized by the high-speed movement of people and scenes. To increase the accuracy of testing, the same video stream had to be input repeatedly because of its limited length; however, such repetition requires considerable human effort and cost. Therefore, several identical video streams were merged into a single stream, as shown in Fig. 12. In the experiment, 20 identical streams were merged, and black frames were placed in front of and behind each stream to recognize them at the receiver. Hence, the entire video stream was composed of several substreams with black frames at both the head and the tail. Because there are stark differences between the black and the normal frames, the frames were systematically compared through the entire stream to acquire their PSNRs. The appearance of a low PSNR indicated a black frame immediately followed by a normal frame, so a computer program could distinguish the substreams automatically. In this paper, five individual substreams from both the server and the client side were randomly adopted, the PSNR of the substreams was calculated using the MSU Video Quality Measurement Tool [20], and the average of the five values was recorded.

B. Evaluating System Performance

This study involved implementing a prototype of the proposed PSnet system. Windows Media Encoder 9 was used to encode the stream; an IBM laptop was used as the PSnet-S, running the server-side program that received video streams from a streaming source and split the video stream to the root of each subtree. La Fonera routers were used as the PSnet-Ns, with proprietary programs implemented to relay the streaming video. Additionally, an ASUS Eee PC was used as the client to run the client-side program, which received and merged the streaming video from the leaves of the subtrees. Table V lists the experimental conditions.

1) Video Delay: In a live streaming delivery system, the video delay between the source and the client should be as short as possible; video programs at the client and the server are closely synchronized when the average video delay is short. The proposed PSnet system was compared with Goalbit [21], the first open-source worldwide P2P live streaming system. In a P2P system, a buffer mechanism is designed for each peer to provide smooth video playback at the desired quality, with a consequent increase in video delay. Fig. 13 clearly indicates that the average video delay of Goalbit exceeds that of the PSnet system. The average video delay was shorter in the PSnet system because the PSnet-Ns were responsible only for relaying the video stream, without buffers. As shown in Fig. 13, the proposed PSnet system is highly effective and appropriate for delivering live streaming.

2) Redundancy Ratio: The redundancy ratio (R) denotes the ratio of redundant video streaming packets to original packets on the client side. Assuming that C denotes the total video size received by a client and that S denotes the total video size sent by the server, R can be estimated using the following equation:

R = (C − S)/S × 100%.    (7)
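As a small worked example of (7) (the helper name is an illustrative assumption):

```python
def redundancy_ratio(client_bytes, server_bytes):
    # R of (7): extra traffic received by clients relative to what the
    # server actually sent, as a percentage.
    return (client_bytes - server_bytes) / server_bytes * 100.0
```

For instance, clients that collectively received 150 MB of traffic for a 100-MB stream experienced a redundancy ratio of 50%.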

Because of the distributed characteristics of the P2P approach, a peer may download duplicated content from multiple peers,


Fig. 12. Input video streaming.

TABLE V
EXPERIMENTAL CONDITIONS

Fig. 15. Start latency.

Fig. 13. Video delay.

Fig. 14. Redundancy ratio.

with the redundant packets causing wasted bandwidth. In contrast to the P2P approach, the proposed PSnet system uses the TCP/HTTP streaming protocol, and each client downloads independent content from the leaf node of one of the multiple subtrees, with the intention of keeping the stream transmission as smooth as possible and using the bandwidth optimally. The proposed PSnet system was compared with the Goalbit P2P live streaming system. Fig. 14 indicates that PSnet provides a much more favorable average redundancy ratio than Goalbit does. This figure also reveals that PSnet incurs only slight traffic redundancy, indicating highly efficient bandwidth utilization.

3) Start Latency: Start latency denotes the time delay incurred from the instant users start their media players to when

they receive the streaming video. In a live streaming delivery system, the start latency should be kept to a minimum. The proposed PSnet system was compared with the Goalbit P2P live streaming system, and the results are shown in Fig. 15. The average start latency of Goalbit is clearly much larger than that of the PSnet system, because each Goalbit peer spends a considerable amount of time filling its frame buffers. The figure also indicates that Client 1 in the proposed PSnet system required the longest time (approximately 10 s). However, as shown in Fig. 13, the delay in transmitting the video stream was only 0.4 s; after Client 1 started the media player, the server had to open the network sockets of the PSnet-Ns numbered 1, 2, 4, and 8 sequentially to receive and deliver the video stream, which explains why the start latency was as long as 10 s. When Client 2 joined the system, the start latency was only 0.8 s, because the sockets of the PSnet-Ns numbered 1, 2, 4, and 8 had already been opened. An alternative approach would be to open all of the PSnet-N sockets in advance and have them transmit the streaming video; however, doing so would produce considerable unnecessary overhead, because all PSnet-Ns would have to operate even if there were no clients. With the improvements to the proposed PSnet system, the server has to open only the necessary PSnet-N sockets in advance and force these PSnet-Ns to transmit the streaming video. Thus, the start latency of Client 1 was reduced to approximately 0.2 s, far shorter than prior to this adjustment. As shown in Fig. 15, the proposed PSnet system is highly effective, making it feasible for live streaming.

C. Handoff Evaluation

To enhance system availability, a handoff scheme was designed for the proposed PSnet system. When a PSnet-N crashes, a new PSnet-N is selected from the PSnet-P for replacement.
If a PSnet-N in the delivery path of a streaming video fails, the buffer of the media player determines whether


Fig. 16. Playout deadline.

TABLE VI
IDEAL PSNR

the client can continue playing the video. The client remains unaffected only when the PSnet-N replacement procedure is completed before the buffer of the media player empties; otherwise, the video playback pauses. After replicating the experiment 1000 times, the handoff process was found to require an average of only 1.567 s. As shown in Fig. 16, experiments on the playout deadline were performed with various stream bit rates (256, 512, and 1024 kb/s). For each bit rate, a video delivery path was constructed from 15 nodes; six of the nodes in this path were randomly chosen and crashed individually to invoke the handoff procedure. The results indicated that a higher stream bit rate produced a shorter playout deadline, and vice versa, largely because a higher stream bit rate consumes the buffer faster, leaving the client a shorter playout deadline. The highest stream bit rate (1024 kb/s) produced a playout deadline averaging 5.301 s, whereas the lower stream bit rates (512 and 256 kb/s) extended the playout deadlines to 6.723 and 11.673 s, respectively. Because the handoff procedure lasted only 1.567 s, it was able to recover from a node crash in time, thereby enhancing system availability.

D. Video Quality Evaluation

1) PSNR: Before evaluating the quality of the video, the ideal PSNR had to be obtained directly from the streaming server via a crossover cable, so that the PSNR was nearly unaffected by network transmission. Table VI shows the ideal PSNR averaged over 50 tests. Three types of videos were used on the various video delivery platforms for testing. The results shown in Fig. 17(a) indicate that, with the proposed system, the "Akiyo" and "News" videos were stable, but the "Football" video was unstable. Fig. 17(b) indicates that the "Akiyo" video was stable, but the "News" and "Football" videos were unstable on the Goalbit

Fig. 17. PSNR comparison results with different networks. (a) PSNR of PSnet. (b) PSNR of Goalbit.

TABLE VII
AVERAGE PSNR IN DIFFERENT NETWORKS

system. The test results are summarized in Table VII. The worst average PSNR in the experiment was 25.40657, on the "Football" video, differing from the ideal value (27.26874) by only 1.86217. This suggests that the video quality at the receivers of the PSnet system was satisfactory. The 167th frame at the receiver, shown in Fig. 19(b), illustrates the following problem: because of network jitter and congestion, the media player reused the 166th frame [shown in Fig. 19(a)] in place of the 167th frame during playback, causing lag and unsmooth streaming. However, the PSNR between Figs. 18(b) and 19(b) was still as high as 31.05133. This is because the "Akiyo" video contains only slight movement: the largest portion of the background remains unchanged, and the difference between frames appears only in the area of Akiyo's head. Because the background encompasses a considerably larger area than Akiyo's head does, the PSNR value does not reveal how the lag affected the video stream. Therefore, the PSNR of a video with a low level of movement is less sensitive to network jitter or congestion than


Fig. 18. Video stream sequence at the sender side. (a) 166th frame. (b) 167th frame. (c) 168th frame.

Fig. 19. Video stream sequence at the receiver side. (a) 166th frame. (b) 167th frame. (c) 168th frame.

Fig. 20. Continuity ratio.

those with a high level of movement. The continuity ratio is therefore another video quality index at the receiver: a higher rate of duplicated frames implies a lower degree of smoothness in the video stream. A detailed experiment on the continuity ratio was performed in this study.

2) Continuity Ratio: The continuity ratio determines whether the video stream is continuous and whether frames are discarded; in other words, it determines whether users can watch the program smoothly. The continuity ratio can be calculated using the following equation:

Continuity Ratio = (Total continuous streaming frames / Total streaming frames) × 100%.    (8)
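Equation (8) can be computed at the receiver by counting frames that merely repeat their predecessor, since a duplicated frame indicates a stall; the frame-identifier representation and helper name are illustrative assumptions.

```python
def continuity_ratio(frames):
    # Continuity ratio of (8): the share of received frames that are not
    # duplicates of the previous frame, as a percentage.
    if not frames:
        return 0.0
    continuous = 1 + sum(1 for a, b in zip(frames, frames[1:]) if a != b)
    return continuous / len(frames) * 100.0
```

For example, the frame sequence 1, 2, 2, 3 contains one duplicate, giving a continuity ratio of 75%.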

As shown in Fig. 20, all of the video streams on the PSnet system had a continuity ratio exceeding 99.5%, which means that users could watch the programs smoothly. The Goalbit system performed worse, with continuity ratios ranging between 97.1% and 99.8%. Because the Goalbit system is less stable than the PSnet system, its users often perceived stuttering when watching video programs.

V. CONCLUSION

This paper has presented a novel concept that is intended to exploit more thoroughly the sharable bandwidth of public-shared networks, such as FON networks, through the construction of an efficient, robust, and high-availability video delivery system. Two algorithms were designed to optimize public-shared bandwidth, wherein the needs of all clients are addressed despite minimal usage of system resources. In addition, a resource management scheme was developed for recycling and reusing resources to improve the continuity of streaming experienced by clients and to reduce the overall system load on the devices involved. The time complexity of all of the proposed algorithms was analyzed. Implementing the proposed system demonstrates the overall feasibility of the concept. According to the results of comparing the proposed system with Goalbit, the proposed system is highly effective and appropriate for delivering live streaming. In the future, the MapReduce algorithm is expected to be used to manage the bandwidth of FON.

REFERENCES

[1] T. Kim and M. H. Ammar, "A comparison of heterogeneous video multicast schemes: Layered encoding or stream replication," IEEE Trans. Multimedia, vol. 7, no. 6, pp. 1123–1130, Dec. 2005.
[2] D. Wu, Y. T. Hou, and Y.-Q. Zhang, "Scalable video coding and transport over broad-band wireless networks," Proc. IEEE, vol. 89, no. 1, pp. 6–20, Jan. 2001.
[3] Q. Zhang, Q. Guo, Q. Ni, W. Zhu, and Y.-Q. Zhang, "Sender-adaptive and receiver-driven layered multicast for scalable video over the Internet," IEEE Trans. Circuits Syst. Video Technol., vol. 15, no. 4, pp. 482–495, Apr. 2005.
[4] E. Setton and B. Girod, Peer-to-Peer Video Streaming. New York, NY, USA: Springer-Verlag, 2007.
[5] N.-F. Huang, Y.-J. Tzang, H.-Y. Chang, and C.-W. Ho, "Enhancing P2P overlay network architecture for live multimedia streaming," Inf. Sci., vol. 180, no. 17, pp. 3210–3231, Sep. 2010.
[6] N.-F. Huang, Y.-J. Tzang, H.-Y. Chang, and C.-S. Ma, "Construction of an efficient ring-tree-based peer-to-peer streaming platform," in Proc. NCM, Aug. 2010, pp. 75–80.
[7] C. Wu, B. Li, and S. Zhao, "On dynamic server provisioning in multichannel P2P live streaming," IEEE/ACM Trans. Netw., vol. 19, no. 5, pp. 1317–1330, Oct. 2011.
[8] L. Zhou, Y. Zhang, K. Song, W. Jing, and A. V. Vasilakos, "Distributed media services in P2P-based vehicular networks," IEEE Trans. Veh. Technol., vol. 60, no. 2, pp. 692–703, Feb. 2011.
[9] B. Zhang, S. Jamin, and L. Zhang, "Host multicast: A framework for delivering multicast to end users," in Proc. IEEE INFOCOM, New York, NY, USA, Jun. 2002, pp. 1366–1375.


[10] D. Pendarakis, S. Shi, D. Verma, and M. Waldvogel, "ALMI: An application level multicast infrastructure," in Proc. 3rd USENIX USITS, San Francisco, CA, USA, Mar. 2001, pp. 49–60.
[11] FON official website. [Online]. Available: http://www.fon.com
[12] N.-F. Huang, H.-Y. Chang, Y.-W. Lin, and K.-S. Hsu, "A novel bandwidth management scheme for video streaming service on public-shared network," in Proc. IEEE ICC, Beijing, China, May 2008, pp. 1755–1759.
[13] N.-F. Huang, H.-Y. Chang, T.-C. Wang, Y.-S. Lin, and Y.-W. Lin, "An efficient and locality-aware resource management scheme for SVC-based video streaming system on public-shared network," in Proc. APCC, Shanghai, China, Oct. 2009, pp. 682–685.
[14] H. Schwarz, D. Marpe, and T. Wiegand, "Overview of the scalable video coding extension of the H.264/AVC standard," IEEE Trans. Circuits Syst. Video Technol., vol. 17, no. 9, pp. 1103–1120, Sep. 2007.
[15] N.-F. Huang, H.-Y. Chang, Y.-W. Lin, H.-C. Liu, and K.-S. Hsu, "On the complexity of bandwidth management problem for scalable coding video streaming on public-shared network," IEEE Commun. Lett., vol. 13, no. 1, pp. 61–63, Jan. 2009.
[16] S. Itaya, N. Hayashibara, T. Enokido, and M. Takizawa, "Asynchronous multi-source streaming protocol to realize high-performance multimedia communication," in Proc. DEXA Workshop, Aug. 2005, pp. 116–120.
[17] S. Itaya, T. Enokido, and M. Takizawa, "A high-performance multimedia streaming model on multi-source streaming approach in peer-to-peer networks," in Proc. IEEE AINA, Mar. 2005, pp. 27–32.
[18] IxChariot official website. [Online]. Available: http://www.ixchariot.com/products/datasheets/ixchariot.html
[19] J. Klaue, B. Rathke, and A. Wolisz, "EvalVid—A framework for video transmission and quality evaluation," in Proc. 13th Int. Conf. Modelling Techniques and Tools for Computer Performance Evaluation, Urbana, IL, USA, Sep. 2003, pp. 255–272.
[20] MSU Video Quality Measurement Tool official website. [Online]. Available: http://compression.ru/video/quality_measure/video_measurement_tool_en.html
[21] Goalbit official website. [Online]. Available: http://goalbit.sourceforge.net

Nen-Fu (Fred) Huang (SM'06) received the Ph.D. degree in computer science from National Tsing Hua University (NTHU), Hsinchu, Taiwan, in 1986. From 1997 to 2000, he was the Chairman of the Department of Computer Science, NTHU, where, since 2008, he has been a Distinguished Professor. He is the Founder of BroadWeb Corporation (www.broadweb.com) and served as its CEO and Chairman from 2002 to 2006. He is also the Founder of NetXtreme Corporation (www.netxtream.com). He has published more than 200 journal and conference papers, including more than 50 papers at IEEE INFOCOM/ICC/GLOBECOM conferences. His current research interests include cloud/peer-to-peer-based interactive video streaming technologies, network security, high-speed switches/routers, mobile and wireless networks, and IPv6-enabled sensor networks.
Dr. Huang was the Editor of the Journal of Information Science and Engineering from 1997 to 2003. He also served as the Guest Editor of the IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS Special Issue on "Wireless Overlay Networks Based on Mobile IPv6" in 2004. Since 2008, he has been an Editor of the Journal of Security and Communication Networks. He received the Outstanding Teaching Award from NTHU in 1993, 1998, and 2008; the Outstanding University/Industrial Cooperation Award from the Ministry of Education, Taiwan, in 1998; the Outstanding IT People Award during IT Month in Taiwan in 2002; the Technology Transfer Award from the National Science Council of Taiwan in 2004; the Technology Creative Award from the Computer and Communication Research Center, NTHU, in 2005; and the Outstanding University/Industrial Collaboration Award from NTHU in 2010. He also served as a Program Chair for the 14th International Conference on Information Networks in 2000 and for the Taiwan Academic Network Conference in 2002.
He also served as a Program Cochair and a Keynote Speaker for the IEEE International Conference on Selected Topics in Mobile and Wireless Networking in 2012.

Yuan-Wei Lin received the B.S. and M.S. degrees in computer science from National Tsing Hua University, Hsinchu, Taiwan, in 2005 and 2007, respectively. He is currently a Software Engineer with the Industrial Technology Research Institute, Hsinchu, where he works on network applications. His research interests include live video streaming, network management, and Worldwide Interoperability for Microwave Access (WiMAX)/Long-Term Evolution technology.

Hong-Yi Chang (M'11) received the B.S. degree in information management from Minghsin University of Science and Technology, Hsinchu, Taiwan, in 2000; the M.S. degree in computer science from Chung Hua University, Hsinchu, in 2003; and the Ph.D. degree from National Tsing Hua University, Hsinchu, in 2010, under the supervision of Prof. N.-F. Huang. Since August 2011, he has been an Assistant Professor with the Department of Management Information Systems, National Chiayi University, Chiayi, Taiwan. His research topics are mainly related to peer-to-peer live video streaming, networks and applications, cloud computing, IPv6 networks, and resource management.

Yih-Jou Tzang received the M.S. degree in computer science and engineering from Yuan Ze University, Jhongli, Taiwan, in 1995 and the Ph.D. degree in computer science from National Tsing Hua University, Hsinchu, Taiwan, in 2010. He is an Assistant Professor with the Department of Information Management, Hsing Wu University of Science and Technology, New Taipei, Taiwan. His main research interests include peer-to-peer overlay networks, network security, multimedia networking, cloud computing, and other technologies that enhance the effectiveness of network communication in the Internet.