MDC-Ca: Efficient Caching Management Strategy for CCN Using Multiple Description Coding †
Fuxing Chen†, Weiyang Liu†, Hui Li†‡∗, Wensheng Chen†‡, Jun Lu†‡ and Dagang Li†
†School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, China
‡Institute of Big Data Technologies, Peking University, China
∗Corresponding Author
Email: [email protected], [email protected], [email protected]
Abstract—Due to the explosive growth of multimedia content (especially video) over the Internet, content-centric networking (CCN) has been proposed to relieve the problems of modern bandwidth-intensive Internet usage patterns. However, current streaming-media coding is designed for video services over IP networks, and little research has focused on efficient streaming-media coding for the CCN paradigm. This motivates us to find a video coding scheme suited to improving the performance of CCN. This paper proposes an advanced caching management strategy that applies Multiple Description Coding (MDC) to content items, termed MDC-Ca. The core parts of CCN, e.g., location-independent naming, name-based routing, and the in-network caching strategy, are all adjusted to this content communication model. Moreover, MDC-Ca can cope with the irregular distribution of content chunks across the caches of network nodes. MDC-Ca is studied in a mathematical analysis model and performs well in simulation; we further emulate it on a practical network. The experimental results from both simulations and practical emulations show the superiority of the proposed caching management strategy.
I. INTRODUCTION
The IP protocol is designed around a host-to-host conversation model, which has facilitated ubiquitous interconnectivity on the current Internet. However, such a conversation model is no longer suitable for handling the explosion of multimedia content [1], and a novel network architecture is in high demand to resolve this problem. Content-centric networking (CCN) [2] is proposed to this end. Specifically, CCN retrieves content by name, which shifts the fundamental paradigm of the Internet from the host-to-host IP conversation model to a content-centric communication model. In such a model, it is no longer necessary to fetch content from particular servers; instead, a request carrying the content name is sent to the network, without considering where the content is stored or cached. To put this strategy into effect, CCN adopts in-network caching, also called the Content Store (CS), so that once content is cached, requests can be served by nearby router nodes before reaching the original content source. Therefore, the issues of cache size, cache management, and in-network performance are important to study, and many effective methods [1], [3] have been proposed. Facing this explosion of multimedia streams, a reliable and efficient multimedia routing and diffusion strategy is needed to improve traffic efficiency, quality of service (QoS), and quality of experience (QoE) in CCN. Current multimedia stream coding was designed for the traditional end-to-end communication paradigm of IP networks. In CCN, it is difficult to maintain a
Fig. 1. An example of the basic CCN function in a linear topology (users U1, U2, U3, routers r0, r1, r2, and the server).
high QoS for users, since the video chunks must be retrieved in the correct order. Hence, streaming-media coding needs to be redesigned for CCN to fit the nature of in-network caching and to solve the sequence-sensitivity problem of video in CCN. As an example, assume that a streaming video file provided by a server is segmented into 40 chunks (numbered 0 to 39), and that each CCN node has a caching capacity of 10 chunks, as illustrated in Fig. 1. Under the basic cache-everywhere strategy and the Least Recently Used (LRU) replacement policy, the caches of the three CCN nodes will all be filled with chunks {30, . . . , 39} after user U3 retrieves all the video chunks. If user U1 now issues requests for chunks 16 to 39, these requests will be forwarded to the server, because the caching nodes r0, r1, r2 hold an incomplete set of the requested chunks, and the chunks {30, . . . , 39} stored in the caching nodes will be replaced by incoming chunks. Once U1 has retrieved the requested chunks, nodes r0, r1, r2 will again hold chunks {30, . . . , 39}, weakening the significant advantage of in-network caching. The proposed caching management strategy with Multiple Description Coding (MDC), MDC-Ca, can effectively solve this problem. MDC-Ca removes the decoding dependence among video chunks; that is, there is no particular order in which the video must be received and decoded, and the more video chunks a user receives, the better the quality of the decoded video. With MDC-Ca, the caches in r0, r1, r2 can be fully utilized to serve the requests issued by users. The detailed design of MDC-Ca can be found in Section III. Further, the performance of MDC-Ca is verified in a mathematical model and analyzed by comparing current media coding with MDC. The remainder of this paper is organized as follows. Section II briefly introduces the fundamentals of CCN and MDC. Section III gives a detailed description of the new caching management strategy combined with MDC, together with the corresponding analysis model. Section IV presents the simulation and emulation results. Finally, Section V gives concluding remarks and future work.
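The cache-everywhere scenario described above can be reproduced with a few lines of simulation; the code below is an illustrative sketch (not part of the paper's toolchain) showing how all three caches converge to the same final chunks.

```python
# A quick simulation of the cache-everywhere (LCE) + LRU scenario above,
# assuming a linear path of three caches of capacity 10 and a user that
# requests the 40 chunks in order.

from collections import OrderedDict

class LruCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()          # chunk -> None, kept in LRU order

    def insert(self, chunk):
        if chunk in self.store:
            self.store.move_to_end(chunk)   # refresh recency
        else:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)  # evict least recently used
            self.store[chunk] = None

caches = [LruCache(10) for _ in range(3)]   # r0, r1, r2
for chunk in range(40):                     # U3 retrieves chunks 0..39 in order
    for cache in caches:                    # LCE: every node on the path caches
        cache.insert(chunk)

# All three caches hold identical copies of the last ten chunks:
assert all(set(c.store) == set(range(30, 40)) for c in caches)
```

The redundancy is total: thirty cache slots end up holding only ten distinct chunks, which is exactly the waste MDC-Ca targets.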
978-1-4799-5952-5/15/$31.00 ©2015 IEEE
Fig. 2. Illustration of the MDC-based scheme in CCN (users U1, U2, U3, routers R1–R6, server S1; the MDC chunks [0–7] are retrieved in arbitrary order).
II. RELATED WORK

A. Content-Centric Networking

In this section, we briefly introduce some preliminaries of CCN. The naming of CCN is hierarchical and modeled on the standard form of a Uniform Resource Identifier (URI). In accordance with URI names, content items are split into chunks, which are stored in one or more repositories. Users retrieve the chunks using a receiver-driven transport protocol based on per-chunk queries that trigger data-chunk delivery. For any interest received from an input interface, a node first checks whether the requested chunk is cached in its local CS. If so, the chunk is sent back through the input interface. Otherwise, the interest is forwarded to the interface(s) indicated by the Forwarding Information Base (FIB). Every intermediate node keeps track of pending queries in a Pending Interest Table (PIT), in order to deliver the requested chunks back to the receiver, and temporarily caches data chunks in an LRU-managed cache. To improve the performance of multipath routing in CCN, the caching and replacement strategy needs to be optimized. We therefore propose an MDC-based method that splits content items into MDC chunks, which are insensitive to order. The necessary background on MDC is given below.

B. Multiple Description Coding

MDC [4] has proven to be a promising method to enable high quality of experience (QoE) and enhance error resilience in video delivery. In general, MDC generates several video descriptions of equal rate and equal importance, so that each description alone provides low but acceptable video quality, while more descriptions lead to higher quality. Apart from error resilience, MDC helps to achieve traffic dispersion and load balance in the network, effectively relieving network congestion. More importantly, combining MDC for video with CCN helps improve video delivery efficiency and the QoE of video and real-time interactive applications. However, these benefits of MDC come with an inevitable price. In the error-free scenario, MDC needs more bits to achieve the same quality as a conventional single-description coder; the redundant bits are intentionally added for resilience against transmission errors. In CCN, video transmission is thus enhanced at the affordable cost of a lower video compression rate. Fig. 2 illustrates MDC-Ca in a simple network. Using MDC instead of the conventional coding technique is shown to give gains in CCN: user 1 has the best video quality, having received all the video chunks, while, unlike with conventional coding, user 2 is still able to watch the video with just a few chunks received. Most importantly, the order in which chunks are received does not matter in the MDC-based scheme.
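The MDC idea can be illustrated with a toy sketch (this is not a real video codec and not the coder used in the paper): a signal is split into two equally important descriptions, either of which alone yields a coarse reconstruction, while both together yield the exact signal.

```python
# Toy illustration of Multiple Description Coding: split samples into two
# "descriptions" of equal importance; decode from whichever ones arrive.

def encode_mdc(samples):
    """Split samples into two descriptions: even-indexed and odd-indexed."""
    return samples[0::2], samples[1::2]

def decode_mdc(d_even=None, d_odd=None):
    """Reconstruct from the descriptions that arrived; if one is missing,
    approximate the missing samples by repeating a neighbour."""
    if d_even is not None and d_odd is not None:
        out = []
        for e, o in zip(d_even, d_odd):
            out.extend([e, o])
        out.extend(d_even[len(d_odd):])   # handle odd-length input
        return out
    received = d_even if d_even is not None else d_odd
    out = []
    for s in received:                    # crude interpolation: duplicate sample
        out.extend([s, s])
    return out

signal = [10, 11, 12, 13, 14, 15]
even, odd = encode_mdc(signal)
assert decode_mdc(even, odd) == signal                      # both: exact
assert decode_mdc(d_even=even) == [10, 10, 12, 12, 14, 14]  # one: degraded
```

The key property exploited by MDC-Ca is visible here: no description has to arrive first, and any subset is decodable, at the cost of the redundancy noted above.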
III. THE MANAGEMENT STRATEGY OF MDC-CA
A. Naming

Since MDC is infused into CCN, the design of the content naming scheme, caching policy, cache replacement, and PIT needs to be modified correspondingly. For naming, the widely used hierarchical naming structure is designed to improve routing aggregation, which works well for IP addresses. For simplicity and convenience, every chunk is allocated a unique name under the CCN naming policy; e.g., chunk C5 of a video may be named /youku/videos/Kung-Fu-Panda.mpg/05. Given the huge number of content items, this further aggravates the name-routing aggregation problem. Moreover, the resulting enormous name FIB has become a bottleneck in CCN routing and slows down the development of CCN itself. In this paper, benefiting from MDC, we shorten the name length by omitting the chunk number, thereby increasing aggregation. It then becomes unnecessary to specify which chunk to request: the user only needs to repeat the request for /youku/videos/Kung-Fu-Panda.mpg to obtain the remaining chunks. Furthermore, the simplified chunk naming helps to reduce the size of the name FIB.

B. Caching policy

Under the LCE caching policy of CCN, data chunks are stored independently at every node along the path back to the user, which causes problems such as low utilization of the in-network caches. Therefore, we introduce a global popularity-based caching policy that incorporates MDC into our design, termed the caching policy of the MDC-Ca strategy. We add a new principle to the caching algorithm: do the best to distribute the chunks of the same video object with no repeated chunk copies along the path from the server to the user. The pseudo-code of the algorithm is described in Algorithm 1, and the new policy is realized through the following methods:

1) A new label h, counting the number of nodes that the request interest packet has passed through, is added to the packet header.

2) Every chunk coded with MDC carries a Circle To Caching (CTC) mark with value α, as given in equation (2). It is calculated from the topology information and the number of hops from the user to the content server.

3) The CTC value α ≥ 0 is decreased by one each time the data packet passes through a CCN router. If a chunk is sent from a content server, the CTC is set to α = 0. If a chunk is stored in the CS of a CCN router, the CTC α is reset to a default value determined by the network topology; the chunk then cannot be stored again until α reaches 0.

4) If 0 ≤ Y_i^f < X_i (where X_i and Y_i^f denote the cache size of node V_i and the number of chunks of content f cached at V_i, respectively), the chunk C_f^j (chunk j of content file f) is cached in the CS when it passes through node V_i; otherwise, when Y_i^f = X_i, the chunk is forwarded to the next node. In other words, chunks are never replaced by chunks belonging to the same video at a CCN node.

To distribute the chunks of a content uniformly, Y_i^f is limited to a fixed value calculated from the current topology using graph theory. Let W be the number of paths between user and server, and let h_{i,j} be the number of hops of a path (0 < i ≤ v, 0 < j ≤ W, with v the number of nodes in the topology). The average distance r of the W paths is

r = \frac{1}{W} \sum_{i=1}^{v} \sum_{j=1}^{W} h_{i,j}    (1)
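The average distance r of equation (1) and the CTC default value α = S_f/r of equation (2) are simple to compute; the sketch below uses made-up hop counts for illustration.

```python
# Sketch of computing the average path distance r (eq. (1)) and the CTC
# default value alpha = S_f / r (eq. (2)). The hop counts below are
# illustrative, not taken from any topology in the paper.

def average_distance(hop_counts):
    """hop_counts[j] = number of hops of path j between user and server."""
    W = len(hop_counts)                  # number of user-server paths
    return sum(hop_counts) / W

def ctc_default(num_chunks, hop_counts):
    """CTC value alpha for a content of S_f = num_chunks chunks."""
    return num_chunks / average_distance(hop_counts)

hops = [3, 4, 5]                         # three user-server paths
assert average_distance(hops) == 4.0
assert ctc_default(40, hops) == 10.0     # S_f = 40 chunks -> alpha = 40 / 4
```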
The routing structure adopts the semi-centralized name routing scheme proposed in [5]: the Routing Information Base (RIB), which contains all the routes, is calculated from the current topology in a controller. The RIB contains all the name routes and serves FIB updates whenever a forwarding entry for an interest packet is missing. According to the RIB, the CTC can be calculated as

\alpha = \frac{S_f}{r} = \frac{W S_f}{\sum_{i=1}^{v} \sum_{j=1}^{W} h_{i,j}}    (2)

where S_f is the number of chunks of content f.

Algorithm 1. Caching Strategy of MDC-Ca
Input: C_f^j, Y_i^f, X_i, α, C_f(i)
Output: the action taken on C_f^j: Cache or Forward
1: if Y_i^f = X_i then
2:   Search the PIT for the Face that the interest packet came from
3:   Forward the chunk back to the destination Face
4:   α ← α − 1
5: else
6:   if α = 0 then
7:     Execute the caching policy at the node
8:     Reset α to the default value
9:     Forward the new data packet to the corresponding Face
10:  end if
11: end if
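The CTC-based decision of Algorithm 1 can be sketched as follows; the node and packet structures are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the CTC-based caching decision (Algorithm 1): a node
# caches a passing chunk only when its CTC counter has expired and its
# share of this content is not yet full; otherwise it decrements and forwards.

class CcnNode:
    def __init__(self, cache_size, default_ctc):
        self.cache = {}                  # content name -> set of cached chunk ids
        self.cache_size = cache_size     # X_i: per-content cache budget
        self.default_ctc = default_ctc   # topology-derived reset value

    def on_data(self, name, chunk_id, ctc):
        """Return (action, new_ctc) for a data chunk passing through."""
        cached = self.cache.setdefault(name, set())
        if len(cached) >= self.cache_size:
            # Y_i^f = X_i: this node holds its share of the content, pass on.
            return "forward", max(ctc - 1, 0)
        if ctc == 0:
            # CTC expired: this node may cache, then resets alpha downstream.
            cached.add(chunk_id)
            return "cache", self.default_ctc
        # Still inside the no-cache circle: decrement and forward.
        return "forward", ctc - 1

# Three nodes on the path server -> ... -> user; chunks leave the server
# with alpha = 0, and a default CTC of 1 spaces cached copies apart.
path = [CcnNode(cache_size=2, default_ctc=1) for _ in range(3)]
ctc = 0
actions = []
for node in path:
    act, ctc = node.on_data("/demo/video", chunk_id=7, ctc=ctc)
    actions.append(act)
print(actions)   # ['cache', 'forward', 'cache']
```

With a default CTC of 1, caching alternates along the path, so consecutive nodes do not store redundant copies of the same chunk.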
C. Replacement Policy of MDC-Ca

LRU and Least Frequently Used (LFU) have become widely popular replacement strategies in CCN [6], [7]. In this paper, a new replacement policy for MDC-Ca, based on LRU, is proposed to match the properties of MDC. When a new chunk C_f^j ∉ C_f(i) (the set of chunks of content f cached at node V_i) passes through the node, it is stored at V_i if and only if Y_i^f < X_i and α = 0. Conversely, if Y_i^f = X_i, the chunk C_f^j is not cached at V_i and is forwarded to the next node according to the PIT; if Y_i^f < X_i, the replacement strategy of MDC-Ca is executed. In the other case, where chunk C_f^j is already contained in the cache of node V_i (C_f^j ∈ C_f(i)), the chunk has already been served from node V_i and should be regarded as a duplicate. C_f^j is then filtered out to improve the communication efficiency of CCN, and a new request for content f is generated according to the PIT at node V_i and sent out on the face from which chunk C_f^j arrived. The pseudo-code of the replacement policy of MDC-Ca is given in Algorithm 2.

Algorithm 2.
Replacement Strategy of MDC-Ca
Input: C_f^j, Y_i^f, X_i, α, C_f(i)
Output: the action taken on C_f^j: Filter, Replace, or Forward
1: if C_f^j ∈ C_f(i) then
2:   Filter out C_f^j
3:   Generate a new request interest packet for C_f and send it out on the target Face
4: else
5:   if Y_i^f < X_i then
6:     Execute the replacement policy at the node with LRU
7:   end if
8:   if Y_i^f = X_i then
9:     Forward the new data packet to the corresponding Face
10:  end if
11: end if

Some interest packets request the same chunk but arrive from different faces. In the current CCN communication model, there is no way to distinguish whether such requests were issued by the same end user within a short period, and this problem cannot be avoided in MDC-Ca; yet filtering duplicated requests is an increasingly important function for it. Therefore, a user identity is added to the interest packet header to differentiate the origin of the interests. The request/response communication process can be described as follows. The total number of requests q_{i,·}^f for content f issued by the same user is assumed to be N_q. If N_q ≤ Y_i^f, all the requests can be served by supplying the chunks in C_f(i) as responses; if N_q > Y_i^f, the overhanging requests are forwarded to the next node according to the FIB. The same process is then executed for requests issued by other users.
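The serve-or-forward decision for one user's repeated requests can be sketched in a few lines; the data structures are illustrative, not the paper's implementation.

```python
# Sketch of the per-user request handling described above: a node serves up
# to Y_i^f of a user's requests for one content from its local chunk set,
# and forwards the overhang upstream according to the FIB.

def handle_requests(cached_chunks, num_requests):
    """Split a user's num_requests for one content into (served, forwarded).

    cached_chunks: chunk ids of this content held in the local CS,
    so Y_i^f = len(cached_chunks).
    """
    served = min(num_requests, len(cached_chunks))
    forwarded = num_requests - served    # overhang goes to the next node
    return served, forwarded

# Node holds 10 distinct MDC chunks; one user repeats the content name 25
# times (N_q = 25), a second user only 8 times (tracked separately thanks
# to the user id in the interest header).
assert handle_requests(set(range(10)), 25) == (10, 15)
assert handle_requests(set(range(10)), 8) == (8, 0)
```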
To explain the advantage of the proposed scheme, the example from Fig. 1 is used again here. A video file, named Kung-Fu-Panda.mpg, is provided by the server and encoded into 40 chunks with MDC. Each CCN node has a caching capacity of X = 10 chunks, and the cache occupancy is initialized to Y = 0. As a series of requests issued by U1 for chunks of Kung-Fu-Panda.mpg continually arrives at the server, the corresponding chunks are retrieved. In the initial period, the chunks C_f(r2)[0, 9] (the set of chunks C_f^0 to C_f^9 cached at node r2) are stored in the local CS of CCN router r2. In the same way, CCN routers r1 and r0 store the chunks C_f(r1)[10, 19] and C_f(r0)[20, 29] in their local caches, respectively. After that, if user U2 requests Kung-Fu-Panda.mpg, router r0 provides the chunks C_f(r0)[20, 29], and the requested video can already be played smoothly thanks to MDC. If a significantly higher quality of the current video is desired, more requests for Kung-Fu-Panda.mpg are issued by U2, and router r1, router r2, and the server then provide the chunks C_f(r1)[10, 19], C_f(r2)[0, 9], and C_f(Server)[30, 39], respectively. As a result, every node cache achieves higher utilization than in the example of the introduction. When all the conversations have finished, the chunks C_f(r0)[20, 29], C_f(r1)[10, 19], and C_f(r2)[0, 9] remain stored in the CSs of r0, r1, r2 without any unnecessary replacement operations. To some extent, the proposed caching scheme avoids unnecessary computation at the nodes.
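The chunk distribution of this example can be checked with a quick sketch: under the "no repeated copies along the path" principle, three caches of capacity 10 end up holding disjoint ranges of the 40-chunk file instead of three identical copies. The placement function below is an illustrative first-fit abstraction of that principle, not the paper's implementation.

```python
# Sketch of the disjoint chunk placement that MDC-Ca produces along a path
# of three caches (capacity 10 each) for a 40-chunk MDC-coded file.

def mdc_ca_fill(num_chunks, path_caches, capacity):
    """First-fit placement along the path from the server towards the user:
    each chunk is cached at the first node that still has room, and at most
    once on the path (no repeated copies)."""
    for chunk in range(num_chunks):
        for cache in path_caches:
            if len(cache) < capacity:
                cache.add(chunk)
                break                    # cached once; do not replicate

r2, r1, r0 = set(), set(), set()         # r2 is closest to the server
mdc_ca_fill(40, [r2, r1, r0], capacity=10)
assert r2 == set(range(0, 10))
assert r1 == set(range(10, 20))
assert r0 == set(range(20, 30))          # chunks 30-39 remain only at the server
```

All thirty cache slots now hold distinct chunks, so any user on the path can assemble 30 of the 40 descriptions without contacting the server.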
D. Miss Probability Model

In this subsection, we set out to find an analysis model and metric for the performance comparison between MDC-Ca and other caching strategies. Inspired by the data transfer model for CCN [8], we improve the model of cache sharing under limited resources for MDC-Ca in a network topology as shown in Fig. 1. Specifically, we redesign the data format of MDC-Ca and modify the caching management strategy to reduce the miss probabilities and the average content delivery time. The main formulas of the model are based on [9]. We formulate the content request process and provide an explicit characterization of the hit/miss probabilities at the caches along the path from the user to the repository of servers, under the assumption of unlimited upstream bandwidth to the servers. The downstream path to the user is characterized by finite link capacity (1 Gbps). The arrival process of the requests generated by users is modeled by a Markov Modulated Poisson Process (MMPP) [10]. Requests for content items in class c are generated according to a Poisson process of intensity η_c = η q_c (η is the content request rate), and the requested content is uniformly picked among the m content items in the popularity class (η ≡ η(i = 1)). The miss probability model is derived from the data transfer model proposed by Carofiglio et al. in [9], modifying the indispensable variables in the light of the new caching policy obtained by applying MDC to CCN. The stationary miss probability for chunks of class c at the first hop is

p_c(i = 1) = e^{-q_c t \left(\frac{S}{m}\right)^{\beta}}    (3)

for large S (the caching size of a node), where 1/t = d \sigma^{\beta} \Gamma(1 - \frac{1}{\beta})^{\beta} and q_c = d/c^{\beta}. We now state the main result on the miss probabilities at hop i > 1 for the topology in Fig. 1. The MMPP content request process arriving at the content store at the i-th hop has rate η(i) and popularity distribution

q_c(i) = \frac{q_c \prod_{z=1}^{i-1} p_c(z)}{\sum_{y=1}^{C} q_y \prod_{z=1}^{i-1} p_y(z)}, \quad c = 1, \ldots, C    (4)

where q_c = d/c^{\beta}, β > 1. Then, for all i ∈ (1, N], it holds that

\log p_c(i) = \prod_{y=1}^{i-1} \left(\frac{S_{y+1}}{S_y}\right)^{\beta} p_c(y) \cdot \log p_c(i = 1).    (5)

IV. SIMULATIONS, EXPERIMENTS AND RESULTS

In this section we give simulation results for the performance comparison between MDC-Ca and other caching strategies of interest. Furthermore, an emulation verification is presented on the practical topology shown in Fig. 4. We construct a topology of CCN nodes and implement the proposed data caching and forwarding policy as well as the receiver-driven transport protocol. The simple transport protocol is assumed to use a fixed window size for chunk requests, based on PARC's CCNx. The Routing Information Base (RIB) is calculated according to the publish-and-subscribe routing protocol in a semi-centralized controller, and the FIB of the CCN nodes is updated from the RIB as needed [5].

A. Simulations on Icarus

The simulator Icarus¹ is based on [11] and configured as follows: the cache size X is a constant; the content requests q are modeled as a Poisson process, while the content popularity follows a Zipf distribution. The performance of MDC-Ca is evaluated with the following two metrics [12].
1) Cache hit ratio (1 − p). Over a fixed period, let Ri be the number of interest packets matched by cached content chunks and Rn the total number of request interest packets. The cache hit ratio of chunks can then be expressed as 1 − p = Ri/Rn; it reflects the performance and efficiency of the caching policy.

2) Average delivery delay (latency). A content is assumed to be encoded into n chunks by MDC. Let t1 be the point in time when the first interest packet is issued and tl the point in time when the last content packet is retrieved. The average delivery delay t̄ of a chunk can then be expressed as t̄ = (tl − t1)/n. This metric characterizes the performance of a caching strategy under link congestion and reflects link utilization.

The metrics above are measured in three topologies with various cache capacity sizes and different content popularity skewness, expressed by the Zipf parameter α. In particular, the cache-to-population ratio Ccpr = Σ_{i=1}^{N} X_i / (Mδ) and α are set in the ranges [0.002, 0.5] and [0.6, 1.8], respectively. To place the performance of the MDC-Ca strategy within the wider spectrum of in-network caching, we compare MDC-Ca with ubiquitous LCE, Cache "Less for More" (CL4M) [13], and ProbCache [14], whose caching replacement strategies all follow the traditional LRU policy. The simulation results are reported in Fig. 3: Figs. 3(a) and 3(d) present the cache hit ratio and latency, respectively, against various values of the cache size parameter Ccpr with the Zipf parameter held constant (α = 1.2). As shown in the graphs, the MDC-Ca strategy outperforms all the others. In particular, among all the caching strategies, MDC-Ca achieves the best hit ratio for all cache sizes in the GEANT topology with α = 1.2, as depicted in Fig. 3(a).
Under the same conditions, we see from Fig. 3(d) that the delivery delay of MDC-Ca likewise outperforms the LCE, CL4M, and ProbCache strategies. Nevertheless, we can

¹http://www.ee.ucl.ac.uk/~lsaino/software/icarus
Fig. 3. The performance of the MDC-Ca strategy in terms of cache hit ratio and latency for three topologies (GEANT, WIDE, TISCALI), content popularity (α) and cache size (Ccpr). Panels: (a) cache hit ratio vs Ccpr (T=GEANT, α=1.2); (b) cache hit ratio vs α (T=GEANT, Ccpr=0.01); (c) cache hit ratio vs topology (α=1.2, Ccpr=0.01); (d) latency vs Ccpr (T=GEANT, α=1.2); (e) latency vs α (T=GEANT, Ccpr=0.01); (f) latency vs topology (α=1.2, Ccpr=0.01). Curves compare LCE, MDC-Ca, CL4M and ProbCache.
also see that the difference among all the strategies declines as the cache-to-population ratio increases, because more caching space is available within the network. Fig. 3(b) indicates that MDC-Ca achieves a better hit ratio than the other three strategies for all values of α considered, with the cache size kept constant at Ccpr = 0.01. Similarly outstanding delivery-delay performance (the lower the better) can be seen in Fig. 3(e), where the parameter configuration is the same as in Fig. 3(b). As is well known, the network topology is an important factor that can vastly affect the performance of all caching strategies. To evaluate this, MDC-Ca is run on three different topologies with α = 1.2 and Ccpr = 0.01. Figs. 3(c) and 3(f) depict the cache hit ratio and the delivery delay (latency), respectively, for the three network topologies. From the evaluation results, we clearly find that the MDC-Ca strategy achieves a higher hit ratio and lower latency than the LCE, CL4M, and ProbCache caching managements. The caching policy of the MDC-Ca management strategy efficiently reduces redundancy within the network caches and maximizes the dispersion of the chunks of the same content. Thus, MDC-Ca shares the goal of ProbCache but delivers better performance. Moreover, the replacement policy of MDC-Ca further reduces redundancy and guarantees the distribution of chunks. To the best of our knowledge, applying MDC to CCN is the first attempt to find a suitable content (video) coding for CCN, and all existing caching managements could be improved by applying MDC to their content items.

B. Emulation results and analysis

The total time elapsing between the dispatch of the first chunk request and the reception of the last chunk is called the content delivery time.
From the previous section, we can see that the chunk miss probability is an effective indicator for evaluating the performance of a node's caching management policy. Therefore, the real network topology shown in Fig. 4 is constructed with physical PCs, each deployed as a CCN router running a modified version of PARC's CCNx, in which the basic content routing has been replaced by the efficient semi-centralized content routing mechanism of [5]. The test topology in Fig. 4, which partially follows the NDN testbed topology [15], is composed of 6 CCN nodes². For convenient testing, we add 3 content servers to supply content and 3 user ends to issue requests; all link bandwidths between nodes are set to 1 Gbps. In this experiment, every server feeds the network 20000 video content items, organized in 200 classes of decreasing popularity, each class with 256 items. In particular, each item is split into chunks of 4 KB according to the MDC coding. All items are distinct and uniformly distributed over the 3 servers. The content popularity is Zipf distributed, and content is requested with probability q_c = d/c^β (d > 0, β = 2) by the 3 user ends. All the emulation results are collected after the network has run for 5 hours and become stable. Fig. 5 presents the miss probability of content at the 6 nodes. The red curves show the results of MDC-Ca, and the black ones show the emulation results under the traditional caching management policy. The miss probability of content increases as the content popularity decreases; note that the larger the class id, the lower the content popularity. For every class id except class 1, the miss probability of MDC-Ca is lower than that of the traditional policy. From the overall comparison of the miss probabilities of LRU-Trd and MDC-Ca, we conclude that MDC-Ca achieves a higher content hit probability at almost every node of the CCN. In other words, the higher content hit probability at the nodes can further improve the average content delivery time.
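As a sanity check on the analysis of Section III-D, the per-hop miss probabilities of equations (3) and (5) can be evaluated numerically. The parameter values below are made up for illustration, not those of the testbed; the sketch shows the qualitative behavior the model predicts (popular classes miss less at the first hop, and a class that is well served at hop 1 misses more at hop 2 because the cache there rarely sees its requests).

```python
import math

# Illustrative evaluation of the miss-probability model:
#   p_c(1) = exp(-q_c * t * (S/m)^beta)                               (eq. 3)
#   log p_c(i) = prod_{y<i} (S_{y+1}/S_y)^beta * p_c(y) * log p_c(1)  (eq. 5)
# All parameter values here are made up for illustration.

beta, d, sigma = 2.0, 1.0, 1.0
C, m = 50, 256                        # popularity classes, items per class
S = [1000, 1000, 1000]                # cache sizes S_1..S_N along the path

q = [d / c**beta for c in range(1, C + 1)]                   # Zipf popularity
t = 1.0 / (d * sigma**beta * math.gamma(1 - 1/beta)**beta)   # from eq. (3)

def miss_prob(c_idx, hop):
    """Miss probability of class c (0-based index) at a given hop (1-based)."""
    log_p1 = -q[c_idx] * t * (S[0] / m)**beta
    if hop == 1:
        return math.exp(log_p1)
    factor = 1.0
    for y in range(1, hop):           # y = 1 .. hop-1
        factor *= (S[y] / S[y - 1])**beta * miss_prob(c_idx, y)
    return math.exp(factor * log_p1)

# The most popular class misses far less than the least popular one at hop 1,
# while its filtered request stream makes it miss more often at hop 2.
assert miss_prob(0, 1) < miss_prob(C - 1, 1)
assert miss_prob(0, 2) > miss_prob(0, 1)
```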
²Every router node and user end is equipped with an Intel(R) i3-4130 CPU, 4 GB RAM, and a 4-port Intel i350-t4 network card, while every server is equipped with an Intel(R) Xeon(R) E5620 CPU and 32 GB RAM. Furthermore, all PCs and servers run Ubuntu 12.04 x64 with Linux kernel version 3.2.0-59-generic.
Fig. 4. The emulation topology with 6 CCN router nodes (R1–R6), 3 content servers (S1–S3), and 3 user ends (U1–U3).

Fig. 5. The miss probability at CCN router nodes R1 to R6 at different class ids, comparing LRU-Trd and MDC-Ca.

Fig. 6 shows how the average content delivery time is affected by the popularity of the content: the lower the popularity, the higher the average delivery time. The average delivery time of MDC-Ca for the 3 users is lower than that of the traditional policy at every class id, which also indicates that the content downloading efficiency of our scheme in CCN is better. From the configuration of the experiment variables, we can work out that the size of each content class is 1200 MB, which is larger than the caching capacity. For class 1 content items, some chunks therefore cannot be hit at nearby nodes because of the limited cache size. Unfortunately, under the traditional LRU policy the miss rate of the content chunks becomes even higher due to the order sensitivity of the chunks, as illustrated above, and all the unfulfilled requests have to be forwarded to the servers. By contrast, the chunks of class 1 are close to uniformly distributed over the network caches under the proposed MDC-Ca scheme, so all the chunks of class 1 can be retrieved from nearby nodes. The average delivery times of the other class ids can be analysed in the same way. Thus the average delivery time of the MDC-Ca policy is lower than that of the LRU-Trd policy, which is verified by the emulation results.

Fig. 6. The average delivery time at U1, U2 and U3 of the topology shown in Fig. 4.

V. CONCLUSION AND FUTURE WORK

CCN proposals have recently emerged to define new network architectures in which content, instead of hosts, becomes the core of the communication model. To the best of our knowledge, applying MDC to CCN is the first attempt to find a suitable content (video) coding for CCN. This paper proposed a novel caching management scheme that combines caching management with MDC. A CCN equipped with MDC-Ca can deliver video content chunks to the user in no particular order, improving the utilization of the in-network caches and reducing the average content delivery time by increasing the content hit probability in the caches. Furthermore, we verified the performance of MDC-Ca via simulation on Icarus and emulation on a practical network topology. From the results, we clearly find that MDC-Ca performs distinctly better than the traditional caching management with LRU (LRU-Trd).

ACKNOWLEDGMENT

This work was supported in part by the National Basic Research Program of China (973 Program) under Grant 2012CB315904, the Shenzhen Basic Research Program under Grant JCYJ20140509093817684, the National Natural Science Foundation of China under Grant NSFC61179028, and the Natural Science Foundation of Guangdong Province under Grant NSFGDS2013020012822.
REFERENCES

[1] Y. Kim and I. Yeom, "Performance analysis of in-network caching for content-centric networking," Computer Networks, vol. 57, no. 13, pp. 2465–2482, 2013.
[2] V. Jacobson, D. K. Smetters, J. D. Thornton, M. F. Plass, N. H. Briggs, and R. L. Braynard, "Networking named content," in Proc. 5th Int. Conf. on Emerging Networking Experiments and Technologies. ACM, 2009, pp. 1–12.
[3] I. Psaras, R. G. Clegg, R. Landa, W. K. Chai, and G. Pavlou, "Modelling and evaluation of CCN-caching trees," in NETWORKING 2011. Springer, 2011, pp. 78–91.
[4] V. K. Goyal, "Multiple description coding: Compression meets the network," IEEE Signal Processing Magazine, vol. 18, no. 5, pp. 74–93, 2001.
[5] F. Chen, W. Liu, H. Li, and Z. Zhu, "Semi-centralized name routing mechanism for reconfigurable network," in Testbeds and Research Infrastructure: Development of Networks and Communities. Springer, 2014, pp. 444–452.
[6] F. B. Sazoglu, B. B. Cambazoglu, R. Ozcan, I. S. Altingovde, and Ö. Ulusoy, "A financial cost metric for result caching," in Proc. 36th Int. ACM SIGIR Conf. on Research and Development in Information Retrieval. ACM, 2013, pp. 873–876.
[7] M. Tortelli, D. Rossi, G. Boggia, and L. A. Grieco, "CCN simulators: Analysis and cross-comparison," in Proc. 1st Int. Conf. on Information-Centric Networking. ACM, 2014, pp. 197–198.
[8] L. Muscariello, G. Carofiglio, and M. Gallo, "Bandwidth and storage sharing performance in information centric networking," in Proc. ACM SIGCOMM Workshop on Information-Centric Networking. ACM, 2011, pp. 26–31.
[9] G. Carofiglio, M. Gallo, L. Muscariello, and D. Perino, "Modeling data transfer in content centric networking (extended version)," Research report, available at http://perso.rd.francetelecom.fr/muscariello, 2011.
[10] W. Fischer and K. Meier-Hellstern, "The Markov-modulated Poisson process (MMPP) cookbook," Performance Evaluation, vol. 18, no. 2, pp. 149–171, 1993.
[11] L. Saino, C. Cocora, and G. Pavlou, "A toolchain for simplifying network simulation setup," in Proc. 6th Int. ICST Conf. on Simulation Tools and Techniques. ICST, 2013, pp. 82–91.
[12] L. Saino, I. Psaras, and G. Pavlou, "Hash-routing schemes for information centric networking," in Proc. 3rd ACM SIGCOMM Workshop on Information-Centric Networking. ACM, 2013, pp. 27–32.
[13] W. K. Chai, D. He, I. Psaras, and G. Pavlou, "Cache less for more in information-centric networks," in NETWORKING 2012. Springer, 2012, pp. 27–40.
[14] I. Psaras, W. K. Chai, and G. Pavlou, "Probabilistic in-network caching for information-centric networks," in Proc. 2nd ICN Workshop on Information-Centric Networking. ACM, 2012, pp. 55–60.
[15] C. Cabral, C. E. Rothenberg, and M. F. Magalhães, "Mini-CCNx: Fast prototyping for named data networking," in Proc. 3rd ACM SIGCOMM Workshop on Information-Centric Networking. ACM, 2013, pp. 33–34.