2017 IEEE 23rd International Conference on Parallel and Distributed Systems
Hierarchical Resource Distribution Network Based on Mobile Edge Computing
Dewang Ren, Xiaolin Gui*, Huijun Dai, Jian An, Xin Liang, Wei Lu#, Meihong Chen
School of Electronics & Information Engineering, Xi'an Jiaotong University, Xi'an, 710049, China
Shaanxi Province Key Laboratory of Computer Network, Xi'an Jiaotong University, Xi'an, 710049, China
#Shanghai Huawei Technologies Co., Ltd., Shanghai, 200135, China
*Corresponding Author
{rendewang, xliang}@stu.xjtu.edu.cn, {xlgui, daihuijun, anjian}@mail.xjtu.edu.cn, [email protected], [email protected]
Abstract—Mobile edge computing (MEC) is an emerging paradigm to support the proliferation of smart phones and the recent outstanding growth of mobile data traffic. Network edge devices with computation, communication and storage capabilities can cache popular resources in close proximity to end users and distribute them directly, which is a promising way to provide enhanced service. Firstly, in order to respond to users' access requests at the edge of the network, a distributed resource distribution network (DRDN) is constructed to realize quick query, positioning and distribution of resources. It mainly consists of two kinds of network elements: an NEF (Network Exposure Function) node for global management and MEC nodes for distribution service. Secondly, on the basis of the DRDN, two service models by which an MEC distributes resources are defined and classified, and the total energy consumption and average service delay of distributing resources to end users are analyzed. Finally, to relieve the heavy access load and high latency of the root caching MEC node, a hierarchical caching scheme (HCS) is designed and a caching node selection algorithm based on fuzzy C-means clustering is proposed according to the location characteristics of end users. Simulation results show that the HCS can save considerable energy and reduce the average service delay. Additionally, as the total arrival rate of users' access requests varies, the average service delay under the HCS changes smoothly, showing good adaptability.

Keywords—mobile edge computing; caching resource; fuzzy C-means clustering; service delay; distribution energy consumption

I. INTRODUCTION

The invention of smart mobile terminals has made computational applications such as online gaming and video conferencing prevalent. Many applications require a strict delay bound, varying from seconds to milliseconds. In fact, network congestion and long latency caused by exponentially increasing data traffic are becoming a bottleneck in the traditional computing environment. To solve this problem, a new concept known as mobile edge computing has been introduced. MEC can provide enhanced service quality with increased network capacity and low latency by making full use of the computation and storage capabilities of network edge devices. As a promising solution, MEC satisfies the requirements of emerging interactive applications, because it is in close proximity to end users and avoids long-distance communication between the user device and the cloud center. Moreover, it can satisfy the QoS of a large number of users.

The emergence of MEC makes up for the demerits of cloud computing. MEC can interact with terminals and cooperate with the cloud; its benefits include delay sensitivity, location awareness, proximity to end users and mobility support [1]. MEC not only supports the terminals in offloading computation, but also caches cloud data to improve the intermediate communication between a terminal application and its corresponding cloud [2]. Mobile edge networks provide cloud computing and caching capabilities at the edge of the network. In [3], the authors presented generic issues of mobile edge networks, including the definition, architecture, advantages, and the computing, caching and communication technologies; they also surveyed caching places, content popularity, caching policies, caching of different file types, as well as mobility awareness. In order to utilize the caching capability at the network edge, application services were deployed to edge clouds or intelligent routers to alleviate long latency and network congestion [4]. To offer services with enhanced quality of experience via edge caching, dedicated cache space of mobile networks was provisioned to OTT content providers, and in [5] the fair billing problem of such a caching service was analyzed based on the Stackelberg equilibrium. Similarly, to alleviate the heavy burden on backhaul links, popular contents were cached at base stations and even user terminals in content-centric wireless networks, and design methodologies for mobility-aware caching were first proposed [6]. Furthermore, to enhance users' Quality of Experience (QoE) and reduce redundant transmissions over cellular networks, caching contents on user devices is emerging; in the wireless D2D (Device-to-Device) network, a caching scheme with user clustering and file clustering according to file preferences has been designed [7].

The structure and key concepts of mobile social device caching were introduced, and content placement, radio resource management and routing were addressed [8]. Although caching in user devices is an attractive way to alleviate network overload, it leads to high battery consumption on mobile devices and also involves the user's privacy and security issues. Caching at the network edge is considered a promising option; however, the question of how many edge devices of the mobile network should be selected to cache popular contents is not addressed in [3,4,5,6].
When popular contents are cached at the network edge, how to distribute them to end users is another challenge.
This work was supported by NSFC under Grant 61472316, by Major Basic Research Project of Shaanxi Province under Grant 2016ZDJC-05, and by Key Research and Development Project of Shaanxi Province under Grant 2017ZDXM-GY-011.
978-1-5386-2129-5/17/$31.00 ©2017 IEEE DOI 10.1109/ICPADS.2017.00019
Fortunately, the traditional content delivery network (CDN) offers a starting point. It uses distributed caching technology [9], which can effectively alleviate server load and accelerate information access through the deployment of a group of geographically distributed caching nodes. CDN [10], the P2P (Peer-to-Peer) network [11] and the hybrid content delivery network (HCDN) combining CDN and P2P [12] can all reduce the data transmission delay. Their work pattern is as follows: first, streaming media content is distributed to the proxy server in each autonomous domain under the CDN's centralized control, and then the content is pushed to the users of the local domain by P2P. Content routing is mainly responsible for redirecting user requests to the nearest proxy server through a certain routing algorithm. In view of the individual popularities of segments within a file, a new cache eviction scheme was proposed based on the differences between content popularity attributes [13]. However, due to the characteristics of the edge network, the above work pattern cannot be applied directly, and the distribution network built on edge devices should be thoroughly investigated.

In this paper, in order to build a distribution network for popular resources such as videos and music, network edge devices (MECs and an NEF) are used for caching and distribution. First, the distributed resource distribution network (DRDN) based on MECs and an NEF is constructed. Then, on the basis of the DRDN, to alleviate the heavy load on caching nodes caused by large-scale user access requests, especially on the "first" caching node, a hierarchical caching scheme (HCS) for resource distribution is designed. In the HCS, to avoid both the low resource utilization caused by selecting too many nodes and the long access latency caused by selecting too few caching nodes, a caching node selection algorithm based on fuzzy C-means clustering is proposed.

The remainder of the paper is organized as follows: Section II introduces the construction of the DRDN, the MEC's service models and the distribution workflow, and analyzes energy consumption and service delay; Section III is a detailed discussion of the HCS and the node selection algorithm; in Section IV, the effectiveness of our proposal is verified by experiments; finally, Section V summarizes this article.

II. CONSTRUCTION OF DISTRIBUTED RESOURCE DISTRIBUTION NETWORK

Popular resources are cached in MECs to respond rapidly to access requests from end users. Popular resources include hot applications from the cloud, and popular videos and music uploaded by end users. MECs and the NEF are network elements with computation and storage capabilities in mobile networks, and MECs cooperate with each other to process users' access requests. When a user accesses a resource at the edge of the network, the MEC first checks whether the requested resource is cached locally and, if so, returns it directly to the user. Otherwise, the user's access request is redirected to the nearest MEC caching the requested resource, and the resource is returned to the user along the forwarding path. The DRDN is constructed from many MECs and one NEF, and provides resources to users at the edge of the network. The DRDN has two layers, as shown in Fig. 1: the top layer contains the service providers and the bottom layer the service consumers. In the top layer, the MECs serve as service nodes, cache the popular resources and respond to users' access requests, while the NEF acts as a management node that monitors global information in real time, including the MECs' status, the access load and the users' current locations. The bottom layer is composed of smart phones, smart vehicles, cameras, etc. In addition to using the services provided by the service layer, end users can also sense and upload mainly location-related data to the MECs.

A. Formalization of node information
1) End user: The user set is U = {u1, u2, ..., un} and the total number of users is n. The i-th user is described as a tuple containing its current location and corresponding local MEC, denoted as ui = <(xi, yi), ei>. Users are linked with MECs through the access network (low latency, high bandwidth).
2) MEC: The MEC set is E = {e1, e2, ..., em}, where m is the total number of MECs. The MEC ei is described as a tuple containing its location, storage capacity, user set and neighbor MEC set, expressed as ei = <(xi, yi), vi, Ui, Ei′>. The number of users in the coverage area of ei is ni. The MECs' coverage areas do not overlap, and MECs are linked with each other by single-hop or multi-hop fiber-optic communication. The hop counts of the communication links among the MECs are represented by an m × m matrix W; W is symmetric, and the elements on its diagonal are zero.
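As an illustrative sketch (not part of the paper), the symmetric hop-count matrix W can be computed from the MEC overlay's link adjacency by breadth-first search, assuming each fiber link counts as one hop:

```python
from collections import deque

def hop_matrix(adjacency):
    """Build the symmetric hop-count matrix W for the MEC overlay.

    adjacency: dict mapping MEC index -> list of directly linked MEC indices.
    W[i][j] is the number of fiber hops between MEC i and MEC j; the
    diagonal is zero, matching the definition in the text.
    """
    m = len(adjacency)
    W = [[0] * m for _ in range(m)]
    for src in range(m):
        # Breadth-first search yields shortest hop counts from src.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            node = queue.popleft()
            for nbr in adjacency[node]:
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
        for dst, d in dist.items():
            W[src][dst] = d
    return W

# A toy 4-MEC line topology (0 - 1 - 2 - 3), purely for illustration.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
W = hop_matrix(adj)
```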
3) NEF: The NEF is the global management node. It monitors all the MECs' current status in real time, including each MEC's coverage area, current load and cached resources. At the same time, the NEF can obtain the current location information of all end users. The MECs are connected to the NEF by a single hop and query the status information of other MECs through the NEF.

Fig. 1. Distributed resource distribution network
resource to all the users. It includes three kinds of energy consumption, Ccmp, Cstg and Cfwd, which are used for processing access requests, storing the resource in the caching MECs, and forwarding the resource to the caching MECs and distributing it to all the users, respectively. Hence, TEC = Ccmp + Cstg + Cfwd.
B. Definition of service model
For an MEC ei, the process of responding to users' access requests can be described as an M/M/1 queue. Assume that the arrival rate of requests accessing a popular resource is λi and that the arrivals follow a Poisson process. If the resource is cached in ei, it can directly process users' access requests and provide the requested resource, with an average processing rate μi. Otherwise, ei forwards the access requests to a caching MEC with average rate μi′. The service times corresponding to μi and μi′ obey negative exponential distributions. In general, λi is related to the number of users within the coverage of ei, while μi and μi′ are related to the CPU and memory of ei, and μi, μi′ > λi. According to Little's formula [14], the average service time of ei equals the sum of the waiting time and the service time. For a certain popular resource requested by users, the resource is first cached in one MEC, called the root MEC. To reduce duplicate traffic, some other MECs are then selected to cache the resource. Hence, all the MECs can be divided into caching and non-caching nodes according to whether the resource is cached in them; their service models differ, and the number of caching nodes is m′. The following definitions are given:
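As a quick numerical illustration of the M/M/1 model (a sketch, not from the paper), the mean sojourn time 1/(μ − λ), i.e., waiting time plus service time, can be computed as follows:

```python
def avg_service_time(lam, mu):
    """Average time a request spends in an M/M/1 queue (waiting + service).

    By Little's formula the mean sojourn time is 1 / (mu - lam), which is
    only valid when the service rate exceeds the arrival rate.
    """
    if lam >= mu:
        raise ValueError("queue is unstable: need mu > lam")
    return 1.0 / (mu - lam)

# With mu = 100 requests/s and lam = 90 requests/s (values also used in the
# paper's experiments), the mean sojourn time is 0.1 s.
t = avg_service_time(90.0, 100.0)
```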
a) Computation energy consumption: the energy consumed by the DRDN to process and analyze the n users' access requests, that is

Ccmp = nα    (1)

where α is the unit energy required for processing one access request.

b) Storage energy consumption: the energy required to store the resource in the m′ caching MECs, as shown in

Cstg = m′βs    (2)

where β is the energy for storing unit data in a caching MEC and s is the resource's size.
Definition 1 (Service model of non-caching MEC): If the MEC ei does not cache the requested popular resource, then ei is a non-caching MEC. It redirects its users' access requests to the nearest caching MEC. The average processing and forwarding time of a request is 1/(μi′ − λi). The process by which users' requests leave ei also obeys a Poisson process with parameter λi.
c) Forwarding energy consumption: the requested resource is forwarded to each caching MEC, consuming energy Σ_{i=1}^{m′} w_{ei}·γ·s. In addition, the caching MECs distribute the resource to the n users, consuming Σ_{k=1}^{n} w_{uk}·γ·s. Hence, the forwarding energy consumption is as follows:

Cfwd = Σ_{i=1}^{m′} w_{ei}·γ·s + Σ_{k=1}^{n} w_{uk}·γ·s    (3)

where γ is the energy for forwarding unit data between two nodes, and w_{ei} and w_{uk} are the numbers of hops between the root MEC and the caching MEC ei, and between a caching MEC and user uk, respectively.

Definition 2 (Service model of caching MEC): If the MEC ei caches the requested popular resource, then ei is a caching MEC. It responds both to the requests redirected from non-caching MECs and to its own users' access requests. Assume that Θ is the collection consisting of the non-caching MECs redirecting access requests to ei together with ei itself. Then the arrival process of access requests at ei is a compound Poisson process with rate λi = Σ_{k∈Θ} λk, and the average time for responding to access requests is 1/(μi − λi).
Definition 3 (MEC's service intensity): ρi = λi/μi represents the service intensity of ei, and ρi < 1 must hold, otherwise the queue of access requests grows to infinity. The closer ρi is to 1, the heavier the access load of MEC ei, which will seriously degrade the user's access experience.

Therefore, the total energy consumption of distributing the resource to all the end users by the DRDN is as follows:

TEC = nα + (m′β + Σ_{i=1}^{m′} w_{ei}·γ + Σ_{k=1}^{n} w_{uk}·γ)·s    (4)
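A minimal sketch of Eq. (4) follows (illustrative only; the parameter names are our own):

```python
def total_energy(n, m_prime, alpha, beta, gamma, s, w_e, w_u):
    """TEC of Eq. (4): computation + storage + forwarding energy.

    n: number of users; m_prime: number of caching MECs.
    alpha: energy per access request; beta: storage energy per unit data;
    gamma: forwarding energy per unit data per hop; s: resource size.
    w_e: hop counts from the root MEC to each of the m' caching MECs.
    w_u: hop counts from each user's serving caching MEC to that user.
    """
    assert len(w_e) == m_prime
    c_cmp = n * alpha                              # Eq. (1)
    c_stg = m_prime * beta * s                     # Eq. (2)
    c_fwd = (sum(w_e) + sum(w_u)) * gamma * s      # Eq. (3)
    return c_cmp + c_stg + c_fwd

# Tiny worked example: 2 users, 1 caching MEC.
tec = total_energy(n=2, m_prime=1, alpha=1.0, beta=2.0, gamma=3.0,
                   s=4.0, w_e=[1], w_u=[1, 2])  # 2 + 8 + 48 = 58
```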
C. Energy consumption and service delay
1) Analysis of energy
Definition 4 (Total energy consumption of distributing resource to users): The total energy consumption (TEC) is the energy consumed by the DRDN to distribute the requested resource to all the users.

2) Analysis of delay
Definition 5 (Delay of a user accessing the resource): the total time that user uk spends to obtain the requested resource from the moment uk sends the access request, denoted Dk, including the transmission delay of the access request and the service delay of the DRDN. In general, as the packet of an access request is small, its transmission delay is low and can be ignored. The DRDN's service delay equals the processing delay of the access request, Dp, plus the distributing delay of the requested resource, Dr. Hence, Dk = Dp + Dr.

a) Processing delay of access request: for the local MEC ek of user uk, the processing delay depends on ek's service model. If ek is a caching MEC, the processing delay is the time that ek takes to process the access request. If not, it includes the delay of forwarding the access request to the nearest caching MEC ei plus the average delay of processing the request at ei. Therefore, the average processing delay is as shown in (5):

Dp = 1/(μk − Σ_{j∈Θ} λj),                       if ek is a caching node
Dp = 1/(μk′ − λk) + 1/(μi − Σ_{j∈Θ} λj),        if ek is not a caching node    (5)

D. Resource distribution workflow
When a user accesses a popular resource cached in the DRDN, the local MEC receives and processes the access request and returns the requested resource to the user. If the requested resource is cached in the local MEC, it can be provided directly; otherwise, the request is redirected to the caching MEC closest to the local MEC. The overall process of responding to a user's request is as follows:
- The user's access request is sent to the local MEC;
- The requested resource is provided to the user by the local MEC if it is cached there; otherwise, skip to the next step;
- The local MEC queries the NEF for the nearest caching MEC and obtains its node number;
- The re-encapsulated request is sent to that caching MEC by the local MEC, and the requested resource is distributed back to the local MEC;
- The local MEC provides the requested resource to the user.
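The resource distribution workflow of Section II-D can be sketched as follows (an illustrative toy model; the classes and the NEF lookup helper are our own stand-ins, not the paper's implementation):

```python
class MEC:
    """Toy MEC holding a set of cached resource ids."""
    def __init__(self, name, cache=None):
        self.name = name
        self.cache = set(cache or [])

    def fetch(self, rid):
        # Return (serving node, resource id) so the origin is visible.
        return (self.name, rid)

class NEF:
    """Toy global manager: knows which MECs cache which resource."""
    def __init__(self, mecs):
        self.mecs = mecs

    def nearest_caching_mec(self, rid):
        # Hop distances are omitted here; the first caching MEC found
        # stands in for the "nearest" one.
        return next(m for m in self.mecs if rid in m.cache)

def handle_request(rid, local_mec, nef):
    """DRDN workflow: serve locally if cached, otherwise redirect to the
    nearest caching MEC located via the NEF."""
    if rid in local_mec.cache:
        return local_mec.fetch(rid)
    caching = nef.nearest_caching_mec(rid)
    # The resource flows back to the user along the forwarding path.
    return caching.fetch(rid)

root = MEC("root", {"video1"})
edge = MEC("edge")
nef = NEF([root, edge])
served = handle_request("video1", edge, nef)  # redirected to the root MEC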
III. DESIGN OF HIERARCHICAL CACHING SCHEME For the first MEC (root MEC) caching the popular resource, its total request arrival rate will increase, because all the users’ access requests will be redirected to it. The average service delay is the increasing function f (O ) = 1 ( P O ) of O . When O approximates P , f (O ) grows faster and tends infinity. Simultaneously, root MEC’s service intensity U will exceed its threshold T and be infinitely close to 1, indicating its access load is heavy and will affect the QoS. In order to alleviate the root MEC’s access load and to reduce duplicate traffic, an appropriate number of MECs should be selected as caching nodes. Therefore, a hierarchical caching scheme (HCS) is proposed in this paper. The basic idea is that root MEC acts as the level-one caching MEC, like MEC11 in Fig.1. Then the clustering algorithm is used to divide the users into different clusters. In each cluster, the nearest MEC to the original clustering center is selected as the level-two caching MECs, as shown MEC 22 , MEC 23 in Fig.1. If this level-two MEC’s service intensity still exceeds the threshold T , the second-nearest MEC to original cluster center is selected as the level-three caching node, as MEC34 , MEC35 in Fig.1. After that, popular resources are distributed to those caching MECs, and simultaneously both popular resources and the number of caching MECs are backed up to NEF. Users in the coverage of caching MECs can directly obtain resources, while those without the coverage of caching MECs, obtain resource from the nearest MEC through the local MEC, which can reduce the duplicate traffic of the resource distribution.
In (5), Θ denotes the collection of non-caching MECs whose access requests are redirected to ei, together with ei itself, where 1 ≤ i, j ≤ m and 1 ≤ k ≤ n.

b) Distributing delay of requested resource: the distributing delay includes the transmission time and the propagation time of the requested resource, that is, s/ζ + κL/C, where s, ζ, L, κ and C are the resource's size, the transmission rate of the link, the fiber length, the refractive index of the fiber (about 1.5 for 1310 nm fiber) and the speed of light in vacuum (3×10^8 m/s), respectively. Because the DRDN mainly serves users in a local area, the communication distance is relatively short and the term κL/C can be ignored. Hence, the distributing delay is as follows:

Dr ≈ s/ζ    (6)
Definition 6 (Average service delay of DRDN): For each ei ∈ E, Ui is the user set of ei, and the total delay of distributing the resource to every user in Ui by ei is Di = Σ_{k=1}^{ni} Dk. The average service delay (ASD) of the DRDN is the mean of all MECs' service delays:

ASD = (1/m) Σ_{i=1}^{m} Di    (7)

A. Selection of caching MECs
1) Analysis of distribution characteristics of users
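Eqs. (5)-(7) can be sketched directly (an illustrative sketch; the function and argument names are our own):

```python
def processing_delay(is_caching, mu_cache, lam_theta, mu_fwd=None, lam_own=None):
    """Eq. (5): for a caching local MEC, 1/(mu_k - sum_{j in Theta} lam_j);
    otherwise the forwarding delay 1/(mu_k' - lam_k) plus the caching MEC's
    processing delay 1/(mu_i - sum_{j in Theta} lam_j)."""
    if is_caching:
        return 1.0 / (mu_cache - lam_theta)
    return 1.0 / (mu_fwd - lam_own) + 1.0 / (mu_cache - lam_theta)

def distributing_delay(s, zeta):
    """Eq. (6): Dr ~ s / zeta once propagation time is neglected."""
    return s / zeta

def average_service_delay(per_mec_user_delays):
    """Eq. (7): ASD is the mean over MECs of each MEC's total user delay."""
    totals = [sum(delays) for delays in per_mec_user_delays]
    return sum(totals) / len(totals)

dp = processing_delay(True, mu_cache=100.0, lam_theta=90.0)      # 0.1
dr = distributing_delay(150.0, 150.0)                            # 1.0
asd = average_service_delay([[1.0, 2.0], [3.0]])                 # 3.0
```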
According to the principle of access locality, for the popular resource in the root MEC, a large number of visits appear in a short time. In order to avoid access load imbalance, it is necessary to cache the resource to other MECs in advance.
Before selecting caching MECs, the users' distribution characteristics should be analyzed. The root MEC queries the NEF for the current location information of all users. It then analyzes the users' distribution characteristics (for example, the number of users within each MEC's coverage area and the users' sparseness) and calculates the total arrival rate of requests accessing the resource, that is, λ′ = Σ_{i=1}^{m} λi.

2) Estimation of the number of level-two caching MECs
Before dividing the users into clusters, it is important to estimate the number of level-two caching MECs, and the number should be appropriate: if too few MECs are selected, it is hard to relieve the access load; if too many are selected, the utilization of the MECs becomes low. The number of level-two MECs is related to all the MECs' request arrival rates and average service rates. Assume that the total number of level-one and level-two caching MECs is c + 1. Then the average arrival rate assigned to each caching MEC is λ′/(c + 1), the average service rate over all MECs is (1/m) Σ_{i=1}^{m} μi, and the resulting service intensity must be less than the threshold θ, i.e.,

[λ′/(c + 1)] / [(1/m) Σ_{i=1}^{m} μi] < θ    (8)

By formula (8), the value of c must satisfy:

c > ⌈ λ′ / (θ · (1/m) Σ_{i=1}^{m} μi) − 1 ⌉    (9)

3) Selection of level-two MECs
FCM is a fast and computationally simple algorithm with an intuitive geometric interpretation [15]. Therefore, FCM is adopted to divide all the users into clusters based on their distribution characteristics, and then a corresponding MEC in each cluster is selected as a caching MEC. FCM divides the n users uk (k = 1, 2, ..., n) into c clusters Gi (i = 1, 2, ..., c) and obtains the center of each cluster so as to minimize the objective function

J_f(u, v) = Σ_{k=1}^{n} Σ_{i=1}^{c} (γik)^f · d²(uk, vi)

where n, c, f > 1 and γik are the total number of users, the number of clusters, the fuzzy factor and the membership of sample uk in the i-th cluster, respectively; d²(uk, vi) is the squared Euclidean norm, that is, d²(uk, vi) = ||uk − vi||²; and Σ_{i=1}^{c} γik = 1 with γik ∈ (0, 1).

The basic steps are as follows:
a) Initialization: choose the initial cluster centers v(0) = {v1(0), v2(0), ..., vc(0)} and set l = 0, where l is the iteration index, T is the maximum number of iterations and ε is the convergence threshold.
b) Update γik(l+1) via the following formula:

γik(l+1) = 1 / Σ_{j=1}^{c} (dik/djk)^{2/(f−1)},   (i = 1, 2, ..., c; k = 1, 2, ..., n)    (10)

c) Update vi(l+1) via the following formula:

vi(l+1) = Σ_{k=1}^{n} (γik)^f · uk / Σ_{k=1}^{n} (γik)^f,   (i = 1, 2, ..., c)    (11)

d) If max_i ||vi(l+1) − vi(l)|| ≤ ε or l > T, stop; otherwise, set l = l + 1 and go to step b).

Finally, the cluster centers v(l+1) = {v1(l+1), v2(l+1), ..., vc(l+1)} are obtained. For each cluster center, the MEC nearest to it is selected as a level-two caching MEC, so the level-two caching MEC set E2 = {e1^2, e2^2, ..., ec^2} is obtained.

4) Selection of level-three MECs
With the deployment of level-two caching MECs, users can request the resource from the nearest caching MEC. However, as all the access requests are assigned to the caching MECs, it is difficult to ensure their load balance, especially for caching MECs in user-intensive areas. Hence, level-three caching MECs should be added: in each cluster, if the service intensity of the level-two caching MEC exceeds the threshold θ, the second-nearest MEC to the original cluster center is selected as a level-three MEC. The total number of caching MECs is then m′.

The FCM-based caching MEC selection algorithm is summarized as follows:
Input: U, E
Output: E, m′
1. The root MEC calculates ρ and determines whether it exceeds θ; if so, continue to the next step;
2. The root MEC obtains all the users' location information from the NEF, analyzes the distribution characteristics and calculates λ′ and c;
3. FCM is applied to divide the users into c clusters, yielding the cluster centers v(l+1) = {v1(l+1), v2(l+1), ..., vc(l+1)};
4. In each cluster, the MEC nearest to the cluster center is selected as a level-two caching node; the initial caching MEC set is E = {e1^2, e2^2, ..., ec^2};
5. If the service intensity of a caching MEC in a cluster exceeds θ, the second-nearest MEC to its original cluster center is selected as a level-three caching node and added to E;
6. Output the caching MEC set E and the total number m′.

In this way, the caching MEC set E consists of both level-two and level-three MECs. The root MEC then distributes the resource to each MEC in the set. Based on the proximity principle, the requested resource is distributed to users from the nearest caching MEC.
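The FCM steps a)-d) and the nearest-MEC selection above can be sketched in a few dozen lines (an illustrative sketch; the deterministic farthest-point initialization is our own assumption, since the paper does not specify how the initial centers are chosen):

```python
import math

def fcm(points, c, f=2.0, eps=5e-5, max_iter=200):
    """Fuzzy C-means over 2-D user locations, following steps a)-d):
    alternate the membership update (Eq. 10) and the center update (Eq. 11)
    until the centers move less than eps or max_iter is reached."""
    # Farthest-point initialization (assumption, not from the paper).
    centers = [points[0]]
    while len(centers) < c:
        centers.append(max(points,
                           key=lambda p: min(math.dist(p, v) for v in centers)))
    for _ in range(max_iter):
        # Eq. (10): gamma_ik = 1 / sum_j (d_ik / d_jk)^(2/(f-1)).
        gamma = [[0.0] * len(points) for _ in range(c)]
        for k, p in enumerate(points):
            d = [max(math.dist(p, v), 1e-12) for v in centers]  # avoid /0
            for i in range(c):
                gamma[i][k] = 1.0 / sum((d[i] / d[j]) ** (2.0 / (f - 1.0))
                                        for j in range(c))
        # Eq. (11): v_i = sum_k gamma_ik^f * u_k / sum_k gamma_ik^f.
        new_centers = []
        for i in range(c):
            w = [g ** f for g in gamma[i]]
            sw = sum(w)
            new_centers.append((
                sum(wk * p[0] for wk, p in zip(w, points)) / sw,
                sum(wk * p[1] for wk, p in zip(w, points)) / sw,
            ))
        shift = max(math.dist(a, b) for a, b in zip(centers, new_centers))
        centers = new_centers
        if shift <= eps:  # stopping rule of step d)
            break
    return centers

def select_level_two_mecs(mec_locations, centers):
    """For each cluster center, pick the nearest MEC as a level-two node."""
    return [min(range(len(mec_locations)),
                key=lambda i: math.dist(mec_locations[i], centers[j]))
            for j in range(len(centers))]

# Two well-separated user groups and three candidate MECs.
users = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0),
         (10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]
centers = fcm(users, c=2)
mecs = [(0.0, 0.0), (10.0, 10.0), (5.0, 5.0)]
chosen = select_level_two_mecs(mecs, centers)
```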
and the other is the No Caching Scheme (NCS). In the RCS, the same number of level-two caching MECs as in the HCS is selected, but the nodes are chosen randomly; if the service intensity of a level-two caching MEC exceeds the threshold, one of its neighbor MECs is selected as a level-three caching MEC. In the NCS, no MEC other than the root is selected as a caching node.
B. Release of caching MECs
When the access traffic of the current resource drops, the level-two and level-three caching MECs can delete it to release storage space for caching new resources. Subsequent access requests for the resource are then redirected to the root MEC. Further, if the copy stored in the root MEC is also deleted, users' access requests are redirected to the NEF and the requested resource is obtained from the NEF.
IV. EXPERIMENTAL RESULTS AND ANALYSIS
The purpose of the simulation experiments is to verify the effectiveness of the proposed method from two aspects: (1) analysis of the DRDN's TEC, investigating the effects of the number of users (n) and the number of caching MECs (m′) on TEC; (2) analysis of the DRDN's ASD, addressing the relationships between ASD and the total arrival rate (λ′) and the average service rate (μ).

A. Experiments setup
The simulation is implemented in Matlab on a Windows 7 platform with a 3.60 GHz Intel Core(TM) i7-4790 processor and 12 GB of memory. In a 10×10 coordinate plane, one MEC is deployed in each grid cell and 1000 users are generated randomly; the network layout of MECs and users is shown in Fig. 2. The settings of the unit energy consumption for forwarding 1M of data, storing 1M of data and processing one access request follow [16]. It is assumed that an MEC's user request rate is proportional to its number of users, with a scale factor a in [0, 1]. The average service rate and average forwarding rate are 100/s. The other parameter settings are shown in Table I. One comparison scheme is the Rand Caching Scheme (RCS),

TABLE I. EXPERIMENTAL PARAMETER SETTINGS
m: 100            γ: 350 J/M
n: 1000           β: 600 J/M
f: 2              α: 100 J
l: 200            s: 150 M
ε: 0.00005        ζ: 150 M/s
vi: 2 G           a: [0, 1]
θ: 0.8            μk, μk′: 100/s

Fig. 2. Network layout of MECs and users

B. Analysis of TEC
In this experiment, the TEC of the three caching schemes is observed under different numbers of users, with λ′ = 90 and μ = 100; in the HCS and RCS, m′ is 3. As shown in Fig. 3, the TEC of the HCS is the lowest of the three schemes, while the TEC of the NCS is more than two times higher than those of the HCS and RCS. In the NCS, the root MEC must distribute the resource to every user, so the duplicate traffic, and consequently the forwarding energy consumption, is high. In the HCS and RCS, thanks to the deployment of caching MECs, users can request the resource from caching MECs closer to them, so much forwarding energy is saved. As can also be seen from Fig. 3, the HCS's TEC is lower than the RCS's, mainly because the HCS takes the users' distribution characteristics into account, such as the users' current locations and the number of users within each MEC's coverage.

Fig. 3. The effect of n on TEC (λ′ = 90)
The effect of the number of caching MECs on TEC is shown in Fig. 4, with m′ varying from 1 to 10. When m′ is 1, only the level-one MEC (the root MEC) serves as the caching node and the TECs of the three schemes are equal. As m′ increases, the TECs of the HCS and RCS decrease gradually; when m′ is 10, compared with the NCS's TEC, less than 50% of the energy can be saved in the HCS and RCS. It can also be seen that the RCS's TEC fluctuates, because its caching MECs are selected randomly without taking the users' distribution into consideration.

Fig. 4. The effect of m′ on TEC (λ′ = 90)

Fig. 5. The effect of λ′ on ASD (n = 1000, μ = 100)

C. Analysis of ASD
The change of ASD under the three caching schemes is analyzed with n = 1000 and μ = 100, while λ′ varies from 10 to 90. The ASD of the NCS rises rapidly. When λ′