Modeling Energy-Delay Tradeoffs in Single Base Station with Cache

Hindawi Publishing Corporation, International Journal of Distributed Sensor Networks, Volume 2015, Article ID 401465, 5 pages
http://dx.doi.org/10.1155/2015/401465

Research Article

Modeling Energy-Delay Tradeoffs in Single Base Station with Cache

Chao Fang,1 Haipeng Yao,1 Chenglin Zhao,2 and Yunjie Liu1

1 State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Mail Box 199, Haidian District, Beijing 100876, China
2 Key Laboratory of Universal Wireless Communication, Beijing University of Posts and Telecommunications, Ministry of Education, No. 10 Xitucheng Road, Mail Box 199, Haidian District, Beijing 100876, China

Correspondence should be addressed to Haipeng Yao; [email protected]

Received 28 January 2015; Accepted 15 April 2015

Academic Editor: Sherwood W. Samn

Copyright © 2015 Chao Fang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

With the explosive increase of mobile data traffic, energy efficiency in cellular networks is a growing concern. Recently, the advantages of in-network caching in the Internet have been widely investigated, for example, to speed up content distribution and improve network resource utilization. In this paper, we analyze the energy-delay tradeoff problem in the context of a single base station (BS) equipped with a cache to buffer the contents passing through it. Although the cache consumes additional power, the workload of the BS and the network delay are reduced, which creates a tradeoff between network power consumption and delay. Simulation results reveal that, by introducing a cache in a BS, the network power and delay can be noticeably reduced under different network conditions compared to the scenario without a cache. In addition, we find that a larger cache size does not always mean a lower network cost because of the additional cache power consumption.

1. Introduction

With the explosive growth of mobile data traffic, mobile hosts will overtake fixed ones, not only in terms of numbers but also in terms of traffic load [1], which makes energy efficiency a growing concern in current cellular networks due to the increasing network cost (e.g., BS power consumption and network delay) [2, 3]. However, given the centralized architecture of current cellular networks, the wireless link capacity as well as the bandwidth of the radio access network and the backhaul network cannot practically cope with the explosive growth in mobile traffic [4].
Recently, the advantages of in-network caching in cellular networks have been widely investigated to speed up content distribution and improve network resource utilization, and several works have studied this new field. Sarkissian [5] analyzes embedding caches in wireless networks to achieve a significant reduction in response time and thus provide an exceptional wireless user experience. Woo et al. [6] argue that caching in cellular networks is an attractive approach to addressing backhaul congestion in core

networks and compare the benefits and tradeoffs of different promising caching solutions. Erman et al. [7] explore the potential of forward caching in 3G cellular networks and develop a caching cost model to capture the tradeoffs of deploying forward caching at different levels in the 3G network hierarchy. Wang et al. [4] first survey techniques related to caching in current mobile networks, discuss potential techniques for caching in 5G mobile networks, and then propose a novel edge caching scheme based on the concept of content-centric networking or information-centric networking. Le et al. [8] propose a byte caching scheme that operates at the network layer: it looks for common sequences of data in the bytes of packet flows and, instead of splitting flows into segments, continuously scans the byte strings to cache frequently used bytes and eliminate redundant ones. Golrezaei et al. [9] envision femtocell-like base stations with weak backhaul links but large storage capacity, which form a wireless distributed caching network that assists the macro base station by handling requests for popular files that have been cached. Ahlehagh and Dey [10] demonstrate


the feasibility and effectiveness of using microcaches at the base stations of the Radio Access Network (RAN), coupled with new caching policies based on the video preferences of users in the cell and a new scheduling technique that allocates RAN backhaul bandwidth in coordination with requesting video clients.
Although these excellent works address caching in cellular networks, little effort has been dedicated to studying the problem from the viewpoint of the tradeoff between energy and delay. In this paper, we analyze the energy-delay tradeoff problem in the context of a single base station with a cache to buffer network contents. Although additional power is consumed by the cache, the workload of the BS and the network delay will be reduced, which creates a tradeoff between network power and delay.

2. System Model

We analyze the energy-delay tradeoff problem in a single-BS scenario, where the BS has a limited cache capacity C. Moreover, we assume that the packet error rate is guaranteed by physical layer techniques, so retransmission is not considered.

2.1. Content Popularity Model. We assume that a content provider offers F different contents, ranked from one (the most popular) to F (the least popular) by popularity. Letting the total number of requests in a given time duration t be R, the number of requests for the content of popularity rank k, R_k, follows a Zipf-like distribution [11, 12]:

R_k = R k^(−α) / Σ_{k=1}^{F} k^(−α) = (R/a_0) k^(−α),  k = 1, 2, ..., F,  (1)
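The Zipf-like model in (1) is easy to sketch numerically. The following is a minimal illustration, not the paper's simulation code; the values of F, α, and R are placeholders.

```python
# Zipf-like request distribution of eq. (1); all values are illustrative.
F, alpha, R = 10000, 0.8, 1000.0

# Normalization constant a_0 = sum_{k=1}^{F} k^(-alpha)
a0 = sum(k ** -alpha for k in range(1, F + 1))

# Requests per popularity rank: R_k = (R / a_0) * k^(-alpha)
R_k = [R * k ** -alpha / a0 for k in range(1, F + 1)]

assert abs(sum(R_k) - R) < 1e-6   # the per-rank requests sum back to R
assert R_k[0] == max(R_k)         # rank 1 receives the most requests
```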

where a_0 is a constant that normalizes the request rate. A larger value of α indicates that requests are more concentrated on the popular contents.

2.2. Power Model. We consider a single BS with a cache, where users arrive according to a Poisson process with arrival rate λ. Each user requires a random amount of downlink service with average length l bits, for example, a non-real-time file download with average file size l, and leaves after being served. Assuming the arrival rate can be well estimated [13], with time-varying traffic intensity in practice we simply operate according to the current arrival rate. In the following, we use an energy-proportional model [14] to describe the power consumption of a single BS with a cache.

2.2.1. Cache Power. We assume that there is a fixed cache power cost P_s for each user request search. The total power consumed by the cache is given by [11, 12]

P_cache = P_s + N_c l w_ca,  N_c ∈ [0, C],  (2)

where N_c represents the number of contents cached in the BS cache and w_ca is the power efficiency of caching. The value of w_ca strongly depends on the caching hardware technology. In this paper, we consider high-speed solid state disk (SSD), dynamic random access memory (DRAM), static random access memory (SRAM), and ternary content-addressable memory (TCAM). The power efficiency of these technologies is listed in Table 1.

Table 1: Power efficiency w_ca of different caching hardware technologies.

Caching hardware technology | Power efficiency (Watt/bit)
High-speed solid state disk (SSD) | 6.25 × 10^−12
Dynamic random access memory (DRAM) | 2.5 × 10^−9
Static random access memory (SRAM) | 1.5 × 10^−8
Ternary content-addressable memory (TCAM) | 1.88 × 10^−6

2.2.2. Base Station Power. The M/G/1 processor sharing (PS) model is used here [15, 16]. The traditional BS power can be modeled as

P_BS = P_0 + Δ_P P_t,  (3)

where P_0 is the static power consumption in the active mode, Δ_P is the slope of the load-dependent power consumption, and the transmit power P_t adapts to the system traffic load. Assume that the BS service capacity, or service rate, is x bits per second, which adapts to the system traffic load and is equally shared by all users being served. The user departure rate is then μ = x/l, and the traffic load (or busy probability) at the BS is ρ = λ/μ = λl/x. The relationship between the service rate x and the transmit power P_t is

P_t = (ρ/γ)(2^(x/B) − 1),  P_t ∈ [0, P_t^max],  (4)

where γ = ηg/(N_0 B), g represents the channel gain, B is the bandwidth, N_0 denotes the noise density, and η is a constant related to the bit error rate (BER) requirement when adaptive modulation and coding are used [17]. Therefore, with service rate x, the traditional BS power expression (3) can be rewritten as

P_BS = P_0 + (Δ_P ρ/γ)(2^(x/B) − 1) = P_0 + (λl/x)(Δ_P/γ)(2^(x/B) − 1).  (5)

However, with a cache, the user requests to the BS will be partly satisfied by it. We assume that the contents with popularity ranks in the top N_c are buffered in the cache. Then, the load corresponding to the requests unsatisfied by the cache is

ρ′ = (ρ/a_0) Σ_{k=N_c+1}^{F} k^(−α) = (λl/(a_0 x)) Σ_{k=N_c+1}^{F} k^(−α).  (6)
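Equations (2), (5), and (6) can be combined into a short numeric sketch. The parameter values below roughly follow the simulation settings of Section 3 but are assumptions for illustration, not the authors' code.

```python
# Cache power (2), BS power (5), and reduced load (6); values illustrative.
F, alpha = 10000, 0.8            # catalog size and Zipf skewness
lam = 4.0                        # Poisson arrival rate (users/s)
l = 2e6 * 8                      # mean file size: 2 MB in bits
x, B = 70e6, 10e6                # service rate (bit/s), bandwidth (Hz)
gamma = 10.0                     # gamma = eta*g/(N0*B), assumed value
P0, delta_P = 100.0, 7.0         # static power (W) and power slope
Ps = 1.0                         # per-search cache power (W), assumed
w_ca = 2.5e-9                    # DRAM power efficiency (W/bit), Table 1
Nc = 100                         # number of cached contents

a0 = sum(k ** -alpha for k in range(1, F + 1))
rho = lam * l / x                                     # load without cache
tail = sum(k ** -alpha for k in range(Nc + 1, F + 1))
rho_p = rho * tail / a0                               # reduced load, eq. (6)

P_cache = Ps + Nc * l * w_ca                          # eq. (2)
P_BS = P0 + delta_P * rho * (2 ** (x / B) - 1) / gamma      # eq. (5)
P_BS_c = P0 + delta_P * rho_p * (2 ** (x / B) - 1) / gamma  # cached BS

assert rho < 1 and rho_p < rho   # caching strictly lowers the BS load
```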

Figure 1: Comparisons between energy-delay tradeoffs of a BS with and without a cache versus content popularity and cache size (β is 10). (a) Power consumption; (b) network delay; (c) network cost. Curves compare cache sizes of 0.1%, 1%, and 5% with the no-cache case.

Therefore, based on (6), the total BS power expression with a cache can be written as

P′_BS = P_0 + (Δ_P ρ′/γ)(2^(x/B) − 1) = P_0 + (λl/(a_0 x))(Δ_P/γ)(2^(x/B) − 1) Σ_{k=N_c+1}^{F} k^(−α).  (7)

2.3. Delay Model. According to the properties of the M/G/1 PS queue [18] and Little's Law [19], the average delay of a BS without a cache is

D_BS = ρ/(λ(1 − ρ)) = l/(x − λl).  (8)

Similarly, the average delay of a BS with a cache is

D′_BS = ρ′/(λ(1 − ρ′)) = l Σ_{k=N_c+1}^{F} k^(−α) / (a_0 x − λl Σ_{k=N_c+1}^{F} k^(−α)).  (9)
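The delay expressions (8) and (9) are cheap to evaluate. The sketch below reuses the earlier illustrative parameters (assumptions, not the authors' code) and confirms that the cache shortens the average response time.

```python
# Average M/G/1-PS delays of eqs. (8) and (9); values are illustrative.
F, alpha, Nc = 10000, 0.8, 100
lam, l, x = 4.0, 2e6 * 8, 70e6   # arrivals/s, file size (bits), rate (bit/s)

a0 = sum(k ** -alpha for k in range(1, F + 1))
tail = sum(k ** -alpha for k in range(Nc + 1, F + 1))

D = l / (x - lam * l)                          # eq. (8), no cache
D_c = l * tail / (a0 * x - lam * l * tail)     # eq. (9), with cache

assert 0 < D_c < D   # caching reduces the average delay
```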

2.4. System Cost. The objective is to minimize the system cost, which is a weighted combination of the average total power P^tot_{T_sleep, x} and the average delay D_{T_sleep, x}. Here, T_sleep represents the parameter of the BS sleeping control: for the N-based sleeping control, T_sleep = N, and T_sleep = 0 denotes that there is no sleeping control. In this paper, we assume that the system runs without BS sleeping control.


Therefore, with T_sleep = 0, we seek the service rate x that minimizes

z(0, x) = P^tot_{0, x} + β D_{0, x}.  (10)


The positive weighting factor β indicates the relative importance of the average delay over the average power and can be thought of as a Lagrange multiplier on an average delay constraint [20, 21]. The “delay” we consider in this paper is the average response time from the moment a user's service request arrives at the BS until the request is finished. From Little's Law, we know that the mean delay is directly related to the average queue length.
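The cost in (10) without sleeping control is simply power plus β times delay. A minimal sketch, reusing the earlier illustrative parameters (assumed values, not the authors' implementation):

```python
# System cost z(0, x) = P_tot + beta * D of eq. (10), no sleeping control.
beta = 10.0                       # delay weight
lam, l, x, B = 4.0, 2e6 * 8, 70e6, 10e6
gamma, P0, delta_P = 10.0, 100.0, 7.0

rho = lam * l / x
P_tot = P0 + delta_P * rho * (2 ** (x / B) - 1) / gamma  # BS power, eq. (5)
D = l / (x - lam * l)                                    # delay, eq. (8)
z = P_tot + beta * D                                     # cost, eq. (10)

assert z > P_tot > P0   # the delay term adds to the power term
```

A larger β makes the same delay contribute more to the cost, shifting the optimum toward faster (and more power-hungry) service rates.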


3. Simulation Results and Discussions

In this section, we use computer simulations to evaluate the performance of the proposed energy-delay tradeoff model. We first describe the simulation settings and then compare the model with the traditional network cost model [22], which does not have a cache.

3.1. Simulation Settings. The simulation is carried out in a single urban microcell scenario. According to the ITU test environments [23], the system bandwidth is B = 10 MHz, the maximum transmit power is P_t^max = 15 W, and the channel gain is γ = 10 dB. We take the micro-BS energy consumption parameters P_0 = 100 W and Δ_P = 7 [24]. Users arrive according to a Poisson process with arrival rate λ = 4, and each user requests exactly one file, whose size is exponentially distributed with mean l = 2 MB, from the BS and leaves the system after being served. The system operates in a time-slotted fashion, and the BS schedules users in a round-robin way, serving one user in each time slot. Moreover, we assume that the average BS service rate x is 70 Mbps. In the simulation, we assume that there are 10000 different contents in the network, with the skewness factor α ranging between 0.6 [25] and 1.5 [11]. We express the cache size of a BS as a proportion: the cache size is defined relative to the total number of different contents in the network and varies from 0.1% to 5% [26]. In addition, we assume that DRAM caching hardware is used and that the average search power of a cache is 50% of its fully utilized power [14].

3.2. Performance Evaluation Results. Figure 1 shows the energy-delay tradeoffs of the two models versus content popularity with different cache sizes when the weight factor β is 10. From Figure 1, we can observe that the content popularity and cache size affect the energy-delay tradeoffs of the proposed model.
When the Zipf skewness parameter 𝛼 increases, the content diversity decreases and the number


Figure 2: Comparison of the total cost of a BS with and without a cache versus the weight factor.

of popular contents increases, which makes the network power consumption, delay, and cost decrease gradually, while these quantities remain unchanged in the traditional model because it does not consider content popularity or have a cache. As for cache capacity, a larger buffer means that more popular contents can be cached, which reduces the network delay, as shown in Figure 1(b). However, a larger cache size does not always mean a better energy-delay tradeoff, as shown in Figures 1(a) and 1(c). The reason is that the larger the cache size, the more cache power is consumed, which affects the energy-delay tradeoff.
Figure 2 shows the energy-delay tradeoffs of the two models versus the weight factor β with different cache sizes. From Figure 2, we can observe that the weight factor and cache size affect the energy-delay tradeoffs of both models. An increase in the weight factor indicates that the average delay is more important than the network power; that is, the network cost is more sensitive to the average delay. Therefore, with the power consumption of the BS and cache held constant, the network cost increases with the weight factor. However, a larger cache size does not always mean a better energy-delay tradeoff, as analyzed above.
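The non-monotonic effect of cache size described above can be reproduced with the cost model of Section 2. In the sketch below (illustrative parameters, not the authors' simulator), a modest cache lowers the total cost, while a very large cache raises it again through its own power draw.

```python
# Total cost (power + beta * delay) versus number of cached contents N_c.
F, alpha, beta = 10000, 1.0, 10.0
lam, l, x, B = 4.0, 2e6 * 8, 70e6, 10e6
gamma, P0, delta_P = 10.0, 100.0, 7.0
Ps, w_ca = 1.0, 2.5e-9            # search power (assumed), DRAM (W/bit)

a0 = sum(k ** -alpha for k in range(1, F + 1))

def cost(Nc):
    """System cost when the top-Nc contents are cached."""
    tail = sum(k ** -alpha for k in range(Nc + 1, F + 1))
    rho_p = lam * l * tail / (a0 * x)                       # eq. (6)
    power = (P0 + delta_P * rho_p * (2 ** (x / B) - 1) / gamma
             + Ps + Nc * l * w_ca)                          # eqs. (7) + (2)
    delay = l * tail / (a0 * x - lam * l * tail)            # eq. (9)
    return power + beta * delay

costs = {Nc: cost(Nc) for Nc in (0, 100, 5000)}

assert costs[100] < costs[0]      # a modest cache lowers the cost...
assert costs[5000] > costs[100]   # ...but a huge cache raises it again
```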

4. Conclusions and Future Work

In this paper, we have studied the energy-delay tradeoff problem in the context of a single base station with a cache to buffer network contents. Although additional power is consumed by the cache, the workload of the BS and the network delay are reduced, which creates a tradeoff between network power and delay. Simulation results reveal that, by introducing the cache, the network power and delay can be noticeably reduced under different network conditions compared to the situation without a cache.

With recent advances in wireless mobile communication technologies and devices, more and more end users access the Internet via mobile devices, such as smartphones and tablets. Therefore, we will study user mobility in the proposed model in future work. Moreover, it would be interesting to investigate how to choose a cache size that minimizes the network cost. Finally, work is in progress to adopt sleeping schemes to reduce the network cost in the proposed model.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The work described in this paper was fully supported under the General Program of the National Natural Science Fund (Project no. 61471056) and the Youth Research and Innovation Plan of Beijing University of Posts and Telecommunications (Project no. 2014RC0103).

References

[1] G. Xylomenos, X. Vasilakos, C. Tsilopoulos, V. A. Siris, and G. C. Polyzos, “Caching and mobility support in a publish-subscribe internet architecture,” IEEE Communications Magazine, vol. 50, no. 7, pp. 52–58, 2012.
[2] J. Araujo, F. Giroire, Y. Liu, R. Modrzejewski, and J. Moulierac, “Energy efficient content distribution,” in Proceedings of the IEEE International Conference on Communications (ICC '13), pp. 4233–4238, IEEE, Budapest, Hungary, June 2013.
[3] V. Mathew, R. K. Sitaraman, and P. Shenoy, “Energy-aware load balancing in content delivery networks,” in Proceedings of the IEEE International Conference on Computer Communications (INFOCOM '12), pp. 954–962, Orlando, Fla, USA, March 2012.
[4] X. Wang, M. Chen, T. Taleb, A. Ksentini, and V. C. M. Leung, “Cache in the air: exploiting content caching and delivery techniques for 5G systems,” IEEE Communications Magazine, vol. 52, no. 2, pp. 131–139, 2014.
[5] H. Sarkissian, “The business case for caching in 4G LTE networks,” Tech. Rep., LSI, 2012.
[6] S. Woo, E. Jeong, S. Park, J. Lee, S. Ihm, and K. Park, “Comparison of caching strategies in modern cellular backhaul networks,” in Proceedings of the 11th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '13), pp. 319–332, Taipei, Taiwan, June 2013.
[7] J. Erman, A. Gerber, M. Hajiaghayi, D. Pei, S. Sen, and O. Spatscheck, “To cache or not to cache: the 3G case,” IEEE Internet Computing, vol. 15, no. 2, pp. 27–34, 2011.
[8] F. Le, M. Srivatsa, and A. K. Iyengar, “Byte caching in wireless networks,” in Proceedings of the 32nd International Conference on Distributed Computing Systems (ICDCS '12), Macau, China, July 2012.
[9] N. Golrezaei, K. Shanmugam, A. G. Dimakis, A. F. Molisch, and G. Caire, “Femtocaching: wireless video content delivery through distributed caching helpers,” in Proceedings of the IEEE Conference on Computer Communications (INFOCOM '12), Orlando, Fla, USA, March 2012.
[10] H. Ahlehagh and S. Dey, “Video caching in radio access network: impact on delay and capacity,” in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '12), pp. 2276–2281, IEEE, Shanghai, China, April 2012.
[11] N. Choi, K. Guan, D. Kilper, and G. Atkinson, “In-network caching effect on optimal energy consumption in content-centric networking,” in Proceedings of the IEEE International Conference on Communications (ICC '12), Ottawa, Canada, June 2012.
[12] K. Guan, G. Atkinson, D. C. Kilper, and E. Gulsen, “On the energy efficiency of content delivery architectures,” in Proceedings of the IEEE International Conference on Communications Workshops (ICC '11), Kyoto, Japan, June 2011.
[13] Y. Shu, M. Yu, J. Liu, and O. W. W. Yang, “Wireless traffic modeling and prediction using seasonal ARIMA models,” in Proceedings of the International Conference on Communications (ICC '03), vol. 3, pp. 1675–1679, Anchorage, Alaska, USA, May 2003.
[14] L. A. Barroso and U. Hölzle, “The case for energy-proportional computing,” Computer, vol. 40, no. 12, pp. 33–37, 2007.
[15] S. Borst, “User-level performance of channel-aware scheduling algorithms in wireless data networks,” in Proceedings of the IEEE Conference on Computer Communications (INFOCOM '03), San Francisco, Calif, USA, 2003.
[16] I. E. Telatar and R. G. Gallager, “Combining queueing theory with information theory for multiaccess,” IEEE Journal on Selected Areas in Communications, vol. 13, no. 6, pp. 963–969, 1995.
[17] X. Qiu and K. Chawla, “On the performance of adaptive modulation in cellular systems,” IEEE Transactions on Communications, vol. 47, no. 6, pp. 884–895, 1999.
[18] J. Walrand, An Introduction to Queueing Networks, Prentice Hall, New York, NY, USA, 1998.
[19] J. D. C. Little and S. C. Graves, Little's Law, vol. 115, Springer, New York, NY, USA, 2008.
[20] R. A. Berry and R. G. Gallager, “Communication over fading channels with delay constraints,” IEEE Transactions on Information Theory, vol. 48, no. 5, pp. 1135–1149, 2002.
[21] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge, UK, 2004.
[22] J. Wu, S. Zhou, and Z. Niu, “Traffic-aware base station sleeping control and power matching for energy-delay tradeoffs in green cellular networks,” IEEE Transactions on Wireless Communications, vol. 12, no. 8, pp. 4196–4209, 2013.
[23] Ericsson, “Radio characteristics of the ITU test environments and deployment scenarios,” R1-091320, 3GPP TSG-RAN1#56bis, Seoul, Republic of Korea, 2009.
[24] M. A. Imran, E. Katranaras, G. Auer et al., “D2.3: energy efficiency analysis of the reference systems, areas of improvements and target breakdown,” ICT-EARTH Technical Report INFSO-ICT-247733, 2010, http://cordis.europa.eu/docs/projects/cnect/3/247733/080/deliverables/001-EARTHWP2D23v2.pdf.
[25] L. Breslau, P. Cao, L. Fan, G. Phillips, and S. Shenker, “Web caching and Zipf-like distributions: evidence and implications,” in Proceedings of the 18th Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM '99), vol. 1, pp. 126–134, IEEE, New York, NY, USA, March 1999.
[26] D. Rossi and G. Rossini, “Caching performance of content centric networks under multi-path routing (and more),” Tech. Rep., Telecom ParisTech, 2011.
