Towards Greener Services in Cloud Computing: Research and Future Directives

Soha Rawas
Department of Mathematics and Computer Science, Beirut Arab University
[email protected]

Wassim Itani
Department of Electrical and Computer Engineering, Beirut Arab University
[email protected]

Ali Zaart
Department of Mathematics and Computer Science, Beirut Arab University
[email protected]

Ahmed Zekri
Department of Mathematics and Computer Science, Beirut Arab University
[email protected]

Abstract—Cloud computing is today's most emphasized infrastructure for running high-performance computing (HPC), business enterprise, and web applications used by almost every online user. The growing demand for distributed, powerful data centers has drastically increased the energy consumption of Cloud infrastructure, which has started to limit further performance growth, not only because of overwhelming electricity bills but also because of a carbon dioxide footprint that affects the whole world. Energy-efficient solutions have therefore become a critical factor for the future sustainability of ICT. Designing such solutions requires a deep analysis of the Cloud infrastructure and of the computing resources that contribute to energy consumption. This paper discusses the various elements that form the cloud computing environment, their roles in total energy consumption, and the research directions that can enable green Cloud computing. Finally, an energy-efficient green Cloud architecture is proposed to provide a unified solution that helps Cloud providers utilize computing resources in the greenest, most energy-efficient way.

Keywords—Green IT, Energy Efficiency, Power Saving, CO2 Emission

I. INTRODUCTION

Throughout their history, computer systems have evolved in a spiral of integration and distribution. They experienced a transition from centralized, massive, shared mainframes in the 1970s to decentralized, handy personal PCs in the 1990s. Since 2010, however, they have been moving again into consolidated, shared, virtualized machines invisible to users: so-called cloud computing systems [1]. Cloud computing has developed into a widespread computing paradigm that provides cheaper, readily accessible resources. It also offers researchers a new way to set up their computations and run their data applications, such as scientific applications, with no upfront infrastructure expense. Cloud services follow a flexible, pay-as-you-go model in which users are charged according to their usage of services ranging from computation and storage to networking, much like conventional utilities in everyday life (e.g., water, electricity, gas, and telephony) [3]. The proliferation of cloud computing has led to the establishment of large-scale data centers worldwide

with thousands of computing nodes. However, the cloud centers' consumption of huge amounts of energy has caused high operating costs and carbon dioxide (CO2) emissions into the environment. According to a study by the Uptime Institute and McKinsey [34], server clusters in data centres contribute around 0.3% of the world's CO2 emissions, and their emissions are expected to surpass those of the airline industry by 2020. Other studies by Greenpeace indicate that IT carbon footprints account for 2% of global greenhouse gas emissions [4, 35], which equals the CO2 emissions of the aviation industry [5] and contributes markedly to the greenhouse effect. Koomey projected in [6] that the energy consumption of data centers will continue to increase rapidly unless advanced energy-efficient mechanisms are established and applied. The most recent Greenpeace studies estimate that about 2.5 billion people are currently online, a number expected to increase by 60% in the next five years [36]. Achieving power efficiency in today's Internet server systems is thus a fundamental concern. To tackle high energy use, it is essential to eliminate inefficiency and waste in the way electricity is delivered to computing resources, and to consider how those resources are used to serve application workloads. One way to accomplish this is by enhancing the physical infrastructure of data centers together with their resource allocation and management algorithms. According to the Open Compute Project, the Power Usage Effectiveness (PUE) of Facebook's Oregon data center has reached 1.08 [7], which means that roughly 93% of the energy the facility draws is actually consumed by its computing resources. It is well known that data centers are massive consumers of energy, and this consumption leads to high carbon emissions, since most of this energy is generated from fossil fuels.
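The relationship between PUE and useful IT energy is simple to state precisely: PUE is defined as total facility energy divided by IT equipment energy, so the IT share is its reciprocal. The following minimal sketch (the `it_energy_share` helper is illustrative, not from the paper) reproduces the Facebook figure cited above:

```python
def it_energy_share(pue: float) -> float:
    """Fraction of a data center's total energy that reaches IT equipment.

    PUE = total facility energy / IT equipment energy,
    so the IT share is simply 1 / PUE.
    """
    if pue < 1.0:
        raise ValueError("PUE cannot be below 1.0 by definition")
    return 1.0 / pue

# Facebook's Oregon data center (PUE = 1.08, per the Open Compute Project):
print(f"{it_energy_share(1.08):.1%}")  # prints 92.6%
```

A PUE of 2.0, by contrast, would mean that only half of the facility's energy does useful computing work, with the rest spent on cooling and other overheads.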
Moreover, energy costs directly affect end users, who pay for cloud resources and services on a per-usage basis. Higher power consumption also requires more cooling, which harms the environment by producing more CO2. Therefore, reducing even a small fraction of the energy consumed in datacentres yields substantial financial savings and cuts the ecosystem's carbon emissions. Increasing awareness of these emissions, in turn, leads to greater demand for cleaner products and services. Thus, many companies have started to build "green" datacenters, i.e., datacenters with on-site renewable power plants. For example, Apple [8, 19] and McGraw-Hill [9] have built 20 MW and 14 MW solar arrays, respectively, for their data centers [10]. Cloud computing is growing in popularity among computing paradigms for its appealing property of offering "Everything as a Service". The building block of cloud computing is the cloud infrastructure, which enables service providers to provision the infrastructure they need (in terms of processing, storage, networks, and other fundamental computing resources) for delivering their services without having to buy the underlying resources. This pressing demand for Cloud infrastructure has led to a radical increase in the data centres' energy consumption, turning it into a serious problem. High energy consumption means high operational costs, and thus lower profits for Cloud providers, as well as high, environmentally unfriendly carbon emissions. Therefore, energy-efficiency solutions should be devised to reduce the Cloud's influence on the environment. The rest of the paper is organized as follows. Section 2 reviews the research work in cloud energy efficiency. Section 3 discusses the factors that contribute to the energy consumption in data centers. Section 4 describes power management techniques and conserving strategies. Section 5 elaborates our design guidelines for an energy-efficient service model for a green cloud environment. Section 6 concludes the paper.

II. RESEARCH WORK IN CLOUD ENERGY EFFICIENCY

Despite the fact that cloud computing has become a widespread computing paradigm, research in the field is still inadequate.
Beyond power consumption and efficient energy management, cloud computing faces challenges related to standardization, security, software frameworks, and quality of service. However, power and energy management remains one of the most challenging research concerns [11]. Even though researchers have proposed various contributions to Cloud energy efficiency, the Cloud computing environment still lacks a unified picture. Additionally, attention has often been drawn to specific components of cloud computing while ignoring others, leading to insufficient energy savings. VM consolidation, for example, may decrease the number of active servers, but it places extreme load on a few servers, where heat distribution can become a major problem. Conversely, other research focuses mainly on redistributing workload to support energy-efficient cooling, without making allowance for the effects of virtualization. Being profit-oriented, Cloud providers are searching for solutions that can lessen power consumption and carbon emissions without affecting their

market. Therefore, a unified answer to the issue of energy consumption should be found to enable green Cloud computing. This can be done by recommending a green Cloud architecture that helps decrease the energy consumption of Clouds without compromising the providers' objectives. Several issues around green Information and Communication Technologies (ICT) and energy reduction in modern cloud computing systems are receiving considerable attention in the research community. Several efforts have been made to build energy consumption models, develop energy-aware costing, manage workload fluctuation, and achieve an efficient trade-off between system performance and energy cost. Buyya et al. [20, 21] present a carbon-aware green Cloud architecture that improves the carbon footprint of Cloud computing while taking a global view. Their architecture is designed to provide incentives to both users and providers to consume and deliver the greenest services, respectively. The proposed architecture consists of two directories, named the Green Offer Directory and the Carbon Emission Directory. The Green Offer Directory keeps a record of the providers' services, organized by the Green Broker with respect to price, time, and least CO2 emission. The Carbon Emission Directory stores data on the energy and cooling efficiency of cloud services and data centres. The Green Broker uses only up-to-date service information: when a user requests a service, the broker consults these directories to choose the green offer and its energy-efficiency information, allocates the service to the private cloud, and provides the user with the result.
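The broker's selection logic described above can be sketched as follows. The `GreenOffer` fields and all figures are hypothetical, intended only to illustrate least-CO2 selection among offers that satisfy the user's price and deadline constraints; the actual Green Broker of [20, 21] is considerably richer:

```python
from dataclasses import dataclass

@dataclass
class GreenOffer:
    provider: str
    price: float            # $ per hour (hypothetical units)
    completion_time: float  # hours to complete the request
    co2_per_hour: float     # kg CO2 emitted per hour of service

def pick_greenest(offers, max_price, deadline):
    """Broker policy sketch: among offers meeting the user's price and
    deadline constraints, choose the one with the least total CO2 emission."""
    feasible = [o for o in offers
                if o.price <= max_price and o.completion_time <= deadline]
    if not feasible:
        return None
    return min(feasible, key=lambda o: o.co2_per_hour * o.completion_time)

offers = [
    GreenOffer("dc-coal",  0.10, 2.0, 0.90),
    GreenOffer("dc-hydro", 0.12, 3.0, 0.05),
    GreenOffer("dc-solar", 0.20, 2.5, 0.02),  # greenest, but over budget here
]
best = pick_greenest(offers, max_price=0.15, deadline=4.0)
print(best.provider)  # prints dc-hydro
```

Note the two-stage structure: QoS constraints (price, deadline) act as a filter, and the carbon footprint is the optimization objective among the survivors.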
Moreover, a Carbon Efficient Green Policy (CEGP) is proposed for the Green Broker; it helps schedule user application workloads with urgent deadlines on the Cloud data centers with higher energy efficiency and lower carbon impact. This directory idea, proposed by Buyya [21], is also used by Hulkury et al. [22], who propose the Integrated Green Cloud Architecture (IGCA). The IGCA includes a Cloud middleware: when a client makes a request, the manager divides it into jobs and distributes them among the users, while the job information is stored in the middleware components. The manager selects the best green offer, taking into account the security level of the job. Once the manager makes such a decision, the information is stored in an XML file for future use. However, because the manager plays the central coordinator role, both assigning jobs to users and making decisions, it is the weakest point in this architecture: it is a central point of failure, and if the manager fails the whole architecture collapses. The architectural framework proposed in [23] for green clouds also performs VM reconfiguration, allocation, and reallocation. The authors use a CPU power model to monitor the energy consumption of the cloud, observing a correlation over time between CPU energy utilization and workload. Different VM migration models were used to dynamically consolidate VMs, and a threshold-based experimental model was proposed to decide when VMs should be migrated from a host. Liu et al. present the Green Cloud architecture [24], which reduces power consumption by enabling comprehensive online monitoring, live virtual machine (VM) migration, and VM placement optimization. Khosravi et al. [25] propose a VM placement algorithm by developing the Energy and Carbon-Efficient (ECE) Cloud architecture. This architecture benefits from distributed Cloud data centres with different carbon footprint rates, PUE values, and different physical-server proportional power, by placing VM requests in the best-suited data center site and physical server. The proposed architecture comprises the following components:
- Cloud Users.
- Cloud Provider: owns several geographically distributed data centre sites.
- ECE Cloud Information Service: a service in which each data center site registers its characteristics and keeps its information (available physical resources and energy-related parameters) up to date. This information is used by a Cloud broker to perform energy-efficient VM placement.
- ECE Cloud Broker: the Cloud provider's interface with Cloud users, which makes the placement decision based on the data centers' power usage effectiveness, the carbon footprint rate of their energy sources, and their proportional power.

The VM placement algorithm used by the proposed ECE architecture is based on a best-fit heuristic and was evaluated using the CloudSim simulator [56]. Caglar et al. [26] proposed a solution similar to that of Khosravi et al. The main difference is that they evaluated their solution inside a cloud data center and applied machine learning to account for a large set of factors affecting power and performance that are difficult to model analytically. Moreover, Khosravi et al. considered power consumption as a function of CPU frequency, whereas Caglar et al. have taken several additional factors (memory, network, overbooking rate, and so on)
into account for predicting performance and power. Awada et al. [14] presented new energy consumption models and formulas that describe in detail the energy consumption of virtualized data centers. The proposed model divides energy consumption into two parts, fixed and dynamic, representing the energy consumed during the server's idle state and the energy consumed by Cloud tasks and the cooling system, respectively. The energy-saving mechanism is based on optimization, reconfiguration, and monitoring, and is shown to save 20% of the energy consumption in cloud data centers. Energy management techniques in cloud data centers have also been explored in past years. The work in [28] describes how servers can be turned on and off using the Dynamic Voltage and Frequency Scaling (DVFS) approach to regulate servers' power status. This technique adjusts CPU energy consumption in proportion to the workload; however, its scope is limited to the CPUs, so there is a need to examine the behaviour of individual VMs. By monitoring the energy profile of individual system components, Anne et al. [29] observed that nodes still consume energy even when they are turned off, because of the embedded card controllers used to wake up remote nodes. Bhanu Priya et al. [30] studied cloud computing metrics and proposed different energy models to reduce power consumption and CO2 emission in order to make a cloud greener. Their paper focused on three major factors to be considered for any cloud to be green: virtualization, workload distribution, and software automation. Hosman and Baikie [31] tackle the topic of solar energy and how it can play a vital role in data centers. They propose a small-scale Cloud data center that combines three technologies: a low-power-consumption platform, energy-efficient cloud computing, and data center power distribution. Singh et al. [32] proposed a set of (1) architectural principles for the energy-efficient management of Clouds and (2) energy-efficient resource allocation strategies. Their Energy-based Efficient Resource Scheduling Framework (EBERSF) uses energy as a QoS parameter to schedule resources. By treating energy as a QoS parameter, EBERSF offers resource scheduling to the Cloud consumer for optimal results and better services, and guarantees the avoidance of service-level-agreement violations. Moreover, it assists organizations in reducing power consumption and contributes directly to company growth. The Green Route (G-Route), proposed and implemented by Itani et al.
[33] is an autonomic service routing protocol for constructing energy-efficient provider paths in collaborative cloud architectures. The proposed G-Route approach selects the set of composite service components with the most efficient energy consumption characteristics from among a set of providers for executing a particular service request. The routing decision engine operates on accountable energy measurements extracted securely from within the cloud data centers using trusted computing technologies and cryptographic mechanisms, which ensure the accountability of the system. The G-Route service routing protocol is composed of the following entities:

- A cloud customer.
- A cloud service provider (CSP).
- The Service Router (SR).
- The Energy Metric Repository (EMR).
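The path-selection idea behind G-Route can be illustrated with a greedy sketch: for each component of a composite service, pick the registered provider with the lowest measured energy cost. The provider names and energy figures below are hypothetical, and the real protocol additionally verifies measurements via trusted computing and cryptographic mechanisms, none of which is modeled here:

```python
# Hypothetical per-provider energy measurements (joules per request) for the
# three components modeled in G-Route: CPU, disk, and NIC.
energy = {
    ("auth",  "csp-a"): {"cpu": 40, "disk": 5,  "nic": 10},
    ("auth",  "csp-b"): {"cpu": 30, "disk": 9,  "nic": 12},
    ("store", "csp-a"): {"cpu": 20, "disk": 30, "nic": 8},
    ("store", "csp-c"): {"cpu": 25, "disk": 18, "nic": 9},
}

def total_energy(component, provider):
    """Sum the CPU, disk, and NIC energy reported for one (component, provider)."""
    e = energy[(component, provider)]
    return e["cpu"] + e["disk"] + e["nic"]

def greenest_path(service_components):
    """For each component of a composite service, pick the registered provider
    with the lowest measured energy cost (greedy sketch of G-Route's idea)."""
    path = {}
    for comp in service_components:
        candidates = [p for (c, p) in energy if c == comp]
        path[comp] = min(candidates, key=lambda p: total_energy(comp, p))
    return path

print(greenest_path(["auth", "store"]))
```

With these numbers the sketch routes "auth" to csp-b (51 J vs. 55 J) and "store" to csp-c (52 J vs. 58 J); in G-Route the analogous records would come from the EMR rather than a local dictionary.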

The energy model implemented in this work is based on three main system components: the central processing unit (CPU), the disk, and the network interface card (NIC). The proposed G-Route was implemented in a real cloud computing environment on the Amazon EC2 platform, and the experimental results demonstrate the capability of G-Route's energy-aware path selection to achieve major energy and cost savings per service request. Further energy-efficient networking contributions have been proposed by different researchers. Fisher et al. studied turning off individual cables and NICs [27]. Their three heuristics (Fast Greedy, Exhaustive Greedy, and Bi-level Greedy), applied to Abilene traffic data, achieve up to 79% energy savings for bundles of five cables per link. The ElasticTree approach proposed by Heller et al. [43] dynamically manages the power consumption of the data center network by determining the set of active network elements, such as switches and links, and turning off unused ones. The method was implemented and tested using OpenFlow switches and achieved a 50% reduction in network energy. Other research has addressed and surveyed various aspects of improving the energy efficiency of data centres. Yuan et al. [58] reviewed different strategies for efficient energy utilization, ranging from server virtualization and consolidation to the optimal operation of fans and other cooling equipment. They discussed in more detail the case of cloud-based multimedia services, which pose specific challenges such as larger storage and bandwidth requirements. Green communication techniques in mobile networks have also been studied in academia: Wang et al. [57] carry out a comprehensive survey of techniques for improving the energy efficiency of mobile networks, including architectural designs, communication schemes, power-saving mechanisms, and effective transmissions.

III. ENERGY CONSUMPTION FACTORS IN DATA CENTERS

In this section, we investigate the various Cloud elements that contribute to a data center's power consumption. These elements may directly support the cloud service, such as software applications, network devices, and servers, or indirectly support it, such as cooling systems, power generators, and other electro-mechanical devices.

The following sections discuss the energy consumption of these elements.

A. Software Applications

One factor that contributes to energy consumption is software applications and the way they are designed and implemented. These applications are owned by an individual user or offered by the Cloud provider as SaaS. Their energy consumption depends on the application profile: long-running applications with high CPU and memory requirements lead to high energy consumption. Energy consumption due to software is therefore directly proportional to the application's profile, which often results from inaccurate design and implementation, since energy-efficiency considerations are rarely part of the application's design. This becomes a major problem especially for mobile devices with limited computational and battery resources [45, 20].

B. Network Devices

The network system is another area of concern and a key enabling component for cloud computing, since it allows communication between computing resources and the end user. Network devices consume a significant portion of total power [38]; these power-hungry devices have become a serious problem for many data center owners, accounting for around 20-30% of a data center's whole energy consumption [44]. This ratio will continue to grow with the rapid development of data centers worldwide, which rely on fast network connectivity to satisfy consumer demand. In theory, as shown in Figure 1, energy consumption should grow with network load, and an idle switch should consume no power. In practice, however, experiments [45, 38] show that the energy consumed by network devices at low network load reaches around 90% of that at peak-hour load, meaning that these devices are not energy-proportional to the network load. A significant amount of energy is therefore wasted by the large number of idle network devices in high-density data center networks, which are designed and implemented to handle the worst-case scenario.
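The gap between ideal, energy-proportional behaviour and the measured roughly-90%-at-idle behaviour can be shown numerically. The wattage figures below are illustrative, not measurements from [45, 38]:

```python
def switch_power(load, p_peak=100.0, idle_fraction=0.9):
    """Observed behaviour: a switch near zero load still draws about 90% of
    its peak power, rising only slightly with load.  `load` is in [0, 1]."""
    return p_peak * (idle_fraction + (1.0 - idle_fraction) * load)

def proportional_power(load, p_peak=100.0):
    """Ideal energy-proportional device: power scales linearly with load."""
    return p_peak * load

for load in (0.0, 0.3, 1.0):
    actual, ideal = switch_power(load), proportional_power(load)
    print(f"load={load:.0%}: actual={actual:5.1f} W, ideal={ideal:5.1f} W")
```

At 30% load the illustrative switch draws 93 W where a proportional device would draw 30 W, which is exactly the waste that switching off idle devices or consolidating traffic aims to recover.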

Fig. 1: Ideal vs. actual energy consumption of network devices as a function of traffic load

The EMR is considered the chief contribution of the G-Route work. Its role is to maintain energy consumption records for the different services published by registered CSPs in the cloud. These records are generated by trusted authorities that accountably monitor service energy consumption at the providers' sites, so the energy profiling process is controlled by a third party trusted by both cloud providers and consumers.

C. Servers

A data center contains hundreds to thousands of servers and storage units, and these servers are the main contributors to its energy consumption [46]. The main parts contributing to a server's power consumption are the CPU, memory, and storage devices [47]. The major source of server energy inefficiency is the fact that an idle server consumes about 70% of its peak power. Reducing this waste has therefore become the central task in providing energy efficiency in cloud computing, which is why dynamic resource scheduling, server consolidation, and efficient resource allocation that save energy by switching off idle servers remained the main concerns of Intel's Cloud computing vision for 2015 [48].

D. Other Data Center Equipment

From the discussion above, it is clear that servers, networks, and software applications are not the only infrastructure elements that consume energy in the data center. In reality, equipment such as power supplies, cooling, lighting, UPS units, and the data center building itself consumes an equivalent amount of energy, as shown in Figure 2. This brings us to the observation of Ranganathan [49] that a dollar is paid for the cooling system for every dollar spent on the data center's other computing and electrical devices. Therefore, devices other than the actual IT equipment contribute a large share of the power consumption in a data center.
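The roughly-70%-at-idle behaviour noted above suggests a simple first-order linear power model, which also explains why consolidation pays off: two half-loaded servers draw considerably more than one fully loaded server with the second machine powered down. The wattage figures are illustrative:

```python
def server_power(utilization, p_peak=300.0, idle_fraction=0.7):
    """First-order server power model: an idle server draws about 70% of its
    peak power, and power grows roughly linearly with CPU utilization.
    `utilization` is in [0, 1]; wattage figures are illustrative."""
    return p_peak * (idle_fraction + (1.0 - idle_fraction) * utilization)

# Two half-loaded servers vs. one fully loaded server plus one powered down:
two_half = 2 * server_power(0.5)        # about 510 W
consolidated = server_power(1.0) + 0.0  # about 300 W, second machine off
print(two_half, consolidated)
```

Under this model, consolidating the two half-loads onto one machine and switching the other off saves roughly 40% of the power for the same total work, which is the core argument behind the consolidation techniques discussed in the next section.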

Fig. 2: Data centre energy consumption breakdown: cooling 40%, servers and storage 28%, network hardware 20%, power conversion 11%, lighting 1%

IV. TOWARDS CLOUD ENERGY EFFICIENCY TECHNIQUES IN DATA CENTERS

The previous discussion shows the need for a comprehensive approach to energy-efficient Cloud computing data centers, one that involves all system layers and aspects: the application layer, physical nodes, cooling systems, communication protocols, networking hardware, servers, and the services themselves. To achieve a power reduction in Cloud data centers, standard metrics are needed to compute the amount of energy consumed by cooling and other overheads. PUE [50] is one of the most widely used standards for measuring how much energy is usefully deployed versus spent as overhead.

Several technologies, techniques, and concepts are applied by Cloud providers to lower the carbon emission and energy consumption of their IT infrastructure. Among them, virtualization remains the key driver contributing to significant improvements in the energy efficiency of Cloud data centers. Using efficient green IT methods, Cloud computing can reduce energy usage and carbon emissions in data centers by at least 30-50% [20, 47, 49, 50]. The sections below discuss some of these energy-conserving strategies.

A. Use Renewable Energy Sources

Cloud data centers usually use generators to provide backup power. However, to generate the electricity that fulfils the power and cooling requirements of data centers, renewable energy sources such as wind, hydro, or solar should be used, saving energy and decreasing CO2 emissions [52]. According to Greenpeace [36], Apple, Facebook, and Google have committed to being completely powered by renewable energy sources; Apple's renewable energy projects are expected to cover 87 percent of its global operations [51].

B. Virtualization, Consolidation and Live Migration

Virtualization remains the key current technology for the energy-efficient operation of servers in data centers. Depending on user needs and management decisions, virtualized services can be created, deleted, moved, or copied. VMs thus reduce the number of active hardware units by emulating the physical environment [54]. Live migration moves a running VM from a highly loaded server to a lightly loaded one; the vacated servers can then be switched off, saving energy [53]. Live migration provides further benefits such as power management, load balancing, and transparent IT maintenance without loss of quality of service [55]. Together, virtualization, consolidation, and live migration simplify the management of physical servers: with energy-efficient virtual scheduling algorithms, resource utilization and load balancing improve, and a non-negligible amount of energy can be saved.

C. Application Software and Compiling Technology

Although energy consumption is rarely a concern of developers when software is designed and implemented, software plays as important a role as hardware in optimizing the power consumption of Cloud data centers, and energy efficiency can be improved by optimizing the software code: doing more work in fewer clock cycles makes better use of the underlying hardware capabilities [56]. Moreover, to increase application performance, compilers should be optimized to take better advantage of the underlying processor architecture and to minimize system and processor power consumption. Multithreading and parallelization, which convert sequential code into multiple threads that can run concurrently on parallel processors, can play a major role in saving energy [41].

D. Network Devices

Research has shown that the network platform and its devices are among the largest consumers of energy; they therefore need energy management strategies based on optimizing and developing new energy-aware network protocols and devices. The increasing demand for scalable and cost-efficient computer networks has led to the rise of Software-Defined Networking (SDN), which enables the central management of network switches and their traffic through a programmable controller. This emerging technology can improve QoS and energy efficiency in Cloud data center networks by efficiently performing dynamic per-flow bandwidth allocation, efficient network virtualization, and traffic management and consolidation [2, 42, 39].

E. Data Center Building and Cooling Management

The data center building design, power supply, and cooling unit are further infrastructure elements that need to be designed in an energy-efficient manner. Through smart construction of the data center, where electricity can be generated from solar and other renewable sources, a non-negligible amount of energy can be conserved. As for the cooling unit, free cooling is a newly adopted technique that replaces mechanical cooling and has a great impact on reducing the data center's energy consumption and optimizing the cooling infrastructure [40]. Hence, the efficient use of computing resources in clouds can help in achieving green Cloud computing.

V. AN ENERGY-EFFICIENT SERVICE MODEL FOR GREEN CLOUD COMPUTING

The above study reveals that different efforts have been proposed to help enable green computing and make Cloud computing environments energy efficient. However, Cloud computing still lacks a unified picture that Cloud providers can adopt for energy sustainability. This study therefore aims to provide a unified solution to enable a greener Cloud computing environment. The proposed Green Cloud model enforces green computing in the provider's cloud data centers, so the architecture aims at making a Cloud green from both the user's and the provider's perspective. Figure 3 presents a view of the proposed Green Cloud architecture.

Fig. 3: Green Cloud Computing Architecture

The Green Cloud framework is made up of the following layers:

1- End User / Cloud Customer
- Users of deployed services.
- A company deploying an application.
- A user or company that requests a service from the infrastructure of a certain cloud service provider.

2- Green Mediator
A secured third party that acts as an intermediary between the cloud customer and the cloud service providers. It redirects the user request to the CSP that offers the greenest requested service. It contains the following directories:
- Energy Monitor Directory: calculates the cost and carbon footprint of services using a dedicated energy model. It contains data on the PUE, cooling efficiency, carbon footprint, and network cost of each CSP registered in the green mediator, and it observes the energy consumption caused by VMs and physical machines, providing this information to the CSP's VM manager for optimization and energy-efficient resource allocation decisions.
- Green Directory: lists the different services published by registered CSPs together with their energy consumption records, and helps users select cloud services with minimum carbon footprint. Its objectives are incentives for providers, users, and government: for providers, an advertising tool to increase market share (e.g., Google); for users, services at minimum prices; for government, a means of enforcing policies such as a carbon tax.
- User Request Analyzer: requests the cloud customer profile, and interprets and analyzes the service requirements of a submitted request.

It also finalizes the SLAs, with specified prices and penalties, between the cloud provider and the consumer, depending on the consumer's QoS requirements.

3- Cloud Service Provider
- Registers its services in the green offer directory.
- Optimizes its infrastructure to reduce the carbon footprint by applying a number of policies:
  SaaS: energy-efficient software through design, implementation, and deployment.
  PaaS: optimization at the compiler level.
  IaaS:
  1. Observe and decide which physical machines to power on or off.
  2. Virtual machine management, migration, and resource allocation.
  3. Power- and thermal-aware scheduling of VMs.
  4. Network optimization and virtualization.
  5. Lighting, air conditioning, and other facility controls.

The proposed green Cloud framework is designed to keep track of the overall energy consumed by the cloud providers' data centers. This gives Cloud providers incentives to make their services (SaaS, PaaS, and IaaS) "greener" and to serve users with the most energy-efficient services. Note that the green mediator can be built on top of the cloud architecture of each CSP. One objective of the proposed model is to develop a set of secure mechanisms for continuously measuring service energy consumption in cloud sites; this secured third party could be adopted by cloud customers and providers to keep the cloud computing environment green and efficient in terms of CO2 emissions.

VI. CONCLUSION

This paper studied the Cloud computing environment and the impact of this emerging technology on the world's energy consumption and carbon dioxide emissions. It then analyzed the components of the Cloud that contribute to carbon emission, along with the research efforts and technologies proposed to help the Cloud environment go "green". Moreover, a green Cloud framework was proposed to help the energy sustainability of the Cloud computing environment.
Even though the proposed framework has not been implemented yet, it offers a view of a new green ICT environment that can help reduce CO2 emissions for the sake of better Cloud computing sustainability.

REFERENCES
[1] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. H. Katz, A. Konwinski, G. Lee, D. A. Patterson, A. Rabkin, I. Stoica, and M. Zaharia. Above the Clouds: A Berkeley View of Cloud Computing. 2009.
[2] K. Zheng, X. Wang, L. Li, and X. Wang. Joint power optimization of data center network and servers with correlation analysis. In 2014 IEEE Conference on Computer Communications (INFOCOM 2014), Toronto, Canada, April 27 - May 2, 2014, pp. 2598-2606.

[3] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic. Cloud Computing and Emerging IT Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility. Future Generation Computer Systems, vol. 25, pp. 599-616, 2009.
[4] J. G. Koomey. "Growth in data center electricity use 2005 to 2010," Analytics Press, Tech. Rep., 2011.
[5] Gartner, Inc. "Gartner estimates ICT industry accounts for 2 percent of global CO2 emissions," 2007 (accessed 9/02/2015). [Online]. Available: http://www.gartner.com/it/page.jsp?id=503867
[6] J. G. Koomey. Estimating total power consumption by servers in the US and the world. Lawrence Berkeley National Laboratory, Tech. Rep., 2007.
[7] Open Compute Project. "Energy efficiency" (accessed 9/02/2015). [Online]. Available: http://opencompute.org/about/energy-efficiency/
[8] A. Ankita, J. Nikita, and N. Iyengar. A Study on Green Cloud Computing. International Journal of Grid and Distributed Computing, vol. 6, no. 6, pp. 93-102, 2013.
[9] R. Miller. Data Centers Scale Up Their Solar Power. 2012.
[10] J. L. Berral, Í. Goiri, T. D. Nguyen, and R. Gavaldà. Building Green Cloud Services at Low Cost. In IEEE International Conference on Distributed Computing Systems (ICDCS), 2014.
[11] S.-Y. Jing, S. Ali, K. She, and Y. Zhong. State-of-the-art research study for green cloud computing. Springer Science+Business Media, LLC, 2011.
[12] U.S. Environmental Protection Agency. Report to Congress on Server and Data Center Energy Efficiency, Public Law, 2006. [Online]. Available: http://hightech.lbl.gov/documents/data_centers/epa-datacenters.pdf
[13] Greenpeace. Greenpeace "Likes" Facebook's New Datacenter, But Wants a Greener Friendship, 2011. [Online]. Available: http://www.greenpeace.org/international/en/press/releases/Greenpeace-likes-Facebooks-new-datacentre-but-wants-a-greener-friendship
[14] U. Awada, L. Keqiu, and S. Yanming. Energy Consumption in Cloud Computing Data Centers.
International Journal of Cloud Computing and Services Science (IJ-CLOSER), vol. 3, no. 3, June 2014. ISSN: 2089-3337.
[15] Ministry of Economy, Trade and Industry. Establishment of the Japan Data Center Council. Press Release.
[16] J. Hamilton. Cooperative expendable micro-slice servers (CEMS): low-cost, low-power servers for internet-scale services. In Proceedings of CIDR'09, California, USA.
[17] The Green Grid consortium, 2011. URL: http://www.thegreengrid.org.
[18] R. Buyya, A. Beloglazov, and J. Abawajy. Energy-efficient management of data center resources for cloud computing: a vision, architectural elements, and open challenges. In Proceedings of the 2010 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA 2010), Las Vegas, USA, 2010.
[19] Apple. "Apple and the Environment," 2013. http://www.apple.com/environment/renewable-energy/
[20] S. K. Garg and R. Buyya. Green cloud computing and environmental sustainability. In S. Murugesan and G. Gangadharan (eds.), Harnessing Green IT: Principles and Practices. Oxford: Wiley Press, 2012.
[21] S. K. Garg, C. S. Yeo, and R. Buyya. Green Cloud Framework for Improving Carbon Efficiency of Clouds. In Proceedings of the 17th International European Conference on Parallel and Distributed Computing, August-September 2011, Bordeaux, France.
[22] M. N. Hulkury and M. R. Doomun. Integrated green cloud computing architecture. In Proceedings of the 2012 International Conference on Advanced Computer Science Applications and Technologies (ACSAT '12), Washington, DC, USA: IEEE Computer Society, 2012, pp. 269-274. [Online]. Available: http://dx.doi.org/10.1109/ACSAT.2012.16
[23] A. Beloglazov, J. Abawajy, and R. Buyya. Energy-aware resource allocation heuristics for efficient management of data centers for cloud computing. Future Generation Computer Systems, 2012.
[24] L. Liu, H. Wang, X. Liu, X. Jin, W. B. He, Q. B. Wang, and Y. Chen. GreenCloud: a new architecture for green data center.
In Proceedings of the 6th International Conference on Autonomic Computing and Communications (Industry Session), Barcelona, Spain: ACM, 2009, pp. 29-38.

[25] A. Khosravi, S. K. Garg, and R. Buyya. Energy and carbon-efficient placement of virtual machines in distributed cloud data centers. In Euro-Par 2013 Parallel Processing, Springer, 2013, pp. 317-328.
[26] F. Caglar, S. Shekhar, and A. Gokhale. iPlace: An Intelligent and Tunable Power- and Performance-Aware Virtual Machine Placement Technique for Cloud-based Real-time Applications. In 17th IEEE Computer Society Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing (ISORC '14), Reno, NV, USA, June 2014.
[27] W. Fisher, M. Suchara, and J. Rexford. Greening backbone networks: reducing energy consumption by shutting off cables in bundled links. In Green Networking, 2010, pp. 29-34.
[28] L. Shang, L.-S. Peh, and N. K. Jha. Dynamic voltage scaling with links for power optimization of interconnection networks. In 9th International Symposium on High-Performance Computer Architecture (HPCA 2003), Anaheim, California, USA, 2003, pp. 91-102.
[29] A.-C. Orgerie, L. Lefevre, and J.-P. Gelas. Demystifying energy consumption in Grids and Clouds. In 2010 International Green Computing Conference, pp. 335-342, 15-18 Aug. 2010.
[30] B. Priya, E. S. Pilli, and R. C. Joshi. A Survey on Energy and Power Consumption Models for Greener Cloud. In Proceedings of the IEEE 3rd International Advance Computing Conference (IACC), February 22-23, 2013, Ghaziabad.
[31] L. Hosman and B. Baikie. Solar-Powered Cloud Computing Datacenters. IEEE Computer Society, vol. 2, no. 15, 2013.
[32] S. Singh and C. Inderveer. Energy-based Efficient Resource Scheduling: A Step Towards Green Computing. International Journal of Energy, Information & Communications, vol. 5, no. 2, 2014.
[33] W. Itani, C. Ghali, A. Chehab, A. Kayssi, and I. Elhajj. G-Route: An Energy-Aware Service Routing Protocol for Green Cloud Computing. Cluster Computing, 2015.
[34] McKinsey & Company. Revolutionizing data center efficiency. http://uptimeinstitute.org, 2008.
[35] G. Cook.
How Clean is Your Cloud? Greenpeace International Tech. Rep., April 2012.
[36] G. Cook. Clicking Clean: How Companies are Creating the Green Internet. Greenpeace International Tech. Rep., April 2014.
[37] Reducing Data Center Energy Consumption. CERN, Intel Xeon Processor white paper.
[38] T. Mastelic and A. Oleksiak. Cloud Computing: Survey on Energy Efficiency.
[39] J. Son, A. Dastjerdi, R. Calheiros, X. Ji, Y. Yoon, and R. Buyya. CloudSimSDN: Modeling and Simulation of Software-Defined Cloud Data Centers. In 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2015), May 4-7, 2015, Shenzhen, Guangdong, China.
[40] D. Borah, S. Muchahary, K. Singh, and Janmoni. Power Saving Strategies in Green Cloud Computing Systems. International Journal of Grid Distribution Computing, vol. 8, no. 1, pp. 299-306, 2015. http://dx.doi.org/10.14257/ijgdc.2015.8.1.28
[41] R. Zhao. A Multithreaded Compiler Optimization Technology with Low Power. Journal of Software, 2002, pp. 1123-1129.
[42] H. Jin, T. Cheocherngngarn, D. Levy, A. Smith, D. Pan, J. Liu, and N. Pissinou. "Joint host-network optimization for energy-efficient data center networking," in Proceedings of the 2013 IEEE 27th International Symposium on Parallel and Distributed Processing (IPDPS), May 2013, pp. 623-634.
[43] B. Heller, S. Seetharaman, P. Mahadevan, Y. Yiakoumis, P. Sharma, S. Banerjee, and N. McKeown. ElasticTree: saving energy in data center networks. In 7th USENIX Conference on Networked Systems Design and Implementation, 2010.
[44] Y. Shang, D. Li, and M. Xu. Energy-aware routing in data center network. In 1st ACM SIGCOMM Workshop on Green Networking, 2010.
[45] L. Huang, Q. Jia, X. Wang, S. Yang, and B. Li. PCube: Improving power efficiency in data center networks. In IEEE CLOUD, 2011, pp. 65-72.
[46] Energy Logic: Reducing Data Center Energy Consumption by Creating Savings that Cascade Across Systems. A White Paper from Experts in Business-Critical Continuity.

http://www.emersonnetworkpower.com/documentation/en-us/latestthinking/edc/documents/white paper/energylogicreducingdatacenterenergyconsumption.pdf
[47] Energy Efficient Data Center and Cloud Computing. White paper: performance optimization strategies, refreshes, and instrumentation for energy-efficient data center management and cloud computing. http://www.intel.com/content/www/us/en/cloud-computing/cloud-computing-efficient-data-center-paper.html
[48] A. Prasher and R. Bhatia. A Review on Energy Efficient Cloud Computing Algorithms. Vol. 3, issue 4, April 2014. ISSN 2319-4847.
[49] P. Ranganathan. Recipe for Efficiency: Principles of Power-Aware Computing. Communications of the ACM, 53(4), Apr 2010.
[50] S. Greenberg, E. Mills, B. Tschudi, P. Rumsey, and B. Myatt. Best Practices for Data Centers: Lessons Learned from Benchmarking 22 Data Centers. In ACEEE Summer Study on Energy Efficiency in Buildings. http://eetd.lbl.gov/emills/PUBS/PDF/ACEEE-datacenters.pdf
[51] G. Chapman. Renewable energy vital for Internet lifestyles: Greenpeace. May 12, 2015.
[52] R. Perrons. How the energy sector could get it wrong with cloud computing. Energy Exploration & Exploitation, vol. 33, no. 2, 2015, pp. 217-226.
[53] W. Huang, X. Li, and Z. Qian. An energy efficient virtual machine placement algorithm with balanced resource utilization. 2013 Seventh International Co
[54] C. Ghribi, M. Hadiji, and D. Zeghlache. Energy efficient VM scheduling for cloud data centers: exact allocation and migration algorithm. In 13th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing (CCGrid), pp. 671-678, May 2013.
[55] A. Strunk. Costs of virtual machine live migration: A survey. In 2012 IEEE Eighth World Congress on Services (SERVICES), June 2012, pp. 323-329.
[56] R. N. Calheiros, R. Ranjan, A. Beloglazov, C. A. F. De Rose, and R. Buyya. CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms.
Software: Practice and Experience, 41(1), 2011, pp. 23-50.
[57] X. Wang, A. Vasilakos, M. Chen, Y. Liu, and T. Kwon. A survey of green mobile networks: Opportunities and challenges. Mobile Networks and Applications, 2012, 17, 4-20.
[58] H. Yuan, C. Kuo, and I. Ahmad. "Energy efficiency in data centers and cloud-based multimedia services: An overview and future directions," in Proc. IEEE International Conference on Green Computing, pp. 375-382, 2010.
