Green Cloud Computing Schemes based on Networks: A Survey

Naixue Xiong1, 3, Wenlin Han2, Art Vandenberg3
1 Department of Computer Science, Georgia State University, Atlanta, GA, USA
2 The 722th Research Institute, China Shipbuilding Industry Corporation, 430079, China
3 Information Systems and Technology, Georgia State University, Atlanta, GA, USA
Email: [email protected], [email protected], [email protected]

Abstract

Green Cloud Computing (GCC) is a broad and rapidly evolving field. The distinction between "consumer of" and "provider of" cloud-based energy resources may be very important in creating a world-wide ecosystem of GCC. A user simply submits a service request to the cloud service provider over the Internet or wired/wireless networks, and the result of the requested service is delivered back to the user in time, while information storage and processing, interoperating protocols, service composition, communications, and distributed computing are all handled smoothly across the networks. This paper is a survey of green cloud computing schemes based on networks. We first introduce the concept and history of green computing, and then focus on the challenges and requirements of cloud computing. Cloud computing needs to become green, meaning that cloud services should be provisioned while keeping energy consumption within a set of energy consumption criteria; this is called GCC. We then introduce recent work done in GCC based on networks, covering microprocessors, task scheduling algorithms, virtualization technology, cooling systems, networks, and disk storage. After that, we present the work on GCC from our research group at Georgia State University. Finally, we give the conclusion and some future work.

Keywords: Green Cloud Computing, Networks, Optimization, Quality of Service

1 Introduction

Green Cloud Computing (GCC) is a broad and rapidly evolving field. The distinction between "consumer of" and "provider of" cloud-based energy resources may be very important in creating a world-wide ecosystem of GCC. A user simply submits a service request to the cloud service provider over the Internet or wired/wireless networks, and the result of the requested service is delivered back to the user in time, while information storage and processing, interoperating protocols, service composition, communications, and distributed computing are all handled smoothly across the networks. In this section, we first introduce the concept and history of green computing, and then focus on the challenges and requirements of cloud computing. Cloud computing needs to become green, meaning that cloud services should be provisioned while keeping energy consumption within a set of energy consumption criteria; this is called GCC.

1.1 Green computing: concept and history

Green computing, or green IT, refers to environmentally sustainable computing or IT. San Murugesan defines the field of green computing as "the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems - such as monitors, printers, storage devices, and networking and communications systems - efficiently and effectively with minimal or no impact on the environment" [1]. Modern IT systems are complicated because they rely on many factors such as applications and software, people, networks, and hardware. A solution may also need to address end-user satisfaction, management restructuring, regulatory compliance, and return on investment (ROI). The main goals of green computing are to reduce the use of hazardous materials, maximize energy efficiency during a product's lifetime, and promote the recyclability or biodegradability of defunct products and factory waste. These goals can be attained by making the use of computers as energy-efficient as possible and by designing algorithms and systems for efficiency-related computer technologies.

In 1992, the U.S. Environmental Protection Agency (EPA) launched a voluntary labeling program named Energy Star, designed to promote energy-efficient technologies in monitors, climate control equipment, and so on. The term "green computing" was probably coined shortly after the Energy Star program began. Concurrently, the Swedish organization TCO Development launched the TCO Certification program. At first, this program promoted low magnetic and electrical emissions from CRT-based computer displays; it was later expanded to include criteria on energy consumption, ergonomics, and the use of hazardous materials in construction.

1.2 Cloud computing: challenges and requirements

"If computers of the kind I have advocated become the computers of the future, then computing may someday be organized as a public utility just as the telephone system is a public utility. The computer utility could become the basis of a new and important industry." So said John McCarthy, who received the Turing Award in 1971 for his major contributions to the field of Artificial Intelligence (AI), at the MIT Centennial in 1961. This vision anticipates computing utilities based on a service provisioning model, in which computing services are readily available on demand, like other utility services (e.g., electricity) in today's society. Users pay only when they access the computing services, and no longer need to build and maintain complex IT infrastructure.

Fig. 1 Cloud computing as a service

Fig. 1 shows cloud computing based on networks delivered as a service. This model is called utility computing, or more recently Cloud computing or Clouds [3]. Cloud computing delivers infrastructure, platform, and software as services under the pay-as-you-go model; these offerings provide both physical and virtualized cloud resources and are referred to as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), respectively. Cloud computing has three deployment models: public clouds, hybrid clouds, and private clouds, all connected by networks. In Clouds, businesses and users access services based on their requirements from anywhere in the world, without regard to where the services are hosted.

Many computing service providers, including Microsoft, Yahoo, Google, and IBM, are rapidly deploying data centers in various locations around the world to deliver cloud computing services. These data centers host a variety of applications on shared hardware platforms, including web applications whose requests run for only a few seconds, large data set processing jobs that run for longer periods, distributed databases that need real-time response, and Internet banking that requires security guarantees. The need to manage multiple applications in a data center creates the challenge of on-demand resource provisioning and allocation in response to time-varying workloads. To maintain isolation and provide performance guarantees, high performance has been the sole concern in data center deployments, and data center resources are statically allocated to applications based on peak load, without paying much attention to energy consumption.

A large data center may require many megawatts of electricity, enough to power thousands of homes [4]. Organizations such as Google, Microsoft, Amazon, Yahoo!, and many other operators of large networked systems cannot ignore their energy costs. A back-of-the-envelope calculation for Google suggests it consumes more than $38M worth of electricity annually; a modest 3% reduction (roughly $1.1M) would therefore exceed a million dollars every year [27]. Cloud service providers are taking measures to ensure that their profit margins are not dramatically reduced by high energy costs. For instance, Google, Microsoft, and Yahoo are building large data centers on barren desert land surrounding the Columbia River, USA, to exploit cheap and reliable hydroelectric power [5]. As energy costs increase while availability dwindles, we cannot focus only on optimizing data center resource management for pure performance; we must also optimize energy efficiency.

Besides their expensive maintenance costs, data centers are unfriendly to the environment. Data centers are made of hundreds of thousands of servers, which lead to high energy costs and a huge carbon footprint for powering and cooling. Data centers now produce more carbon emissions than both Argentina and the Netherlands [4]. Since carbon footprints have a significant impact on climate change, governments worldwide are concerned with reducing them. For example, the Japanese government has established the Japan Data Center Council to address the soaring energy consumption of data centers [6]. Leading computing service providers have also recently formed a global consortium known as The Green Grid [7] to promote energy efficiency for data centers and minimize their environmental impact. Meanwhile, the pervasive demands of users are growing so quickly that larger servers and disks and more powerful chips are needed to process them within the required time period.
The increasing interaction between front-end client devices and back-end data centers in cloud computing will cause an enormous escalation of energy usage. Lowering the energy usage of data centers while meeting service provision guarantees is the big challenge and requirement that cloud computing faces, and meeting it is essential to ensure that the future growth of cloud computing is sustainable. Future data center resources need to be managed in an energy-efficient manner: cloud resources must be allocated not only to satisfy Quality of Service (QoS) requirements specified by users via Service Level Agreements (SLAs), but also to meet a criterion of energy usage.

 


2 Green Cloud Computing

The future of cloud computing must be green; here we call it Green Cloud Computing. GCC is envisioned to achieve not only high-performance processing and utilization of the computing infrastructure, but also to operate under a set of criteria that limit energy consumption. The Green Grid is developing metrics to measure data center productivity as well as efficiency metrics for all major power-consuming subsystems in the datacenter. In 2007, the Green Grid proposed the use of the Power Usage Effectiveness (PUE) metric and its reciprocal, the Datacenter Efficiency (DCE) metric, which enable datacenter operators to quickly estimate the energy efficiency of their datacenters, compare the results against other datacenters, and determine whether any energy efficiency improvements need to be made [29]. Later, in 2008, they redefined DCE as datacenter infrastructure efficiency (DCiE) [30]. PUE and DCiE have now received broad adoption in the industry, at companies including AMD, APC, Dell, HP, IBM, Intel, and Microsoft, to name a few.

It is reported that most of the power consumption in datacenters comes from computation processing, disk storage, networks, and cooling systems. New technologies and methods have been proposed to reduce energy costs in each of these areas; the following subsections introduce recent work done in these fields.
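For reference, PUE and DCiE are defined in the Green Grid white papers [29, 30] as:

$$\mathrm{PUE} = \frac{P_{\mathrm{total\ facility}}}{P_{\mathrm{IT\ equipment}}}, \qquad \mathrm{DCiE} = \frac{1}{\mathrm{PUE}} = \frac{P_{\mathrm{IT\ equipment}}}{P_{\mathrm{total\ facility}}} \times 100\%$$

A PUE of 2.0, for example, means that for every watt delivered to IT equipment another watt is spent on cooling, power distribution, and other facility overhead; values approaching 1.0 indicate a more efficient facility.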

2.1 Computation processing, microprocessors

It is well known that microprocessors play a key role in computation processing. As the dimensions of the transistors in microprocessors shrink, leakage currents come to consume more power than the actual computational processes. In the late 2000s, new materials were introduced to reduce this burden and so cut down the energy consumption of datacenters. Most notably, the replacement of the SiO2 gate oxide, which is only a few atomic layers thick, with a physically thicker layer of a hafnium-based oxide enabled an appreciable reduction of the gate tunneling currents while maintaining the electrical performance of the transistor [8-9]. For the newest members of microprocessor families, sophisticated circuit architectures have been introduced that allow both the power associated with computational processes and the leakage power to be adapted [10-11]. These innovations, under which the microprocessor frequency can be adjusted and circuit blocks can be temporarily turned off completely when not in use, lead to energy savings for computational loads that come in bursts or that are bound by memory latency or I/O operations. G. Semeraro et al. [31] describe a Multiple Clock Domain (MCD) processor, in which the chip is divided into four coarse-grained clock domains, corresponding to the front end (including the L1 instruction cache), integer units, floating point units, and load-store units (including the L1 data cache and L2 cache), within which independent voltage and frequency scaling can be performed. Compared with traditional singly-clocked, globally synchronous systems, the MCD processor with dynamic voltage and frequency scaling (DVFS) can lower energy consumption considerably.
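To make the DVFS saving concrete, the sketch below uses the classic CMOS dynamic power model P = a * C * V^2 * f. The capacitance, voltage, and frequency values are illustrative assumptions, not figures from [31]:

```python
def dynamic_power(c_eff, voltage, freq, activity=1.0):
    """Classic CMOS dynamic power model: P = a * C * V^2 * f (watts)."""
    return activity * c_eff * voltage ** 2 * freq

# A memory-bound task needs 2e9 cycles and still meets its 2 s deadline
# at half frequency, so the core can run slower at a lower voltage.
full   = dynamic_power(c_eff=1e-9, voltage=1.2, freq=2.0e9)  # ~2.88 W
scaled = dynamic_power(c_eff=1e-9, voltage=0.9, freq=1.0e9)  # ~0.81 W

# Energy = power * time; the scaled run takes twice as long, yet uses
# far less energy because power falls roughly with V^2 * f.
energy_full   = full * 1.0    # 2e9 cycles / 2 GHz = 1 s -> ~2.88 J
energy_scaled = scaled * 2.0  # 2e9 cycles / 1 GHz = 2 s -> ~1.62 J
```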

2.2 Computation processing, task scheduling

Much work has been done on task scheduling in data centers to reduce the power consumption of computation processing. Thermal-aware resource management for data centers has recently attracted much research interest from the high-performance computing community. Computational fluid dynamics (CFD) models [12] may be the most elaborate thermal-aware scheduling approach for tasks in data centers; for example, [12] presents a detailed 3-dimensional CFD-based thermal modeling tool, called ThermoStat, for rack-mounted server systems. Because CFD-based models are too complex for online scheduling, researchers have developed several less complex online scheduling algorithms. For example, Q. Tang et al. [14] show through formalization that minimizing the peak inlet temperature allows for the lowest cooling power needs. Using a low-complexity, linear heat recirculation model, they define the problem of minimizing the peak inlet temperature within a data center through task assignment (MPIT-TA), consequently leading to minimal cooling requirements. J. D. Moore et al. [15] develop an alternative approach, which leverages the non-intuitive observation that the source of cooling inefficiencies can often be in locations spatially uncorrelated with their manifested consequences, providing additional energy savings. Other examples include the sensor-based fast thermal evaluation model [16-17], genetic algorithm and quadratic programming approaches [14, 18], and Weatherman, an automated online predictive thermal mapping tool [19].
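To illustrate the flavor of these schedulers, here is a toy greedy placement in the spirit of MPIT-TA [14]. It assumes a known linear heat-recirculation matrix D, so that inlet temperatures are T_sup + D * p for a power vector p; the actual paper formulates the assignment as an optimization problem rather than this greedy loop:

```python
import numpy as np

def greedy_thermal_assign(task_powers, D, t_supply):
    """Greedily place each task on the server whose added heat keeps
    the peak inlet temperature lowest (toy version of MPIT-TA [14]).

    task_powers : power draw of each task (watts)
    D           : n x n heat-recirculation matrix (D[i][j] = inlet-
                  temperature rise at server i per watt at server j)
    t_supply    : CRAC supply air temperature (degrees C)
    """
    n = D.shape[0]
    load = np.zeros(n)          # power currently placed on each server
    placement = []
    for p in sorted(task_powers, reverse=True):  # big tasks first
        best_j, best_peak = None, float("inf")
        for j in range(n):
            trial = load.copy()
            trial[j] += p
            peak = (t_supply + D @ trial).max()  # linear recirculation model
            if peak < best_peak:
                best_j, best_peak = j, peak
        load[best_j] += p
        placement.append(best_j)
    return placement, t_supply + D @ load
```

Keeping the peak inlet temperature low lets the CRAC supply warmer air, which is where the cooling-power saving comes from.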

2.3 Computation processing, virtualization

Virtualization of computer resources [20] is one of the key technologies in cloud computing. Traditionally, an organization purchases its own computing resources and deals with the maintenance and upgrading of outdated hardware, resulting in additional expenses. Virtualization technology allows one to create several Virtual Machines (VMs) on a physical server, thereby reducing the amount of hardware in use and improving the utilization of resources. Organizations can outsource their computation needs to the cloud, eliminating the need to maintain their own computing infrastructure.

A VirtualPower approach has been proposed by Nathuji and Schwan [21] for online power management, to support the isolated and independent operation assumed by guest virtual machines running on virtualized platforms and to make it possible to control and globally coordinate the effects of the diverse power management policies these VMs apply to virtualized resources. Resource management is divided into local and global policies: on the local level, the system leverages the guest operating system's power management strategies, while consolidation of VMs is handled by global policies that apply live migration to reallocate VMs. Kusic et al. [22] implement and validate a dynamic resource provisioning framework for virtualized server environments, in which the provisioning problem is posed as one of sequential optimization under uncertainty and solved using a look-ahead control (LLC) scheme; the proposed model requires simulation-based learning for application-specific adjustments.

Consolidation of applications in cloud computing environments presents a significant opportunity for energy optimization. After studying the inter-relationships between energy consumption, resource utilization, and the performance of consolidated workloads, Srikantaiah et al. [23] propose a heuristic for the multidimensional bin-packing problem as an algorithm for scheduling requests of multi-tiered web applications in virtualized heterogeneous systems, in order to minimize energy consumption while meeting performance requirements. Song et al. [24] propose a multi-tiered resource scheduling scheme that automatically provides on-demand capacities to the hosted services via resources flowing among VMs. Resources are allocated to applications according to their priorities in a multi-application virtualized cluster, so as to optimize resource allocation among services in the data center.
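As a flavor of consolidation-by-bin-packing, the sketch below packs VM demand vectors onto as few hosts as possible with first-fit decreasing. It is not the heuristic of Srikantaiah et al. [23], which additionally models the energy/performance sweet spot of utilization; the two-dimensional (cpu, mem) demand model and all names are assumptions for illustration:

```python
def first_fit_decreasing(vms, capacity):
    """Toy multidimensional first-fit-decreasing consolidation: pack VMs
    (cpu, mem demands as fractions of a host) onto as few hosts as possible
    so idle hosts can be powered down."""
    hosts = []  # each host tracks remaining (cpu, mem) capacity
    for cpu, mem in sorted(vms, key=lambda v: v[0] + v[1], reverse=True):
        for host in hosts:  # first host with room wins
            if host["cpu"] >= cpu and host["mem"] >= mem:
                host["cpu"] -= cpu
                host["mem"] -= mem
                host["vms"].append((cpu, mem))
                break
        else:  # no existing host fits: open a new one
            hosts.append({"cpu": capacity[0] - cpu,
                          "mem": capacity[1] - mem,
                          "vms": [(cpu, mem)]})
    return hosts

# Three VMs fit on two hosts; the third physical server can sleep.
packed = first_fit_decreasing([(0.4, 0.3), (0.5, 0.6), (0.2, 0.1)],
                              capacity=(1.0, 1.0))
```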

2.4 Cooling systems  


Data center power consumption and cooling are two of the biggest energy issues confronting IT organizations today. Cooling systems consume nearly half of the electrical energy of data centers [25]. Traditionally, the cooling infrastructure of data centers removes heat by forced circulation of large amounts of chilled air. Now, it is reported that scientists are trying to use chilled-liquid cooling in high-end mainframes and densely packed servers to cope with the high heat fluxes. Microchannel heat sinks can be designed such that the thermal resistance between the transistor and the fluid is reduced to the extent that even cooling water temperatures of 60 to 70 °C ensure no overheating of the microprocessors [26]. With such hot-water cooling, chillers are no longer required year-round, meaning that data-center energy consumption can be reduced by up to 50%. More attractively, direct utilization of the collected thermal energy becomes feasible, either through synergies with district heating or in specific industrial applications. With such an appealing waste-heat recovery system, the environmental credentials of data centers would improve substantially [30].

2.5 Network

Over the past years, both energy costs and the electrical requirements of networks have grown continuously, and, together with a more widespread sensitivity to ecological issues, this has aroused interest in energy-efficient networking. Traditionally, networks, links, and devices are provisioned for peak load in order to meet high performance requirements; peak load typically exceeds average utilization by a wide margin, resulting in substantial energy waste. Today, telecoms, network equipment manufacturers, and the networking research community are mainly pursuing criteria and technologies that can dynamically adapt network capacities and resources to current traffic loads and requirements in order to save energy. For today's network equipment, Raffaele Bolla et al. [28] explore and evaluate the feasibility and impact of power management policies that suit a heterogeneous set of highly modular architectures; the proposed policies aim at optimizing the power consumption of each device component with respect to its expected network performance. The conventional approach to reducing energy costs has been to reduce the amount of energy consumed. Asfandyar Qureshi et al. [27] analyze a different method for reducing the energy costs of running large Internet-scale systems. They found that electricity prices exhibit both temporal and geographic variation, due to regional demand differences, transmission inefficiencies, and generation diversity. They characterize this variation in fluctuating electricity prices and argue that existing distributed systems should be able to exploit it for significant economic gains.
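A toy version of the price-aware idea in [27] is sketched below: route each request to the datacenter with the cheapest current electricity price among those that still meet the client's latency budget. The field names, prices, and latencies, and the simplifications (no capacity limits, no bandwidth costs, no 95th-percentile billing), are assumptions for illustration:

```python
def cheapest_feasible(datacenters, client_region, max_latency_ms):
    """Pick the datacenter with the lowest current electricity price
    among those within the latency budget for this client region."""
    feasible = [dc for dc in datacenters
                if dc["latency_ms"][client_region] <= max_latency_ms]
    if not feasible:
        raise ValueError("no datacenter meets the latency budget")
    return min(feasible, key=lambda dc: dc["price_usd_per_mwh"])

# Hypothetical spot prices and latencies for three sites:
sites = [
    {"name": "oregon",   "price_usd_per_mwh": 31.0, "latency_ms": {"us-east": 80}},
    {"name": "virginia", "price_usd_per_mwh": 55.0, "latency_ms": {"us-east": 10}},
    {"name": "texas",    "price_usd_per_mwh": 42.0, "latency_ms": {"us-east": 45}},
]
best = cheapest_feasible(sites, "us-east", max_latency_ms=50)  # -> texas
```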

2.6 Disk storage

Disk storage is vital for green cloud computing networks. Storage is one of the biggest consumers of energy among the various components of a data center; a recent industry report [32] states that storage devices account for almost 27% of the total energy consumed by a data center. In a recent study, Fusion-io, the manufacturer of the world's fastest solid-state storage devices, managed to reduce the carbon footprint and operating costs of MySpace data centers by 80% [33], while increasing performance beyond what had been attainable via multiple hard disk drives in RAID 0. As a result, MySpace was able to permanently retire several of its servers, including all of its heavy-load servers, further reducing its carbon footprint.

Smaller form-factor hard disk drives often consume less power per gigabyte than physically larger ones. Unlike hard disk drives, solid-state drives store data in flash memory or DRAM; with no moving parts, power consumption may be decreased, particularly for low-capacity flash-based devices. As hard drive prices fall, storage centers tend to extend their capacity to make more data available online; this covers most backup data, which formerly was saved on tape or other offline storage. This expansion of online storage can increase power consumption. Decreasing the power consumed by large storage arrays, while still preserving the benefits of online storage, is a subject of ongoing research.
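As a back-of-the-envelope illustration of the per-drive trade-off, the sketch below blends active and idle power over a duty cycle; the wattages are ballpark assumptions, not measurements from [32] or [33]:

```python
def drive_energy_kwh(active_w, idle_w, duty_cycle, hours=8760):
    """Estimate one drive's annual energy (kWh) from a simple two-state
    power model: active for duty_cycle of the time, idle otherwise."""
    avg_w = duty_cycle * active_w + (1 - duty_cycle) * idle_w
    return avg_w * hours / 1000.0

# Ballpark figures: a 3.5" HDD vs. a flash SSD at 30% activity.
hdd_kwh = drive_energy_kwh(active_w=9.0, idle_w=6.0, duty_cycle=0.3)  # ~60 kWh
ssd_kwh = drive_energy_kwh(active_w=3.0, idle_w=0.5, duty_cycle=0.3)  # ~11 kWh
```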

3 Green Cloud Computing at GSU

Here we focus on our work at Georgia State University (GSU), which is supported by IBM. The work has two aspects: one is designing a multi-cloud computing scheme for sharing limited computing resources to satisfy a mass of cloud user requests; the other is using failure detection to effectively assign users' requests to different servers and to make sure the servers can complete the computation on time.

3.1 Sharing limited resources

In this section, we focus on an IBM cloud computing platform at GSU; based on this model, we design a feedback control scheme for sharing limited computing resources to satisfy a mass of cloud user requests. On the one hand, new users do not require the purchase of new servers; they simply use the cloud computing platform, which saves a great deal of energy. On the other hand, in the university there is a mass of cloud user requests, which tend to increase, while the limited server resources are not always sufficient. We should try to use the limited resources to serve as many users as possible, up to the resources' maximum capacity. This reduces power consumption to the minimum while still supporting the users' benefits.

We first analyze the system model for computing labs [34]. The goal is to understand the full costs of student labs and how the performance-cost model may be improved by using the Apache.org Virtual Computing Lab (VCL) [35], an open-source incubator project being developed for implementing cloud computing.

Fig. 2 North Carolina State University VCL model [34]

 


Fig. 2 shows the North Carolina State University VCL model [34]: users and applications connect to VCL managers, which in turn connect to servers. The VCL model suggests an option where student labs run with thin clients or reduced numbers of seats, rather than being outfitted with the standard "fully loaded" PCs that are typically configured to serve students who lack access to applications and systems except via a physical lab PC. We describe a system model for computing lab costs. A VCL approach provides an alternative to computing labs, whereby a user web interface and access to an application are brokered by the VCL manager node, which offers up computing cycles on available servers. Our system model represents relevant parameters of student computing lab costs, including factors such as the number of seats, hardware platforms, software suites, lab space, hours of availability, and other associated costs. This lab cost model is used to analyze the alternative costing enabled by a virtual computing implementation of student labs. We summarize 9 years of data on university Student Tech Fee funding (about $4 million per year) and present an analysis of that funding, with particular focus on using the lab cost model to simulate and evaluate VCL return on investment in university student lab implementations.

Our model is based on a conceptual view of users and their applications that require computing cycles. A traditional lab tends to implement a single-person-per-seat computer environment where students physically attend specific lab locations, select a dedicated computer platform, and have exclusive use of that platform for a period of time. Such a computer platform, while connected in a shared mode to a network for web access and perhaps print functions, nonetheless functions as a local, private, dedicated platform. Such dedicated-platform solutions are configured to accommodate expected usage profiles, where the number and cost of computer stations are determined by some estimated concurrent usage of application suites running under explicit operating system images. Such configurations are often managed per lab (rather than per user) and offer general, one-size-fits-all images. Further challenges are maintaining currency of hardware performance and warranties, with recurring costs for upgrades on 3- or 4-year cycles, and providing appropriate operating system and application versions. We have 9 years of student technology fee data related to proposals for and funding of technology requests, showing that hardware and software costs account for about 33% and 62%, respectively, of an annual $4 million in funding. An analysis of hardware, software, personnel, and other costs is presented and used as input to the system model for computing labs. Simulation of this model demonstrates the potential value of VCL in achieving improved performance-cost for student computing labs and demonstrates the benefit of VCL in reducing costs and improving return on investment.

We then design control schemes for sharing computing resources [34]. Multi-cloud computing is becoming increasingly important, with significant effort devoted to solutions for sharing cloud resources to satisfy large numbers of applications. Here we first propose a system model for a multi-cloud environment, which we define as a network of clouds that may interoperate to serve their individual user bases in each local intra-cloud. Local clouds may share services with other clouds in order to load-balance or meet peak demands.
Using this model, we propose an effective proportional-integral feedback control scheme, based on control theory, to share limited resources among multiple users wanting fast response and/or high utilization of cloud resources. Finally, we provide a theoretical analysis of the system's stability and give guidelines [34] for selecting feedback control parameters that stabilize resource utilization at a desirable target level. Simulations demonstrate that the proposed scheme can be an effective multi-cloud computing controller, ensuring fast response and high resource utilization.
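A minimal sketch of such a controller is shown below, assuming a simple discrete-time loop in which the controller output adjusts the resource allocation each period; the gains kp and ki are placeholders, and [34] gives the actual stability-based guidelines for choosing them:

```python
class PIController:
    """Proportional-integral controller steering measured resource
    utilization toward a target level (a sketch, not the scheme of [34])."""

    def __init__(self, kp, ki, target_util):
        self.kp = kp
        self.ki = ki
        self.target = target_util
        self.integral = 0.0

    def step(self, measured_util, dt=1.0):
        """Return an allocation adjustment: positive means release
        resources (utilization below target), negative means add them."""
        error = self.target - measured_util
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral

# Each control period, shrink or grow the share of servers granted to a
# local cloud so that utilization settles near 70%.
ctrl = PIController(kp=0.5, ki=0.1, target_util=0.70)
adjustment = ctrl.step(measured_util=0.55)  # positive: reclaim servers
```

The integral term removes steady-state error, while the proportional term provides fast response; the stability analysis in [34] bounds how large the gains can be before utilization oscillates.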

 


3.2 Failure detection in cloud computing

Failure detection has become increasingly important for cloud computing as a solution for serving dynamically scalable cloud networks. Services in cloud computing networks may be virtualized onto specific servers whose details are abstracted away. Some of the servers are active and available, others are busy or heavily loaded, and the rest are offline for various reasons. Users expect the right, available servers to complete their application requests. Therefore, in order to provide an effective control scheme and parameter guidance, failure detection is essential to meet users' service expectations; it can resolve possible performance bottlenecks in providing virtual services for cloud computing networks. With the service provided by failure detection, users and managers can effectively assign user requests and complete them on time. This improves the service efficiency of cloud computing and saves a great deal of power, while still providing service to the users.

Tuning Adaptive Margin Failure Detector (TAM FD) [36]: The authors first compare the QoS metrics of several adaptive failure detectors (FDs), discuss their properties and relations, and then propose an optimization over the existing methods, called TAM FD. This scheme significantly improves QoS when the network is unstable, especially in the aggressive range. They then address the problem that most adaptive schemes require a large window of samples: they analyze the impact of memory size on the performance of FDs and show that the presented scheme is designed to use a very limited amount of memory in a distributed system. Their experimental results over several kinds of networks (cluster, WiFi, LAN, intercontinental WAN) exhibit the properties of the existing adaptive failure detectors and demonstrate that the presented optimization is reasonable and acceptable. Finally, extensive experimental results show the effect of memory size on the overall QoS of each adaptive failure detector; for TAM FD, the effect of window size on QoS is very small and can be neglected.

Exponential Distribution Failure Detector (ED FD) [37]: This paper first presents a general traffic-feature analysis scheme for optimizing existing FDs in fault-tolerant wired and wireless networks. Based on this general method, the authors propose a novel ED FD, which models the probability distribution of heartbeat arrival intervals as an exponential distribution rather than the traditional normal distribution. In addition, they prove that ED FD belongs to the class ◊Pac (satisfying the accruement property and the upper-bound property), and analyze the impact of history-track-record length on the performance of ED FD. Finally, extensive experiments were carried out in several kinds of networks (a wireless network and seven representative WAN cases). The experiments demonstrate that the impact of memory usage on the overall QoS is very small and can be neglected, and that the proposed ED FD outperforms existing FDs in terms of short detection time, low mistake rate, high query-accuracy probability, and good scalability. Thus, the presented general traffic-feature analysis is effective.
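To make the accrual-style idea concrete, here is a toy detector that assumes exponentially distributed heartbeat inter-arrival times, loosely in the spirit of ED FD [37]; the window size, suspicion scale, and exact estimator are illustrative assumptions, not the paper's definitions:

```python
import math
from collections import deque

class ExponentialFD:
    """Toy accrual failure detector: suspicion grows as the time since
    the last heartbeat becomes improbable under an exponential model.
    A small sliding window suffices, matching the observation in
    [36, 37] that longer histories add little QoS."""

    def __init__(self, window=100):
        self.intervals = deque(maxlen=window)
        self.last_arrival = None

    def heartbeat(self, now):
        """Record a heartbeat arrival time (seconds)."""
        if self.last_arrival is not None:
            self.intervals.append(now - self.last_arrival)
        self.last_arrival = now

    def suspicion(self, now):
        """phi-style suspicion: -log10 P(next interval > elapsed)."""
        if not self.intervals or self.last_arrival is None:
            return 0.0
        mean = sum(self.intervals) / len(self.intervals)
        elapsed = now - self.last_arrival
        p_later = math.exp(-elapsed / mean)  # exponential tail probability
        return -math.log10(max(p_later, 1e-12))
```

A process is suspected once suspicion(now) crosses a threshold; raising the threshold trades longer detection time for fewer wrong suspicions, which is exactly the QoS trade-off these FD papers tune.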
Self-tuning Failure Detector (SFD) [38]: Most existing failure detector schemes do not automatically adjust their detection service parameters to dynamic network environments, and thus cannot readily be used in real applications. Given that networks are dynamic and unpredictable, the authors explore FD properties in relation to actual, self-tuning fault-tolerant cloud computing networks, and find a general automatic analysis method that self-tunes the corresponding parameters to satisfy user requirements. Based on this general non-manual method, they propose a specific, dynamic SFD as a major advance over existing schemes. They carry out actual and extensive experiments comparing the QoS performance of SFD with several other existing FDs. The experiments demonstrate that the scheme can automatically adjust the SFD control parameters to obtain the corresponding services and satisfy users' requirements, while also maintaining good performance. SFD can be widely applied in industrial and commercial settings, and can significantly benefit cloud computing networks.

4 Conclusion

This paper is a survey of GCC based on networks. We first introduced the concept and history of green computing and the challenges and requirements of cloud computing. We then reviewed recent research on GCC based on networks, covering microprocessors, task scheduling algorithms, virtualization technology, cooling systems, networks, and disk storage. After that, we presented the work on GCC from our research group at Georgia State University. Our future work will focus on the engineering implementation of the above control scheme in a multi-cloud computing environment that is being developed by several universities as part of the VCL open-source community. We observe, finally, that our proposed model can be expanded to address additional parameters related to other aspects of multi-cloud computing, including, for instance, the level of computing security that user applications may require, or even storage availability or dependability. We also intend to deploy our failure detection schemes on the IBM cloud computing platform, where some practical problems must be solved before the schemes can be used in everyday practice.

References

[1] S. Murugesan, "Harnessing Green IT: Principles and Practices," IEEE IT Professional, Jan.-Feb. 2008, pp. 24-33.
[2] L. Kleinrock, "A Vision for the Internet," ST Journal of Research, 2(1):4-5, Nov. 2005.
[3] A. Weiss, "Computing in the Clouds," netWorker, 11(4):16-25, ACM Press, New York, USA, Dec. 2007.
[4] R. H. Katz, "Tech Titans Building Boom," IEEE Spectrum, Feb. 2009.
[5] J. Markoff and S. Hansell, "Hiding in Plain Sight, Google Seeks More Power," New York Times, June 14, 2006.
[6] Ministry of Economy, Trade and Industry, Government of Japan, "Establishment of the Japan Data Center Council," Press Release, 4 Dec. 2008.
[7] The Green Grid Consortium, http://www.thegreengrid.org (accessed Feb. 26, 2010).
[8] K. Mistry et al., IEEE IEDM 2007 Tech. Digest, 10.2, 2007.
[9] M. Chudzik et al., IEEE VLSI 2007 Tech. Digest, 11A-1, 2007.
[10] N. A. Kurd et al., IEEE ISSCC 2010 Tech. Digest, 5.1, 2010.
[11] M. Ware et al., IEEE HPCA 2010 Tech. Digest, 6.4, 2010.
[12] J. Choi, Y. Kim, A. Sivasubramaniam, J. Srebric, Q. Wang, and J. Lee, "A CFD-Based Tool for Studying Temperature in Rack-Mounted Servers," IEEE Trans. Comput., vol. 57, no. 8, pp. 1129-1142, 2008.
[13] A. H. Beitelmal and C. D. Patel, "Thermo-Fluids Provisioning of a High Performance High Density Data Center," Distributed and Parallel Databases, vol. 21, no. 2-3, pp. 227-238, 2007.
[14] Q. Tang, S. K. S. Gupta, and G. Varsamopoulos, "Energy-Efficient Thermal-Aware Task Scheduling for Homogeneous High-Performance Computing Data Centers: A Cyber-Physical Approach," IEEE Trans. Parallel Distrib. Syst., vol. 19, no. 11, pp. 1458-1472, 2008.
[15] J. D. Moore, J. S. Chase, P. Ranganathan, and R. K. Sharma, "Making Scheduling 'Cool': Temperature-Aware Workload Placement in Data Centers," in USENIX Annual Technical Conference, General Track, USENIX, 2005, pp. 61-75.
[16] Q. Tang, T. Mukherjee, S. K. S. Gupta, and P. Cayton, "Sensor-Based Fast Thermal Evaluation Model for Energy Efficient High-Performance Datacenters," in Proc. Fourth International Conference on Intelligent Sensing and Information Processing, Oct. 2006, pp. 203-208.
[17] Q. Tang, S. K. S. Gupta, and G. Varsamopoulos, "Thermal-Aware Task Scheduling for Data Centers through Minimizing Heat Recirculation," in CLUSTER, 2007, pp. 129-138.
[18] T. Mukherjee, Q. Tang, C. Ziesman, S. K. S. Gupta, and P. Cayton, "Software Architecture for Dynamic Thermal Management in Datacenters," in COMSWARE, 2007.
[19] J. Moore, J. Chase, and P. Ranganathan, "Weatherman: Automated, Online, and Predictive Thermal Mapping and Management for Data Centers," in Proc. Third IEEE International Conference on Autonomic Computing, IEEE Computer Society, Los Alamitos, CA, USA, June 2006, pp. 155-164.
[20] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A. Warfield, "Xen and the Art of Virtualization," in Proc. 19th ACM Symposium on Operating Systems Principles, 2003.
[21] R. Nathuji and K. Schwan, "VirtualPower: Coordinated Power Management in Virtualized Enterprise Systems," ACM SIGOPS Operating Systems Review, 41(6):265-278, 2007.
[22] D. Kusic, J. O. Kephart, J. E. Hanson, N. Kandasamy, and G. Jiang, "Power and Performance Management of Virtualized Computing Environments via Lookahead Control," Cluster Computing, 12(1):1-15, 2009.
[23] S. Srikantaiah, A. Kansal, and F. Zhao, "Energy Aware Consolidation for Cloud Computing," Cluster Computing, 12:1-15, 2009.
[24] Y. Song, H. Wang, Y. Li, B. Feng, and Y. Sun, "Multi-Tiered On-Demand Resource Scheduling for VM-Based Data Center," in Proc. 9th IEEE/ACM International Symposium on Cluster Computing and the Grid, 2009, pp. 148-155.
[25] R. Sawyer, "Calculating Total Power Requirements for Data Centers," American Power Conversion, Tech. Rep., 2004.
[26] G. I. Meijer, "Cooling Energy-Hungry Data Centers," Science, vol. 328, 16 April 2010.
[27] A. Qureshi, R. Weber, H. Balakrishnan, J. Guttag, and B. Maggs, "Cutting the Electric Bill for Internet-Scale Systems," in SIGCOMM'09, Barcelona, Spain, Aug. 17-21, 2009.
[28] R. Bolla, R. Bruschi, F. Davoli, and A. Ranieri, "Energy-Aware Performance Optimization for Next-Generation Green Network Equipment," in PRESTO'09, Barcelona, Spain, Aug. 21, 2009.
[29] The Green Grid technical committee, "Green Grid Metrics: Describing Data Center Power Efficiency," White paper, The Green Grid, Feb. 2007.
[30] C. Belady, A. Rawson, J. Pfleuger, and T. Cader, "Green Grid Data Center Power Efficiency Metrics: PUE and DCiE," White paper, The Green Grid, 2008.

 


[31] G. Semeraro, G. Magklis, R. Balasubramonian, D. H. Albonesi, S. Dwarkadas, and M. L. Scott, "Energy-Efficient Processor Design Using Multiple Clock Domains with Dynamic Voltage and Frequency Scaling," in Proc. 8th International Symposium on High-Performance Computer Architecture, 2002.
[32] "Power, Heat, and Sledgehammer," White paper, Maximum Institution Inc., http://www.max-t.com/downloads/whitepapers/SledgehammerPowerHeat20411.pdf, 2002.
[33] http://www.scribd.com/doc/28697765/Green-Computing
[34] N. Xiong, A. Vandenberg, M. L. Russell, and K. P. Robinson, "A Multi-Cloud Computing Scheme for Sharing Computing Resources to Satisfy Local Cloud User Requirements," to appear in International Journal of Cloud Computing (IJCC), 2011.
[35] http://incubator.apache.org/projects/vcl.html
[36] N. Xiong, A. V. Vasilakos, L. T. Yang, L. Song, Y. Pan, R. Kannan, and Y. Li, "Comparative Analysis of Quality of Service and Memory Usage for Adaptive Failure Detectors in Healthcare Systems," IEEE Journal on Selected Areas in Communications, 27(4):495-509, May 2009.
[37] N. Xiong, A. V. Vasilakos, Y. R. Yang, et al., "An Effective Failure Detector Based on General Traffic-Feature Analysis in Fault-Tolerant Networks," Technical Report.
[38] N. Xiong, A. V. Vasilakos, Y. R. Yang, et al., "A Class of Practical Self-Tuning Failure Detection Schemes for Cloud Communication Networks," Technical Report.

 
