International Conference on Emerging Trends in Computer and Electronics Engineering (ICETCEE'2012) March 24-25, 2012 Dubai

Two-Phase Provisioning for HPC Tasks in Virtualized Datacenters

Nawfal A. Mehdi, Hussam Ali, Ali Amer, Ziyad T. Abdul-Mehdi

Abstract—This paper deals with running the lowest possible number of machines in a virtualized datacenter while achieving the least completion time and efficient consumption of electricity. Cloud computing is based on offering computation services that are executed in datacenters. These datacenters require a huge amount of power when they are at peak load or when tasks are not distributed efficiently across their machines. Reducing power consumption has become an essential requirement for cloud resource providers, not only to decrease operating costs but also to improve system reliability. This paper addresses the issue of minimizing power consumption in datacenter servers while simultaneously improving their load balancing. An algorithm for task scheduling in virtualized datacenters is presented that works in two phases: first lowering the number of running machines to decrease power consumption, and second reducing the total datacenter load. An empirical study was conducted to simulate the proposed algorithm using a discrete-event simulator. The results show an improvement in datacenter load and power consumption.


Keywords—Datacenter, power consumption, virtualization, HPC.

I. INTRODUCTION

The current significant shift to cloud computing as a computation provider has increased the popularity of datacenters. This huge growth in the number of datacenters has triggered the issue of power consumption. Cloud computing, the new computational environment, comes with an infrastructure that is different from other paradigms. It uses the datacenter as the basic unit of its architecture [1], [2] and is viewed as a collection of massively parallel distributed datacenters [3]. A datacenter (or server farm, as it is sometimes called) is a huge, centralized repository for storage, computation, and data management [4]. It hosts a large number of servers or processing elements, clusters, and/or a huge amount of storage to serve customer requests. Datacenter power consumption has been highlighted in the literature, as shown in Fig. 1, which illustrates the growth of energy consumption in datacenters since 1996. It is evident that power consumption in 2010 increased more than five times over 2003, in tandem with the cloud computing revolution.

Nawfal A. Mehdi and Ali Amer are PhD students at Universiti Putra Malaysia, Serdang, 43400, Malaysia. Email: [email protected]
Hussam Ali is a lecturer in the Department of Computer Science, Baghdad University, Baghdad, Iraq.
Ziyad T. Abdul-Mehdi is an assistant professor in the Department of Information, Sohar College for Applied Science, Sohar, Sultanate of Oman.

Fig. 1 Growth of Energy Consumption

The infrastructure of a datacenter consists of a set of servers, switches, cables, storage units, and other components, each of which consumes an amount of power. Fig. 2 shows the distribution of power consumption in a datacenter: the servers consume about 70% of the total datacenter energy use. There is therefore a need to reduce the number of running machines in order to decrease the total power consumption. For example, when multiple applications contend for the same resource pool, it is important to develop scheduling algorithms that allocate resources in a fair and efficient manner. There is growing interest in improving the energy efficiency of large-scale enterprise datacenters and cloud computing environments. Various kinds of jobs run under cloud computing, and power consumption plays a role in how they are mapped to servers. Recent studies [5], [6] indicate that the costs associated with power consumption, cooling requirements, and so on over the lifetime of the servers that execute these jobs are significant. One of the most widely adopted classes of jobs is High Performance Computing (HPC). In the past few years, there has been a rapid increase in the use of HPC infrastructure to run business and consumer IT applications [7]. HPC tasks come with hard deadlines; they are urgent and require immediate attention and quick mapping to resources. Moreover, the compute-centric nature of HPC tasks consumes more power during their execution period. Therefore, an efficient scheduling technique combined with a power schema that takes the urgent HPC deadlines into account and schedules for lower power consumption is required.

Dynamic Voltage Scaling (DVS) is an efficient power schema for managing dynamic power dissipation during computation [8], [9]. Dynamic power consumption can be reduced by lowering the supply voltage of the system; the DVS scheme does so by adjusting the supply voltage in an appropriate manner.

Currently, much research [10], [11], [12], [13] has been done to provide power-aware cluster computing using the DVS scheme. This paper addresses the problem of running unnecessary machines in datacenters, where power could be saved with an efficient task-to-machine mapping. A two-phase scheduling algorithm is proposed, inspired by the Minimum Completion Time (MCT) algorithm studied in [14]. In the first phase, the proposed algorithm scans the running machines to see whether any of them has a VM that can execute the current task within its deadline. If none of the running machines can fulfill the task deadline, the second phase begins, which checks the idle machines in order to nominate a host machine.

The key contributions of this paper are:
1) A two-phase MCT scheduling algorithm that reduces power consumption in datacenters running HPC tasks.
2) An analysis of the scheduling of time-critical HPC tasks on a three-tier datacenter.
3) An exhaustive empirical study on jobs with hard and soft deadlines.

The rest of the paper is organized as follows. Section II outlines the related work in this area, while the datacenter architecture and provisioning techniques are described in Section III. The scheduling technique and the proposed algorithm are given in Section IV. The simulation process, the simulator configuration, and the results are explained in Section V. Section VI provides the conclusion of this study.

II. RELATED WORK

This section presents the work most pertinent to the discussion of this paper in the fields of power management, virtualization technologies, and cloud computing. Kliazovich et al. [15] define a cloud computing datacenter, from the energy-efficiency perspective, as a pool of computing and communication resources organized in such a way as to transform the received power into computing or data-transfer work that satisfies user demands. Work presented in [16] studied power provisioning for a warehouse-sized computer.

Fig. 2 Distribution of Energy Consumption in a Datacenter

Urgaonkar et al. [17] investigated optimal resource allocation and power management in virtualized datacenters with time-varying workloads and heterogeneous applications. They used the Lyapunov optimization technique to design an online admission control, routing, and resource allocation algorithm for a virtualized datacenter. In our work, in contrast, the tasks are urgent and have deadlines on their execution. Kim et al. [18] addressed the problem of power-aware provisioning of VMs for real-time services. They modeled a real-time service as a real-time VM request and provisioned VMs in cloud datacenters using dynamic voltage frequency scaling schemes. They proposed several schemes to reduce the power consumed by hard real-time services and to provide power-aware, profitable provisioning of soft real-time services. In [19], power consumption was taken into account in the design and development of high-performance clusters. Work presented in [20] provides power-aware scheduling algorithms for bag-of-tasks applications with deadline constraints on DVS-enabled cluster systems, aiming to minimize power consumption as well as to meet the deadlines specified by application users; DVS scheduling algorithms are provided for both time-shared and space-shared resource sharing policies. Li et al. [21] proposed a novel approach named EnaCloud, which enables dynamic live placement of applications in a cloud platform while taking energy efficiency into consideration. They used a VM to encapsulate the application, which supports application scheduling and live migration to minimize the number of running machines and thereby save energy. The application placement is abstracted as a bin-packing problem, and an energy-aware heuristic algorithm is proposed to obtain an appropriate solution. Lorch and Smith [22] proposed a new approach to dynamic voltage scaling called PACE; their work discussed both hard and soft deadlines. Work presented in [18] investigated power-aware provisioning of virtual machines for real-time services; their approach is (i) to model a real-time service as a real-time virtual machine request, and (ii) to provision virtual machines in cloud datacenters using dynamic voltage frequency scaling schemes. In our paper, we propose an algorithm that reduces the power consumption of executing tasks with hard or soft deadlines in a virtualized environment using a two-phase schema. Another work [23] proposed a heuristic algorithm applying the DVS technique in a distributed system. It placed high importance on the communication time between processors and, at the same time, concentrated on a large number of tasks. That work divided a dependent task into many subtasks that could be executed concurrently on the processors of a distributed system; once the shortest deadline has been met, their power-aware scheduling algorithm reduces energy consumption without affecting performance. In this paper, the proposed algorithm is inspired by the MCT algorithm studied in [14]. MCT sends the task to the server hosting the virtual machine that gives the minimum completion time, which consists of fetching the input data, the processing time, and sending the output data to its destinations. The proposed algorithm prefers running servers over idle ones and combines this preference with the DVS power schema to improve datacenter power consumption.

International Conference on Emerging Trends in Computer and Electronics Engineering (ICETCEE'2012) March 24-25, 2012 Dubai



Fig. 3 Three-Tier Datacenter

Minimum completion time is widely used for mapping online tasks, or tasks that need fast attention from the scheduler, because of its low time complexity compared to other scheduling heuristics that need a large amount of time to make a mapping decision.

III. SYSTEM MODEL

C. Time-Critical HPC Task Model
In this paper, the HPC task is the smallest unit of a user's request. All tasks are independent, with no inter-task communication. These tasks have strict deadlines to adhere to, and any completion after the deadline is useless. Formally, let T = {t_1, ..., t_N} be the set of N tasks. Each task is defined as t_i = {S_i, in_i, out_i, D_i}, where S_i is the task size in Million Instructions (MI), in_i and out_i are the amounts of input and output data respectively, and D_i is the task deadline.
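To make this task model concrete, the following sketch encodes t_i = {S_i, in_i, out_i, D_i} as a small Python record. The field names, and the choice of megabytes for the data amounts, are illustrative assumptions rather than notation taken from the paper.

from dataclasses import dataclass

@dataclass
class Task:
    """One independent, time-critical HPC task t_i = {S_i, in_i, out_i, D_i}."""
    size_mi: float    # S_i: task size in Million Instructions (MI)
    input_mb: float   # in_i: amount of input data to stage in (assumed MB)
    output_mb: float  # out_i: amount of output data to stage out (assumed MB)
    deadline: float   # D_i: deadline; any completion after it is useless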

A. Datacenter Architecture
The server farm in a current datacenter can have more than 100,000 hosts, and about 70% of its communications are performed internally [24]. The most common datacenter architecture is the three-tier architecture. It has three layers, namely the core network, the aggregation network, and the access network [25]. Fig. 3 depicts a three-tier datacenter with 4 network cores. The availability of the aggregation layer facilitates increasing the number of server nodes to over 10,000. However, this large number of servers consumes a huge amount of power if all of them work at the same time; therefore, an efficient task-to-server mapping is required. Formally, let H = {h_1, ..., h_M} be the set of M hosts in a datacenter. Each host h has a nominal MIPS rating N(h), which represents the maximum computing capability of the server at its maximum frequency, and a host load C(h), which represents the current load of the server in MIPS. The load on server h is computed using equation (1), which is the ratio of the current load to the maximum computing capability:

load(h) = C(h) / N(h)    (1)

The datacenter load is computed using equation (2), which is the average load over all its hosts:

load(DC) = (1/M) · Σ_{h∈H} C(h) / N(h)    (2)
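Equations (1) and (2) translate directly into code. The sketch below is a minimal illustration, assuming a Host record that carries the N(h) and C(h) values defined above; the attribute names are ours.

from dataclasses import dataclass
from typing import List

@dataclass
class Host:
    nominal_mips: float  # N(h): maximum computing capability at maximum frequency
    current_mips: float  # C(h): MIPS currently consumed by tasks assigned to the host

def host_load(h: Host) -> float:
    """Equation (1): load(h) = C(h) / N(h)."""
    return h.current_mips / h.nominal_mips

def datacenter_load(hosts: List[Host]) -> float:
    """Equation (2): load(DC) = (1/M) * sum of C(h)/N(h) over all M hosts."""
    return sum(host_load(h) for h in hosts) / len(hosts)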

D. Research Problem
This paper evaluates the scheduling of urgent HPC tasks whose execution time is critical. Most previous work focused on increasing system throughput, even though the total power consumption increased as well. In this paper, the user sends tasks to the cloud provider for execution. The cloud provider has to find the best host machine that can meet the tasks' deadlines; sending the jobs to an arbitrary host might increase the amount of power consumed. Mapping an urgent job to resources while taking power consumption into account is the issue addressed in this study.


IV. TIME-CRITICAL HPC TASK SCHEDULING

The proposed algorithm needs one phase in the best case and two phases in the worst case. When the client sends a task to the cloud provider, the algorithm first scans the running servers only and tries to find a server with a VM that gives the nearest completion time; this is the first phase. If the algorithm cannot find such a server, it starts the second phase by checking the remaining idle servers, including in the estimate the time needed to boot a fresh VM. In this algorithm, each server is examined once (i.e., a machine is either running or idle), which reduces the provisioning complexity.
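The two-phase selection just described can be outlined as follows. This is an illustrative Python sketch rather than the paper's pseudocode: it assumes each host exposes a list of candidate VMs (for an idle host, the fresh VM that would be booted), that estimate_completion implements the estimate(v, t) function of Section IV, and that a fixed vm_boot_time approximates the cost of starting a fresh VM.

def select_vm(task, hosts, now, estimate_completion, vm_boot_time=30.0):
    """Two-phase provisioning: prefer running hosts; fall back to idle ones.

    Returns (host, vm, completion_time) for the candidate with the minimum
    completion time that still meets the task deadline, or None (reject).
    """
    def best_on(candidates, extra_delay=0.0):
        best = None
        for h in candidates:
            for vm in h.vms:  # assumed: candidate VMs exposed by the host
                finish = estimate_completion(vm, task, now) + extra_delay
                if finish <= task.deadline and (best is None or finish < best[2]):
                    best = (h, vm, finish)
        return best

    running = [h for h in hosts if h.current_mips > 0]      # phase one
    choice = best_on(running)
    if choice is None:                                       # phase two
        idle = [h for h in hosts if h.current_mips == 0]
        choice = best_on(idle, extra_delay=vm_boot_time)
    return choice

If neither phase yields a VM that meets the deadline, the task is rejected, which matches the rejection step of the 2PMC pseudocode described below.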


B. Power Model
Power consumption by the computing nodes in a datacenter consists of the consumption of the CPU, disk storage, and network interfaces. Compared to other system resources, the CPU consumes the largest share of the energy; hence, in this study we focus on managing its power consumption and efficient usage. DVS is one of the key knobs for adjusting server power states. It is a highly effective technique for reducing system power dissipation [26], [27], and it introduces a trade-off between computing performance and the energy consumed by the server. DVS is based on the fact that the switching power in a chip is proportional to V² · f, where V is the supply voltage and f is the switching frequency (i.e., the current load). Moreover, voltage reduction requires a corresponding frequency downshift, which implies a cubic dependence of CPU power consumption on f. In this paper, each host has a peak power value P_peak, which is the power consumed at the maximum load on the host. The idle host power consumption P_idle is about 2/3 of the peak value [15], [28]. If f is the current load, the current power consumption P_current is calculated using equation (3) as follows [29], [28]:

P_current = P_idle + P_peak · f³    (3)
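Equation (3) can be transcribed directly as a small Python helper. The sketch below assumes a 250 W peak value purely for illustration; the paper does not specify absolute wattages.

def server_power(load: float, p_peak: float = 250.0) -> float:
    """Equation (3): P_current = P_idle + P_peak * f^3, with P_idle ~ (2/3) * P_peak.

    `load` is the normalized server load f in [0, 1]; the result is in watts.
    """
    if not 0.0 <= load <= 1.0:
        raise ValueError("load must be a fraction between 0 and 1")
    p_idle = (2.0 / 3.0) * p_peak
    return p_idle + p_peak * load ** 3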



Fig. 4 depicts the proposed system model. It shows the system's mechanism: newly arrived jobs are sent to the running machines if those machines host VMs capable of serving the jobs within their deadlines.

V. EMPIRICAL STUDY

In this section we present a case study simulating an energy-aware datacenter with a three-tier architecture. Simulation imitates the behavior of a real system; owing to the difficulty of testing the proposed system on a real platform, a simulation was conducted to evaluate its performance. The GreenCloud simulator [15] is used, an extension of the network simulator NS2 [30] developed for studying cloud computing environments. GreenCloud offers users detailed, fine-grained modeling of the energy consumed by the elements of the datacenter, such as servers, switches, and links. Moreover, GreenCloud supports a thorough investigation of workload distribution.

Fig. 4 System Model

A. Two-Phase Provisioning Algorithm
The pseudocode for the main steps of the proposed two-phase minimum completion algorithm (2PMC) is depicted in Fig. 5. The first phase is coded in steps 4-13, while the second phase is coded in steps 15-24. Two temporary variables are used, namely α and v̂, which hold the minimum completion time and the nominated VM respectively. Although the system operates on dynamically arriving tasks, Algorithm 1 scans the current list of tasks, as shown in step 1. The first phase starts in step 4, followed at step 5 by testing whether the host is running. The list of VMs currently running on the host is retrieved in step 6, and this list is scanned in the loop statement in step 7. The estimated time to execute task t on VM v is computed in step 8 using the estimate(v, t) function, depicted in Fig. 6, and checked against the task's deadline in step 9. If the current VM is able to execute the current task within its deadline, it is nominated, as shown in steps 9-13. If the first phase is unable to select any VM, the second phase commences (step 14). Steps 15-24 perform the same function as above, but on idle hosts, whose current load is equal to zero. The task is rejected if no VM is selected, as shown in step 25; otherwise, it is mapped to the nominated VM, as shown in steps 28 and 29.

B. Estimated Completion Time Algorithm
Algorithm 2 estimates the completion time of task t on VM v. After initializing the temporary variable β in step 1, the algorithm scans all the tasks currently assigned to this VM to find the latest task completion time, which is the VM ready time, as shown in steps 2-4. If the list is empty, the current time is the ready time of VM v. The completion time is then the sum of the ready time, the time to fetch all the input files (the stage-in process), the processing time, and the time to send all the results to their destinations (the stage-out process).
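A sketch of the estimate step, consistent with the description above: the ready time β is the latest completion time among the tasks already assigned to the VM (or the current time if there are none), and the completion time adds stage-in, processing, and stage-out. The VM attributes and the network bandwidth parameter are illustrative assumptions; the task fields follow the Task sketch of Section III.

def estimate_completion(vm, task, now, bandwidth_mbps=100.0):
    """Estimate when `task` would finish on `vm` (cf. Algorithm 2, illustrative).

    vm.assigned: list of (task, completion_time) pairs already mapped to this VM
    vm.mips:     processing rate of the VM in MIPS
    """
    # beta: the VM ready time is the latest completion time of the tasks already
    # assigned to it, or the current time if the VM has no tasks yet.
    ready = max((finish for _, finish in vm.assigned), default=now)
    stage_in = task.input_mb * 8.0 / bandwidth_mbps   # seconds to fetch all inputs
    processing = task.size_mi / vm.mips                # seconds to execute S_i MIs
    stage_out = task.output_mb * 8.0 / bandwidth_mbps  # seconds to send the results
    return ready + stage_in + processing + stage_out

Combined with the two-phase selection sketched earlier, the VM with the smallest such estimate that still satisfies D_i is nominated.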

Fig. 5 2PMC Algorithm


Fig. 6 Estimate Algorithm

TABLE I
SIMULATION SETUP PARAMETERS

Parameter                  Value
Core nodes (C1)            16
Servers (S)                96
DC computing capacity      192000000 MIPS
Simulation time            60 min
DVS                        Enabled
Task size                  100000
Task generation rate       57.6 tasks per second
Total tasks submitted      3337
Load                       0.1, 0.2, ..., 1.0

The workload is generated randomly, where each task is represented by a set of characteristics, namely task size, input-data size, output-data size, and deadline. The physical machine CPU values were selected from the set (1, 2, 3, 4) MHz. Table I lists the main setup parameters used in the simulation.

The first set of experiments tests the ability of the proposed algorithm to decrease the amount of power consumed for hard-deadline jobs. The proposed algorithm is compared with the Round Robin (RR) algorithm and the Green algorithm proposed in [15]. Fig. 7 depicts the amount of power required to execute the set of jobs under the three algorithms. The worst algorithm in terms of power consumption is RR, because it distributes the load across all servers, which requires more servers and, in turn, consumes more power. The proposed algorithm gives lower power consumption than the Green algorithm because of its task distribution over the servers, as can be seen in the following figures.

Fig. 8 shows the distribution of 100 tasks over the 96 servers in the datacenter. In this figure, the 2PMC algorithm sends more jobs to fewer servers. It is also possible to observe the behavior of the RR algorithm, which scans many servers and sends jobs to all of them; this explains the high power consumption caused by this algorithm. It can also be seen that the Green algorithm asks for more servers than the 2PMC algorithm.

The experiment was repeated with 500 tasks over 96 servers, as depicted in Fig. 9. The three algorithms show the same behavior pattern in task distribution over the servers. RR sends almost the same number of tasks to each server, which means more power consumption. The proposed algorithm requests fewer servers, as can be seen at servers 11, 12, 20, and 21, while it sends more jobs to servers 8 and 9 compared to the Green algorithm.

Another experiment was done with 1000 tasks, as shown in Fig. 10. The behavior of RR differs from the above, as the servers' heterogeneity causes the algorithm to send the jobs to the servers with higher specifications. The 2PMC algorithm still gives better results than the Green and RR algorithms due to its checking of running machines before idle ones from a completion-time perspective.

The overall system load on the datacenter is shown in Fig. 11, which depicts the supplied load on the datacenter servers. As expected, RR gives the highest load compared to the others, due to its distribution of tasks among all the servers.

Fig. 7 Power Consumption on Executing Hard Deadlines (energy consumption in kW·h versus load for the Green, RR, and 2PMC algorithms)

Fig. 8 Distribution of Tasks on Servers with 100 Input Tasks (tasks per server for the Green, RR, and 2PMC algorithms)

Fig. 9 Distribution of Tasks on Servers with 500 Input Tasks (tasks per server for the Green, RR, and 2PMC algorithms)



Another set of experiments was done to evaluate the effect of the soft-deadline constraint on power consumption. Fig. 12 depicts the power consumption of a set of jobs executed with soft and hard deadlines. The results show that power consumption can be reduced further if some deadlines can be relaxed. The best explanation is that, with soft deadlines, the scheduler can send a job to a running server instead of starting an idle one. Fig. 13 shows the task distribution on the servers under soft and hard deadline constraints. Small differences in task assignment can be observed, and these lead to lower power consumption in the case of soft constraints.


VI. CONCLUSION

Fig. 10 Distribution of Tasks on Servers with 1000 Input Tasks (tasks per server for the Green, RR, and 2PMC algorithms)


This paper examines the problem of mapping jobs onto a three-tier datacenter with a set of physical machines so as to minimize total power consumption. An algorithm is proposed that scans the running machines first to find a suitable place to execute each job, using the minimum completion time strategy to select the machine. Exhaustive experiments have shown that spreading the load over the servers can increase power consumption more than expected. It can also be concluded that the two-phase mapping technique with the MCT strategy has a clear effect on power consumption. Lastly, jobs with relaxed deadlines can be executed with less power than those with hard deadlines.


Fig. 11 Total Load on the Datacenter with Various Input Loads (DC load versus input load for the Green, RR, and 2PMC algorithms)

REFERENCES

[1] P. McFedries, “The cloud is the computer,” IEEE Spectrum Online, August 2008.
[2] R. Buyya, C. Yeo, S. Venugopal, J. Broberg, and I. Brandic, “Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility,” Future Generation Computer Systems, vol. 25, no. 6, pp. 599–616, 2009.
[3] A. Weiss, “Computing in the clouds,” NetWorker, vol. 11, no. 4, pp. 16–25, 2007.
[4] P. Stryer, “Understanding Data Centers and Cloud Computing,” Global Knowledge Instructor, 2010.
[5] A. Greenberg, J. Hamilton, D. A. Maltz, and P. Patel, “The cost of a cloud: research problems in data center networks,” SIGCOMM Comput. Commun. Rev., vol. 39, pp. 68–73, December 2008. [Online]. Available: http://doi.acm.org/10.1145/1496091.1496103
[6] X. Fan, W.-D. Weber, and L. A. Barroso, “Power provisioning for a warehouse-sized computer,” SIGARCH Comput. Archit. News, vol. 35, pp. 13–23, June 2007. [Online]. Available: http://doi.acm.org/10.1145/1273440.1250665
[7] S. K. Garg, C. S. Yeo, A. Anandasivam, and R. Buyya, “Energy-efficient scheduling of HPC applications in cloud computing environments,” CoRR, vol. abs/0909.1146, 2009.
[8] I. Hong, D. Kirovski, G. Qu, M. Potkonjak, and M. B. Srivastava, “Power optimization of variable voltage core-based systems,” in Proceedings of the 35th Annual Design Automation Conference (DAC '98). New York, NY, USA: ACM, 1998, pp. 176–181. [Online]. Available: http://doi.acm.org/10.1145/277044.277088
[9] T. D. Burd and R. W. Brodersen, “Energy efficient CMOS microprocessor design,” in Proceedings of the 28th Hawaii International Conference on System Sciences. Washington, DC, USA: IEEE Computer Society, 1995, p. 288. [Online]. Available: http://portal.acm.org/citation.cfm?id=795694.798057
[10] N. Kappiah, V. W. Freeh, and D. K. Lowenthal, “Just in time dynamic voltage scaling: Exploiting inter-node slack to save energy in MPI programs,” in Proceedings of the 2005 ACM/IEEE Conference on Supercomputing (SC '05). Washington, DC, USA: IEEE Computer Society, 2005, p. 33. [Online]. Available: http://dx.doi.org/10.1109/SC.2005.39
[11] R. Ge, X. Feng, and K. W. Cameron, “Performance-constrained distributed DVS scheduling for scientific applications on power-aware clusters,” in Proceedings of the 2005 ACM/IEEE Conference on Supercomputing (SC '05). Washington, DC, USA: IEEE Computer Society, 2005, p. 34. [Online]. Available: http://dx.doi.org/10.1109/SC.2005.57
[12] C.-H. Hsu and W.-C. Feng, “A power-aware run-time system for high-performance computing,” in Proceedings of the 2005 ACM/IEEE Conference on Supercomputing (SC '05). Washington, DC, USA: IEEE Computer Society, 2005, pp. 1–. [Online]. Available: http://dx.doi.org/10.1109/SC.2005.3
[13] Y. Hotta, M. Sato, H. Kimura, S. Matsuoka, T. Boku, and D. Takahashi, “Profile-based optimization of power performance by using dynamic voltage scaling on a PC cluster,” in Proceedings of the 20th International Conference on Parallel and Distributed Processing (IPDPS '06). Washington, DC, USA: IEEE Computer Society, 2006, pp. 298–298. [Online]. Available: http://portal.acm.org/citation.cfm?id=1898699.1898825
[14] F. Xhafa, J. Carretero, L. Barolli, and A. Durresi, “Immediate mode scheduling of independent jobs in computational grids,” in Advanced Information Networking and Applications, International Conference on, vol. 0, 2007, pp. 970–977.
[15] D. Kliazovich, P. Bouvry, and S. Khan, “GreenCloud: a packet-level simulator of energy-aware cloud computing data centers,” The Journal of Supercomputing, pp. 1–21, 2010.
[16] X. Fan, W. Weber, and L. Barroso, “Power provisioning for a warehouse-sized computer,” in Proceedings of the 34th Annual International Symposium on Computer Architecture. ACM, 2007, pp. 13–23.
[17] R. Urgaonkar, U. Kozat, K. Igarashi, and M. Neely, “Dynamic resource allocation and power management in virtualized data centers,” in Network Operations and Management Symposium (NOMS), 2010 IEEE. IEEE, 2010, pp. 479–486.
[18] K. Kim, A. Beloglazov, and R. Buyya, “Power-aware provisioning of virtual machines for real-time Cloud services,” Concurrency and Computation: Practice and Experience, 2011.
[19] R. Ge, X. Feng, and K. W. Cameron, “Performance-constrained distributed DVS scheduling for scientific applications on power-aware clusters,” in Proceedings of the 2005 ACM/IEEE Conference on Supercomputing (SC '05). Washington, DC, USA: IEEE Computer Society, 2005, p. 34. [Online]. Available: http://dx.doi.org/10.1109/SC.2005.57
[20] K. Kim, R. Buyya, and J. Kim, “Power aware scheduling of bag-of-tasks applications with deadline constraints on DVS-enabled clusters,” in Proceedings of the Seventh IEEE International Symposium on Cluster Computing and the Grid, 2007, pp. 541–548.
[21] B. Li, J. Li, J. Huai, T. Wo, Q. Li, and L. Zhong, “EnaCloud: an energy-saving application live placement approach for cloud computing environments,” in 2009 IEEE International Conference on Cloud Computing. IEEE, 2009, pp. 17–24.
[22] J. Lorch and A. Smith, “PACE: A new approach to dynamic voltage scaling,” IEEE Transactions on Computers, vol. 53, no. 7, pp. 856–869, 2004.
[23] X. Chen, Q. Liu, and J. Lai, “A new power-aware scheduling algorithm for distributed system,” in Proceedings of the 2010 IEEE/ACM Int'l Conference on Green Computing and Communications & Int'l Conference on Cyber, Physical and Social Computing. IEEE Computer Society, 2010, pp. 338–343.
[24] P. Mahadevan, P. Sharma, S. Banerjee, and P. Ranganathan, “Energy aware network operations,” in Proceedings of the 28th IEEE International Conference on Computer Communications Workshops (INFOCOM '09). Piscataway, NJ, USA: IEEE Press, 2009, pp. 25–30. [Online]. Available: http://portal.acm.org/citation.cfm?id=1719850.1719855
[25] J. Baliga, R. Ayre, K. Hinton, and R. Tucker, “Green cloud computing: Balancing energy in processing, storage, and transport,” Proceedings of the IEEE, vol. 99, no. 1, pp. 149–167, 2011.
[26] L. Shang, L. Peh, and N. Jha, “Dynamic voltage scaling with links for power optimization of interconnection networks,” 2003.
[27] G. Dhiman and T. Rosing, “Dynamic voltage frequency scaling for multi-tasking systems using online learning,” in Proceedings of the 2007 International Symposium on Low Power Electronics and Design. New York, NY, USA: ACM, 2007, pp. 207–212.
[28] G. Chen, W. He, J. Liu, S. Nath, L. Rigas, L. Xiao, and F. Zhao, “Energy-aware server provisioning and load dispatching for connection-intensive internet services,” in Proceedings of the 5th USENIX Symposium on Networked Systems Design and Implementation (NSDI '08). Berkeley, CA, USA: USENIX Association, 2008, pp. 337–350. [Online]. Available: http://portal.acm.org/citation.cfm?id=1387589.1387613
[29] Y. Chen, A. Das, W. Qin, A. Sivasubramaniam, Q. Wang, and N. Gautam, “Managing server energy and operational costs in hosting centers,” SIGMETRICS Perform. Eval. Rev., vol. 33, pp. 303–314, June 2005. [Online]. Available: http://doi.acm.org/10.1145/1071690.1064253
[30] The Network Simulator NS-2 (NS2), 2005. [Online]. Available: http://www.isi.edu/nsnam


Fig. 12 Power Consumption on Executing Soft Deadlines (energy consumption in kW·h versus load for the 2PMC and 2PMCS algorithms)


Fig. 13 Distribution of Tasks on Servers in 1000 Input Tasks with Soft Deadlines




