A Combined Frequency Scaling and Application Elasticity Approach for Energy-Efficient Cloud Computing

S.K. Tesfatsion*, E. Wadbro, J. Tordsson

Department of Computing Science, Umeå University, SE-90187 Umeå, Sweden

*Corresponding author. Tel.: +46722184762. Email addresses: [email protected] (S.K. Tesfatsion), [email protected] (E. Wadbro), [email protected] (J. Tordsson).

Preprint submitted to Elsevier, July 3, 2014.

Abstract

Energy management has become increasingly necessary in large-scale cloud data centers to address high operational costs and environmental carbon footprints. In this work, we combine three management techniques that can be used to control cloud data centers in an energy-efficient manner: changing the number of virtual machines, changing the number of cores, and scaling the CPU frequency. We present a feedback controller that determines an optimal configuration to minimize energy consumption while meeting performance objectives. The controller can be configured to accomplish these goals in a stable manner, without causing large oscillations in the resource allocations. To meet the needs of individual applications under different workload conditions, the controller parameters are automatically adjusted at runtime based on a system model that is learned online. The potential of the proposed approach is evaluated in a video encoding scenario. The results show that our combined approach achieves up to 34% energy savings compared to the constituent approaches (the core change, virtual machine change, and CPU frequency change policies), while meeting the performance target.

Keywords: Cloud computing, Energy efficiency, Quality of Service, Virtualization, Frequency scaling, Application elasticity

1. Introduction

Cloud computing delivers configurable computing resources on-demand to customers over a network in a self-service fashion, independent of device and location [1]. The resources required to provide the necessary Quality-of-Service (QoS) levels are virtualized, shared, rapidly provisioned, and released with minimal service provider interaction. Modern data centers use virtualization to provide application portability and facilitate dynamic sharing of physical resources while retaining application isolation. Virtualization technologies enable rapid provisioning of Virtual Machines (VMs) and thus allow cloud services to scale the resources allocated to them up and down on demand. This elasticity can be achieved using horizontal elasticity, where the number of VMs is changed during a service's operation, and vertical elasticity, where the hardware allocation of a running VM, typically in terms of CPU and RAM, is changed dynamically.

Today, the number and size of data centers are growing fast. According to one report [2], there are about 500 000 data centers worldwide, and another report [3] estimates that Google alone runs more than a million servers in total. At present, reducing power consumption is a major issue in the design and operation of these large-scale data centers. A recent survey reports that, worldwide, data centers use about 30 billion watts of electricity [4]. High power consumption manifests as high operational costs: one study reports that the cost of energy has steadily risen to 25% of total operating costs and is among the largest components of the overall cost [5]. In addition, the power consumption of large-scale data centers raises many other serious issues, including excessive carbon dioxide emissions and reduced system reliability; for example, running a single high-performance 300-watt server for a year can emit as much as 1300 kg of CO2 [6].

Autonomous resource provisioning in the cloud has been widely studied to guarantee system-wide performance, that is, to optimize data center resource management for pure performance [7], [8]. Green cloud computing is envisioned to achieve not only efficient processing and utilization of a computing infrastructure, but also minimal energy consumption [9]. This is essential for guaranteeing that the future growth of cloud computing is sustainable. Lowering the energy consumption may result in performance loss, and it is thus important to be able to guarantee application QoS while minimizing energy consumption. To achieve this goal, a well-designed trade-off between energy savings and system performance is necessary.

In the research literature, a large body of work applies Dynamic Voltage and Frequency Scaling (DVFS) and Vary-On/Vary-Off (VOVO) power management mechanisms. DVFS changes the operating frequency and voltage of a given computing resource. VOVO turns servers on and off to adjust the number of active servers according to the workload. Studies show that these techniques can reduce power consumption considerably [10], [11], [12], [13]. However, there is limited work on the use of virtualization capabilities to dynamically scale the resources of a service with the main objective of saving energy while meeting a performance target. Moreover, current elasticity implementations offered by IaaS providers, e.g., Amazon EC2 [14], which use VMs as the smallest scaling unit, may be too coarse-grained and thus cause unnecessary over-provisioning and waste of power. One approach to address this problem is to provide fine-grained resource allocation by adapting VM capacities to the requirements of the application. It is also important to consider the cost of changing resources in terms of performance penalty and system reliability; reconfiguration cost should therefore also be taken into account.

Traditionally, adaptive power management solutions mainly rely on heuristics. Recently, however, feedback and control theory have been successfully applied to power management [15], [16], [45]. For example, work by Lefurgy et al. [19] shows that control-theoretic power management outperforms a commonly used heuristic solution by providing more accurate power control and better application performance.

In this paper, we present a fine-grained scaling solution to the dynamic resource-provisioning problem with the goal of minimizing energy consumption while fulfilling performance objectives. In contrast to existing works, we combine virtualization capabilities (horizontal and vertical scaling) with a hardware technique (scaling the frequency of physical cores) and apply a control-theoretic approach. In summary, the contributions of this work are:

• An evaluation of the performance and power impact of the different management capabilities available in modern data center servers and software.

• The design of an online system model that dynamically determines the relationship between performance and power for the various configurations of the system. Our adaptive model captures variations in system dynamics due to differences in operating regimes, workload conditions, and application types.

• An architectural framework for the energy-efficient management of cloud computing environments. Our framework integrates software techniques (horizontal and vertical elasticity) with a hardware technique (CPU frequency scaling).

• The design of a feedback controller that determines a configuration to minimize energy consumption while meeting performance targets. The controller can be configured to handle trade-offs among energy minimization, guaranteeing application performance, and avoiding oscillations in resource allocations.

• An evaluation of the proposed framework in a video encoding scenario. Our combined approach achieves the lowest energy consumption among the compared approaches while meeting performance targets.

The rest of the paper is organized as follows. In Section 2, we present the design of the system model. Section 3 describes the architecture of our proposed system. This is followed by a detailed description of the design of the optimal controller in Section 4. In Section 5, we present our experimental setup and the evaluation results. Section 6 surveys related work. Conclusions and future directions are covered in Section 7.

2. System Model

In control theory, system models play an essential role in the analysis and design of feedback systems [33]. They characterize the relationship between the inputs (the control knobs) and the outputs of the system (the metrics being controlled). In this section, we provide a description of the dimensions, or knobs, that can be used to manage systems in an energy-efficient manner. Then we describe a set of system modeling experiments we performed to analyze the impact of each management action (input) on performance and power consumption (outputs) and to design a system model for the dynamic behaviour of the application under various configurations.

2.1. Power Management Dimensions

• Horizontal scaling. This technique exploits the hypervisor's ability to add or remove VMs for a running service. The power impact of a VM can be considered in terms of the dynamic power used by components like CPU, memory, and disk. Thus, VM power consumption greatly depends on the extent to which these resources are utilized, and also differs from one application type to another.

• Vertical scaling. This technique exploits the hypervisor's ability to change the size of a VM in terms of resources like cores and memory. We select the core, or CPU, as the resource for management, as it dominates the overall power usage profile of a server: the CPU can draw up to 58% of the relative non-idle dynamic power usage of a data center server [34]. CPU-bound applications, in general, benefit most from this type of scaling. Some research [16], [35] shows energy consumption savings from modifying the hypervisor's scheduling attributes to change a guest's maximum time slice on a core. In contrast, in this work we instead change the number of cores of a VM without modifying the hypervisor scheduler.

• Hard power scaling. DVFS has been applied within clusters and supercomputers to reduce power consumption and achieve high reliability and availability [17], [32], [36], [37], [45]. In this work, we perform Dynamic Frequency Scaling (DFS), changing the frequencies of the cores based on demand. We also consider turning individual cores on and off in the power scaling decision.

2.2. System Model Experimentation

After identifying the above dimensions, we performed a set of experiments to understand the relationship between these dimensions and their impact on the application's performance and power usage. Figure 1 illustrates an input-output representation of the system we are controlling. The inputs to the system are the server CPU frequency, the number of cores, and the number of VMs. Two outputs are considered: power usage (Watt) and performance (throughput, although other performance metrics could also be used). The experimentation was performed to see how performance and power vary with changes in CPU frequency, number of cores, and number of VMs, as well as to determine a model for the system behavior.

[Figure 1: An input-output model for our considered system. The inputs (CPU frequency, cores, VMs) feed the system, which produces power consumption and performance as outputs.]

2.3. Experimental Setup

The experiments are performed on a ProLiant DL165 G7 machine equipped with 32 AMD Opteron(TM) processors (2 sockets, 4 nodes, and 32 cores in total, no hyper-threading) and 56 GB of physical memory. The server runs Ubuntu 12.04.2 with Linux kernel 3.2.0. The frequency of the cores can be scaled from 1.4 GHz to 2.1 GHz. In order to adjust the CPU frequency, we utilize the cpufrequtils [38] package. The package contains two utilities for inspecting and setting the CPU frequency through the CPUFreq kernel interfaces. We changed the default ondemand CPU frequency scaling policy to the userspace policy, which gives us full control over the CPU frequency.
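For illustration, the following is a minimal sketch of this frequency actuation using cpufrequtils. It assumes the cpufreq-set utility from the package is available and that the command is run with root privileges; the loop mirrors the one-frequency-for-all-cores simplification described later in this section.

```python
import subprocess

# A minimal sketch: switch a core to the userspace governor, then pin it
# to a fixed frequency with cpufreq-set (from cpufrequtils). Requires root.
def set_core_frequency(core: int, freq: str) -> None:
    subprocess.run(["cpufreq-set", "-c", str(core), "-g", "userspace"], check=True)
    subprocess.run(["cpufreq-set", "-c", str(core), "-f", freq], check=True)

# Apply one frequency to all 32 cores of the test server.
for core in range(32):
    set_core_frequency(core, "1.4GHz")
```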

A rack-mounted HP AF525A Power Distribution Unit (PDU) is used for distributing electric power to the servers. A special power meter attached to the PDU is used for measuring the power usage of the servers. We use the Simple Network Management Protocol (SNMP) to extract the power consumption of the server.
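As a sketch of this measurement path, the snippet below reads one value from the PDU's power meter over SNMP using the pysnmp library. The community string and the OID are placeholders; the actual OID for the HP PDU's power reading is device-specific and not given in this paper.

```python
# A minimal sketch of polling a PDU's power meter over SNMP with pysnmp.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def read_power_watts(pdu_host: str, oid: str) -> float:
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),   # SNMPv2c; community is a placeholder
        UdpTransportTarget((pdu_host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity(oid)),      # OID is device-specific (placeholder)
    ))
    if error_indication or error_status:
        raise RuntimeError(f"SNMP query failed: {error_indication or error_status}")
    return float(var_binds[0][1])

# Sampled once per second and averaged over a control interval, e.g.:
# power = read_power_watts("pdu.example.org", "1.3.6.1.4.1....")  # placeholder OID
```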

The virtual machines run Debian with kernel version 2.6.32. The version of the KVM hypervisor is KVM QEMU 1.0.0. A maximum of 10 VM instances are run at the same time. The maximum virtual core allocation for each VM is 32; the exact number of cores depends on the specific experiment. The amount of memory allocated to a VM is set to 5 GB.

To investigate the CPU's impact on power usage, we chose the CPU-bound benchmark from the SysBench [11] package, which consists of a configurable number of events that calculate prime numbers from 1 to N (user-specified) for different numbers of requests. The application was loaded with 144 000 requests while the average event handling time for each VM was measured. We used this metric to calculate the average throughput. The experiments were repeated at four different times to get more accurate results. In order to obtain fairer performance measurements, the number of threads in the client was changed proportionally to the number of cores a VM used. We argue that a large class of CPU-bound applications can be represented by tuning the system model dynamically based on the initial parameterization obtained from the SysBench experimentation. Similarly, for applications with other characteristics, creating alternative system models from representative benchmarks and updating the models online is a promising approach [39], [40].
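An invocation of this benchmark might look as follows; the syntax is that of the legacy SysBench CPU test contemporary with this setup, and the prime limit of 20000 is an illustrative choice rather than a value from the paper.

```python
import subprocess

# Illustrative invocation of the (legacy) SysBench CPU benchmark: each
# request computes primes up to --cpu-max-prime, and the client thread
# count is scaled with the number of cores of the VM under test.
def run_sysbench(threads: int, max_prime: int = 20000) -> str:
    result = subprocess.run(
        ["sysbench", "--test=cpu", f"--cpu-max-prime={max_prime}",
         f"--num-threads={threads}", "run"],
        capture_output=True, text=True, check=True)
    return result.stdout  # contains per-event handling times

# e.g., for a 4-core VM: run_sysbench(threads=4)
```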

For vertical elasticity, it turned out not to be possible to change the cores of a VM dynamically using the KVM hypervisor. We address this limitation by stopping and starting VMs with different numbers of cores. Furthermore, although it is possible to change the frequency of individual CPU cores, the mapping of guest VM cores to physical cores cannot be controlled without pinning. In general, pinning VMs to particular physical cores decreases flexibility and could result in decreased resource utilization [35]. To address this issue, we change the frequency of all the cores of a server and run one application per server. Here, we first identified the impact of changing the frequency of unused or idle cores on the power usage of the server. This simplification avoids CPU pinning. An additional simplification is that running a single application on a node eliminates the impact of changing the CPU frequency of physical cores that might be shared by other applications running on the same server.

2.4. Experimental Results

We ran different experiments combining from 1 to 10 VMs, a number of cores ranging from 1 to 32, and server CPU frequencies of 1.4, 1.5, 1.7, 1.9, and 2.1 GHz. We obtained a total of 3885 results for all possible combinations of configurations.

Figure 2 shows the results for the management dimensions with the highest impact on performance and power usage. Figures 2(a), 2(b), and 2(c) show the energy efficiency, that is, the quotient between throughput (requests per second) and power usage (Watt), as a function of the number of active cores, the number of running VMs, and the CPU frequency, respectively. Increasing the number of cores, the number of VMs, or the frequency increases both performance and power consumption. The figures reflect the importance of these three dimensions in terms of power usage. In contrast, turning cores on/off only marginally impacts power consumption: the measured difference between a VM using 1 core with the remaining 31 cores idle and a VM using 1 core with the 31 idle cores turned off was less than 1%. The results also show the minimal impact of CPU frequency on the power consumption of idle cores.

Understanding the underlying architecture of the system can also help tune the performance of an application. In particular, for systems with a Non-Uniform Memory Access (NUMA) architecture, pinning VCPUs to physical processing cores on the same NUMA node makes a significant difference.

[Figure 2: Impact of different management dimensions on power usage and performance, shown as energy efficiency (req/Ws). (a) Varying the number of cores of a VM running at 2.1 GHz CPU frequency. (b) Changing the number of 2-core VMs running at 1.9 GHz CPU frequency. (c) Running a 2-core VM at different CPU frequencies.]

This is because pinning can avoid cross-node memory transfers by always allocating memory to the VM that is local to the NUMA node the VM runs on. We observed a performance increase of 35% with a power increase of only 1% in the pinned system compared to one that is not pinned. Locking a VM to a particular NUMA node offers this benefit if that node has enough free memory for the VM. However, to keep our work as general as possible, we have selected the default non-pinning virtualization policy in the rest of this work.
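For reference, the pinning used in the comparison above can be expressed with libvirt's virsh tooling. The sketch below is illustrative only (the rest of this work is non-pinned); the domain name "vm1" is a placeholder, and the core list "0-7" assumes the first of the four 8-core nodes of the test machine.

```python
import subprocess

# Illustrative only: pin each VCPU of a KVM guest to one NUMA node's
# cores using virsh. Domain name and core list are placeholders.
def pin_vm_to_node(domain: str, vcpus: int, cpulist: str) -> None:
    for vcpu in range(vcpus):
        subprocess.run(["virsh", "vcpupin", domain, str(vcpu), cpulist], check=True)

pin_vm_to_node("vm1", vcpus=2, cpulist="0-7")
```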

We also compared the difference between a single VM with X cores and a set of VMs whose total number of cores is X. The results show that these two configurations exhibit similar performance and power usage. From this, we conclude that the most important factor in the energy efficiency decision is the number of cores. However, this behavior may not hold for all applications, as different classes of applications can have resource requirements beyond a certain number of cores. Besides, changing the number of cores might not always be possible, due to technical issues or physical constraints such as exceeding the capacity of a physical server. For this reason, and for the sake of completeness, we have thus also considered changing the number of VMs.

To summarize, the set of experiments shows that performance and power consumption are functions of the number of VMs, the number of cores, and the CPU frequency used, with all other factors having marginal impact.

3. System Architecture

In this section, we provide a high-level description of the system architecture. Figure 3 shows the overall system architecture, in which the main components are the following:

• The System Model describes the relationship between performance and power usage of the application for the various combinations of cores, VMs, and CPU frequencies, as described in Section 2.

• The System Monitor periodically measures application performance and server power usage. We refer to this period as the control interval. The monitor can collect a number of performance metrics. For the system model, we measure performance in terms of throughput, but the monitor is generic and can handle any performance metric. The performance data for the application is then sent to the optimal controller as feedback.

• The Workload Monitor gathers information about the required application performance. This performance metric is used as a reference signal to the controller.

• The Optimal Controller compares the application performance target from the workload monitor with the measured performance from the system monitor. Based on the discrepancy, it determines the optimal reconfiguration, if any, for the next control interval to minimize energy consumption and meet the performance requirement. The new configuration may result in changes in CPU frequency, number of cores, and/or VMs.

[Figure 3: Overall system architecture. Requests enter the target system; the system monitor feeds measured performance and power back to the optimal controller, which compares them against the target performance from the workload monitor and issues a new configuration (change frequency, add/remove VM, add/remove core).]

3.1. Online system model

As shown in Figure 3, the optimal controller takes actions based on the system model to meet the performance demand of the application. In general, the system dynamics may vary due to differences in operating regimes, workload conditions, and application types. To deal with these variations, techniques that can dynamically update a model at run time are desirable.

In this work, we adopt an approach that updates the model online based on real-time measurements of performance and power usage. Using feedback obtained directly from the application at each control interval, the model is updated using moving averages.
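One possible realization of this update is sketched below. The paper specifies only that moving averages are used; the exponentially weighted form and the smoothing factor ALPHA are illustrative assumptions, as are the names OnlineModel and base_model (the SysBench-derived base model of Section 2).

```python
# A minimal sketch of the online model update, assuming an exponentially
# weighted moving average; ALPHA is an illustrative choice, not a value
# from the paper. One scaling factor per output corrects the offline base
# model toward the measurements fed back each control interval.
ALPHA = 0.5  # weight given to the newest measurement

class OnlineModel:
    def __init__(self, base_model):
        # base_model maps a configuration (x, f) to predicted (perf, power)
        self.base_model = base_model
        self.perf_scale = 1.0
        self.power_scale = 1.0

    def predict(self, config):
        perf, power = self.base_model(config)
        return perf * self.perf_scale, power * self.power_scale

    def update(self, config, measured_perf, measured_power):
        # Blend the observed prediction error into the scaling factors.
        pred_perf, pred_power = self.base_model(config)
        self.perf_scale = (1 - ALPHA) * self.perf_scale + ALPHA * (measured_perf / pred_perf)
        self.power_scale = (1 - ALPHA) * self.power_scale + ALPHA * (measured_power / pred_power)
```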

4. Optimal Controller Design

4.1. Conceptual Design

This section details the design of the optimal controller. We first introduce some notation. Let N denote the total number of cores on the server and let M = \log_2(N). Let x_i denote the number of VMs with 2^i cores, i = 0, 1, \ldots, M. Let f denote the CPU frequency of the server, t denote the control time, W(t) denote the energy consumption, and P(t) the performance at time t. Assume that at each point in time, we are given a minimum performance target P_{target}(t). The task of minimizing the energy consumption at each time t, while attaining at least the pre-specified performance target, can now be formulated as the constrained optimization problem:

    \min_{(x,f) \in A} W(t; x, f)  subject to  P(t; x, f) \ge P_{target}(t),    (1)

where the set of possible server configurations is given by

    A = \{(x, f) \mid x \in A_x \text{ and } f \in A_f\}.    (2)

Here, A_f is the set of clock frequencies available on the server, and the set of feasible VM combinations, A_x, is given by

    A_x = \left\{ x \,\middle|\, x_i \in \mathbb{N},\ i = 0, 1, \ldots, M \ \text{and}\ \sum_{i=0}^{M} 2^i x_i \le N \right\}.    (3)

That is, A_x represents all possible combinations of VMs with different numbers of cores such that the total number of cores is less than or equal to the number of cores in the server.

Although exactly corresponding to the desire of minimizing energy consumption while attaining a prescribed performance target, the formulation in Equation (1) is inflexible. In some cases, it might be acceptable to miss the performance target by some small fraction in order to save energy. We make our model more flexible by assigning a penalty for the deviation of the application's measured performance from its target. In this way, performance lower than the target is feasible but becomes increasingly expensive. Incorporating this energy-performance tradeoff, our minimization problem becomes:

    \min_{(x,f) \in A} W(t; x, f) + p(P_{target}(t), P(t; x, f)),    (4)

where p is a penalty function that accounts for the costs incurred for any deviation from the target performance.

The control formulation in Equation (4) minimizes energy consumption by responding to performance changes, but it is prone to oscillations if the performance target changes frequently. An additional goal can be set to accomplish the minimization in a stable manner, without causing large oscillations in the resource allocations and the resulting higher delay in reconfiguration. Assume that at time \hat{t} we have an updated performance P(\hat{t}) and energy consumption W(\hat{t}). We are now interested in finding the best reconfiguration so that a given performance goal P_{target}(t) is met at time t = \hat{t} + \Delta t. Our control strategy for the server is to set the new configuration to:

    \operatorname*{argmin}_{(x,f) \in A} W(t; x, f) + p(P_{target}(t), P(t; x, f)) + r(\Delta f, \Delta x).    (5)

The function r is the control cost that penalizes big changes in the resource allocation in a single interval and thus contributes towards system stability. The control penalty used here is a sum of three terms:

• The first term is proportional to the change in CPU frequency, |\Delta f|.

• The second term is the control penalty corresponding to the running cores, which here is proportional to the number of cores affected by the reconfiguration, \sum_{i=0}^{M} 2^i |\Delta x_i|.

• The third term takes into account the costs for booting new VMs and is proportional to \sum_{\{i \mid \Delta x_i > 0\}} t_{boot}(2^i). It is assumed that VMs of the same size boot equally fast. Incorporating VM booting time into the stability decision also helps to differentiate configurations that have the same total number of cores but different VM type configurations, and hence different booting times; for instance, starting one 2-core VM can be differentiated from starting two 1-core VMs.
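The decision step can be sketched as an exhaustive scan over A, as below. The enumeration implements the set A_x of Equation (3) for our testbed (N = 32, M = 5, five frequencies); the callables energy, perf, penalty, and control_cost are stand-ins for the online model, p, and r, and are not names from the paper.

```python
from itertools import product

# A sketch of the controller's decision step for our testbed.
N, M = 32, 5
FREQUENCIES = [1.4, 1.5, 1.7, 1.9, 2.1]  # GHz

def feasible_vm_mixes():
    # Enumerate x = (x_0, ..., x_M) with sum(2^i * x_i) <= N, i.e., A_x in Eq. (3).
    ranges = [range(N // 2**i + 1) for i in range(M + 1)]
    for x in product(*ranges):
        if sum(2**i * xi for i, xi in enumerate(x)) <= N:
            yield x

def best_configuration(p_target, current, energy, perf, penalty, control_cost):
    # Exhaustive evaluation of Eq. (5) over A; Section 7 reports that
    # scanning the 3885 measured configurations takes about 13 ms.
    best, best_cost = None, float("inf")
    for x in feasible_vm_mixes():
        for f in FREQUENCIES:
            cost = (energy(x, f)
                    + penalty(p_target, perf(x, f))
                    + control_cost(current, (x, f)))
            if cost < best_cost:
                best, best_cost = (x, f), cost
    return best
```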

4.2. Mapping penalties into Monetary Costs

To make the overall minimization formulation dimensionally consistent, we translate the energy consumption, performance, and configuration penalties into monetary costs over the control interval T = (\hat{t}, \hat{t} + \Delta t). In our model, we convert energy usage to monetary cost as

    C_T^{power} = m \int_T W(s; x, f) \, ds,    (6)

where m is a constant representing the energy cost, that is, the power cost per time unit. Here, we assume that the cost of power per time unit is constant.

There are multiple ways to relate the performance penalty to monetary values, and thus a variety of penalty functions can be chosen [41]. In this work, we use a step-wise penalty function, as it provides a more intuitive interpretation of SLAs and is a natural choice used in real-world contracts. The SLA penalty function p is here assumed to be a piecewise constant, decreasing function of the performance conformance P(t; x, f)/P_{target}(t). This piecewise constant function is defined by k SLA penalty levels S_i, i = 1, 2, \ldots, k, and k + 1 conformance interval limits L_i, i = 0, 1, \ldots, k, satisfying 0 = L_0 < L_1 < \ldots < L_{k-1} < L_k = \infty. The cost of the SLA violation during the time interval T is given by:

    C_T^{SLA} = \Delta t \, S_i,    (7)

where i is chosen such that:

    L_{i-1} P_{target}(t) \le P(t; x, f) < L_i P_{target}(t).    (8)

The configuration penalty, in terms of server CPU frequency change, number of core changes, and VM booting time, is also converted using a linear cost function:

    C_T^{config} = a |\Delta f| + b \sum_{i=0}^{M} 2^i |\Delta x_i| + c \sum_{\{i \mid \Delta x_i > 0\}} t_{boot}(2^i),    (9)

where a, b, and c represent the cost of changing the server CPU frequency, a unit cost per core change, and a constant used to calculate the cost for starting new VMs, respectively.

The optimization problem thus considers the costs of energy consumption, performance penalties, and configuration changes over the time interval T, and it is formulated as:

    \min \; C_T^{power} + C_T^{SLA} + C_T^{config}.    (10)

To allow human operators to specify which of the three objectives (energy, performance, or configuration stability) is more important, we introduce three multipliers in Equation (10), and the final optimization problem is formulated as:

    \min \; \alpha C_T^{power} + \beta C_T^{SLA} + \gamma C_T^{config},    (11)

where \alpha, \beta, and \gamma represent the priorities given to energy savings, performance conformance, and system stability, respectively.
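The cost terms of Equations (6)-(9) can be sketched as follows, using the parameter values later reported in Section 5.1 ($0.2/kWh energy price, $0.1 per core change, $0.01 per started VM, and a frequency-change cost of zero since that overhead is omitted). BELOW_95_PENALTY is a loudly labeled placeholder: the "< 95%" cell of Table 1 is not recoverable, and the treatment of Table 1 values as per-interval dollars is an assumption.

```python
# A sketch of the monetary cost terms of Eqs. (6)-(9).
DT = 120.0              # control interval length in seconds (Section 5.1)
BELOW_95_PENALTY = 6.4  # placeholder only; the <95% cell of Table 1 is missing

def power_cost(avg_watts, price_per_kwh=0.2):
    # Eq. (6): m * integral of W over T, with a constant power price.
    return price_per_kwh * (avg_watts / 1000.0) * (DT / 3600.0)

# (conformance lower bound, penalty in $) per Table 1, highest level first.
SLA_LEVELS = [(1.00, 0.0), (0.99, 0.1), (0.98, 0.4),
              (0.97, 0.8), (0.96, 1.6), (0.95, 3.2)]

def sla_cost(perf, target):
    # Eqs. (7)-(8): a piecewise constant, decreasing function of conformance.
    conformance = perf / target
    for lower_bound, penalty in SLA_LEVELS:
        if conformance >= lower_bound:
            return penalty
    return BELOW_95_PENALTY

def config_cost(df, dx, boot_times, a=0.0, b=0.1, c=0.01):
    # Eq. (9): a|df| + b * (cores affected) + c * (boot times of new VMs).
    cores_changed = sum(2**i * abs(dxi) for i, dxi in enumerate(dx))
    return a * abs(df) + b * cores_changed + c * sum(boot_times)
```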

In cases where energy minimization cannot be achieved without violating the SLA, the \alpha and \beta parameters control the trade-off between energy efficiency and performance. The \gamma parameter controls the stability of the system. This parameter provides an intuitive way to control the trade-off between the controller's stability and its ability to respond to changes in the workloads and performance. For a larger \gamma value, the controller stabilizes the system by reacting slowly to performance changes. As the value of \gamma decreases, the controller responds more rapidly to changes in performance, resulting in higher oscillation in the new configuration. In general, setting the priorities (the \alpha-\beta-\gamma values) is done by hand by a human operator, who is the one deciding about the tradeoff. However, tuning of the energy-performance-reconfiguration tradeoff is done automatically by the controller according to the operator-specified policy.

5. Experimental Evaluation

In this section, we evaluate the designed controller in a variety of scenarios.

5.1. Experimental Setup

The potential of the proposed approach is evaluated by using a video encoding application, the freely available x264 encoding software [42], which is CPU-intensive and thus representative of the type of applications our system model was designed for. The application was instrumented in order to extract the system performance (number of encoded frames per second) at the completion of each frame block encoding.

We used the same experimental testbed as in Section 2.3, with the same power measurement, CPU frequency management, and virtualization techniques. The controller is hosted on a separate but identical server from the one where the VMs are running. The nodes are in the same local network with a connection speed of 1 Gb/s. This makes sensing and actuation delays small compared to the control interval. We do not mandate this placement, however, and the data center operator can choose to host the optimal controller in the physical node it controls.

For distributing videos to VMs, we set up a job queue consisting of multiple videos and implement a simple load balancing scheme where a video from the queue is assigned to a free VM. We gather the average performance within a control interval from each VM and aggregate the performance from all machines to obtain the system-wide performance. A synthetic workload is used, in which the average performance demand in a control interval is used as the reference for making the auto-configuration decision. The power consumption is recorded every second on the server, and its average power usage in one control interval is fetched and used accordingly.

To find appropriate parameters for the controller (see Equations (6)-(9)), we set the energy consumption cost based on a current electricity price of $0.2 per kWh over the control interval T. The cost used for SLA penalties is inspired by existing SLAs for availability of various systems and is shown in Table 1. Reconfiguration costs are assessed by measuring the downtime of the various actions (starting VMs, turning on cores, and changing CPU frequencies) and combining that with the resulting SLA penalties caused by the downtime. Since it is very quick to change the CPU frequency (less than 10 ms), we omit this overhead from further consideration, see Equation (9). A cost of $0.1 is used per core change. We instrumented the VM boot process using the bootchart [43] tool to measure VM booting time. This varied between 4 and 10 seconds depending on the CPU frequency and the number of cores used. This does not include the video encoding start-up time. A cost of $0.01 is used for starting new VMs. We adopted an optimization control loop with a 120 second activation interval.
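Putting the pieces together, the activation loop can be sketched as below. All component names (system_monitor, workload_monitor, model, optimizer, actuator) are illustrative stand-ins for the architecture of Figure 3, not interfaces defined in the paper.

```python
import time

# A high-level sketch of the 120-second activation loop tying the
# monitors, the online model, and the optimizer together.
CONTROL_INTERVAL = 120  # seconds

def control_loop(system_monitor, workload_monitor, model, optimizer, actuator):
    current = actuator.current_configuration()
    while True:
        time.sleep(CONTROL_INTERVAL)
        perf, power = system_monitor.averages()         # measured over the interval
        target = workload_monitor.performance_target()  # reference signal
        model.update(current, perf, power)              # online update (Section 3.1)
        new = optimizer.best_configuration(target, current)  # Eq. (11)
        if new != current:
            actuator.apply(new)  # change frequency, add/remove VMs or cores
            current = new
```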

5.2. Evaluation Results

5.2.1. Online model update

To evaluate the difference between the base model and the running application, we perform measurements over successive control intervals to see how the system converges to the desired performance and power usage. Figure 4 shows an initial deviation of 368% for a 2-core VM running at 1.4 GHz for constant workload inputs. We can see from the figure that the error is roughly halved at each subsequent control interval (a deviation of 11% at the 5th control interval). For power consumption, similar to performance, the error is reduced by half in each interval. Notably, the initial error for power usage is small (a deviation of 3%). The reason is that the power consumption of a particular configuration at full utilization does not vary greatly between applications. A small difference is expected due to how the (phases of the) application use the different components of the processor [44]. The rest of the results presented here are obtained after this initial tuning of the controller.

Table 1: Cost of SLA violation (dollar)

    Performance conformance (%)    Cost ($)
    >= 100                         0
    99:100                         0.1
    98:99                          0.4
    97:98                          0.8
    96:97                          1.6
    95:96                          3.2
    < 95

[Figure 4: Performance and power usage errors (percentage of error in fps and watt over time).]

5.2.2. Approaches to energy savings

We compared four different policies: changing CPU frequencies, the number of VMs, and the number of cores, against our approach that combines CPU frequency, VM, and core changes. The VM change policy adds/removes VMs of the same size, as described in Section 2 under horizontal scaling. Allocating fixed-size VMs may result in higher performance and energy usage than required. The CPU frequency change policy relates to the CPU frequency scaling management action and changes the CPU frequency of the server. This approach may also result in higher performance and energy usage because of the limited CPU frequency scaling capabilities of a server. The Core change policy implements the third management dimension, i.e., changing the number of cores of a VM. This policy yields more energy savings than the first two policies, as it provides more fine-grained control upon load changes. Whether this also applies to other types of machines depends on the number of cores available in the system, the type of application, and the extent to which a VM can be scaled vertically. The Combined policy combines the VM, core, and CPU frequency policies.

For the VM change policy, we considered 8-core fixed-size VMs running at 2.1 GHz. Three 8-core VMs were used for the CPU frequency change policy. For the core change policy, we considered a VM running at 2.1 GHz CPU frequency. Figure 5 shows the results of these four different energy saving policies. Figure 5(b) shows that the combined approach saves the most energy (up to 34% energy savings) compared to when each policy is applied in isolation, while meeting the performance target (with an average deviation of 8% from the target performance), as shown in Figure 5(a). This is because it has more knobs to control than any of the other policies and thus provides a very fine-grained scaling option that meets the performance target while minimizing energy usage. Table 2 summarizes the target performance deviation and energy usage for all four policies.

[Figure 5: Achieved performance and power for four different policies (Frequency, VM, Core, Combined). (a) Performance: throughput (fps) vs. time (sec), with the target shown. (b) Power usage: power (watt) vs. time (sec).]

[Figure 6: Impact of the performance-energy trade-off, comparing lower and higher power-savings weights. (a) Performance: throughput (fps) vs. time (sec), with the target shown. (b) Power usage: power (watt) vs. time (sec).]

Table 2: Impact of different techniques on performance and power usage.

    Policy       Average deviation from        Average power (Watt)
                 target performance (%)
    Frequency    149                           270
    VM           116                           298
    Core         31                            243
    Combined     8                             196

5.2.3. Performance-Energy trade-off

The extent to which the controller can save energy depends on how much the SLAs can be violated. By varying the values of the constants \alpha (the weight for energy saving) and \beta (the weight for guaranteeing application performance) in Equation (11), we can obtain different energy consumption and performance values, as shown in Figure 6. With a focus on energy minimization (\alpha=0.9, \beta=0.1, \gamma=0), the controller reacts aggressively to minimize energy consumption. With a focus on performance (\alpha=0.1, \beta=0.9, \gamma=0), the controller responds more quickly to performance deviations, resulting in higher energy consumption. Figures 6(b) and 6(a) show up to 4% energy savings while meeting 97% of the performance target, respectively, when setting \alpha to 0.9 and \beta to 0.1. Table 3 shows the target performance deviation and energy usage for the two weights. In general, the most suitable trade-off between saving energy and meeting the performance target is a matter of user preference, but our controller simplifies the handling of these conflicting goals.

Table 3: Impact of the performance-energy trade-off on performance and power usage.

                                          Average deviation from      Average power (watt)
                                          target performance (%)
    Lower energy weight (α=0.1, β=0.9)    1.6                         276
    Higher energy weight (α=0.9, β=0.1)   2.7                         264

5.2.4. Configuration cost

Minimizing energy consumption may cause large oscillations in the system, as evidenced in Figure 6. To deal with larger changes, the controller can be tuned to accomplish the optimization in a stable manner. Figure 7 illustrates the trade-off between controller stability and responsiveness when setting \alpha=0.1 and \beta=0.9, and adjusting the constant \gamma in Equation (11). Figure 7(a) shows the achieved throughput for \gamma values of 0 and 10. For \gamma=0, the controller reacts to workload changes more quickly and aggressively (average deviation of 8% from target), resulting in large oscillations in the number of cores allocated, and thus also in performance. For \gamma=10, the controller does not adjust resource allocations fast enough to always track the performance targets (average deviation of 62% from target) but achieves much more stable operation. Figure 7(b) shows the corresponding changes in the number of cores used for the same experiment. For \gamma=0, the controller makes 233% more core changes than for \gamma=10 during the 27 control interval periods. Table 4 summarizes the target performance deviation and the number of core changes for the two stability weights.

Table 4: Impact of configuration cost on performance and number of core changes.

                                      Average deviation from      No. of core changes
                                      target performance (%)
    Lower stability weight (γ=0)      8                           80
    Higher stability weight (γ=10)    62                          24

In addition to reducing the number of core changes, the controller minimizes VM booting time when making a control decision by differentiating configurations that have different combinations of VM types but the same total number of cores. For instance, the controller selects a 2-core VM, which takes only 6 seconds to boot, instead of selecting two 1-core VMs that boot in 9 seconds.

[Figure 7: Impact of configuration cost on performance and cores, comparing stability weights γ=0 and γ=10. (a) Performance: throughput (fps) vs. time (sec), with the target shown. (b) Cores: number of core changes vs. time (sec).]

[Figure 8: Impact of control interval on performance and number of cores, for control intervals of 30, 60, 90, and 120 sec. (a) Performance: throughput (fps) vs. time (sec), with the target shown. (b) Number of cores used vs. time (sec).]

5.2.5. Impact of control interval

We consider the impact of different control intervals on system responsiveness and stability. The difference becomes greater when there are drastic variations in the workload. Figure 8 shows how the system reacts more quickly to workload changes when the interval is shorter (30 s). For a longer interval (120 s), the system becomes more stable and adapts slowly to changes in the workload. Figures 8(a) and 8(b) show the impact of the different control intervals on performance and the number of core changes, respectively. The 30 second control interval makes 217% more core changes, while reacting more quickly to workload changes (average deviation of 7% from the target performance), than the 120 second control interval, which on average deviates 127% from the target performance. Table 5 shows the target performance deviation and the number of core changes for the four studied control intervals.

Table 5: Impact of control interval length on performance and number of core changes.

    Control      Average deviation from      No. of core changes
    interval     target performance (%)
    30 s         7.4                         108
    60 s         81                          73
    90 s         132                         36
    120 s        127                         34

6. Related Work

Significant research efforts have been expended on applying DVFS to computing systems in order to reduce power consumption while meeting performance constraints [10, 20, 21, 22, 13]. Elnozahy [10] investigates using cluster-level reconfiguration in conjunction with DVFS. Laszewski et al. [20] propose a scheduling algorithm to allocate virtual machines in a DVFS-enabled cluster by dynamically scaling the supplied voltages. Chase et al. [11] consider how to reduce power consumption in data centers by turning servers on and off based on resource demand. Niyato et al. [12] propose an optimal power management scheme that adjusts the number of active servers for maximum energy savings. Lu et al. [13] propose a tracing method for multi-tier services to diagnose energy inefficiency and detect bottlenecks; their corresponding management system combines an offline performance model with online tracing and DVFS. While a considerable amount of power can be saved by turning servers on and off, the main goal of our work is to achieve autonomous resource configuration by online adjustment of a service's capacity according to power usage and users' SLA requirements.

There is also much previous work in the area of elasticity for cloud infrastructures. Work on horizontal elasticity [23, 24, 25] addresses resource provisioning by allocating and releasing virtual server instances of predefined sizes. Regarding vertical elasticity, Kalyvianaki et al. [26] integrate Kalman filters into feedback controllers to dynamically allocate CPU resources to VMs. Dawoud et al. [27] compare the benefits and drawbacks of vertical elasticity with respect to horizontal elasticity and focus on vertical elasticity. The aforementioned works do not address combined vertical and horizontal elasticity. In addition, power consumption is not taken into account in these works.

There are several studies on autonomous resource provisioning in the cloud. Rao et al. [8] apply reinforcement learning to adaptively adjust resource allocations based on SLA conformance feedback. Padala et al. [7] present a self-adaptive automated control policy for virtualized computing systems, in which an auto-regressive moving average model utilizes previous configurations to select future configurations to optimize performance. However, they do not consider power usage in decision making. Stoess et al. [28] propose a power management framework for multi-layer virtualized system architectures, but their work ignores system-wide performance. Jung et al. [29] propose a framework to dynamically coordinate power efficiency and performance based on a heuristic method. Another set of heuristic methods, to schedule virtual machines so as to minimize energy consumption for memory, is proposed by Jang et al. [? ]. Beloglazov et al. [30] propose energy-aware allocation heuristics for the provisioning of data center resources to client applications in a way that improves the energy efficiency of the data center while delivering the negotiated QoS. Zhang et al. [31] propose an adaptive power management framework using reinforcement learning to achieve autonomic resource allocation and optimal power efficiency while fulfilling SLAs.

Control-theoretic techniques have recently shown a lot of promise for power management [15, 16, 19, 45], thanks to characteristics such as accuracy, stability, short settling time, and small overshoot. They also provide better quantitative control analysis when the system suffers unpredictable workload variations. Padala et al. [16] apply a proportional controller to dynamically adjust the CPU shares of VM-based multi-tier web applications. Gao et al. [15] design an integrated multi-input, multi-output performance controller and energy optimizer management solution that takes advantage of both VM resizing and server consolidation to achieve energy efficiency and QoS in virtualized data centers. Minerick et al. [45] apply a feedback controller to adapt the power setting (adjusting the voltage/frequency of a processor) based on previous energy consumption. Although all these works use control theory to manage power consumption, they do not consider both hardware and software solutions in power management.

In contrast to the existing works discussed, we use horizontal elasticity, vertical elasticity, and CPU frequency scaling together, and apply a control approach to power and performance management.

7. Conclusions and Future work

In this paper we present a system that integrates hardware and software management techniques to improve energy efficiency. We combine horizontal and vertical scaling to provide a richer choice for applications with different requirements, and complement them with a hardware approach that changes the server CPU frequency. We experimentally determine a system model that captures the relationship between the inputs (CPU frequency, number of VMs, and number of cores) and the outputs (power consumption and performance). We adopt an online approach to update the system model based on real-time measurements of performance and power usage. By this, the system behavior adapts to the needs of individual applications under different workload conditions. We also present a feedback controller that determines an optimal configuration to minimize energy consumption while meeting the performance target. The controller can be configured to accomplish energy minimization in a stable manner, without causing large oscillations in the resource allocations.

Our experimental evaluation in a video encoding scenario shows that the controller saves the most energy, while still fulfilling SLAs, when the resulting configuration combines CPU frequency changes with changes in the number of cores and VMs. The controller can handle trade-offs between energy minimization and fulfilling SLAs. Finally, we show that the controller can be configured to accomplish the minimization without causing large oscillations in the resource allocations.

For future work, we are investigating strategies for energy and performance management in scenarios where more than one application shares the same server. One promising strategy is to emulate hardware frequency scaling by providing a VM the right amount of resources using the hypervisor's scheduling capability [35]; the change may include controlling CPU shares. The work can also be extended to include resource types like memory and disk. By analyzing the behavior of these resources and their impact on performance and energy consumption, even more energy can be saved. For finding the optimal control action, the current exhaustive search over the 3885 possible configurations takes around 13 ms. With more resource types and more configurations, the current method may be too slow, and other, more efficient optimization techniques, such as branch and bound, may need to be adopted.

Acknowledgement

This work is supported in part by the Swedish Research Council under grant number 2012-5908 for the Cloud Control project. We thank Tomas Forsman for technical assistance regarding virtualization and power usage monitoring and Erik Elmroth for overall feedback.

References

[1] P. Mell and T. Grance, "The NIST definition of cloud computing," Tech. Rep., Accessed: February 2013.
[2] "How many data centers?" Accessed: March 2013, http://www.datacenterknowledge.com/archives/2011/12/14/how-many-data-centers-emerson-says-500000/.
[3] "Google uses over one million servers," Accessed: September 2013, http://www.extremetech.com/extreme/131962-google-compute-engine-for-2-millionday-your-company-can-run-the-third-fastest-supercomputer-in-the-world/.
[4] "Power, pollution and the internet," Accessed: April 2013, http://www.nytimes.com/2012/09/23/technology/data-centers-waste-vast-amounts-of-energy-belying-industry-image.html.
[5] "5 data center trends for 2013," Accessed: March 2013, http://www.informationweek.com/hardware/data-centers/5-data-center-trends-for-2013/240145349.
[6] R. Bianchini and R. Rajamony, "Power and energy management for server systems," Computer, vol. 37, no. 11, pp. 68–74, 2004.
[7] P. Padala, K.-Y. Hou, K. G. Shin, X. Zhu, M. Uysal, Z. Wang, S. Singhal, and A. Merchant, "Automated control of multiple virtualized resources," in EuroSys, W. Schröder-Preikschat, J. Wilkes, and R. Isaacs, Eds. ACM, 2009, pp. 13–26.
[8] J. Rao, X. Bu, C.-Z. Xu, L. Y. Wang, and G. G. Yin, "VCONF: a reinforcement learning approach to virtual machines auto-configuration," in ICAC, S. A. Dobson, J. Strassner, M. Parashar, and O. Shehory, Eds. ACM, 2009, pp. 137–146.
[9] R. Buyya, A. Beloglazov, and J. H. Abawajy, "Energy-efficient management of data center resources for cloud computing: A vision, architectural elements, and open challenges," CoRR, vol. abs/1006.0308, 2010.
[10] E. N. Elnozahy, M. Kistler, and R. Rajamony, "Energy-efficient server clusters," in PACS, ser. Lecture Notes in Computer Science, B. Falsafi and T. N. Vijaykumar, Eds., vol. 2325. Springer, 2002, pp. 179–196.
[11] J. S. Chase, D. C. Anderson, P. N. Thakar, and A. M. Vahdat, "Managing energy and server resources in hosting centers," in Proceedings of the 18th ACM Symposium on Operating Systems Principles (SOSP), 2001, pp. 103–116.
[12] D. Niyato, S. Chaisiri, and L. B. Sung, "Optimal power management for server farm to support green computing," in Proceedings of the 9th IEEE/ACM International Symposium on Cluster Computing and the Grid, ser. CCGRID '09. IEEE Computer Society, 2009, pp. 84–91.
[13] L. Yuan, J. Zhan, B. Sang, L. Wang, and H. Wang, "PowerTracer: Tracing requests in multi-tier services to save cluster power consumption," Computing Research Repository (CoRR), vol. abs/1007.4890, 2010.
[14] "Amazon: Amazon elastic compute cloud," Accessed: November 2013, http://aws.amazon.com/ec2/.
[15] Y. Gao, H. Guan, Z. Qi, B. Wang, and L. Liu, "Quality of service aware power management for virtualized data centers," Journal of Systems Architecture - Embedded Systems Design, vol. 59, no. 4-5, pp. 245–259, 2013.
[16] P. Padala, K. G. Shin, X. Zhu, M. Uysal, Z. Wang, S. Singhal, A. Merchant, and K. Salem, "Adaptive control of virtualized resources in utility computing environments," in Proceedings of the 2nd ACM SIGOPS/EuroSys European Conference on Computer Systems 2007, ser. EuroSys '07. ACM, 2007, pp. 289–302.
[17] B. D. Young, J. Apodaca, L. D. Briceno, J. Smith, S. Pasricha, A. A. Maciejewski, H. J. Siegel, B. Khemka, S. Bahirat, A. Ramirez, and Y. Zou, "Energy-constrained dynamic resource allocation in a heterogeneous computing environment," in ICPP Workshops. IEEE, 2011, pp. 298–307.
[18] R. J. Minerick, V. W. Freeh, and P. M. Kogge, "Dynamic power management using feedback," in Proceedings of the Workshop on Compilers and Operating Systems for Low Power (COLP), 2002.
[19] C. Lefurgy, X. Wang, and M. Ware, "Server-level power control," in Proceedings of the IEEE International Conference on Autonomic Computing (ICAC), 2007.
[20] G. von Laszewski, L. Wang, A. J. Younge, and X. He, "Power-aware scheduling of virtual machines in DVFS-enabled clusters," in CLUSTER. IEEE, 2009, pp. 1–10.
[21] M. Elnozahy, M. Kistler, and R. Rajamony, "Energy conservation policies for web servers," in Proceedings of the 4th USENIX Symposium on Internet Technologies and Systems, 2003.
[22] P. Pillai and K. G. Shin, "Real-time dynamic voltage scaling for low-power embedded operating systems," 2001, pp. 89–102.
[23] G. Lu, J. Zhan, H. Wang, L. Yuan, Y. Gao, C. Weng, and Y. Qi, "PowerTracer: Tracing requests in multi-tier services to diagnose energy inefficiency," in 9th ACM International Conference on Autonomic Computing, 2012, pp. 97–102.
[24] A. Ali-Eldin, J. Tordsson, and E. Elmroth, "An adaptive hybrid elasticity controller for cloud infrastructures," in NOMS. IEEE, 2012, pp. 204–212.
[25] H. C. Lim, S. Babu, and J. S. Chase, "Automated control for elastic storage," in Proceedings of the 7th International Conference on Autonomic Computing, ser. ICAC '10. ACM, 2010, pp. 1–10.
[26] B. Urgaonkar, P. Shenoy, A. Chandra, P. Goyal, and T. Wood, "Agile dynamic provisioning of multi-tier internet applications," ACM Trans. Auton. Adapt. Syst., vol. 3, no. 1, pp. 1:1–1:39, Mar. 2008.
[27] E. Kalyvianaki, T. Charalambous, and S. Hand, "Self-adaptive and self-configured CPU resource provisioning for virtualized servers using Kalman filters," in Proceedings of the 6th International Conference on Autonomic Computing, ser. ICAC '09. ACM, 2009, pp. 117–126.
[28] W. Dawoud, I. Takouna, and C. Meinel, "Elastic virtual machine for fine-grained cloud resource provisioning," in Proceedings of the 4th International Conference on Recent Trends of Computing, Communication & Information Technologies (ObCom 2011), ser. CCIS, no. 0269. Springer, Dec. 2011.
[29] J. Stoess, C. Lang, and F. Bellosa, "Energy management for hypervisor-based virtual machines," in Proceedings of the 2007 USENIX Annual Technical Conference, Jun. 2007.
[30] G. Jung, M. A. Hiltunen, K. R. Joshi, R. D. Schlichting, and C. Pu, "Mistral: Dynamically managing power, performance, and adaptation cost in cloud infrastructures," in ICDCS. IEEE Computer Society, 2010, pp. 62–73.
[31] J.-W. Jang, M. Jeon, H.-S. Kim, H. Jo, J.-S. Kim, and S. Maeng, "Energy reduction in consolidated servers through memory-aware virtual machine scheduling," IEEE Transactions on Computers, vol. 60, no. 4, pp. 552–564, Apr. 2011.
[32] A. Beloglazov, J. Abawajy, and R. Buyya, "Energy-aware resource allocation heuristics for efficient management of data centers for cloud computing," Future Gener. Comput. Syst., vol. 28, no. 5, pp. 755–768, May 2012.
[33] Z. Zhang, Q. Guan, and S. Fu, "An adaptive power management framework for autonomic resource configuration in cloud computing infrastructures," in IPCCC. IEEE, 2012, pp. 51–60.
[34] A. Benoit, R. G. Melhem, P. Renaud-Goud, and Y. Robert, "Assessing the performance of energy-aware mappings," Parallel Processing Letters, vol. 23, no. 2, 2013.
[35] K. J. Åström and R. M. Murray, Feedback Systems: An Introduction for Scientists and Engineers. Princeton University Press, 2008.
[36] A. Kansal, F. Zhao, J. Liu, N. Kothari, and A. A. Bhattacharya, "Virtual machine power metering and provisioning," in Proceedings of the 1st ACM Symposium on Cloud Computing, ser. SoCC '10. ACM, 2010, pp. 39–50.
[37] R. Nathuji and K. Schwan, "VirtualPower: coordinated power management in virtualized enterprise systems," in Proceedings of the Twenty-first ACM SIGOPS Symposium on Operating Systems Principles, ser. SOSP '07. ACM, 2007, pp. 265–278.
[38] W.-c. Feng, A. Ching, and C.-h. Hsu, "Green supercomputing in a desktop box," in 3rd IEEE Workshop on High-Performance, Power-Aware Computing (in conjunction with the 21st IPDPS), March 2007.
[39] W.-c. Feng, X. Feng, and R. Ge, "Green supercomputing comes of age," IT Professional, vol. 10, no. 1, pp. 17–23, 2008.
[40] "Cpufrequtils," Accessed: December 2012, http://packages.debian.org/sid/cpufrequtils.
[41] E. Elmroth and J. Tordsson, "A grid resource broker supporting advance reservations and benchmark-based resource selection," vol. 3732. Springer, 2004, pp. 1061–1070.
[42] A. Lastovetsky, Parallel Computing on Heterogeneous Networks. John Wiley & Sons, 2003.
[43] Y. C. Lee, C. Wang, A. Y. Zomaya, and B. B. Zhou, "Profit-driven service request scheduling in clouds," in CCGRID. IEEE, 2010, pp. 15–24.
[44] "x264," Accessed: May 2013, http://www.videolan.org/developers/x264.html.
[45] "Bootchart," Accessed: February 2013, http://www.bootchart.org/.
[46] A. Kansal, F. Zhao, J. Liu, N. Kothari, and A. A. Bhattacharya, "Virtual machine power metering and provisioning," in Proceedings of the 1st ACM Symposium on Cloud Computing, ser. SoCC '10. ACM, 2010, pp. 39–50.
[47] M. Marzolla and R. Mirandola, "Dynamic power management for QoS-aware applications," Sustainable Computing: Informatics and Systems, vol. 3, no. 4, pp. 231–248, 2013.

Selome Kostentinos Tesfatsion is currently a Ph.D. student at the Department of Computing Science, Umeå University. In 2013 she received her M.S. degree in Computing Science from the same university. Her main research interests are energy management, cloud computing, virtualization, performance modeling, and data center management.

Eddie Wadbro received his M.Sc. degree in Mathematics at Lund University in 2004 and his Licentiate and Ph.D. degrees in Scientific Computing at Uppsala University in 2006 and 2009, respectively. Since 2009, he works as an assistant professor at the Department of Computing Science, Umeå University. His research interests concern mathematical modeling, optimization, as well as the development and analysis of efficient numerical methods for differential equations.

Johan Tordsson is Assistant Professor at Umeå University, from which he also received his PhD in 2009. After a period as a visiting postdoc researcher at Universidad Complutense de Madrid, he worked for several years in the RESERVOIR, VISION Cloud, and OPTIMIS European projects, in the latter as Lead Architect and Scientific Coordinator. Tordsson's research interests include autonomic resource management for cloud and grid computing infrastructures as well as virtualization.
