Journal of Network and Computer Applications 80 (2017) 165–180
Virtual machine management system based on the power saving algorithm in cloud
Chao-Tung Yang*, Jung-Chun Liu, Shuo-Tsung Chen, Kuan-Lung Huang
Department of Computer Science, Tunghai University, Taichung City 40704, Taiwan ROC
* Corresponding author. E-mail addresses: [email protected] (C.-T. Yang), [email protected] (J.-C. Liu), [email protected] (S.-T. Chen), [email protected] (K.-L. Huang).

Keywords: Power saving, VM management, Live migration, Virtualization, Cloud, IaaS

Abstract
This work uses open source code and PHP web programming to implement a resource management system with a power-saving method for virtual machines. We propose a system integrated with open source software, such as KVM and Libvirt, to construct a virtual cloud management platform. This system can detect the status of cloud resources via SNMP, calculate the operating efficiency of the overall system, allocate virtual machines through live migration technology, and turn off surplus machines in the cloud to save energy. Based on the proposed power-saving method, we have constructed a power-efficient virtualization management platform in the cloud. Our objective is to provide enterprises and end users with power-saving private cloud solutions. We have also built a web page that allows users to easily access and control the cloud virtualization resources, i.e., users can manage virtual machines and monitor the status of resources via the web interface. From analysis of the experimental results of live migration of virtual machines, this work demonstrates that efficient use of hardware resources is realized by the power-saving method, and the aim of power saving for cloud computing is achieved.
1. Introduction

Cloud computing, one of today's hottest topics, is developing very fast. Many large cloud companies, such as Google, Amazon, and Yahoo, offer multiple cloud services and have a large number of users. In order to provide these services, the cloud infrastructure grew nearly 42.4% in 2012 and was expected to grow nearly 47.3% in 2013. A large cloud infrastructure usually includes cloud servers and cloud storage. The global cloud infrastructure is estimated to consume about 30 billion kilowatts of electricity in one hour, but 90% of it is wasted due to inefficient use of cloud resources; hence, how to reduce wasted electricity consumption in the cloud infrastructure is a topic worth exploring. Cloud computing is a concept in which computers over networks cooperate with each other to provide far-reaching network services (CloudCom, 2012; Endo et al., 2010; Wikipedia, 2013a; Figuerola et al., 2009; Nurmi et al., 2009; Wang et al., 2013). The basic approach of cloud computing, computing through the Internet from terminal devices, moves hardware, software, and shared information from the user to the server side; in this way, the waste of redundant resources on personal computers is avoided, improving computing and resource efficiency (Yang et al., 2011; Zhao and Huang, 2009; Alhamazani et al.; Willmann et al., 2007; Wang
et al., 2012; Whitaker et al., 2005; Wikipedia, 2013c; Rosenblum and Garfinkel, 2005). For the increasingly high demand on the cloud today, a typical application of cloud computing is the large-scale data center, which operates with a tremendous amount of power consumption (Adams and Agesen, 2006; Yang et al., 2006; Dong et al., 2006; Raj and Schwan, 2007; Uhlig et al., 2005; KVM, 2013; Kivity et al., 2007; Wikipedia, 2013d; Barham et al., 2003). The power consumption of large data centers today accounts for about 0.5% of global carbon emissions; with the deepening of cloud computing in the future, it is predicted to reach 2%, which represents the carbon emissions of the large data centers of some large enterprises, and may even exceed 2% for some countries. Such a huge amount of power consumption, especially with today's emphasis on power conservation and carbon reduction, is a major problem that cannot be ignored. How to sustain the growth of cloud computing technology while also taking the efficiency of power usage into account is the main research direction of this work. Energy demand issues in the cloud environment cannot be ignored (Yamagiwa and Uehara, 2012; Valentini et al., 2013; Kim et al., 2012; Lin et al., 2011; Khan et al., 2013; Corradi et al., 2011; Banerjee et al., 2012; Huang et al., 2011; Wu and Wang, 2010, 2010; Hasebe et al., 2010; Witkowski et al., 2013; Wikipedia, 2013b; Wang and Ullah Khan, 2013; Chen et al., 2013; Yang and Huang, 2013). Through live
migration of the virtual machine (VM) technology, the need to suspend and restart servers for operations is eliminated and the purpose of power saving is achieved. However, if one focuses only on the VM side, the quality of service in the cloud environment is degraded, contrary to the intent of cloud computing. In the commercial cloud computing environment, emphasis is on the service-level agreement (SLA). An effective management tool can help the service provider and the client using the services satisfy each other's service-level expectations; in other words, it can help enterprises to construct channels of communication with communication plans to establish consistency and reduce conflicts, and methods to measure service performance (Sempolinski and Thain, 2010; Moreno-Vozmediano et al., 2009; libvirt, 2013; Wikipedia, 2013e; Clark et al., 2005; Zhang and Dong, 2008; Milojičić et al., 2000; Hansen and Jul, 2000; Sotomayor et al., 2009). In Yang and Huang (2013), an architecture design for virtual machine management in the cloud is implemented. The authors applied open source code and PHP web programming to implement a resource management system integrated with open source software, such as KVM and Libvirt, to construct a virtual cloud management platform. Their system can detect the status of cloud resources via SNMP and turn off surplus machines in the cloud to save energy. However, the power consumption model for migration and the corresponding power-saving method are not presented. In addition, they do not explain how to establish the main functions: the status monitoring function, the power consumption recording function, and the power-saving method function. This work uses open source code and PHP web programming to implement a resource management system with a power-saving method for virtual machines. We propose a system integrated with open source software, such as KVM and Libvirt, to construct a virtual cloud management platform. This system can detect the status of cloud resources via SNMP, calculate the operating efficiency of the overall system, allocate virtual machines through live migration technology, and turn off surplus machines in the cloud to save energy. Through the implementation of the status monitoring function, the power consumption recording function, and the power-saving method function, we construct a power-efficient virtualization management platform in the cloud. Moreover, the power consumption model for migration and the corresponding power-saving method are presented. We also provide a web page that allows users to easily access and control the cloud virtualization resources, i.e., users can manage virtual machines and monitor the status of resources via the web interface. Finally, the experimental results show the relationship between a VM's vCPU and migration time, the relationship between a server's CPU loading and migration, and the effect of applying live migration on VMs to avoid waste of electricity. The experimental results show that the proposed power-saving method can successfully detect cloud environment resource usage in realistic usage scenarios, turn off unnecessary servers to reduce power waste in the case of low resource use, and maintain a certain degree of cloud service quality through rapid deployment of resources; indeed, it is shown to achieve a certain degree of energy saving. The rest of the paper is organized as follows.
Section 2 describes some background information, including cloud computing, virtualization, live migration, and hardware information measurement method. Section 3 introduces our experimental environment and methods, and the overall architecture. Section 4 presents and analyses experimental results. Finally, Section 5 summarizes this article by pointing out its major contributions and directions for future work.
2. Background review and related work

This section reviews background and related work for later use.

2.1. Virtualization

With virtualization, the computer's physical resources, such as servers, network, memory, and storage, are presented abstractly after conversion, so that users can apply those resources in a better way than in the original configuration. Virtualization commonly refers to virtualizing resources such as computing power and data storage; in this paper, virtualization refers specifically to server virtualization. Server virtualization technology uses the hardware of one or more hosts and can flexibly configure virtual hardware platforms and operating systems that behave like real hardware. In this way, a variety of different operating environments (for example, Windows and Linux) can operate simultaneously on the same physical host and remain as independent as if they were running on different physical hosts. Virtualization solutions can be broadly divided into three categories: full virtualization, para-virtualization, and hardware-assisted virtualization.

2.1.1. Full virtualization

In full virtualization, the hypervisor replaces the traditional operating system kernel at the Ring 0 privilege level and relies on binary translation (Fig. 1). Guest operating system instructions that require direct access to the hardware at the Ring 0 level are translated by the hypervisor, which then submits the requests to the hardware. The hypervisor virtualizes all hardware components, so the guest operating system works like an individual real host, with a high degree of independence; however, binary translation significantly decreases VM performance. One example of software using this technology is VMware Workstation. Full virtualization depends entirely on the constructed virtual hardware layer: the guest OS can use hardware resources only within the limits of the virtual hardware and is less able to misuse the physical computer's hardware. In addition to the central processor and memory, devices such as graphics cards can be virtualized if the proprietary driver is installed first. The benefit of full virtualization is that, regardless of the hardware environment, the guest OS maintains consistent compatibility and can use an operating system different from that of the physical machine. The disadvantage of full virtualization is that it places a heavy burden on the physical machine.
Fig. 1. Full virtualization.
2.1.2. Para-virtualization

Para-virtualization, also known as parallel virtualization and exemplified by Xen, does not virtualize all hardware devices. Compared with a fully virtualized VM that executes instructions via binary translation, a para-virtualized VM invokes hardware devices through Dom0, which manages each VM's access to physical resources. VM performance is significantly improved, but the hardware drivers are bound in Dom0 and the operating system kernel in the VM must be specially modified; thus the independence of the VM is relatively low. Para-virtualization does not configure a virtual layer on top of the hardware, but instead uses program calls to more than one memory location at different times. Para-virtualization allows the guest OS to share hardware resources (Fig. 2). The advantage is that no performance is wasted on a virtual hardware layer; the drawback is that the operating system in the VM must be consistent with the physical environment.

Fig. 2. Para-virtualization.
2.1.3. Hardware-assisted virtualization

Hardware-assisted virtualization, as shown in Fig. 3, is used to overcome the dilemma that the VM operating system kernel cannot be placed at the processor's Ring 0 privilege level. Since both the hypervisor and the virtual operating system kernel can run in Ring 0, instructions of the virtual operating system that the hypervisor needs to deal with are automatically intercepted and processed directly by the hardware; therefore, neither the binary translation of full virtualization nor the hypercall operations of para-virtualization are needed. Hardware-assisted virtualization essentially eliminates the difference between full virtualization and para-virtualization: built on hardware assistance, both full-virtualization and para-virtualization solutions give VMs high performance and independence. Representative hardware-assisted solutions are VMware ESXi and the open source Kernel-based Virtual Machine (KVM). KVM needs a host operating system whose kernel provides virtualization services, whereas VMware ESXi is the equivalent of a specialized thin virtualization operating system installed directly on a physical machine.

Fig. 3. Hardware-assisted virtualization.
2.2. Hypervisor

Software that builds and manages a virtualized environment is collectively referred to as a hypervisor (or Virtual Machine Monitor, VMM). Depending on the virtualization solution, there are different choices of hypervisor:

2.2.1. KVM

KVM, as illustrated in Fig. 4, is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, which provides the core virtualization infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko. KVM also requires a modified QEMU, although work is in progress to get the needed changes upstream. Using KVM, one can run multiple VMs running unmodified Linux or Windows images. Each VM has private virtualized hardware: a network card, disk, graphics adapter, etc.

Fig. 4. Kernel-based virtual machine.
2.2.2. Xen

Xen is a native or bare-metal hypervisor. It runs in a more privileged CPU state than any other software on the machine. Responsibilities of the hypervisor include memory management and CPU scheduling of all VMs (or domains), and launching the most privileged domain, i.e., Dom0, the only VM which by default has direct access to the hardware. From Dom0 the hypervisor can be managed, and unprivileged domains (so-called DomU) can be launched. The Dom0 domain is typically a modified version of Linux, NetBSD or Solaris. User domains may either be unmodified open-source or proprietary operating systems, such as Microsoft Windows (if the host processor supports x86 virtualization, e.g., Intel VT-x and AMD-V), or modified, para-virtualized operating systems with special drivers that support enhanced Xen features (Fig. 5). On x86, Xen with a Linux Dom0 runs on Pentium Pro or newer processors. Xen boots from a boot loader such as GNU GRUB, and then usually loads a para-virtualized host operating system into the host domain (Dom0).

Fig. 5. Xen.
2.3. Virtual machine management

To set up a standard cloud service, we often need more than one VM. When a large number of VMs are created through virtualization technology, they are difficult to manage with native instructions, and a VM management platform is needed. As shown in Fig. 6, a virtual management platform usually allows a VM to be created, edited, switched, paused, resumed, deleted, and live migrated. Next, we will introduce several popular open source virtualization management platforms. We also provide a friendly VM building process through a network interface, which is more suitable for monitoring a large number of VM states and makes it easier to manage account permissions, among other advantageous features.

Fig. 6. Virtual machine management.
2.3.1. Live migration

Live migration, as shown in Fig. 7, refers to the process of moving a running VM or application between different physical machines without disconnecting the client or application. Memory, storage, and network connectivity of the VM are transferred from the original host machine to the destination. Two techniques for moving the VM's memory state from the source to the destination are pre-copy memory migration and post-copy memory migration.

Fig. 7. Live migration.
• Warm-up phase: In pre-copy memory migration, the hypervisor typically copies all the memory pages from source to destination while the VM is still running on the source. If some memory pages change (become dirty) during this process, they will be re-copied until the rate of re-copied pages is not less than the page dirtying rate.
• Stop-and-copy phase: After the warm-up phase, the VM is stopped on the original host, the remaining dirty pages are copied to the destination, and the VM is resumed on the destination host. The time between stopping the VM on the original host and resuming it on the destination is called down-time, and ranges from a few milliseconds to seconds according to the size of memory and the applications running on the VM. There are some techniques to reduce live migration down-time, such as using the probability density function of memory changes.
• Post-copy memory migration: Post-copy VM migration is initiated by suspending the VM at the source. With the VM suspended, a minimal subset of the execution state of the VM (CPU registers and non-pageable memory) is transferred to the target. The VM is then resumed at the target, even though most of the memory state of the VM still resides at the source. At the target, when the VM tries to access pages that have not yet been transferred, it generates page faults.
These faults are trapped at the target and redirected towards the source over the network. Such faults are referred to as network faults. The source host responds to each network fault by sending the faulted page. Since each page fault of the running VM is redirected towards the source, this technique can degrade the performance of applications running inside the VM. However, pure demand paging accompanied by techniques such as pre-paging can reduce this impact to a great extent.

2.4. SNMP

Simple Network Management Protocol (SNMP) is an Internet-standard protocol for managing devices on IP networks. Devices that typically support SNMP include routers, switches, servers, workstations, printers, modem racks, and more. It is used mostly in network management systems to monitor network-attached devices for conditions that warrant administrative attention. SNMP is a component of the Internet Protocol Suite as defined by the Internet Engineering Task Force (IETF). It consists of a set of standards for network management, including an application layer protocol, a database schema, and a set of data objects. SNMP exposes management data in the form of variables on the managed systems, which describe the system configuration. These variables can then be queried and sometimes set by managing applications. In typical SNMP use, one or more administrative computers, called managers, have the task of monitoring or managing a group of hosts or devices on a computer network. Each managed system executes, at all times, a software component called an agent which reports information via SNMP to the manager. Essentially, SNMP agents expose management data on the managed systems as variables. The protocol also permits active management tasks, such as modifying and applying a new configuration through remote modification of these variables. The variables accessible via SNMP are organized in hierarchies. These hierarchies, and other metadata (such as the type and description of each variable), are described by Management Information Bases (MIBs).
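To make the variable-based model concrete, the short PHP sketch below queries a few standard MIB-II and UCD-SNMP objects from a managed host using the php-snmp extension; the host name and community string are placeholders, and the snippet only illustrates the query mechanism rather than reproducing code from our platform.

<?php
// Illustrative SNMP query (assumes the php-snmp extension and an snmpd agent
// on the target host that allows reads with the given community string).
$host      = "computing-node-1";   // placeholder host name
$community = "public";             // placeholder read-only community

$sysDescr = snmpget($host, $community, ".1.3.6.1.2.1.1.1.0");      // sysDescr.0
$uptime   = snmpget($host, $community, ".1.3.6.1.2.1.1.3.0");      // sysUpTime.0
$load     = snmpwalk($host, $community, ".1.3.6.1.4.1.2021.10.1.3"); // laLoad table

echo "Description: $sysDescr\n";
echo "Uptime:      $uptime\n";
echo "Load (1/5/15 min): " . implode(" ", $load) . "\n";
?>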
2.5. PDU

A power distribution unit (PDU), as shown in Fig. 8, is a device with multiple outlets designed to distribute electric power, especially to racks of computers and networking equipment located within the data center. The term PDU may refer to two major classes of hardware power devices. The first, and typically the general unqualified term, refers to the category of relatively higher-cost floor-mounted power distribution devices which transform one or more larger-capacity raw power feeds into any number of lower-capacity distributed power feeds. These floor-mounted PDU devices are typically composed of transformers and circuit breakers and may optionally include monitoring controllers using protocols such as Modbus or SNMP. In a typical data center, for example, there would be relatively few of these floor-mounted PDU devices, located along the walls or in central locations for larger spaces. Each floor-mounted PDU would feed a much larger number of racks and rows of racks. The second class of device is a much smaller and lower-cost device with multiple appliance outlets designed to distribute electric power within a rack, especially to computers and networking equipment located within a data center. The second type of PDU is sometimes called a smart PDU, rack-based PDU, intelligent PDU, or simply a power strip by various IT professionals.
Fig. 8. PDU.
2.6. Power-saving related work

In the past years, the research field of green and low-power networking infrastructure has been of great importance for both service/network providers and equipment manufacturers. The emerging cloud computing technology can be used to increase the utilization and efficiency of hardware equipment, thus potentially reducing global CO2 emissions. Chang and Wu (2010) proposed a virtual network architecture for cloud computing. Their virtual network can provide communication functions for virtual resources in cloud computing. They designed an energy-aware routing algorithm for virtual routers and an efficient method for setting up the virtual network to fulfill the objective of building a green virtual network in cloud computing. Baliga et al. (2011) presented an analysis of energy consumption in cloud computing. The analysis considers both public and private clouds, and includes energy consumption in switching and transmission as well as data processing and data storage. They show that energy consumption in transport and switching can be a significant percentage of total energy consumption in cloud computing. Cloud computing can enable more energy-efficient use of computing power, especially when the computing tasks are of low intensity or infrequent. However, under some circumstances cloud computing can consume more energy than conventional computing, where each user performs all computing on his personal computer. These two papers also aim for power saving. They proposed methods with good performance on the network, and their formulas can be used to calculate the methods' effect on power saving. This work also proposes a similar formula to mathematically calculate the power-saving results. Kim et al. (2011) suggested a model for estimating the energy consumption of each VM without dedicated measurement hardware. Their model estimated the energy consumption of a VM based on in-processor events generated by the VM. Based on this estimation model, they also proposed a VM scheduling algorithm that can provide computing resources according to the energy budget of each VM. The suggested schemes were implemented in the Xen virtualization system, and an evaluation showed that the suggested schemes estimated and provided energy consumption with errors of less than 5% of the total energy consumption. The power-saving method of that paper is quite similar to that in this work; it calculates the resources required for the VM and then schedules the VM to allocate operational resources. The power-saving principle was confirmed again in the experiments of this work: the power consumption of servers and the server CPU loading have a linear relationship. The experiment proved that it could achieve up to a 5% power-saving effect; if combined with the power-saving method proposed in this work, a power-saving effect of more than 20% is expected. To better manage the power consumption of web services in cloud computing with dynamic user locations and behaviors, Wu et al. (2013) proposed a power budgeting design based on the logical level, using a distribution tree. By setting multiple trees, they can differentiate and analyze the effect of workload types and SLAs, e.g. the response time, in terms of power characteristics. Based on these, they introduce classified power capping for different services as the control reference to maximize power saving when there are mixed workloads. They also used a scheduling approach to control the operation of the VMs, and achieved a power-saving effect. In particular, they took into account the characteristics of the SLA, as is also done in this article. To achieve power savings by controlling resource utilization, it is necessary to consider the quality of service and user experience. Consolidation of applications in cloud computing environments presents a significant opportunity for energy optimization.
As a first step toward enabling energy-efficient consolidation, Srikantaiah et al. (2008) studied the inter-relationships between energy consumption, resource utilization, and performance of consolidated workloads. The study reveals the energy-performance trade-offs for consolidation and shows that optimal operating points exist. They modelled the consolidation problem as a modified bin packing problem and illustrated it with an example. They outlined the challenges in finding effective solutions to the consolidation problem. Zhong et al. (2010) investigated the possibility of allocating the VMs in a flexible way to permit the maximum usage of physical resources. They used an Improved Genetic Algorithm (IGA) for the automated scheduling policy. The IGA uses the shortest genes and introduces the idea of dividend policy in economics to select an optimal or sub-optimal allocation for the VM requests. The simulation experiments indicated that the dynamic scheduling policy performs much better than those of the Eucalyptus, OpenNebula, and Nimbus IaaS clouds, etc. The tests illustrated that the speed of the IGA is almost twice that of the traditional GA scheduling method in a grid environment and that the utilization rate of resources is always higher than in the open-source IaaS cloud systems. Beloglazov and Buyya (2010) proposed an energy-efficient resource management system for virtualized cloud data centers that reduces operational costs and provides the required Quality of Service (QoS). Energy savings are achieved by continuous consolidation of VMs according to the current utilization of resources, the virtual network topologies established between VMs, and the thermal state of computing nodes. Choon Lee and Zomaya (2012) presented two energy-conscious task consolidation heuristics, which aim to maximize resource utilization and explicitly take into account both active and idle energy consumption. Their heuristics assign each task to the resource on which the energy consumption for executing the task is explicitly or implicitly minimized without degrading the performance of that task. The above four papers capture resource usage in a way that is especially worthy of reference. This work uses SNMP to retrieve the desired information, but this method is not limited to only one realization. There are a variety of open source software tools to monitor resource status over the network, such as vmstat, Monit, and Monitorix. These are good tools that can be chosen for testing in the experiment.
3. System design and implementation

This section introduces the design of the proposed power-saving-based virtual machine management system and its implementation. The system architecture is first presented and then its design flow is discussed. Moreover, the power consumption model for migration and the corresponding power-saving method are presented. In addition, we implement the proposed system and give an example of power saving.

3.1. System architecture

The system architecture is shown in Fig. 9. The management node consists of the main monitoring function, the applied method, the judgment function of the shared storage system, the VM management platform, and the user interface. We used two computing nodes running VMs, and connected them to the PDU to record power consumption. In order to successfully perform live migration, the storage of the two computing nodes is connected to a shared storage via NFS.

3.2. Design flow

In the experiments we used the KVM virtualization technology to build ten VMs on three servers (one for the management node, the other two for computing nodes), and SNMP to monitor the operation of the two computing nodes and the CPU and memory resource usage of the ten VMs. We gradually increased the CPU usage of the VMs, and used the cpuburn program to simulate various degrees of usage of the cloud computing environment. The states of the machines were automatically acquired by a PHP program and SNMP. Fig. 10 shows the flow chart of the power-saving method. When the resources required by the VMs were less than those supported by the currently running servers, the VMs could be consolidated onto some servers through the Libvirt dynamic migration instructions, and non-essential servers were turned off in order to achieve the goal of energy saving. If the VM CPU or memory usage increased, the number of servers in operation should always be sufficient to maintain the quality of service; if the current CPU or memory resources of the running servers were not sufficient to maintain the quality of service, some standby servers were selected to join the computing cluster, and appropriate transfers of VMs to the new servers were made through the Libvirt dynamic migration instruction. Based on the power-saving method, the system determines whether the current status of resource usage is well balanced or not. The following subsections focus on the description of several key programs.
Fig. 9. System architecture.
Fig. 10. Flow chart of the power-saving method.
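Consolidation in the design flow relies on Libvirt's live migration. As a rough sketch of how a migration could be triggered from the management node, the PHP fragment below simply shells out to the virsh command; it assumes shared storage and password-less SSH between the nodes (as in our setup), and the domain and host names are placeholders rather than the exact code of our platform.

<?php
// Sketch: trigger a KVM live migration through virsh. Assumes the disk image
// is on shared storage and that SSH between nodes needs no password.
function live_migrate($domain, $targetHost) {
    $dest = "qemu+ssh://" . $targetHost . "/system";
    $cmd  = sprintf("virsh migrate --live %s %s 2>&1",
                    escapeshellarg($domain), escapeshellarg($dest));
    $start = microtime(true);
    exec($cmd, $output, $status);            // run virsh and capture its output
    $elapsed = microtime(true) - $start;     // rough wall-clock migration time
    return array($status === 0, $elapsed, implode("\n", $output));
}

list($ok, $seconds, $log) = live_migrate("vm01", "computing-node-2");  // placeholders
echo $ok ? "migrated in {$seconds} s\n" : "migration failed:\n{$log}\n";
?>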
3.3. System implementation

For the implementation of our system design, we use the PHP language to establish three main functions: the status monitoring function, the power consumption recording function, and the power-saving method function. The three functions are described in detail in the following subsections.
3.3.1. Status monitoring

To determine whether to shut down or wake up servers for the current environment, the operating states of the VMs and servers are needed. These data were retrieved via SNMP, which can access many types of information. As shown in Fig. 11, data are obtained through SNMP commands. In order to capture information on each VM, Libvirt was used to obtain the list of VMs on each server, as shown in Fig. 12. Since Libvirt supports PHP APIs, one can write PHP code for it very quickly. The status monitoring function was developed in the PHP programming language to obtain status data via SNMP, including the CPU and memory usage of the physical machines and the VMs. SNMP programs were installed to capture and transmit data. After installing SNMP, the snmpd.conf file in the directory /etc/snmp/ is modified to send the node's SNMP data to the management node, so that the application running on the management node can acquire the operating states of the two computing nodes with VMs. The following algorithm shows these procedures.
Fig. 11. Capture resource's status by SNMP.

Fig. 12. Display VM list on servers by libvirt.
Algorithm 1. STATUS MONITORING(c)

for i ← 1 to number of computing nodes x do
    connect to computing node i
    get the libvirt domain list and calculate the number of VMs y
    for o ← 1 to number of VMs y do
        get VM o's CPU usage via SNMP
        get VM o's RAM usage via SNMP
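A possible PHP rendering of Algorithm 1 is sketched below. It assumes that each VM runs an snmpd agent readable by the management node (as configured above in snmpd.conf), that VM names resolve to reachable addresses, and that virsh is available for listing domains; the node names, community string, and OIDs are illustrative assumptions, not the exact monitoring code.

<?php
// Sketch of the status-monitoring loop (Algorithm 1).
$nodes     = array("computing-node-1", "computing-node-2");  // placeholder names
$community = "public";                                        // placeholder community
$status    = array();

foreach ($nodes as $node) {
    // List the running domains on this node through libvirt's virsh CLI.
    $out = shell_exec("virsh -c qemu+ssh://$node/system list --name");
    $vms = array_filter(array_map('trim', explode("\n", (string)$out)));

    foreach ($vms as $vm) {
        // UCD-SNMP-MIB objects: 1-minute load average and memory figures.
        $cpu   = @snmpget($vm, $community, ".1.3.6.1.4.1.2021.10.1.3.1"); // laLoad.1
        $free  = @snmpget($vm, $community, ".1.3.6.1.4.1.2021.4.6.0");    // memAvailReal.0
        $total = @snmpget($vm, $community, ".1.3.6.1.4.1.2021.4.5.0");    // memTotalReal.0
        $status[$node][$vm] = array("cpu" => $cpu,
                                    "ram_free" => $free,
                                    "ram_total" => $total);
    }
}
print_r($status);   // later consumed by the power-saving method
?>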
3.3.2. Power consumption recording

In the proposed system, we retrieved the servers' power consumption data via the PDU. Fig. 13 shows the web interface provided by the PDU. Fig. 14 shows the type of data retrieved, including RMS current, power factor, and voltage of the AC power supply. The PDU data can also be obtained via SNMP. Fig. 15 shows a screenshot of retrieving the information on the console. The power consumption recording function, developed in the PHP programming language, is an automatic recording program, pdu_recorder.php, running on the management node. The PDU supports SNMP to transmit electricity consumption data in the experiment. Through SNMP, the automatic recording program on the management node collects the PDU power consumption data of the computing nodes every minute and records them in the pdurecord.txt file. The following algorithm shows these procedures.
Fig. 13. PDU's web interface.
Fig. 15. Capturing PDU's data by SNMP.
Algorithm 2. POWER RECORDING(c)

while in the recording range do
    if second = 0 then
        get PDU information via SNMP
        write it into pdurecord.txt
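A minimal PHP sketch of pdu_recorder.php in this spirit is shown below; the PDU address and the OID of the power reading are placeholders, since the actual object identifiers depend on the PDU vendor's MIB.

<?php
// Sketch of the once-a-minute PDU recording loop (Algorithm 2).
$pdu       = "pdu.local";                 // placeholder PDU address
$community = "public";                    // placeholder community string
$powerOid  = ".1.3.6.1.4.1.99999.1.1.0";  // placeholder OID from the vendor MIB

while (true) {
    if ((int)date("s") === 0) {                       // top of the minute
        $watts = @snmpget($pdu, $community, $powerOid);
        $line  = date("Y-m-d H:i") . " " . $watts . "\n";
        file_put_contents("pdurecord.txt", $line, FILE_APPEND);
        sleep(1);        // do not record twice within the same second
    }
    usleep(200000);      // check the clock five times per second
}
?>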
Fig. 14. PDU's web interface in details.
3.3.3. Power-saving method

The main function of the power-saving method is to obtain and analyze the resource usage information of servers and VMs through the resource status monitoring program. If the required resources of the VMs are less than those supported by the currently running servers, the VMs are consolidated through Libvirt live migration instructions and some servers are shut down to achieve the goal of energy saving. As the resource requirement of the whole system increases, some standby servers may be awakened to join the computing cluster, and appropriate live migrations of VMs are performed among the operating servers. The following algorithm shows these procedures.
Algorithm 3. POWER-SAVING METHOD(c)

sumvm ← sum of all VMs' computation
sumhost ← sum of all hosts' computation
maxvmn ← VM with the maximum computation on host n
maxhost ← host with the largest sum of VM computation
minhost ← host with the smallest sum of VM computation
minsumhost ← host with the smallest computation sum
if sumvm > sumhost then
    start a host that is in shutdown status
    migrate maxvm(maxhost) to the host just started
else if sumvm < sumhost − minsumhost then
    migrate all VMs on minhost to other hosts
    shutdown minhost
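The decision logic of Algorithm 3 could be prototyped as below. The sketch assumes the monitoring step already produced per-VM CPU demand figures, only returns a plan (the actual migration, wake-on-LAN, and shutdown commands are left to the caller), and slightly simplifies the original rule by treating all hosts as having the same capacity.

<?php
// Sketch of the merge/expand decision in Algorithm 3.
// $vmLoad[$host][$vm] = CPU demand of a VM (percent of one host),
// $capacity = CPU one host can supply, $standby = powered-off hosts.
function decide(array $vmLoad, $capacity, array $standby) {
    $perHost = array_map('array_sum', $vmLoad);    // demand per running host
    $sumVm   = array_sum($perHost);                // total VM demand
    $sumHost = $capacity * count($vmLoad);         // total supply of running hosts

    if ($sumVm > $sumHost && $standby) {
        // Expand: wake one standby host and move the busiest VM of the
        // busiest host onto it.
        $maxHost = array_search(max($perHost), $perHost);
        $maxVm   = array_search(max($vmLoad[$maxHost]), $vmLoad[$maxHost]);
        return array("action" => "expand", "boot" => $standby[0],
                     "migrate" => array($maxVm => $standby[0]));
    }

    $minHost = array_search(min($perHost), $perHost);
    if (count($vmLoad) > 1 && $sumVm < $sumHost - $capacity) {
        // Merge: the remaining hosts can absorb the least-loaded host, so
        // drain all of its VMs and shut it down.
        return array("action" => "merge", "shutdown" => $minHost,
                     "migrate" => array_keys($vmLoad[$minHost]));
    }
    return array("action" => "none");
}

// Example: two running hosts and one standby host.
$plan = decide(array("node1" => array("vm01" => 40, "vm02" => 30),
                     "node2" => array("vm03" => 10)),
               100, array("node3"));
print_r($plan);   // merge: move vm03 away and shut node2 down
?>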
3.4. Power consumption model in migration and its power-saving method

This subsection focuses on the relationship between the additional operating costs and the efficiency of the power-saving method. In the design flow, we turn off computing nodes to achieve the purpose of power saving. However, the transfer of VMs among computing nodes results in overheads of extra time and power consumption. To reduce them, we formulate a model of the operating costs and, based on it, propose our power-saving method. The net efficiency of the power-saving method is found by subtracting the operating costs of the method from its expected benefit. The following equations and variables are used to define the operating costs.
3.4.1. Migration costs

The migration cost is the cost of performing live migration of VMs. The relevant parameters of the migration cost are defined as follows:

Cmigration: cost resulting from migration
Pstandby: average power consumed by a standby computing node
Tdowntime: VM downtime during live migration
Pmigration: extra power cost of processing a migration
Nmigrationvm: number of VMs that need to be migrated

The power-saving method may perform live migration on a number of VMs. In the migration process, no service is provided by a VM during the brief VM downtime, which contributes an additional cost of migration; this extra cost of electricity can be calculated by multiplying the downtime of the VM by the average standby power consumption of the server. We also need to include the additional power consumption that occurs for the migration action of each VM. Thus, the migration cost can be calculated with the following formula:
Cmigration = [(Pstandby × Tdowntime) + Pmigration] × Nmigrationvm    (1)
3.4.2. Merging and expanding costs of computing nodes

The merging or expanding cost of computing nodes is considered here; its relevant parameters are defined as follows:

Cmaa: cost resulting from merging and expanding computing nodes
Tbootup: time spent booting up a computing node
Tshutdown: time spent shutting down a computing node
S: level of service preset by the user
Pprobability: probability of processing a merging or expanding operation

In the power-saving method, turning off unnecessary servers to reduce electricity use is most efficient for power saving, but frequent switching of servers also results in additional power costs. Since servers do not provide services during the startup and shutdown time, the costs of booting up and shutting down servers are considered additional costs, and are calculated by multiplying both times by the average power consumption of the standby server. In order not to reduce service quality due to frequent switching of machines, the user can follow the Service-Level Agreement (SLA) to control the switching frequency: the higher the SLA, the lower the switching frequency becomes.

Cmaa = [(Tbootup + Tshutdown) × Pstandby] + Pprobability    (2)

3.4.3. Net power saving

Finally, the net power saving can be calculated as follows. Two further variables are defined:

Toff: total time that a computing node is in the power-off state
Psave: total power saving achieved by the method

Turning off unnecessary servers to reduce power consumption is most efficient; thus, the expected power saving is calculated by multiplying the total shutdown time by the average standby power consumption of the servers. The net power saving of the proposed method can then be calculated by the following formula:

Psave = (Pstandby × Toff) − Cmigration − Cmaa    (3)
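To make the bookkeeping concrete, the fragment below evaluates Eqs. (1)-(3) for one hypothetical consolidation event; every number (standby power, downtime, boot and shutdown times, off time) is an illustrative assumption, not a measurement from our testbed.

<?php
// Worked example of Eqs. (1)-(3); power in watts, time in hours (assumed values).
$P_standby   = 150.0;        // average standby power of a node (W)
$T_downtime  = 1.0 / 3600;   // about 1 s of VM downtime per migration (h)
$P_migration = 0.5;          // extra energy per migration action (Wh)
$N_vm        = 5;            // number of VMs migrated

$T_boot      = 2.0 / 60;     // node boot time (h)
$T_shutdown  = 1.0 / 60;     // node shutdown time (h)
$P_prob      = 0.0;          // probability term of Eq. (2), here neglected

$T_off       = 2.0;          // node kept powered off for two hours (h)

$C_migration = (($P_standby * $T_downtime) + $P_migration) * $N_vm;   // Eq. (1)
$C_maa       = (($T_boot + $T_shutdown) * $P_standby) + $P_prob;      // Eq. (2)
$P_save      = ($P_standby * $T_off) - $C_migration - $C_maa;         // Eq. (3)

printf("C_migration = %.2f Wh, C_maa = %.2f Wh, net saving P_save = %.2f Wh\n",
       $C_migration, $C_maa, $P_save);
?>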
3.5. An example for the power-saving method

This section uses two examples to demonstrate the operation of the power-saving method. Assume that the operating environment has three nodes with the same specifications, node 1, node 2, and node 3, all running three VMs. Two cases are used for demonstration: computing node merging and expanding.

3.5.1. Merging computing nodes

An example of the merging operation is shown in Table 1, where the nine VMs remain at low usage. The power-saving method automatically performs live migration to move all VMs on the server with the lowest total utilization to other servers, and closes that server. In this example, node 2 has the lowest total CPU usage, so VM 21, VM 22 and VM 23 are migrated to node 1 and node 3, and node 2 is closed, as shown in Table 2. If the VMs' vCPU utilization does not increase, node 2 will remain off to save electricity.

Table 1. Example 1 without the power-saving method.

Server name | VM name | CPU usage (%) | Total CPU usage (%)
Node 1      | VM 11   | 10            | 50
            | VM 12   | 20            |
            | VM 13   | 20            |
Node 2      | VM 21   | 10            | 30
            | VM 22   | 10            |
            | VM 23   | 10            |
Node 3      | VM 31   | 20            | 60
            | VM 32   | 30            |
            | VM 33   | 10            |
Table 2. Example 1 with the power-saving method.

Server name | VM name | CPU usage (%) | Total CPU usage (%)
Node 1      | VM 11   | 10            | 70
            | VM 12   | 20            |
            | VM 13   | 20            |
            | VM 21   | 10            |
            | VM 22   | 10            |
Node 2      | —       | —             | 0
Node 3      | VM 23   | 10            | 70
            | VM 31   | 20            |
            | VM 32   | 30            |
            | VM 33   | 10            |
3.5.2. Expanding computing nodes

An example of expanding computing nodes is shown in Table 3. If the VM usage increases and causes a server's required CPU usage to exceed 100%, the power-saving method performs live migration on a VM in the node with the highest vCPU usage and moves it to the node with the lowest CPU usage to prevent degradation of the quality of service. For example, following the situation of example 1, if the vCPU usage of VM 23 operating on node 3 suddenly rises, causing the required CPU usage of node 3 to exceed 100%, the power-saving method wakes up node 2 from the shutdown status and performs live migration on VM 23 to move it to node 2, to maintain the quality of service of the cloud environment (Table 4).

Table 3. Example 2 without the power-saving method.

Server name | VM name | CPU usage (%) | Total CPU usage (%)
Node 1      | VM 11   | 10            | 70
            | VM 12   | 20            |
            | VM 13   | 20            |
            | VM 21   | 10            |
            | VM 22   | 10            |
Node 2      | —       | —             | 0
Node 3      | VM 23   | 90            | 150
            | VM 31   | 20            |
            | VM 32   | 30            |
            | VM 33   | 10            |

Table 4. Example 2 with the power-saving method.

Server name | VM name | CPU usage (%) | Total CPU usage (%)
Node 1      | VM 11   | 10            | 70
            | VM 12   | 20            |
            | VM 13   | 20            |
            | VM 21   | 10            |
            | VM 22   | 10            |
Node 2      | VM 23   | 90            | 90
Node 3      | VM 31   | 20            | 60
            | VM 32   | 30            |
            | VM 33   | 10            |
3.6. User interface

In this work we also provide a web interface to allow users to easily manage VMs, perform live migration of VMs, and obtain information on machine states and electric power consumption (Fig. 16). The user interface shown in Fig. 17 is developed using the PHP programming language and runs on the management node. Its key features include power consumption monitoring (on the left), management of servers and VMs (on the right), and resource monitoring, as described in detail below. The power consumption of the machines can be viewed over a one-day or one-hour period. By default, the system displays power consumption data recorded every five minutes for the last hour (see Fig. 19). Alternatively, the user can switch the display mode to see power consumption data recorded every hour for the past 24 h (see Fig. 18). Fig. 20 shows the feature to obtain the VM list, which displays the active VMs on every server. Fig. 21 shows the manual migration feature. After obtaining the name of a VM through the query function, the name of the VM is typed in the input box, and then the desired destination server is selected. When live migration is complete, the time spent is displayed below. Fig. 22 shows the server monitoring function, which displays the current CPU and memory usage status. The function can also be used to display data separately for each server, including the CPU usage, idle CPUs, the total size of memory, and the available memory size.
4. Experimental environment and experimental results
To verify the performance of the proposed system, several experiments are performed in this section. The experimental environment and the flow of the experiments are introduced first.
4.1. Experimental environment
The experimental environment is shown in Table 5. The main monitoring function, the power-saving method, the judgment function of the shared storage system, the VM management platform, and the user interface were built on the management node, which consists of a 24-core CPU, 78 GB of memory, and a 1 TB disk. The VMs were distributed over two servers, computing node 1 and computing node 2, with identical specifications; both of them were connected to the PDU to collect electricity consumption data. The management function was evaluated by examining the configuration, management capabilities, and operating efficiency of the VMs. The experiment focused on the resources required for the operation of the VMs, such as CPU, memory, and power consumption. In order to enhance the reliability of the experimental data, ten VMs with the same specifications were built on the computing nodes, with the SNMP and cpuburn programs preloaded. The detailed specifications of the three servers and ten VMs are listed in Table 5, and the versions of the software used, including KVM, Libvirt, PHP and SNMP, are listed in Table 6.
Fig. 16. Service architecture.
Fig. 17. Web interface.
Fig. 18. Power consumption viewed in a day.
Fig. 19. Power consumption viewed in one hour.
Fig. 22. Monitoring servers status.
Fig. 20. Display VM lists on servers.

Fig. 21. Manual migration function.

Table 5. Hardware specification.

Host name        | CPU                                       | Memory | Disk           | OS
Management node  | AMD Opteron(TM) Processor 6174 × 24 cores | 78 GB  | 1 TB           | Ubuntu 12.04
Computing node 1 | AMD Opteron(TM) Processor 6274 × 32 cores | 48 GB  | Shared storage | Ubuntu 12.04
Computing node 2 | AMD Opteron(TM) Processor 6274 × 32 cores | 48 GB  | Shared storage | Ubuntu 12.04
VM01-VM10        | 8-core vCPU                               | 8 GB   | 10 GB          | Ubuntu 12.04
Table 6. Software specification.

Software | KVM   | Libvirt | PHP    | SNMP
Version  | 3.2.0 | 0.9.8   | 6.3.10 | 5.4.3
Fig. 23. Software architecture.
4.2. Flow of experiment

After implementing the complete environment, we performed simulations of several possible scenarios in the cloud environment. We gradually increased the CPU usage of each VM. If the CPU usage required is less than that offered by the servers, the system may automatically trigger the power-saving method. For comparison, we recorded the power consumption data of the overall cloud environment with and without the power-saving method. The average power consumption of the two computing nodes with VMs was recorded. In the first 10 minutes, the two servers did not run any program other than the standby VMs. Then, every five minutes, the CPU loading of one more VM on the two computing nodes was turned to full load; thus at 60 min of the experiment the vCPUs (i.e., virtual CPUs) of all 10 VMs were fully loaded. Then every five minutes we turned one of the VMs on the computing nodes back to standby; thus at 110 min all 10 VMs were in the standby state. We continued recording the power consumption of the two nodes for another 10 min (Fig. 23). We then performed the above experiment again, but this time applied the power-saving method on the two computing nodes from the onset of the experiment. We expected that at the start the 10 VMs would be automatically moved to one of the two computing nodes, and the other computing node would be automatically shut down. We also expected that the power-saving method would automatically wake up the standby computing node when five or more VMs' vCPUs were fully loaded. The power-saving method would perform live migration on chosen machines until load equilibrium was reached. We expected that when the number of fully loaded VMs decreased to five or fewer, the power-saving method would again automatically move all VMs to one of the computing nodes and shut down the other computing node (Fig. 24).
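The stepwise load pattern described above could be scripted roughly as follows; the VM names, the SSH account, and the use of cpuburn's burnP6 workers are assumptions about the setup rather than the exact scripts used in the experiment.

<?php
// Sketch of the load schedule in Section 4.2: every five minutes one more VM
// is driven to full CPU load with cpuburn, then the VMs are released one by one.
$vms = array();
for ($i = 1; $i <= 10; $i++) { $vms[] = sprintf("vm%02d", $i); }
$step = 5 * 60;                 // five minutes between steps

sleep(10 * 60);                 // first 10 minutes: all VMs idle

foreach ($vms as $vm) {         // ramp up: load one more VM per step
    // start one cpuburn worker per vCPU in the background (8 vCPUs per VM)
    shell_exec("ssh root@$vm 'for c in $(seq 8); do nohup burnP6 >/dev/null 2>&1 & done'");
    sleep($step);
}

foreach ($vms as $vm) {         // ramp down: return VMs to standby
    shell_exec("ssh root@$vm 'killall burnP6'");
    sleep($step);
}
?>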
Fig. 24. Experimental architecture.
Fig. 25. Relationship between migration time and vCPU loading.
4.3. Experimental results and discussion

4.3.1. Prior experiments

Since the power-saving method experiment involves several factors, e.g., the VM vCPU loading, the migration downtime Tdowntime, the power consumption terms Cmigration, Pstandby and Pmigration, the number of VMs to be migrated Nmigrationvm, and the factors of the merging or expanding cost of computing nodes, Cmaa, Tbootup, Tshutdown, etc., we first carried out several experiments to find answers to the following questions:

1. Will a VM's vCPU loading affect the time required for live migration?
2. Will the CPU loading of the source host or target host affect the required live migration time of VMs?
3. Will live migration cause additional power consumption?

The following discusses the results of the three experiments. In the first experiment, the 10 VMs have various CPU loadings of 0%, 25%, 50%, 75% and 100%, and live migration of the VMs is performed between the two computing nodes. From Fig. 25, no matter what the CPU loading is, performing live migration of a VM requires about 48 s; thus there is no direct relationship between migration time and vCPU loading. This rules out the assumption that different CPU loadings affect the migration time in the experiments on the power-saving method. In the second experiment, we perform live migration of VMs and observe the change in the power consumption of the source host and the target host. From the experiment we know that the average time of the VM migrations is approximately 48 s. From Fig. 26, the power consumption of the two hosts is very stable during the live migration process, with no significant ups and downs. In this experiment, the VM's vCPU loading is 100%, and at about 50 s, after the live migration finished, the power consumption of the source host and the target host changed significantly. This experiment also shows that the VM's vCPU loading abruptly affects the source host and the target host after live migration. The third experiment studies the effect on the live migration time of the CPU loading of the source host and target host. The VMs' vCPU loading on the hosts is changed in the following three ways:

1. Change the source host's and target host's CPU loading at the same time.
2. Only change the source host's CPU loading.
3. Only change the target host's CPU loading.

The live migration experimental results are shown in Fig. 27; one observes that the migration time is not related to the CPU loading of the target host, but is proportional to the CPU loading of the source host; in particular, when the CPU loading of the source host and the target host are changed at the same time, it costs on average an extra two seconds. The above experiment proves that the migration time is affected by the host's CPU loading, but the benefit of the power-saving method is not significantly affected. We will discuss the time cost later.
Fig. 26. Change in power consumption during migration.

Fig. 28. Power consumption of 10 VMs without the power-saving method.
Fig. 29. Cumulative power consumption of 2 computing nodes with 10 VMs without the power-saving method.
4.3.2. Performance of power-saving experiments with 10 VMs

The experiments on the power-saving method are narrated here. As described in Section 3, the vCPU loading of vm01 to vm10 was sequentially increased up to 100%, one VM every five minutes, and then the VM vCPU loading was sequentially decreased back to 0%. The experimental results shown in Figs. 28-32 were measured for the system without and with the power-saving method. Fig. 28 shows the curves of the power consumption of the two computing nodes without the power-saving method. The power consumption of the two computing nodes is observed to have the same trend in response to the increase and decrease of the loading of the VM vCPUs. Fig. 29 shows the cumulative power consumption of the two computing nodes. From the plot one can see that their cumulative power consumptions are similar. Fig. 30 shows the curves of the power consumption of the two computing nodes with the power-saving method. At the beginning of the first ten minutes, the method determined that the ten standby VMs would not need all the computing power of the two computing nodes, so it transferred all the VMs on computing node 2, i.e., five VMs, to computing node 1. Due to the delay of the judgment and the migration time for the five VMs (about five minutes) and the shutdown time of computing node 2 (about one minute), computing node 2 continued consuming electric power until the tenth minute of the experiment. A similar situation appeared at the 35th minute: as designed, VM05's vCPU would be fully loaded at the 30th minute, which should be beyond the maximum CPU support of computing node 1, but due to the delay of the power-saving method, the VM selection and migration time, and the booting time of computing node 2, the power consumption data did not respond to the change of CPU loading until the 40th minute of the experiment. We also observed a similar delay in the change of the curve for computing node 2 at the 90th minute of the experiment. Fig. 31 shows the cumulative power consumption of the two computing nodes with the power-saving method. From the plot one can see that the cumulative power consumptions of the two computing nodes are different and the power-saving effect on computing node 2 is obvious. Fig. 32 shows the difference between the cumulative power consumption of the nodes with and without the power-saving method. It is obvious that the cumulative power consumption of the nodes with the power-saving method is indeed lower than that of the nodes without the power-saving method. In this experiment, the method is found to have a net power saving Psave of about 7% of the consumed power; unfortunately, the degree of power saving is not pronounced, for the following reasons:

1. In the experiment with the power-saving method, when the VMs' total vCPU loading is greater than what the currently running computing nodes can provide, expansion of the computing nodes proceeds, and the VMs to be migrated are determined in accordance with the VM vCPU loading and other factors. In our experiments, VMs are randomly selected to have their vCPU loading reduced; thus, the CPU loading of some computing nodes cannot be effectively reduced.
2. Because 10 VMs with 8-core vCPUs were used in this experiment, in theory a computing node with a 32-core CPU can only host 4 VMs with fully loaded vCPUs; therefore, if all 10 VMs' vCPUs are full, one computing node runs six VMs that cannot be migrated to the other computing node. Continuing the first point, if the first and second VMs selected to have their vCPU loading reduced are both on this node, the experimental data will not be favorable to the power-saving method.
3. Because the number of VMs is 10, and in the experiment the power is recorded at a time interval of five minutes, transferring 5 VMs or 4 VMs plus the shutdown time does not show which case has the more favorable power consumption data.
Fig. 30. Power consumption of 10 VMs with the power-saving method.
Fig. 27. Relationship between migration time and host's CPU loading.
Fig. 31. Cumulative power consumption of 2 computing nodes using 10 VMs with the power-saving method.

Fig. 33. Power consumption of 8 VMs without the power-saving method.
4.3.3. Performance of power-saving experiments with 8 VMs

In order to reduce the effects mentioned above, experiments with a similar setting as the previous experiments were carried out, except that this time the number of VMs was reduced to 8. The experimental results are as follows. Figs. 33 and 34 show the power consumption without the power-saving method, which is quite similar to the results of the experiment with 10 VMs. Similarly, as the vCPU loading rises and falls, the two servers' power consumption also goes up and down, and the two servers' total power consumption statistics are very similar. Because the power-saving method is not used, when the vCPUs are not fully loaded the system does not detect that the servers are in a low-efficiency usage state, thus causing unnecessary power waste. Figs. 35 and 36 show the power consumption with the power-saving method. The curves are similar to those of the previous experiments. Because the required migration time is shorter, at the 5-minute time point the power consumption was recorded as 0, meaning that computing node 2 had been shut down. Similarly, the power consumption recorded at several other times shows the efficiency of the proposed power-saving method. Fig. 37 shows the experimental results of the total power consumption of the two computing nodes with 8 VMs; compared to the previous setting with 10 VMs, the power-saving method is indeed more effective, and the net power saving Psave is doubled to 14%.
Fig. 32. Cumulative power consumption difference between systems with 10 VMs with and without the power-saving method.
Fig. 34. Cumulative power consumption of 2 computing nodes using 8 VMs without the power-saving method.
5. Conclusions and future work

With the increasingly high demand on the cloud today, issues of the power demand of the cloud environment cannot be ignored. To effectively manage cloud servers, this work presents a power-saving method that, in the case of low usage of the cloud, automatically shuts down several facilities on the cloud to achieve the power efficiency goal without damaging the quality of service.
5.1. Concluding remarks

After the relevant research and experiments, the following conclusions are obtained from the analysis of the experimental results.
Fig. 35. Power consumption of 8 VMs with the power-saving method.
• There is no direct relationship between migration time and vCPU loading.
• Live migration of VMs will not cause significant additional electricity costs.
• Regarding the relationship between the live migration time and the servers, the time required for live migration is directly proportional to the CPU loading of the source server, but is independent of that of the target server.
• The proposed power-saving method is proven to be effective.
• About 7-14% saving of power consumption is achieved in our design.
• The percentage of power saving depends not only on the power-saving method but also on the real operating situations of the VMs and hosts.
5.2. Future work
The power-saving method proposed in this work still has many issues and parts that need to be explored and reinforced. To strengthen the power-saving effect, in the future we plan to:
• Test different settings of the experimental environment and perform more extensive evaluation with real workloads that considers real costs.
• Reformulate the power-saving method by taking more factors into consideration, such as the disk I/O and network I/O flows that can be easily captured via SNMP (a minimal polling sketch is given after this list).
• Try more diverse computing environments for the power-saving method, such as using three or more computing nodes, or a combination of computing nodes with different hardware specifications.
• Obtain more accurate power measurements to identify additional factors relating to the efficiency of power operations and thereby enhance the power-saving method.
• Design a more detailed method such that it not only operates in more diverse environments but also achieves a better power-saving effect.
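As noted in the second item above, the same SNMP channel already used for CPU monitoring can also report network (and, where the agent exposes the UCD diskIOTable, disk) counters. The sketch below polls one node with PHP's SNMP functions, assuming a standard net-snmp agent and the usual IF-MIB/UCD-SNMP-MIB OIDs; the host name, community string, and interface index are placeholders, not our platform's configuration.

```php
<?php
// Illustrative sketch (not the system's actual monitoring code): poll per-host
// network and CPU counters over SNMP v2c from a standard net-snmp agent.
function snmpCounter(string $host, string $community, string $oid): int
{
    $raw = snmp2_get($host, $community, $oid);   // e.g. "Counter32: 123456"
    if ($raw === false) {
        throw new RuntimeException("SNMP query failed for $oid on $host");
    }
    return (int) preg_replace('/^[^:]*:\s*/', '', $raw);
}

$host      = 'compute-node-1';   // hypothetical node name
$community = 'public';           // assumed read community
$ifIndex   = 2;                  // assumed interface index of the data NIC

$inOctets  = snmpCounter($host, $community, ".1.3.6.1.2.1.2.2.1.10.$ifIndex"); // IF-MIB ifInOctets
$outOctets = snmpCounter($host, $community, ".1.3.6.1.2.1.2.2.1.16.$ifIndex"); // IF-MIB ifOutOctets
$cpuIdle   = snmpCounter($host, $community, '.1.3.6.1.4.1.2021.11.11.0');      // UCD-SNMP ssCpuIdle

printf("%s: in=%d B, out=%d B, cpuIdle=%d%%\n", $host, $inOctets, $outOctets, $cpuIdle);
```

Sampling these counters at the same five-minute interval as the power readings would let the I/O rates be folded into the existing node-selection criteria.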
Acknowledgment
This work is supported in part by the Ministry of Science and Technology, Taiwan ROC, under grant numbers MOST 104-2221-E-029-010-MY3, 105-2622-E-029-003-CC3, and 105-2634-E-029-001.
Chao-Tung Yang is a Professor of Computer Science at Tunghai University in Taiwan. He received the Ph.D. in Computer Science from National Chiao Tung University in July 1996. In August 2001, he joined the faculty of the Department of Computer Science at Tunghai University. He serves on a number of journal editorial boards, including International Journal of Communication Systems, Journal of Applied Mathematics, Journal of Cloud Computing, the "Grid Computing, Applications and Technology" Special Issue of the Journal of Supercomputing, and the "Grid and Cloud Computing" Special Issue of the International Journal of Ad Hoc and Ubiquitous Computing. Dr. Yang has published more than 250 papers in journals, book chapters, and conference proceedings. His present research interests are in cloud computing and services, grid computing, parallel computing, and multicore programming. He is a member of the IEEE Computer Society and ACM.
Shuo-Tsung Chen received the B.S. degree in Mathematics from National Cheng Kung University, Tainan, in 1996 and the M.S. degree in Applied Mathematics from Tunghai University, Taichung, in 2003. In 2010, he received the Ph.D. degree in Electrical Engineering from National Chi Nan University, Nantou, Taiwan. He is now an Adjunct Assistant Professor in the Department of Electronic Engineering, National Formosa University, and the Department of Applied Mathematics, Tunghai University.
Kuan-Lung Huang is now working at the IISI Company. He was a member of the HPC (high-performance computing) laboratory of the Department of Computer Science at Tunghai University in Taichung, Taiwan, and received the M.S. degree in computer science from Tunghai University in July 2013. Huang has published more than 10 papers in book chapters and conference proceedings. His present research interests are in cloud and grid computing and parallel computing.
Jung-Chun Liu received the B.S. degree in electrical engineering from National Taiwan University in 1990. He received the M.S. and Ph.D. degrees from the Department of Electrical and Computer Engineering at the University of Texas at Austin in 1996 and 2004, respectively. He is currently an Assistant Professor in the Department of Computer Science at Tunghai University, Taiwan. His research interests include cloud computing, embedded systems, wireless networking, artificial intelligence, digital signal processing, and VLSI design.