IEEE International Conference on Advances in Engineering & Technology Research (ICAETR - 2014), August 01-02, 2014, Dr. Virendra Swarup Group of Institutions, Unnao, India
Empirical Study of Virtual Disks Performance with KVM on DAS
Gauri Joshi, S.T. Shingade, M.R. Shirole
Computer Science Department, VJTI, Matunga, Mumbai, India
[email protected], [email protected], [email protected]
Abstract— The demand for data generation, storage, access and communication is growing exponentially, and cloud computing has emerged to meet it. The key concept operating at the base of the cloud computing stack is virtualization. Virtual machine (VM) state is represented as a virtual disk file (image) created on the hypervisor’s local file system, from which the virtual machine is booted. A virtual machine requires at least one disk to boot and start functioning. With Kernel-based Virtual Machine (KVM), either block devices or files can be used as virtual disks within the guest operating system. To date, no empirical study has quantified the runtime performance of the different virtual disk image formats. We study a representative application workload, I/O micro-benchmarks, on a local file system, i.e. a direct-attached storage (DAS) environment, in conjunction with RAW, the copy-on-write QCOW2 format from QEMU, Microsoft’s VHD, VirtualBox’s VDI, VMware’s VMDK and Parallels’ HDD. We also investigate the impact of block size on application runtime performance. This paper provides a detailed runtime performance analysis of the different image formats based on parameters such as latency, bandwidth and I/O operations performed per second (IOPS). Users today can select a virtual disk from a pool of image formats, but the selection is currently a black box, as no comparison or decision model exists for the different formats. This study provides insight into the performance of the various virtual disk image formats and offers guidelines to virtual disk end users in implementing and using them.
Keywords- Virtualization, KVM hypervisor, Virtual machine, Virtual disk, fio, Virtual disk image formats.
I. INTRODUCTION
The performance of the cloud has become an important concern due to increasing workloads [1]. The key concept operating at the lower level of the cloud computing stack is virtualization. For the majority of high-performing clouds the underpinning is a virtualized infrastructure. Virtualization has been used in data centres for several years as a successful IT strategy for consolidating servers. The main purpose of virtualization is to pool infrastructure resources; beyond consolidation, it provides agility and flexibility to the cloud environment. “Virtualization, in computing, is the creation of a virtual version of something, such as a hardware platform, operating system, a storage device or network resources” [6].
Basically, virtualization is a technique that divides a physical computer into several isolated machines known as virtual machines (VMs). Each VM created on a hypervisor or virtualization platform runs its own client or server operating system. A virtual machine (VM) has been serving as a crucial component in cloud computing with its rich set of features [15]. Multiple virtual machines can run on a host computer, each possessing its own operating system and applications. A virtual disk is created on the hypervisor’s local file system, from which the virtual machine is booted. Virtual disks can be created via different processes, for example during virtual machine creation, or independently inside a storage repository. With KVM, block devices or files can be used as local storage in the guest operating system. Files are commonly known as virtual disk image files for the following reasons [4]:
• Disk image files are available to the hypervisor as files.
• Similar to block devices, disk image files represent a local mass storage disk.
A disk image file can be considered a local hard disk by the guest operating system. The maximum size of the virtual disk is equal to the size of the disk image file; a 50 GB disk image file yields a 50 GB virtual disk. The virtual disk location may be outside the domain of the guest operating system and the virtual machine. The guest operating system has limited access and rights; it can only see information such as the size of the virtual disk. As shown in Fig. 1, along with DAS, the storage space for virtual machines’ virtual disks can be allocated from multiple sources such as network-attached storage (NAS) or a storage area network (SAN), each offering different performance, reliability and availability at different prices. DAS is at least several times cheaper than NAS and SAN, but it limits the availability and mobility of VMs [2]. In this paper, we conduct a performance study using I/O micro-benchmark workloads in a local file system environment. The virtual disks RAW, the copy-on-write QCOW2 format from QEMU, Microsoft’s VHD, VirtualBox’s VDI, VMware’s VMDK and Parallels’ HDD are evaluated against bandwidth, latency and IOPS. We also study the impact of block size and file size at the hypervisor level on application runtime performance.
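For reference, image files in all of the evaluated formats can be created with QEMU’s qemu-img utility. The sketch below is only an illustration, assuming qemu-img is installed; the file names and sizes are placeholders, and creation support for some formats (e.g. Parallels HDD) depends on the QEMU version.
# Create a 10 GB virtual disk in each evaluated format (illustrative names).
# "vpc" is qemu-img's name for Microsoft VHD, "parallels" for Parallels HDD.
qemu-img create -f raw       disk.raw   10G
qemu-img create -f qcow2     disk.qcow2 10G
qemu-img create -f vdi       disk.vdi   10G
qemu-img create -f vmdk      disk.vmdk  10G
qemu-img create -f vpc       disk.vhd   10G
qemu-img create -f parallels disk.hdd   10G
# Inspect the metadata of a resulting image
qemu-img info disk.qcow2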
Figure 1. Allocation techniques of virtual disks
Figure 2. VM environment
The remainder of the paper is organized as follows. Section II provides background information and related work. Section III describes the methodology of our performance study. Section IV presents the results of the study with a detailed analysis of one scenario. Section V outlines directions for future work, and Section VI presents concluding remarks.
II. BACKGROUND AND RELATED WORK
A virtual disk is a logical disk that a computer uses to perform I/O operations and from which the operating system is booted. The VM environment is shown in Fig. 2. A hard disk image is interpreted by a Virtual Machine Monitor as a system hard disk drive. An Infrastructure as a Service (IaaS) cloud encapsulates user applications into virtual machines. The VMs are distributed over a large number of compute nodes to share the physical infrastructure. Virtualization enables many features such as consolidation for improving resource efficiency, live migration for easier maintenance, and so forth. The hard disk drive of a virtual machine (i.e., virtual disk) is typically emulated with a regular file on the hypervisor host (i.e., VM image file). I/O requests received at virtual disks are translated by the virtualization driver into regular file I/O requests to the image files. A typical IaaS cloud, such as Amazon Elastic Compute Cloud (EC2), has thousands of VM images. In order to create a new VM instance in an IaaS cloud, a VM image needs to be available at its hypervisor host. As illustrated in Figure 1, one straightforward solution is to pre-copy the entire image to the compute nodes before a new VM is started. If an instance uses an image that the target hypervisor does not have, it may take a long time to start up that instance. A typical VM image file contains multiple gigabytes or even tens of gigabytes of data, which leads to severe delays in a heavily loaded cloud environment [1].
Subsequent instances that use the same image on that host can start up faster, as the image is locally available. An alternative method to address this issue is to transfer the image data in an on-demand streaming fashion, where the parts of an image are copied as needed from the shared storage system to the hypervisor hosts. This scheme is used by cloud operating environments such as IBM SmartCloud Provisioning (SCP) [1]. VM images can be stored in different formats. The most straightforward option is the RAW format, where I/O requests to the virtual disk are served via a simple block-to-block address mapping. In order to support multiple VMs running on the same base image, copy-on-write techniques have been widely used, where a local snapshot is created for each VM to store all modified data blocks. The underlying image files remain unchanged until new images are captured. There are different copy-on-write schemes, including QEMU’s QCOW2, Microsoft’s VHD, VirtualBox’s VDI, VMware’s VMDK, Parallels’ HDD and so forth. In some schemes, such as QCOW2, a separate file is created to store all data blocks that have been modified by the provisioned VM. There have been several efforts to benchmark the application runtime performance of virtual disks on a local file system; this paper analyzes the impact of the different VM image formats on that runtime performance.
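As a minimal illustration of the copy-on-write scheme described above, QCOW2 overlays can be created on top of a shared base image with qemu-img; the file names below are purely illustrative.
# A read-only base image shared by all VMs (illustrative path)
qemu-img create -f qcow2 base.qcow2 10G
# Per-VM overlays: only blocks modified by each VM are written to its overlay,
# while the base image remains unchanged
qemu-img create -f qcow2 -b base.qcow2 vm1-overlay.qcow2
qemu-img create -f qcow2 -b base.qcow2 vm2-overlay.qcow2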
III. METHODOLOGY
To represent a typical virtualization environment, we set up an experiment testbed as shown in Fig. 3. We then configure the hypervisor to use the various virtual disks described in the previous section. Application workloads are executed on this testbed using these virtual disk configurations.
A. EXPERIMENT ENVIRONMENT
Our experiment testbed consists of one machine with the following minimum configuration: Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz and 4GB RAM with virtualization extensions, as shown in Fig. 3. The machine is used as a compute node running Ubuntu 12.04 LTS with the KVM hypervisor. Six VMs are created on this machine, each provided with a 10GB volume and 512MB RAM.
Figure 3. Hardware setup for performance study
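One way such a guest can be defined on a KVM host is with libvirt’s virt-install tool. The sketch below is only an illustration under the assumption that libvirt and virt-install are available; the names, paths and installation ISO are placeholders rather than the exact commands used in this study.
# Define a 512MB guest backed by a 10GB QCOW2 image (illustrative values)
virt-install --name guest-qcow2 --ram 512 --vcpus 1 \
  --disk path=/var/lib/libvirt/images/guest.qcow2,format=qcow2,size=10 \
  --cdrom /isos/ubuntu-12.04-server-amd64.iso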
B. APPLICATION WORKLOAD
We use an I/O micro-benchmark workload to characterize the VM runtime performance under different conditions.
IO micro-benchmarks: We use a benchmark tool called fio. fio avoids the need to write a new test program for every IO workload to be simulated. The steps to simulate a desired IO workload using fio are:
• Write a job file describing a specific setup. It may contain any number of threads, jobs and/or files.
• Run the job file. fio parses the file and sets up everything as declared while executing it.
The job file consists of a global section and one or more job sections. The global section defines shared parameters, and each job section describes a job. Breaking a job down from top to bottom, it contains the following basic parameters [5]:
IO type – The IO pattern (sequential read, sequential write, random read, random write, or a combination of reads and writes, sequential or random) issued to the file(s).
Block size – The chunk size (a single value, or a range of block sizes) used to issue IO.
IO size – How much data is going to be read/written.
IO engine – The way IO is issued. IO could use memory mapping of the file, regular read/write, splice, async IO, syslet, or even SG (SCSI generic sg).
IO depth – If the IO engine is async, the maximum queue depth to maintain.
Buffered or direct – Whether IO is buffered or direct/raw.
Num files – The number of files spread across the workload.
Num threads – The number of threads or processes spread across this workload.
Along with the above basic parameters defined for a workload, there are a number of parameters that modify other aspects of the job. Below is a sample fio script that writes sequentially to a file:
; -- start job file --
[seq-writers]
ioengine=libaio
iodepth=4
rw=write
bs=8k
direct=0
size=128m
numjobs=4
; -- end job file --
No global section is defined here. A depth of 4 is used for the file, along with async IO. numjobs is set to four, which forks four identical jobs; the four processes will write sequentially to their own 128MB file. We have set the block size to 8K, and buffered IO is used since direct is not set to true. The output of the fio job file looks like the following:
seq-write: (g=0): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=4
...
seq-write: (g=0): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=4
fio 1.59
Starting 4 processes
seq-write: Laying out IO file(s) (1 file(s) / 128MB)
seq-write: Laying out IO file(s) (1 file(s) / 128MB)
seq-write: Laying out IO file(s) (1 file(s) / 128MB)
seq-write: Laying out IO file(s) (1 file(s) / 128MB)
seq-write: (groupid=0, jobs=1): err= 0: pid=2740
  write: io=131072KB, bw=403124 B/s, iops=49, runt=332944msec
    slat (usec): min=4, max=74700, avg=140.89, stdev=2318.99
    clat (msec): min=40, max=254, avg=81.14, stdev=14.12
    lat (msec): min=44, max=254, avg=81.28, stdev=14.12
    bw (KB/s): min=263, max=560, per=25.00%, avg=393.55, stdev=36.60
  cpu : usr=0.09%, sys=0.22%, ctx=16196, majf=0, minf=20
  IO depths : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=0/16384/0, short=0/0/0
     lat (msec): 50=0.01%, 100=91.44%, 250=8.53%, 500=0.02%
seq-write: (groupid=0, jobs=1): err= 0: pid=2743
  write: io=131072KB, bw=403070 B/s, iops=49, runt=332988msec
    slat (usec): min=4, max=104502, avg=59.83, stdev=1466.12
Run status group 0 (all jobs):
  WRITE: io=524288KB, aggrb=1574KB/s, minb=403KB/s, maxb=403KB/s, mint=332944msec, maxt=332988msec
Disk stats (read/write):
  sda: ios=0/16657, merge=0/49577, ticks=0/1337768, in_queue=1337900, util=100.00%
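As a quick sanity check on this output, each of the four jobs achieves roughly 403 KB/s (per=25.00% of the group), and the group summary reports an aggregate bandwidth (aggrb) of about 1574 KB/s, i.e. approximately the sum of the four per-job rates.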
In the output, the client number, group id, process id and any error of each thread are printed, followed by the IO statistics for the example above:
io – The amount of IO performed (reported here in KB).
bw – Average bandwidth rate.
iops – Average IOs performed per second.
runt – Runtime of the thread.
slat – Submission latency, the time taken to submit the IO.
clat – Completion latency, the time from IO submission to completion.
lat – Latency; a fairly new metric not documented in the man page. Looking at the code, this metric starts the moment the IO struct is created in fio and completes right after clat, making it the one that best represents what applications will experience. It is not simply slat plus clat.
bw – Bandwidth; it also denotes the approximate percentage of the total aggregate bandwidth this thread received in its group. If the threads in the group are on the same disk, this last value is particularly useful, since they compete for the same disk access.
cpu – CPU usage; it shows system and user usage, the number of context switches the thread went through, and finally the number of major and minor page faults.
In our experiments, we concentrate on the bandwidth (KB/s), latency (ms) and IOPS parameters.
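For concreteness, a parameter sweep over file size, block size and IO pattern of the kind used in Section IV can be scripted around fio as sketched below; the specific values, job settings and output paths are illustrative assumptions, not the exact scripts used in this study.
#!/bin/bash
# Illustrative sweep: file sizes 64MB-2048MB, block sizes 4KB-128KB,
# and the five IO patterns evaluated in this paper.
mkdir -p results
for size in 64m 256m 1024m 2048m; do
  for bs in 4k 16k 64k 128k; do
    for rw in read write randread randwrite randrw; do
      fio --name="${rw}-${bs}-${size}" --ioengine=libaio --iodepth=4 \
          --rw="$rw" --bs="$bs" --size="$size" --direct=0 --numjobs=4 \
          --output="results/${rw}_${bs}_${size}.txt"
    done
  done
done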
IV. EXPERIMENT RESULTS
We have performed experiments with the I/O micro-benchmark application workload in the virtualized environment. These experiments test the application runtime performance for the following parameters: file size = 64MB to 2048MB, block size = 4KB to 128KB, and IO patterns = sequential read, sequential write, random read, random write, and random read-write (RW). The analysis is based on the bandwidth (KB/s), latency (ms) and IOPS parameters. We have observed that results vary with the file size and block size, so we performed a number of experiments to see the variations in application runtime performance on the virtual disks. For detailed analysis we selected the following scenario: file size = 1024MB, block size = 4KB to 128KB, and IO patterns = sequential read, sequential write, random read, random write, and random read-write (RW).
Figure 4. Sequential Read, Bandwidth
Figure 5. Sequential Read, Latency
Figure 6. Sequential Read, IOPS
For sequential read, Fig. 4, Fig. 5 and Fig. 6 show the bandwidth, latency and IOPS for the different virtual hard disks. As shown in Fig. 4, RAW and QCOW2 perform well for different values of file size and block size. VDI works well when the block size is small, and VHD works well when the file size is large. Latency is very low for RAW and QCOW2, while for HDD and VHD it is very high, as shown in Fig. 5. As shown in Fig. 6, RAW performs a large number of sequential read I/O operations per second, QCOW2 performs a moderate number, and HDD performs very few. We have observed that the IOPS value for VDI increases as the file size increases. The preferable sequence of virtual hard disks for sequential read is RAW, QCOW2, VDI, VMDK, VHD, HDD.
Figure 7. Random Read, Bandwidth
Figure 8. Random Read, Latency
Figure 9. Random Read, IOPS
Figure 10. Sequential Write, Bandwidth
Figure 11. Sequential Write, Latency
Figure 12. Sequential Write, IOPS
For random read, Fig. 7, Fig. 8 and Fig. 9 show the bandwidth, latency and IOPS for the different virtual hard disks. As shown in Fig. 7, RAW has the highest bandwidth in almost all conditions, and QCOW2 is close to RAW in many configurations. The other virtual disks are not well suited to this operation if bandwidth is the selection criterion. Latency is very low for RAW and QCOW2, as shown in Fig. 8. As shown in Fig. 9, RAW performs a large number of random read I/O operations per second, and QCOW2 also performs well here. Considering all the experiments and results, the preferable sequence of virtual hard disks for random read is RAW, QCOW2, VDI, VMDK, VHD and HDD.
For sequential write, Fig. 10, Fig. 11 and Fig. 12 show the bandwidth, latency and IOPS for the different virtual hard disks. As shown in Fig. 10, VDI and VMDK are best suited to this operation, HDD is also good, and RAW and QCOW2 are less preferable, although their performance improves with larger block sizes. VDI takes very little time (latency) to complete the operation in different situations, while RAW and QCOW2 take the longest, as shown in Fig. 11. As shown in Fig. 12, VDI and VMDK perform a large number of sequential write I/O operations per second, VHD and HDD perform a moderate number, and RAW and QCOW2 are not preferable at all. The preferable sequence of virtual hard disks for sequential write is VDI, VMDK, VHD, HDD, RAW and QCOW2.
Figure 13. Random Write, Bandwidth
Figure 14. Random Write, Latency
Figure 15. Random Write, IOPS
Figure 16. Random RW(50), Bandwidth
Figure 17. Random RW(50), Latency
Figure 18. Random RW(50), IOPS
For random write, Fig. 13, Fig. 14 and Fig. 15 show the bandwidth, latency and IOPS for the different virtual hard disks. As shown in Fig. 13, VDI and VMDK are best suited to this operation, HDD is also good, and RAW and QCOW2 are less preferable if bandwidth is the selection parameter, although their performance improves with larger block sizes. VDI takes very little time (latency) to complete the operation in different situations, while VHD, HDD, QCOW2 and RAW show the highest latency while performing the operation, as shown in Fig. 14. As shown in Fig. 15, RAW and QCOW2 perform a large number of random write I/O operations per second. The preferable sequence for this IO pattern depends entirely on the selection parameter (bandwidth, latency or IOPS); in general, the preferable sequence of virtual hard disks for random write is VDI, VMDK, RAW, QCOW2, HDD and VHD.
For random read-write operations with an equal mix of reads and writes (50:50), Fig. 16, Fig. 17 and Fig. 18 show the bandwidth, latency and IOPS for the different virtual hard disks. The results vary with the percentage of read and write operations. As shown in Fig. 16, VDI and VMDK perform well for both reads and writes and for different read/write mixes; RAW and HDD are also good, and QCOW2 performs well when the read and write mix is equal. The latency of RAW and QCOW2 is low for read operations, while the latency of VDI and VMDK is low for write operations, as shown in Fig. 17. As shown in Fig. 18, VDI and VMDK execute a large number of random RW I/O operations per second, VHD and HDD perform a moderate number, QCOW2 and HDD perform an almost equal number, and RAW and QCOW2 are not preferable. In this case, the preferable sequence of virtual hard disks for random RW is VDI, VMDK, RAW, HDD, QCOW2 and VHD.
Figure 19. Random RW(30), Bandwidth
Figure 20. Random RW(30), Latency
Figure 21. Random RW(30), IOPS
For random read-write operations with an unequal mix of reads and writes (70:30), Fig. 19, Fig. 20 and Fig. 21 show the bandwidth, latency and IOPS for the different virtual hard disks. As shown in Fig. 19, VDI and VMDK perform well in this case, and the performance of RAW and HDD is good. The latency of RAW and QCOW2 is low for read operations, the latency of VDI and VMDK is low for write operations, and the latency of VHD and HDD is very high for read operations, as shown in Fig. 20. As shown in Fig. 21, VDI and VMDK execute a large number of random RW I/O operations per second, VHD performs a moderate number, QCOW2 and HDD perform an almost equal number, and HDD, RAW and QCOW2 are not preferable. For the unequal mix, the performance of QCOW2 even decreases slightly, and in such cases the RAW disk performs better. The preferable sequence of virtual hard disks for random RW is VDI, VMDK, RAW, HDD, QCOW2 and VHD.
Overall, higher bandwidth and IOPS and lower latency constitute better results. Still, the preferable sequence of virtual hard disks depends entirely on the parameters that matter to the application. For example, if an application is concerned purely with the time required to complete an operation, then the latency of the virtual hard disks is the important factor, and bandwidth and IOPS matter less. From the graphs above we can conclude that RAW and QCOW2 perform well for sequential and random read operations. VDI is well suited to sequential write, and VMDK can also be a good choice there. For random write, VDI and VMDK are better if bandwidth and latency are the deciding factors, but when only IOPS is considered, RAW performs better than VDI and VMDK. For random read-write (RW) operations, the overall performance of VDI and VMDK is better. On the whole, VDI is suitable for all types of write operations.
V. FUTURE WORK
As future work, we plan to expand the scope of this study. First, we will include additional benchmarks, e.g. data-intensive applications such as MySQL. Second, we plan to perform experiments in a cloud environment, i.e. NAS and SAN environments using the NFS and iSCSI protocols. We also plan an in-depth analysis of the root cause of the observed runtime performance by studying the internal structure of the virtual disk image formats. Finally, we plan to establish an analytical model for the runtime performance.
VI. CONCLUSION
This research paper presents an empirical study of different types of virtual disk image formats with KVM on a local file system. We selected an I/O micro-benchmark workload to study the performance. Exhaustive experiments provide evidence that selecting the appropriate virtual hard disk can significantly increase the performance of the I/O operations issued by the user. This study should help developers building new virtual hard disks or modifying existing ones.
REFERENCES
[1] Han Chen, Minkyong Kim, Zhe Zhang, Hui Lei, "Empirical Study of Application Runtime Performance using On-demand Streaming Virtual Disks in the Cloud," in Proceedings of the Industrial Track of the 13th ACM/IFIP/USENIX International Middleware Conference (MIDDLEWARE '12), ACM, New York, NY, USA, Article 5, 6 pages. DOI=10.1145/2405146.2405151
[2] Chunqiang Tang, "FVD: a high-performance virtual machine image format for cloud," in Proceedings of the 2011 USENIX Annual Technical Conference (USENIX ATC '11), USENIX Association, Berkeley, CA, USA, 2011.
[3] Daniel A. Menascé, "Virtualization: Concepts, Applications, and Performance Modeling."
[4] Kernel Virtual Machine (KVM): Best practices for KVM, IBM Corp., 2010, 2012.
[5] I/O micro-benchmark tool fio. See http://www.bluestop.org/fio/HOWTO.txt
[6] http://en.wikipedia.org/wiki/Virtualization
[7] The QCOW2 Image Format. See https://people.gnome.org/~markmc/qcow-image-format.html
[8] VirtualBox VDI Image Format. See http://forums.virtualbox.org/viewtopic.php?t=8046
[9] Microsoft VHD Image Format. See http://technet.microsoft.com/en-us/library/bb676673.aspx#EHB
[10] VMware Virtual Disk Format 1.1. See http://www.vmware.com/technical-resources/interfaces/VMDK.html
[11] VirtualBox VDI Image Storage; B. Shah, "Disk performance of copy-on-write snapshot logical volumes," PhD thesis, University of British Columbia, 2006.
[12] Liang Yang, Anthony F. Voellm, "Virtual Hard Disk Performance," Microsoft White Paper, March 2010.
[13] http://www.storagereview.com/fio_flexible_i_o_tester_synthetic_benchmark
[14] https://github.com/radii/fio/tree/master/examples
[15] Xun Zhao, Yang Zhang, Yongwei Wu, Kang Chen, Jinlei Jiang, Keqin Li, "Liquid: A Scalable Deduplication File System for Virtual Machine Images," IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 5, pp. 1257-1266, May 2014. doi: 10.1109/TPDS.2013.173
[16] Zhiming Shen, Zhe Zhang, Andrzej Kochut, Alexei Karve, Han Chen, Minkyong Kim, Hui Lei, Nicholas Fuller, "VMAR: Optimizing I/O Performance and Resource Utilization in the Cloud."
[17] http://en.wikipedia.org/wiki/Virtual_disk_image