Understanding the Impact of Cache Locations on Storage Performance and Energy Consumption of Virtualization Systems
Tao Lu, Ping Huang, and Xubin He (Virginia Commonwealth University)
Ming Zhang (EMC Corporation)

Caching: one key frees two birds
[Figure: server load vs. I/O latency, comparing a poorly cached and a well cached system]

Cache devices
• DRAM
• PCIe SSD
• SATA SSD

Cloud caching: a puzzle
[Figure: VMs running on multiple hypervisors connected to network storage]
Where should the cache be deployed?

Cloud caching: traditional wisdom

• Ioan Stefanovici et al., Software-Defined Caching: Managing Caches in Multi-Tenant Data Centers, SoCC '15

Cloud caching: traditional wisdom (cont'd)
• D. Arteaga et al., CloudCache: On-demand Flash Cache Management for Cloud Computing, FAST '16
• J. Hwang et al., Mortar: Filling the Gaps in Data Center Memory, VEE '14
• S. Byan et al., Mercury: Host-side Flash Caching for the Data Center, MSST '12

Cloud caching: traditional wisdom
[Figure: VMs running on multiple hypervisors connected to network storage]
Caches are deployed in hypervisors

Is the hypervisor the best location to deploy caches?

Overview
• Cloud storage stacks
• Impact of cache locations on system performance and energy efficiency
• Potential optimizations


Traditional storage stacks
[Figure: the traditional storage stack from the application down through VFS, page cache, flash cache, disk FS / block device file, generic block layer, I/O scheduler, and block device driver to the HDD/SSD]

Traditional storage stacks: measured latencies
• Page cache (~2 us): run fio on a physical machine once to warm up the page cache, then run fio a second time to measure page-cache performance
• RAMDisk (~10 us): build a RAMDisk and run fio on it with O_DIRECT enabled to bypass the page cache
• SSD/HDD (100+ us): run fio on an SSD/HDD with O_DIRECT enabled to bypass the page cache
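
The same kind of measurement can be sketched in a few lines of Python (a minimal illustration, not the paper's fio jobs; the paths, block size, and iteration count are placeholders): buffered reads go through the page cache, while O_DIRECT reads bypass it.

    import mmap
    import os
    import time

    def avg_read_latency_us(path, direct, iters=1000, block=4096):
        """Average latency of 4 KB reads at offset 0, in microseconds."""
        flags = os.O_RDONLY | (os.O_DIRECT if direct else 0)
        fd = os.open(path, flags)
        buf = mmap.mmap(-1, block)      # page-aligned buffer, required by O_DIRECT
        start = time.perf_counter()
        for _ in range(iters):
            os.lseek(fd, 0, os.SEEK_SET)
            os.readv(fd, [buf])         # read into the aligned buffer
        elapsed = time.perf_counter() - start
        os.close(fd)
        return elapsed / iters * 1e6

    # Hypothetical usage: a warmed regular file read through the page cache,
    # versus a RAM disk (or SSD) read with O_DIRECT bypassing the cache.
    # print(avg_read_latency_us("/tmp/testfile", direct=False))
    # print(avg_read_latency_us("/dev/ram0", direct=True))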

Traditional storage stacks vs. virtualization
[Figure: the traditional storage stack annotated with the measured latencies (~2 us page cache, ~10 us RAMDisk, 100+ us SSD/HDD), shown alongside the virtualization storage stack]

Overview
• Cloud storage stacks
• Impact of cache locations on system performance and energy efficiency
• Potential optimizations

Testbed
[Figure: the testbed stack, a VM on top of the host & hypervisor, which sits on top of the storage]

Testbed
● Physical machine and host OS
  ○ AMD Phenom II X4 B95 quad-core 3.0 GHz processor with AMD-V virtualization support
  ○ 8 GB DRAM
  ○ Host OS: 64-bit Ubuntu 15.04 with Linux kernel 3.19.0-30-generic
  ○ QEMU emulator version 2.4.1 and KVM are used as the hypervisor

Testbed
● VM & guest OS
  ○ 2 VCPUs and 2 GB memory
  ○ Ubuntu 15.04 64-bit Server Cloud Image as the guest OS

Testbed
● Disks
  ○ One 500 GB Samsung 850 Pro SATA SSD
  ○ One 240 GB OCZ RevoDrive 3 PCIe SSD

Testbed
● Benchmark
  ○ fio
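
A sketch of how the fio sweeps used later might be scripted (the target device, block sizes, and job parameters are illustrative, not the paper's exact job files):

    import subprocess

    # Sweep the I/O size as in the later IOPS plots; all flags are standard fio options.
    for bs_kb in (4, 8, 16, 32, 64, 128, 256, 512, 1024):
        subprocess.run([
            "fio", "--name=randread", "--filename=/dev/vdb",   # cache device (assumed)
            "--rw=randread", f"--bs={bs_kb}k", "--direct=1",
            "--ioengine=libaio", "--iodepth=1",
            "--time_based", "--runtime=30",
            "--output-format=json", f"--output=randread_{bs_kb}k.json",
        ], check=True)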

DRAM caches: VM-side vs. Hypervisor-side vs. Native
[Figure: three setups; the benchmark runs inside a VM with a VM-side cache, inside a VM with a hypervisor-side cache, or natively on the host, all on top of the storage]

Performance of DRAM caches
[Figure: IOPS vs. I/O size from 4 KB to 1024 KB for native, VM-side, and hypervisor-side DRAM caches; annotations in the chart: 97% and 11%]

Power consumption of DRAM caches
[Figure: power consumption in watts vs. IOPS from 1K to 10K for native, VM-side, and hypervisor-side DRAM caches; annotations in the chart: 1.7x, 0.9x, 9x]

For DRAM caches, VM-side caching has better performance and energy efficiency than hypervisor-side caching.

DRAM cache comparison: VM-side vs. Hypervisor-side vs. Native
[Figure: native places the DRAM cache in the host & hypervisor; VM-side places the DRAM cache inside the VM on top of KVM; hypervisor-side places the DRAM cache in the host & hypervisor and reaches it from the VM over virtio; all sit on top of the storage]
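
Where the DRAM caching happens is largely a function of how the virtual drive is attached. A minimal QEMU launch sketch, assuming the drive cache mode is the switch (the image path, VM sizing, and exact options are placeholders, not necessarily the paper's configuration):

    import subprocess

    # cache=writeback keeps the host page cache in the I/O path, i.e. hypervisor-side
    # DRAM caching; cache=none opens the image with O_DIRECT on the host, so DRAM
    # caching is left to the guest page cache, i.e. VM-side caching.
    cache_mode = "none"   # or "writeback"
    subprocess.run([
        "qemu-system-x86_64", "-enable-kvm",
        "-smp", "2", "-m", "2048",
        "-drive", f"file=guest.img,if=virtio,cache={cache_mode}",
    ], check=True)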

SATA SSD caches: VM-side vs. Hypervisor-side vs. Native
[Figure: three setups; the benchmark runs inside a VM with a VM-side SATA SSD cache, inside a VM with a hypervisor-side SATA SSD cache, or natively on the host with the SATA SSD]

Performance of SATA SSD caches


Power consumption of SATA SSD caches


For SATA SSD-based caches, VM-side caching and hypervisor-side caching perform similarly.

SATA SSD cache comparison: VM-side vs. Hypervisor-side vs. Native
[Figure: native manages the SATA SSD cache directly in the host & hypervisor; VM-side exposes the SATA SSD to the VM over virtio and manages the cache inside the VM; hypervisor-side keeps the SATA SSD cache in the host & hypervisor and reaches it from the VM over virtio]

PCIe SSD caches: PCI passthrough vs. virtio
[Figure: two setups; the PCIe SSD is either passed through to the VM with PCI passthrough or attached to the VM over virtio through the host & hypervisor]

Performance of PCIe SSD caches (cont'd)
[Figure: performance of the PCI-passthrough setup vs. the virtio setup]

Power consumption of PCIe SSD caches (cont'd)
[Figure: power consumption of the PCI-passthrough setup vs. the virtio setup]

PCI passthrough enables the PCIe SSD allocated to a VM to achieve near-native performance.
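
A passthrough configuration might look roughly like this (a sketch only; the PCI address and image path are placeholders, and the device must already be bound to the vfio-pci driver with the IOMMU enabled on the host):

    import subprocess

    # The PCIe SSD is handed to the guest as a real PCI device via VFIO, so guest
    # I/O reaches it without going through the virtio emulation path.
    subprocess.run([
        "qemu-system-x86_64", "-enable-kvm",
        "-smp", "2", "-m", "2048",
        "-drive", "file=guest.img,if=virtio",       # boot disk (assumed)
        "-device", "vfio-pci,host=0000:01:00.0",    # passed-through PCIe SSD
    ], check=True)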

Is the hypervisor the best location to deploy caches?
• SATA SSD: Maybe YES
• PCIe SSD: NO
• DRAM: Absolutely NO

What caps the performance of hypervisor-side caching?
The virtual I/O path.

Virtual I/O path
• Challenge: ~60 us RTT (for virtio), while a DRAM cache hit takes 1-2 us and an SSD access 40-100 us
[Figure: the virtual I/O path with ~70 us annotations]
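
A back-of-the-envelope comparison (queue depth 1, no batching; the latency figures are the rough numbers above) shows why the virtual I/O path, rather than the cache device, dominates hypervisor-side DRAM cache hits:

    # Per-request latency and the IOPS it allows with a single outstanding request.
    virtio_rtt_us = 60.0   # virtio round trip (rough figure from this slide)
    dram_hit_us = 2.0      # DRAM cache hit

    hypervisor_side_us = virtio_rtt_us + dram_hit_us   # ~62 us per hit
    vm_side_us = dram_hit_us                           # ~2 us per hit

    print(1e6 / hypervisor_side_us)   # ~16K IOPS
    print(1e6 / vm_side_us)           # ~500K IOPS

Higher queue depths and request batching can amortize the round trip and recover throughput (the "bandwidth is high" chance on the next slide), but not the per-request latency.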

Virtual I/O path
• Challenge: ~60 us RTT (for virtio)
• Chance: bandwidth is high

Overview
• Cloud storage stacks
• Impact of cache locations on system performance and energy efficiency
• Potential optimizations

Potential optimizations
● Resource Management
  ○ Allocating DRAM resources directly to VMs
● Guest OS Optimization
  ○ VM-side block device read-ahead (see the sketch after this list)
● Hardware Support / New Devices
  ○ PCI Passthrough
● Host OS / Hypervisor Optimizations
  ○ Reducing Virtual I/O Overheads
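
For the VM-side read-ahead item above, a minimal guest-side sketch (the device name and window size are illustrative; requires root inside the VM):

    # Enlarge the guest block device's read-ahead window via sysfs, so sequential
    # reads are fetched in larger batches and fewer virtual I/O round trips.
    dev = "vda"   # typical virtio disk name inside the guest (assumed)
    with open(f"/sys/block/{dev}/queue/read_ahead_kb", "w") as f:
        f.write("1024")   # read-ahead window in KB (illustrative value)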

Thanks & Questions
