Universidade Federal de Sergipe Programa de Pós-graduação em Ciência da Computação

Performance Evaluation of Lightweight Virtualization Solution for HPC I/O Scenarios

David Beserra (UFS) Edward David Moreno (UFS) Patricia Takako Endo (UPE) Jymmy Barreto (UFPE)

The authors

David Beserra is a doctoral student in Computer Science at University Paris 1 Panthéon-Sorbonne. He received his MSc from the Federal University of Sergipe (UFS). His research interest is High Performance Computing, focusing on the performance evaluation of virtualized infrastructures.

Patricia Endo received her PhD in Computer Science from the Federal University of Pernambuco (UFPE) in 2014. She has been a Professor at the University of Pernambuco (UPE) since 2010. Her current research interests are Cloud Computing and resource management.

Edward David Moreno received his PhD in Electrical Engineering from the University of São Paulo (USP). He is an Associate Professor at the Department of Computer Science, Federal University of Sergipe (UFS).

Jymmy Barreto is a Master's student in Computer Science at the Federal University of Pernambuco (UFPE). His research interests are High Performance Computing and the simulation of networked systems.

Introduction
● Cloud computing is a dominant paradigm in distributed systems.
● Some HPC users have migrated their applications from local clusters to cloud environments.
● Virtualization is a key feature of cloud environments, but it presents some challenges for HPC usage:
  – Performance penalties due to the virtualization software stack
  – Performance penalties due to resource sharing
  – Many others

Another problem appears...
● Many researchers (ourselves included) have studied virtualization bottlenecks on CPU and GPU:
  – Nussbaum et al. 2009 [2], Xavier et al. 2013 [9], Duato et al. 2010 [4], Beserra et al. 2015 [1]
● HOWEVER, we still need to investigate more closely how virtualization affects I/O-bound tasks (we do it here):
  – IoT, Big Data, Internet Banking...

To the point
● There are distinct techniques to provide virtualization.
● Each technique has its advantages and disadvantages.
● All techniques are affected by resource sharing.
● GOAL:
  – Determine in which scenarios OS-level virtualization performs better than hardware virtualization for I/O-bound operations.
● But... are the differences between these techniques enough to justify this study?

Virtualization
● Main approaches:
  – Hardware virtualization
    ● Type I and Type II hypervisors
  – Operating System-level virtualization
● Interesting! Containers seem to be lightweight! It makes sense to verify whether this is real or not.
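One visible consequence of the OS-level approach is that containers share the host kernel instead of running a full guest OS. A minimal heuristic sketch of this (assumptions: a Linux host with cgroup-v1-style paths, where contents such as those of /proc/1/cgroup mention the container runtime; the sample strings below are hypothetical):

```python
def looks_like_container(cgroup_text: str) -> bool:
    """Heuristic: on cgroup-v1 Linux hosts, a process inside an LXC or
    Docker container typically sees container-specific names in its
    cgroup paths, while a bare-metal process sees root paths."""
    markers = ("docker", "lxc", "kubepods")
    return any(m in line for line in cgroup_text.splitlines() for m in markers)

# Hypothetical /proc/1/cgroup contents, for illustration only:
host = "12:cpu,cpuacct:/\n11:memory:/\n"
guest = "12:cpu,cpuacct:/lxc/vm01\n11:memory:/lxc/vm01\n"
print(looks_like_container(host), looks_like_container(guest))  # False True
```

This is only a detection heuristic, not part of the paper's method; it illustrates why container guests are lighter: there is no second kernel to boot or schedule.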

Related works
● Performance analysis of containers in I/O-bound operations:
  – [14] Benchmarked KVM, LXC, Docker and OSv for some HPC applications. I/O-bound benchmark adopted: Bonnie++. Main results: LXC and Docker performed similarly, better than the other solutions, and close to a native environment. Lacks: does not consider distinct file sizes and communication patterns; does not consider resource sharing.
  – Benchmarked KVM and Docker for I/O operations on a SAN-like storage. Benchmark: FIO. Main results: Docker performed equal to or better than KVM in the cases tested. Lacks: same as [14].
  – [15] Benchmarked OpenVZ, LXC and VServer for MapReduce applications. Benchmarks: TestDFSIO, WordCount, TeraSort. Main results: LXC offered the best combination of isolation and performance. Lacks: same as [14]; no comparison with hypervisors; MPI not used.
  – [16] Hardware usage of Docker for some applications (I/O-bound included). Benchmark: Bonnie++. Main results: disk usage on Docker is close to a native environment. Lacks: same as [14] and [15].

Experimental method
● We used a methodology based on 5 steps [18]
● 30 runs per test; mean and standard deviation reported
● 2 factors (2 levels per factor)
● 4 environments on a single server
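The per-test statistics above can be sketched as follows (the 30 throughput values are hypothetical placeholders, not measured data from the paper):

```python
import statistics

# Hypothetical throughputs (MB/s) for the 30 runs of one test configuration;
# real values would come from the benchmark output.
runs = [412.0 + (i % 7) * 1.5 for i in range(30)]

mean = statistics.mean(runs)    # reported central value per test
stdev = statistics.stdev(runs)  # sample standard deviation across the 30 runs
print(f"mean = {mean:.2f} MB/s, stdev = {stdev:.2f} MB/s")
```

Reporting the standard deviation alongside the mean is what makes the variability observations in the results slides possible.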

Experimental method
● Communication patterns adopted in the tests
  – [Figure: Aggregated, Competing, and Cooperating access patterns]
● Explain it, please!

Results: 1K/Aggregated
● Writing:
  – High variability on native and LXC (CRAZY!)
  – KVM: low variability
    ● Memory buffers before writing
  – LXC performs better
● Reading:
  – Concurrency reduces LXC performance
  – Concurrency also reduces the standard deviation!
    ● Because inside each abstraction the amount of data processed is also reduced

Results: 128M/Competing
● Writing:
  – High variability on native and LXC (AGAIN!)
  – Parallelism reduces performance on KVM
● Reading:
  – Concurrency generates variability
  – But it did not impact performance
  – KVM and LXC performed similarly here! (WHY?)

Results: 1K/Cooperating
● Writing:
  – Same performance for 1 instance
  – More abstractions, lower performance
● Reading:
  – Same performance for 1 instance
  – Resource sharing affects KVM more than LXC

Results: 128M/Cooperating
● Writing:
  – Odd results on Virt2-KVM
    ● I do not know why, sorry!
  – Low variability
● Reading:
  – Cooperative reading is strongly affected by the splitting of resources
    ● TCP stack?
  – Worse than with 1K files. Why?

Conclusions
● Increasing the number of abstractions running on the same host reduces the performance of the environment
● The file size impacts performance
  – Latency has a greater impact on small files
● The use of communication routines does not impact performance
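The latency effect on small files can be illustrated with a minimal sketch (the file counts and sizes below are illustrative, not the paper's actual workload):

```python
import os
import tempfile
import time

def time_writes(directory, count, size):
    """Write `count` files of `size` bytes each and return the elapsed time.
    With many small files, per-file open/close latency dominates; with one
    large file, raw write bandwidth dominates."""
    payload = b"x" * size
    start = time.perf_counter()
    for i in range(count):
        with open(os.path.join(directory, f"f{i}.bin"), "wb") as f:
            f.write(payload)
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as d:
    t_small = time_writes(d, count=1024, size=1024)    # 1024 x 1 KiB files
    t_big = time_writes(d, count=1, size=1024 * 1024)  # one 1 MiB file
    print(f"1024 small files: {t_small:.4f}s; one large file: {t_big:.4f}s")
```

The same total amount of data is written in both cases; any gap between the two timings comes from the per-file overhead, which is exactly the cost that virtualization layers amplify for small-file workloads.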

References


The end
● Thank you :)
● Thanks to UPE, UFS, CAPES, the IEEE SMC Society and University Panthéon-Sorbonne for the financial support for this work
