On the Performance of Multithreading Applications under Private Cloud Conditions

Anderson M. Maliszewski¹, Dalvan Griebler¹,², Adriano Vogel¹,², Claudio Schepke³

¹Laboratório de Pesquisas Avançadas para Computação em Nuvem (LARCC), Sociedade Educacional Três de Maio (SETREM) – Três de Maio – RS – Brasil
²Programa de Pós-Graduação em Ciência da Computação, Pontifícia Universidade Católica do Rio Grande do Sul (PUCRS), Porto Alegre – RS – Brasil
³Universidade Federal do Pampa (UNIPAMPA), Laboratório de Estudos Avançados (LEA), Alegrete – RS – Brasil
E-mail: [email protected]

1. Introduction
2. Deployment

Our cloud deployment model is depicted in Figure 1. We configured one host as the front-end (cloud manager), while the other hosts were configured as computing nodes. The primary and secondary storage locations were exported from the front-end node via NFS and used to store the VM images, templates, and operating system images. A gigabit switch was used to build the network interconnection for our pool of hosts. In this conceptual model, each node uses a virtualization technology to create its instances. The user creates instances through the CloudStack user interface (installed on the front-end); CloudStack then sends a request to node 1, 2, ..., or N to create the requested LXC-based or KVM-based cloud instances. After the cloud instances are created, the PARSEC applications can be deployed and tested.

Figure 1. Cloud deployment model.
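For illustration, the sketch below shows how such an instance request could also be issued programmatically against the CloudStack HTTP API (deployVirtualMachine) instead of through the user interface. This is a minimal sketch: the endpoint address, credentials, and UUIDs are hypothetical placeholders, not values from our deployment.

    # Sketch: request one cloud instance via the CloudStack API.
    # API_URL, the keys, and the UUIDs below are hypothetical placeholders.
    import base64
    import hashlib
    import hmac
    import urllib.parse
    import urllib.request

    API_URL = "http://frontend.example:8080/client/api"  # placeholder front-end
    API_KEY = "YOUR_API_KEY"                             # placeholder credentials
    SECRET_KEY = "YOUR_SECRET_KEY"

    def signed_request(params: dict) -> bytes:
        params = {**params, "apikey": API_KEY, "response": "json"}
        # CloudStack signs the key-sorted, URL-encoded query string,
        # lowercased, with HMAC-SHA1 over the account's secret key.
        query = "&".join(
            f"{k}={urllib.parse.quote(str(v), safe='')}"
            for k, v in sorted(params.items())
        )
        digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(),
                          hashlib.sha1).digest()
        signature = urllib.parse.quote(base64.b64encode(digest), safe="")
        with urllib.request.urlopen(f"{API_URL}?{query}&signature={signature}") as resp:
            return resp.read()

    # Deploy one LXC- or KVM-backed instance; the IDs stand for the service
    # offering (vCPUs/RAM), template, and zone configured in CloudStack.
    print(signed_request({
        "command": "deployVirtualMachine",
        "serviceofferingid": "UUID-OF-OFFERING",
        "templateid": "UUID-OF-TEMPLATE",
        "zoneid": "UUID-OF-ZONE",
    }))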
3. Methodology

Our methodology is shown in Figure 2. In the experiments with dedicated resources, a single instance was deployed on the physical host, and the applications from the PARSEC suite running inside the instance had access to the full machine resources. In contrast, in the experiments with shared resources, two instances were deployed on the same physical host, each with access to half of the machine resources. As a good practice, the memory (RAM) allocated to the instances was 90% of the full host capacity. In both environments, each benchmark was executed 10 times, as sketched after Figure 2. Table 1 shows a short description of the PARSEC applications used.

Table 1. Short description of the PARSEC 3.0 benchmarks used in this paper.
Figure 2. High-level representation of the methodology followed in the evaluation.
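The measurement loop itself is straightforward; a minimal sketch is given below, assuming PARSEC 3.0's parsecmgmt management script is available on the PATH inside each instance. The thread range and the CSV-style output are our conventions for this sketch, not part of PARSEC.

    # Sketch: run each selected PARSEC benchmark 10 times per thread count.
    # Assumes parsecmgmt (PARSEC 3.0) is on the PATH inside the instance.
    import subprocess
    import time

    BENCHMARKS = ["canneal", "ferret", "x264", "bodytrack", "dedup"]
    MAX_THREADS = 8   # 8 for the dedicated instance, 4 for each shared instance
    RUNS = 10

    for bench in BENCHMARKS:
        for threads in range(1, MAX_THREADS + 1):
            for run in range(RUNS):
                start = time.perf_counter()
                subprocess.run(
                    ["parsecmgmt", "-a", "run", "-p", bench,
                     "-i", "native", "-n", str(threads)],
                    check=True, capture_output=True)
                elapsed = time.perf_counter() - start
                print(f"{bench},{threads},{run},{elapsed:.2f}")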
4. Performance Evaluation

Our goal is to compare two cloud environments (dedicated: one instance with the full machine resources, scaling up to 8 threads; shared: two instances, each with half of the machine resources, scaling up to 4 threads) against a native environment (Ubuntu 14.04) as the baseline, using different virtualization technologies (LXC v.1.0.8 and KVM v.2.0.0) under a private cloud (CloudStack 4.8). The tests were performed with a well-known suite of multithreaded applications (PARSEC 3.0), from which five benchmarks compiled with native inputs (Canneal, Ferret, x264, Bodytrack, and Dedup) were chosen. This work extends the cloud performance evaluation studies of [Griebler et al. 2018]. In addition, the number of threads was limited to the number of available vCPUs, and Hyper-Threading was intentionally disabled. The results are shown in Figures 3 to 6.

Figure 3. x264 Execution Times (Dedicated and Shared).
Figure 4. Canneal Execution Times (Dedicated and Shared).
Figure 5. Dedup Execution Times (Dedicated and Shared).
Figure 6. Performance Scaling of the Dedicated Machine Resources Environment (LXC/KVM).
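The overheads reported for these figures, and quoted in the conclusion, are relative differences of mean execution times against the native baseline. A worked sketch of that computation follows; the sample numbers are illustrative, not measured values.

    # Sketch: relative overhead of a cloud environment over the native
    # baseline, from repeated execution-time measurements.
    from statistics import mean

    def overhead_percent(cloud_times, native_times):
        """Percentage increase of the cloud mean over the native mean."""
        return (mean(cloud_times) - mean(native_times)) / mean(native_times) * 100

    native = [100.2, 99.8, 100.5]   # hypothetical execution times (seconds)
    kvm    = [118.4, 117.9, 118.6]  # hypothetical KVM-based instance times

    print(f"KVM overhead: {overhead_percent(kvm, native):.1f}%")  # ~18.1%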
5. Conclusion

The results demonstrated that performance in the cloud varies according to the specific characteristics of each application, environment, and virtualization technology. In some applications, a negligible overhead is noticed in both cloud scenarios. On the other hand, characteristics such as memory locality and management, as well as I/O-bound operations, tend to introduce significant overheads both in KVM-based instances (Canneal: about 18% with dedicated and 12% with shared machine resources, using 2 threads) and in LXC-based instances (Dedup: overhead of up to 53% with 7 threads). In the future, we plan to evaluate different application domains to discover different behaviors; include different IaaS management tools (e.g., OpenStack, OpenNebula); perform experiments with overprovisioning of computing resources; and run our experiments with other virtualization technologies (e.g., Hyper-V, VMware, Xen, Docker).
Acknowledgments

The authors would like to thank the HiPerfCloud project, CAPES, and FAPERGS for their partial support, and SETREM, PUCRS, and UNIPAMPA for their institutional support.
6. References
[Griebler et al. 2018] Griebler, D., Vogel, A., Maron, C. A. F., Maliszewski, A. M., Schepke, C., and Fernandes, L. G. (2018). Performance of data mining, media, and financial applications under private cloud conditions. In 23rd IEEE Symposium on Computers and Communications (ISCC), Natal, Brazil. IEEE.
Laboratory of Advanced Research on Cloud Computing (LARCC) - http://larcc.setrem.com.br