PVM Emulation in the Harness Metacomputing Framework – Design and Performance Evaluation

Dawid Kurzyniec, Vaidy Sunderam
Dept. of Math and Computer Science, Emory University
1784 North Decatur Road, Suite 100, Atlanta, GA 30322, USA
[email protected]

Mauro Migliardi
University of Genoa, DIST
Via Opera Pia 13, 16145 Genoa, Italy
[email protected]

Abstract

This paper describes the Harness-PVM emulation module, which makes it possible to run legacy PVM applications within the Harness metacomputing environment. We present benchmark results suggesting that component- and Java-based frameworks can provide substantial benefits without significantly compromising performance. Augmenting or implementing grids with such component-based systems can thus be advantageous in a number of situations.

1. Introduction

In recent years, novel infrastructures in the form of computational grids have transformed many aspects of distributed computing. However, reuse of, and dependence on, legacy application codes remains an issue. Harness [4] is a metacomputing framework based upon the concept of an extensible, reconfigurable virtual machine. It introduces a modular architecture built around a lightweight software backplane that is configured by attaching additional software components, i.e. plugins. Within such an infrastructure, different (and multiple) programming-paradigm emulators may be deployed, enabling the execution of legacy codes and providing a convenient transition path. At the time of writing, plugins emulating both PVM and MPI have been developed for Harness. In this paper, we focus on the PVM plugin for Harness, which effectively emulates the PVM runtime system [2], retains binary compatibility with existing PVM applications, and, due to its modularity, significantly decreases maintenance and development costs.
2. Related Work

The Legion metacomputing system provides PVM support [3] through a modified version of the PVM library. "JPVM" [1] is a pure Java implementation of a PVM-like system. It provides a PVM-like communication API in Java and, as such, is not interoperable with traditional PVM applications. A different approach is taken in the "jPVM" project [5], whose goal is to enable cooperation between Java applications and the traditional PVM runtime via Java language bindings to the PVM library.

3. Design of the Harness PVM Plugin

Figure 1. PVM Emulation in Harness.

The architecture of the Harness PVM emulator is shown in Fig. 1. Every node of a Distributed Virtual Machine (DVM) hosts a Harness kernel running a set of appropriate plugins. The hpvmd is the main plugin, emulating a PVM daemon. The hpvmd is not self-contained but depends on other, general-purpose plugins that provide standard facilities. These plugins may be reconfigured and substituted on the fly. For instance, specialized communication protocols may be utilized via alternative transport plugins. As another example, a staging facility may be implemented within a spawner plugin, immediately furnishing the emulator with functionality beyond the original PVM.
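The on-the-fly substitution of plugins described above can be sketched as follows. This is an illustrative assumption only: the interface and class names (Kernel, TransportPlugin, TcpTransport, UdpTransport) are invented for the example and are not the actual Harness API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical transport abstraction: hpvmd would depend only on this
// interface, never on a concrete protocol implementation.
interface TransportPlugin {
    void send(String dest, byte[] payload);
}

class TcpTransport implements TransportPlugin {
    public void send(String dest, byte[] payload) {
        System.out.println("TCP send to " + dest + ": " + payload.length + " bytes");
    }
}

class UdpTransport implements TransportPlugin {
    public void send(String dest, byte[] payload) {
        System.out.println("UDP send to " + dest + ": " + payload.length + " bytes");
    }
}

// A toy kernel that resolves plugins by name, so a registered plugin can be
// replaced at runtime without restarting its clients.
class Kernel {
    private final Map<String, Object> plugins = new HashMap<>();
    void register(String name, Object plugin) { plugins.put(name, plugin); }
    Object lookup(String name) { return plugins.get(name); }
}

public class PluginDemo {
    public static void main(String[] args) {
        Kernel kernel = new Kernel();
        kernel.register("transport", new TcpTransport());
        // The daemon uses whatever transport is currently registered...
        ((TransportPlugin) kernel.lookup("transport")).send("node2", new byte[64]);
        // ...and the kernel can substitute an alternative on the fly.
        kernel.register("transport", new UdpTransport());
        ((TransportPlugin) kernel.lookup("transport")).send("node2", new byte[64]);
    }
}
```

The point of the sketch is the indirection: because hpvmd reaches its collaborators through the kernel by name, swapping a communication protocol is a registration change rather than a code change.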
Proceedings of the 2nd IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGRID’02) 0-7695-1582-7/02 $17.00 © 2002 IEEE
Figure 2. Throughput test, configuration with two CPUs per node.
4. Performance Results

We executed a set of benchmarks to evaluate the performance of Harness-PVM relative to traditional PVM. Our experiments included point-to-point throughput and latency tests, matrix multiplication, and three NAS [6] kernel benchmarks: EP, MG, and IS. Due to space constraints, we show only selected results here. The test environment consisted of 17 Sun Ultra 60 workstations, each with two 360 MHz UltraSPARC-II CPUs, 256-512 MB RAM, and a Fast Ethernet network adapter. All executable and data files were stored on a shared NFS filesystem. We used the Sun HotSpot Client Java VM from J2SE 1.3.0. The tests were performed for the Harness-PVM emulator and for two PVM variants, using TCP and Unix domain sockets for daemon-library communication, respectively, in the normal dual-CPU mode as well as in uni-CPU mode.

As Figure 2 shows, Harness-PVM exhibited up to 50% better throughput than traditional PVM in dual-CPU mode. In uni-CPU mode (not shown), the results for Harness-PVM and PVM were close to each other. These results show that the Java VM is able to exploit the multiprocessor platform more efficiently, scheduling hpvmd threads on both CPUs. However, the latency introduced by Harness-PVM turned out to be about 50-75% larger than that of PVM (latency figures are not shown).

Figure 3 presents the results of the NAS integer sort benchmark, which is characterized by especially intensive, small-volume communication patterns. This was the most demanding test for Harness-PVM due to its sensitivity to latency. However, the emulator overhead did not exceed 50% even in the worst case (4 nodes, uni-CPU mode), and it improved significantly in dual-CPU mode, once again showing better exploitation of the multiprocessor platform. In the remaining tests, Harness-PVM performance was very close to (sometimes even exceeding) the performance of the real PVM.
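The point-to-point tests above follow the standard ping-pong methodology: time many round trips of a fixed-size message and derive latency and throughput from the elapsed time. A minimal, self-contained sketch of that methodology (plain loopback TCP, with none of the PVM message framing or daemon routing the actual emulator involves) is:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.net.ServerSocket;
import java.net.Socket;

public class PingPong {
    // Returns { average round-trip time in microseconds, throughput in MB/s }.
    static double[] measure(int msgSize, int rounds) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread echo = new Thread(() -> {
                try (Socket s = server.accept()) {
                    byte[] buf = new byte[msgSize];
                    DataInputStream in = new DataInputStream(s.getInputStream());
                    OutputStream out = s.getOutputStream();
                    for (int i = 0; i < rounds; i++) {
                        in.readFully(buf);  // receive one full message...
                        out.write(buf);     // ...and echo it straight back
                    }
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
            echo.start();
            try (Socket s = new Socket("127.0.0.1", server.getLocalPort())) {
                s.setTcpNoDelay(true);  // avoid Nagle delays on small messages
                byte[] buf = new byte[msgSize];
                DataInputStream in = new DataInputStream(s.getInputStream());
                OutputStream out = s.getOutputStream();
                long t0 = System.nanoTime();
                for (int i = 0; i < rounds; i++) {
                    out.write(buf);
                    in.readFully(buf);
                }
                long elapsed = System.nanoTime() - t0;
                echo.join();
                double rttMicros = elapsed / 1e3 / rounds;
                // each round moves msgSize bytes in each direction
                double mbPerSec = 2.0 * msgSize * rounds / (elapsed / 1e9) / 1e6;
                return new double[] { rttMicros, mbPerSec };
            }
        }
    }

    public static void main(String[] args) throws Exception {
        double[] r = measure(64 * 1024, 50);
        System.out.printf("avg RTT: %.1f us, throughput: %.1f MB/s%n", r[0], r[1]);
    }
}
```

Latency is estimated from small messages (where per-message overhead dominates) and throughput from large ones (where the wire dominates), which is why an emulator can win on one metric and lose on the other, as observed here.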
Figure 3. NAS Integer Sort benchmark results, sorting 2^23 keys in the range [0, 2^19).
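The NAS IS kernel ranks integer keys drawn from a bounded range, and its sequential core is a counting sort. A sketch of that core on a scaled-down problem (the runs above used 2^23 keys in [0, 2^19); the parallel version additionally exchanges per-bucket counts between nodes, which is the source of the small, frequent messages discussed above):

```java
import java.util.Random;

public class CountingSort {
    // Counting sort for keys known to lie in [0, maxKey).
    static int[] sort(int[] keys, int maxKey) {
        int[] count = new int[maxKey];
        for (int k : keys) {
            count[k]++;                    // histogram the keys
        }
        int[] out = new int[keys.length];
        int pos = 0;
        for (int v = 0; v < maxKey; v++) { // emit each value count[v] times
            for (int c = 0; c < count[v]; c++) {
                out[pos++] = v;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int n = 1 << 16, maxKey = 1 << 12;  // scaled down from 2^23 / 2^19
        Random rnd = new Random(42);
        int[] keys = new int[n];
        for (int i = 0; i < n; i++) {
            keys[i] = rnd.nextInt(maxKey);
        }
        int[] sorted = sort(keys, maxKey);
        for (int i = 1; i < n; i++) {
            if (sorted[i - 1] > sorted[i]) throw new AssertionError("not sorted");
        }
        System.out.println("sorted " + n + " keys in [0, " + maxKey + ")");
    }
}
```

Because the computation per key is trivial, the benchmark's runtime is dominated by communication, which is what makes IS so sensitive to messaging latency.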
5. Conclusions

In this paper, we describe a component-based implementation of the PVM runtime emulator for the Harness metacomputing system. We argue that, due to its modular design, it can easily be extended beyond what the real PVM runtime was capable of. We present benchmark results relating the performance of the emulator to that of the real PVM for various parallel applications. We show that despite the use of Java technology, the overhead incurred by emulation is reasonably small even for the most demanding benchmarks, especially when multiprocessor architectures are utilized.
References

[1] A. Ferrari. JPVM: The Java parallel virtual machine. http://www.cs.virginia.edu/~ajf2j/jpvm/.
[2] A. Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Manchek, and V. Sunderam. PVM: Parallel Virtual Machine: A Users' Guide and Tutorial for Networked Parallel Computing. Scientific and Engineering Computation. MIT Press, Cambridge, MA, USA, 1994.
[3] R. R. Harper. Interoperability of parallel systems: Running PVM applications in the Legion environment. Technical Report CS-95-23, Department of Computer Science, University of Virginia, May 3, 1995.
[4] M. Migliardi and V. Sunderam. The Harness metacomputing framework. In Proceedings of the Ninth SIAM Conference on Parallel Processing for Scientific Computing, San Antonio, Texas, USA, March 22-24, 1999. Available at http://www.mathcs.emory.edu/harness/PAPERS/pp99.ps.gz.
[5] D. Thurman. jPVM: The Java to PVM interface, Dec. 1996. http://www.chmsr.gatech.edu/jPVM.
[6] S. White, A. Alund, and V. S. Sunderam. Performance of the NAS parallel benchmarks on PVM based networks. Journal of Parallel and Distributed Computing, 26(1):61-71, Apr. 1995.