Optimising Heterogeneous Task Migration in the Gardens Virtual Cluster Computer

Ashley Beitz, Simon Kent and Paul Roe
School of Computing Science, Queensland University of Technology, Australia
a.beitz, [email protected], [email protected]

Abstract

Gardens is an integrated programming language and system designed to support parallel computing across non-dedicated cluster computers, in particular networks of PCs. To utilise non-dedicated machines a program must adapt to those currently available. In Gardens this is realised by over-decomposing a program into more tasks than processors, and migrating tasks to implement adaptation. To be effective this requires efficient task migration. Furthermore, non-dedicated clusters typically contain different machines, hence heterogeneous task migration is required. Gardens supports efficient task migration between heterogeneous machines via meta-information which completely describes a task's state. By identifying different degrees of heterogeneity and different kinds of tasks, we are able to optimise task migration. The main contribution of this paper is to show how heterogeneous task migration may be optimised.
1. Introduction

In the aggregate, networks of workstations represent a huge and cheap unused computing resource. By their very nature such non-dedicated cluster computers are dynamic: the workstations available to a computation will typically change during the run of a program as workstation users come and go. Thus programs must adapt to the changing availability of workstations. The Gardens system [28] is an integrated programming language and system targeted at non-dedicated cluster computers. The goals of Gardens are: adaptation, safety, abstraction and performance (ASAP!). These are realised in part by a modern object-oriented programming language, Mianjin [27], a derivative of Pascal. The Gardens system and Mianjin programming language are custom designed
and built; thus we have complete control over both of these. Gardens utilises task migration to realise adaptation. A program is over-decomposed into more tasks than processors, and tasks are migrated in response to changing workstation loads. This adaptation is transparent to the programmer. Typically workstation networks comprise a collection of different machines. Thus efficient use of such machines entails heterogeneous computing and heterogeneous task migration, the subject of this paper. Task migration is integrated into both Gardens and the Mianjin compiler. Tasks communicate via a virtual shared object space. Tasks may reference objects belonging to other tasks; to communicate, they invoke methods on such remote objects. This is the only way tasks may communicate: tasks cannot otherwise share data. The main contribution of this paper is to show how efficient heterogeneous task migration may be achieved, to realise adaptive utilisation of workstation clusters; however, it should be noted that there are other uses for task migration, e.g. implementing automatic fault tolerance by migrating tasks to disk. The next section summarises our techniques for achieving heterogeneous task migration. Section 3 presents a more detailed look at the implementation. Some performance figures are reported in Section 4. Section 5 presents related work, and the final section discusses the work and future directions.
2. Task migration

To implement task migration Gardens uses meta-information which fully describes a task's state. This meta-information is generated by the Gardens Mianjin compiler. The meta-information is similar to that available in Java, although in addition to heap objects our meta-information also describes stack frames. A prerequisite for task migration is a safe language. For example, we cannot allow a
pointer to masquerade as an integer or vice versa, since integers and pointers may require different translation under task migration (see Section 3). Task migration is only supported at predetermined call points in the program. These migration points may be inserted manually by the programmer or automatically by the compiler. At these points the compiler generates the additional code and meta-information to support task migration. This can support both preemptive and non-preemptive task migration. Our current compiler does not support optimised code, although this is currently under investigation. To simplify migration we arrange for data structures to have common alignments and sizes across all platforms; we can do this because we have a custom system, language and compiler. A foreign language interface mechanism supports interoperability, but we do not support task migration within such code. Meta-information is used to recover a task's state. A task's state comprises its stack, heap and global variables; registers and the PC are flushed to the stack. We do not migrate OS process state; our goal is to handle such state within wrapper libraries that can be migrated. The stack and heap are similar in that both can be viewed as collections of tagged records: the heap contains tagged objects, the stack contains tagged activation records, and all objects in stack frames are statically known at compile time. When required, a task's state is transformed into a state suitable for the target machine. In general this is done lazily, since the task may initially be saved to stable storage and so its destination may be unknown; the information needed for the transformation, e.g. stack and heap base addresses, only becomes available at task load time. To make task migration efficient we use optimisations based on different kinds of tasks and different degrees of platform heterogeneity. This is described in the following sections.
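Before turning to those optimisations, the following minimal C sketch illustrates what a compiler-inserted migration point might look like. The flag and function names (gdn_migration_pending, gdn_capture_task) are hypothetical and stand in for the real Gardens runtime interface; the point is only that a task can be captured where its state is completely described by the generated meta-information.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical runtime hooks (not the real Gardens API). */
static volatile bool gdn_migration_pending = false; /* set by the scheduler */

static void gdn_capture_task(void)
{
    /* registers and the PC have already been flushed to the stack;
       hand the task's stack, heap and globals to the migrator */
    puts("capturing task state at a safe point");
}

/* At predetermined call points the compiler emits a check like this. */
static void migration_point(void)
{
    if (gdn_migration_pending)
        gdn_capture_task();
}

long sum(long n)                 /* ordinary user code */
{
    long total = 0;
    for (long i = 1; i <= n; i++) {
        total += i;
        migration_point();       /* inserted manually or by the compiler */
    }
    return total;
}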
2.1. Different kinds of tasks

There are three kinds of tasks in Gardens, which have different migration requirements. These different kinds of tasks can be distinguished by the runtime system. Seed tasks are newly created tasks which have never been run. They comprise just the initial data passed in a create task operation; they have no stack or heap and hence are trivial to migrate. Seed tasks are stored in a separate structure from other tasks until they are run, and hence are easily distinguished from other tasks. Stackless tasks have no stack. They correspond to an inverted programming style, as often used in event-driven programming where control must periodically return to an event loop. Such tasks require only heap migration, since their stacks are empty, which can be considerably simpler than full task migration; some early work on this was reported in [29]. A stackless task may also be a task that has completed its main thread of execution but has "actions" to perform or objects in its heap which are referenced by other tasks. Full tasks have stacks and heaps, both of which must be migrated. These are the most expensive tasks to migrate.
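As an illustration of how the runtime might exploit this distinction, a dispatch over the three task kinds could look roughly like the C sketch below. The structure and helper names are invented for the example; the real Gardens task descriptor is richer.

#include <stddef.h>

typedef enum { TASK_SEED, TASK_STACKLESS, TASK_FULL } task_kind;

typedef struct task {
    task_kind kind;
    void     *create_args;   /* seed tasks: only the initial create-task data */
    void     *heap;          /* stackless and full tasks */
    void     *stack;         /* full tasks only */
} task;

/* Placeholder transfer routines standing in for the real migration code. */
static void send_bytes(const void *p, size_t n)  { (void)p; (void)n; }
static void migrate_heap(task *t)                { send_bytes(t->heap, 0); }
static void migrate_stack(task *t)               { send_bytes(t->stack, 0); }

static void migrate_task(task *t)
{
    switch (t->kind) {
    case TASK_SEED:        /* never run: no stack, no heap - trivial */
        send_bytes(t->create_args, 0);
        break;
    case TASK_STACKLESS:   /* inverted style: heap only */
        migrate_heap(t);
        break;
    case TASK_FULL:        /* most expensive: heap and stack */
        migrate_heap(t);
        migrate_stack(t);
        break;
    }
}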
2.2. Degrees of heterogeneity

There is a spectrum of degrees of system heterogeneity:

0. Same architecture, statically linked code, same heap and stack base addresses: a completely homogeneous platform. For such platforms no state transformation is required and migration corresponds to a straight memory copy from one machine to another. However, in practice few modern platforms are this simple.

1. Same platform, but different base addresses (e.g. due to dynamic linking and loading): since all structures have common sizes and alignments, and all stack frames have the same representation, task migration only requires stack and heap pointers to be adjusted to the new stack and heap base offsets. This requires meta-information to locate all pointers in stack frames and heap objects.

2. Different architecture, but same word size: for heaps this requires the pointer adjustments described above plus endian adjustments. In the case of migrating a stack, the stack must be rebuilt with different activation record conventions, e.g. stack mark information, register window flushing for SPARC, etc. This can be very expensive to perform.

3. Different architecture and different word size: at present we do not address this level of heterogeneity. Note that some 64-bit processors are capable of running 32-bit code, which may prove useful.
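The degree of heterogeneity between a pair of hosts determines which transformations a migration needs. A minimal sketch of that decision follows, assuming each host advertises its architecture, endianness, word size and base addresses; the structure and names are illustrative, not the Gardens runtime.

#include <stdbool.h>
#include <string.h>

typedef struct {
    const char *arch;        /* e.g. "i386", "sparc" */
    bool        big_endian;
    int         word_size;   /* in bytes */
    void       *heap_base;
    void       *stack_base;
} host_info;

typedef struct {
    bool supported;          /* false for degree 3 (different word sizes) */
    bool rebase_pointers;    /* degrees 1 and 2 */
    bool swap_endian;        /* degree 2 with differing endianness */
    bool rebuild_frames;     /* degree 2: new activation record conventions */
} migration_plan;

static migration_plan plan_migration(const host_info *src, const host_info *dst)
{
    migration_plan p = { true, false, false, false };

    if (src->word_size != dst->word_size) {          /* degree 3: unsupported */
        p.supported = false;
    } else if (strcmp(src->arch, dst->arch) != 0) {  /* degree 2 */
        p.rebase_pointers = true;
        p.rebuild_frames  = true;
        p.swap_endian     = (src->big_endian != dst->big_endian);
    } else if (src->heap_base  != dst->heap_base ||
               src->stack_base != dst->stack_base) { /* degree 1 */
        p.rebase_pointers = true;
    }
    /* otherwise degree 0: a straight memory copy suffices */
    return p;
}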
3. Implementation

3.1. Meta-information

All objects (records and arrays) in Gardens have an identifying "tag" located in the two words before the logical start of the object in memory. One word is used for garbage collection purposes while the second points to the object's type descriptor; these type descriptors serve two purposes. Firstly, a type descriptor holds typical run time information such as dimension and element size for arrays and
method, ancestor and pointer tables for records, to enable construction and polymorphism. Secondly, the type descriptors contain links to complete meta-information generated by the Gardens compiler to aid task migration. This meta-information maps out the size, location and type of the fields or elements of the type in question, which allows all objects to be traversed at runtime. Furthermore, the meta-information contains a link to a descriptor for the type's module as well as a per-module index that is assigned to the type at compile time. Together the module ID and per-module index form a unique pair that may be used to identify the type across heterogeneous platforms. In addition to the meta data for types, meta data for procedures is generated as well. Procedural meta data maps out procedure entry addresses, local variable and parameter information (number, type and frame offset), the possible frame offset of saved display values, and a module descriptor link and per-module index similar to those described for types. This information allows a stack frame to be traversed, given its location via a frame pointer, and allows procedures (appearing as function pointer values or as activations on the stack) to be uniquely identified across platforms. Finally, the module descriptor contains compile time stamps as well as three tables mapping per-module indexes to concrete addresses: the first two tables correspond to the type and procedure indexes, while the third maps possible migration points.
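A much-simplified C rendering of the descriptors just described may help fix ideas. The field names and layout below are guesses made for illustration only and are not the actual Gardens structures.

#include <stddef.h>
#include <stdint.h>

struct type_desc;

/* Per-module descriptor: maps platform-independent indexes to addresses. */
typedef struct module_desc {
    uint32_t           stamp;              /* compile time stamp for version checks */
    struct type_desc **type_table;         /* per-module index -> type descriptor   */
    void             (**proc_table)(void); /* per-module index -> entry address     */
    void             **migration_points;   /* per-module index -> migration point   */
} module_desc;

/* Type descriptor: run time information plus migration meta-information. */
typedef struct type_desc {
    size_t         size;                   /* object size in bytes                  */
    const int32_t *pointer_table;          /* offsets of pointer fields, -1 ends    */
    module_desc   *module;                 /* owning module ...                     */
    uint32_t       index;                  /* ... and per-module index: together a  */
                                           /* platform-independent type identity    */
    /* method table, ancestor table, array element size/dimension, etc.             */
} type_desc;

/* Every object is preceded by a two-word tag. */
typedef struct object_tag {
    uintptr_t  gc_word;                    /* garbage collection bookkeeping        */
    type_desc *type;                       /* an address into the data segment, so  */
                                           /* it must be translated on migration    */
} object_tag;

static inline object_tag *tag_of(void *obj)
{
    return (object_tag *)obj - 1;          /* tag lives just before the object      */
}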
3.2. Heap migration

Each heap segment in Gardens comprises two logical parts: the contiguous memory in which objects are allocated and the run time information for managing that memory. Because we designed and implemented the Gardens compiler and run time system, we have been able to ensure that:

1. Objects of identical type are aligned identically across platforms.
2. Heap segment structure is identical across platforms.
3. Heap run time information is logically identical across platforms.

This makes heap migration relatively simple; all that is required is a few changes to the heap segment's representation. Furthermore, since all hosts in the Gardens environment have complete information about the characteristics (architecture and operating system) of the other hosts, these representation changes may be made directly by the source or destination host; packing to and unpacking from an intermediate representation is avoided.
Representation changes fall into three categories:

1. Pointer rebasing
2. Endian adjustment
3. Code/data segment address translation

Pointer rebasing is necessary when migrating between hosts with degree of heterogeneity (1) or (2). It involves traversing all pointer fields within objects in the heap segment and adjusting any non-null pointers by the appropriate heap offset. Pointer rebasing is performed by the source host and, in the case of objects in the heap, requires only the pointer table found in the type descriptor. Endian adjustment requires a full traversal of all objects in the heap segment, byte swapping fields of the necessary sizes. This is only required in degree (2) cases in which the machines are of different endianness. Endian swapping is generally performed by the source host. Code/data segment address translation is also only required in degree (2) cases. Since the layout of code and data segments will not be identical across heterogeneous platforms, pointers into the code or data segments cannot simply be "rebased". In Gardens, however, the only candidates for code and data segment pointers in the heap are procedure variables and the type descriptor addresses present in each object tag. These are replaced with procedure and type module ID/per-module index identifiers on the source side and replaced with the host-specific addresses on the destination side. The main task of heap migration in Gardens thus breaks down into performing the above transformations on the objects in the heap segment and on the run time information describing the heap itself. The objects in the heap segment may be located by scanning a heap object bitmap that is maintained by the run time system for memory management and garbage collection. The type and meta-information of each object is then obtained by inspecting the object's tag, and the object may be traversed and transformed as necessary. The run time information for the heap consists of the portion of the heap object bitmap relevant to the heap segment, a heap descriptor located at the start of each heap segment and the list of free blocks for the heap. The heap object bitmap contains no pointers and only needs endian adjustments; however, the heap object bitmap remains in use on the source host after heap migration has occurred (to mark objects and heap segments as remote). Therefore, endian adjustments to the heap object bitmap are performed by the destination host if necessary. The heap descriptor and free block list are treated like any other objects and are transformed correspondingly.
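As a rough illustration of the first two transformations, the following C sketch rebases the pointer fields of one object using a pointer table like the one sketched in Section 3.1, and byte-swaps one 32-bit field. It is a simplified stand-in for the real code; the helper names are invented.

#include <stddef.h>
#include <stdint.h>

/* Rebase every non-null pointer field of one object.  pointer_table holds the
   byte offsets of pointer fields and is terminated by -1, as in the type
   descriptor sketch above; heap_offset is (new base - old base). */
static void rebase_object(void *obj, const int32_t *pointer_table,
                          ptrdiff_t heap_offset)
{
    for (int i = 0; pointer_table[i] >= 0; i++) {
        char **field = (char **)((char *)obj + pointer_table[i]);
        if (*field != NULL)
            *field += heap_offset;
    }
}

/* Endian adjustment of one 32-bit field; a full traversal applies this (or
   the 16/64-bit equivalents) to every field of the appropriate size. */
static uint32_t swap32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0x0000ff00u) |
           ((v << 8) & 0x00ff0000u) | (v << 24);
}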
3.3. Stack migration

Stack segments in Gardens comprise a runtime stack and a task descriptor. The task descriptor stores stack and context information along with some programmer-definable task property objects. Transformation of the task descriptor is straightforward, as it is for heap descriptors. The method of runtime stack transformation, however, depends upon the degree of heterogeneity between hosts. Degree (0), of course, requires no modifications to the stack. Degree (1) requires only pointer rebasing, as in degree (1) heap migration. The candidates for pointer rebasing in stack migration are no longer just pointer fields in structures but also pointers in local variables and parameters, VAR parameters, display pointers saved in stack frames and the frame pointers themselves. The stack is traversed using the instruction pointer (or return address) and frame pointer of each stack frame to obtain the corresponding procedure's meta-information. For degree (2), all three representation transformations described above need to be performed. In addition, the layout of each of the stack frames needs to be restructured to match that of the destination host. To achieve this, the source host traverses the stack and deconstructs it into an abstract stack while performing the representation transformations. A list of all VAR parameters that point into the stack, together with the abstract stack address each points at, is also constructed by the source host. The abstract stack is similar to the concrete stack in some ways: each concrete frame has a corresponding abstract frame, and each abstract frame has a parameter section, stack mark, local variable section and workspace (for value open arrays and value reference records). However, the abstract stack format differs from the concrete stack in the following ways:

- Abstract frames are in the opposite order to those in the concrete stack, with each abstract frame pointer pointing to the frame after it rather than the frame before it.
- Along with an abstract frame pointer, each abstract stack mark contains a module ID/per-module index identifier for the return address and a similar identifier for the relevant procedure.
- Parameters and local variables are stored in order from the abstract stack mark, as the frame offsets of parameters and local variables differ across platforms.

Once the destination host has received the abstract stack, it rebuilds a stack particular to its architecture using a novel approach. For each stack frame, the parameters are first loaded from the abstract stack (into the concrete stack or
the parameter registers). A context switch is performed to the new stack and a dummy procedure prologue is called for the appropriate procedure. This allocates appropriate stack space, updates the display vector and copies any value arrays into the appropriate position in the workspace. Context is switched back to the original stack and the correct return address is inserted into the newly allocated stack frame. Local variables are copied to their respective positions and code/data segment translations are performed. Finally, the VAR parameter list is checked to see if memory pointed to by a VAR parameter down the stack has been copied to the concrete stack. If so, the VAR parameter value is adjusted to reflect the change. The stack frame is then complete. Finally, the stack/task descriptor requires some minor changes to reflect the state of the stack on the new host.
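One plausible C shape for the abstract stack exchanged between hosts is sketched below. The concrete field list is invented for illustration; the real Gardens format certainly differs in detail.

#include <stddef.h>
#include <stdint.h>

/* One abstract frame: everything is either platform independent (module
   ID/per-module index pairs) or packed in declaration order rather than at
   platform-specific frame offsets. */
typedef struct abstract_frame {
    uint32_t ret_module, ret_index;     /* identify the return address        */
    uint32_t proc_module, proc_index;   /* identify the relevant procedure    */
    struct abstract_frame *next;        /* frames linked in reverse order     */
    size_t   param_bytes;               /* sizes of the packed sections ...   */
    size_t   local_bytes;
    size_t   workspace_bytes;           /* value open arrays, value records   */
    unsigned char data[];               /* parameters, locals, workspace      */
} abstract_frame;

/* VAR parameters that point back into the stack are recorded separately so
   the destination host can patch them once frames have been re-laid-out. */
typedef struct var_param_fixup {
    abstract_frame *frame;              /* frame holding the VAR parameter    */
    size_t          param_offset;       /* offset of the parameter in data[]  */
    size_t          target_offset;      /* abstract-stack location it named   */
    struct var_param_fixup *next;
} var_param_fixup;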
4. Performance

The measurements below were taken on a 233 MHz Pentium II with 96 MB RAM running RedHat Linux v5.2, and a Sun 4 SPARC with 32 MB RAM running Solaris 2.5.1. For degrees of heterogeneity (0) and (1), figures are given for machines running Linux as specified above. For degree (2), (LS) indicates migration from Linux to Solaris and (SL) indicates migration from Solaris to Linux. The measurements are for a recursive sum program with approximately 40 stack frames and a linked list program with 1000 objects in the heap. Each set of measurements presents the time taken for task transformation only (no communication times are included), split between the time taken by the source and destination hosts to perform the necessary transformations.
Seed Task Migration

  Degree    Source (µs)   Dest. (µs)
  0              6             4
  1              8             4
  2(LS)          9             8
  2(SL)         11             4

Stackless Task Migration (Linked List)

            Heap                        Stack
  Degree    Source (µs)   Dest. (µs)    Source (µs)   Dest. (µs)
  0              0             0             0             0
  1            643             0            33             0
  2(LS)       4489         27651           639           945
  2(SL)      57200          2461          7832            48

Full Task Migration (Linked List)

            Heap                        Stack
  Degree    Source (µs)   Dest. (µs)    Source (µs)   Dest. (µs)
  0              0             0             0             0
  1            646             0          1055             0
  2(LS)       4747         30083          1213          7370
  2(SL)      59383          2470         12470           830

Full Task Migration (Recursive Sum)

            Heap                        Stack
  Degree    Source (µs)   Dest. (µs)    Source (µs)   Dest. (µs)
  0              0             0             0             0
  1             29             0          1135             0
  2(LS)         34           819          1458         10698
  2(SL)         89            86         13951           980
The above figures clearly show the advantages of identifying and targeting both the different degrees of heterogeneity and the different classes of tasks to migrate. In the context of task creation and initial load balancing, it is clear that seed task migration holds the greatest advantage for all degrees of heterogeneity, especially since seed tasks incur the smallest communication costs due to their size. Similarly, the speed-up of degree (1) heterogeneous migration over degree (2) migration (twice as fast for stack migration, ten times as fast for heap migration) is considerable. The heap migration figures reflect the use of pointer tables for pointer rebasing in degree (1) migration; this suggests a similar pointer map should be implemented for stack frames. Of note are the times for the stack and task descriptor transformations for stackless and full task migration. The full task recursive sum stack transformation, with 40 stack frames, is only 10% to 20% slower than the full task linked list stack transformation. We believe this is due to inefficiencies in our current method of loading meta-information. This is further illustrated by the stack transformation (really task descriptor transformation) figure for the degree (2) stackless task migration: the full meta-information for the programmer-defined task properties object must be loaded, whereas only pointer tables are required for degree (1).
5. Related Work

There are three main approaches to task migration across heterogeneous platforms [32]. The first approach assumes
that all tasks will execute on a virtual machine that is available on all nodes in the system, for example the Java Virtual Machine. The second and third approaches both assume that tasks will execute on their node's native machine. To do this they need to generate meta-information on the executing task, so that they can translate the task's execution state from one native machine's format to another native machine's format. The second approach relies on code being included in the task's source code to collect this meta-information. This can either be done manually by the programmer or automatically by a pre-processor. The third approach relies on the compiler and runtime system to generate the meta-information. The first approach is much simpler than the other two, as the use of a common execution environment reduces the problem to one that can be solved via a homogeneous migration solution. This approach was initially used by Chameleon [12]. Today it is widely used by mobile agent systems, such as Agent TCL [16], Aglets [18], ARA [22], Concordia [11], Extended Facile [23], Liquid Software [13], Mole [2], Obliq [5], Odyssey [15], Omniware [20], Sumatra [1], TACOMA [39] and Telescript [40]. Despite this approach's simplicity, it suffers performance penalties from the use of a virtual machine. Some solutions [20, 13] alleviate this problem by using "on-the-fly compilation" to translate parts of the task's code to native code. However, the native code produced is still 25 percent slower than regular native code [1], as safeguards must be included to protect the execution environment from being corrupted by the native code. The second and third approaches provide better performance than the first, as they allow tasks to execute directly on the native machine. The second approach is more portable than the third, as it does not require a specialised compiler. However, the third approach delivers better runtime performance, as it does not need to generate all of its meta-information at runtime, and the migration mechanism is more transparent to the programmer. Examples of the second approach include: HMF [21], Process Inspection [14], HiCaM [25], Ythreads [30], Arachne [8], DOME [31], Shao and Schnabel's work [32] and MpPVM [6]. Examples of the third approach include: Emerald [35], Tuis [34], Shub, Dubach and Rutherford's work [9, 10], Hollander and Silberman's work [17], Distributed C [24], Theimer and Hayes' work [38] and porch [36]. Our work is based on the third approach, as it offers the best performance. With heterogeneous task migration, most research has focused on how to reconstruct the task's state [38, 3, 7, 30], the location of migration points [4, 35] and analysing the safety aspects of this approach [34, 19]. Very little research has been done on optimising migration based on different kinds of tasks and different degrees of heterogeneity.
Most of the work in this area has been done by the University of Colorado at Colorado Springs [10, 33, 9]. The most significant contribution originating from their work is the idea of ensuring that compilers generate code with the same data alignment rules. Our work builds on what they have done by providing optimised translation based on the task type and the degree of heterogeneity between platforms.
6. Discussion and Further Work

A basic heterogeneous task migration system has been implemented and initial results are promising. We are currently working on a revised system, using local compiler back-end technology, which supports the migration of tasks using optimised code. To make the migration process even more efficient we are looking at optimising our meta-information. Current performance figures suggest that our current meta-information and the corresponding traversal techniques may be over-complicated. To this end, a generalised version of the pointer map scheme, using bitmaps to record the required actions for stack frames and objects, is being considered (see the sketch at the end of this section). Other methods of optimising the meta-information include compression and lazy loading. A general problem is how to deal with non-migratable resources such as I/O and file handles; our current solution is to retain remote references to them [26]. An interesting alternative is to migrate tasks for which no target mapping is defined to the JVM, using e.g. the Java Platform Debugger Architecture [37]. Task migration may also be generalised to encompass dynamic software reconfiguration. We have yet to study degree 3 heterogeneity, where e.g. word sizes and alignments of data may differ between platforms; this is particularly challenging.
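A hedged sketch of the bitmap pointer-map idea mentioned above (not an implemented Gardens mechanism): one bit per word of a frame or object could mark the words holding pointers that need rebasing, so traversal becomes a simple scan rather than a walk over per-field meta-information.

#include <stddef.h>
#include <stdint.h>

/* Rebase the pointer-holding words of a frame or object, as identified by a
   per-type or per-procedure bitmap (bit w set => word w holds a pointer). */
static void rebase_by_bitmap(void *base, size_t words,
                             const uint8_t *bitmap, ptrdiff_t offset)
{
    for (size_t w = 0; w < words; w++) {
        if (bitmap[w / 8] & (1u << (w % 8))) {
            char **p = (char **)base + w;
            if (*p != NULL)
                *p += offset;
        }
    }
}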
Acknowledgements

We would like to thank S-Y Chan for his help writing the Gardens meta facility and other Gardeners for their useful discussions concerning task migration. This study has been supported by an Australian Research Council grant and the Gardens research project (www.plasrc.qut.edu.au/Gardens) of the Programming Languages and Systems Research Centre at QUT.
References

[1] A. Acharya, M. Ranganathan, and J. Saltz. Sumatra: A language for resource-aware mobile programs. In Mobile Object Systems. Springer, 1997.
[2] M. Strasser, J. Baumann, and F. Hohl. Mole - a Java based agent system. In Proceedings of the ECOOP'96 Workshop on Mobile Object Systems, 1996.
[3] D. G. V. Bank. GNU C language system support for dynamic native code heterogeneous process-originated migration in the V system. Master's thesis, University of Colorado at Colorado Springs, 1990.
[4] D. V. Bank, C. M. Shub, and R. W. Sebesta. A unified model of pointwise equivalence of procedural computations. ACM Transactions on Programming Languages and Systems, 14(6):1842–1874, Nov. 1994.
[5] L. Cardelli. Obliq, a language with distributed scope. Technical Report TR-127, Digital Equipment Corporation, Systems Research Center, Palo Alto, CA, 1994.
[6] K. Chanchio and X. Sun. MpPVM: A software system for non-dedicated heterogeneous computing. In Proceedings of the International Conference on Parallel Processing, 1996.
[7] K. Chanchio and X. H. Sun. Memory space representation for heterogeneous network process migration. In 12th International Parallel Processing Symposium, Mar. 1998.
[8] B. Dimitrov and V. Rego. Arachne: A portable threads system supporting migrant threads on heterogeneous network farms. In Proceedings of IEEE Parallel and Distributed Systems, volume 9, 1998.
[9] F. B. Dubach. Code-point and data mapping in dynamic native-code cross-architectural process migration. Master's thesis, University of Colorado at Colorado Springs, 1990.
[10] F. B. Dubach, R. M. Rutherford, and C. M. Shub. Process-originated migration in a heterogeneous environment. In ACM Seventeenth Annual Computer Science Conference, pages 98–102. ACM Press, 1989.
[11] D. W. et al. Concordia: An infrastructure for collaborating mobile agents. Number 1219 in Springer LNCS, 1997.
[12] G. A. et al. Techniques for dynamic software migration. In Proceedings of the 5th Annual ESPRIT Conference, pages 475–491, Brussels, Belgium, Nov. 1988. North-Holland.
[13] J. H. et al. Liquid software: A new paradigm for networked systems. Technical Report TR96-11, Dept. of Computer Science, University of Arizona, 1996.
[14] A. Ferrari. Process State Capture and Recovery in High-Performance Heterogeneous Distributed Computing Systems. PhD thesis, School of Engineering and Applied Science, University of Virginia, 1998.
[15] General Magic Inc. Introduction to the Odyssey API. 20 North Mary Avenue, CA 94086.
[16] R. Gray. Agent TCL: A flexible and secure mobile-agent system. PhD thesis, Dartmouth College, Hanover, New Hampshire, 1997.
[17] Y. Hollander and G. M. Silberman. A mechanism for the migration of tasks in heterogeneous distributed processing systems, pages 93–98. Elsevier, 1988.
[18] IBM. Aglets software development kit. Technical report, IBM Japan, 1998. http://www.trl.ibm.co.jp/aglets/.
[19] G. Q. Maguire Jr. and J. M. Smith. Process migration: Effects on scientific computation. ACM SIGPLAN Notices, 23(3):102–106, Mar. 1988.
[20] S. Lucco, O. Sharp, and R. Wahbe. Omniware: A universal substrate for web programming. In Proceedings of the 4th International World Wide Web Conference: The Web Revolution, 1995.
[21] M. V. M. Bishop and L. Wisniewski. Process migration for heterogeneous distributed systems. Technical Report PCS-TR95-264, Dept. of Computer Science, Dartmouth College, Hanover, New Hampshire, Aug. 1995.
[22] H. Peine and T. Stolpmann. The architecture of the Ara platform for mobile agents. Number 1219 in Springer LNCS, 1997.
[23] F. Knabe. Language Support for Mobile Agents. PhD thesis, School of Computer Science, Carnegie Mellon University, 1995.
[24] C. Pleier. Prozessverlagerung in heterogenen Rechnernetzen basierend auf einer speziellen Übersetzungstechnik. PhD thesis, Technische Universität München, Institut für Informatik, 1996.
[25] T. Redhead. A High-level Checkpointing and Migration Scheme for Heterogeneous Distributed Systems. PhD thesis, Dept. of Computer Science, University of Queensland, Australia, 1996.
[26] P. Roe and S.-Y. Chan. I/O in the Gardens non-dedicated cluster computing environment. To appear in: First International Workshop on Cluster Computing, Melbourne, Australia, Dec. 1999. IEEE Press.
[27] P. Roe and C. Szyperski. Mianjin is Gardens Point: A parallel language taming asynchronous communication. In Fourth Australasian Conference on Parallel and Real-Time Systems (PART'97), Newcastle, Australia, Sept. 1997. Springer.
[28] P. Roe and C. Szyperski. The Gardens approach to adaptive parallel computing. In R. Buyya, editor, Cluster Computing, volume 1, pages 740–753. Prentice Hall, 1999.
[29] P. Roe and C. Szyperski. Transplanting in Gardens: Efficient heterogeneous task migration for fully inverted software architectures. In Proceedings of the Australasian Computer Architecture Conference (ACAC'99), Auckland, New Zealand, Transactions of the CSA. Springer, 1999.
[30] J. Sang, G. W. Peters, and V. Rego. Thread migration on heterogeneous systems via compile-time transformation. In Proceedings of the International Conference on Parallel and Distributed Systems (ICPADS'94), pages 634–639, Hsinchu, Taiwan, Dec. 1994. IEEE Computer Society Press.
[31] E. Seligman and A. Beguelin. Dome: Distributed object migration environment. Technical Report CMU-CS-94-153, Carnegie Mellon University, 1994.
[32] C. Shao and B. Schnabel. A task migration system for parallel scientific computations in heterogeneous NOW environments. In Proceedings of the Ninth SIAM Conference on Parallel Processing for Scientific Computing, San Antonio, TX, 1999.
[33] C. M. Shub. Native code process-oriented migration in a heterogeneous environment. In Proceedings of the 18th Annual Computer Science Conference, pages 266–270, Washington, DC, Feb. 1990. ACM Press.
[34] P. Smith. The Possibilities and Limitations of Heterogeneous Process Migration. PhD thesis, University of British Columbia, 1997.
[35] B. Steensgaard and E. Jul. Object and native code thread mobility among heterogeneous computers. Operating Systems Review, 29(5), Dec. 1995.
[36] V. Strumpen. Compiler technology for portable checkpoints. Submitted for publication (http://theory.lcs.mit.edu/~strumpen/porch.ps.gz), 1998.
[37] Sun Microsystems. Java Platform Debugger Architecture. http://java.sun.com/products/jpda/.
[38] M. M. Theimer and B. Hayes. Heterogeneous process migration by recompilation. In Proceedings of the 11th International Conference on Distributed Computing Systems, pages 18–25. IEEE Computer Society Press, 1991.
[39] R. van Renesse and F. Schneider. An introduction to the TACOMA distributed system, version 1.0. Technical Report 95-23, University of Tromsø, Norway, June 1995.
[40] J. White. Mobile agents. White Paper, General Magic Inc., 1996. http://www.genmagic.com/agents/Whitepaper/whitepaper.html.
Biographies

Ashley Beitz is a software architect at CITR (http://www.citr.com.au) and a part-time PhD student at Queensland University of Technology (QUT). Simon Kent is a PhD student also at QUT; he was formerly a research assistant working on heterogeneous task migration. Paul Roe is a senior lecturer at QUT interested in cluster computing, programming languages and component technology. For further information concerning our research see: www.plasrc.qut.edu.au.