Interoperability using a Metagrid Architecture

G. Pierantoni, B. Coghlan, E. Kenny, O. Lyttleton, D. O'Callaghan, G. Quigley
Abstract— In this paper we describe an architecture for interoperability that exploits the power of a computational model known as Condensed Graphs to express workflows, and that uses GT4 WSRF Web Services as a bridge technology to submit jobs spanning the workflow engine (WebCom), LCG2 and GT4.

Index Terms— Grid Interoperability, Workflow, Condensed Graphs, WebCom
I. INTRODUCTION

In the last decade the Grid concept gave rise to many different implementations: data-centric, computation-centric and hybrid. As each of these solutions has particular strengths and weaknesses, it is becoming increasingly important in the Grid community to find ways of harnessing together the capabilities of each implementation. Grid interoperability is becoming a very active field of research [3], [17], [16]. An evolving approach is a new class of infrastructure termed the metagrid: this solution is to different Grids what grid middleware is to different operating systems. In other words, a metagrid can use a heterogeneous set of Grids and offer the user a friendly interface through which those heterogeneous Grids and advanced services can be used easily. To use heterogeneous Grids within a single environment we suggest that two main issues must be addressed: there must be a bridge infrastructure able to communicate with the different grid middlewares, and there must be a unified model capable of expressing the workflow of operations to be executed by those middlewares. While not the only possible solution, we contend that the unified computational model known as Condensed Graphs [8] is ideal for expressing such workflows, and its implementation (WebCom [9], [10], [11]) allows their execution with a range of evaluation policies, e.g. eagerly or lazily. At the same time, stateful Web Services, such as those provided by the Globus Toolkit 4 (GT4), offer a promising technology for a first implementation of the bridge infrastructure. In our solution the metagrid encompasses several heterogeneous Grids and a set of services, which we term metagrid services. These services act as a bridge between the user and the heterogeneous Grids.

To implement this solution a design concept of internal and border regions is used: an internal region is a set of machines that run either metagrid, WebCom, or a homogeneous set of grid services (either LCG2 or GT4 services), while a border region is a set of machines where WebCom instances or grid services can coexist and interact with metagrid services. The metagrid services provide a bridge infrastructure so that the border region extends the internal sets such that they overlap in the border region. In the metagrid region, a set of services allows communication and interoperability among the grid services hosted in the other internal regions. These metagrid services are implemented using one or more metagrid technologies. The minimal set of services necessary for interoperability includes file staging, job submission and security, while a logging service, although not strictly necessary, is also implemented. In this paper we describe an architecture that demonstrates this combination, exploiting the power of the Condensed Graphs unified computational model to express complex workflows of operations that are executed in a metagrid environment encompassing WebCom, the LHC Grid (LCG2) and GT4.
II. METAGRID JOB REPRESENTATION

To represent a metagrid job we use the mathematical model of Condensed Graphs together with the grid job description languages. The representation of a metagrid job encompasses two different layers: a topology layer that describes the dependencies among the various grid jobs (expressed in Condensed Graphs) and a grid layer that describes the grid jobs themselves in different languages such as the LCG2 JDL [14], [15] and the GT4 RSL [6]. The Condensed Graphs computational model is based on directed acyclic graphs in which every node has not only operand ports but also an operator and a destination port, and where the flow of entities on arcs is used to trigger execution. Nodes can be condensations, or abstractions, of other condensed graphs (CGs). CGs can thus be collapsed (i.e. condensed) to a single node in a graph at a higher level of abstraction. Conversely, the condensation can be undone (i.e. evaporated) to reveal the internal CG. When the proper entities (operands, operators and destination, all described by CGs) are present at all the ports of a node, the node can be fired, resulting in the execution of the instructions it represents. With condensed graphs, workflows of jobs to be executed on multiple Grids can be represented. The dependencies among the different jobs are represented by the topology of a CG such as the one in Fig. 1. This CG represents a workflow that comprises six jobs and their dependencies. The jobs are:

• Nodes E, X, W1 and W2, executed sequentially in the WebCom internal region.
  – E and X represent the beginning and end of a CG.
  – W1 and W2 are general WebCom jobs.
• Nodes L1 and L2, executed sequentially in the LCG2 internal region.
• Nodes G1 and G2, executed in parallel in the GT4 internal region.
• Nodes G3 and L3, executed sequentially in the relevant internal regions.

Notice that the flows {L1, L2}, {G1, G2} and {G3, L3} are all executed in parallel.
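The firing rule described above can be illustrated with a minimal sketch: a node fires once all of its dependencies have produced their entities. The scheduler below and the exact wiring of the Fig. 1 workflow (W1 feeding the three parallel flows, which join at W2) are illustrative assumptions, not the WebCom implementation.

```python
# Minimal sketch of condensed-graph-style firing: a node fires once all of
# its operand dependencies are satisfied. Node names follow Fig. 1; the
# wiring between W1, the parallel flows and W2 is an assumption.
def run_workflow(deps):
    """deps: node -> set of nodes it waits on. Returns a valid firing order."""
    remaining = {n: set(d) for n, d in deps.items()}
    fired = []
    while remaining:
        ready = sorted(n for n, d in remaining.items() if not d)
        if not ready:
            raise ValueError("cycle detected: condensed graphs must be acyclic")
        for n in ready:
            fired.append(n)            # node fires: all entities present
            del remaining[n]
        for d in remaining.values():
            d.difference_update(ready)
    return fired

# Assumed dependencies of the Fig. 1 workflow: the three parallel flows
# ({L1, L2}, {G1, G2}, {G3, L3}) follow W1 and join before W2.
fig1 = {
    "E": set(), "W1": {"E"},
    "L1": {"W1"}, "L2": {"L1"},
    "G1": {"W1"}, "G2": {"W1"},
    "G3": {"W1"}, "L3": {"G3"},
    "W2": {"L2", "G1", "G2", "L3"}, "X": {"W2"},
}
order = run_workflow(fig1)
```

Any firing order produced this way respects the dependencies, while leaving the engine free to evaluate independent flows eagerly or lazily.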
A. Coarse and fine grained submission topologies

The node in Fig. 2 represents an atomic job submission to the Grid, comprising all the steps necessary to perform a job on a Grid (submission, status handling and output retrieval). These atomic nodes cannot be evaporated. Other operations that implement each of the possible operations on a Grid can also be provided as atomic operations, which can be combined in different eager or lazy CGs, such as the one in Fig. 3, to describe particular needs such as job submission with cancellation after a timeout.
Fig. 1. A condensed graph representing a multi-Grid job submission.
In the topology representing the workflow, the various jobs are represented by CG nodes as shown in Fig. 1. Each of these nodes is, in turn, represented by a node such as the one shown in Fig. 2, with the following meanings:

• Operands: The operands represent the data needed to perform the job submission:
  – Job Description: The name of the file with the job description, either a JDL file for LCG2 submissions or an RSL file for GT4 submissions.
  – Metagrid Submission Service address: The address of the main metagrid service where the entire job was originally submitted. It is used to invoke the metagrid services for file staging, security and logging.
  – User: The user identity on behalf of whom the job will be submitted.
  – Dependencies: Dependencies on other jobs.
  – Others: Various other parameters specific to the job.
• Operator: The job submission operator. This represents a Java class which will be executed when the CG is processed by the WebCom engine.
• Output: The jobs that follow in the workflow (i.e. output dependencies).
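The data carried by such a job-submission node can be sketched as a simple record. The field and class names below are illustrative only; the actual WebCom operator interface is a Java class and is not shown in this paper.

```python
# Sketch of the payload of a job-submission node (Fig. 2). All names here
# are assumptions for illustration, not the real WebCom operator interface.
from dataclasses import dataclass, field

@dataclass
class JobSubmissionNode:
    job_description: str           # JDL file (LCG2) or RSL file (GT4)
    submission_service: str        # address of the main metagrid service
    user: str                      # identity on whose behalf the job runs
    dependencies: list = field(default_factory=list)  # upstream jobs
    outputs: list = field(default_factory=list)       # downstream jobs
    operator: str = "JobSubmit"    # stands in for the Java operator class

    def ready(self, completed):
        """The node can fire once every dependency has completed."""
        return all(d in completed for d in self.dependencies)

# Hypothetical L1 node from Fig. 1, waiting on W1.
l1 = JobSubmissionNode("jobL1.jdl", "https://metagrid.example/submit",
                       "alice", dependencies=["W1"])
```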
Fig. 2. A CG node representing a job submission.

Using the node in Fig. 2 and graph topologies such as the one in Fig. 1, we can describe complex jobs consisting of multiple submissions to different Grids (LCG2, WebCom and GT4) with arbitrarily complex dependency patterns, provided they can be represented as directed acyclic graphs.

Fig. 3. A CG representing a job execution with a timeout.

As described above, a job representation in the metagrid space is composed of two different syntax layers: an upper layer that describes the dependencies among the different jobs, and a lower layer that deals with the description of the single jobs to be submitted to a Grid. To obtain the maximum possible flexibility, a metagrid service translates between compatible job description formats. This service adds a degree of freedom to the metagrid, as jobs can be moved from one Grid to another if needed. While the topology of Fig. 3 represents a job execution with a given timeout, the CG in Fig. 4 represents the same job execution as a condensed node in a larger graph that is executed only when the node executing the ListMatch command returns a non-empty list. This example applies only to job execution on LCG2, as GT4 lacks the ListMatch command.
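The "cancellation after a timeout" pattern of Fig. 3 can be sketched with an ordinary thread pool standing in for the submission node; the `submit_job` stub and the timeout values are invented for illustration.

```python
# Sketch of the Fig. 3 pattern: submit a job, wait up to a timeout, and
# take the cancellation branch if the result does not arrive in time.
# submit_job is a stub standing in for submit + poll + output retrieval.
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def submit_job(duration):
    time.sleep(duration)           # stands in for the real grid round trip
    return "output.tar.gz"

def submit_with_timeout(duration, timeout):
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(submit_job, duration)
        try:
            return future.result(timeout=timeout)
        except TimeoutError:
            future.cancel()        # analogous to the CG cancellation branch
            return None

fast = submit_with_timeout(0.01, timeout=1.0)   # completes in time
slow = submit_with_timeout(1.0, timeout=0.05)   # hits the timeout branch
```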
Fig. 4. A node representing a conditional job execution.
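The conditional pattern of Fig. 4 amounts to gating evaporation of the condensed job node on a ListMatch-style resource query. The sketch below is illustrative: `list_match` is a stub for the query an LCG2 resource broker would answer, and all names and data are invented.

```python
# Sketch of the Fig. 4 conditional: the condensed job node is evaporated and
# executed only if a ListMatch-style query returns a non-empty list. On LCG2
# the query would go to the resource broker; GT4 has no equivalent command.
def list_match(job_description, resources):
    """Return the resources whose tags satisfy the job requirements."""
    required = job_description["requirements"]
    return [r for r in resources if required.issubset(r["tags"])]

def conditional_submit(job_description, resources, submit):
    matches = list_match(job_description, resources)
    if not matches:
        return None                # condensed node is never evaporated
    return submit(job_description, matches[0])

# Invented resource descriptions for illustration.
resources = [{"name": "ce01", "tags": {"lcg2", "sl3"}},
             {"name": "ce02", "tags": {"lcg2"}}]
job = {"requirements": {"lcg2", "sl3"}}
result = conditional_submit(job, resources, lambda j, r: r["name"])
```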
Fig. 5. A complex job submission: 5 jobs in 2 Grids with multiple VOs.

B. Complex job topologies

In Fig. 5 a complex topology of jobs is submitted to both a GT4 and an LCG2 Grid for a variety of virtual organisations. G1 and G2 are GT4 job submission graph nodes submitted in series. L3, L4 and L5 are LCG2 job submission graph nodes submitted in series. Collectively, the GT4 and LCG2 jobs are submitted in parallel. The entry node E takes an argument in the form of a string specifying the location of the output results directory. An arc connects the entry node E to every other node, since this information must be passed to every job submission node. The recapitulation node R sums up the number of job successes and logs the result in the results directory. In this configuration, the job submission nodes G1, G2, L3, L4, L5 take as input three strings: the first is the output directory, the second specifies the location of the RSL/JDL file, and the third specifies the location of the user proxy credential file obtained from a voms-proxy-init. The resultant graph therefore supports multi-grid and multi-VO job submission. It should be noted that GT4 normally requires a grid-proxy-init, not a voms-proxy-init, but in this scenario Condor pools were used to analyse the VO name, which in turn was used to decide which GRAM manager the GT4 jobs would be submitted to; in effect, low-level resource brokerage was introduced into the GT4 submission system. The graph nodes for job submission produce a large amount of logging, obtained from the resource broker in the case of LCG2 nodes and the GRAM manager in the case of GT4 nodes.

III. THE METAGRID ARCHITECTURE

A. Introduction

In order to effectively run a composition of grid jobs represented in CGs, a metagrid architecture must be provided that supports the execution of jobs on heterogeneous Grids in a resilient and scalable way, on behalf of users belonging to different Virtual Organizations (VOs).

In this particular case, the architecture has to cope with a fundamental difference between WebCom and the conventional Grid solutions. The current release of WebCom is based on a highly dynamic topology in which different instances of WebCom are instantiated on different machines and connect to each other in a peer-to-peer structure to execute hierarchical graphs, while conventional Grids are based on rarely changing topologies where services run for most of the time at a fixed location. Secondly, an additional file staging mechanism may be necessary. Finally, security interoperability issues [12] have to be addressed for each of the actors involved (WebCom, LCG2 and GT4).
Fig. 6. The metagrid architecture.
In consequence we have conceived an architecture based on the concept of internal and border regions. Fig. 6 illustrates the architecture of the metagrid: a job is first submitted to a metagrid user interface that forwards the description of the job to the workflow engine region (in this case represented by WebCom technology). Where different regions overlap we have border regions. In these regions, services of different technologies communicate with each other to allow interoperability. In order to have a consistent representation of these regions and their properties, we use set theory to describe them.

B. A Description based on Set Theory

First we define a Region as a set of resources available to the user. Let S = ∪_{k=1}^{K} R_k be the set of all the possible resources available to the user, where R_m ⊆ S is the metagrid region; R_w ⊆ S is the family of workflow regions, with R_wi ⊆ S (1 ≤ i ≤ I) the i-th workflow region (for example WebCom); and R_g ⊆ S is the family of Grid regions, with R_gj ⊆ S (1 ≤ j ≤ J) the j-th grid region (for example LCG2). Using this formalism we describe internal and border regions as outlined below.

An internal region is a set of machines that run either metagrid, WebCom, or a homogeneous set of grid services (LCG2 or GT4):
• R_m \ R_k |_{k ≠ m} = R_m \ (R_w ∪ R_g) is a pure metagrid region where only metagrid technology exists.
• R_wi \ R_k |_{R_k ≠ R_wi} is a pure i-th workflow region (WebCom, for example).
• R_gj \ R_k |_{R_k ≠ R_gj} is a pure j-th grid region (LCG2, for example).

A border region is a set of machines where at least metagrid technologies exist, and where they can coexist with other technologies. Various types of border can be supported within the metagrid region. We describe these metagrid subsets as follows: {R_mW} = R_m ∩ R_W, where metagrid service(s) and WebCom technologies coexist; {R_mg} = R_m ∩ R_g, where metagrid service(s) and grid service technologies coexist; and {R_mWg} = R_m ∩ R_W ∩ R_g, where metagrid service(s), WebCom and grid service technologies all coexist.

C. Extended and Collapsed Borders

Borders can be of two main types: extended and collapsed.

Fig. 7. An Extended Border Region.

Extended borders, such as those shown in Fig. 7, are subsets of metagrid machines, each of which hosts either WebCom and metagrid technology, or grid and metagrid technology. This simple metagrid technology has two aims: it is the base upon which metagrid services such as file staging and security are built, and it acts as a technology decoupler between WebCom and the Grid technology.

Fig. 8. A Collapsed Border Region.

Collapsed borders, such as those shown in Fig. 8, are subsets of metagrid machines where more than two technologies coexist. WebCom interacts directly with the metagrid services and the Grid technology. The metagrid technology primarily serves the metagrid services such as file staging and security, while job submission is performed directly from WebCom to the grid technology.

IV. THE METAGRID SERVICES

The WebCom job of Fig. 1 can be submitted to the metagrid machine hosting the Metagrid Submission Service, as in Fig. 7 and Fig. 8. A WebCom job is propagated to the WebCom Submission Service, which runs on a machine that is part of an extended border hosting a WebCom instance and the service. The CG is executed by WebCom in its internal region, but when a node representing a grid job submission is scheduled for execution it is targeted to a machine in the client-side border {R_mW} and thence to the server-side border {R_mg} (either LCG2 or GT4) and, from there, is submitted to the internal region of the target Grid. Before the job is submitted to the Grid's internal region, metagrid services such as file staging and security ensure that the data needed for the job submission (files, job description and certificates) is present at the border. The entities are in client-server pairs as shown in Fig. 9:

• Server-Side Metagrid Submission Service: This service represents the gateway to the interoperability world. CGs representing jobs are submitted here, and all necessary files and user credentials are present. This service comprises the following metagrid server-side services:
  – File Closet Service Server
  – Logging Service Server
  – Security Service Server
• Collapsed Border: This hosts a WebCom instance, the Grid technology and the clients of the metagrid services.
• Extended Border: These come in pairs. One hosts a WebCom instance and the clients of both the metagrid services and the grid submission service, while the other hosts the client of the metagrid services and the server of the grid submission service, plus the Grid technology.

Fig. 9. Architecture of Border Services.
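The region algebra of Sect. III-B maps directly onto ordinary set operations, as the short sketch below shows. The machine names are invented; each set lists the machines hosting that technology.

```python
# The region algebra of Sect. III-B modelled with Python sets.
# Machine names (m1, b1, c1, ...) are invented for illustration.
Rm = {"m1", "m2", "b1", "b2", "c1"}    # metagrid region
Rw = {"w1", "w2", "b1", "c1"}          # workflow (WebCom) region
Rg = {"g1", "g2", "b2", "c1"}          # grid (e.g. LCG2) region

pure_metagrid = Rm - (Rw | Rg)         # R_m \ (R_w ∪ R_g)
RmW = Rm & Rw                          # extended border: metagrid + WebCom
Rmg = Rm & Rg                          # extended border: metagrid + grid
RmWg = Rm & Rw & Rg                    # collapsed border: all three coexist
```

Here `b1` and `b2` fall in the two extended borders, while `c1` sits in the collapsed border where all three technologies coexist.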
Metagrid services are of two main kinds: one deals with job submission and enables WebCom to submit jobs to the Grid regions, while the other, which we term the Ancillary Metagrid Service, ensures that, before the job is submitted to the internal Grid region, all the data and files needed are present where needed. Metagrid services thus interact in a choreography of submission and ancillary services during the execution of the entire job. The choreography is as follows:

• CG Submission: The CG representing the job is submitted to WebCom through the metagrid submission service. All necessary files (including JDLs and/or RSLs) and certificates are present.
• Execution of WebCom nodes: WebCom nodes are executed directly in the WebCom internal region.
• Execution of a grid job submission: When a node representing a grid submission is to be executed, it is sent to one of a specific set of metagrid machines, where the following events take place:
  – Job Description Retrieval: The file with the description of the job is retrieved from the File Closet Service.
  – Job Description Analysis: The job description file is parsed and information regarding any other necessary files is extracted.
  – Job Pre-processing: The necessary files are obtained from the File Closet Service. The user's credential is obtained from the Security Service.
  – Job Submission: The job is submitted to the internal grid region.
  – Waiting and Output: The status of the job is continuously polled and, when the job is completed, the output of the job is retrieved.
  – Job Post-processing: The output files of the job are staged back to the File Closet Service, and a log file that tags all the metagrid operations with a timestamp is sent back to the Logging Service.

The main metagrid services are:

• Metagrid Submission: This service is the user interface to submit, manage and retrieve the results of metagrid jobs. Current plans are that a GUI based on the Crossgrid Migrating Desktop (MD) [4] will take over this role.
• Grid Submission: This service submits, manages and retrieves the results of single atomic jobs in the various Grid regions encompassed by the metagrid.
• File Closet: This service ensures that the files needed by the jobs executed in the Grid regions are moved into place when necessary. Currently this uses GridFTP.
• Logging: This service gathers the various log files produced by every job execution in the Grid regions and makes them available to the user.
• Security: The Metagrid Security Service is a central service to exchange security credentials for one middleware with those for another. It includes a KeyNote Credential Authority, a Delegation Service and a Trust Verification Service. At present the MSX delegates VO authorisation decisions to the EGEE VOMS service.

A. The File Closet Service

The metagrid architecture currently offers two systems for file staging. One relies on a GSI-FTP server running on the Metagrid Submission Service machine, while the other relies upon a service specifically designed to be hosted in a GT4 container on the Metagrid Submission Service machine. These two systems can be used depending on the configuration present on the Metagrid Submission Service machine. The behaviour of these services is expressed in the use case of Fig. 10, which describes a complex job submitted to three different Grid middlewares (WebCom, LCG2 and GT4). This job is described by the CG = {E, W1, L, G, W2, X}.
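The staging bookkeeping this use case requires can be sketched as follows: given which files each node reads and writes, derive which files must be moved to each middleware before its job can run. The node and file names come from the use case; the `staging_plan` function itself is an illustrative assumption, not the File Closet implementation.

```python
# Sketch of staging bookkeeping for the use case of Fig. 10: a file must be
# staged to a middleware if a node there reads it but it is produced
# elsewhere (or pre-exists outside that middleware). Entry/exit nodes E and
# X move no files and are omitted.
def staging_plan(nodes):
    """nodes: name -> (middleware, reads, writes).
    Returns middleware -> set of files that must be staged in."""
    produced_at = {}
    for name, (mw, reads, writes) in nodes.items():
        for f in writes:
            produced_at[f] = mw
    plan = {}
    for name, (mw, reads, writes) in nodes.items():
        needed = {f for f in reads if produced_at.get(f) != mw}
        plan.setdefault(mw, set()).update(needed)
    return plan

cg = {
    "W1": ("webcom", {"A"}, {"B"}),
    "L":  ("lcg2",   {"J", "P", "A", "B"}, {"C"}),
    "G":  ("gt4",    {"R", "Q", "A", "B"}, {"D"}),
    "W2": ("webcom", {"C", "D"}, {"E"}),
}
plan = staging_plan(cg)
```

Running this reproduces the movements enumerated in the use case: J, P, A and B to LCG2 for job L; R, Q, A and B to GT4 for job G; and C and D back to WebCom for job W2.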
Fig. 10. Data Movement with File Services.
• E represents the entry node of the CG.
• W1 represents an atomic operation to be executed on WebCom. It reads file A and produces file B.
• L represents an atomic operation to be executed on the LCG2 middleware; this operation is described (in the JDL language) in file J. It reads files J, P, A and B and produces file C.
• G represents an atomic operation to be executed on the GT4 middleware; this operation is described (in the RSL language) in file R. It reads files R, Q, A and B and produces file D.
• W2 represents an atomic operation to be executed on WebCom. It reads files C and D, and produces result file E.
• X represents the exit node of the CG.

1) Data movements with the current file transfer service: The following operations must happen:

• Files J, P, A and B have to be moved to the LCG2 middleware for the execution of job L, either by pre-staging, use of sandboxes or file catalogs, or by explicit file transfers.
• File C has to be moved to the WebCom middleware for the execution of job W2.
• Files R, Q, A and B have to be moved to the GT4 middleware for the execution of job G, either by pre-staging, use of file catalogs, or by explicit file transfers.
• File D has to be moved to the WebCom middleware for the execution of job W2.
• Result file E must be moved to where it can be accessed by the user.

In addition, unless WebCom executes all the graph nodes on the same physical machine (a very much unintended restriction), many of the files A, B, J, P, R, Q, C, D and E will be expensively copied by value between the nodes.

2) Data movements without staging: In order to overcome these far from optimal staging operations we are developing
a grid file system, gridFS [7], that allows access to files in a transparent way. The presence of the grid file system will allow the following flow of operations:

• Files J, P, A and B can be read directly during the submission and execution of job L.
• File C can be written directly to gridFS by job L and read directly by job W2.
• Files R, Q, A and B can be read directly during the submission and execution of job G.
• File D can be written directly to gridFS by job G and read directly by job W2.
• Result file E can be written directly to gridFS by job W2 and accessed directly by the user.

B. The Security Services

In addition to the security features of the other metagrid services, there are a number of dedicated security services. The Metagrid Security Exchange (MSX) [13] provides a central service to exchange security credentials for one middleware with those suitable for another. The KeyNote Credential Authority and the Delegation Service described below are components of the MSX which are particularly important for interoperability between WebCom, which uses KeyNote security, and grid middleware that uses GSI-based security.

KeyNote Credential Authority: WebCom instances must have appropriate KeyNote authorization credentials [2]. Each KeyNote credential must be signed by an authority that is trusted by the party making an authorization decision. In the metagrid we have an online KeyNote Credential Authority that can be trusted by all participating WebCom instances. KeyNote credentials for WebCom instances can be fetched by authenticating with the appropriate host or service certificate.

Delegation Service: Delegation of authentication credentials allows a service to operate on behalf of a user. WebCom does not support credential delegation, so an additional service must provide this ability. The delegation service operates as a form of credential repository.
A user's client program delegates a credential to the service, which returns a signed token, containing a unique random value, that can later be exchanged for the credential by certain authorized hosts.

Trust Verification Service: The trust verification service uses an evaluation of the policies and practices of grid certification authorities to determine whether they are trustworthy according to a set of rules.

VOMS Authorization Credential Acquisition: This allows a job submission to acquire an authorization credential from the Virtual Organisation Membership Service (VOMS) for the appropriate VO. This component contacts VOMS using the delegated credential on behalf of the user.

C. The Logging Service

In a metagrid environment, logging information spans different levels of the system. The metagrid logging service deals with these different layers in the following way: each job submission log is saved with the logging information of the border events that encompass that grid event; finally, all the logging information of the various border events and the related grid events is passed back to the logging service on the main border machine. This aggregate log represents the entire set of events of the metajob execution.

V. IMPLEMENTATION

The current implementation is based on the architecture of Fig. 11. The WebCom package comprises the workflow engine that invokes the operators encompassing the various invocations of the border services. The operators invoked by WebCom are of four types:

• Extended Border Operators, as shown in Fig. 7.
• Collapsed Border Operators, as shown in Fig. 8.
• Constrained Nodes Operators, used for testing. These allow job submission from a WebCom region to a grid region without the involvement of the ancillary border services.
• Stress Test Nodes Operators, used for stress testing. They allow job submission from a WebCom region to a grid region without the involvement of the ancillary border services, except for a particular flavour of the logging service that is used to analyze the architecture under stress.

These operators, in turn, use an abstraction layer termed handlers that decouples the packages representing the node operators from those representing the implementations of the various border services. Finally, the handlers package invokes the clients of the various ancillary border services.
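The operator/handler layering can be sketched as follows: node operators call an abstract handler, which hides the concrete border-service clients, and the ancillary services can be omitted entirely (as in the constrained and stress-test operators). All class, method and job names here are illustrative assumptions; the real implementation is in Java.

```python
# Sketch of the handler abstraction of Sect. V: the handler decouples node
# operators from the concrete border-service clients. Names are invented.
class GridHandler:
    """Abstraction layer between node operators and service clients."""
    def __init__(self, submit_client, staging_client=None, logging_client=None):
        self.submit = submit_client
        self.staging = staging_client     # ancillary services are optional:
        self.logging = logging_client     # constrained nodes omit them

    def run(self, job):
        if self.staging:
            self.staging(job)             # file staging via border service
        result = self.submit(job)         # actual grid submission
        if self.logging:
            self.logging(f"submitted {job}")
        return result

log = []
full = GridHandler(lambda j: f"{j}:done",
                   staging_client=lambda j: None,
                   logging_client=log.append)
constrained = GridHandler(lambda j: f"{j}:done")   # no ancillary services
```

The same operator code can thus drive an extended-border, collapsed-border or constrained configuration purely by swapping the clients passed to the handler.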
Fig. 11. Current Implementation.
VI. RESULTS

A set of stability tests was performed on the LCG2 Stress Test Node operators using the WebCom graph engine. Each graph was composed of 30 benchmarks: 11 GridBench benchmarks [18], 10 NAS parallel benchmarks [5] and 9 Splash-2 benchmarks [1]. Each benchmark was submitted along with a shell script that calculated the computational time of the benchmark on the worker node. This information was returned via the output sandbox to an output directory using the method described in Sect. II-B and illustrated in Fig. 5. Each LCG2 job submission returns, using the logging service, the computational time to the output directory. In turn, each graph is timed, giving the time for the WebCom submission. This results in timings for the graph submission engine, the LCG2 submission and the individual benchmark submissions.
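Given those three layers of timing, the overhead of each layer can be separated by simple subtraction, as the sketch below illustrates. The numbers and the assumption of sequentially timed submissions are invented for illustration; they are not measurements from the paper.

```python
# Sketch of the timing decomposition of Sect. VI: benchmark compute time,
# LCG2 submission overhead and engine overhead, assuming the submissions
# in a graph are timed sequentially. All numbers are invented.
def layer_overheads(graph_time, submission_times, benchmark_times):
    """Split a graph's wall time into benchmark compute time, LCG2
    submission overhead and WebCom engine overhead."""
    compute = sum(benchmark_times)
    submission_overhead = sum(submission_times) - compute
    engine_overhead = graph_time - sum(submission_times)
    return compute, submission_overhead, engine_overhead

compute, sub_oh, eng_oh = layer_overheads(
    graph_time=1000.0,                  # timed WebCom graph submission
    submission_times=[300.0, 450.0],    # wall time of each LCG2 submission
    benchmark_times=[250.0, 400.0],     # compute time on the worker nodes
)
```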
Fig. 12. 50 condensed graphs each containing 30 serial benchmarks.

Fig. 13. WebCom times for 50 graphs run sequentially.

The results in Fig. 12 were obtained for 50 consecutive runs of each graph, resulting in 1500 job submissions. Only 31 of these were unsuccessful, most likely due to unavailable resources at the time. At no time did the WebCom engine's submission system fail. As can be seen from Fig. 12 and Fig. 13, there are 5 major outliers (spikes) relating to jobs that were scheduled for a very long time due to unavailable resources. The results were obtained over a period of 7 days, using myproxy delegation to retrieve the user's proxy credential after each completed graph execution. Since the maximum WebCom time (14189 seconds) for a single graph was less than the expiry time of a user proxy credential, the 50 graphs could be submitted in a fully automated fashion once the initial myproxy service was started. Fig. 13 contains the 5 outliers that exist in Fig. 12. This suggests that WebCom graph submissions have consistent runtimes when resources are available, but deviate when resources are unavailable. Therefore, WebCom graph submissions do not introduce fluctuating overheads into the computational time of the benchmark submissions.

Over the past year, two initial implementations of this interoperability architecture have been completed: first a constrained form, where all services had to coexist in the user's environment on a single machine, then a second unconstrained form, although in both cases the Metagrid Submission Service and WebCom's internal submission service were merged for rapid prototyping. These implementations have been used both as a testbed for interoperability within the WebCom-G project and to conduct performance testing with particular topologies of jobs to assess the overhead of the interoperability infrastructure.

VII. FUTURE WORK

Interoperability is a very active field of research, and the results of this first implementation are promising. We are pursuing interoperability further in two main directions: firstly, the implementation of an enhanced version of the current architecture; secondly, the extension of the architecture from the simplex submission of jobs from WebCom to two or more Grids towards a more versatile multiplex interoperability that allows any internal region to interact with any other internal region.
VIII. CONCLUSION

This research demonstrated that it is possible to achieve interoperability among different Grids by using a metagrid middleware to access the different Grids and by expressing the submission topologies with the unified computational model. The process of design, implementation and testing of this first prototype of the metagrid middleware raised interesting considerations and further issues for future investigation. Analysis and design of the architecture led to a flexible architecture that will constitute the basis for future prototypes. A first prototype that is capable of arbitrarily complex submission topologies has been developed and tested. From a theoretical point of view, set theory was used to begin a formal description of metagrid services and their interactions. Finally, benchmarking showed how the different layers of the metagrid middleware (execution engine, border services and grid services) contribute to the overhead of job submission.

April 3, 2006

REFERENCES

[1] J. M. Arnold and M. A. McGarry. Splash-2 programmer's manual. Technical Report SRC-TR-93-107, Center for Computing Sciences, Bowie, MD, 1994.
[2] M. Blaze, J. Feigenbaum, J. Ioannidis, and A. Keromytis. The KeyNote Trust-Management System Version 2, September 1999. RFC 2704.
[3] Clovis Chapman, Paul Wilson, Todd Tannenbaum, Matthew Farrellee, Miron Livny, John Brodholt, and Wolfgang Emmerich. Condor services for the global grid: interoperability between Condor and OGSA. In Proceedings of the 2004 UK e-Science All Hands Meeting, pages 870–877, Nottingham, UK, August 2004.
[4] Crossgrid. Crossgrid project web page. http://www.crossgrid.org/main.html.
[5] Rob F. Van der Wijngaart. NAS parallel benchmarks version 2.4. NAS Technical Report NAS-02-007, NAS Division, NASA Ames Research Center, Moffett Field, CA 94035-1000, 2002.
[6] Globus. GRAM RSL schema documentation. http://www.globus.org/toolkit/docs/3.0/gram/rsl-schema.html.
[7] Soha Maad, Brian Coghlan, Geoff Quigley, John Ryan, Eamonn Kenny, and David O'Callaghan. Towards a complete grid filesystem functionality. Future Generation Computer Systems, submitted to the special issue on Data Analysis, Access and Management on Grids, 2006.
[8] John P. Morrison. Condensed Graphs: Availability-Driven, Coercion-Driven and Control-Driven Computing. PhD thesis, Technische Universiteit Eindhoven, 1996.
[9] John P. Morrison, James J. Kennedy, and David A. Power. WebCom: A web based volunteer computer. The Journal of Supercomputing, 18:47–61, 2001.
[10] John P. Morrison, David A. Power, and James J. Kennedy. An evolution of the WebCom metacomputer. J. Math. Model. Algorithms, 2:263–276, 2003.
[11] J. P. Morrison, B. Coghlan, A. Shearer, S. Foley, D. Power, and R. Perrot. WebCom-G: A candidate middleware for Grid-Ireland. High Performance Computing Applications and Parallel Processing, 2005.
[12] David O'Callaghan and Brian Coghlan. Bridging secure WebCom and European DataGrid security for multiple VOs over multiple grids. In Proc. ISPDC 2004, pages 225–231, Cork, Ireland, July 2004. IEEE Computer Society.
[13] David O'Callaghan, Brian Coghlan, Gabriele Pierantoni, and Geoff Quigley. Security for grid middleware interoperability. Submitted to FGCS, February 2006.
[14] Fabrizio Pacini. Job description language how-to, 2001.
[15] Fabrizio Pacini. JDL attributes, 2003.
[16] G. Pierantoni, O. Lyttleton, D. O'Callaghan, G. Quigley, E. Kenny, and B. Coghlan. Multi-grid and multi-VO job submission based on a unified computational model. In Cracow Grid Workshop 2005, November 2005.
[17] M. Rambadt and Ph. Wieder. UNICORE–Globus interoperability: Getting the best of both worlds. In Proc. of the 11th International Symposium on High Performance Distributed Computing (HPDC), page 422, 2002.
[18] George Tsouloupas and Marios D. Dikaiakos. GridBench: A tool for benchmarking grids. In Proceedings of the 4th International Workshop on Grid Computing (GRID2003), pages 60–67, Phoenix, AZ, November 2003. IEEE.