J Grid Computing (2009) 7:287–296 DOI 10.1007/s10723-009-9128-1

Grid Deployment Experiences: Grid Interoperation

Laurence Field · Erwin Laure · Markus W. Schulz

Received: 10 July 2009 / Accepted: 12 August 2009 / Published online: 27 August 2009 © Springer Science + Business Media B.V. 2009

Abstract Over recent years a number of Grid projects have emerged which have built Grid infrastructures that are now the computing backbones for various user communities. A significant number of these communities are limited to one Grid infrastructure due to the different middleware and operational procedures used. Grid Interoperation tries to bridge these differences and to enable Virtual Organizations to access resources independently of their Grid project affiliation. Building upon the experience the authors have gained while working on interoperation between EGEE and various other Grid infrastructures, as well as through co-chairing the Grid Interoperation Now (GIN) effort of the Open Grid Forum (OGF), this paper gives an overview of Grid Interoperation and describes various methods that can be used to connect Grid infrastructures. The case is made for standardization in key areas, and for why the Grid community should move more aggressively towards standards.

L. Field (B) · M. W. Schulz, CERN, Geneva, Switzerland

E. Laure, KTH, Stockholm, Sweden

Keywords Grid · Interoperation · Interoperability · Standardization · Grid infrastructures

1 Introduction

In the paper The Anatomy of the Grid: Enabling Scalable Virtual Organizations [9], the Grid concept is motivated by the real and specific problem which it addresses: coordinated resource-sharing and problem-solving in dynamic, multi-institutional virtual organizations. The key concept introduced was the Virtual Organization (VO), a group of users who collaborate to solve a problem. Traditionally, institutions which had resources would only allow members of that institution to use them; however, problem-solving activities are often cross-institutional and involve members from multiple institutes. To overcome this barrier, the concept of the VO can be used along with Grid middleware to enable members of numerous VOs to access resources via common interfaces. This approach follows the hourglass model [12], where a small set of well-defined interfaces at the boundary of the institute enables a diverse set of VOs to utilize resources which may be provided via different local systems.


These generic interfaces abstract out the resource and, from the users' perspective, have become known as Grid Services [10]. The relationship between institutions and VOs is that of resource provider and resource consumer respectively. It was envisaged that Grid Services would enable ubiquitous or utility computing, whereby the resource consumer cares only about the utility of the resource and has the freedom to choose any resource provider based on their own requirements. As the technology to enable this becomes pervasive and transparent, the user becomes decoupled from the provider of the resource; the required utility is obtained through what is known as a Grid Infrastructure. This concept is analogous to that of a power grid, where consumers obtain electricity via an infrastructure which transparently links multiple resource providers (power stations).

Over recent years a number of Grid projects, many with a strong regional presence, have emerged to coordinate institutions in building multi-institutional infrastructures for e-Science. Today we face a situation where a number of Grid Infrastructures exist, each of which uses different, often non-interoperable, middleware and procedures. A number of efforts have been made to enable the usage of the various infrastructures in a seamless way, typically referred to as Grid Interoperation. The two main obstacles to overcome in this context are:

1. differences in the interfaces and semantics of the Grid services offered, and
2. differences in the operational procedures used at the various infrastructures.

This paper builds upon real experience gained from leading interoperation activities between EGEE [15] and various other Grid infrastructures, as well as through co-chairing the Grid Interoperation Now (GIN) [17] efforts of OGF. It provides a critical review of the different techniques used to tackle these problems and reports on the experience gained by working with the various Grid infrastructures. The use of these techniques in production environments has enabled some of today's leading Grid infrastructures to seamlessly interoperate, and hence enabled users to gain transparent access to resources regardless of the primary Grid infrastructure to which they are affiliated.


As such, the analysis provided is from both a user and a resource provider perspective.

The remainder of this paper is structured as follows. Techniques to overcome differences in Grid middleware services are discussed in Section 2, followed by a discussion of issues in operational procedures in Section 3. Section 4 describes various interoperation activities where these techniques have been employed between production infrastructures. An analysis of these efforts and a summary of lessons learned is provided in Section 5. We close the paper with summarizing remarks in Section 6.

2 Grid Middleware Interoperability

A necessary prerequisite for the seamless exploitation of different Grid infrastructures is to overcome differences in the interfaces and semantics of the Grid services offered by the infrastructures. The basic goal is to ensure that information from one infrastructure can be exchanged with and understood by the other infrastructure, i.e. to make the services offered interoperable. In order to achieve Grid Interoperability, it is first necessary to understand the composition of, and differences between, each infrastructure. An Interoperability Matrix can be used to compare the critical interfaces and will highlight the differences between the infrastructures. The most important interfaces are those found at the institute boundary, as higher-level services are usually specific to a VO and hence are not location-dependent. An Interoperability Matrix therefore typically focuses on security, information services, job management and data management, which are the four core areas of any Grid Infrastructure. Once the differences in these services have been identified, various methods can be used to overcome them and achieve interoperability. As an example, the Interoperability Matrix shown in Table 1 compares the main interfaces of NDGF [5], EGEE [15], OSG [7] and NAREGI [16].

Table 1 An interoperability matrix showing the different technologies deployed on NDGF, EGEE, OSG and NAREGI

Interface                 | NDGF    | EGEE      | OSG       | NAREGI
--------------------------|---------|-----------|-----------|------------
Job submission interface  | GridFTP | GRAM v2   | GRAM v2   | GRAM v4
Data model                | LDAP    | LDAP      | LDAP      | CIM
Information model         | ARC     | GLUE v1.1 | GLUE v1.2 | CIM + GLUE
File transfer protocol    | GridFTP | GridFTP   | GridFTP   | GridFTP
Storage interface         | SRM     | SRM       | SRM       | GFS
Security infrastructure   | GSI     | GSI       | GSI       | GSI
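To make the comparison concrete, the matrix can be represented as a simple data structure from which the differing interfaces are extracted programmatically. The following is a minimal illustrative sketch using the values from Table 1; the data structure and function are hypothetical and not part of any Grid middleware.

```python
# Illustrative sketch: represent Table 1 as a dictionary and list, per
# interface, the implementations on which the infrastructures disagree.
MATRIX = {
    "Job submission interface": {"NDGF": "GridFTP", "EGEE": "GRAM v2",
                                 "OSG": "GRAM v2", "NAREGI": "GRAM v4"},
    "Data model":               {"NDGF": "LDAP", "EGEE": "LDAP",
                                 "OSG": "LDAP", "NAREGI": "CIM"},
    "Information model":        {"NDGF": "ARC", "EGEE": "GLUE v1.1",
                                 "OSG": "GLUE v1.2", "NAREGI": "CIM + GLUE"},
    "File transfer protocol":   {"NDGF": "GridFTP", "EGEE": "GridFTP",
                                 "OSG": "GridFTP", "NAREGI": "GridFTP"},
    "Storage interface":        {"NDGF": "SRM", "EGEE": "SRM",
                                 "OSG": "SRM", "NAREGI": "GFS"},
    "Security infrastructure":  {"NDGF": "GSI", "EGEE": "GSI",
                                 "OSG": "GSI", "NAREGI": "GSI"},
}

def differing_interfaces(matrix):
    """Return the interfaces for which the infrastructures do not agree."""
    return {iface: impls for iface, impls in matrix.items()
            if len(set(impls.values())) > 1}

if __name__ == "__main__":
    for iface, impls in differing_interfaces(MATRIX).items():
        print(f"{iface}: {impls}")
```

Running such a check immediately singles out the job submission interface, the data model, the information model and the storage interface as the areas where interoperation work is needed.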

It is worth pointing out that even though many of the services used by these infrastructures are the same, interoperability is hampered by the different versions deployed as well as by configuration differences. Starting from such an interoperability matrix, a number of different approaches can be taken to enable interoperability. We discuss these in the following subsections and provide an analysis from a user and resource provider point of view.

2.1 Virtual Organization Driven

VOs can themselves strive to achieve interoperability, as shown in Fig. 1. They can access multiple Grid infrastructures and either split the workload between the infrastructures or build into their frameworks the ability to work with each infrastructure. While this allows resource providers to offer a single set of services, it places a significant development burden on the VO. In addition, as each VO solves the problem independently, the result is a significant duplication of effort and a loss of overall productivity. The effort required also increases with the number of Grid infrastructures which the VO would like to use. Moreover, this often results in a keyhole approach where only the minimum common subset of functionality is used and handling failures can be problematic.


Furthermore, frameworks might need to change whenever an infrastructure upgrades its services. This approach was used by the ATLAS [4] community to overcome the problem of interoperation between OSG, EGEE and NDGF. Taking this to the extreme, the ALICE [18] community has developed a complete middleware solution and has essentially deployed a parallel infrastructure; however, this in turn impacts the resource providers, who have to provide community-specific services and/or access to servers.

Fig. 1 An example of how VOs themselves can achieve interoperation by using multiple interfaces

2.2 Parallel Deployment

Sites can achieve interoperability by deploying multiple interfaces, as shown in Fig. 2. A resource can be made available to multiple infrastructures by deploying the respective Grid services that each requires. This approach enables seamless interoperation from the VOs' perspective; however, it imposes a significant overhead on the resource provider. The system administrator needs to become an expert in each Grid service, and each service consumes resources that could otherwise have been used directly by the VO. The effort required also scales with the number of Grid infrastructures that a resource provider wishes to support, and therefore this method is only recommended for large resource centres. Forschungszentrum Karlsruhe used this approach to overcome the problem of interoperation between EGEE, NDGF and D-Grid [11].

Fig. 2 An example of how the parallel deployment of Grid interfaces can be used to achieve interoperation

2.3 Gateways

A gateway, as shown in Fig. 3, is a bridge between Grid Infrastructures. It is a specific service which makes a Grid Infrastructure look like a single resource. This results in a keyhole approach where only the minimum common subset of functionality is used and handling failures can be problematic. Gateways can also be a single point of failure and a scalability bottleneck; however, the approach is very useful as a proof of concept and to demonstrate the demand for achieving interoperability. Essentially, gateways show the same behaviour as the VO-driven approach discussed above, with the difference that gateways are typically provided by the resource provider. Hence it is the resource provider, and not the user community, that takes on the responsibility and effort. Consequently, a gateway can be used by multiple user communities, reducing the risk of effort duplication. This approach was used by NAREGI [16] in its interoperation activity with EGEE; more details can be found in Section 4.

Fig. 3 An example of how a gateway can be used to achieve interoperation
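The keyhole nature of a gateway can be illustrated with a minimal sketch: a single submission entry point that accepts only a small common job description and forwards it to infrastructure-specific back-ends. The code below is hypothetical and is not taken from any of the middleware stacks discussed here; the back-end functions are stubs standing in for real submission clients.

```python
# Hypothetical gateway sketch: one narrow entry point in front of several
# infrastructure-specific submission back-ends. Only the fields every
# back-end understands are accepted (the "keyhole").
from dataclasses import dataclass

@dataclass
class CommonJob:
    executable: str
    arguments: list      # only the minimum common subset of attributes
    cpu_hours: int

class Gateway:
    def __init__(self, backends):
        # backends: mapping from infrastructure name to a callable that
        # performs the actual, infrastructure-specific submission.
        self.backends = backends

    def submit(self, job: CommonJob, infrastructure: str) -> str:
        if infrastructure not in self.backends:
            raise ValueError(f"no back-end for {infrastructure}")
        return self.backends[infrastructure](job)

# Stubbed back-ends standing in for the real submission clients.
def submit_to_egee(job):   return f"egee-job-0001 ({job.executable})"
def submit_to_naregi(job): return f"naregi-job-0001 ({job.executable})"

gateway = Gateway({"EGEE": submit_to_egee, "NAREGI": submit_to_naregi})
print(gateway.submit(CommonJob("sim.sh", ["--events", "100"], 12), "NAREGI"))
```

The single entry point is what makes the gateway both convenient and limiting: it hides infrastructure differences, but also becomes the point where functionality is lost and where failures and load concentrate.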

2.4 Adapters and Translators

Adapters, as shown in Fig. 4, allow two entities to be connected, while translators modify information so that it can be understood by the other side. Adapters and translators can be incorporated directly into the middleware so that it can work with both interfaces. This requires modifications to the Grid middleware, but it means that the existing interfaces can continue to be used. This approach was used to create the GIN BDII [8], a top-level BDII that contained information from multiple Grid infrastructures. Where and how the adapters and translators are used highlights the interfaces which need standardization. The ability to use multiple interfaces is a useful feature even when standards are adopted, as it helps manage the evolution of a standard. Adapters and translators are almost transparent to users and resource providers, as they are managed by the middleware providers. However, strong collaboration between the middleware providers is required to keep them functioning when updates are made on the corresponding infrastructures.

Fig. 4 An example of how adapters can be used to achieve interoperation
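To illustrate what a translating information provider does, the following sketch maps a handful of attributes from one simplified, hypothetical native schema to GLUE-like attribute names. Real translators, such as those behind the GIN BDII, handle far more attributes and publish LDIF; the attribute names on both sides are illustrative only and do not reproduce the exact Nordugrid or GLUE schema definitions.

```python
# Illustrative translator sketch: map a simplified native record to GLUE-like
# attribute names. The mapping table is hypothetical.
NATIVE_TO_GLUE = {
    "cluster-name":  "GlueClusterName",
    "total-cpus":    "GlueSubClusterLogicalCPUs",
    "queued-jobs":   "GlueCEStateWaitingJobs",
    "running-jobs":  "GlueCEStateRunningJobs",
}

def translate(native_record: dict) -> dict:
    """Translate a native information-system record into GLUE-like attributes.

    Attributes without a known mapping are dropped, which is one reason why
    translators lose information and must track schema changes closely.
    """
    return {NATIVE_TO_GLUE[k]: v
            for k, v in native_record.items() if k in NATIVE_TO_GLUE}

print(translate({"cluster-name": "example-cluster",
                 "total-cpus": 512,
                 "queued-jobs": 40,
                 "running-jobs": 470,
                 "native-only-field": "ignored"}))
```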


2.5 Common Interfaces

Common interfaces with the same semantics are clearly the desired solution; however, achieving consensus can be cumbersome and challenging. Common interfaces enable resource providers to deploy a single set of services and evolve them independently without the risk of breaking interoperability. However, in the absence of standards, which interface should be used? As each Grid Infrastructure has already invested heavily in one interface, moving to another may only be possible in the long term. In addition, agreeing on which interface to use and deploying a production-quality implementation across all infrastructures will take time.

3 Operational Procedures

Interoperable Grid services are only one of the pre-conditions that need to be met to allow interoperation of Grid sites. Equally important is to bridge gaps in the operational procedures that currently hamper seamless interoperation. These include usage policies, accounting, user support, infrastructure monitoring, upgrade policies and daily operations. Harmonization of these procedures and operational tools is essential for real interoperation. In the following we discuss these issues and highlight what needs to be in place for seamless interoperation.

3.1 Usage Policies

Most Grid infrastructures are governed in accordance with a number of policies. These policies define the procedures which a resource provider must follow in order to join the infrastructure, what is expected while participating in the infrastructure, and under which conditions usage of the resources is allowed.


A VO wishing to use the infrastructure has to define the scope of the work that will be carried out, and the users belonging to the VO have to agree to an acceptable usage policy. Before any interoperation activity can take place, discussions are needed between the two infrastructures to align these policies. Interoperation is seamless only if the user has to sign a single usage policy and institutes have a single set of procedures to follow.

3.2 Accounting and Auditing

Accounting for resource usage is vitally important, and accounting records must be available to both the VO and the Grid infrastructure providers. If the infrastructures have different accounting systems, the techniques outlined in Section 2 can be used to link them. A common solution is to create a gateway that exchanges only accounting summaries (a minimal, purely illustrative sketch of such a summary exchange is shown below, after Section 3.3). This is particularly important to observe legal constraints that prevent a free flow of detailed accounting information. VOs usually bind this information to their workflows or do accounting in user space by extracting accounting information from their portals or workflow engines.

3.3 User Support

Most Grid infrastructures operate a User Support Service whereby users can provide information on a problem that they encountered while using the infrastructure, and the resolution of this problem can be tracked. Usually the problem passes through a number of hands, typically via a tracking system, before the source of the problem is identified and the relevant expert can provide a solution. In the case of two or more interoperating infrastructures, a problem experienced in one infrastructure may be caused by a service in another. For example, a computing activity may fail in one infrastructure because an input file was not transferred, as the storage service in another infrastructure was unexpectedly down. In such a scenario, the problem description needs to traverse both infrastructures, possibly multiple times. As User Support is generally implemented using a tracking system, the techniques outlined in Section 2 can be used to bridge the two systems if they differ.
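Returning to the accounting exchange described in Section 3.2, the following sketch shows the idea behind exchanging only summaries: detailed per-job records stay within the originating infrastructure, and only per-VO aggregates cross the gateway. The record fields are illustrative and do not correspond to any real usage-record format.

```python
# Hypothetical sketch of a summary-only accounting exchange: aggregate
# per-job records into per-VO summaries before sending them to the partner
# infrastructure. Field names are illustrative only.
from collections import defaultdict

def summarise(job_records):
    """Aggregate per-job records into per-VO summaries (jobs, CPU and wall time)."""
    summary = defaultdict(lambda: {"jobs": 0, "cpu_hours": 0.0, "wall_hours": 0.0})
    for rec in job_records:
        vo = summary[rec["vo"]]
        vo["jobs"] += 1
        vo["cpu_hours"] += rec["cpu_hours"]
        vo["wall_hours"] += rec["wall_hours"]
    return dict(summary)

records = [
    {"vo": "cms",   "cpu_hours": 11.5, "wall_hours": 12.0},
    {"vo": "cms",   "cpu_hours":  8.0, "wall_hours":  9.5},
    {"vo": "atlas", "cpu_hours": 23.0, "wall_hours": 24.0},
]
# Only this summary, not the individual records, would be exchanged with the
# partner infrastructure's accounting system.
print(summarise(records))
```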


3.3.1 Virtual Organization Support

Not all VOs are the same and many have unique requirements. While a VO's activity may be a priority for one Grid infrastructure, it may not be a priority for another. In such cases a special effort is needed to understand the VO's requirements and how they can be met on the other infrastructure. In some cases, the other infrastructure may require changes to meet these new requirements. Alternatively, the VO may need additional help to modify their methods and switch to a more opportunistic approach. In either case, extra effort may be required to satisfy the needs of the VO.

3.4 Infrastructure Monitoring

To ensure the integrity of the infrastructure, monitoring tools are required to gather various metrics. The monitoring metrics need to be agreed upon, and it must be ensured that they are available to the native tooling used by each infrastructure; this may require using the techniques outlined in Section 2. These metrics may also be used to understand the usage of the infrastructure and to measure quantities such as adherence to Service Level Agreements. In each case, what is required needs to be discussed between the infrastructure providers to ensure that the correct metrics are gathered and communicated.
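As a trivial illustration of an SLA-adherence metric of the kind mentioned above, the following sketch derives a service availability figure from a list of periodic probe results. The probe format and the 95% target are hypothetical.

```python
# Illustrative sketch: compute an availability metric from periodic service
# probes and compare it against a hypothetical SLA target.
def availability(test_results):
    """Fraction of successful tests, e.g. from hourly service probes."""
    if not test_results:
        return 0.0
    return sum(1 for ok in test_results if ok) / len(test_results)

# 744 hourly probes in a month, 15 of which failed.
results = [True] * 729 + [False] * 15
avail = availability(results)
print(f"availability = {avail:.3%}, SLA (>= 95%) met: {avail >= 0.95}")
```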

3.5 Daily Operations

To ensure infrastructure integrity and user satisfaction, daily operations are required. Daily operations continually monitor the infrastructure for issues and ensure that they are tracked and resolved. This requires preempting problems, firefighting and prioritizing issues, as well as ensuring that the relevant information is disseminated to all parties concerned. In order to achieve interoperation, it is important that the day-to-day operations cover both infrastructures. This requires first understanding the operational work carried out in each infrastructure, identifying the respective processes, and deciding where to create the communication links between the operational teams.

3.6 Security Incident Response

A particular problem in daily operations is how to respond to security incidents, as the distributed nature of a Grid infrastructure presents additional security challenges. Security incidents at one resource provider can potentially create security-related issues for another. As security incidents have the potential to traverse infrastructure boundaries, Grid interoperations require a common security incident response procedure to ensure a coordinated response.

3.7 Software Upgrades

Grid infrastructures are constantly evolving and Grid middleware frequently needs to be updated across the infrastructure. Some updates have a negligible impact, as they only affect an obscure feature in one particular service. Other updates, however, are infrastructure-wide and may affect all services, such as changes to the information system or the security infrastructure. It must be ensured that whatever changes are made to the infrastructure, interoperability is maintained. For this reason, the software process used for evaluating releases before deployment must include interoperability testing (a minimal example of such a check is sketched below). Any major changes that need to be made to an infrastructure must be discussed in advance between all infrastructures, so that the solutions required to maintain interoperability can be developed. The usage of standards can help with this problem; however, for each update, standards-compliance tests need to be performed, and even slight changes in a standard (or its interpretation) might lead to interoperability problems. As a result, interoperability tests remain useful and required. The discussions and tests become increasingly difficult as the number of infrastructures involved grows, especially since infrastructures follow independent release schedules.
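The kind of automated check such a software process might include can be sketched as follows: before an update is rolled out, verify that the information published by a service still carries the attributes the partner infrastructure relies on. The attribute names and the check itself are hypothetical; production interoperability tests would also exercise real job submission and data transfer.

```python
# Hypothetical interoperability smoke test: check that a candidate release
# still publishes the attributes a partner infrastructure depends on.
# Attribute names are illustrative only.
REQUIRED_ATTRIBUTES = {
    "GlueCEUniqueID",
    "GlueCEStateWaitingJobs",
    "GlueCEStateRunningJobs",
}

def check_published_info(published: dict) -> list:
    """Return the required attributes missing from the published record."""
    return sorted(REQUIRED_ATTRIBUTES - published.keys())

candidate_release_output = {
    "GlueCEUniqueID": "ce.example.org:2119/jobmanager-pbs-grid",
    "GlueCEStateRunningJobs": 470,
    # "GlueCEStateWaitingJobs" accidentally dropped by the update
}
missing = check_published_info(candidate_release_output)
print("interoperability test",
      "PASSED" if not missing else f"FAILED, missing: {missing}")
```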

4 Example Grid Interoperation Activities

The Enabling Grids for E-SciencE (EGEE) [15] project is the largest multi-disciplinary Grid infrastructure in the world, bringing together institutes to produce a reliable and scalable infrastructure available to the European and global research community.


The Worldwide LHC Computing Grid [14] project aims to build and maintain a data storage and analysis framework for the entire high-energy physics community that will use the Large Hadron Collider [2], and is a major user of the EGEE infrastructure. The LHC VOs have resources in multiple infrastructures, and as such it is necessary that these Grids interoperate. In this section we report on the interoperation experience gained in these projects. We particularly focus on advanced efforts where we can already report on lessons learned; these include the Open Science Grid (OSG) and the Nordic Data Grid Facility (NDGF). We also report on efforts with NAREGI, as this infrastructure is quite different and thus exposes new challenges.

As described in Section 2, the first step toward interoperation is to produce an interoperability matrix. Table 1 shows the interoperability matrix for EGEE, OSG, NDGF and NAREGI. The matrix gives an overview of the main interfaces used and highlights the similarities and differences between the infrastructures. It serves as a foundation for the work and is used to focus on the main areas where interoperability is required. It can be seen that, although all infrastructures support the same protocol for file transfers, NDGF and NAREGI both provide different job submission interfaces and information models. In the following we present the effort, status and lessons learned for each of the pair-wise interoperability efforts.

4.1 Open Science Grid

The Open Science Grid [7] is a US grid computing infrastructure that supports scientific computing via an open collaboration of science researchers, software developers and resource providers. The first steps towards interoperability took place towards the end of 2004. The interoperability matrix indicates that the main difference between OSG and EGEE was the information schema used in the information system. As the main purpose of the GLUE schema [3] used in EGEE was to facilitate interoperability, OSG considered moving to the GLUE schema on condition that a minor revision was undertaken which would meet their additional use cases.


After the definition of GLUE 1.3, the information providers were updated to provide this additional information and were deployed on a prototype infrastructure to test interoperability. In January 2005, a proof of concept was demonstrated in which a job was sent from an EGEE UI to an OSG CE via an EGEE WMS. This was possible because information about the OSG resources was now published using the GLUE representation and the WMS was able to match the job to a resource (a simplified illustration of this matchmaking step is given at the end of this subsection). In order to achieve this goal, a number of additional small configuration changes to the installed software were required. It took until August 2005 for all the necessary middleware changes to be fed back into the main software release and deployed across the OSG infrastructure. This was not due to the complexity of the task but to the release cycles and the time required to update software on a large-scale production system. In September 2005, the CMS VO [2] successfully took advantage of the interoperating infrastructures. However, it took a further two years to iron out all the interoperation issues, such as trouble-ticket routing and infrastructure monitoring. OSG and EGEE have set up a joint weekly operations meeting to ensure that interoperation is maintained.
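The matchmaking step that made this proof of concept possible can be sketched as follows: once OSG CEs publish GLUE-style attributes, a broker can select a CE whose published values satisfy the job's requirements. The attribute names and the selection logic are simplified illustrations, not the actual WMS matchmaking code.

```python
# Simplified illustration of information-based matchmaking: choose a CE whose
# published GLUE-style attributes satisfy the job requirements. This is not
# the actual WMS algorithm; the ranking here is simply "least waiting jobs".
computing_elements = [
    {"GlueCEUniqueID": "ce01.osg.example.org",  "GlueCEStateWaitingJobs": 120,
     "GlueHostMainMemoryRAMSize": 2048},
    {"GlueCEUniqueID": "ce02.egee.example.org", "GlueCEStateWaitingJobs": 5,
     "GlueHostMainMemoryRAMSize": 4096},
]

def match(job_requirements, ces):
    """Return CEs meeting the memory requirement, least-loaded first."""
    suitable = [ce for ce in ces
                if ce["GlueHostMainMemoryRAMSize"] >= job_requirements["min_ram_mb"]]
    return sorted(suitable, key=lambda ce: ce["GlueCEStateWaitingJobs"])

best = match({"min_ram_mb": 2000}, computing_elements)
print("selected CE:", best[0]["GlueCEUniqueID"] if best else "none")
```

The essential point is that once both infrastructures publish the same schema, the broker does not need to know which infrastructure a resource belongs to.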


4.2 Nordic Data Grid Facility

The Nordic Data Grid Facility (NDGF) [13] is a production grid facility that leverages the existing national computational resources and grid infrastructures of the Nordic countries (Denmark, Finland, Norway and Sweden). The NDGF infrastructure is provided using the ARC [6] middleware, and initial discussions on interoperation took place in August 2005. The interoperability matrix showed that the main differences between the ARC middleware and EGEE were the information system and the computing interface. In order to achieve interoperability, adapters and translators were incorporated into the middleware to overcome these differences. For the information system, translating information providers were created to translate information from the Nordugrid schema to the GLUE schema and vice versa. These were deployed on information system gateways, which provided a bridge between the two infrastructures. To address the problem of the CE interface, adapters were included in the job management components to enable them to work with the different interfaces. The ARC client was adapted so that it could also submit jobs to the CE interface used in EGEE, and the WMS used in EGEE was modified so that jobs could be submitted to the ARC CE. Selecting which interface to use was achieved by encoding this information in the ASCII string representing the endpoint in the information system (a small illustrative sketch of this dispatch is given at the end of this subsection). After these prototype components were evaluated, it was not until June 2008 that the changes were incorporated into the main software release and end-to-end submission could be demonstrated. The CMS VO immediately evaluated this approach and continued to push for improvements until it met their high standards for scalability and performance. This highlights the need for a pilot VO in interoperation activities. Pilot VOs not only serve to demonstrate the need and ensure that the activity is focused, but also provide the drive to achieve the goal. In parallel to the software development, discussions on operational aspects took place between EGEE and NDGF regarding trouble-ticket routing and infrastructure monitoring. NDGF and EGEE also hold a joint weekly operations meeting to ensure that interoperation is maintained.
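The interface-selection mechanism described above can be illustrated with a small sketch: the client inspects a tag encoded in the endpoint string published by the information system and dispatches the job to the matching submission routine. The "flavour:host" format and the tag names below are purely illustrative; the actual encoding used by ARC and the WMS is not reproduced here.

```python
# Illustrative sketch of selecting a submission interface from a tag encoded
# in the published endpoint string. The format and tags are hypothetical.
def submit(job_description: str, endpoint: str) -> str:
    flavour, _, contact = endpoint.partition(":")
    if flavour == "arc":
        return f"submitted to ARC CE at {contact}"   # would call the ARC client
    if flavour == "lcg":
        return f"submitted to LCG CE at {contact}"   # would call the EGEE/WMS path
    raise ValueError(f"unknown CE flavour in endpoint: {endpoint}")

for ep in ["arc:ce.ndgf.example.org", "lcg:ce.egee.example.org"]:
    print(submit("echo hello", ep))
```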


4.3 NAREGI

The NAREGI project [16] aims to provide a large-scale computing environment for widely-distributed advanced research and education in Japan. Initial discussions on interoperation between NAREGI and EGEE took place at the end of 2006. The interoperability matrix in Table 1 indicates that there are many differences between the NAREGI and EGEE middleware. A number of approaches were discussed to overcome these differences, and prototypes of interoperable components were developed using gateways. Gateways were chosen as they could enable interoperation between the infrastructures without significant changes to the infrastructures themselves, and would allow a proof of concept to be evaluated in a production environment. The EGEE WMS was adapted to enable it to submit to the NAREGI CE, and the NAREGI Super Scheduler was adapted to submit to the CE used in EGEE. Translating information providers were again employed to create gateways between the two infrastructures. For data management, an investigation into SRM/SRB interoperability was required. As the NAREGI middleware was at an early stage of development, a production infrastructure with a corresponding operations activity was not available, and as such work on operational interoperation could not get underway. The interoperation activity is therefore on hold until the NAREGI production infrastructure is up and running.

5 Analysis

Our multi-year experience in Grid interoperation has shown that seamless access to resources operated in different infrastructures is possible, albeit with significant effort. Several Grid infrastructures have been established over the past years, and these have indeed been successful in providing seamless access to the resources federated within each infrastructure and in overcoming differences between the administrative domains of their resource providers. However, instead of solving the problem of seamless access in a generic way, different solutions have evolved on both the middleware and the operational side. This lifts the problem of seamless access from the resource providers to the Grid infrastructures, but does not solve it in the general sense. The main reason is that Grid infrastructures use different interfaces and services, and different operational procedures. While differences in middleware can be overcome by the techniques described in Section 2, there are limitations to the completeness and sustainability of these approaches. For Grid Computing to achieve its full potential, all infrastructures must adopt interoperable solutions. The best way to achieve this is by standardizing the most important interfaces required for Grid Computing. The interoperability matrices used in the interoperation activities can be used to identify which interfaces need standardization.


The areas which need primary focus are security, information services, job management, data management and operational tools. There are a number of efforts underway, most notably within the Open Grid Forum [1], which attempt to address these missing standards. One notable effort is the definition of the GLUE 2.0 information model, which aims to provide a common information schema for Grid infrastructures; the lack of a common information model was highlighted in all the interoperation activities. Another area which was highlighted was the need for a common job submission interface, and it is hoped that the Basic Execution Service, along with the Job Submission Description Language, will be useful in addressing this issue. Within the OGF, the Grid Interoperation Now (GIN) activity has prepared the field [17], and the recently formed Production Grid Infrastructure (PGI) group, a GIN spin-off activity, is expected to provide guidelines and profiles for interoperating Grid services, taking experiences from production Grid infrastructures into account. Standardization is, however, a long and delicate process, and it may take several years until standardized, interoperable components appear in mainstream Grid services. As shown by the interoperation activities described in Section 4, even ad-hoc, non-standard solutions take several months to years before being supported in mainstream Grid middleware releases.

Interoperable Grid middleware is, while a necessary pre-condition for Grid interoperation, only one side of the coin. Harmonized operational procedures and usage policies are equally important to provide users with seamless access to resources. In a joint effort, the major international Grid infrastructures (DEISA, EGEE, NDGF, NAREGI, OSG and TeraGrid) are trying to harmonize their usage policies. The aim is for users to sign the usage policy of only one infrastructure, which would subsequently allow them to use another infrastructure seamlessly. OSG, EGEE and NDGF have also come a long way in harmonizing their operational procedures, defining the exchange of usage records and problem tickets, coordinating their security efforts, and holding joint operational meetings on a regular (weekly) basis.


This constant effort is essential to maintain interoperation, but it needs to be well-motivated as it constitutes significant additional effort. In the EGEE, OSG and NDGF case, it was the common goal of supporting the LHC experiments via the LCG project that motivated the interoperation efforts and provided a tangible goal.

6 Conclusion

Most current Grid efforts have pushed the original Grid problem to a higher level. While seamless resource usage is possible within a given infrastructure, cross-infrastructure usage is still a problem; the original problem has not been solved but merely turned into the Grid Interoperation problem. The main reason the original problem remains unsolved is that many different solutions exist, for a variety of reasons. For Grid Computing to achieve its full potential, different infrastructures must offer interoperable services which a user can access in a seamless way, much like the World Wide Web works today. In this paper we presented a number of techniques that can be applied in the most important areas of interoperation. Table 2 summarizes them in an interoperation checklist; following this checklist leads to successful interoperation efforts, as demonstrated by the various activities reported in this paper. The best way to achieve interoperability is by standardizing the most important interfaces required for Grid Computing. However, in the absence of standards, middleware interoperability can be achieved using the methods outlined in Section 2. Interoperability activities can also be used to identify which interfaces need standardization.

Table 2 Interoperation checklist

• Middleware interoperability (Interoperability Matrix)
• Usage policies
• Accounting and auditing
• User support
• Infrastructure monitoring
• Daily operations


It is important that the standards which are defined are acceptable to the production Grid infrastructures which need to adopt them. One of the reasons why multiple solutions exist today is that the proposed de facto solutions presented to the infrastructures did not meet the requirements of deployment reality, so each infrastructure independently developed its own solution. This situation must be avoided during the development of standards. The best way to do so is to ensure that the infrastructures which will depend on these standards are part of the standardization process, and that proven solutions are used as templates for those standards. Even after middleware interoperability has been achieved, operational harmonization is still required. Ensuring infrastructure integrity and user satisfaction requires daily operations, which involve extensive communication between the operations teams of the respective infrastructures.

Acknowledgement This work was partially supported by the European Commission through the EGEE-III project, nr. INFSO-RI-222667.

References

1. Open Grid Forum. http://www.ogf.org/ (2007)
2. The Large Hadron Collider. http://lhc.web.cern.ch/lhc/ (2008)
3. Andreozzi, S., Burke, S., Field, L., Konya, B.: GLUE Schema Specification version 1.3. http://glueschema.forge.cnaf.infn.it/Spec/V13 (2004)
4. Campana, S., Barberis, D., Brochu, F., et al.: Analysis of the ATLAS Rome production experience on the LHC computing Grid. In: First International Conference on e-Science and Grid Computing (e-Science 2005), Melbourne, Australia, December 2005
5. Kónya, B., et al.: The NorduGrid architecture and tools. In: Proceedings of the International Conference on Computing in High Energy and Nuclear Physics (CHEP 2003) (2003)
6. Ellert, M., et al.: Advanced resource connector middleware for lightweight computational Grids. Future Gener. Comput. Syst. 23, 219–240 (2007)
7. Pordes, R., Livny, M., Foster, I., Gardner, R., Quick, R., et al.: The Open Science Grid. J. Phys.: Conf. Ser. 78, 012057 (2007). http://stacks.iop.org/1742-6596/78/012057
8. Field, L., Flechl, M.: Grid interoperability: joining Grid information systems. In: Proceedings of the International Conference on Computing in High Energy and Nuclear Physics (CHEP 2007) (2007)
9. Foster, I., Kesselman, C., Tuecke, S.: The anatomy of the Grid: enabling scalable virtual organizations. Int. J. High Perform. Comput. Appl. 15, 200–222 (2001)
10. Foster, I., Kesselman, C., Tuecke, S.: Grid services for distributed system integration. Computer 35(6), 37–46 (2002)
11. Gabriel, S.: GridKa Tier1 site management. In: CHEP 2007, Victoria, Canada, September 2007
12. Kleinrock, L.: Realizing the Information Future: The Internet and Beyond. National Academy Press, Washington (1994)
13. Kónya, B., Smirnova, O., Ekelof, T., Ellert, M.: The NorduGrid production Grid infrastructure, status and plans. In: Proc. 4th International Workshop on Grid Computing (2003)
14. Lamanna, M.: The LHC computing Grid project at CERN. Nucl. Instrum. Methods Phys. Res. A 534, 1–6 (2004)
15. Laure, E., Jones, R.: Enabling Grids for e-Science: The EGEE Project. In: Grid Computing: Infrastructure, Service, and Applications. CRC, Boca Raton (2009)
16. Matsuoka, S.: Japanese computational Grid research project: NAREGI. Proc. IEEE 93(3), 522–533 (2005)
17. Riedel, M., Laure, E., Soddemann, Th., et al.: Interoperation of world-wide production e-Science infrastructures. Concurr. Comput.: Pract. Exper. 21, 961–990 (2009)
18. Saiz, P., Aphecetche, L., Buncic, P., Piskac, R., et al.: AliEn: ALICE environment on the GRID. Nucl. Instrum. Methods Phys. Res. A 502, 437–440 (2003)