A Lightweight Application Hosting Environment for Grid Computing

P. V. Coveney, S. K. Sadiq, R. Saksena, M. Thyveetil, and S. J. Zasada
Centre for Computational Science, Department of Chemistry, University College London, Christopher Ingold Laboratories, 20 Gordon Street, London, WC1H 0AJ

M. Mc Keown and S. Pickles
Manchester Computing, Kilburn Building, The University of Manchester, Oxford Road, Manchester, M13 9PL

Abstract

Current grid computing [1, 2] technologies have often been seen as too heavyweight and unwieldy from a client perspective, requiring complicated installation and configuration steps that are beyond the ability of most end users. This has led many of the people who would benefit most from grid technology, namely application scientists, to avoid using it. In response we have developed the Application Hosting Environment, a lightweight, easily deployable environment designed to allow the scientist to quickly and easily run unmodified applications on remote grid resources. We do this by building a layer of middleware on top of existing technologies such as Globus, and expose the functionality as web services, using the WSRF::Lite toolkit to manage the running application's state. The scientist can start and manage the application he wants to use via these services, with the extra layer of middleware abstracting the details of the particular underlying grid middleware in use. The resulting system provides a great deal of flexibility, allowing clients to be developed for a range of devices from PDAs to desktop machines, and command line clients which can be scripted to produce complicated application workflows.

1 Introduction

We define grid computing as distributed computing conducted transparently by disparate organisations across multiple administrative domains. Fundamental to the inter-institutional sharing of resources in a grid is the grid middleware, that is, the software that allows institutions to share their resources in a seamless and uniform way.

While many strides have been made in the field of grid middleware technology, such as [3, 4], the prospect of a heterogeneous, on-demand computational grid as ubiquitous as the electrical power grid is still a long way off. Part of the problem has been the difficulty for the end user of deploying and using many of the current middleware solutions, which has led to reluctance amongst some researchers to actively embrace grid technology [5].

Many of the current problematic grid middleware solutions can be characterised as what we define as 'heavyweight', that is, they display some or all of the following features:

i. the client software is difficult to configure or install, very often requiring an experienced system administrator to do so;

ii. they are dependent on a lot of supporting software being installed, particularly libraries that are not likely to already be installed on the resource, or modified versions of common libraries;

iii. they require non-standard ports to be opened on the firewall, requiring the intervention of a network administrator;

iv. they have a high barrier to entry, meaning that potential users have to develop a new skill set before they are able to use the technology productively.

To address these deficiencies much attention is now focused on 'lightweight' middleware solutions such as [6], which attempt to lower the barrier to entry for users of the grid.

2 The Application Hosting Environment

In response to the issues raised above we have developed the Application Hosting Environment (AHE), a lightweight, WSRF [7] compliant, web services based environment for hosting scientific applications on the grid.

The AHE allows scientists to quickly and easily run unmodified, legacy applications on grid resources, managing the transfer of files to and from the grid resource and allowing the user to monitor the status of the application. The philosophy of the AHE is based on the fact that very often a group of researchers will all want to access the same application, but not all of them will possess the skill or inclination to install the application on a remote grid resource. In the AHE, an expert user installs the application and configures the AHE server, so that all participating users can share the same application.

The AHE focuses on applications, not jobs, with the application instance being the central entity. We define an application as an entity that can be composed of multiple computational jobs; examples of applications are (a) a simulation that consists of two coupled models which may require two jobs to instantiate it, and (b) a steerable simulation that requires both the simulation code itself and a steering web service to be instantiated. Currently the AHE has a one-to-one relationship between applications and jobs, but this restriction will be removed in a future release once we have more experience in applying these concepts to scenarios (a) and (b) above.

An application instance is represented as a stateful WS-Resource [7], the properties of which include the application instance's name, status, input and output files, and the target grid resource that the application has been launched on. Details of how to launch the application are maintained on a central service, in order to reduce the complexity of the AHE client.

The design of the AHE has been greatly influenced by WEDS (WSRF-based Environment for Distributed Simulations) [8], a hosting environment designed for operation primarily within a single administrative domain. The AHE differs in that it is designed to operate across multiple administrative domains seamlessly, but it can also be used to provide a uniform interface to applications deployed on both local HPC machines and remote grid resources.

The AHE is based on a number of pre-existing grid technologies, principally GridSAM [9] and WSRF::Lite [10]. WSRF::Lite is a Perl implementation of the OASIS Web Services Resource Framework specification. It is built using the Perl SOAP::Lite [11] web services toolkit, from which it derives its name. WSRF::Lite provides support for WS-Addressing [12], WS-ResourceProperties [13], WS-ResourceLifetime [14], WS-ServiceGroup [15] and WS-BaseFaults [16]. It also provides support for digitally signing SOAP [17] messages using X.509 digital certificates in accordance with the OASIS WS-Security [18] standard, as described in [19].

GridSAM provides a web services interface for submitting and monitoring computational jobs managed by a variety of Distributed Resource Managers (DRMs), including Globus [3], Condor [20] and Sun Grid Engine [21], and runs in an OMII [22] web services container. Jobs submitted to GridSAM are described using the Job Submission Description Language (JSDL) [23]. GridSAM uses this description to submit a job to a local resource, and has a plug-in architecture that allows adapters to be written for different types of resource manager. In contrast to WEDS, which represents jobs co-located on the hosting resource, the AHE can submit jobs to any resource manager for which a GridSAM plug-in exists.

Reflecting the flexible philosophy and nature of Perl, WSRF::Lite allows the developer to host WS-Resources in a variety of ways, for instance using the Apache web server or a standalone WSRF::Lite container. The AHE has been designed to run in the Apache [24] container, and has also been successfully deployed in a modified Tomcat [25] container.
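To make the resource-property model concrete, here is a minimal sketch in Python of the state an App WS-Resource might carry. The field names and status values are illustrative assumptions drawn from the property list above; the AHE's actual WSRF property schema is not reproduced here.

```python
from dataclasses import dataclass, field
from enum import Enum


class AppStatus(Enum):
    """Illustrative lifecycle states; the AHE's real state names may differ."""
    PREPARED = "prepared"
    STARTED = "started"
    COMPLETED = "completed"
    TERMINATED = "terminated"


@dataclass
class AppResourceProperties:
    """Properties the paper attributes to an App WS-Resource."""
    name: str                                        # application instance name
    status: AppStatus                                # current lifecycle state
    input_files: list[str] = field(default_factory=list)
    output_files: list[str] = field(default_factory=list)
    target_resource: str = ""                        # grid resource launched on
```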

3 Design Considerations

The problems associated with ‘heavyweight’ middleware solutions described above have greatly influenced the design of the Application Hosting Environment. Specifically, they have led to the following constraints on the AHE design:

• the user's machine does not have to have client software installed to talk directly to the middleware on the target grid resource. Instead, the AHE client provides a uniform interface to multiple grid middlewares.

• the client machine is behind a firewall that uses network address translation (NAT) [26]. The client cannot therefore accept inbound network connections, and has to poll the AHE server to find the status of an application instance.

• the client machine needs to be able to upload input files to and download output files from a grid resource, but does not have GridFTP client software installed. An intermediate file staging area is therefore used to stage files between the client and the target grid resource.

• the client has no knowledge of the location of the application it wants to run on the target grid resource, and it maintains no information on specific environment variables that must be set to run the application. All information about an application and its environment is maintained on the AHE server.

• the client should not be affected by changes to a remote grid resource, such as its underlying middleware changing from GT2 to GT4. Since GridSAM is used to provide an interface to the target grid resource, a change to the underlying middleware used on the resource does not matter, as long as it is supported by GridSAM.

• the client does not have to be installed on a single machine; the user can move between clients on different machines and access the applications that they have launched. The user can even use a combination of different clients, for example using a command line client to launch an application and a GUI client to monitor it. The client therefore must maintain no information about a running application's state; all state information is maintained by a central service that is queried by the client.

• all communication is secured using Transport Layer Security (TLS) [27] with the user's grid X.509 certificate, which is used to authenticate them.

These constraints have led to the design of a lightweight client for the AHE, which is simple to install and does not require the user to install any extra libraries or software. It should be noted that this design does not remove the need for middleware solutions such as Globus on the target grid resource; indeed, we provide an interface to run applications on several different underlying grid middlewares, so it is essential that grid resource providers maintain a supported middleware installation on their machines. What the design does do is simplify the experience of the end user.

4 Architecture of the AHE

The AHE introduces an extra layer of middleware between the user and the middleware running on the remote grid resource (such as Globus). This layer is used to greatly simplify running an application on a remote machine, by abstracting away most of the details of how the application is actually run. Figure 1 shows the architecture and workflow of the AHE. Briefly, the core components of the AHE are: the App Server Registry, a registry of applications hosted in the AHE; the App Server Factory, a "factory" according to the Factory pattern [28], used to produce a WS-Resource (the App WS-Resource) that acts as a representation of the instance of the executing application (the App Server Factory is itself a WSRF WS-Resource that supports the WS-ResourceProperties operations); the Application Registry, a registry of previously created App WS-Resources, which the user can query to find previously launched application instances; and the File Staging Service, a WebDAV [29] file server which acts as an intermediate staging step for application input files from the user's machine to the remote grid resource. We define the staging of files to the File Staging Service as "pass by value", where the file is transferred from the user's local machine to the File Staging Service. The AHE also supports "pass by reference", where the client supplies a URI to the file required by the application. The MyProxy Server is used to store proxy credentials required by GridSAM to submit to Globus job queues. As described above, we use GridSAM to provide a web services compliant front end to remote grid resources.

All user interaction is via a client that communicates with the AHE using SOAP messages. In figure 1, green arrows indicate messages sent by the user and red arrows indicate messages sent by the AHE. To start an application instance the user goes through the sequence Prepare → Upload Input Files → Start, where Start actually causes the application to start executing. The workflow of launching an application on a grid resource running the Globus middleware is as follows: the user retrieves a list of App Server Factory URIs from the AHE (1). There is an application server for each application configured in the AHE. This step is optional, as the user may have already cached the URIs of the App Server Factories he wants to use. The user issues a "Prepare" message (2); this causes an App WS-Resource to be created (3) which represents this instance of the application's execution. Next, the user uploads the input files to the intermediate File Staging Service using the WebDAV protocol (4).





Figure 1: The architecture of the Application Hosting Environment

The user generates and uploads a proxy credential to the MyProxy server (5). The proxy credential is generated from the X.509 certificate issued by the user's grid certificate authority. This step is optional, as the user may have previously uploaded a credential that is still valid. Once the user has uploaded all of the input files, he sends the "Start" message to the App WS-Resource to start the application running (6). The Start message contains the locations of the files to be staged in to and out from the target grid resource, along with details of the user's proxy credential and any arguments that the user wishes to pass to the application. The App WS-Resource maintains a registry of instantiated applications: issuing a Prepare message causes a new entry to be added to the registry (7), and a "Destroy" command sent to the App WS-Resource causes the corresponding entry to be removed from the registry.

The App WS-Resource creates a JSDL document for a specific application instance, using its configuration file to determine where the application is located on the resource. The JSDL is sent to the GridSAM instance acting as interface to the grid resource (8), and GridSAM handles authentication using the user's proxy certificate. GridSAM retrieves the user's proxy credential from the MyProxy server (9), which it uses to transfer any input files required to run the application from the intermediate File Staging Service to the grid resource (10), and to actually submit the job to a Globus back end.

The user can send command messages to the App WS-Resource to monitor the application instance's progress (11); for example, the user can send a "Monitor" message to check on the application's status. The App WS-Resource queries the GridSAM instance on behalf of the user to update state information. The user can also send "Terminate" and "Destroy" messages to halt the application's execution and destroy the App WS-Resource respectively. GridSAM submits the job to the target grid resource and the job completes. GridSAM then moves the output files back to the file staging locations that were specified in the JSDL document (12). Once the job is complete, the user can retrieve the application's output files from the File Staging Service to their local machine. The user can also query the Application Registry to find the end point references of jobs that have been previously prepared (14).
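Seen from the client side, the numbered workflow above can be summarised as in the following Python sketch. It is illustrative only: the real AHE client exchanges digitally signed SOAP messages, which are hidden here behind a hypothetical `AheClient` wrapper (the `ahe_client` module and its method names are our assumptions); only the file staging steps, which are plain WebDAV HTTP transfers, are shown directly using the `requests` library.

```python
import time

import requests  # WebDAV staging is plain HTTP PUT/GET

# 'AheClient' is a hypothetical wrapper around the signed SOAP exchanges;
# its method names mirror the message names used in the text.
from ahe_client import AheClient  # hypothetical module

client = AheClient(registry_uri="https://ahe.example.org/AppServerRegistry")

# (1) discover the App Server Factory for the application we want to run
factory = client.find_app_server_factory("namd")

# (2, 3, 7) Prepare: the factory creates an App WS-Resource for this run
app = factory.prepare()

# (4) stage input files to the intermediate File Staging Service over WebDAV
for local_path in ["input.conf", "structure.pdb"]:
    with open(local_path, "rb") as f:
        requests.put(f"{app.file_staging_uri}/{local_path}", data=f)

# (5) happens out of band: the user uploads a proxy credential to MyProxy

# (6) Start: tells the App WS-Resource where files live and how to run
app.start(args=["+p32"], proxy_name="my-proxy", target="ngs.example.ac.uk")

# (11) the client sits behind NAT, so it polls rather than taking callbacks
while app.monitor() not in ("completed", "terminated"):
    time.sleep(60)

# (12 onwards) retrieve output files from the File Staging Service
out = requests.get(f"{app.file_staging_uri}/output.log")
open("output.log", "wb").write(out.content)
```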

5 AHE Deployment

As described above, the AHE is implemented as a client/server model. The client is designed to be easily deployed by an end user, without having to install any supporting software. The server is designed to be deployed and configured by an expert user, who installs and configures applications on behalf of other users.

Due to its reliance on WSRF::Lite, the AHE server is developed in Perl and is hosted in a container such as Apache or Tomcat. The actual AHE services are an ensemble of Perl scripts that are deployed as CGI scripts in the hosting container. To install the AHE server, the expert user must download the AHE package and configure their container appropriately. The AHE server uses a PostgreSQL [30] database to store the state information of the App WS-Resources, which must also be configured by the expert user. We assume that a GridSAM instance has been configured for each resource that the AHE can submit to.

To host an application in the AHE, the expert user must first install and configure it on the target grid resource. The expert user then configures the location and settings of the application on the AHE server and creates a JSDL template document for the application and the resource. This can be done by cloning a pre-existing JSDL template. To complete the installation, the expert user runs a script to repopulate the Application Server Registry; the AHE can be updated dynamically and does not require restarting when a new application is added.
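For illustration, the sketch below shows the kind of JSDL template an expert user might clone and fill in, with per-instance values substituted by the AHE server. The element names follow the JSDL 1.0 schema, but the placeholders, substitution mechanism and example paths are our assumptions, not the AHE's actual template format.

```python
from string import Template

# A minimal JSDL job description; $-placeholders are filled per instance.
# Element names follow JSDL 1.0; the AHE's real templates may differ.
JSDL_TEMPLATE = Template("""\
<jsdl:JobDefinition xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl"
                    xmlns:posix="http://schemas.ggf.org/jsdl/2005/11/jsdl-posix">
  <jsdl:JobDescription>
    <jsdl:Application>
      <posix:POSIXApplication>
        <posix:Executable>$executable</posix:Executable>
        <posix:Argument>$arguments</posix:Argument>
      </posix:POSIXApplication>
    </jsdl:Application>
    <jsdl:DataStaging>
      <jsdl:FileName>$input_file</jsdl:FileName>
      <jsdl:Source><jsdl:URI>$staging_uri/$input_file</jsdl:URI></jsdl:Source>
    </jsdl:DataStaging>
  </jsdl:JobDescription>
</jsdl:JobDefinition>
""")

# The expert user fixes the executable path per resource; the AHE fills in
# the per-instance staging details when a job is prepared.
jsdl_doc = JSDL_TEMPLATE.substitute(
    executable="/usr/local/namd2/namd2",      # illustrative install path
    arguments="input.conf",
    input_file="input.conf",
    staging_uri="https://ahe.example.org/filestage/run42",
)
```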

The AHE is designed to be interacted with by a variety of different clients. The clients we have developed are implemented in Java using the Apache Axis [31] web services toolkit; both GUI and command line clients are built from the same Java codebase. The GUI client uses a wizard to guide the user through the steps of starting an application instance. The wizard allows users to specify constraints for the application, such as the number of processors to use, choose a target grid resource to run the application on, stage all required input files to the grid resource, specify any extra arguments for the simulation, and set it running.

To install the AHE clients, all an end user need do is download and extract the client, load their X.509 certificate into a Java keystore using a provided script, and set an environment variable to point to the location of the clients. The user also has to configure their client with the AHE server endpoints supplied by their AHE server administrator.

The AHE client attempts to discover which files need to be staged to and from the resource by parsing the application's configuration file. It features a plug-in architecture which allows new configuration file parsers to be developed for any application that is to be hosted in the AHE. The parser will also rewrite the user's application configuration file, removing any relative paths, so that the application can be run on the target grid resource. If no plug-in is available for a certain application, the user can specify input and output files manually.

Once an application instance has been prepared and submitted, the AHE GUI client allows the user to monitor the state of the application by polling its associated App WS-Resource. After the application has finished, the user can stage the application's output files back to their local machine using the GUI client. The client also gives the user the ability to terminate an application while it is running on a grid resource, and to destroy an application instance, removing it from the AHE's application registry.

In addition to the GUI client, a set of command line clients is available providing the same functionality as the GUI. The command line clients have the advantage that they can be called from a script to produce complex workflows with multiple application executions, as in the sketch below.
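As a sketch of such scripting, the following Python fragment chains three simulation stages through hypothetical command line clients (`ahe-prepare`, `ahe-start`, `ahe-monitor`, `ahe-getoutput`); the real clients' names, flags and output formats may differ, as may the output file each stage produces.

```python
import subprocess
import time


def run(cmd):
    """Run an AHE command line client and return its stdout."""
    return subprocess.run(cmd, check=True, capture_output=True,
                          text=True).stdout.strip()


def run_stage(app, input_file):
    """Prepare, start and wait for one link in a chained simulation."""
    epr = run(["ahe-prepare", app])                  # create App WS-Resource
    run(["ahe-start", epr, "--input", input_file])
    while run(["ahe-monitor", epr]) != "completed":
        time.sleep(60)                               # poll; no inbound connections
    run(["ahe-getoutput", epr])                      # fetch output files
    return "restart.coor"                            # assumed per-stage output


# equilibration protocol: each stage consumes the previous stage's output
stage_input = "initial.conf"
for stage in ["minimise", "heat", "equilibrate"]:
    print(f"running {stage} stage")
    stage_input = run_stage("namd", stage_input)
```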

6 User Experience

We have successfully used the AHE to deploy two parallel molecular dynamics codes, LAMMPS [32] and NAMD [33]. These applications have been used to conduct production simulations on both the UK National Grid Service (NGS) and the US TeraGrid. There follows a discussion of two different use cases where the AHE has been used to quickly and easily run simulations using the grid.

6.1 The NAMD Use Case

Users often require the ability to launch multiple instances of the same or similar simulations that vary in particular attributes which affect the outcome of the simulation. An example is 'ensemble' molecular dynamics simulation of biological molecules, in which the starting energies of various atoms are randomized to allow for conformational sampling of the biological structure through multiple simulations. Another example is Thermodynamic Integration (TI) techniques that calculate binding affinities between biological molecules. Given that enough grid resources are available, multiple jobs, each utilizing a slightly different configuration, can be launched and executed simultaneously to provide the necessary results.

Prior to the AHE, the problems with implementing such techniques have been the tedium of repetitive job submission, coupled with the monitoring of job status across multiple grid resources, as well as the time-consuming act of shepherding input and output files from resource to resource. The AHE circumvents these problems by presenting a uniform interface to multiple resources: multiple job submission can be achieved by scripting the AHE command line clients, and each job can be monitored through the same interface. Furthermore, all files required for a job can be automatically staged to a set of desired resources, and output files retrieved upon job completion.

Some molecular dynamics simulations also require complex equilibration protocols that evolve a biological molecule from an available starting structure to an equilibrium state at which relevant data can be collated. Such protocols usually involve a series of chained simulations where the output of one simulation is fed into the input of the next. Whilst some conventional methods such as ssh can be employed to afford some automation of chained job submission, scripting the AHE command line clients provides a simpler and quicker mechanism through which chaining can be distributed seamlessly across multiple grid resources. A sketch of an ensemble script follows.
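Reusing the hypothetical client names from the earlier sketch, an ensemble run might be scripted as follows: each replica is started with its own randomized seed and target resource, and the whole set is then polled through the one uniform interface. The resource names and flags are illustrative assumptions.

```python
import subprocess
import time

RESOURCES = ["ngs.example.ac.uk", "teragrid.example.org"]  # illustrative names


def ahe(*args):
    """Invoke a hypothetical 'ahe-<command>' client and return its stdout."""
    return subprocess.run(["ahe-" + args[0], *args[1:]], check=True,
                          capture_output=True, text=True).stdout.strip()


# fan out: one replica per seed, round-robin across grid resources
eprs = []
for seed in range(10):
    epr = ahe("prepare", "namd", "--resource", RESOURCES[seed % len(RESOURCES)])
    ahe("start", epr, "--input", "input.conf", "--arg", f"seed={seed}")
    eprs.append(epr)

# poll all replicas through the same uniform interface until all complete
while eprs:
    eprs = [e for e in eprs if ahe("monitor", e) != "completed"]
    time.sleep(120)
```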

6.2 The LAMMPS Use Case

The microscopic and macroscopic behavior of large-scale anionic and cationic clay nanocomposite systems can be modeled using molecular dynamics (MD) techniques. The use of computer simulations to model these sorts of systems has proved to be an essential adjunct to experimental techniques [34]. The clay systems which we simulate are those of the smectite clay, montmorillonite, and the layered double hydroxide, hydrotalcite. Clays such as these form a sheet-like (layered) structure, which can intercalate molecules within their layers. Whilst useful information about the intercalated species can be obtained by running small-scale simulations, finite size effects can be explored by increasing the model size. Our simulations extend to system sizes of up to 10 million atoms, which is extremely computationally demanding. The parallel molecular dynamics software Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) [32] is used to carry out the simulations, which otherwise would not be possible using a serial application.

Simulations of these sizes produce a large amount of information which needs to be analyzed and interpreted correctly. Visualization gives useful insight into the behavior of a system and can indicate which quantitative tests to carry out on it. Computational steering also helps explore a simulation by providing the scientist with ways to monitor and manipulate it whilst it is still in progress. LAMMPS is an example of a widely used MD code which does not provide steering or visualization functionality. We have integrated the RealityGrid Steering system [35, 36] into LAMMPS in order to introduce these features. The RealityGrid Steering system was designed to allow such legacy codes to be fully grid enabled: the steering system allows applications to be deployed on a computational grid using the RealityGrid launcher GUI, and then steered using a steering client. Further integration of the steering library into a visualizer means that the application can transmit its data to a visualization service. The visualizer itself can be launched on a machine separate from that of the application, with communication carried out over sockets.

The RealityGrid launcher was built to manage steerable applications in the RealityGrid computational steering framework. To enable a scientist to launch, steer and visualize a simulation on separate grid resources, the launcher has to submit jobs for simulation and visualization, start a variety of supporting services, and put all these loosely coupled components in communication with each other. In doing this it relied on the presence of grid client software (actually Globus commands invoked through customized scripts) on the end-user's machine. This approach possesses several of the drawbacks discussed in this paper, all of which increase the barrier to uptake. These include:

• deep software dependencies make the launcher heavyweight;

• locating (customizable) logic to orchestrate the distributed components in the client implicates the end-user in ongoing maintenance of the client's configuration (consider the difficulty of adding a new application or new resource, especially one operating a different middleware, to the set that the user can access);

• the client needs to be "attached" to the grid in order to launch and monitor jobs and retrieve results, which decreases client mobility.

The AHE approach alleviates these difficulties by moving as much of the complexity as possible into the service layer. The AHE decomposes the target audience into expert users and end-users, where the expert user installs, configures and maintains the AHE server, and the end-users need simply download the ready-to-go AHE client. The client itself becomes thinner, and with a reduced set of software dependencies is easier to install. All state persistence occurs at the service layer, which increases client mobility. Architecturally, the AHE is akin to a portal, but one where the client is not constrained to be a Web browser, increasing the flexibility of what the client can do and permitting programmatic access, which allows power users to construct lightweight workflows through scripting languages.

7 Summary

By narrowing the focus of the AHE middleware to a small set of applications that a group of scientists will typically want to use, the task of launching and managing applications on a grid is greatly simplified. This translates into a smoother end user experience, removing many of the barriers that have previously deterred scientists from getting involved in grid computing. In a production environment we have found the AHE to be a useful way of providing a single interface to disparate grid resources, such as machines hosted on the NGS and TeraGrid.

By representing the execution of an application as a stateful web service, the AHE can easily be built upon to form systems of arbitrary complexity, beyond its original design. For example, a BPEL engine could be developed to allow users to orchestrate the workflow of applications using the Business Process Execution Language.

In future we hope to use a GridSAM connector to the Unicore middleware to allow the AHE to submit jobs to the DEISA grid. By providing a uniform interface to these different back end middlewares, the AHE will provide a truly interoperable grid from the user's perspective. We also plan to integrate support for the RealityGrid steering framework into the AHE, so that starting an application which is marked as steerable automatically starts all the necessary steering services, and to extend the AHE to support multi-part applications such as coupled models. The end-user will still deal with a single application, while the complexity of managing multiple constituent jobs is delegated to the service layer.

8 Acknowledgments

The development of the AHE is funded by the projects "RealityGrid" (GR/R67699) and "Rapid Prototyping of Usable Grid Middleware" (GR/T27488/01), and also by OMII under the Managed Programme RAHWL project. The AHE can be obtained from the RealityGrid website: http://www.realitygrid.org/AHE.

References

[1] P. V. Coveney, editor. Scientific Grid Computing. Phil. Trans. R. Soc. A, 2005.

[2] I. Foster, C. Kesselman, and S. Tuecke. The anatomy of the grid: Enabling scalable virtual organizations. Intl J. Supercomputer Applications, 15:3–23, 2001.

[3] http://www.globus.org.

[4] http://www.unicore.org.

[5] J. Chin and P. V. Coveney. Towards tractable toolkits for the grid: a plea for lightweight, usable middleware. Technical report, UK e-Science Technical Report UKeS-2004-01, 2004. http://nesc.ac.uk/technical_papers/UKeS-2004-01.pdf.

[6] J. Kewley, R. Allen, R. Crouchley, D. Grose, T. van Ark, M. Hayes, and L. Morris. GROWL: A lightweight grid services toolkit and applications. 4th UK e-Science All Hands Meeting, 2005.

[7] S. Graham, A. Karmarkar, J. Mischkinsky, I. Robinson, and I. Sedukhin. Web Services Resource Framework. Technical report, OASIS Technical Report, 2006. http://docs.oasis-open.org/wsrf/wsrf-ws_resource-1.2-spec-os.pdf.

[8] P. V. Coveney, J. Vicary, J. Chin, and M. Harvey. Introducing WEDS: a WSRF-based environment for distributed simulation. In P. V. Coveney, editor, Scientific Grid Computing, volume 363, pages 1807–1816. Phil. Trans. R. Soc. A, 2005.

[9] http://gridsam.sourceforge.net.

[10] http://www.sve.man.ac.uk/research/AtoZ/ILCT.

[11] http://www.soaplite.com.

[12] M. Gudgin and M. Hadley. Web Services Addressing, 2005. http://www.w3c.org/TR/2005/WD-ws-addr-core-20050331.

[13] J. Treadwell and S. Graham. Web Services Resource Properties, 2006. http://docs.oasis-open.org/wsrf/wsrf-ws_resource_properties-1.2-spec-os.pdf.

[14] L. Srinivasan and T. Banks. Web Services Resource Lifetime, 2006. http://docs.oasis-open.org/wsrf/wsrf-ws_resource_lifetime-1.2-spec-os.pdf.

[15] T. Maguire, D. Snelling, and T. Banks. Web Services Service Group, 2006. http://docs.oasis-open.org/wsrf/wsrf-ws_service_group-1.2-spec-os.pdf.

[16] L. Liu and S. Meder. Web Services Base Faults, 2006. http://docs.oasis-open.org/wsrf/wsrf-ws_base_faults-1.2-spec-os.pdf.

[17] M. Gudgin, M. Hadley, N. Mendelsohn, J. Moreau, and H. Frystyk. SOAP version 1.2 part 1: Messaging framework. Technical report, W3C, June 2003. http://www.w3.org/TR/soap12-part1.

[18] A. Nadalin, C. Kaler, P. Hallam-Baker, and R. Monzillo. Web Services Security: SOAP Message Security 1.0, 2006. http://www.oasis-open.org/committees/download.php/16790/wss-v1.1-spec-os-SOAPMessageSecurity.pdf.

[19] J. Brooke, M. Mc Keown, S. Pickles, and S. Zasada. Implementing WS-Security in Perl. 4th UK e-Science All Hands Meeting, 2005.

[20] http://www.cs.wisc.edu/condor.

[21] http://gridengine.sunsource.net.

[22] http://www.omii.ac.uk.

[23] Job Submission Description Language Specification. GGF. http://forge.gridforum.org/projects/jsdl-wg/document/draft-ggf-jsdl-spec/en/21.

[24] http://www.apache.org.

[25] http://tomcat.apache.org.

[26] IETF. The IP Network Address Translator (NAT). http://www.faqs.org/rfcs/rfc1631.html.

[27] IETF. The TLS Protocol Version 1.0. http://www.faqs.org/rfcs/rfc2246.html.

[28] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995.

[29] IETF. HTTP Extensions for Distributed Authoring – WEBDAV. http://www.faqs.org/rfcs/rfc2518.html.

[30] http://www.postgresql.org/.

[31] http://ws.apache.org/axis.

[32] S. J. Plimpton. Fast parallel algorithms for short-range molecular dynamics. J. Comp. Phys., 117:1–19, 1995.

[33] L. Kale, R. Skeel, M. Bhandarkar, R. Brunner, A. Gursoy, N. Krawetz, J. Phillips, A. Shinozaki, K. Varadarajan, and K. Schulten. NAMD2: Greater scalability for parallel molecular dynamics. J. Comp. Phys., pages 283–312, 1999.

[34] H. C. Greenwell, W. Jones, P. V. Coveney, and S. Stackhouse. On the application of computer simulation techniques to anionic and cationic clays: A materials chemistry perspective. Journal of Materials Chemistry, 16(8):706–723, 2006.

[35] S. M. Pickles, R. Haines, R. L. Pinning, and A. R. Porter. Practical Tools for Computational Steering. 4th UK e-Science All Hands Meeting, 2004.

[36] S. M. Pickles, R. Haines, R. L. Pinning, and A. R. Porter. A practical toolkit for computational steering. In P. V. Coveney, editor, Scientific Grid Computing, volume 363, pages 1843–1853. Phil. Trans. R. Soc. A, 2005.
