J Grid Computing (2010) 8:571–586 DOI 10.1007/s10723-010-9164-x
COMPCHEM: Progress Towards GEMS a Grid Empowered Molecular Simulator and Beyond Antonio Laganà · Alessandro Costantini · Osvaldo Gervasi · Noelia Faginas Lago · Carlo Manuali · Sergio Rampino
Received: 26 April 2010 / Accepted: 22 July 2010 / Published online: 21 August 2010 © Springer Science+Business Media B.V. 2010
Abstract Foundations and structure of the building blocks of GEMS, the ab initio molecular simulator designed for implementation on the EGEE computing Grid, are analyzed. The impact of the computational characteristics of the codes composing its blocks (the calculation of the ab initio potential energy values, the integration of the dynamics equations of the nuclear motion, and the statistical averaging of microscopic information to evaluate the relevant observable properties) on their Grid implementation when using rigorous ab initio quantum methods is discussed. The requirements prompted by this approach for new computational developments are also examined by considering the present implementation of the simulator, which is specialized in atom diatom reactive exchange processes.

Keywords Grid · EGEE · Molecular dynamics
A. Laganà (B) · N. F. Lago · S. Rampino Department of Chemistry, University of Perugia, Perugia, Italy e-mail:
[email protected] A. Costantini · O. Gervasi · C. Manuali Department of Mathematics and Informatics, University of Perugia, Perugia, Italy
1 Introduction

Computer simulations of chemical reactions are the ground on which several modern technological and environmental research advances rely. For this reason the chemistry community has developed several computational methods and related computer codes specifically designed to deal with molecular simulations of chemical transformations in an ab initio fashion [1–5]. In the absence of external fields an ab initio rigorous molecular simulation of the elementary reactive processes can be obtained by solving the following time dependent Schrödinger equation

$$ i\,\frac{\partial}{\partial t}\, Z(\{x\},\{X\},t) = \hat{H}(\{x\},\{X\})\, Z(\{x\},\{X\},t) \qquad (1) $$
in which $\hat{H}$ is the time independent total many body (electrons and nuclei of the molecular system) Hamiltonian operator, Z is the total (electronic and nuclear) wavefunction, with {x} and {X} being the electronic and nuclear sets of position vectors and t the time variable. Usually, the wavefunction Z is expanded in terms of the electronic eigenfunctions $\Phi_e(\{x\};\{X\})$ calculated at fixed arrangements (constant {X}) of the nuclei. As a result, the computational problem can be partitioned into a sequence of three computational blocks: INTERACTION, DYNAMICS, OBSERVABLES.
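The expansion referred to above can be sketched as follows (standard Born-Oppenheimer notation; the symbol $\Phi_n$ for the electronic eigenfunctions and the explicit labeling of the electronic states by n are our own shorthand, not necessarily that used in the codes):

```latex
% Expansion of the total wavefunction on the electronic eigenfunctions
% computed at fixed nuclear geometry {X} (see eq. (2) below); the expansion
% coefficients psi_n are the nuclear wavefunctions propagated through eq. (5).
\begin{equation*}
  Z(\{x\},\{X\},t) \;=\; \sum_{n} \psi_n(\{X\},t)\,\Phi_n(\{x\};\{X\}),
  \qquad
  \hat{H}_e(\{x\};\{X\})\,\Phi_n(\{x\};\{X\}) \;=\; E_n(\{X\})\,\Phi_n(\{x\};\{X\}).
\end{equation*}
```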
Fig. 1 Detailed workflow of the molecular simulator
The first of them builds the eigensolutions of the electronic problem (at various fixed positions of the nuclei), the second integrates the motion equations of the nuclei and the third evaluates, out of the calculated microscopic information, the experimental observables. This articulation of the problem is illustrated in more detail by the workflow sketched in Fig. 1. As shown in Fig. 1, INTERACTION is the block in which the ab initio calculations determining at various levels of accuracy the electronic structure [1, 2] of the molecular system are carried out within a Born-Oppenheimer scheme, if an existing potential energy routine is not available (or one does not want to use it). After performing ab
initio calculations (or collecting results available from the literature) the calculated energies (usually called potential energy values) are sometimes used directly as they are produced. Most often, however, ab initio calculations are first performed at the best affordable level of accuracy depending on the availability of an adequate amount of computer time and storage1 for the set of geometries of the molecular system necessary to describe the
1 On the local node a typical amount of computer time and storage necessary to calculate the electronic structure of atom diatom systems like N + N2 at the Coupled Cluster level with single and double excitations and triples estimated by perturbation theory is about 3 h and 1 GB, respectively, per geometry.
considered process. Then the calculated potential energy values are properly refined by calculating a subset of points at a higher level of accuracy and adjusted accordingly (also to exclude non converged results and to reproduce some known molecular properties). At this point the resulting values are fitted to an appropriate functional form to generate an analytical Potential Energy Surface (PES). If ab initio calculations are unfeasible one can build an empirical force field. DYNAMICS is the block that carries out the integration of the equations determining the dynamics of the nuclei [3, 5, 6] of the system. In a rigorous approach the problem is dealt with by using full dimensional quantum mechanical means, which are the method of choice for an exact calculation of chemical reactivity. However, although for atom diatom systems the integration of the related differential equations is nowadays routine when the total angular momentum is zero, this is not true when convergence with respect to it is sought. This is due to the impressive amount of computer time and memory storage necessary for that purpose2 that a computational chemist can hardly obtain. In this case either reduced dimensionality quantum or classical or semi- (or quasi-) classical mechanics methods are used (or even a combination of them and model treatments). OBSERVABLES is the block that carries out the necessary statistical (and model) treatments of the outcomes of the theoretical calculations to provide a priori estimates of the measured properties of the system. This implies a proper sampling of both initial and final conditions as well as an additional integration over some unobserved variables. Most often the experimental measurements are not taken as such. In fact, usually, the experimentalists elaborate the value of the measured signal so as to work out of it some quantities easier to compare with theory. The goal of this block is, however, to avoid this step, which sometimes implies the use of models and assumptions
2 On the local node the amount of computer time and storage necessary to calculate the quantum S matrix elements of an atom diatom reaction like N + N2 → N + N2 typically varies from 1 day to several weeks and from 1 to 10GB per selected initial conditions (either per reactant state or per scattering energy depending on the technique used).
of questionable nature. Whenever possible, in fact, OBSERVABLES incorporates numerical procedures building the value of the experimental signal by properly taking into account all the characteristics of the experimental apparatus used. The present paper describes (focusing on atom diatom exchange processes) the progress made by our laboratories in building the ab initio Grid Empowered Molecular Simulator (GEMS) [7, 8] while coordinating the COMPCHEM [9] Virtual Organization (VO) of the EGEE Grid project [10]. The paper also describes the articulation for Grid execution of the programs of the three blocks (INTERACTION (Section 3), DYNAMICS (Section 4) and OBSERVABLES (Section 5)) for the first demo prototype SIMBEX (SIMulator of Beam EXperiments) [11] sketched in Section 2. In Section 6 the paper examines in more detail the DYNAMICS block (because of the specific expertise of our laboratories and of the in-house nature of the related codes) to single out the requirements that the building of GEMS sets for the evolution of the Grid to match the needs of the COMPCHEM community.
2 The Crossed Molecular Beam Demo Simulator

We began developing the ab initio Molecular Simulator for distributed execution by implementing separately the main modules of the various blocks on the local node of the EGEE Grid. At present, work is progressing to connect them according to the above described flowchart and to cope with the difficulties associated with their implementation on the Grid. For this reason, when assembling the already mentioned Crossed Molecular Beam (CMB) SIMBEX demo simulator, we first turned our attention to those codes that could be clearly (though not always straightforwardly) articulated into uncoupled tasks. In this respect, the programs associated with the first two blocks of the simulator are not well suited for Grid implementation because they are usually implemented (and made available to the users for production runs on realistic systems) on High Performance Computing (HPC) platforms. After all, the programs of the two blocks have a quite different computational
nature. Those of the INTERACTION block, in fact, usually rely on packages made of well established and well articulated suites of codes (with most of them being either commercial products or in any case licensed software). On the contrary, the codes used in the DYNAMICS block are most often, as already mentioned, in-house developed software (especially those based on quantum mechanics) or, in any case, less well established packages. Moreover, some of them (especially those based on classical mechanics) are moderately memory demanding though still quite time consuming. A common feature of most of the codes of both the INTERACTION and the DYNAMICS blocks, however, is that they all rely on large computational grains, difficult to decompose into independent computational subtasks, that have to be iterated a large number of times (a feature that makes them scarcely suited for Grid distributed execution). The OBSERVABLES block, instead, is in general significantly less computationally demanding even in the case in which the number of tasks to be considered is large. The related statistical treatments are, in fact, fairly straightforward to carry out, even if this block often plays the crucial role of checking convergence with respect to some of the calculation parameters. Moreover, this block is most often designed and handled by the experimentalists because of their exclusive knowledge and control of the physical parameters of the experiment. From what has been said above, the first two blocks of the simulator need significant restructuring to become able to exploit the innovative features of distributed High Throughput Computing (HTC) concurrent platforms (such as those of the Grid). A proper way of overcoming this problem would be an extensive inclusion of HPC machines in the Grid infrastructure and also a highly efficient implementation of MPI on it. Unfortunately, although both problems have been addressed by the EGI proposal submitted on November 23, 2009 to the INFRA 2010 call of the VII Framework Programme [12], the present situation is still quite unsatisfactory. HPC and HTC main platforms work on projects based on different middleware and some experiences with the use of MPI on the Grid have been rather disappointing [13, 14]. For all these reasons SIMBEX was designed and implemented so as to by-pass the difficulty
of invoking the INTERACTION block. This was obtained by incorporating into DYNAMICS a PES Fortran routine evaluating the interaction of the system considered. Accordingly SIMBEX, once the PES has been incorporated inside the DYNAMICS block, starts carrying out the related calculations using classical mechanics equations (namely Hamilton's ones). As a result, the computation integrating the differential equations relating the time (t) derivatives of the positions and momenta of the involved particles to the partial derivatives of the classical Hamiltonian of the molecular system (whose outcomes vary with the initial conditions) was broken into a large set of CPU-intensive (though little memory demanding) independent tasks. In this case the tasks are concerned with the execution of the kernel of a modified version of the ABCtraj program [15]. ABCtraj, like other scientific computational applications, is made (see Fig. 2) of an uncoupled outer loop, running over the index jevents of the Nevents classical trajectories, nesting the kernel of the strongly coupled (recursive) inner loop of integration over time t. In order to evaluate detailed reactive properties, some of the initial conditions are associated with the reactants' initial vibrational state v, rotational state j, translational energy Etr, total angular momentum quantum number J, and so on. After reaching the product asymptotic region at the total time ttot, the product v', j' and E'tr are determined and the detailed state vj to state v'j' partial
Do jevents varying from 1 to Nevents
    Generate initial conditions: v, j, Etr, J ...
    Perform preliminary calculations
    Do t varying from t0 to tfin by tstep
        Perform the time step propagation
        Check for constants conservation
        Check for reached asymptotic region
    EndDo t
    Perform final calculations
    Write output data: v', j', ttot, E'tr, P^J_{vj,v'j'}(Etr), ...
EndDo jevents
Fig. 2 Pseudocode of the ABCtraj program
(fixed J) probabilities $P^J_{vj,v'j'}(E_{tr})$ (product quantities are primed) are updated. Then, out of the partial probabilities, the OBSERVABLES block builds some final, less detailed quantities that can be associated with some experimental properties, like the shape of the center of mass (cm) product angular distributions (PAD) and translational distributions (PTD), that experimentalists work out of the measured intensity of the product beam [16]. These are the quantities which are rendered on the virtual monitors for comparison with those produced by the experiment on the real monitors. SIMBEX has proven to be a computational application ideally suited for Grid calculations since it scales linearly with the number of trajectories [11].
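The independence of the trajectories is what makes this linear scaling possible: each Grid job integrates its own batch of events and only the (small) partial counts have to be merged at the end. The following minimal Python sketch mimics this structure; all names (run_trajectory, run_batch, the toy outcome model) are illustrative placeholders and not part of the SIMBEX/ABCtraj code.

```python
# Minimal sketch of an embarrassingly parallel trajectory batch run.
import random
from collections import Counter
from concurrent.futures import ProcessPoolExecutor

def run_trajectory(seed, v, j, e_tr, total_j):
    """Stand-in for one ABCtraj event: returns the product (v', j') channel.
    A real run would integrate Hamilton's equations up to the asymptotes."""
    rng = random.Random(seed)
    reactive = rng.random() < 0.3
    return (rng.randint(0, 3), rng.randint(0, 10)) if reactive else None

def run_batch(seeds, v, j, e_tr, total_j):
    """One Grid job: a batch of independent trajectories sharing (v, j, Etr, J)."""
    counts = Counter()
    for s in seeds:
        outcome = run_trajectory(s, v, j, e_tr, total_j)
        if outcome is not None:
            counts[outcome] += 1
    return counts, len(seeds)

if __name__ == "__main__":
    n_events, n_jobs = 10000, 8
    batches = [range(k, n_events, n_jobs) for k in range(n_jobs)]
    with ProcessPoolExecutor() as pool:
        results = pool.map(run_batch, batches, [0] * n_jobs, [5] * n_jobs,
                           [0.5] * n_jobs, [10] * n_jobs)
    totals, n_run = Counter(), 0
    for counts, n in results:
        totals.update(counts)          # merge partial counts (OBSERVABLES step)
        n_run += n
    # Fixed-J probabilities P^J_{vj,v'j'}(Etr) estimated as N_{v'j'} / N_total
    probs = {channel: c / n_run for channel, c in totals.items()}
    print(sorted(probs.items())[:5])
```

Replacing the local process pool with a Grid submission engine does not change the pattern: the batches remain fully independent and only the final merge is sequential.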
3 The INTERACTION Block

The first effort tackled when developing GEMS was the Grid implementation of the INTERACTION block. The key computational task of this block is the execution of the programs (or suites of programs) carrying out the electronic structure calculations. This means that the solution of the electronic eigenvalue problem

$$ \hat{H}_e(\{x\};\{X\})\,\Phi_e(\{x\};\{X\}) = E(\{X\})\,\Phi_e(\{x\};\{X\}) \qquad (2) $$

(in which $\hat{H}_e(\{x\};\{X\})$ is the fixed nuclei electronic Hamiltonian, $\Phi_e(\{x\};\{X\})$ is the fixed nuclei electronic wavefunction and $E(\{X\})$ is the electronic eigenenergy depending only on the nuclear position vectors) has to be worked out using a heterogeneous platform made mainly of processors equipped with small memories. Our efforts have led to the development of a module called SUPSIM [17] that has been tested for atom diatom systems when using some well known packages (such as GAMESS US [18, 19], DALTON [20] and MOLPRO [21]) implemented at a moderately high level. Higher level implementations of ab initio codes inevitably require, as shown recently for another popular package [22], the availability on the computing platform of HPC resources (which are presently scarcely available on the EGEE Grid platforms). In fact, while some preliminary minor parts of the calculations could be distributed on the Grid,
the main components of the iterative procedures are not suited for distributed computing. A minor module of INTERACTION is FITTING, which is invoked when the ab initio values need to be fitted to a suitable functional form of the global type [23]. FITTING is definitely less time and memory demanding.

3.1 The SUPSIM Module

SUPSIM is a portal bearing a user friendly graphical interface and has at present been specialized only to deal with triatomic systems [17]. Its goal is to compute large matrices of potential energy values obtained by varying two internuclear distances (for the atom diatom A + BC system one can consider a regular Grid in, for example, $r_{AB}$ and $r_{BC}$ at a fixed value of the included angle $\Phi_B$ and then replicate the effort for the other pairs of internuclear distances, $r_{AB}-r_{AC}$ and $r_{BC}-r_{AC}$, at fixed values of the related included angles $\Phi_A$ and $\Phi_C$). The calculation of the potential energy values is articulated in the following steps: insertion of the input parameters, distribution of the computational tasks and retrieval of the calculated potential energy values. The parameters to be specified in a simple HTML form are: the type of basis set used, the level of theory employed, the electronic state considered, the Grid of nuclear distances defining the considered geometries, etc. After choosing the Grid sites on which the jobs are going to be run and exhibiting the necessary credentials, the user can launch the batch of Ngeom fixed geometry ab initio calculations (see Fig. 3, where the pseudocode of the related package is sketched). This package too is made of an uncoupled outer loop. However, differently from the scheme of Fig. 2, the outer loop runs over jgeom (the index of the considered Ngeom molecular geometries) and nests a strongly coupled (recursive) inner loop of convergence over a variational or a perturbative series. At the end of the computation the set of calculated potential energy values V is collected for inspection (possibly after being gathered together in a single file using an appropriate format) to examine the key features of the computed surface and verify the suitability of the Grid of points used, so as to supplement, eliminate, correct or
Do jgeom varying from 1 to Ngeom
    Generate the geometry coordinates
    Perform preliminary calculations
    Do jconverg varying from 1 to jlimit
        Perform convergence iteration
        Check for reached convergence
    EndDo jconverg
    Perform final calculations
    Write output data: V, geometry coordinates
EndDo jgeom
Fig. 3 Pseudocode of the ab initio package
recompute unconverged values. SUPSIM is built using the Java servlet technology and is divided into the following servlets. Sub calc provides the user interface for job preparation and submission. In other words, it builds the script enabling the user to specify the parameters of the PES by selecting the basis set and the level of theory to be used among those supported by the package chosen. Several other choices are enabled, like the type of internal coordinates and the space/spin symmetry, in which the choice is guided by the interface. It also generates the threads for submitting the distributed jobs. View Res provides information about the evolution of the jobs. In other words, this servlet retrieves the values of the parameters of the job status and provides the functionalities needed to collect and display the outcomes of the computation. Two different display methods are available: ASCII records and graphical representations. In the ASCII solution the servlet shows the content of a file giving basic information on the calculation together with the value(s) of energy associated with the given geometry (one per file of a common system directory). This makes the implementation of other formats (like the recently proposed Q5cost one [24]) easy when adherence to a standard for representing quantum chemistry data is required. In the graphical representation the servlet builds and displays a sequence of images showing different cross sections and views of the PES along with the list of the parameters used for the calculations. All images are built the very
first time that the user asks for visualization of the results. The graphs are stored away for retrieval on subsequent requests. CommErr provides information on communication errors. In other words, this servlet handles communication with GEMS. When a PES calculation is completed, the information is transmitted to other parts of GEMS. Actually, SUPSIM and GEMS communicate a first time when SUPSIM is asked to compute a given PES (this is obtained by redirecting the user to the Sub calc URL). They also communicate a second time when, after completing the calculations, the results are transferred (upon user request) to the FITTING module (the already mentioned module taking care of optimizing the parameters of the chosen functional form to reproduce the ab initio points [23]) and the user is redirected to GEMS. The availability of a PES is already checked when logging in to GEMS and indicating the reaction of interest. At that moment SUPSIM comes into play if there are no suitable PESs available.

3.2 The FITTING Module

FITTING is the other module of INTERACTION. It is also accessible on the web and has a simple structure due to the low complexity of the associated computations. FITTING is managed via a dynamic web server making use of a PHP4 component and an RDBMS to handle user data. More specifically, some pages are generated dynamically to manage the execution of the computational procedure and to define the input parameters through a graphical interface. FITTING is invoked when ab initio data are to be fitted to a global functional form [23, 25]. Among the global methods, the least square fitting ones based on the Many Body Expansion (MBE) scheme [26] are usually preferred (sometimes even via their splined versions). Accordingly, suitable two body components are fitted first to the asymptotic diatomic limits. Then the value of the diatomic terms is subtracted from the ab initio data and the difference is fitted to suitable three body components in the appropriate regions. For triatomic systems a few additional corrective contributions (usually of the Gaussian type) are then added to incorporate the three-body
components of the interaction. For larger systems the procedure is repeated considering each time one more body up to the desired order. A key ingredient of this approach is the selection of the coordinate system (which is intimately connected to the formulation of the analytical function adopted for the fitting). Frequently, for atom diatom reactions use is made of either the already mentioned internuclear distances or the Jacobi coordinates. FITTING also makes use of the bond order (BO) coordinates [27], which are defined as

$$ n_i = e^{-\beta_i (r_i - r_{ei})} . \qquad (3) $$
In (3) the label i indicates the pair of atoms AB, BC or AC (of the atom-diatom A + BC system), while $\beta_i$ and $r_{ei}$ are empirical parameters characterizing the spectroscopic properties of the related diatoms. BO coordinates offer the advantage of writing the Morse-like diatomic potential in the following simple way

$$ V_i(n_i) = -D_i \left( 2 n_i - n_i^2 \right) \qquad (4) $$
that can be easily extended to higher order polynomials. The functional form used for the fitting was obtained either by considering the generalization of the polynomial representation given in (4) to polyatomic molecules [28] or by adopting a polynomial expression of products of the BO coordinates and the related internuclear distances (as is the case of the GFIT3C routine [29]). As an alternative, use can be made of the rotating BO (ROBO) approach [30]. In the latter case the fixed $\Phi_B$ potential $V_{A,BC-C,AB}$ of the atom-diatom reactive process A + BC → AB + C is formulated as in (4) after replacing $n_i$ by $\rho/\rho_o$ (with $\rho = [n_{AB}^2 + n_{BC}^2]^{1/2}$ and $D_i$ and $\rho_o$ being parameters depending on $\alpha$ and $\Phi_B$, estimated so as to minimize the difference between ab initio and fitted values). The overall potential is formulated as a linear combination of $V_{A,BC-C,AB}$, $V_{B,CA-A,BC}$ and $V_{C,AB-B,CA}$ using weights depending on the closeness of the molecular geometry to collinearity (hence the name Largest Angle Generalization of the Rotating Bond Order (LAGROBO) potential [31]). Extensions to four atoms are also considered [32].
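As an illustration of eqs. (3) and (4), the sketch below evaluates a BO coordinate, the corresponding Morse-like diatomic term and the ROBO-style collective variable ρ. It is only a hedged sketch: the β, r_e and D values are placeholders rather than fitted spectroscopic parameters, and the function names are our own.

```python
import math

def bond_order(r, beta, r_e):
    """Bond order coordinate of eq. (3): n_i = exp(-beta_i (r_i - r_ei))."""
    return math.exp(-beta * (r - r_e))

def morse_bo(n, d_e):
    """Morse-like diatomic term of eq. (4): V_i = -D_i (2 n_i - n_i^2).
    With D_i > 0 the value is -D_i at n_i = 1 (equilibrium) and 0 at n_i = 0."""
    return -d_e * (2.0 * n - n * n)

def rho(n_ab, n_bc):
    """ROBO collective variable rho = (n_AB^2 + n_BC^2)^(1/2) (Section 3.2)."""
    return math.hypot(n_ab, n_bc)

if __name__ == "__main__":
    beta, r_e, d_e = 2.0, 1.2, 5.0   # placeholder parameters, not fitted values
    for r in (1.0, 1.2, 2.0, 5.0):
        n = bond_order(r, beta, r_e)
        print(f"r={r:4.1f}  n={n:6.3f}  V={morse_bo(n, d_e):8.3f}")
```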
4 The DYNAMICS Block

The computational task of the DYNAMICS block (see the central band of Fig. 1) consists in the distributed execution of the programs (or suites of programs) carrying out the integration of the equation

$$ i\,\frac{\partial}{\partial t}\, \psi(\{X\},t) = \hat{H}_N(\{X\})\, \psi(\{X\},t) \qquad (5) $$
in which $\hat{H}_N$ is the nuclear component of the Hamiltonian (including the potential energy function $V(\{X\})$ incorporating the eigenenergies of (2)) and ψ is the nuclear wavefunction (the coefficients of the expansion of the total wavefunction Z on the electronic eigenfunctions $\Phi_e$). The DYNAMICS block is at present articulated in two main modules: the time dependent (TD) and the time independent (TI) ones. Both modules calculate on the Grid the elements of the reactive scattering S matrix, whose square moduli represent the quantum probabilities (the rigorous quantum analogues of the corresponding (approximate) quasiclassical probabilities calculated by ABCtraj).

4.1 The Time Dependent Module

The simplest way of solving (5) is to use the TD method, in which t acts as a continuity variable in analogy with what happens in classical mechanics approaches. In GEMS the quantum TD method, usually called the wavepacket method, is embodied for atom diatom systems in the program RWAVEPR [33], which computationally shows close analogies with ABCtraj. As a matter of fact, as shown in Fig. 4, RWAVEPR is also made of an outer loop over the initial conditions. RWAVEPR employs the already mentioned Jacobi coordinates (called for simplicity R, r and $\Theta$ for the reactants and R', r' and $\Theta'$ for the products) and expands the wavefunction in its partial components $\psi^J(R, r, \Theta, t)$, which are multiplied by appropriate functions of the three Euler angles α, β and γ. The use of the partial wave decomposition of the nuclear wavefunction allows a partitioning of (5) into a set of lower dimensionality fixed J calculations of the partial $S^J$ matrix elements.
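Schematically, the partial wave decomposition mentioned above can be written as follows (the normalized Wigner rotation functions $\tilde{D}^{J}_{MK}$ are assumed here as the "appropriate functions of the three Euler angles"; the precise normalization and phase conventions of RWAVEPR are not reproduced):

```latex
% Partial wave expansion of the nuclear wavefunction: each fixed-J
% (and, in the CS case, fixed-K) component can be propagated independently.
\begin{equation*}
  \psi(\{X\},t) \;=\; \sum_{J}\sum_{M,K}
  \psi^{J}_{K}(R, r, \Theta, t)\;
  \tilde{D}^{J}_{MK}(\alpha,\beta,\gamma).
\end{equation*}
```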
Do jevents varying from 1 to Nevents
    Generate initial conditions: v, j, k0, J ...
    Perform preliminary calculations
    Do t varying from t0 to tfin by tstep
        Perform the time step propagation
        Perform the asymptotic analysis to update S
        Check for convergence of the results
    EndDo t
    Perform final calculations
    Write output data: v, j, ttotal, {Etr, S^J} ...
EndDo jevents
Fig. 4 Pseudocode of the RWAVEPR program
However, as J increases, the size of the matrices to be handled and, consequently, the demand for integration time grow dramatically. As a consequence, one can rely on an efficient distribution on the EGEE Grid only for low values of the total angular momentum quantum number (as is the case of light systems and low energies). For this reason, in GEMS the option of executing the RWAVEPR code by introducing some approximations has also been considered. In particular, one can either limit the number of K (the projection of the total angular momentum J) values to be included in the calculations or adopt the popular centrifugal sudden (CS) or K conserving approximation [34, 35]. As a result, the CS calculations, dealing with matrices of dimension D(2J + 1) (with D being their dimension in the J = 0 case), are split into 2J + 1 propagations of dimension not larger than D. This is of clear benefit for a Grid implementation, which can easily execute 2J + 1 jobs of dimension D, each of which fits into a smaller memory. The wavepacket is initially set at a given initial pair of vibrational v and rotational j quantum numbers and is given a relative wave number ko along R. Then it is mapped into the product Jacobi coordinates and integrated in time by back-and-forth Fourier transforms between the distance and momentum representations [36]. At each time step the overlap of the wavepacket with the open asymptotic product channels at a limiting value of
the atom diatom distance R is also computed and converted into a contribution to the S matrix. The integration is carried out for a time long enough to consider the wavepacket smeared over all the available phase space. Since for each set of initial conditions the integration of the TD equations is an independent computational task, a distribution scheme similar to the one adopted for trajectory calculations (see Fig. 2) applies to RWAVEPR and, once it is modified as shown in Fig. 4, is perfectly suited for a parameter study [38].
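The CS (K conserving) splitting described above maps naturally onto independent Grid jobs: one propagation per K value, with the results collected afterwards. The sketch below only illustrates this bookkeeping; the function names and the job payload are illustrative and do not correspond to the RWAVEPR interface.

```python
from concurrent.futures import ProcessPoolExecutor

def propagate_fixed_K(v, j, k0, J, K):
    """Stand-in for one CS (fixed helicity K) wavepacket propagation.
    A real job would run the dynamics and return fixed-J, fixed-K S-matrix data."""
    return {"J": J, "K": K, "v": v, "j": j, "k0": k0, "S_JK": None}

def cs_jobs(v, j, k0, J):
    """One independent job per K value: 2J+1 propagations of dimension <= D."""
    return [(v, j, k0, J, K) for K in range(-J, J + 1)]

if __name__ == "__main__":
    jobs = cs_jobs(v=0, j=5, k0=8.0, J=20)
    with ProcessPoolExecutor() as pool:        # stands in for the Grid resources
        partial = list(pool.map(propagate_fixed_K, *zip(*jobs)))
    print(f"collected {len(partial)} independent fixed-K propagations")
```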
4.2 The Time Independent Module

Traditionally, however, the most popular approach to the calculation of detailed reactive properties has been the TI one, which is embodied in the ABC [39] quantum reactive scattering program. The TI method is based on the integration of the stationary version of the Schrödinger equation for the nuclei at a fixed value of the total energy E, obtained by factoring the time dependence out of the Schrödinger equation. To integrate the stationary Schrödinger equation it is necessary to define a new continuity coordinate (since t is no longer involved) smoothly connecting reactants to products (also called the reaction coordinate). Then one calculates the eigenvalues and the eigenfunctions of the Hamiltonian of the bound state problem in the remaining coordinates on a fairly dense Grid of points along the reaction coordinate, expands the wavefunction at each point of the Grid on the related fixed reaction coordinate eigenfunctions, averages over the related coordinates and propagates the solution from reactants to products by integrating the resulting coupled differential scattering equations in the reaction coordinate. From the value of the partial atom diatom wavefunction $\psi^J(R, r, \Theta)$ at the asymptotes, the fixed E detailed information (for all the admissible values of v, j, v' and j') on the efficiency of the reactive process is recovered in the form of the scattering $S^J$ matrix. In the ABC program used by GEMS the set of internal coordinates ensuring continuity and smoothness to the connection between reactants and products is the hyperspherical Delves one [40]. The mentioned coordinates are the
hyperradius $\rho = [R_\tau^2 + r_\tau^2]^{1/2} = [R_{\tau'}^2 + r_{\tau'}^2]^{1/2}$ (which is also the reaction coordinate), the Jacobi angle $\Theta_\tau$ and the internal Delves hyperspherical angle $\theta_{D\tau} = \arctan(R_\tau/r_\tau)$ (both labeled after the asymptotic arrangement τ). Accordingly, the wavefunction is expanded in terms of the τ basis functions $B^{JM}_{\tau\,\upsilon_\tau j_\tau K_\tau}$, which are labeled after J, after M and $K_\tau$ (the space- and body-fixed projections of J) and after $\upsilon_\tau$ and $j_\tau$ (the τ asymptotic vibrational and rotational quantum numbers). The $B^{JM}_{\tau\,\upsilon_\tau j_\tau K_\tau}$ basis functions depend on both the three Euler angles and the pair of the Jacobi $\Theta_\tau$ and internal Delves hyperspherical $\theta_{D\tau}$ angles. The solution is propagated from small values of ρ (closely packed system) to large ones (the asymptotes) by recursively linking the matrices of the coefficients (g) of the expansion of ψ in the various regions of the reactive process according to the following set of equations
Do jsectors varying from 1 to Nsectors
    Calculate sector eigenvalues and eigenfunctions
    Calculate coupling and border overlap matrices
EndDo jsectors
Do jevents varying from 1 to Nevents
    Generate initial conditions: Etr, J ...
    Perform preliminary calculations
    Do jsectors varying from 1 to Nsectors
        Perform sector propagation
        Map the solution into the next sector
    EndDo jsectors
    Perform final calculations
    Write output data: E, S^J
EndDo jevents
Fig. 5 Pseudocode of the ABC program
$$ \frac{d^2 g}{d\rho^2} = O^{-1}\, U\, g \qquad (6) $$
where O is the overlap matrix and U is the coupling matrix (for further details see [39]). To prepare the integration bed for the set of coupled differential equations given in (6), the reaction coordinate ρ is divided into various sectors, and the local (sector) bound state functions (surface functions) are calculated by diagonalizing the matrices describing the related $\theta_{D\tau}$-dependent motion using a carefully chosen reference potential. Accordingly (see Fig. 5), the pseudocode of ABC can be articulated into a first uncoupled loop over the different arrangements of the various ρ sectors, building the O and U matrices of the integration bed. This is followed by a second uncoupled outer loop, generating the energy specific quantities, which nests an inner recursive loop that repeatedly propagates the solution within each sector and from one sector to the next until the asymptotes (where the solution is mapped onto the product states) are reached. Sometimes, however, especially when only a limited number of energies is considered (Nevents is small), the second loop is embedded into the first one to save on the space needed to store the O and U matrices.
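The two-loop structure of Fig. 5 is sketched below with a deliberately naive numerical toy: the first loop builds per-sector O and U matrices, the second reuses them to march eq. (6) across the sectors independently for every event. It is not the ABC algorithm (in particular, the constant reference potential log derivative propagator and the energy dependence of the propagation are not reproduced); all names and matrix sizes are illustrative.

```python
import numpy as np

def build_sector_matrices(n_sectors, n_channels, rng):
    """First loop of Fig. 5: one (O, U) pair per rho sector (toy matrices)."""
    sectors = []
    for _ in range(n_sectors):
        a = rng.standard_normal((n_channels, n_channels))
        O = a @ a.T + n_channels * np.eye(n_channels)   # symmetric, well conditioned
        U = rng.standard_normal((n_channels, n_channels))
        sectors.append((O, 0.5 * (U + U.T)))
    return sectors

def propagate_event(sectors, d_rho, g0, dg0):
    """Second loop (one event): march g across the sectors using eq. (6)."""
    g, dg = g0.copy(), dg0.copy()
    for O, U in sectors:
        d2g = np.linalg.solve(O, U @ g)       # g'' = O^{-1} U g within the sector
        g, dg = g + d_rho * dg, dg + d_rho * d2g
    return g

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_channels = 4
    sectors = build_sector_matrices(n_sectors=50, n_channels=n_channels, rng=rng)
    g0, dg0 = np.eye(n_channels), np.zeros((n_channels, n_channels))
    # The O and U matrices are built once and reused by every event; in the
    # real code each event would differ by its total energy.
    results = [propagate_event(sectors, 1e-3, g0, dg0) for _ in range(5)]
    print(results[0].shape)
```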
5 The OBSERVABLES Block

The last block of GEMS, OBSERVABLES (see the lower band of Fig. 1), is devoted to the proper statistical averaging of the values of the S matrix elements obtained as an outcome of the DYNAMICS block, to estimate the physical observables of the experiment. As already mentioned, this block is basically meant to be taken care of by the experimentalists. The options indicated in Fig. 1 for this block range from the most detailed information (like the scalar and/or vector correlation properties of an elementary mass-mass or light-mass interaction process) to gradually more averaged ones (like the state-to-state or state-specific dependence on energy of the cross sections, the temperature dependence of the rate coefficients and some bulk geometric or thermodynamic properties). In the current version, however, OBSERVABLES considers only the calculation of the reactive scattering data measurable in CMB experiments, which are described in detail in the literature (see for example [41]). In CMB experimental apparatuses the molecules of the two beams undergo only single collisions in a finite volume. The scattered products of such encounters are detected at an angle $\Theta$ and with a velocity w' using a mass spectrometer rotating on the plane formed by the colliding beams. The collected quantity is, therefore, the value
of the product beam intensity $I_{Lab}(\Theta, w')$ measured in the laboratory (Lab) frame, to be compared with the “virtual” (a priori) signal worked out in the OBSERVABLES block of GEMS. The calculation of $I_{Lab}(\Theta, w')$ can be performed either in a direct or in an indirect fashion.
5.1 The Indirect Method

In the indirect method the center of mass intensity $I_{cm}(\vartheta', u')$ (where $\vartheta'$ and $u'$ are the center of mass coordinates corresponding to the $\Theta$ and $w'$ laboratory ones of the Newton diagram of a bimolecular collision [42]) is linked to $I_{Lab}(\Theta, w')$ by the relationship

$$ I_{Lab}(\Theta, w') = I_{cm}(\vartheta', u')\, \frac{w'^2}{u'^2} \qquad (7) $$

where $w'^2/u'^2$ is the Jacobian of the transformation between the two sets of coordinates. In this approach [43] the deconvolution of $I_{Lab}(\Theta, w')$ is iteratively performed provided that the laboratory data are noise free (or cleaned) and the pulse counts of the mass spectrometer have been calibrated to account for the physical and geometrical features of the apparatus (like the collision volume, the detector efficiency, the solid angle spanned by the detection system slits, the spread of the beams, etc.). To this end the fixed angle counting of the pulses is also supported by a modulation technique (that wipes out the background) and by a repeated scanning procedure (that improves the statistics). Because of the non monochromaticity of the beams, $I_{Lab}(\Theta, w')$ includes the contributions from all the $i$th Newton diagrams applicable to the beams

$$ I_{Lab}(\Theta, w') = \sum_i f(E_{tr})\, f_i(w_1, w_2, \gamma)\, \frac{w'^2}{u_i'^2}\, I_{cm}(\vartheta_i', u_i') \qquad (8) $$

where $f_i(w_1, w_2, \gamma) = (w_1^2 + w_2^2)^{1/2} f(w_1) f(w_2) f(\gamma)$, with $w_1$ and $w_2$ being the velocities of the two reactant beams, $\gamma$ their intersection angle, $f(E_{tr})$ the dependence of the cross section on the collision energy $E_{tr}$, $f(w_1)$ and $f(w_2)$ the distribution functions of the two beams, $f(\gamma)$ the intersection angle distribution function and $w'^2/u'^2$ the Jacobian of the laboratory to centre of mass coordinate transformation. Finally, a transformation from $I_{cm}(\vartheta', u')$ to the $I_{cm}(\vartheta', E'_{tr})$ representation, more popular among theorists, is made using the relationship

$$ I_{cm}(\vartheta', u') = u'\, I_{cm}(\vartheta', E'_{tr})\, \frac{\int_0^\infty I_{cm}(\vartheta', E'_{tr})\, dE'_{tr}}{\int_0^\infty u'\, I_{cm}(\vartheta', E'_{tr})\, du'} \qquad (9) $$

and $I_{cm}(\vartheta', E'_{tr})$ is factorized in terms of the $T(\vartheta')$ and $P(E'_{tr})$ distributions which are usually compared with the theoretically calculated angular and translational product distributions.
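The centre-of-mass to laboratory bookkeeping of eq. (7) is illustrated by the following hedged sketch (planar Newton-diagram geometry and placeholder numbers; not the GEMS/OBSERVABLES code):

```python
import math

def cm_to_lab(u, theta_cm, v_cm):
    """Return (Theta_lab, w) for a product leaving the centre of mass with
    speed u at angle theta_cm measured from the cm velocity vector v_cm."""
    wx = v_cm + u * math.cos(theta_cm)     # component along v_cm
    wy = u * math.sin(theta_cm)            # in-plane perpendicular component
    return math.atan2(wy, wx), math.hypot(wx, wy)

def lab_intensity(i_cm, u, theta_cm, v_cm):
    """Eq. (7): I_Lab(Theta, w) = I_cm(theta_cm, u) * w^2 / u^2."""
    theta_lab, w = cm_to_lab(u, theta_cm, v_cm)
    return theta_lab, w, i_cm * (w * w) / (u * u)

if __name__ == "__main__":
    v_cm = 1000.0                          # m/s, placeholder cm velocity
    for deg in (0, 30, 60, 90, 120, 150, 180):
        th, w, i_lab = lab_intensity(1.0, 400.0, math.radians(deg), v_cm)
        print(f"theta_cm={deg:3d}  Theta_lab={math.degrees(th):6.1f}  "
              f"w={w:7.1f}  I_lab={i_lab:6.2f}")
```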
5.2 The Direct Method

Despite the fact that, as we have just seen, the direct construction of the (virtual) experimental signal $I_{Lab}(\Theta, w')$ from ab initio data is difficult for a theorist with no help from the experimentalists (because of the need for a detailed knowledge of the experimental apparatus that only the researchers carrying out the experiment have), this is now possible thanks to the collaborative nature of COMPCHEM. For atom diatom crossed beam experiments, in fact, GEMS can link the computational procedure of the CMB group of the Chemistry Department [37]. Accordingly, in the direct approach, from the fully detailed partial $S^J$ matrix elements $S^{Jp}_{vjK,v'j'K'}(E_{tr})$, in which p is the parity, one can calculate the partial state-to-state probabilities $P^J_{vj,v'j'}(E_{tr})$ using the following expression

$$ P^J_{vj,v'j'}(E_{tr}) = \frac{1}{2K_{max}+1} \sum_{K=-K_{max}}^{K_{max}} P^{JK}_{vj,v'j'}(E_{tr}) \qquad (10) $$

with $K_{max} = \min(j, J)$ and $P^{JK}_{vj,v'j'}(E_{tr})$ being formulated as

$$ P^{JK}_{vj,v'j'}(E_{tr}) = \sum_{p=0}^{1} \sum_{K'=-K'_{max}}^{K'_{max}} \left| S^{Jp}_{vjK,v'j'K'}(E_{tr}) \right|^2 . \qquad (11) $$
Out of the partial state to state probabilities several popular fixed energy observable properties of atom diatom reactions can be derived, among which $I_{cm}(\vartheta', E'_{tr})$ (to this end the methods outlined in [44] can be used). Then all the relationships described in the previous subsection can be utilized in a forward fashion to estimate the measured data. The direct method also exploits
the possibility of distributing the calculation of the OBSERVABLES block over the Grid. To this end OBSERVABLES is managed via a dynamic web server making use of a PHP4 component and an RDBMS to handle user data. In addition, some pages are generated dynamically to manage the execution of the computational procedure and to define the input parameters through a graphical interface.
6 Higher Level Gridification

As already mentioned, our efforts to implement GEMS on the EGEE Grid focused mainly on the DYNAMICS block. This is due both to the non licensed nature of the related codes and to our specific expertise on quantum reactive scattering methods. Unfortunately, when implementing the TD RWAVEPR code on the Grid we were unable to go beyond a straightforward parameter sweeping distribution scheme [45] of the whole code. In fact, neither a further coarse grain decomposition nor the tools employed in [46, 47] for a finer grain distribution could be efficiently applied to the Grid implementation of RWAVEPR. A more efficient use of the parameter sweeping scheme (based on a coarse grain decomposition in which the entire code is distributed on the Grid) was instead possible for the TI ABC code. This was obtained by replicating part of the calculations (as we shall comment in more detail later) concerned with the evaluation of both the overlap O and the coupling U matrices of (6). Further work was also performed to use higher level Grid tools and strategies to develop more advanced Grid approaches.

6.1 The Grid Implementation of ABC Using P-GRADE

A first attempt to use a high level tool to handle the workflow of the ABC code on multiple Grid resources with different input files was made using the P-GRADE Grid Portal [48, 49]. P-GRADE provides graphical tools and services that help Grid application developers to port legacy programs onto the Grid with no need for re-engineering or modifying the code. P-GRADE allows,
in fact, to define the job parameters (for a job parameter study) [50] in a graphical environment and to generate out of their description the needed Grid specific scripts and commands which actually carry out the execution on the distributed Grid platform. The P-GRADE Portal integrates batch programs into a directed acyclic graph by connecting them together with file channels. A batch component can be any executable code which is binary compatible with the Grid resources one wishes to use. A file channel defines directed data flows between two batch components (for example, it can specify that the output file of the source program must be used as the input file of the target program). The workflow manager subsystem of P-GRADE resolves such dependencies during the execution of the workflow by transferring and renaming files. After the user has defined the structure of the workflow and has provided executable components for the workflow nodes, he/she can decide whether to execute the workflow with only one input data set or with multiple input data sets in a parameter study fashion. If the latter option is chosen, then the workflow manager system of P-GRADE creates multiple instances of the workflow (one for each input data set) and executes these workflow instances simultaneously. The Portal server acts as a central component to instantiate workflows, manage their execution and perform the file staging, which involves the input and output processes. Moreover, the P-GRADE Portal also provides tools for automatically generating input data sets for parameter study workflows. The user has the possibility of attaching the so called “Generator” components to the workflow in order to create the parameter input files. The workflow manager subsystem executes first the generator(s) and then the rest of the workflow, ensuring that each workflow accesses and processes correct and different versions of the data files. The sequential ABC code was implemented as a single Fortran 90 executable [51]. Accordingly, the workflow contains only one component: the Fortran code, which was compiled on the User Interface machine of the EGEE Grid. During Grid execution the P-GRADE Portal workflow manager
is responsible for preparing the files for the Fortran program and transferring them to the EGEE Computing Element. In this way the executable knows nothing about the Grid and does not need to be modified. After the successful execution of a single ABC run (single input file) workflow, the procedure was scaled up to a benchmark parameter study. This was done by attaching an Automatic Generator component to the workflow. The Automatic Generator is a special job type in P-GRADE that is used to generate input text file variations for the ABC job. As the parameter definition window of P-GRADE is highly flexible, we can reuse a generic input file of the ABC code and put two parameter keys into it. These parameter keys are automatically replaced by values during the execution of the workflow to generate the actual input files for the ABC runs. Then, right after the ABC parameter jobs, another component, the Collector, was inserted at the workflow level. The Collector is again a special job type in a P-GRADE workflow that is responsible for collecting the results of the parameter study, analyzing them and creating a typical user friendly filtered result (to be considered as the final result of the simulation). In our case the Collector does not carry out any post-processing. It simply collects the results from the ABC jobs and compresses them into a single archive file that can be downloaded by the user through the Portal web interface. The purpose of this step is, in fact, to make the access to the results easy for the end users. The ABC workflow was implemented in the Multi-Grid installation of the P-GRADE Portal [52, 53] and all scripts and processes for Grid execution were generated by it. The only executable taken from the developer is the sequential ABC code. The ABC code is a Fortran 90 program, statically compiled using the g95 compiler and the BLAS and LAPACK libraries on the User Interface (UI) machine of COMPCHEM. Static compilation assures that the program is binary compatible with the Computing Elements of the EGEE Grid and will not run into incompatibility errors because of dynamically loaded libraries. The Gridified procedure executes first the input Generator, then the ABC code simultaneously on multiple Grid resources and finally the output Collector.
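The Generator → N × (ABC job) → Collector pattern just described can be sketched as follows. The sketch only mimics the structure of the parameter study (a template input with two parameter keys, one job per generated input, a collector that archives the outputs); it does not use the P-GRADE or gLite interfaces, and all file names and parameter keys are illustrative.

```python
import tarfile
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

TEMPLATE = "emin = {EMIN}\njtot = {JTOT}\n"   # generic input with two parameter keys

def generator(workdir, energies, jtot_values):
    """Write one input file per (energy, J) combination."""
    inputs = []
    for i, (e, j) in enumerate((e, j) for e in energies for j in jtot_values):
        path = Path(workdir, f"abc_{i:03d}.inp")
        path.write_text(TEMPLATE.format(EMIN=e, JTOT=j))
        inputs.append(path)
    return inputs

def abc_job(inp):
    """Stand-in for one ABC run; a real job would invoke the Fortran executable."""
    out = inp.with_suffix(".out")
    out.write_text("done for " + inp.read_text())
    return out

def collector(outputs, archive):
    """Compress all outputs into one archive, as the Collector stage does."""
    with tarfile.open(archive, "w:gz") as tar:
        for out in outputs:
            tar.add(out, arcname=out.name)

if __name__ == "__main__":
    workdir = Path("abc_sweep"); workdir.mkdir(exist_ok=True)
    inputs = generator(workdir, energies=[0.5, 0.6, 0.7], jtot_values=[0, 10])
    with ProcessPoolExecutor() as pool:        # stands in for the Grid resources
        outputs = list(pool.map(abc_job, inputs))
    collector(outputs, workdir / "results.tar.gz")
```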
Because the execution times of the Generator and Collector stages are negligible compared with that of the ABC code, the latter ends up being the dominant contribution to the overall execution time. The execution of the ABC program on a single Intel Pentium 4 machine equipped with a 3.4 GHz CPU and 1 GB memory takes from 3 to 6 h. In the Gridified version an ABC job spends on average about as much time in the Grid queue as on the CPU itself. This means that the average execution time of an ABC job on the Grid is about twice as long as that on a dedicated local machine. This implies that an execution on the Grid becomes definitely advantageous over a single CPU run as soon as at least three ABC jobs run simultaneously on three Grid resources. However, in the corresponding massive computational campaigns to calculate the reactive probabilities of N + N2, typically thousands of jobs are run simultaneously for a dense Grid of energy values necessary for the modeling of spacecraft reentry [54, 55]. In these cases almost all ABC runs execute at about the same time on the different CPUs, leading to performances close to the ideal value (once the above mentioned factor of 2 is discounted).

6.2 The Hybrid Distribution Model

To further structure the Grid version of ABC so as to scale with the size of the system considered, we focused our attention on the computational kernel of the solution of (6) and exploited the different nature of the two loops of Fig. 5. In detail, the first loop over the sectors of ρ accomplishes the following tasks:

A1 Basis set determination. A set of sector vibrational functions is determined using a finite difference method to solve the one-dimensional Schrödinger equation associated with the reference potential in each arrangement channel. These potentials are taken as the fixed τ cuts of the full potential at the midpoint of the sector.

A2 Overlap and coupling matrix elements calculation. The O and the U matrices of (6) are constructed using a Gauss–Legendre
quadrature for $\Theta_\tau$, a trapezoidal rule for $\theta_{D\tau}$, and an analytical integration for the Euler angles, of which the most difficult to converge are the interchannel ($\tau \neq \tau'$) ones.
Fig. 6 Flow chart of the ABC code
The second loop over the sectors of ρ accomplishes the following tasks:

B1 Propagation. The value of the elements of the matrix g of (6) is propagated from small to large ρ values using the constant reference potential log derivative method of [56].

B2 Matching to Jacobi coordinate solutions. To save computer time, a switch from hyperspherical to Jacobi coordinates is performed when propagating out in ρ as soon as the fixed ρ barriers between different arrangement channels become sufficiently high compared with the total energy.

B3 Asymptotic analysis. The scattering matrix $S^J$ is determined (after the Jacobi coordinate solutions have been further integrated up to an $R_\tau$ value large enough to consider the potential asymptotic) by applying the usual scattering boundary conditions [57].

B4 Storing the $S^J$ matrix. The $S^J$ matrix elements are stored on disk for subsequent use and the state-to-state probabilities are calculated by taking the square moduli of the scattering matrix elements.

The two loops show a clear difference as far as the usage of the Grid is concerned. The first loop is in fact strongly coupled and, when the system is heavy or has a highly structured PES, the use of an HPC computing platform is needed. On the other hand, the second loop is fully decoupled and can be ideally run on many HTC platforms without needing to engage HPC resources. This means that ABC should ideally be implemented on the Grid by adopting a flow chart of the type given in Fig. 6, in which the first section of the application runs on an HPC platform while its second section uses the outcomes of the previous one to run concurrently on an HTC platform (a sketch of this hybrid flow is given below). This has prompted us to design and implement a Grid Framework, called GriF [58]. GriF makes it easier to exploit the chemical know how of the users (even of those having an insufficient computer science background) to construct programs efficiently running on the Grid and suited to be offered as web services.
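The following hedged sketch illustrates the hybrid flow of Fig. 6 (the file layout, names and toy sector step are our own; it does not use the GriF or gLite interfaces): a strongly coupled stage, to be run once on an HPC resource, builds and stores the sector matrices, while the uncoupled stage is fanned out as one HTC job per energy.

```python
import numpy as np
from pathlib import Path
from concurrent.futures import ProcessPoolExecutor

def hpc_stage(workdir, n_sectors=20, n_channels=4, seed=0):
    """Stage 1 (coupled, run once): build and store O and U for every rho sector."""
    rng = np.random.default_rng(seed)
    for s in range(n_sectors):
        O = np.eye(n_channels) + 0.01 * rng.standard_normal((n_channels, n_channels))
        U = rng.standard_normal((n_channels, n_channels))
        np.savez(str(Path(workdir, f"sector_{s:03d}.npz")), O=O, U=U)

def htc_job(args):
    """Stage 2 (uncoupled): one independent job per energy, reusing the stored data."""
    workdir, energy = args
    g = np.eye(4)
    for f in sorted(Path(workdir).glob("sector_*.npz")):
        data = np.load(f)
        g = np.linalg.solve(data["O"], data["U"] @ g)   # toy sector-to-sector step
    return energy, float(np.linalg.norm(g))

if __name__ == "__main__":
    workdir = Path("hybrid_run"); workdir.mkdir(exist_ok=True)
    hpc_stage(workdir)                                   # coupled stage, run once
    energies = [0.4 + 0.01 * k for k in range(10)]       # uncoupled fan-out
    with ProcessPoolExecutor() as pool:
        for e, norm in pool.map(htc_job, [(workdir, e) for e in energies]):
            print(f"E={e:.2f}  |g|={norm:.3e}")
```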
7 Conclusions and Future Work

The work described in this paper singles out the prospects for developing a robust line of research in Grid computing associated with molecular science studies. It illustrates, in fact, the EGEE activities carried out within the COMPCHEM VO to expand the workflow of the previous demo simulator SIMBEX by including ab initio electronic structure, quantum reactive scattering and virtual beam signal reconstruction calculations. The new simulator (the Grid Empowered Molecular Simulator GEMS) has the peculiar characteristics of being both fully ab initio (for this reason the DYNAMICS block has been founded on full quantum dynamics programs) and clearly service oriented (for this reason it has been modularized and structured as a framework in which high flexibility is offered to the user, especially in the last part, OBSERVABLES, made linkable to experiment based software). This makes GEMS not only an extremely useful tool for molecular investigations but also a candidate service of choice for the Chemistry and Materials Science and Technology community. Another prospective mission of COMPCHEM discussed in
the paper is the exploitation of the interoperability between HPC and HTC platforms, with the aim of making possible quantum dynamics calculations that are at present severely limited by the insufficient availability of combined high performance and high throughput computing resources. Such a perspective originates from the need both to use data models and to design frameworks and workflows suited to connect the various components of the simulator and to simplify the procedures for articulating them as services. This will avoid using HPC resources not as such but as a mere sum of loosely coupled processors. At the same time it will prevent massively distributed platforms from wasting time in moving data around in an unproductive fashion while attempting to solve tightly coupled problems. This will also greatly enhance the possibility of coordinating the use of the two platforms to interoperate via a single workflow and will boost as well the composition of the various competences and endeavours of COMPCHEM to carry out the in silico design of innovative chemical processes and materials.3

3 In order to proceed along this direction we have already agreed with CINECA [59] and CNAF-INFN [60] to join efforts in assembling an experimental VO-specific framework infrastructure. By a VO framework infrastructure we mean the environment administered by the Virtual Organization in which the services are community specific.

Acknowledgements The authors acknowledge financial support from the European project EGEE III and the COST European initiatives (actions D23 “METACHEM” and D37 “GRIDCHEM”). In its early stages the work was also financially supported by MIUR, ASI and CNR. Specific mention is deserved by the Italian MIUR FIRB Grid.it project (RBNE01KNFP) on High Performance Grid Platforms and Tools and by the MIUR CNR Strategic Project L 499/97-2000 on High Performance Distributed Enabling Platforms. Thanks are also due to A. Ghiselli (CNAF-INFN), E. Rossi (CINECA), S. Evangelisti (University of Toulouse), S. Farantos (FORTH) and F. Huarte (University of Barcelona) for stimulating discussions on the mission of COMPCHEM.
References

1. Laganà, A. (ed.): Supercomputer Algorithms for Reactivity, Dynamics and Kinetics of Small Molecules. Kluwer, Dordrecht (1989). ISBN 0-7923-0226-5
2. Hirst, D.M.: A Computational Approach to Chemistry. Blackwell Scientific Publications, Oxford (1990). ISBN 0-632-02431-3
3. Schatz, G.C., Ratner, M.A.: Quantum Mechanics in Chemistry. Prentice Hall, Englewood Cliffs (1993). ISBN 0-13-075011-5
4. Laganà, A., Riganelli, A. (eds.): Lecture Notes in Chemistry. Springer, Berlin (2000). ISBN 3-540-41202-6
5. Laganà, A., Lendvay, G. (eds.): Theory of Chemical Reaction Dynamics. Kluwer, Dordrecht (2004). ISBN 1-4020-2165-6
6. Laganà, A., Riganelli, A.: Computational Reaction and Molecular Dynamics: From Simple Systems and Rigorous Methods to Complex Systems and Approximate Methods. Lect. Notes Chem. 75, 1–10 (2000)
7. Gervasi, O., Crocchianti, S., Pacifici, L., Skouteris, D., Laganà, A.: Towards the Grid design of the dynamics engine of a molecular simulator. In: Lecture Series in Computer and Computational Science, vol. 7, pp. 1425–1428 (2006)
8. Gervasi, O., Manuali, C., Laganà, A., Costantini, A.: On the structuring of a molecular simulator as a Grid service. In: Chemistry and Material Science Applications on Grid Infrastructures. ICTP Lecture Notes, vol. 24, pp. 63–82 (2009). ISBN 92-95003-42-X
9. Laganà, A., Riganelli, A., Gervasi, O.: On the structuring of the computational chemistry virtual organization COMPCHEM. Lect. Notes Comput. Sci. 3980, 665–674 (2006). http://compchem.unipg.it
10. EGEE (Enabling Grids for E-Science in Europe): http://public.eu-egee.org
11. Gervasi, O., Laganà, A.: SIMBEX: a portal for the a priori simulation of crossed beam experiments. Future Gener. Comput. Syst. 20(5), 703–716 (2004)
12. Newhouse, S.: http://web.eu-egi.eu/documents/other/egi-blueprint/
13. Costantini, A.: Computational chemistry—requirements and experiences with use of MPI. In: EGEE’09, Barcelona, September 2009. http://indico.cern.ch/contributionDisplay.py?contribId=208&sessionId=19&confId=55893
14. Costantini, A., Laganà, A., Gervasi, O.: Multiscale study of tropospheric O3 in Middle Italy. http://indico.cern.ch/contributionDisplay.py?contribId=82&sessionId=137&confId=55893
15. Chapman, S., Gelb, A., Bunker, D.L.: A + BC: A General Triatomic Classical Trajectory Program. Quantum Chemistry Program Exchange, QCPE 273, Indiana University (1975)
16. Polanyi, J.C., Schreiber, J.L.: The dynamics of bimolecular reactions. In: Eyring, H., Jost, W., Henderson, D. (eds.) Physical Chemistry. An Advanced Treatise, vol. VI, Kinetics of Gas Reactions, p. 383. Academic, New York (1974)
17. Storchi, L., Tarantelli, F., Laganà, A.: Computing molecular energy surfaces on a Grid. Lect. Notes Comput. Sci. 3980, 675–683 (2006)
18. Verdicchio, M.: Thesis of the Euromaster in Theoretical Chemistry and Computational Modelling, Perugia (2009)
19. Gordon, M.S., Schmidt, M.W.: Advances in electronic structure theory: GAMESS a decade later. In: Dykstra, C.E., Frenking, G., Kim, K.S., Scuseria, G.E. (eds.) Theory and Applications of Computational Chemistry: The First Forty Years, pp. 1167–1189. Elsevier, Amsterdam (2005)
20. DALTON: A Molecular Electronic Structure Program, Release 2.0 (2005). See http://daltonprogram.org
21. Werner, H.J., Knowles, P.J.: MOLPRO: A Package of Ab Initio Programs, Version 2002.6. http://www.molpro.net/
22. Frisch, M.J., Trucks, G.W., Schlegel, H.B., Scuseria, G.E., Robb, M.A., Cheeseman, J.R., Scalmani, G., Barone, V., Mennucci, B., Petersson, G.A., Nakatsuji, H., Caricato, M., Li, X., Hratchian, H.P., Izmaylov, A.F., Bloino, J., Zheng, G., Sonnenberg, J.L., Hada, M., Ehara, M., Toyota, K., Fukuda, R., Hasegawa, J., Ishida, M., Nakajima, T., Honda, Y., Kitao, O., Nakai, H., Vreven, T., Montgomery, Jr., J.A., Peralta, J.E., Ogliaro, F., Bearpark, M., Heyd, J.J., Brothers, E., Kudin, K.N., Staroverov, V.N., Kobayashi, R., Normand, J., Raghavachari, K., Rendell, A., Burant, J.C., Iyengar, S.S., Tomasi, J., Cossi, M., Rega, N., Millam, J.M., Klene, M., Knox, J.E., Cross, J.B., Bakken, V., Adamo, C., Jaramillo, J., Gomperts, R., Stratmann, R.E., Yazyev, O., Austin, A.J., Cammi, R., Pomelli, C., Ochterski, J.W., Martin, R.L., Morokuma, K., Zakrzewski, V.G., Voth, G.A., Salvador, P., Dannenberg, J.J., Dapprich, S., Daniels, A.D., Farkas, O., Foresman, J.B., Ortiz, J.V., Cioslowski, J., Fox, D.J.: Gaussian 09, Revision A.1. Gaussian, Wallingford (2009)
23. Arteconi, L., Laganà, A., Pacifici, L.: A web based application to fit potential energy functionals to ab initio values. Lect. Notes Comput. Sci. 3980, 694–700 (2006)
24. Angeli, C., Bendazzoli, G.L., Borini, S., Cimiraglia, R., Emerson, A., Evangelisti, S., Maynau, D., Monari, A., Rossi, E., Sanchez-Marin, J., Szalay, P.G., Tajti, A.: The problem of interoperability: a common data format for quantum chemistry codes. Int. J. Quant. Chem. 107, 2082–2091 (2007)
25. Schatz, G.C.: Potential energy surfaces. Lect. Notes Chem. 75, 15–32 (2000)
26. Murrell, J.N., Carter, S., Farantos, S.C., Huxley, P., Varandas, A.J.C.: Molecular Potential Energy Functions. Wiley, London (1984)
27. Garcia, E., Laganà, A.: Diatomic potential functions for triatomic scattering. Mol. Phys. 56, 621–627 (1985)
28. Garcia, E., Laganà, A.: A new bond order functional form for triatomic molecules: a fit of the BeFH potential energy. Mol. Phys. 56, 629–639 (1985)
29. Aguado, A., Tablero, C., Paniagua, M.: Global fit of ab initio potential energy surfaces I: triatomic systems. Comput. Phys. Commun. 108, 259–266 (1998)
30. Laganà, A.: A rotating bond order formulation of the atom diatom potential energy surface. J. Chem. Phys. 95, 2216–2217 (1991)
31. Laganà, A., Ochoa de Aspuru, G., Garcia, E.: The largest angle generalization of the rotating bond order potential: three different atom reactions. J. Chem. Phys. 108, 3886–3896 (1998)
32. Rodriguez, A., Garcia, E., Hernandez, M.L., Laganà, A.: A LAGROBO strategy to fit potential energy surfaces: the OH + HCl reaction. Chem. Phys. Lett. 360, 304–312 (2002)
33. Skouteris, D., Pacifici, L., Laganà, A.: Time dependent wavepacket calculations for the N(4S) + N2(1Σg+) system on a LEPS surface: inelastic and reactive probabilities. Mol. Phys. 102(21–22), 2237–2248 (2004)
34. Pack, R.T.: Space-fixed vs body-fixed axes in atom-diatomic molecule scattering. Sudden approximations. J. Chem. Phys. 60, 633–700 (1974)
35. McGuire, P.M., Kouri, D.: Quantum mechanical close coupling approach to molecular collisions: jz-conserving coupled states approximation. J. Chem. Phys. 60, 2488–2503 (1974)
36. Balint-Kurti, G.G.: Time dependent quantum approaches to chemical reactivity. Lect. Notes Chem. 75, 74–87 (2000)
37. Garcia, E., Saracibar, A., Laganà, A., Balucani, N.: On the anomaly of the quasiclassical product distributions of the OH + CO → H + CO2 reaction. Theor. Chem. Acc. (in press)
38. Tsai, Y.R., Cheng, L.T., Osher, S., Zhao, H.K.: Fast sweeping algorithms for a class of Hamilton–Jacobi equations. SIAM J. Numer. Anal. 41, 673–694 (2003)
39. Skouteris, D., Castillo, J.F., Manolopoulos, D.E.: ABC: a quantum reactive scattering program. Comput. Phys. Commun. 133, 128–135 (2000)
40. Schatz, G.C.: Quantum reactive scattering using hyperspherical coordinates: results for H + H2 and Cl + HCl. Chem. Phys. Lett. 150, 92–98 (1988)
41. Casavecchia, P., Balucani, N., Volpi, G.G.: The chemical dynamics and kinetics of small radicals. In: Lin, K., Wagner, A.F. (eds.) Adv. Ser. Phys. Chem., vol. 6, chapter 9. World Scientific, Singapore (1995)
42. Lee, Y.T.: In: Scoles, G. (ed.) Atomic and Molecular Beam Methods. Oxford University Press, New York (1987)
43. Siska, P.E.: Iterative unfolding of intensity data, with application to molecular beam scattering. J. Chem. Phys. 59, 6052 (1973)
44. Skouteris, D., De Fazio, D., Cavalli, S., Aquilanti, V.: Quantum stereodynamics for the two product channels of the F + HD reaction from the complete scattering matrix in the stereodirected representation. J. Phys. Chem. A 113, 14807–14812 (2009)
45. Saracibar, A., Sanchez, C., Garcia, E., Laganà, A., Skouteris, D.: Grid computing in time dependent quantum reactive dynamics. Lect. Notes Comput. Sci. 5072, 1065–1080 (2008)
46. Bellucci, D., Tasso, S., Laganà, A.: Parallel model for a discrete variable wavepacket propagation. Lect. Notes Comput. Sci. 2658, 341–349 (2003)
47. Gregori, S., Tasso, S., Laganà, A.: Fine grain parallelization of a discrete variable wavepacket calculation using ASSIST-CL. Lect. Notes Comput. Sci. 3044, 437–444 (2004)
48. Kacsuk, P., Sipos, G.: Multi-Grid, multi-user workflows in the P-GRADE portal. Journal of Grid Computing 3(3–4), 221–238 (2005)
49. Kacsuk, P., Dózsa, G., Kovács, J., Lovas, R., Podhorszki, N., Balaton, Z., Gombás, G.: P-GRADE: a Grid programming environment. Journal of Grid Computing 1, 171–197 (2003)
50. Kacsuk, P., Farkas, Z., Herman, G.: Workflow-level parameter study support for production Grids. Lect. Notes Comput. Sci. 4707, 872–885 (2007)
51. Skouteris, D., Costantini, A., Laganà, A., Sipos, G., Balaski, A., Kacsuk, P.: Implementation of the ABC quantum mechanical reactive scattering program on the EGEE Grid platform. Lect. Notes Comput. Sci. 5072, 1108–1120 (2008)
52. Multi-Grid installation of the P-GRADE portal: http://portal.p-grade.hu/multi-Grid
53. Kacsuk, P., Sipos, G.: Multi-Grid, multi-user workflows in the P-GRADE Grid portal. Journal of Grid Computing 3, 221–238 (2006)
54. Rampino, S., Skouteris, D., Laganà, A., Garcia, E.: A comparison of the isotope effect for the N + N2 reaction calculated on two potential energy surfaces. Lect. Notes Comput. Sci. 5072, 1081–1093 (2008)
55. Rampino, S., Skouteris, D., Laganà, A.: The O + O2 reaction: quantum detailed probabilities and thermal rate coefficients. Theor. Chem. Acc. 123, 249–256 (2009)
56. Manolopoulos, D.E.: An improved log derivative method for inelastic scattering. J. Chem. Phys. 85, 6425–6429 (1986)
57. Pack, R.T., Parker, G.A.: Quantum scattering in three dimensions using hyperspherical (APH) coordinates. Theory. J. Chem. Phys. 87, 3888–3921 (1987)
58. Manuali, C., Laganà, A., Rampino, S.: GriF: a Grid Framework for a web service approach to reactive scattering. Comput. Phys. Commun. 181, 1179–1185 (2010)
59. www.cineca.it
60. www.cnaf.infn.it