Scalable, Hydrodynamic and Radiation-Hydrodynamic Studies of Neutron Star Mergers

F. Douglas Swesty
Laboratory for Computational Astrophysics
National Center for Supercomputing Applications (NCSA) and Dept. of Astronomy
University of Illinois, Urbana-Champaign
Urbana, IL 61801
[email protected]
http://www.ncsa.uiuc.edu/~dswesty/

Paul Saylor NCSA Dept. of Computer Science University of Illinois, Urbana-Champaign Urbana, IL 61801 [email protected] http://www.uiuc.edu/ph/www/p-saylor

Dennis C. Smolarski, S.J. Dept. of Mathematics Santa Clara University Santa Clara, CA 95053-0290 [email protected] http://www-acc.scu.edu/~dsmolarski/homepage.html

E. Y. M. Wang NCSA Dept. of Physics St. Louis University St. Louis, MO 63130 [email protected]

http://nsnsgc.ncsa.uiuc.edu/nsnsgc/people.html

Abstract: We discuss the high performance computing issues involved in the numerical simulation of binary neutron star mergers and supernovae. These phenomena, which are of great interest to astronomers and physicists, can only be described by modeling the gravitational field of the objects along with the flow of matter and radiation in a self-consistent manner. In turn, such models require the solution of the gravitational field equations, Eulerian hydrodynamic equations, and radiation transport equations. This necessitates the use of scalable, high performance computing assets to conduct the simulations. We discuss some of the parallel computing aspects of this challenging task in this paper.

Keywords: astronomy, astrophysics, binary neutron stars, Eulerian, fluid dynamics, gravitational field, hydrodynamics, iterative methods, linear systems, multidimensions, neutron star, parallel computing, precondition, radiation transport, BiCG, Cray T3E, Silicon Graphics Origin 2000

Introduction

Scalable high performance computing has brought about a revolution in astrophysical modeling. One aspect of this revolution is the ability we now possess to conduct large scale 3-D fluid dynamics simulations for use in astrophysical modeling. Another aspect, only now emerging, is our ability to model the flow of radiation in multiple dimensions as well. In this paper we describe two ongoing astrophysical modeling projects that stress both of these aspects. One project involves the numerical modeling of the merger of two neutron stars in a binary system. This project has been selected as a NASA Earth and Space Sciences (ESS) High Performance Computing and Communication (HPCC) Grand Challenge. The other project involves the numerical modeling of convectively driven supernova explosions via multi-dimensional radiation hydrodynamics. Both of these projects contain unique high performance computing challenges. In the next sections we describe the scientific features of the problems; in succeeding sections we discuss high performance computing problems, results, and plans.

Binary Neutron Star System Coalescence

Coalescing binary neutron star systems are thought to be quite common in the universe.

The best known binary neutron star system is the Hulse-Taylor binary pulsar PSR1913+16. This system's orbit is slowly decaying as it emits gravitational waves; eventually the stars will spiral rapidly together over a period of about 15 minutes. During this brief period the gravitational waves will sweep up in frequency from about 10 Hz to about 1500 Hz, when the final coalescence occurs. This sweep lies within the bandwidths of the LIGO and VIRGO observatories, which should be operational around the year 2000. In the final coalescence stage general relativistic effects, hydrodynamic instabilities, the equation of state, and neutrino transport become extremely important governors of the gravitational wave signal that is generated. It is on this final phase that we intend to concentrate our efforts, where we can bring a careful treatment of all these effects that has not been possible previously. A number of independent calculations [2] show that these final coalescence events will occur quite frequently in the universe; the most conservative of these place the observable event rate at between several and several dozen per year. We briefly recount here some of the astrophysical context in which this problem is set and explain how our work fits in. Our calculations will provide important information about observational and theoretical issues in several major areas of astrophysics and relativity. Each of these is discussed briefly in the following paragraphs. First, gravitational waves from the merger of a neutron star with a neutron star (NSNS) and the merger of a neutron star with a black hole (NSBH) are likely to be detected by LIGO and VIRGO within 5-10 years, at a rate of dozens of events per year [2]. There is much interesting information contained in the gravitational wave signals. With the parameters of the coalescing objects, e.g., masses, radii, spin, etc.,
extracted from the signal through template matching, one can obtain information on the mass-to-radius relation, the distribution and range of neutron star masses, etc. By tracking the system through the merger phase, the waveforms will tell us a great deal about the overall mass motions of the merging neutron star matter. An exciting possibility exists that the information contained in the waveform, in conjunction with detailed models of the dynamics of the merger, will allow us to constrain the equation of state (EOS) of dense matter. Such constraints would nicely complement the neutron star thermal evolution constraints that can be placed on the EOS by observational data on galactic neutron stars from orbiting high energy astronomy satellites. Taken together, these combined constraints give important information about both neutron star structure and the physics of dense matter. All of these ideas call for detailed numerical models of the coalescence. Our intent is to carry out numerical simulations of the mergers to show in detail how variations in the EOS might produce discernible effects in the gravitational wave signal. A second reason for studying NSNS mergers is that the coalescence of NSNS binary systems has been discussed as a potential site for the gamma-ray bursts [3] detected by

the Burst and Transient Source Experiment (BATSE) on board the Compton GRO satellite. Despite the widespread discussion of NSNS mergers as a source of these bursts, there have been few numerical calculations to assess the viability of this mechanism, and those few calculations have left many unresolved questions about this model for the bursts. Our proposed studies will direct attention to determining whether or not NSNS mergers are compatible with the time scales and energetics of these bursts. Still a third reason for studying NSNS mergers is that they have been cited as one of two possible sites of the astrophysical r-process, which is responsible for the production of many of the heavy neutron rich elements: the merger of compact binaries may eject extremely neutron rich matter, which then undergoes the classical r-process nucleosynthesis. This could be a major source of r-process nuclei in the galaxy [4]. However, a proper treatment of this problem should include neutrinos and a realistic equation of state, as our calculations will. The numerical modeling of neutron star mergers presents some unique challenges. The problem requires not only the simulation of the motion of the fluid matter which constitutes the neutron stars, but also of the gravitational field which derives from the matter itself. Since the fluid bodies are self-gravitating, the fluid and gravitational phenomena are intertwined in a complex synergy. Because the gravitational field of a neutron star is sufficiently strong, it is more accurately described by Einstein's relativistic theory of gravity (general relativity, hereafter GR) than by classical Newtonian gravitational theory. In GR, one must solve an extremely complex set of tensor field equations to describe the gravitational field instead of the relatively simple Poisson equation that arises from Newtonian gravity.
Fortunately, the relativistic analog of the Euler equations of classical fluid dynamics mimics the structure of its Newtonian counterpart. The differences between the relativistic and non-relativistic fluid dynamics equations mainly arise from Lorentz contraction and space-time curvature effects. Because of the close similarity of the GR and Newtonian hydrodynamics equations we are able to adapt a variety of standard computational fluid dynamics methods to their solution. The work we describe in the remainder of this paper is based on finite-difference techniques for solving the fluid dynamics and gravitational field equations.

Convectively Driven Supernova Explosions

Many of the supernovae that astronomers observe are thought to originate from the gravitational collapse of the core of a massive star. Such stars, of mass greater than about ten solar masses, have exhausted their nuclear fuel, and the core of the star is unable to sustain itself against the pull of gravity. The core, which is largely composed of iron, nickel, and similar elements, undergoes a rapid gravitational collapse which causes it to

contract from several thousand kilometers to a size of 10-20 kilometers in about a tenth of a second. The outer portion of the core infalls supersonically while the inner portion collapses subsonically. During this time electrons in the matter are combining with protons in weak nuclear reactions that produce neutrons and neutrinos. Thus this collapse process gives birth to a neutron star via the death of a normal, everyday, massive star. Eventually the collapse of the core halts when the radius of the object is a few tens of kilometers in size. The inner portion of the collapsed core becomes a proto-neutron star, which has a mass of order a solar mass and central densities comparable to or exceeding nuclear density. At these densities the neutrinos do not move freely through the matter, but instead diffuse slowly in a random walk process. In contrast, at sufficiently low densities the neutrinos move freely without interacting with the matter. This wide range of neutrino behavior, together with the steep density gradients in the newly born neutron star, makes the phenomenon hard to simulate. In addition to modeling the flow of the matter, which requires solving the Euler equations of hydrodynamics, one must also model the flow of radiation, in the form of neutrinos. The latter task is far more daunting than the first. Once the collapse of the iron-nickel core of the pre-supernova star has halted, the core ‘‘bounces,’’ i.e. it expands outward, having overshot the equilibrium configuration during the collapse. The outer part of the core is still infalling supersonically and is unaware that the inner portion of the core has reversed its motion. A shock wave forms near the sonic point in the flow and begins to propagate outward through the outer core. The shock wave drives a complex reactive flow involving the nuclear dissociation of heavy nuclei into free neutrons and protons.
This process, which soaks up energy from the shock, eventually causes the shock to stall before reaching the edge of the iron-nickel core. The major challenge to supernova modelers is to explain the explosion of the star despite the fact that the shock wave stalls before escaping the core. The weakening shock wave leaves behind another legacy: as the shock traverses the core it imprints a negative entropy gradient on the post-shock core, which is convectively unstable. Thus in the post-bounce epoch of the supernova we also see convection occurring. The current thinking is that this convection revitalizes the shock and causes it to continue outward, eventually resulting in the explosion of the star. Thus the modeling of this phenomenon requires that one simulate a radiating, reactive, convective flow in at least two (and preferably three) spatial dimensions. Finally, the energy spectrum of the neutrinos must be discretized, and this adds another, energy, dimension to the problem, which thus becomes either three or four dimensional depending on how many spatial dimensions are modeled.
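The slow random-walk diffusion of neutrinos mentioned earlier is what makes the transport problem so much harder than free streaming. The standard order-of-magnitude estimate contrasts the two time scales; here R denotes the proto-neutron star radius and λ the neutrino mean free path (symbols introduced for this illustration, not taken from the text):

```latex
% Free streaming: a neutrino crosses the star directly.
t_{\mathrm{stream}} \sim \frac{R}{c}
% Random walk: N \sim (R/\lambda)^2 scatterings are needed to traverse R,
% each covering one mean free path \lambda, so
t_{\mathrm{diff}} \sim N \, \frac{\lambda}{c}
                  \sim \frac{R^{2}}{\lambda c}
                  = \frac{R}{\lambda} \, t_{\mathrm{stream}} .
```

When λ is tiny compared to R, as in the dense inner core, the diffusion time exceeds the crossing time by the large factor R/λ, which is why the trapped and free-streaming regimes must be handled within a single transport scheme.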

Computational Methodology/Algorithms

Our numerical attack on the neutron star merger and supernova problems makes use of several algorithmic pieces: 3-D numerical hydrodynamics codes, elliptic equation solvers, and an equation of state that describes the properties of hot, dense matter. In the case of the supernova studies we will also employ several different radiation transport algorithms in conjunction with both 2-D and 3-D radiation hydrodynamics codes to describe the flow of neutrinos through matter. In the following subsections we discuss these components and their computational performance and scaling characteristics.

3-D Hydrodynamics Codes for Neutron Star Modeling

As part of our neutron star modeling efforts we are attempting to carry out simulations at three levels of approximation: Newtonian, post-Newtonian, and fully relativistic (GR). All of our hydrodynamic algorithms are based on explicit, second-order, staggered mesh, artificial viscosity methods. The Newtonian scheme is similar to the ZEUS-2D scheme developed by Stone and Norman [5]. The post-Newtonian code adds to this the post-Newtonian corrections described by Blanchet, Damour, and Schäfer [10], and the GR scheme is similar to that of Hawley, Smarr, and Wilson [6]. Employing these methods we have been able to achieve high flop rates and reduced zone update times. The Newtonian code, V3D, currently achieves nearly 500 Mflops (497 Mflops to be precise) on a single Cray C90 processor. Using the F77 implementation of V3D we have been conducting preliminary simulations of neutron star mergers on the NCSA SGI Power Challenge. We have found that the F77 version of V3D can scale with 70% parallel efficiency across the entire 16-processor machine when we employ polytropic equations of state; when we employ a more realistic equation of state the scaling improves beyond this. We are currently re-implementing this algorithm as a message passing code using the MPI message passing interface.
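The 70% figure quoted above uses the standard definitions of speedup and parallel efficiency. A minimal sketch of those definitions follows; the serial wall time used in it is a made-up placeholder, not a measured V3D timing.

```python
def speedup(t_serial, t_parallel):
    """S(p) = T(1) / T(p): how many times faster the parallel run is."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """E(p) = S(p) / p: the fraction of the ideal p-fold speedup achieved."""
    return speedup(t_serial, t_parallel) / p

# 70% efficiency on 16 processors, as quoted for V3D, corresponds to an
# 11.2-fold speedup.  The serial time below is purely illustrative.
t1 = 1600.0
tp = t1 / (0.70 * 16)       # wall time implied by 70% efficiency
e = efficiency(t1, tp, 16)  # ~0.70
```

In these terms, ideal scaling is E(p) = 1 for all p; communication and serial bottlenecks pull E(p) below 1 as p grows.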
We expect the scaling to improve still further when this is complete.

2+1-D Radiation-Hydro Codes for Supernova Modeling

In order to attack the multi-dimensional supernova problem we have developed a 2+1-D multigroup radiation hydrodynamics code we call V2D. The ‘‘2’’ in 2+1 refers to the number of spatial dimensions, while the ‘‘1’’ refers to the additional energy dimension. The latter dimension relates to the number of discrete energy bins, or groups (hence the term ‘‘multigroup’’), used to model the spectrum of neutrino energies. In practice the spectrum of neutrinos covers a range of 0-300 MeV and we use approximately 20 groups to cover this range. Unfortunately, unlike the nearest-neighbor coupling between zones that arises in hydrodynamics, each energy group is coupled to every other group by scattering processes. This coupling, together with the fact that the radiation transport equations must be implicitly differenced, makes for a computationally intensive task. A 2+1-D simulation which includes radiation transport is far more computationally demanding than a 3-D hydrodynamic simulation. The bulk of the computation in V2D is spent in solving a linear system of equations arising from the implicit finite differencing of the radiation transport equations. The system of linear equations is block tridiagonal or pentadiagonal in nature and lends itself well to the use of Krylov subspace methods, as we discuss later. Besides the cost of solving the linear systems, the additional computational cost of the multigroup approach lies in the embarrassingly parallel operations of evaluating the neutrino-matter interaction terms. The V2D code currently exists as an f90 code and is being ported to an MPI implementation to take advantage of the finer grained parallelism of the Cray T3E. We have already implemented a linear system solver using f90 and MPI, and we discuss the performance of this algorithm in a later section.

Linear Systems: Krylov Subspace Methods

It is well known that a preconditioner is necessary for the efficient solution of large linear systems. Over the years a set of widely used preconditioners has emerged, some of which are not appropriate for parallel computing and some of which are. (The recent book of Saad [8, Chap. 10] includes a nice survey of preconditioners.) Among the former category one would find the popular incomplete LU preconditioners [8, pp. 270f], though much work has been done to improve the performance of this important class (cf. [8, pp. 369f]). Among the latter, one would find polynomial preconditioning, domain decomposition methods, and approximate inverse preconditioning, all surveyed in the book of Saad. In our experiments we have focused on approximate inverse preconditioners. The advantage of these methods can be understood by contrasting them with incomplete LU preconditioners. Incomplete LU preconditioners generate an approximate LU decomposition of the matrix.
The preconditioning step then consists of solving the lower triangular system of the incomplete decomposition, followed by solving the upper triangular system; both steps are recursive and difficult to make efficient in a parallel setting. In contrast, an approximate inverse preconditioner approximates the inverse of the system matrix. The preconditioning step then consists of a matrix multiplication rather than a matrix solve (by a triangular matrix), which is easy to implement efficiently on a parallel processor. Recently proposed algorithms for approximate inverses, for example [1], incorporate ingenious ideas that yield effective preconditioners, even compared to incomplete LU preconditioners on serial machines. We have taken a different, simplified approach, however. We select block submatrices along the diagonal of our given matrix to form a nonsingular block diagonal matrix. We then compute the inverse of this matrix and use it as our block inverse preconditioner. We have found this to be an effective method. (A separate report is in preparation.)

Using the aforementioned block inverse preconditioning method we have implemented BiCG in MPI using a 1-D Cartesian process topology. This particular topology is optimal for the solution of the transport equation along radial rays through the supernova core. A typical problem size, set by the number of radial zones and energy groups, results in a large system of linear equations that must be solved. The matrix is block tridiagonal, with the block size determined by the number of energy groups. The domain decomposition strategy results in each processor ‘‘owning’’ a number of rows of blocks of the matrix; the vectors are distributed accordingly. The bulk of the computational effort in this algorithm is due to matrix-vector multiplies and the computation of the block inverses for both the matrix and its transpose. The block inverse operation is embarrassingly parallel, while the matrix and transpose-matrix multiplies require communication of ghost rows from the neighboring processes in the topology. The BiCG algorithm requires one matrix multiply and one transpose multiply per iteration. Using asynchronous sends and receives, much of this communication can be hidden while work is done on the interior rows of the process. The BiCG algorithm also requires at least two dot products per iteration; a third dot product is required if one wishes to employ the norm of the residual as a stopping criterion. Each of these dot products requires a costly global reduction. In fact, a performance analysis reveals that this particular operation dominates the communication costs of the algorithm. Unfortunately, MPI does not provide an asynchronous global reduction operation, so there exists no means of hiding this particular portion of the message passing. An example of the scaling we have achieved with this algorithm is displayed in Figure 1, where we compare the scaling for a test problem on the Cray T3E and the Silicon Graphics Origin 2000. This particular problem took 13 iterations to converge. For problems requiring fewer iterations the embarrassingly parallel computation of the block inverse dominates the work and the scaling will improve.
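As a concrete illustration of the scheme just described, here is a minimal serial sketch of BiCG with a block diagonal inverse preconditioner applied to a small block tridiagonal system. The production code is f90/MPI and matrix-free; the 6x6 matrix, 2x2 block size, and tolerances below are invented for illustration only.

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    Bt = transpose(B)
    return [[dot(row, col) for col in Bt] for row in A]

def block_diag_inverse(A, bs=2):
    """Invert each bs x bs diagonal block of A (this step is embarrassingly
    parallel: in the real code each process inverts only the blocks it owns)."""
    n = len(A)
    Minv = [[0.0] * n for _ in range(n)]
    for k in range(0, n, bs):
        a, b = A[k][k], A[k][k + 1]
        c, d = A[k + 1][k], A[k + 1][k + 1]
        det = a * d - b * c                      # analytic 2x2 inverse
        Minv[k][k], Minv[k][k + 1] = d / det, -b / det
        Minv[k + 1][k], Minv[k + 1][k + 1] = -c / det, a / det
    return Minv

def bicg(A, b, tol=1e-10, maxit=100):
    """Textbook BiCG: one multiply by A and one by A^T per iteration,
    plus the dot products that require global reductions in parallel."""
    At = transpose(A)
    x = [0.0] * len(b)
    r = list(b)                                  # r = b - A*0
    rs, p, ps = list(r), list(r), list(r)        # shadow residual/directions
    rho = dot(r, rs)
    for _ in range(maxit):
        Ap, Atps = matvec(A, p), matvec(At, ps)
        alpha = rho / dot(Ap, ps)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        rs = [ri - alpha * qi for ri, qi in zip(rs, Atps)]
        if dot(r, r) ** 0.5 < tol:
            break
        rho, rho_old = dot(r, rs), rho
        beta = rho / rho_old
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        ps = [ri + beta * pi for ri, pi in zip(rs, ps)]
    return x

# Block tridiagonal test matrix: three block rows of 2x2 blocks.
A = [[ 4.0, 1.0, -1.0, 0.0,  0.0, 0.0],
     [ 0.0, 4.0,  0.0, -1.0, 0.0, 0.0],
     [-1.0, 0.0,  4.0, 1.0, -1.0, 0.0],
     [ 0.0, -1.0, 0.0, 4.0,  0.0, -1.0],
     [ 0.0, 0.0, -1.0, 0.0,  4.0, 1.0],
     [ 0.0, 0.0,  0.0, -1.0, 0.0, 4.0]]
x_true = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
b = matvec(A, x_true)

# Left preconditioning: solve (M^-1 A) x = M^-1 b.  Here M^-1 is formed
# densely for clarity; in practice it is applied block by block.
Minv = block_diag_inverse(A)
x = bicg(matmul(Minv, A), matvec(Minv, b))
```

Note that applying the preconditioner is just another matrix multiply, which is the parallelization advantage over the recursive triangular solves of an incomplete LU factorization.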

Figure 1: Preliminary scaling and parallel efficiency results for the MPI implementation of BiCG on the Cray T3E and the Silicon Graphics Origin 2000. The solid lines indicate the T3E results while the dashed lines indicate the Origin 2000 results. These results may not reflect the true capabilities of the machines.

Linear Systems: Multigrid Methods

In order to model gravity in the Newtonian and post-Newtonian regimes one must solve the Poisson equation to calculate the gravitational potential everywhere on the 3-D mesh. Furthermore, in the post-Newtonian case one must additionally solve several other elliptic equations in order to self-consistently calculate the post-Newtonian corrections to the gravitational field. Because of the large condition number associated with the linear system corresponding to the Laplacian operator, multigrid techniques are preferred for this particular problem. To address these needs we have developed a multigrid Poisson solver that implements a full multigrid W-cycle (FMW) algorithm. This code was implemented in f90 and is currently being used to carry out both Newtonian and post-Newtonian models of neutron star mergers on the NCSA SGI Power Challenge and Origin 2000. The current version of this code, which relies on automatically parallelizing f90 compilers, does not scale well. However, we are currently working on a port of this code to MPI so that we can implement it in a finer grained parallel environment such as the Cray T3E or the SGI Origin 2000. The port of a multigrid algorithm to MPI is non-trivial: at grid levels where the grid size becomes comparable to the size of the process topology, some other strategy must be adopted for distributing the mesh among the processors. We are currently investigating several alternative possibilities for dealing with this problem.

The Equation of State

In order to incorporate a realistic equation of state (EOS) into our numerical hydrodynamics codes in a fast and accurate manner we have developed an algorithm for implementing the EOS as a table in a thermodynamically consistent fashion [9]. The code that implements this algorithm we have accordingly dubbed TCTEOS, for Thermodynamically Consistent Tabular EOS. This code allows the use of an EOS table while ensuring self-consistency among the thermodynamic variables. The lack of such thermodynamic self-consistency can manifest itself in the form of an unphysical buildup or decline in the entropy and temperature of what should be an adiabatic flow. Since many of the reaction rates in dense matter reactive flows are very sensitive to the temperature, such a buildup of temperature could wreck a numerical simulation. This concept of thermodynamic consistency has often been ignored in building EOS tables. The motivation for developing a tabular EOS code was threefold.
First, we wanted to provide a means of incorporating equations of state other than the LS EOS into our hydrodynamic codes without having to modify the interface; this tabular EOS also allows us to include extant EOS tables, as well as the results of numerically burdensome EOS calculations, directly in our code. The second motivation for the development of such a code was speed: a directly indexed table is usually much faster than most complex EOS codes. The third motivation was that such a code allows us to easily match equations of state which describe different ranges of density and temperature in a smooth manner. Using the TCTEOS code we can easily incorporate almost any current dense matter EOS into our numerical simulations in a straightforward fashion.
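To illustrate why a directly indexed table is fast, here is a toy sketch of a tabular pressure lookup. The ideal-gas table, grid origin, and spacings are invented for this example; TCTEOS itself uses a thermodynamically consistent interpolation scheme [9] that this plain bilinear sketch does not reproduce.

```python
import math

# Toy table: pressure of an ideal gas, P = rho * T (units suppressed),
# tabulated on a uniform grid in (log rho, log T).
NR, NT = 50, 50
LR0, LT0 = -2.0, -2.0          # grid origin in log rho, log T
DLR, DLT = 0.1, 0.1            # uniform grid spacings
table = [[math.exp(LR0 + i * DLR) * math.exp(LT0 + j * DLT)
          for j in range(NT)] for i in range(NR)]

def pressure(rho, T):
    """Direct-indexed bilinear lookup.  Because the grid is uniform in the
    logs, the cell index is (log x - origin)/spacing -- no search is needed,
    which is one reason a table beats evaluating a full EOS code."""
    fr = (math.log(rho) - LR0) / DLR
    ft = (math.log(T) - LT0) / DLT
    i, j = int(fr), int(ft)
    a, b = fr - i, ft - j      # fractional positions inside the cell
    return ((1 - a) * (1 - b) * table[i][j] + a * (1 - b) * table[i + 1][j]
            + (1 - a) * b * table[i][j + 1] + a * b * table[i + 1][j + 1])
```

A lookup costs a handful of flops regardless of how expensive the underlying EOS calculation was; the price is interpolation error, and, for independently tabulated variables, the possible loss of thermodynamic consistency that TCTEOS is designed to avoid.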

Since the EOS evaluations usually dominate the flop count of our hydrodynamic simulations, we have gone to great lengths to minimize the number of floating point operations needed to implement this tabular EOS scheme. We have also been able to construct the implementation in such a fashion that we achieve nearly 50% of the processor's theoretical peak speed on the MIPS R10000 processor used in the SGI Power Challenge and the SGI Origin 2000.
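Returning to the multigrid solver described above: the production code is a 3-D full multigrid W-cycle in f90. As a much-reduced sketch of the underlying idea, here is a 1-D Poisson V-cycle in Python; the grid size, weighted-Jacobi smoother, and V-cycle (rather than FMW) structure are simplifications chosen for brevity, not details of our code.

```python
import math

def smooth(u, f, h, sweeps=2, w=2.0 / 3.0):
    """Weighted-Jacobi relaxation for -u'' = f with zero Dirichlet boundaries."""
    n = len(u) - 1
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n):
            new[i] = (1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        u[:] = new

def residual(u, f, h):
    n = len(u) - 1
    r = [0.0] * (n + 1)
    for i in range(1, n):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    return r

def restrict(r):
    """Full-weighting restriction to the next coarser grid."""
    n = (len(r) - 1) // 2
    rc = [0.0] * (n + 1)
    for i in range(1, n):
        rc[i] = 0.25 * r[2 * i - 1] + 0.5 * r[2 * i] + 0.25 * r[2 * i + 1]
    return rc

def prolong(ec, n):
    """Linear-interpolation prolongation back to the fine grid (n intervals)."""
    e = [0.0] * (n + 1)
    for i in range(len(ec)):
        e[2 * i] = ec[i]
    for i in range(1, n, 2):
        e[i] = 0.5 * (e[i - 1] + e[i + 1])
    return e

def vcycle(u, f, h):
    n = len(u) - 1
    if n == 2:
        u[1] = 0.5 * h * h * f[1]      # exact solve on the coarsest grid
        return
    smooth(u, f, h)                    # pre-smoothing
    rc = restrict(residual(u, f, h))   # coarse-grid residual equation
    ec = [0.0] * len(rc)
    vcycle(ec, rc, 2 * h)              # recurse for the coarse correction
    e = prolong(ec, n)
    for i in range(n + 1):
        u[i] += e[i]                   # apply the correction
    smooth(u, f, h)                    # post-smoothing

# Model problem: -u'' = pi^2 sin(pi x) on [0,1], exact solution sin(pi x).
N = 64
h = 1.0 / N
x = [i * h for i in range(N + 1)]
f = [math.pi ** 2 * math.sin(math.pi * xi) for xi in x]
u = [0.0] * (N + 1)
r0 = max(abs(v) for v in residual(u, f, h))
for _ in range(10):
    vcycle(u, f, h)
rfin = max(abs(v) for v in residual(u, f, h))
```

The point of the method is that the residual shrinks by a roughly grid-independent factor per cycle, which is what tames the large condition number of the discrete Laplacian; the parallelization difficulty noted above enters at the coarse levels, where this sketch's recursion would leave most processors idle.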

Preliminary Results

In this section we discuss some preliminary results from both the neutron star merger project and the supernova explosion mechanism project that illustrate why high performance scalable computing is needed for both of these projects. We have recently begun to carry out some preliminary studies of Newtonian neutron star mergers using the V3D code described in the previous section. The reasons for these simulations are fourfold: they give us estimates of the rate of gravitational energy radiated and estimates of the waveforms (see Figures 3 and 4); they give us a baseline against which to compare subsequent post-Newtonian and relativistic simulations; they allow us to begin to understand the physical environment that is present; and finally they allow us to conduct resolution studies. Using Newtonian studies we can also begin to evaluate issues such as what the r-process yields from the merger are and how the EOS influences the dynamics of the coalescence. We have found a number of intriguing results in the simulations that we have conducted so far. The simulations have studied the Newtonian coalescence, driven by tidal instabilities, of two neutron stars (modeled as n=1 polytropes) on a grid covering a cubical computational domain. Our simulations yield gravitational wave signals that agree with the results of other calculations [11,12]. The radiated energy, as predicted by the time variation of the quadrupole moment of the mass distribution, is also in good agreement with these calculations. Finally, it seems that several tenths of a solar mass of material is ejected from the outer layers of the stars into wide orbits. This is quite an exciting result from the standpoint of the r-process, since such low density neutron rich material is exactly what is needed to make the r-process occur. However, we have found that the onset of the tidal instability first predicted by [7] seems to occur outside the predicted separation. We now understand this result to be a consequence of the need for better resolution in these calculations. By studying the resolution needed to accurately represent static single neutron stars on a grid we have discovered that the

reason for our differences from the results of Lai et al. is that higher resolution is required to adequately resolve stars advecting on a Cartesian mesh. The calculations of Lai et al. employed an SPH code, an inherently Lagrangian method, in a co-rotating frame. However, since we ultimately would like to achieve fully relativistic calculations on Cartesian grids, we needed to establish that we could indeed achieve the resolution needed to accurately represent the stars on the grid. Also, since we are very much interested in the r-process nucleosynthesis in the low density material that seems to be ejected from the stars as they coalesce, we want to use an Eulerian method that can track the flow of this low density material accurately. We have verified the resolution needed to represent stars in the orbiting configuration by advecting single stars across the mesh to make sure that they remain stable, as they should. For a single star we were able to employ a much smaller domain that yielded higher resolution. Unfortunately, for the merger we cannot restrict the size of our computational domain if we are to study how much mass is shed during the merger, which is in turn important for calculating the nucleosynthetic yields of r-process material. We will need to reach a substantially higher resolution in order to clearly resolve the stars and eliminate this problem; the resolution of the domain we employed is less than adequate for resolving the neutron stars on the grid. This agrees with the conclusions of Ruffert et al. [11]. We have also begun testing of our V2D code with low angular resolution supernova models. These low resolution runs are effectively 1+1-D models, since convection never develops under such conditions. However, they are important scientifically, as they allow us to understand what the contribution of convection is to the whole phenomenon. By comparing these low angular resolution models with the high angular resolution models that are forthcoming, we will be able to understand which aspects of the dynamics are due to convection and which are unrelated. The completion of our port of the V2D code to MPI will also enhance our ability to conduct these simulations at high resolution.
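The radiated energy estimates quoted above are computed from the time variation of the mass quadrupole moment. For reference, the standard quadrupole formula of general relativity is (with Q_ij the trace-free mass quadrupole tensor; the notation is ours, not taken from the text):

```latex
% Gravitational-wave luminosity in the quadrupole approximation:
L_{\mathrm{GW}} = \frac{dE}{dt}
                = \frac{G}{5 c^{5}}
                  \left\langle \dddot{Q}_{ij}\,\dddot{Q}_{ij} \right\rangle ,
\qquad
Q_{ij} = \int \rho \left( x_i x_j - \tfrac{1}{3}\,\delta_{ij}\,r^{2} \right) d^{3}x .
```

The angle brackets denote an average over several wave periods, and the triple dots denote third time derivatives; evaluating them accurately is one reason the simulations demand well-resolved mass motions.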

Acknowledgments

We would like to thank our collaborators on the Neutron Star Merger Grand Challenge and to thank NASA for support of the neutron star merger effort under contract CAN NCC S5-153. FDS would like to acknowledge support under NASA ATP grant NAG 5-3099. FDS and PS would like to acknowledge computing support from NCSA and PSC under NSF MetaCenter Allocation MCA97S011.

References

[1] M. Benzi, C. D. Meyer, and M. Tůma, ‘‘A sparse approximate inverse preconditioner for the conjugate gradient method,’’ SIAM J. Sci. Comput. 17(5), 1135-1149 (1996).

[2] L. S. Finn, preprint (1995).

[3] T. Piran, ‘‘Gamma-Ray Bursts and Binary Neutron Star Mergers,’’ in Proc. IAU Symposium 165, ‘‘Compact Stars in Binaries’’ (1995).

[4] G. Mathews, G. J. Bazan, and J. J. Cowan, Astrophys. J. 391, 719 (1992).

[5] J. M. Stone and M. L. Norman, Astrophys. J. Suppl. 80, 753 (1992).

[6] J. F. Hawley, L. L. Smarr, and J. R. Wilson, Astrophys. J. Suppl. 55, 211 (1984).

[7] D. Lai, F. A. Rasio, and S. L. Shapiro, Astrophys. J. 420, 811 (1993).

[8] Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing Co., Boston (1996).

[9] F. D. Swesty, J. Comp. Phys. 127, 118 (1996).

[10] L. Blanchet, T. Damour, and G. Schäfer, Mon. Not. Roy. Astron. Soc. 242, 289 (1990).

[11] M. Ruffert, H.-T. Janka, and G. Schäfer, Astron. Astrophys. 311, 532 (1996).

[12] X. Zhuge, J. M. Centrella, and S. L. W. McMillan, Phys. Rev. D 54, 7261 (1996).

Author Biographies

F. Douglas Swesty

F. Douglas Swesty has been working in the fields of Computational Astrophysics and Nuclear Astrophysics for 10 years. He received his Ph.D. in Physics from SUNY Stony Brook in 1993. He currently holds positions as a Visiting Research Assistant Professor in the University of Illinois Department of Astronomy and as a Research Scientist at the National Center for Supercomputing Applications. His research interests are primarily focused on the physics of compact objects (black holes, neutron stars, supernovae, white dwarfs), radiation hydrodynamics, numerical relativity, and computational astrophysics. Some of his currently active projects are: research on the equation of state of hot, dense matter; 2-D and 3-D modeling of core collapse supernovae; modeling the merger of binary neutron star systems in 3-D; the development of adaptive mesh radiation hydrodynamic algorithms; and the development of parallel methods for radiation transport.

Paul Saylor

Paul Saylor has been a member of the Computer Science Department at Illinois since 1967. He is currently affiliated with Lawrence Livermore Laboratory, and has in the past been affiliated with Lawrence Berkeley Lab, Los Alamos Lab, and Oak Ridge Lab. He is a Principal Investigator for the NASA Earth and Space Grand Challenge on Simulating the Merger of Binary Neutron Stars. His interests are in high performance computing and the solution of large linear and nonlinear systems, with applications to astrophysics, computational electromagnetics, and groundwater flow.

Dennis C. Smolarski, S.J.

Dennis C. Smolarski, S.J. is an associate professor in the Department of Mathematics at Santa Clara University, specializing in computer science. His research focuses on iterative solutions of linear systems.

E. Y. M. Wang

E. Y. M. Wang is a third year graduate student in the department of physics at Washington University in St. Louis. He is currently working on his Ph.D. thesis at the National Center for Supercomputing Applications as a research assistant. The thesis examines compact object astrophysics, in particular the numerical modelling of neutron stars and their mergers in binary systems. This study involves knowledge and understanding of the structure of compact objects, the equation of state of dense matter, and the dynamical behaviour of Newtonian and general relativistic spacetimes. An understanding of numerical relativity and computational astrophysics will also be necessary for this work. A tentative date for the completion of his Ph.D. work is July 1999.