Large-Scale Inversion-Based Modeling of Complex Earthquake Ground Motion in Sedimentary Basins

Jacobo Bielak, Steven Day, Omar Ghattas, David O'Hallaron, and Jonathan Shewchuk
Carnegie Mellon, UC Berkeley, and San Diego State University

(This page will not be included in the submitted version)

May 17, 1999

Proposal submitted to the National Science Foundation Directorate for KDI program (NSF 99-29).


Part A

Project Summary

Large-Scale Inversion-Based Modeling of Complex Earthquake Ground Motion in Sedimentary Basins

J. Bielak, S. Day, O. Ghattas, D. O'Hallaron, and J. Shewchuk, Carnegie Mellon University, UC-Berkeley, and San Diego State University.

The main objective of the proposed research is to develop the capability for generating realistic inversion-based models of complex basin geology and earthquake sources by computer simulation, and to use this capability to model and forecast strong ground motion during earthquakes in the Los Angeles Basin and the San Francisco Bay Area. This problem is of great importance to hazard mitigation because assessing the ground motion to which structures will be exposed during their lifetimes is an essential first step in designing earthquake-resistant facilities and retrofitting existing structures. Thus, ground motion modeling and forecasting are a necessary precursor of the design process.

Computer modeling and forecasting of earthquake ground motion in large basins is a challenging and complex task. The complexity arises from several sources. First, multiple spatial scales characterize the basin response: the shortest wavelengths are measured in tens of meters, whereas the longest measure in kilometers, and basin dimensions are on the order of tens of kilometers. Second, temporal scales vary from the hundredths of a second necessary to resolve the highest frequencies of the earthquake source up to a couple of minutes of shaking within the basin. Third, many basins have highly irregular geometry. Fourth, the soils' material properties are highly heterogeneous. And fifth, geology and source parameters are only indirectly observable, and thus introduce uncertainty into the modeling process. While providing much useful information, current earthquake simulations (ours and others') in many cases are not capable of adequately reproducing observed seismograms.
The likely reason for this is that these models are based on a number of restrictive assumptions, made largely to reduce the computational effort. Driven by the need for greater fidelity in earthquake ground motion modeling, we propose to enhance our models by incorporating the following: (1) The ability to represent physical domains an order of magnitude larger than current models; (2) The ability to model frequencies that are greater than currently modeled; (3) Improved earthquake source models that we will derive from available observations by solving 3D inverse problems; (4) Improved basin material models, which, like our source models, will be based on the inversion of observations of ground motion within the basin; and (5) The ability to resolve boundary surfaces and sharp interfaces. The drive toward greater fidelity in earthquake modeling introduces computational challenges in all stages of the simulation process, from preprocessing to solving to postprocessing. To address these challenges, we propose to pursue a concerted, unified effort in parallel 3D mesh generation, parallel 3D seismic inversion, and large-scale distributed visualization. We expect to make important advances in physical modeling and algorithm and software tool development for multi-teraflops computers, while gaining physical insight into earthquake ground motion. Because of the critical role that ground motion plays in infrastructure design, the accelerated availability of suitable simulation methodologies will have a direct impact on public safety and welfare.


Part C

Project Description

1 Introduction

Our main objective is to develop the computational capability for generating realistic inversion-based models of complex basin geology and earthquake sources, and to use this capability to simulate and forecast strong ground motion during earthquakes in the Greater Los Angeles Basin (GLAB) and the San Francisco Bay Area (SFBA). This problem is of great importance to hazard mitigation because assessing the ground motion to which structures will be exposed during their lifetimes is an essential first step in designing earthquake-resistant facilities and retrofitting existing structures. Knowledge of the anticipated ground motion is necessary to determine the inertial forces that a structure must withstand during an earthquake. Thus, ground motion modeling and forecasting are a necessary precursor of the design process.

Our reasons for choosing the Los Angeles and San Francisco regions are that (1) they are the most highly populated seismic regions in the country, (2) they have well-characterized geological structures (including a varied fault system), and (3) extensive records of past earthquakes are available. While the two regions share these characteristics, their surficial geology differs significantly. The GLAB consists of mostly stiff soils, whereas the SFBA contains soft fill and mud deposits. Hence, in addition to their own intrinsic importance, the two areas represent a range of behavior that occurs in many other seismic regions.

Computer modeling and forecasting of earthquake ground motion in large basins is a challenging and complex task. The complexity arises from several sources. First, multiple spatial scales characterize the basin response: the shortest wavelengths are measured in tens of meters, whereas the longest measure in kilometers, and basin dimensions are on the order of tens of kilometers.
Second, temporal scales vary from the hundredths of a second necessary to resolve the highest frequencies of the earthquake source up to a couple of minutes of shaking within the basin. Third, many basins have highly irregular geometry. Fourth, the soils' material properties are highly heterogeneous. Fifth, geology and source parameters are observable only indirectly, and thus introduce uncertainty into the modeling process.

Because of its modeling and computational complexity and its importance to hazard mitigation, earthquake simulation is among the most challenging of the Grand Challenges. In recognition of this, the National Science Foundation established a Grand Challenge Applications Group in 1993 dedicated to earthquake ground motion modeling. The group, centered at Carnegie Mellon University, consists of earthquake engineers, seismologists, geologists, computational mechanicists, computer scientists, and computer graphics and visualization specialists. The present proposal builds on the accomplishments of the Quake Project, as we call it, and broadens its scope to encompass the complexity needed to forecast ground motion due to strong earthquakes. To address the new modeling challenges we propose to pursue a concerted, unified effort in parallel 3D mesh generation, parallel nonlinear optimization methods, and large-scale distributed visualization.

2 Current capabilities

The Quake group at CMU has been working since 1991 on modeling earthquakes in large basins on parallel supercomputers (see www.cs.cmu.edu/~quake). Over the past eight years we have used such distributed memory machines as Intel's iWarp and Paragon XP/S, Thinking Machines' CM-2 and CM-5, and Cray's T3D and T3E for the most computationally demanding portion of the modeling. The basic computational steps include (1) generating an unstructured mesh that resolves the elastic wavelengths in the soil; (2) partitioning the mesh into subdomains that are mapped to processors of a target parallel machine; (3) discretizing the governing elastic wave propagation equations by finite elements; (4) integrating these equations in time by an explicit finite difference scheme; and (5) visualizing and postprocessing the resulting ground motion [11]. The main computational effort is in Steps 3 and 4; that is why these are done on parallel machines. However, as our problem size has increased,


the remaining (currently sequential) steps are becoming bottlenecks. Our computations are based on unstructured mesh algorithms that we have developed over the last eight years. Unstructured meshes introduce significant complexities on parallel machines. However, in geological structures like sedimentary basins, where seismic wavelengths vary significantly throughout the domain, unstructured meshes allow a tremendous reduction in the number of grid points (compared to uniform meshes) because element sizes can adapt locally to the wavelengths of propagating waves. Furthermore, a mesh tailored to local wavelengths permits much longer time steps without suffering instability, since it is less hampered by the Courant condition.

Currently, our largest simulation stems from modeling the 1994 Northridge Earthquake in the San Fernando Valley (SFV). Typical snapshots from the simulation are depicted in Figure 1. The SFV simulation uses 98 million tetrahedral finite elements, 17 million grid points, 50 million resulting ODEs, 6,000 time steps, 24 GB of main memory, 256 Cray T3E processors, and 4 hours of runtime to simulate wave propagation. The preprocessing computations (generating and partitioning the mesh, which is done just once for each basin model) are carried out on a large-memory sequential machine (actually one processor of a DEC 8400 with 8 GB of memory), and require several days of wall clock time. Much of this time is spent swapping, since the mesh generation requires 10 GB of memory. Parallelizing the meshing component is a very difficult problem, but will be essential for solving larger problems.
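The interaction between wavelength-adapted element sizes and the Courant condition can be sketched with back-of-the-envelope arithmetic (the velocity values and points-per-wavelength count below are illustrative assumptions, not the actual mesher's criteria):

```python
# Rule-of-thumb mesh sizing: element size adapts to the local shortest
# S wavelength. All velocities below are assumed, illustrative values.

def element_size(v_s, f_max, points_per_wavelength=10):
    """Local element size h that resolves the shortest S wavelength."""
    return v_s / (f_max * points_per_wavelength)

def stable_dt(h, v_p, courant=0.5):
    """Explicit time step allowed by the local Courant condition."""
    return courant * h / v_p

# Soft basin soil (slow) gets small elements; stiff rock gets large ones.
h_soft = element_size(v_s=200.0, f_max=1.0)     # 20 m elements
h_rock = element_size(v_s=3000.0, f_max=1.0)    # 300 m elements

# Because element size tracks the local velocity, the local stability
# limits h / v are comparable everywhere, so a wavelength-adapted mesh
# avoids the tiny global time step a uniform fine mesh would force.
dt_soft = stable_dt(h_soft, v_p=400.0)
dt_rock = stable_dt(h_rock, v_p=6000.0)
```

With these assumed velocities the soft-soil and rock regions yield essentially the same stable time step, which is the point of the preceding paragraph: local adaptation keeps the Courant limit roughly uniform across the mesh.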



Figure 1: Visualization of the simulated ground motion of a Northridge aftershock. (a) Interior view, showing displacement pattern around earthquake source 2.5 s after onset of aftershock. (b) Oblique view from above ground, showing amplification of ground motion within basin after 5.8 s. (Animation by G. Foss, Pittsburgh Supercomputing Center; Electronic Theater, ACM SIGGRAPH 1997)

The propagation path illustrated in Figure 1(a) exhibits the clear three-dimensional (3D) character of the seismic excitation, while Figure 1(b) illustrates how the basin affects the resulting motion of the free surface. The important influence of 3D basin structures on ground motion is well established empirically (e.g., [43, 34, 18]) and theoretically (e.g., [86, 36, 92, 59, 41, 68, 89]).

Another earthquake simulation produced by our current finite element (FEM) code is depicted in Figure 2, taken from [45]. In this example, we compare recorded ground motion at two sites from an aftershock of the 1995 Kobe (Hyogoken-Nambu) earthquake with an FEM simulation of the event. The simulation uses a 3D model of a portion of the Osaka basin in the Kobe area, upon which many of the recording stations lie. Also shown are simulations based on flat-layer 1D models of the structure beneath each station. For this small event, the source is relatively simple. Recordings at stations outside the basin, such as KBU, can be simulated reasonably well with a flat-layer model, which neglects the basin edge effects. For stations in the basin, such as RKI, however, 3D wave propagation effects are large, affecting both amplitude and duration of ground motion. Figure 2 shows that 3D simulations can successfully predict features of the ground motion in basins which are of considerable engineering importance, yet would not be predicted by flat-layered models. For example, the








enhanced duration of strong shaking and the prominence of secondary arrivals in the RKI data are captured by the 3D simulations. At the same time, substantial misfits between data and 3D simulations are still present. Since the source in this case is very simple, the remaining misfit is clearly attributable to our incomplete knowledge of the 3D seismic velocity structure of the Kobe area. Our simulations of the much more complex mainshock [45], as well as simulations by others (e.g., [65, 46]), highlight additional issues arising from our incomplete knowledge of the source. Clearly, then, progress in ground motion simulation requires more than just computational advances. It is essential that advances in our computational capabilities be matched by advances in our models for both the source and the 3D seismic velocity structure in regions of high seismic hazard. For such advances in modeling capabilities to occur, extensive use must be made of available records obtained during actual earthquakes, within an inversion framework.

Figure 2: Comparison of ground response due to an aftershock of the 1995 Kobe (Hyogoken-Nambu) earthquake with an FEM simulation [45]: NS, EW, and UD displacement components (cm) versus time (sec), comparing observed, 3-D FEM, and 1-D Green's function seismograms. (a) KBU site. (b) RKI site.

3 Proposed research—Overview

As we noted in the previous section, while current earthquake simulations (ours and others') provide much useful information, in many cases they are not capable of adequately reproducing observed seismograms. The reason for this is that these models are based on a number of restrictive assumptions, made largely to reduce the computational effort. Our Southern California model is restricted to (1) the SFV, because the entire GLAB is too large for the frequencies and velocities we model; (2) a highest resolved frequency of 1 Hz; (3) a softest soil having a 200 m/s shear wave velocity; (4) a linear constitutive model; (5) a simple source based on just 1D inversion; (6) a simple geological model not based on inversion at all; (7) kinematic earthquake sources; and (8) deterministic material and earthquake source parameters. Similar restrictions apply to our Kobe simulations. There, the highest resolved frequency is also 1 Hz, but the softest soil has a shear wave velocity of 600 m/s, and only a small portion (26 km × 40 km) of the Osaka basin has been considered.

Relaxing these restrictions raises the computational stakes significantly. In particular, (1) extending our SFV model to the entire GLAB will increase our problem size by a factor of ten; (2) increasing the highest resolved frequencies (to values desirable for structural engineering purposes) implies a many-fold increase in problem size (since the number of elements grows roughly as the cube of the maximum resolved frequency); (3) solution of the inverse problem to determine basin and source parameters necessitates repeated solutions of the forward problem, perhaps hundreds of them; (4) the estimation of seismic ground motion in a particular basin also requires repeated simulations for all possible earthquake scenarios—perhaps ten to fifty times.
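The compounding of these factors can be tallied with simple arithmetic (the 2 Hz frequency target and the run counts below are assumed purely for illustration):

```python
# Rough cost multipliers implied by relaxing the current restrictions.
# The frequency target and run counts are illustrative assumptions.

def element_count_factor(f_new, f_old):
    # In 3D, the number of elements grows roughly as frequency cubed,
    # since each spatial dimension needs proportionally finer resolution.
    # (Explicit time stepping adds a further factor of f_new / f_old.)
    return (f_new / f_old) ** 3

domain_factor = 10                               # SFV -> entire GLAB
freq_factor = element_count_factor(2.0, 1.0)     # 1 Hz -> 2 Hz: 8x elements
forward_solves = 100                             # repeated solves for inversion

total_factor = domain_factor * freq_factor * forward_solves
```

Even with these modest assumptions the multipliers compound to a problem thousands of times larger than the current SFV simulation, which motivates the conclusion of the next paragraph.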
Anticipated increases in raw computing power over the next five years will not be sufficient by themselves for the successful understanding and forecasting of earthquake ground motion.

An important limitation to accuracy is our insufficient knowledge of the earthquake source, the basin structure, and the behavior of the constituent soils on the length scale of a seismic wavelength. To attain our goals, simultaneous advances need to be made in physical modeling, parallel algorithms for seismic inversion, mesh generation, and software tools for the next generation of multi-teraflops computers. Thus we arrive at the present proposal. Below we introduce the main physical and computational aspects of the proposed research; subsequent sections describe the problems in more detail and outline our technical approach.

Driven by the need for greater fidelity in earthquake ground motion modeling, we propose to develop models that will incorporate the following capabilities: (1) the ability to model physical domains at least an order of magnitude larger than current models, in order to incorporate meta-basins such as GLAB, as well as remote faults; (2) the ability to model shorter wavelengths, higher frequencies, and softer soils than are currently modeled, which becomes important as we develop more detailed models of the earthquake source, attain higher resolution of basin structure and lithology, and (in the more distant future) incorporate inelastic soil behavior; (3) improved earthquake source models, which we will derive from available observations by solving inverse problems using optimization techniques; unlike previous (flat-layered) inversion-based models, we will incorporate (3D) local site and basin effects into the inversion process; (4) improved basin material models which, like our source models, will be based on the inversion of observations of ground motion within the basin; and (5) the ability to resolve boundary surfaces and sharp interfaces in unstructured meshes with highly heterogeneous element density. This capability will be essential for representing topography and localized regions with large material contrasts.
Furthermore, it will be very attractive for future studies of earthquake rupture dynamics.

The drive toward greater fidelity in earthquake modeling introduces severe computational challenges in all stages of the simulation process, from preprocessing, to solving, to postprocessing. The focus of our research will be on the following three problems, which are currently the biggest obstacles to progress:

Parallel mesh generation. The move to high-resolution models of the GLAB that will accurately resolve the appropriate physical features for frequencies of engineering interest will necessitate the use of meshes on the order of 10^10 elements. This number will be even greater if the computational domain needs to be extended to encompass earthquake sources at moderate distances from the basin, as in the 1992 Landers earthquake. The increase in problem size poses difficulties because our current meshes are at the limit of what can be generated on sequential machines. Thus, we are faced with the challenge of developing parallel meshing algorithms that scale to huge sizes.

Seismic inversion. We propose to develop scalable parallel nonlinear optimization methods for inversion of the basin and source model parameters, using the 3D wavefield simulation capabilities developed under the Quake project. This inversion task is made possible by the extensive availability of ground motion records obtained in basins during past earthquakes. The inversion of source and basin model parameters gives rise to extremely large nonlinear least squares optimization problems, with variables and constraints numbering in the millions or greater. Seismic inverse problems of such scale have not been attempted to date. Their solution will be possible only with the use of highly parallel computers. Most of the focus on parallel algorithms in computational science, however, has been on the forward problem.
The proposed work will build on our past research on seismic inversion and parallel numerical algorithms for large-scale PDE-constrained nonlinear optimization.

Remote interactive visualization. The output of ground motion simulations in large basins can be of the order of hundreds of gigabytes to terabytes. Because of their large size, the datasets must reside at the supercomputing site where they are generated. But visualizing and analyzing these massive remote datasets interactively from our local sites is crucial for interpretation and physical insight. To address this problem we aim to develop a framework for providing remote interactive visualization service using existing visualization packages. We will use this framework as the basis for developing new techniques in performance-portable and collaborative visualization. Our services will be implemented and tested first among members of the Quake team located in Pittsburgh, and later between all the members located in Pittsburgh and California.

In solving these problems, we plan to achieve the higher goal of creating an infrastructure for earthquake simulation that can be employed in the future (by other researchers or ourselves) for a broad variety of investigations; for example, of nonlinear constitutive laws governing soil behavior, of probabilistic source models, or of questions yet unforeseen.

4 Proposed research—Physical modeling aspects

Key objectives of the proposed research are (1) to develop an advanced capability for computer simulation of earthquake strong ground motion in complex geologic environments, using inversion-based models, and (2) to apply this capability to model and forecast strong ground motion during earthquakes in the Greater Los Angeles Basin (GLAB) and the San Francisco Bay Area (SFBA). Each of these regions is highly urbanized, seismically active, and geologically complex. In each case, the earthquake hazard is enhanced by the presence of deep sedimentary basins within the urbanized region.

Realistic simulations of earthquake ground motion require realistic models for both the source (fault slip as a function of space and time) and regional geology (seismic velocities as a function of geographical and depth coordinates). The ground motion modeling problem is complex because of the large range of spatial scales present in the geology and the wavefield, the three-dimensional variability of both, and the large range of spatial and temporal scales present in the source. Below we discuss briefly the status of regional-scale seismic velocity models, the status of kinematic source models for earthquakes, and our proposed application of 3D FEM simulations to advance both velocity and source models through inversion of earthquake recordings. Velocity model and source model inversions are interdependent, and improvements in each will feed back into the other.

4.1 Regional seismic velocity models

Under the sponsorship of the Southern California Earthquake Center (SCEC), we have assembled a 3D seismic velocity model of the GLAB [51, 50], illustrated in Figure 3(a), which provides a starting point for earthquake simulation research in southern California. A crude model of the SFBA is also available [81]; during the period of the proposed work, a more detailed model of SFBA will become available from the USGS (e.g., [20]), and will be further refined under a joint USGS/SCEC study funded by Pacific Gas and Electric Co. That model will provide a valuable starting point for SFBA simulations. One of us (Day) is the coordinating PI for the latter study, and two of us (Bielak and Day) are participating researchers.

Our GLAB model includes the major populated basins (Los Angeles basin, Ventura basin, San Gabriel Valley, San Fernando Valley, and San Bernardino Valley). Several features are particularly important for the proposed work. (1) The GLAB model incorporates information from geological mapping, parameterized as a set of reference surfaces, combined with very flexible rule-based algorithms for constructing the velocity function between bounding surfaces. This approach ensures that where a sharp, geometrically complex boundary such as the basement/sediment interface is identified, its geometry is retained in the model. This approach is ideally adapted to the unstructured mesh methodology of the proposed study, since we can conform the mesh to precisely mimic those sharp boundaries in the velocity model. (2) The velocity model has been adopted as a reference model by the SCEC community. Thus, we can take advantage of a great deal of SCEC research directed at refining and extending it using additional geophysical, geological, and geotechnical data. (Two of us, Magistrale and Day, are co-chairs of the SCEC task group that is coordinating these efforts.)
(3) The object-oriented model specification has sufficient generality that velocity heterogeneity can be represented over an extremely wide range of length scales, with much greater flexibility than would be the case for a simple mesh-based model description.

A consideration of some of these length scales illustrates why new algorithms are imperative if we are to achieve major advances in ground motion modeling capability in the future. The maximum length scale that must be represented in an earthquake simulation for the GLAB model region is set by the size of the basins and the source length of major earthquakes (several hundred kilometers). The minimum length scale that must be represented is of the order of the minimum seismic wavelength, which in turn is set by the quotient of the minimum S wave velocity and maximum frequency to be modeled. For example, for a model with minimum seismic velocity given by the threshold between NEHRP site classes D and E (S velocity 180 m/s averaged over the upper 30 m), and a maximum frequency of 1.5 Hz, the minimum wavelength is roughly 120 m. Thus, the ratio of maximum to minimum physical length scale exceeds 10^3, to which must be added the requirements of numerical resolution (another factor of order 10). This ratio of scales presents a formidable challenge to 3D simulation methods. All current 3D, regional-scale ground motion simulation work, based on regular grids, must

C-5

circumvent this barrier by imposing an artificial lower bound on seismic velocities nearly an order of magnitude above actual minimum measured values. Because near-surface seismic velocities exert a strong influence on ground motion (e.g., [19, 4]), the artificial bound on seismic velocities produces artificially low ground motions in simulations (e.g., [47]). Furthermore, this barrier is so large that it cannot be overcome through increases in computing speed alone. Only through multi-resolution algorithms, such as the ones we have been developing, can we overcome this barrier.

The GLAB and SFBA models were constructed using a large volume of geological, geotechnical, and geophysical data. To achieve major further advancements in our modeling capabilities, we must now make extensive use of available ground motion records obtained during actual earthquakes, within an inversion framework.
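The length-scale estimate in the preceding discussion (minimum wavelength from the NEHRP D/E-threshold S velocity and a 1.5 Hz target) reduces to a few lines of arithmetic:

```python
# Length-scale bookkeeping for the GLAB model (values from the text;
# the 300 km maximum dimension stands in for "several hundred km").
v_s_min = 180.0      # m/s: NEHRP site class D/E threshold (upper 30 m avg)
f_max = 1.5          # Hz: target maximum frequency

lambda_min = v_s_min / f_max          # minimum wavelength: 120 m

L_max = 300e3                         # m: basin/source length scale
scale_ratio = L_max / lambda_min      # 2500, i.e. well over 10**3

points_per_wavelength = 10            # numerical resolution requirement
grid_ratio = scale_ratio * points_per_wavelength   # ~10**4 per dimension
```

The per-dimension ratio of roughly 10^4, cubed for a 3D volume, is what rules out uniform regular grids at realistic minimum velocities and motivates the multi-resolution approach.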

4.2 Seismic velocity inversion

We will apply formal optimization approaches to the problem, focused initially on simultaneously fitting large suites of long-period ground motion data from large earthquakes (e.g., Landers, Northridge, Whittier Narrows) and their larger aftershocks. We propose an incremental approach, beginning with simple model parameterizations and selected earthquake recordings, progressing to more complex models and comprehensive data sets. Concurrently, SCEC will be engaged in extensive efforts to refine the GLAB model by testing it against seismic travel time and waveform data; we will incorporate all well-validated model modifications that devolve from the SCEC work.

Initially, we will apply linearized inversion, using simple, smooth parameterizations of the spatial variation of the sediment velocity. We have already found, from numerical experiments with the GLAB model, that long-period ground motions in the basins are quite sensitive to long-wavelength spatial averages of the basin sediment velocity, and that long-wavelength seismic observations can discriminate among sediment velocity models (Figure 3(b)). Subsequent comparison with oil well sonic logs confirmed the results of the long-period waveform fitting in the example of Figure 3(b). For the purpose of these preliminary velocity inversions, we will want to minimize the complicating effects of the source. This can be accomplished to some extent by focusing initially on recordings of relatively simple events, including aftershocks (with the stipulation that they must be large enough to have a good signal over long periods). We can also reduce somewhat our sensitivity to source details of large earthquakes by using recordings from relatively large distances. As our velocity and source models improve in tandem, we will relax these restrictions and simultaneously invert a larger volume of waveform data.
As we gain experience with an initial approach based on linearization and smooth model perturbations, we will use more finely scaled representations of the seismic velocity structure, and apply nonlinear optimization techniques. For example, patterns of waveform amplitudes can be used to invert for changes in the interface surface configurations (e.g., [7, 6]). The feasibility of such an approach is suggested by a study of a simple 3D basin model by the Quake Project. In that study, buried ridges (convex upward features) defocus upcoming waves, producing lower amplitudes at the surface, and buried basins (concave upward features) focus upcoming waves, producing higher amplitudes at the surface above them.
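As a miniature of the nonlinear optimization step described above, the sketch below runs Gauss-Newton iterations on a two-parameter toy misfit. The amplitude-decay model, parameter values, and noise level are all illustrative assumptions standing in for the real PDE-constrained waveform misfit:

```python
import numpy as np

# Toy nonlinear least-squares inversion: recover the parameters of a
# synthetic amplitude-decay model A * exp(-k * x) from noisy
# "observations". Everything here is an illustrative stand-in.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
m_true = np.array([2.0, 0.3])                    # true [A, k]

def forward(m):
    """Forward model: predicted amplitudes for parameters m = [A, k]."""
    return m[0] * np.exp(-m[1] * x)

obs = forward(m_true) + 0.01 * rng.standard_normal(x.size)

m = np.array([1.0, 0.1])                         # initial guess
for _ in range(20):                              # Gauss-Newton iterations
    r = forward(m) - obs                         # residual vector
    J = np.column_stack([
        np.exp(-m[1] * x),                       # d forward / dA
        -m[0] * x * np.exp(-m[1] * x),           # d forward / dk
    ])
    dm, *_ = np.linalg.lstsq(J, -r, rcond=None)  # linearized LSQ step
    m = m + dm                                   # model update
```

The production problem replaces the two parameters with millions of velocity or slip unknowns and each residual evaluation with a full 3D wave propagation solve, which is why scalable parallel optimization algorithms are central to the proposal.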

4.3 Seismic source inversion

Realistic characterization of the earthquake source is an essential requirement for reliable strong motion forecasting. For simulating earthquake ground motion, the source is specified as a propagating (space- and time-dependent) displacement discontinuity, or dislocation, on the fault surface. Up to the shaking amplitudes at which soil nonlinearity becomes significant, the ground motion prediction is linearly related to the dislocation function, and uncertainty in the dislocation specification is one of the dominant sources of uncertainty in ground motion simulations. In order to have a reliable methodology for strong motion simulation, we must improve the spatial and temporal resolution with which we can characterize the source, in tandem with improvements in our computational capability. Most of our empirical understanding of earthquake sources on the temporal and spatial scales most relevant to strong motion modeling comes from the analysis of recorded seismic waves. In recent years, our understanding


[Figure 3(b) waveform traces (S1/O1, S2/O2, SZ/OZ pairs) at sites Featherly (x3), Pomona (x4), Downey (x1), Pasadena (x3), LAS (x1), Inglewood (x1), Tarzana (x4), and Obregon (x3); 50 sec time scale.]
of the source has advanced rapidly as extensive sets of high-quality, near-source strong motion recordings have become available from a number of large earthquakes, such as the Mexico (1985), Loma Prieta (1989), Landers (1992), Northridge (1994), and Kobe (1995) earthquakes. The picture of rupture that has emerged from the study of these and other earthquakes is one of very irregular space-time behavior, with complexity present at all observationally resolvable scales. The dislocation amplitude is invariably highly variable over the fault surface, as are its propagation velocity and rise time (e.g., [60, 42, 8, 13, 88, 26, 90, 71, 54]).

The computational tools developed for the Quake project can be applied to earthquake source inversion, and provide an opportunity to improve source imaging in complicated geologic environments such as GLAB and the SFBA. An earthquake source inversion requires a model to account for the effects of the propagation path, that is, a Green's function.

Figure 3: (a) A slice at 30 meters depth through the GLAB model (Magistrale et al., 1998). Variations in P-wave velocities are shown (color bar); note the complex basin structure. A yellow line indicates the current model boundary. (b) Sediment velocity sensitivity study from unpublished work by K. Olsen, H. Magistrale, and S. Day comparing long-period Landers waveforms (red; O1 and O2 horizontal, OZ vertical traces) recorded in the Los Angeles basin area (sites shown in center panel) to synthetic seismograms from 3D finite difference calculations using a preliminary version of the GLAB model (black) and a modified version of the GLAB model with perturbed basin sediment seismic velocity values (blue). Note that the waveform fits of the perturbed model are worse than those of the original model at basin sites (Inglewood, Tarzana, Downey, LAS, Obregon).
In this context, the distortions introduced into the wavefield during propagation from source to recording site can be considered as "noise" that interferes with our ability to image source processes, but which can be removed if we can model the path Green's function accurately. Source inversions performed to date have all approximated the geologic structure with (at best) flat-lying layers. We can improve source inversions that use seismic data recorded in complex geologic environments such as the GLAB by computing Green's functions for a realistic, 3D model of the geology. We need to evaluate the Green's function for a very large number of source points (a dense grid on the fault surface) and receiver points (all recording sites to be used in the inversion). Since the number of recording sites will always be far smaller than the number of fault points we want to include in our image, the most efficient approach is to solve the reciprocal problem [3], which requires three runs of the 3D model per recording station. We propose to apply 3D simulations to source inversion, beginning with the Landers and Northridge earthquakes. At the beginning of this effort, we will use conventional linear inversion methods, but with Green's functions for the 3D GLAB model. As we advance in our understanding of how 3D wave propagation influences the inversion, we will proceed to more advanced, and more flexible, nonlinear optimization algorithms such as those discussed in Section 5.2.
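The economics of the reciprocal formulation can be sketched with a toy calculation: once the receiver-side Green's functions are computed and stored, a synthetic seismogram for any trial slip distribution is just a sum of convolutions over fault points, with no further wave-propagation runs. All array sizes and names below are illustrative, not taken from the Quake codes.

```python
import numpy as np

# Toy reciprocity-based synthesis. green[k] holds the (stored) response at
# one station to a unit impulse at fault point k; by reciprocity these come
# from simulations with the source placed at the station. slip_rate[k] is
# the source time function at fault point k. All sizes are illustrative.
rng = np.random.default_rng(0)
n_fault, nt = 500, 200                      # fault grid points, time samples
green = rng.standard_normal((n_fault, nt))  # stand-in for stored 3D output
slip_rate = np.zeros((n_fault, nt))
slip_rate[:, 10] = rng.random(n_fault)      # impulsive slip at sample 10

# Seismogram = sum over fault points of (Green's function * slip rate)
seis = sum(np.convolve(green[k], slip_rate[k])[:nt] for k in range(n_fault))

# Bookkeeping: the direct approach needs one 3D run per fault point; the
# reciprocal approach needs 3 runs (one per component) per station.
n_stations = 20
direct_runs = n_fault
reciprocal_runs = 3 * n_stations
```

The last three lines carry the point: one 3D run per fault point versus three runs per recording station, a large saving whenever stations are far fewer than fault points.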


5 Proposed research—Computational aspects 5.1 Mesh generation Mesh generation is so inherently complex that any meshing algorithm is likely to be foiled by unusual input geometries having small angles, features of greatly disparate size, or complicated topologies—especially when a mesh having millions of elements is needed—unless the algorithm’s results are theoretically verifiable. Fortunately, this decade has brought about Delaunay refinement algorithms [23, 24, 25, 30, 66, 76, 78] that not only work well in practice, but have provable bounds on element quality and mesh grading. Delaunay refinement methods operate by maintaining a Delaunay triangulation or constrained Delaunay triangulation, which is refined by inserting additional vertices until all boundaries are represented by edges or faces of the triangulation, and all constraints on element quality and size are met. Part of our work has been to remove the last barrier to the practicality of two-dimensional Delaunay refinement algorithms—their tendency to fail in the presence of small input angles—by proposing (and proving the correctness of) a modification that ensures that they always terminate, and that the mesh quality degrades gracefully as the input degrades [76]. Our program Triangle [74] is an industrial-strength implementation of two-dimensional Delaunay refinement that has been freely available to the public for three and a half years. Triangle has thousands of users, with applications ranging from radiosity rendering and terrain databases to stereo vision and image orientation, as well as dozens of variants of numerical methods. Triangle is also powerful and robust enough to have been licensed for inclusion in ten commercial programs, for purposes ranging from ocean floor database interpolation to cartoon animation. 
We have also devised the first tetrahedral mesh generation algorithms that simultaneously offer guaranteed bounds on tetrahedron quality, edge lengths, and spatial grading of tetrahedron sizes (as opposed to uniform sizes), the ability to tetrahedralize general straight-line domains (as opposed to unconstrained point sets or polyhedra with holes), and truly satisfying performance in practice [76, 78]. The jump from two dimensions to three is surprisingly difficult, and is impeded by fundamental geometric limitations [77]. Nonetheless, the algorithm embodied in Pyramid, our three-dimensional mesh generator, provably generates a nicely graded mesh whose tetrahedral elements have circumradius-to-shortest-edge ratios bounded below two. In practice, our Delaunay refinement algorithms outperform their provable bounds. Pyramid, for instance, regularly generates meshes of tetrahedra whose dihedral angles are bounded between 20° and 150°. We expect to release Pyramid to the public within a year, giving researchers free access to tetrahedral mesh generation facilities that in some ways surpass commercial programs. Because we need meshes larger than can be stored in a single processor's memory, we will require out-of-core or parallel versions of Pyramid. The challenge of parallelizing mesh generation is daunting, but not because there is no parallelism to exploit. Delaunay refinement algorithms operate by inserting vertices, and each vertex insertion is in principle a local operation; it causes a small subset of triangles or tetrahedra to be deleted and replaced by a different set. Ideally, many such vertex insertion operations could take place simultaneously if their effects were independent, but that is the catch. It is difficult to tell a priori whether or not two simultaneously inserted vertices will cause the replacement of an overlapping or adjacent set of triangles or tetrahedra.
(The majority of the running time of a vertex insertion operation is spent computing the set of deleted elements.) Were two simultaneous vertex insertions to take place with overlapping sets of elements and without careful handshaking between the two processors, the result would be disastrous. Furthermore, each processor must have fast access to the data structures describing the portion of the mesh it is modifying. We intend to pursue two approaches to parallelizing general-purpose unstructured mesh generation, so that we may generate meshes larger than any one machine’s physical memory. The first approach, which seems certain to succeed but tedious to implement, is based on the use of bucketing to break a mesh into smaller pieces. The second approach is based on data speculation and is riskier, but may yield faster speed and much greater ease of parallelization of Pyramid and many other applications. Only during the course of our research will it become clear which approach we are wisest to choose.


Our first approach is composed of two separate steps: first, implement an out-of-core mesh generation strategy; then expand this strategy to allow multiple processors to work simultaneously on a single mesh. We take advantage of spatial locality by dividing the domain being meshed into a grid of square or cubical buckets. Each data structure (i.e., node or element) is associated with a bucket, and each bucket might be stored in a separate file on disk. A processor is (temporarily) allocated a 6 × 6 × 6 block of buckets. Vertex insertions may only take place in a processor's inner buckets, to ensure that a vertex insertion does not change data structures that are not in the processor's memory. (We presuppose that the mesh has been sufficiently refined in an initial sequential step, prior to bucketing, to ensure that the effect of a vertex insertion cannot travel further than one bucket away.) Once a processor is satisfied with the quality of the elements in the inner buckets assigned to it, it writes its buckets to files, then chooses a different block of buckets (which may overlap the previous block) and refines the new region of the mesh until it is satisfactory. In this manner, one processor can eventually generate as large a mesh as will fit in all the disk space available to it. Several processors can operate simultaneously, as long as they are assigned disjoint sets of buckets. The reason for assigning a processor a 6 × 6 × 6 array of buckets, rather than one bucket at a time, is to make it possible to "sew up the seams": recall that a processor cannot insert a vertex too near the edges of its working region. Figure 4 gives a small example of how a region's buckets might be divided into 6 × 6 arrays so that all buckets of the mesh may be refined.
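The phase schedule this implies can be sketched as follows (2D case, as in Figure 4). The grid size, block size, and offsets below are illustrative; the invariant being checked is that every bucket is an interior bucket of some block in at least one phase.

```python
# Sketch of the phased bucket schedule (2D analogue of the 6 x 6 x 6 case).
# Disjoint k x k blocks tile the bucket grid; only a block's interior
# (k - 2) x (k - 2) buckets may receive vertex insertions, since an
# insertion can reach one bucket away. Shifting the block tiling by k/2 in
# each axis between phases lets every bucket be interior in some phase.

def phase_buckets(n, k, offset_x, offset_y):
    """Buckets of an n x n grid refinable for one block-tiling offset."""
    refinable = set()
    for x in range(n):
        for y in range(n):
            lx = (x - offset_x) % k      # position within the owning block
            ly = (y - offset_y) % k
            if 1 <= lx <= k - 2 and 1 <= ly <= k - 2:
                refinable.add((x, y))
    return refinable

n, k = 24, 6
all_phases = [phase_buckets(n, k, ox, oy)
              for ox in (0, k // 2) for oy in (0, k // 2)]
covered = set().union(*all_phases)
```

With four shifted tilings (the four phases of Figure 4), the union of the refinable sets covers the whole grid even though no single phase does.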

Figure 4: Four separate "phases" in the generation of a very large mesh. Thin lines show the division of the domain into buckets. Bold lines show how the buckets might be grouped into regions that a single processor can hold in memory at once. Vertices cannot be inserted into shaded buckets, because such an insertion might require access to data structures in an adjacent bucket not currently in the processor's memory. However, every bucket can be refined during at least one phase.

Our second, riskier approach to parallel unstructured mesh generation employs a technique for resolving data dependencies called thread-level data speculation (TLDS) [79, 80]. TLDS has been investigated as a method of improving modern processors by exploiting a program's inherent thread-level parallelism. However, we intend to use TLDS not within a processor, but rather built atop the shared memory mechanism in a shared memory multiprocessor (SMP) or a software distributed shared memory (DSM) system. TLDS can extend an SMP or DSM with the ability to speculate on the data dependences between multiple threads (running on separate processors), and to detect and recover from data hazards when they occur. This method is generally applicable and will also benefit PDE modeling efforts, as well as other applications far afield of earthquake engineering. The effort to incorporate TLDS into a DSM, which we will be able to leverage, is currently being spearheaded by Carnegie Mellon's Todd Mowry under NASA funding. Previous approaches to parallel meshing are surveyed by de Cougny and Shephard [27]; the most successful approach is probably that of de Cougny, Shephard, and Özturan [28]. De Cougny et al. partition the domain with an octree and assign each processor a subset of octants in a manner that, one hopes, will ensure that the mesh generation process is load balanced.
Octants at the leaves of the octree are meshed using templates (in the interior of the domain) or advancing front techniques (near the domain boundaries). One distinguishing feature of our approach is that we use simpler, more memory-efficient data structures: we have sequentially created meshes of nearly one hundred million tetrahedra, a feat probably not possible today with the de Cougny et al. approach due to memory limitations. Additionally, our proposed parallelization techniques are simpler to implement, and are likely to achieve good load balance straightforwardly. We do not require complicated algorithms that carefully subdivide the domain for optimal load balance, because assignments of portions of the domain to processors are

only temporary. Because of the complexity of parallel mesh generation, we believe that the simplest approaches are most likely to yield fully practical implementations. Of course, as with our sequential implementations, we plan to release our parallel meshers for public use, and we expect them to be used in applications far afield of earthquake engineering.

5.2 Seismic inversion Full nonlinear optimization techniques are the most general, powerful, and flexible approach to the inverse problem. They apply when the surface response does not depend linearly on the inversion parameter, as in inversion of spatially heterogeneous seismic velocities. They allow more general misfit functions between observations and model predictions, as well as the ability to apply additional inequality constraints on the inversion, e.g., that the inversion parameters not stray too far from anticipated models. Finally, they apply when the wave propagation problem is nonlinear, which is of long-term interest. However, the great computational costs of nonlinear optimization have limited its application largely to 2D seismic inversion. These costs stem from the need to compute gradients and possibly Hessian information for the misfit function, and the possibility of many iterations before convergence is reached. We propose to develop advanced parallel numerical methods and exploit highly parallel supercomputers to create efficient nonlinear optimization methods for seismic inversion of complex 3D basin geology and sources. We will capitalize on our past work in nonlinear optimization of systems governed by partial differential equations [5, 37, 40, 38, 44, 62, 63], and in particular the development of parallel algorithms for such problems [16, 17, 39, 61]. Our methods are based on Sequential Quadratic Programming (SQP) ideas, and have been used to solve difficult shape optimization problems with as many as 4 million state variables and 1000 optimization parameters [52]. Our most recent variant iterates in the full space of state and optimization variables, using a quasi-Newton approximation of the reduced space of optimization parameters as a preconditioner [16, 17].
Solution of an adjoint-like problem is used to compute gradients of the objective function in the reduced space, which can be done only approximately because the reduced space is used for preconditioning. This reduces the number of 3D simulations that must be done at each iteration from m (the number of model parameters) in the direct sensitivity case to just two: one for the original model (the "forward problem") and one for its adjoint. This greatly decreases the work per iteration, especially for large values of m. The method can be thought of as being analogous to Newton-Krylov techniques for solving PDEs. We have built a parallel implementation of our full space SQP method on top of the PETSc library for parallel solution of PDEs [9]. We use PETSc's Krylov solvers and Schwarz preconditioners to (approximately) solve the forward linear systems arising at each optimization iteration. Initial results have been very encouraging: we observe speedups of over an order of magnitude compared with reduced SQP methods, which represent the current state of the art. Furthermore, parallel and algorithmic scalability are excellent on tests up to 64 processors, for a moderately-sized problem (1/3 million unknown state variables) [17]. We face numerous challenges in enhancing, extending, and tailoring our parallel optimization methods to the problem class of inverse seismic wave propagation problems, and in particular those that involve inversion for velocity and source parameters in complex basin geologies. The main issues we will address in the proposed research include: Nonconvexity. The objective function representing misfit between predicted and measured ground motion can be severely multimodal [69, 83]. For example, a variation in the background velocity will cause a phase shift in the synthetic waveform; new local optima are created for each shift greater than a multiple of a half-period.
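The phase-shift mechanism is easy to reproduce in a one-parameter toy problem: fitting the arrival of a windowed sinusoid whose travel time depends on an assumed velocity. All numbers below are illustrative.

```python
import numpy as np

# Toy 1-parameter inversion: the observed record is a windowed sinusoid
# arriving at t = L/v. Perturbing the assumed velocity v shifts the
# synthetic's phase, so the least-squares misfit oscillates with v and
# picks up local optima. All numbers are illustrative.
L_path, v_true, freq = 10.0, 2.0, 1.0            # km, km/s, Hz
t = np.linspace(0.0, 20.0, 2000)

def synthetic(v):
    arrival = L_path / v
    return np.sin(2.0 * np.pi * freq * (t - arrival)) * (t >= arrival)

observed = synthetic(v_true)
velocities = np.linspace(1.2, 3.2, 400)
misfit = np.array([np.sum((synthetic(v) - observed) ** 2)
                   for v in velocities])

# Count strict interior local minima of the misfit curve.
interior = misfit[1:-1]
n_local_minima = int(np.count_nonzero((interior < misfit[:-2])
                                      & (interior < misfit[2:])))
best_v = velocities[np.argmin(misfit)]
```

Scanning the misfit over velocity reveals side-lobe minima wherever the phase shift nears a whole period, alongside the global minimum at the true velocity, which is exactly why a naive local optimizer can stall.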
This has led some to pursue such meta-heuristics as simulated annealing and genetic algorithms to search for global optima (e.g., [53, 72, 94, 93]). However, since all such non-derivative methods rely on some form of space-sampling, they suffer from the “curse of dimensionality” and tend to work only for limited numbers of optimization variables. Thus, we will not consider them for the proposed work, which is necessarily large-scale. Several approaches motivated by exploration geophysics reformulate the norm in which misfit is expressed, as well as the choice of optimization parameters. This results in misfit functions that are provably convex, at least for certain model problems. More generally, these methods enlarge the basin of attraction of the global minimum,


thereby enhancing the chances that a local optimization will find the optimum. Two examples are the differential semblance method of Symes and Carazzone [83] and the migration-based travel time method of Chavent and Clément [22]. We will investigate the applicability of these methods for large-scale seismic inversion in 3D. One important difference between exploration geophysics and earthquake inversion is the sparsity of observations associated with typical earthquakes. Whether sufficiently large basins of attraction result in these cases remains to be seen. Range of observations. How can a broader set of "observations" be incorporated into the inversion problem? These include near-source strong ground motion recordings, teleseismic waves, Global Positioning System (GPS) displacement vectors, and leveling data. In what "norms" should these data be incorporated into the misfit function? What effect will their inclusion have on the well-posedness, solvability, convexity, and conditioning of the problem, and on the robustness, stability, convergence rate, parallelism, and computational efficiency of the numerical methodology? Simultaneous inversion. Should the inverse problem be solved simultaneously for optimal earthquake source and basin parameters, or should these two sets of parameters be considered sequentially, perhaps alternating the inversion of one set and then the other (in the spirit of "multiplicative Schwarz" methods)? As above, what effect will this have on well-posedness, solvability, convexity, conditioning, stability, convergence rate, parallelism, and computational efficiency? Many inversion parameters. How will large numbers of basin/source parameters be handled? Most optimization methods use quasi-Newton ideas to approximate the Hessian of the objective function using only gradient information. The main reason is that second derivatives are usually very difficult to compute.
However, the number of iterations taken by a quasi-Newton method typically grows at least linearly with the number of optimization parameters, so incorporating some kind of higher-order information will be essential for large m (the number of inversion parameters). Fortunately, due to the special nature of the nonlinear least squares objective, one can approximate the Hessian using only gradient information (the so-called Gauss-Newton approximation) in the case of "model-consistent data" (i.e., when the model provides a good fit to the data). This gives (asymptotic) quadratic convergence and often a significant reduction in the number of iterations (which is important because the iteration space is sequential: employing additional processors doesn't help). The big difficulty, however, is that the number of necessary simulations per iteration increases from one to the number of recording stations, despite the use of adjoints. Is the reduction in iterations justified by the increase in cost per iteration? The answer is probably yes for large enough m, but how large? Can iterative methods be tailored sufficiently well to multiple right-hand sides to exploit the fact that the systems to be solved are governed by the same linear adjoint operator? (Direct methods would be ideal here, but are not viable for the large-scale problems we have in mind, especially on highly parallel machines.) Parallelism across parameter space. In addition to the slower convergence of first-order methods for problems with many optimization variables, a second difficulty that arises when m is significant with respect to the number of state variables is that the reduced space computations must be parallelized. It is not obvious how to do this in the general case, but there is a natural parallel decomposition for the special case of piecewise linear approximation of the seismic velocities using the same finite element mesh as the state variables.
In this case the whole domain-based parallelism philosophy can be invoked, and optimization parameters can be handled in the same way as state variables. Whether we want to accept such a large set of inversion parameters is a natural question, in light of the ill-conditioning discussed above. Recent promising results for large-scale 3D electromagnetic inversion using mesh-based material parameterization [55] encourage us to explore fine-scale parameterization for our problem. Additionally, there are intrinsic numerical issues that are independent of whether we solve sequentially or in parallel. A basic issue is that the Hessian approximation (which will be dense) cannot be formed when the number of inversion parameters is large. We will pursue inexact Newton-Krylov ideas, i.e., combining inner conjugate gradient iterations with the outer Newton steps. It will again be necessary to exploit adjoint ideas to limit the number of operator "inversions." Constructing a good parallel preconditioner for the Hessian will be a key issue, since the matrix is not formed. Model-inconsistent data. The Gauss-Newton approximation described above breaks down when the model does not provide a good fit to the data. These are known as large-residual problems, and of course cannot be

anticipated in advance. In this case, bona fide second derivatives will have to be brought into the formulation. How best to do this is a challenging research question. Multiple sources. When records are available for multiple earthquakes, it is desirable to base velocity inversion on these multiple events. A central question is: should inversion be done sequentially or simultaneously across all sources? Simultaneous treatment of multiple sources drives up the per-iteration number of simulation runs to at least the number of sources, and under some conditions more, even if adjoint methods are used. On the other hand, iterating the model inversion from one source to another is not terribly attractive. In any case, 3D multiple-source inversion will be extremely expensive. Coarse-grained parallelism has much to offer here, especially since the events are independent. Ill-conditioning and sparse data. Another difficulty in inversion of earthquake records is the sparse nature of available observations. This can lead to an ill-posed inverse problem in the case when the number of optimization parameters exceeds the available data, and more generally to ill-conditioning. In contrast, seismic prospecting benefits from seismograph arrays that typically provide regular, dense data, yielding better-conditioned problems. This suggests that using all available earthquake sources simultaneously might be the preferred approach, despite the number of forward simulations that must be performed. When dealing with ill-posed inverse problems, it is customary to add smoothing terms ("regularization") to the misfit functions to exclude spurious minimizers. The proper tradeoff between the resulting better conditioning and the excessive perturbation of the solution must be examined in the context of our problem, and in particular the Krylov iterative schemes we plan to use to solve the stationarity conditions. In conclusion, the seismic inverse problem for complex 3D basins presents enormous computational challenges.
The simulation problems we have solved, with on the order of 10^8 elements, are perhaps the largest unstructured mesh problems solved to date, requiring many gigabytes of memory and Cray T3E-hours of CPU time. However, these forward problems are but a mere "inner iteration" of the inverse problem, which will invoke the (forward and adjoint) simulations repeatedly. These problems are among the largest nonlinear optimization problems currently contemplated.

5.3 Remote interactive visualization The general objective of this phase of the research is to improve the ability of scientists and engineers to visualize the contents of the massive remote datasets (i.e., hundreds of gigabytes to multiple terabytes) produced by computer simulations. Specifically, we aim (1) to develop a framework (called Dv) for remote interactive visualization based on the notion of active frames; (2) to use the Dv framework to build a remote interactive visualization service for Quake datasets; and (3) to distribute the Dv framework to the research community at large, so that others can provide their own remote visualization services. Earthquake ground motion visualization. In a typical Quake visualization, a dataset on the order of hundreds of gigabytes to terabytes is stored at a remote site. The dataset consists of thousands of frames, where each frame records the displacement amplitude at each node of an unstructured 3D mesh. A user at a local site interactively requests a visualization of some region of interest (ROI) in the dataset. The ROI can be expressed both in space within a frame and in time across multiple frames. Figure 5 shows the form of a typical Quake visualization flowgraph. Stage I reads the appropriate part of the dataset. Stage II interpolates the displacements from the original unstructured mesh onto a smaller regular mesh. Stage III computes isosurfaces based on the soil densities and the displacement amplitudes. Stage IV synthesizes the scene according to various parameters such as point of view and camera distance. Finally, Stage V renders the scene from polygons into pixels. Research issues. Most Internet services are lightweight in the sense that individual requests (e.g., HTTP) do not require a significant amount of computation. In contrast, the remote visualization service that we envision is a heavyweight service that requires significant computation for each request. 
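The five-stage flowgraph just described can be viewed as a chain of composable transforms, each of which a scheduler may place on a different grid host. The payloads below are placeholder dictionaries, not real mesh or scene data.

```python
# Sketch of the Figure-5-style flowgraph as composable stages. Each stage
# is a pure function that a scheduler could place on a different grid host;
# payloads are placeholder dicts, not real mesh or scene data.

def read_roi(request):                      # Stage I: read part of the dataset
    return {"nodes": list(range(request["roi_size"])),
            "frame": request["frame"]}

def interpolate(data, resolution=8):        # Stage II: unstructured -> regular
    return {"grid": data["nodes"][:resolution], "frame": data["frame"]}

def extract_isosurfaces(data, contours=2):  # Stage III: isosurface extraction
    return {"surfaces": [data["grid"]] * contours, "frame": data["frame"]}

def synthesize_scene(data):                 # Stage IV: viewpoint, geometry
    return {"polygons": sum(len(s) for s in data["surfaces"])}

def render(scene):                          # Stage V: polygons -> pixels
    return f"image({scene['polygons']} polygons)"

def visualize(request):
    data = request
    for stage in (read_roi, interpolate, extract_isosurfaces,
                  synthesize_scene, render):
        data = stage(data)                  # each hop could cross the network
    return data

image = visualize({"roi_size": 100, "frame": 0})
```

Because each stage shrinks the data (dataset, regular mesh, surfaces, polygons, pixels), where the stage boundaries fall determines how much must cross the network, which is the scheduling problem discussed below.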
Providing heavyweight services is challenging for a number of reasons. First, simply copying multi-terabyte files to the local site for processing is not feasible because of limited network, storage, and backup resources. The crucial fact is that massive scientific datasets must reside on the remote site where they are computed. So in order to visualize them, we must develop tools for providing


[Figure 5 flowgraph: simulation results in a remote dataset, plus a materials database, feed Stage I (reading, given an ROI), Stage II (interpolation at a chosen resolution), Stage III (isosurface extraction for chosen contours), Stage IV (scene synthesis), and Stage V (rendering at a chosen resolution), ending at the local display and user.]

Figure 5: An earthquake ground motion visualization application.

visualization services over computational grids [35]. Further, whatever tools we develop must be able to incorporate existing visualization packages such as AVS [85] and vtk [70], building on the work of visualization researchers whenever possible. Second, the resources available for grid-enabled visualizations are wildly heterogeneous, and the availability of these resources can change over time. Thus, our visualization codes must be performance-portable applications that sense the available resources and automatically adapt themselves to run as well as possible. Finally, at times we will need to multicast a visualization to colleagues around the world so that we can discuss and analyze it in real time. The key issue in this kind of collaborative visualization is synchronizing the multicast streams so that each person sees roughly the same thing at the same time. This is particularly challenging given that each local site will have different network, computing, and graphics resources. Technical approach. To address these issues, we propose a framework called Dv (distributed visualization) for building remote interactive visualization services. Our approach is based on the following key ideas: (1) a heavyweight grid service model that generalizes the conventional Internet service model; (2) active frames and their servers as the basic building blocks for remote visualization services. (Some of these ideas are also present in a pending NSF CISE proposal.) We propose that visualization services be provided according to the following heavyweight grid service model [2]: The service provider at a remote site identifies a list of N ≥ 1 hosts that are available for satisfying service requests. A service user at a local site visualizes the remote dataset by issuing a series of requests to the remote site.
Each request contains a visualization program, the ROI of the dataset, an application-level scheduler [12, 82], and a list of M ≥ 1 hosts that are available at the local site for running the visualization program. When a request arrives at the remote host, the host executes the scheduler on the visualization program, which schedules the program on the N + M hosts identified by the remote site (on a per-service basis) and the local site (on a per-request basis), and then executes the program on the appropriate frame(s) in the dataset. Note that this model is a simple generalization of other grid service models. For example, the usual lightweight Internet service model assumes N = M = 1. On the other hand, grid-enabled tools such as Netsolve, a network-enabled solver toolkit [21], have N ≥ 1 and M = 1. The basic component for providing remote visualization services is the active frame, an application-level mobile object that contains frame data and a frame program that manipulates the data [2]. The frame program implements a single run() method that computes an output frame and returns the network location of the destination host for the output frame. Active frames are processed by active frame servers, which are processes that run on grid hosts. A server consists of two components: an application-independent interpreter that executes active frame programs, and an optional collection of application-specific library routines (e.g., the vtk library) that can be called from the frame program. Figure 6 shows the basic architecture. At run-time, an active frame server waits on a well-known port for an input active frame to arrive from the network. The server reads the input from the network, extracts and demarshals the frame program and data, and passes them to the interpreter, which executes the run() method on the input frame data to produce new output frame data.
After the execution finishes, the server marshals the output frame data and the frame program into a new output frame, and sends this frame to the destination host. This idea of bundling programs with network data has been exploited effectively by active messages in the

[Figure 6 schematic: an input active frame (frame data + frame program) arrives at an active frame server, whose active frame interpreter, with access to application libraries on the host, produces an output active frame (frame data' + frame program').]

Figure 6: An active frame server.

context of parallel processing [87], and by active networks in the context of adding additional functionality to network routers [84]. The rationale behind active frames is the expectation that a similar notion will also prove effective in the context of grid computing. A potential disadvantage of active frames is the overhead of processing the frame program. However, for large-scale visualization applications, the size of the frame data and the time required to process it are likely to swamp the overhead of processing the frame program. Indeed, measurements of our prototype active frame server written in Java with calls to a native C++ vtk library provide some preliminary support for this claim. For small frames of 800 Kbytes, the processing of the frame program introduces an additional delay of 199 ms, which is only 16% of the total frame processing time [2]. A more significant issue will likely be how to minimize the overhead incurred by passing large datasets around the network. Almost certainly, we will need to develop mechanisms to cache parts of the frames (such as the unstructured meshes) that are constant from request to request, or that have been computed by previous requests. Active frames and their servers are the basic building blocks for providing remote visualization services (Figure 7). A Dv system is a collection of identical active frame servers (Dv servers) running on the hosts of a computational grid, plus an additional active frame server (the local Dv client) specialized with a user interface that accepts user inputs and displays rendered images. During the course of a visualization session, the Dv client sends a series of request frames to a Dv server (the request server) that has direct access to the remote dataset.
Each request frame contains visualization parameters such as the ROI, a description of the visualization flowgraph, a scheduler that assigns flowgraph nodes to Dv servers, and a frame generator that produces a sequence of one or more response frames originating from the request server. Response frames pass from server to server until they arrive at the Dv client, where they are displayed.
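The active-frame machinery described above can be sketched schematically. This is an illustration of the idea, not the Java/vtk prototype; the host names, registry, and toy frame program are invented for the example.

```python
# Schematic active frames and servers. A frame bundles data with a program
# whose run() method returns (new_data, next_hop); a server executes
# whatever frame arrives and forwards the result. Host names are invented.

class ActiveFrame:
    def __init__(self, data, program):
        self.data = data
        self.program = program       # provides run(data) -> (data', host)

class FrameServer:
    def __init__(self, name, registry):
        self.name = name
        self.registry = registry     # name -> server; stands in for network

    def process(self, frame):
        new_data, next_hop = frame.program.run(frame.data)
        out = ActiveFrame(new_data, frame.program)
        if next_hop is None:         # reached the client: deliver for display
            return out
        return self.registry[next_hop].process(out)

class DoubleThenDeliver:
    """Toy frame program: double the data at each hop of a fixed route."""
    def __init__(self, route):
        self.route = list(route)

    def run(self, data):
        next_hop = self.route.pop(0) if self.route else None
        return [x * 2 for x in data], next_hop

registry = {}
for name in ("request-server", "intermediate", "client"):
    registry[name] = FrameServer(name, registry)

frame = ActiveFrame([1, 2, 3], DoubleThenDeliver(["intermediate", "client"]))
result = registry["request-server"].process(frame)
```

Note that the servers are identical and stateless; all routing and processing logic travels with the frame itself, which is the property that makes wide deployment plausible.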

Request frame (Dv program, scheduler, and local hosts)

Local Dv client

Display

Response frames Remote dataset

Dv Server (request server)

Response frames

Dv Server

...

Response frames

Remote DV Active Frame Servers

Dv Server

Response frames

Dv Server

Local DV Active Frame Servers

Figure 7: Providing a remote visualization service using the Dv framework. The Dv framework is appealing for a number of reasons. It is based on a collection of identical stateless servers, so it should be feasible to deploy on a wide scale. More important, the architecture provides a convenient framework for experimenting with both performance-portability and collaborative real-time visualization. Performance-portability is supported by Dv in a number of ways. First, Dv request servers execute schedulers C-14

that they receive from Dv clients (a reverse-applet of sorts). Thus we can experiment with different application-level schedulers [12] from our desktop systems, without having to reinstall new code on the servers. Second, the frame lifetime provides convenient points for making scheduling decisions, depending on the desired degree of adaptivity. Scheduling can be done when the request frame is created, when each response frame is created by the request server (more adaptive), or when each frame is sent from an intermediate server to its successor (most adaptive). In all cases, the scheduling decisions must be based on a network monitoring system such as the CMU Remos system [48, 32, 31, 33, 49] or the Network Weather Service (NWS) [91]. For our work, we will use Remos, extending it when necessary.

Collaborative visualization is supported by Dv in the following sense. We can think of Dv as implementing an application-level "network-layer" protocol in which Dv frames correspond to source-routed packets and Dv servers correspond to routers. Collaborative visualization on this "Dv network" will require a form of synchronized multicast [29] in which users at the leaves of the multicast tree see roughly the same thing at roughly the same time, depending on the network and computing resources available from the root to each leaf. Dv frames provide a convenient data unit with which to begin experimenting with this special form of multicast.
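As a sketch of these scheduling points (illustrative Python; `schedule`, its greedy policy, and the load numbers are our own assumptions, not Remos, NWS, or Dv interfaces), a scheduler invoked at any of the three points above might map flowgraph stages onto the least-loaded servers reported by the monitoring system:

```python
# Illustrative application-level scheduler: greedily assign each flowgraph
# stage to the server with the smallest currently measured load.
def schedule(stages, servers, load):
    """Map each stage to a server, updating a local copy of the load."""
    assignment = {}
    current = dict(load)   # copy of monitor readings, e.g. from Remos/NWS
    for stage in stages:
        best = min(servers, key=lambda s: current[s])
        assignment[stage] = best
        current[best] += 1   # crude placeholder cost for the placed work
    return assignment


stages = ["read", "isosurface", "render"]
servers = ["dv1", "dv2"]
print(schedule(stages, servers, {"dv1": 0.2, "dv2": 0.9}))
# {'read': 'dv1', 'isosurface': 'dv2', 'render': 'dv1'}
```

Re-running such a scheduler per response frame, or per hop, trades scheduling overhead for adaptivity to changing load, which is exactly the spectrum of choices described above.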

6 Concluding remarks

Our ultimate goal is to develop the ability to forecast earthquake ground motion in large basins using physically-based models, and to apply our tools to the Greater Los Angeles Basin and the San Francisco Bay Area. Because of the critical role that ground motion plays in infrastructure design, the accelerated availability of suitable simulation methodologies and tools will have a direct impact on public safety and welfare.

The duration of our proposed project is three years. It will require a concerted, unified effort in parallel 3D mesh generation, 3D seismic inversion, and large-scale distributed visualization. During this period we expect to make advances in physical modeling, in parallel nonlinear optimization algorithms, and in software tool development, all the while gaining physical insight into earthquake ground motion. However, earthquake modeling is a fundamental, long-term Grand Challenge that will require the efforts of many researchers for years. Our research will provide a platform for ourselves and others to incorporate and test more accurate future models as knowledge improves in such areas as basin models, nonlinear soil models, and earthquake rupture dynamics.

We should also emphasize that the proposed research focuses on the development and application of an optimized methodology for deterministic simulations of earthquake ground motion. We will develop simulation tools that are scalable to multi-teraflops, and ultimately petaflops, computing. With anticipated advances in computing power, we envision that it will become feasible to perform ground motion studies entailing large ensembles of such simulations, which can then provide a basis for probabilistic hazard assessment. Thus, while our own applications will emphasize deterministic estimation, the computational tools we provide to the community will enable the development of new, more advanced approaches to probabilistic seismic hazard assessment.
The problems we are addressing are representative of many other PDE-based simulations; our tools and algorithms will thus benefit a much wider community of scientists and engineers. We expect our work to have an influence even outside the PDE simulation community, as have the mesh generators created under our past Grand Challenge grant.


Part D

References Cited

[1] B. Adams, R. Davis, and J. Berrill. Modeling site effects in the Lower Hutt Valley, New Zealand. 12th World Conference on Earthquake Engineering (Auckland, NZ), 2000. To appear.
[2] M. Aeschlimann, P. Dinda, L. Kallivokas, J. Lopez, B. Lowekamp, and D. O'Hallaron. Preliminary report on the design of a framework for distributed visualization. Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'99) (Las Vegas, NV), June 1999. Invited paper.
[3] K. Aki and P.G. Richards. Quantitative Seismology: Theory and Methods, pages 25–28. W. H. Freeman, San Francisco, CA, 1980.
[4] J. G. Anderson, Y. Lee, Y. Zeng, and S. M. Day. Control of strong motion by the upper 30 meters. Bull. Seism. Soc. Am. 86:1749–1759, 1996.
[5] J.F. Antaki, O. Ghattas, G.W. Burgreen, and B. He. Computational flow optimization of rotary blood pump components. Artificial Organs 19(7):608–615, 1995.
[6] S. Aoi, T. Iwata, H. Fujiwara, and K. Irikura. Boundary shape waveform inversion for two-dimensional basin structure using three-component array data of plane incident wave with an arbitrary azimuth. Bull. Seism. Soc. Am. 87:222–233, 1997.
[7] S. Aoi, T. Iwata, K. Irikura, and F.J. Sanchez-Sesma. Waveform inversion for determining the boundary shape of a basin structure. Bull. Seism. Soc. Am. 85:1445–1455, 1995.
[8] R.J. Archuleta. A faulting model for the 1979 Imperial Valley earthquake. J. Geophys. Res. 89:4559–4585, 1984.
[9] Satish Balay, William Gropp, Lois Curfman McInnes, and Barry Smith. PETSc 2.0 Users Manual. Mathematics and Computer Science Division, Argonne National Laboratory, 1997. ANL-95/11.
[10] H. Bao, J. Bielak, O. Ghattas, L. Kallivokas, D. O'Hallaron, J. Shewchuk, and J. Xu. Earthquake ground motion modeling on parallel computers. Proc. Supercomputing '96 (Pittsburgh, PA), November 1996. See also www.cs.cmu.edu/~quake/.
[11] H. Bao, J. Bielak, O. Ghattas, L. Kallivokas, D. O'Hallaron, J. Shewchuk, and J. Xu. Large-scale simulation of elastic wave propagation in heterogeneous media on parallel computers. Computer Methods in Applied Mechanics and Engineering 152:85–102, January 1998.

[12] Francine Berman and Richard Wolski. Scheduling from the perspective of the application. Proceedings of the Fifth IEEE Symposium on High Performance Distributed Computing (HPDC96), pages 100–111, August 1996.
[13] G.C. Beroza and P. Spudich. Linearized inversion for fault rupture behavior: application to the 1984 Morgan Hill, California, earthquake. J. Geophys. Res. 93:6275–6296, 1988.
[14] J. Bielak, L. Kallivokas, J. Xu, and R. Monopoli. Finite element absorbing boundary for the wave equation in a halfplane with an application to engineering seismology. Proc. of the Third International Conference on Mathematical and Numerical Aspects of Wave Propagation (Mandelieu-la-Napule, France), pages 489–498. INRIA-SIAM, April 1995.

[15] J. Bielak, J. Xu, and O. Ghattas. Earthquake ground motion and structural response in alluvial valleys. Journal of Geotechnical and Geoenvironmental Engineering 125:413–423, 1999.
[16] G. Biros and O. Ghattas. Parallel domain decomposition methods for optimal control of viscous incompressible flows. Proceedings of Parallel CFD '99 (Williamsburg, VA), May 1999. http://cs.cmu.edu/~oghattas/papers/pcfd99/pcfd99.ps.
[17] G. Biros and O. Ghattas. Parallel SQP algorithms for PDE-constrained optimization, 1999. Submitted to SC99.

[18] D. M. Boore. Basin waves on a seafloor recording of the 1990 Upland, California, earthquake: Implications for ground motions from a larger earthquake. Bull. Seism. Soc. Am. 89:317–324, 1999.
[19] D. M. Boore, W. B. Joyner, and T. E. Fumal. Estimation of response spectra and peak accelerations from western North American earthquakes: An interim report. Open-File Report 93-509, U.S. Geological Survey, 1993.
[20] T. Brocher, E. Brabb, R. Catchings, G. Fuis, T. Fumal, R. Jachens, A. Jayko, R. Kayen, R. McLaughlin, T. Parsons, M. Rymer, R. Stanley, and C. Wentworth. A crustal-scale 3-D seismic velocity model for the San Francisco Bay area, California. Eos Transactions AGU 78:F435, 1997.
[21] H. Casanova and J. Dongarra. NetSolve: A network server for solving computational science problems. Technical Report CS-95-313, University of Tennessee, November 1995.
[22] G. Chavent. Duality methods for waveform inversion. Technical Report 2975, INRIA, Rocquencourt, France, September 1996.
[23] L. Paul Chew. Guaranteed-Quality Triangular Meshes. Technical Report TR-89-983, Department of Computer Science, Cornell University, 1989.
[24] L. Paul Chew. Guaranteed-Quality Mesh Generation for Curved Surfaces. Proceedings of the Ninth Annual Symposium on Computational Geometry (San Diego, California), pages 274–280. Association for Computing Machinery, May 1993.
[25] L. Paul Chew. Guaranteed-Quality Delaunay Meshing in 3D. Proceedings of the Thirteenth Annual Symposium on Computational Geometry, pages 391–393. Association for Computing Machinery, June 1997.

[26] F. Cotton and M. Campillo. Frequency domain inversion of strong motions: Application to the 1992 Landers earthquake. J. Geophys. Res. 100:3961–3975, 1995.
[27] Hugues L. de Cougny and Mark S. Shephard. Parallel Unstructured Grid Generation. Technical Report 10-1997, Scientific Computation Research Center, Rensselaer Polytechnic Institute, Troy, New York, 1997.
[28] Hugues L. de Cougny, Mark S. Shephard, and Can Özturan. Parallel Three-Dimensional Mesh Generation on Distributed Memory MIMD Computers. Engineering with Computers 12:94–106, 1996.
[29] S. Deering and D. Cheriton. Multicast routing in datagram internetworks and extended LANs. ACM Transactions on Computer Systems 8(2):85–100, 1990.
[30] Tamal Krishna Dey, Chanderjit L. Bajaj, and Kokichi Sugihara. On Good Triangulations in Three Dimensions. International Journal of Computational Geometry & Applications 2(1):75–95, 1992.
[31] P. Dinda, B. Lowekamp, L. Kallivokas, and D. O'Hallaron. The case for prediction-based best-effort real-time systems. Proc. of the 7th International Workshop on Parallel and Distributed Real-Time Systems (WPDRTS 1999), Lecture Notes in Computer Science, volume 1586, pages 309–318. Springer-Verlag, San Juan, PR, May 1999.
[32] P. Dinda and D. O'Hallaron. Statistical properties of host load. Technical Report CMU-SCS-98-143, School of Computer Science, Carnegie Mellon University, July 1998.
[33] P. Dinda and D. O'Hallaron. An evaluation of linear models for host load prediction. Proc. 8th IEEE Symposium on High-Performance Distributed Computing (HPDC-8), August 1999.

[34] E. H. Field. Spectral amplification in a sediment-filled valley exhibiting clear basin-edge-induced waves. Bull. Seism. Soc. Am. 86:991–1005, 1996.
[35] Ian Foster and Carl Kesselman, editors. The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, 1999.
[36] A. Frankel and J.E. Vidale. A three-dimensional simulation of seismic waves in the Santa Clara Valley, California, from a Loma Prieta aftershock. Bull. Seism. Soc. Am. 82:2045–2074, 1992.
[37] O. Ghattas and J. Bark. Large-scale SQP methods for optimization of Navier-Stokes flows. Large-Scale Optimization with Applications, Part II: Optimal Control and Design (L.T. Biegler, T.F. Coleman, A.R. Conn, and F.N. Santosa, editors), pages 247–270. Springer-Verlag, Berlin, 1997.
[38] O. Ghattas and X. Li. Domain decomposition methods for sensitivity analysis of a nonlinear aeroelasticity problem. International Journal of Computational Fluid Dynamics 11:113–130, 1998. Invited paper.
[39] O. Ghattas and C. E. Orozco. A parallel reduced Hessian SQP method for shape optimization. Multidisciplinary Design Optimization: State-of-the-Art (N. Alexandrov and M.Y. Hussaini, editors), pages 133–152. SIAM, 1997.
[40] Omar Ghattas and Jai-Hyeong Bark. Optimal control of two- and three-dimensional incompressible Navier–Stokes flows. Journal of Computational Physics 136:231–244, 1997.
[41] R. Graves. Preliminary analysis of long-period basin response in the Los Angeles region from the 1994 Northridge earthquake. Geophys. Res. Lett. 22:101–104, 1995.
[42] S.H. Hartzell and T.H. Heaton. Inversion of strong ground motion and teleseismic waveform data for the fault rupture history of the 1979 Imperial Valley, California, earthquake. Bull. Seism. Soc. Am. 73:1553–1583, 1983.
[43] K. Hatayama, K. Matsunami, T. Iwata, and K. Irikura. Basin-induced Love waves in the eastern part of the Osaka Basin. J. Phys. Earth 43:131–155, 1995.
[44] Beichang He, Omar Ghattas, and James F. Antaki. Computational strategies for shape optimization of time-dependent Navier-Stokes flows. Technical Report CMU-CML-97-102, Computational Mechanics Lab, Department of Civil and Environmental Engineering, Carnegie Mellon University, June 1997. To appear, Computer Methods in Applied Mechanics and Engineering.
[45] Y. Hisada, H. Bao, J. Bielak, O. Ghattas, and D. O'Hallaron. Simulations of long-period ground motions during the 1995 Hyogoken-Nanbu (Kobe) earthquake using 3D finite element method. 2nd International Symposium on Effect of Surface Geology on Seismic Motion, Special Volume on Simultaneous Simulation for Kobe (Yokohama, Japan) (K. Irikura, H. Kawase, and T. Iwata, editors), pages 59–66, December 1998.
[46] H. Kawase and T. Iwata. Report on the submitted results of the simultaneous simulation for Kobe. 2nd International Symposium on Effect of Surface Geology on Seismic Motion, Special Volume on Simultaneous Simulation for Kobe (Yokohama, Japan) (K. Irikura, H. Kawase, and T. Iwata, editors), December 1998.
[47] M. Kohler, R. Graves, and D. Wald. The effect of localized sedimentary environment and subsurface structure variations on teleseismic waveform amplitudes in the Los Angeles basin. Eos Transactions AGU 79:F605, 1998.

[48] B. Lowekamp, N. Miller, D. Sutherland, T. Gross, P. Steenkiste, and J. Subhlok. A resource query interface for network-aware applications. Proc. 7th IEEE Symposium on High-Performance Distributed Computing, July 1998.
[49] B. Lowekamp, D. O'Hallaron, and T. Gross. Direct network queries for discovering network resource properties in a distributed environment. Proc. 8th IEEE Symposium on High-Performance Distributed Computing (HPDC-8), August 1999.
[50] H. Magistrale, R. Graves, and R. Clayton. A standard three-dimensional seismic velocity model for southern California: version 1. Eos Transactions AGU 79:F605, 1998.
[51] H. Magistrale, K. L. McLaughlin, and S. M. Day. A geology based 3-D velocity model of the Los Angeles Basin. Bull. Seism. Soc. Am. 86:1161–1166, 1996.
[52] I. Malcevic. Large-scale unstructured mesh shape optimization on parallel computers. Master's thesis, School of Civil and Environmental Engineering, Pittsburgh, PA, 1997.
[53] S. Mallick. Model-based inversion of amplitude-variations-with-offset data using a genetic algorithm. Geophysics 60:939–954, 1995.
[54] H. Nakahara, H. Sato, M. Ohtake, and T. Nishimura. Spatial distribution of high-frequency energy radiation on the fault of the 1995 Hyogo-Ken Nanbu, Japan, earthquake (Mw 6.9) on the basis of the seismogram envelope inversion. Bull. Seism. Soc. Am. 89:22–35, 1999.
[55] G.A. Newman and D.L. Alumbaugh. Three-dimensional massively parallel electromagnetic inversion—I. Theory. Geophys. J. Int. 128:345–354, 1997.
[56] D. O'Hallaron. Spark98: Sparse matrix kernels for shared memory and message passing systems. Technical Report CMU-CS-97-178, School of Computer Science, Carnegie Mellon University, October 1997.
[57] D. O'Hallaron, J. Shewchuk, and T. Gross. Architectural implications of a family of irregular computations. Fourth International Symposium on High Performance Computer Architecture (Las Vegas, NV), pages 80–89. IEEE, February 1998.
[58] David R. O'Hallaron and Jonathan Richard Shewchuk. Properties of a Family of Parallel Finite Element Simulations. Technical Report CMU-CS-96-141, School of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, December 1996.
[59] K. Olsen, R.J. Archuleta, and J.R. Matarese. Three-dimensional simulation of a magnitude 7.75 earthquake on the San Andreas fault in southern California. Science 270:1628–1632, 1995.
[60] A.H. Olson and R. Apsel. Finite faults and inverse theory with applications to the 1979 Imperial Valley earthquake. Bull. Seism. Soc. Am. 72:1969–2002, 1982.
[61] C. E. Orozco and O. Ghattas. Massively parallel aerodynamic shape optimization. Computing Systems in Engineering 1–4:311–320, 1992.
[62] C. E. Orozco and O. Ghattas. Infeasible path optimal design methods, with application to aerodynamic shape optimization. AIAA Journal 34(2):217–224, 1996.
[63] C. E. Orozco and O. Ghattas. A reduced SAND method for optimal design of nonlinear structures. International Journal for Numerical Methods in Engineering 40:2759–2774, 1997.

[64] A. Papalou and J. Bielak. Seismic interaction effects in earth and rockfill dams. Proceedings of the 11th World Conference on Earthquake Engineering (Acapulco, Mexico), pages 23–28, June 1996. Paper No. 2084.

[65] A. Pitarka, K. Irikura, T. Iwata, and H. Sekiguchi. Three-dimensional simulation of the near-fault ground motion for the 1995 Hyogo-ken Nanbu (Kobe), Japan, earthquake. Bull. Seism. Soc. Am. 88:428–440, 1998.
[66] Jim Ruppert. A Delaunay Refinement Algorithm for Quality 2-Dimensional Mesh Generation. Journal of Algorithms 18(3):548–585, May 1995.
[67] F. Sánchez-Sesma, R. Benites, and J. Bielak. The assessment of strong ground motion – what lies ahead? Proc. of the 11th World Conference on Earthquake Engineering (Acapulco, Mexico), pages 23–28, June 1996.
[68] F.J. Sanchez-Sesma and F. Luzon. Seismic response of three-dimensional valleys for incident P, S, and Rayleigh waves. Bull. Seism. Soc. Am. 85:269–284, 1995.
[69] J. A. Scales, M. L. Smith, and T. L. Fischer. Global optimization methods for multimodal inverse problems. Journal of Computational Physics 103:258–268, 1992.
[70] W. Schroeder, K. Martin, and B. Lorensen, editors. The Visualization Toolkit: An Object-Oriented Approach to 3D Graphics, second edition. Prentice Hall PTR, Upper Saddle River, NJ, 1998. www.kitware.com.
[71] H. Sekiguchi, K. Irikura, T. Iwata, Y. Kakehi, and M. Hoshiba. Minute locating of fault planes and source process of the 1995 Hyogoken Nanbu earthquake from the waveform inversion of strong ground motion. J. Phys. Earth 44:473–487, 1996.
[72] M. K. Sen and P. L. Stoffa. Global Optimization Methods in Geophysical Inversion. Advances in Exploration Geophysics Series. Elsevier, The Netherlands, 1995.
[73] Jonathan Richard Shewchuk. Robust Adaptive Floating-Point Geometric Predicates. Proceedings of the Twelfth Annual Symposium on Computational Geometry, pages 141–150. Association for Computing Machinery, May 1996.
[74] Jonathan Richard Shewchuk. Triangle: Engineering a 2D quality mesh generator and Delaunay triangulator. First Workshop on Applied Computational Geometry, pages 124–133. Association for Computing Machinery, May 1996.
[75] Jonathan Richard Shewchuk. Adaptive Precision Floating-Point Arithmetic and Fast Robust Geometric Predicates. Discrete & Computational Geometry 18(3):305–363, October 1997.
[76] Jonathan Richard Shewchuk. Delaunay Refinement Mesh Generation. Ph.D. thesis, School of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, May 1997. Available as Technical Report CMU-CS-97-137.
[77] Jonathan Richard Shewchuk. A Condition Guaranteeing the Existence of Higher-Dimensional Constrained Delaunay Triangulations. Proceedings of the Fourteenth Annual Symposium on Computational Geometry (Minneapolis, Minnesota), pages 76–85. Association for Computing Machinery, June 1998.
[78] Jonathan Richard Shewchuk. Tetrahedral Mesh Generation by Delaunay Refinement. Proceedings of the Fourteenth Annual Symposium on Computational Geometry (Minneapolis, Minnesota), pages 86–95. Association for Computing Machinery, June 1998.

[79] J. Steffan, J. Colohan, and T. Mowry. Architectural support for thread-level data speculation. Technical Report 97-188, School of Computer Science, Carnegie Mellon University, November 1997.
[80] J. G. Steffan and T. C. Mowry. The potential for using thread-level data speculation to facilitate automatic parallelization. Proc. 4th Symp. on High Performance Computer Architecture (Las Vegas), pages 2–13. IEEE, February 1998.


[81] C. Stidham, D. Dreger, B. Romanowicz, M. Antolik, and S. Larsen. Investigating the 3D structure of the San Francisco Bay area. Eos Transactions AGU 79:F597, 1998.
[82] Alan Su, Francine Berman, Richard Wolski, and Michelle Mills Strout. Using AppLeS to schedule a distributed visualization tool on the computational grid. Technical Report CS99-609, University of California San Diego, January 1999.
[83] W. W. Symes. Layered velocity inversion: A model problem in reflection seismology. SIAM Journal on Mathematical Analysis 22:680–716, 1991.
[84] David Tennenhouse and David Wetherall. Towards an active network architecture. Computer Communication Review 26(2):5–18, August 1995.
[85] C. Upson, T. Faulhaber, D. Kamins, et al. The Application Visualization System: A computational environment for scientific visualization. IEEE Computer Graphics and Applications 9(4):30–42, July 1989.
[86] J.E. Vidale and D.V. Helmberger. Elastic finite-difference modeling of the 1971 San Fernando, California, earthquake. Bull. Seism. Soc. Am. 78:122–141, 1988.
[87] T. von Eicken, D. Culler, S. Goldstein, and K. Schauser. Active messages: a mechanism for integrated communication and computation. Proc. 19th Intl. Conf. on Computer Architecture, pages 256–266, May 1992.
[88] D. J. Wald and T. H. Heaton. Spatial and temporal distribution of slip for the 1992 Landers, California, earthquake. Bull. Seism. Soc. Am. 84:668–691, 1994.
[89] D.J. Wald and R.W. Graves. The seismic response of the Los Angeles Basin, California. Bull. Seism. Soc. Am. 88:337–356, 1998.
[90] D.J. Wald, T.H. Heaton, and K.W. Hudnut. The slip history of the 1994 Northridge, California, earthquake determined from strong-motion, teleseismic, GPS, and leveling data. Bull. Seism. Soc. Am. 86:S49–S70, 1996.
[91] Rich Wolski. Forecasting network performance to support dynamic scheduling using the Network Weather Service. Proceedings of the 6th High-Performance Distributed Computing Conference (HPDC97), pages 316–325, August 1997. Extended version available as UCSD Technical Report TR-CS96-494.
[92] K. Yomogida and J.T. Etgen. 3-D wave propagation in the Los Angeles basin for the Whittier-Narrows earthquake. Bull. Seism. Soc. Am. 83:1325–1344, 1993.
[93] Y. Zeng and J.G. Anderson. A composite source modeling of the 1994 Northridge earthquake using genetic algorithm. Bulletin of the Seismological Society of America 86:S71–S83, 1996.
[94] R. Zhou, F. Tajima, and P.L. Stoffa. Earthquake source parameter determination using genetic algorithms. Geophysical Research Letters 22(4):517–520, 1992.


Part E

Biographical Sketches of the Key Personnel

JACOBO BIELAK
Professor of Civil and Environmental Engineering
Department of Civil and Environmental Engineering
Carnegie Mellon University
Pittsburgh, PA 15213-3890

Jacobo Bielak is a Professor in the Department of Civil and Environmental Engineering and Director of the Computational Mechanics Laboratory at Carnegie Mellon University. He is also an Affiliated Scientist at the Southern California Earthquake Center. He received his Ph.D. from the California Institute of Technology. His primary research interests are in applied and computational mechanics, with particular emphasis on solid mechanics, earthquake engineering and engineering seismology, and structural acoustics, and on the development of numerical and computational techniques for the efficient solution of these and other problems governed by partial differential or integral equations on parallel architectures. He was principal investigator on the NSF Grand Challenges in High Performance Computing project entitled “Earthquake Ground Motion Modeling in Large Basins.” The section on soil-structure interaction contained in the National Earthquake Hazards Reduction Program “Recommended Provisions for the Development of Seismic Regulations for New Buildings” is based primarily on his work. He has done extensive work for engineering software development companies, including Swanson Analysis Systems, Inc. (now ANSYS, Inc.) and Algor, Inc. He was Convener of the NSF-sponsored Workshop on Scientific Supercomputing, Visualization, and Animation in Geotechnical Earthquake Engineering and Engineering Seismology (1994). He has served as Chairman of the Dynamics Committee of the Engineering Mechanics Division of the American Society of Civil Engineers, and as a member of the editorial boards of the Journal of Engineering Mechanics, Numerical Methods for Partial Differential Equations, and the Electronic Journal of Geotechnical Engineering.
He was a member of the International Organizing Committee of the 1997 International Symposium on Parallel Computing in Engineering and Science, as well as a member of the International Organizing Committee of the Second (1998) International Symposium on the Effect of Surface Geology on Seismic Motion and of the Scientific Committee of the Fourth (1999) International Conference on Theoretical and Computational Acoustics. He is currently a member of the Editorial Boards of the Journal of Geotechnical and Geoenvironmental Engineering and of the International Journal for Computational Civil and Structural Engineering, and a member of the International Association of Seismology and Physics of the Earth’s Interior/International Association of Earthquake Engineering Joint Working Group on Effects of Surface Geology.

Related publications
J. Bielak, J. Xu, and O. Ghattas, “Earthquake Ground Motion and Structural Response in Alluvial Valleys,” Journal of Geotechnical and Geoenvironmental Engineering, 125, pp. 413–423, 1999.
H. Bao, J. Bielak, O. Ghattas, L. F. Kallivokas, D. R. O’Hallaron, J. R. Shewchuk, and J. Xu, “Large-Scale Simulation of Elastic Wave Propagation in Heterogeneous Media on Parallel Computers,” Computer Methods in Applied Mechanics and Engineering, 152, pp. 85–102, 1998.
F. J. Sanchez-Sesma, R. Benites, and J. Bielak, “The Assessment of Strong Ground Motion—What Lies Ahead?” Paper 2014, Proc. 11th World Conf. on Earthquake Engineering, Acapulco, Mexico, pp. 23–28, June 1996.
J. Bielak, L. F. Kallivokas, J. Xu, and R. Monopoli, “Finite Element Absorbing Boundary for the Wave Equation in a Halfplane with an Application to Engineering Seismology,” Proc. of the Third International Conf. on Mathematical and Numerical Aspects of Wave Propagation (INRIA-SIAM), pp. 489–498, Mandelieu-la-Napule, France, April 1995.
J. Bielak, R. C. MacCamy, D. S. McGhee, and A. Barry, “Unified Symmetric BEM-FEM for Site Effects on Ground Motion—SH-Waves,” Journal of Engineering Mechanics, 117, pp. 2027–2048, 1991.

Additional publications
P. C. Jennings and J. Bielak, “Dynamics of Building-Soil Interaction,” Bulletin of the Seismological Society of America, 63, pp. 9–43, 1973.
J. Bielak and R. C. MacCamy, “An Exterior Interface Problem in Two-Dimensional Elastodynamics,” Quarterly of Applied Mathematics, 41, pp. 143–159, 1983.
J. Bielak and P. C. Christiano, “On the Effective Seismic Input for Nonlinear Soil-Structure Interaction Systems,” Earthquake Engineering and Structural Dynamics, 12, pp. 107–119, 1984.
A. Barry, J. Bielak, and R. C. MacCamy, “On Absorbing Boundary Conditions for Wave Propagation,” Journal of Computational Physics, 79, pp. 449–468, 1988.
J. Bielak, R. C. MacCamy, and X. Zeng, “Stable Coupling Methods for Interface Scattering Problems by Combined Integral Equations and Finite Elements,” Journal of Computational Physics, 119, pp. 374–384, 1995.


STEVEN M. DAY
Rollin and Caroline Eckis Professor of Seismology
Department of Geological Sciences
San Diego State University
San Diego, CA 92182

Current Positions
Rollin and Caroline Eckis Professor of Seismology, Department of Geological Sciences, San Diego State University (since 1988)
Visiting Research Geophysicist, Institute of Geophysics and Planetary Physics, University of California at San Diego (since 1995)

Previous Positions
Program Manager for Theoretical Geophysics, Maxwell Laboratories, Inc., 1983–1987
Research Geophysicist, S-Cubed, Inc., 1977–1983

Education
B.S. (Geology), University of Southern California, 1971
Ph.D. (Geophysics), University of California, San Diego, 1977

Professional Activity
Chairman, Earthquake Physics Working Group, Southern California Earthquake Center, 1998–present
Chairman, Strong Motion Working Group, Southern California Earthquake Center, 1993–1998
Chairman, High Performance Computing Committee, Southern California Earthquake Center, 1998–present
Member, Editorial Board, Bulletin of the Seismological Society of America, 1997–present
Convener, National Academy of Sciences Workshop on High Performance Computing in Seismology, 1994
Member, Steering Committee, Southern California Earthquake Center, 1993–present
Member, Committee on Seismology of the National Research Council, 1990–1996
Member, Seismic Review Panel, Air Force Technical Applications Center, 1989–1997
Member, Earthquake Initiative Review Panel, Lawrence Livermore National Laboratory, 1992
Member, Earthquake Ground Motion Review Panel, Nuclear Regulatory Commission, 1985–1991
Consultant, U.S. Congress Office of Technology Assessment, 1987

Related Publications
Magistrale, H., K. L. McLaughlin, and S. M. Day (1996). “A Geology Based 3-D Velocity Model of the Los Angeles Basin,” Bull. Seism. Soc. Am., Vol. 86, pp. 1161–1166.
Day, S.M., G. Yu, and D. Wald (1998). “Dynamic stress changes during earthquake rupture,” Bull. Seism. Soc. Am., 88, 512–522.
Day, S.M. (1998). “Efficient simulation of constant Q using coarse-grained memory variables,” Bull. Seism. Soc. Am., 88, pp. 1051–1062.
Harris, R.A., and S.M. Day (1999). “Dynamic 3D simulations of earthquakes on en echelon faults,” Geophysical Research Letters, accepted March 1999.
Magistrale, H., and S.M. Day (1999). “Three dimensional simulation of multi-segment thrust fault rupture,” Geophysical Research Letters, accepted March 1999.

Additional Publications
Day, S.M. (1982). “Three-dimensional finite difference simulation of fault dynamics: Rectangular faults with fixed rupture velocity,” Bull. Seism. Soc. Am., Vol. 72, pp. 705–727.
Day, S.M. (1982). “Three-dimensional simulation of spontaneous rupture: The effect of nonuniform prestress,” Bull. Seism. Soc. Am., Vol. 72, pp. 1881–1902.
Day, S.M. and J.B. Minster (1984). “Numerical simulation of attenuated wavefields using a Pade approximant method,” Geophys. J. R. astr. Soc., Vol. 78, pp. 105–118.
Harris, R. A., and S. M. Day (1993). “Dynamics of fault interaction: Parallel strike-slip faults,” J. Geophys. Res., Vol. 98, pp. 4461–4472.
Harris, R. A., and S. M. Day (1997). “Effects of a low-velocity zone on a dynamic rupture,” Bull. Seism. Soc. Am., 87, pp. 1267–1280.


GREG FOSS
Pittsburgh Supercomputing Center
4400 Fifth Ave.
Pittsburgh, PA 15213

Greg Foss is a computer animation specialist who has provided scientific visualization graphics for the Pittsburgh Supercomputing Center since the fall of 1993. He works with a wide variety of scientific researchers, producing data visualizations for print, video, and stereoscopic viewing. His video "Simulation of 1994 Northridge Earthquake Aftershock" was accepted into the prestigious Electronic Theatre presentation at ACM SIGGRAPH's annual conference in Los Angeles in 1997. He designed and maintains a World Wide Web gallery of PSC scientific visualization projects. Previously, he worked for five years in commercial video production, creating three-dimensional computer animation for broadcast and corporate communication. He holds a Master of Arts in Computer Graphics and Animation from The Ohio State University. Some of Greg's work can be viewed at www.pmw.org/~foss and www.psc.edu/research/graphics/gallery/gallery.html.

Exhibitions and Illustrations
"Simulation of 1994 Northridge Earthquake Aftershock," animation. Electronic Theatre, ACM SIGGRAPH Annual Conference and Exhibition, Los Angeles, August 1997.
"Helix-Helix Packing," image illustrating the article "Computer Graphics." The World Book Encyclopedia (1997 ed.), World Book Publishing, Chicago, 1997.
"Pump Up the Volume," animation included in the video Journey into a Living Cell, Carnegie Science Center's Buhl Planetarium in collaboration with Carnegie Mellon University, Pittsburgh, 1996.
"Formation of Accretion Disks and Jets Around Black Holes," animation shown in the Screening Rooms, ACM SIGGRAPH Annual Conference and Exhibition, New Orleans, 1996.
"Comet Shoemaker-Levy 9 and Planet Jupiter: An Introductory Representation of the Impact," "Tornado Watch (Run Toto, Run!)," and "Pump Up the Volume," animations shown in the Screening Rooms, ACM SIGGRAPH Annual Conference and Exhibition, Orlando, Florida, 1994.


OMAR GHATTAS
Associate Professor
Computational Mechanics Laboratory
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
www.cs.cmu.edu/~oghattas

Omar Ghattas is Associate Professor and Director of the Computational Mechanics Laboratory at Carnegie Mellon University. He has appointments or affiliations with the Biomedical Engineering Program, the Department of Civil and Environmental Engineering, the Institute for Complex Engineered Systems, and the School of Computer Science. He also serves on the technical advisory board of Algor, Inc., a CSM/CFD software company. He received his B.S. in civil engineering in 1984, and his M.S. and Ph.D. in computational mechanics in 1986 and 1988, all from Duke University. He served as a postdoctoral research associate at Duke, and joined the faculty of Carnegie Mellon University in 1989. His general research interests are in high-performance scientific computation, with particular emphasis on simulation and optimization of complex systems governed by fluid- and solid-mechanical phenomena. He specializes in large-scale problems and their solution on parallel supercomputers. His recent research relevant to this proposal includes the design and implementation of parallel algorithms for solution, sensitivity analysis, and optimization of large-scale unstructured-mesh PDE problems; numerical methods for optimal design, optimal control, and inverse problems; and variational methods for nonlinear fluid-solid interaction. His recent activities relevant to this proposal include serving as a co-principal investigator on two National Science Foundation HPCC projects (the Earthquake Ground Motion Grand Challenge and the Computer-Assisted Surgery National Challenge), involvement in planning for the new SSI/APEX initiative in the numerical algorithms area, and participation on NSF advisory review panels for the NPACI and NCSA programs.

Selected publications
E. Schwabe, G. Blelloch, A. Feldmann, O. Ghattas, J. Gilbert, G. Miller, D. O'Hallaron, J. Shewchuk, and S. Teng, "A Separator-Based Framework for Automated Partitioning and Mapping of Parallel Algorithms for Numerical Solution of PDEs," in Issues and Obstacles in the Practical Implementation of Parallel Algorithms and the Use of Parallel Machines, Dartmouth, June 1992.
J.R. Shewchuk and O. Ghattas, "A Compiler for Parallel Finite Element Methods with Domain-Decomposed Unstructured Meshes," Contemporary Mathematics, Vol. 180, pp. 445–450, 1994.
O. Ghattas and X. Li, "A Variational Finite Element Method for Nonlinear Fluid-Solid Interaction," Journal of Computational Physics, Vol. 121, pp. 347–356, 1995.
J.F. Antaki, O. Ghattas, G.W. Burgreen, and B. He, "Computational Flow Optimization of Rotary Blood Pump Components," Artificial Organs, Vol. 19, No. 7, pp. 608–615, 1995.
H. Bao, J. Bielak, O. Ghattas, D. O'Hallaron, L. Kallivokas, J. Shewchuk, and J. Xu, "Earthquake Ground Motion Modeling on Parallel Computers," Proceedings of Supercomputing '96 (Pittsburgh, PA, Nov. 1996).
O. Ghattas and C.E. Orozco, "A Parallel Reduced Hessian SQP Method for Shape Optimization," in Multidisciplinary Design Optimization: State-of-the-Art, N.M. Alexandrov and M.Y. Hussaini, eds., SIAM, pp. 133–152, 1997.
O. Ghattas and J. Bark, "Large-Scale SQP Methods for Optimization of Navier-Stokes Flows," in Large-Scale Optimization, IMA Volumes in Mathematics and its Applications, L. Biegler and A.R. Conn, eds., Springer-Verlag, 1997.
O. Ghattas and J. Bark, "Optimal Control of Two- and Three-Dimensional Incompressible Navier-Stokes Flows," Journal of Computational Physics, Vol. 136, pp. 231–244, 1997.
O. Ghattas and X. Li, "Domain Decomposition Methods for Sensitivity Analysis of a Nonlinear Aeroelasticity Problem," International Journal of Computational Fluid Dynamics, Vol. 11, pp. 113–130, 1998. (Invited.)
G. Biros and O. Ghattas, "Parallel Domain Decomposition Methods for Optimal Control of Viscous Incompressible Flows," Proceedings of Parallel CFD '99, Williamsburg, VA, May 1999.


HAROLD MAGISTRALE
Adjunct Professor of Geology
Department of Geological Sciences
San Diego State University
San Diego, CA 92182-1020

Harold Magistrale is an Adjunct Professor of Geology at San Diego State University. He earned a B.S. in Earth Sciences from the University of California, Santa Cruz, in 1979, and a Ph.D. in Geophysics from the California Institute of Technology in 1990. Dr. Magistrale has been at SDSU since 1990. His primary research interests are the crustal structure, seismotectonics, and seismic hazard of southern California.

Related Publications
Magistrale, H. and S. Day, 1999, "3D simulations of multi-segment thrust fault rupture," Geophys. Res. Lett., in press.
Magistrale, H., R. Graves, and R. Clayton, 1998, "A standard three-dimensional seismic velocity model for southern California: Version 1," EOS Trans. AGU 79, p. F605.
Magistrale, H., K. McLaughlin, and S. Day, 1996, "A geology-based 3-D velocity model of the Los Angeles basin sediments," Bull. Seism. Soc. Am. 86, 1161–1166.
van de Vrugt, H., S. Day, H. Magistrale, and J. Wedburg, 1996, "Inversion of local earthquake data for site response in San Diego, California," Bull. Seism. Soc. Am. 86, 1147–1158.
Magistrale, H., H. Kanamori, and C. Jones, 1992, "Forward and inverse three-dimensional P-wave velocity models of the southern California crust," J. Geophys. Res. 97, 14,115–14,136.

Additional Publications
Sanders, C. and H. Magistrale, 1997, "Segmentation of the northern San Jacinto fault zone, southern California," J. Geophys. Res. 102, 27,453–27,467.
Magistrale, H. and H. Zhou, 1996, "Lithologic control of the depth of earthquakes in southern California," Science 273, 639–643.
Magistrale, H. and T. Rockwell, 1996, "The central and southern Elsinore fault zone, southern California," Bull. Seism. Soc. Am. 86, 1793–1803.
Magistrale, H. and C. Sanders, 1996, "Evidence from precise earthquake hypocenters for segmentation of the San Andreas fault in San Gorgonio Pass," J. Geophys. Res. 101, 3031–3044.
Magistrale, H. and C. Sanders, 1995, "P wave image of the Peninsular Ranges batholith, southern California," Geophys. Res. Lett. 22, 2549–2552.


DAVID R. O'HALLARON
Associate Professor of Computer Science and Electrical and Computer Engineering
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213-3890

David O'Hallaron is an Associate Professor of Computer Science and Electrical and Computer Engineering at Carnegie Mellon University. He received his Ph.D. in Computer Science from the University of Virginia. After a stint at the GE R&D Center, he joined the Carnegie Mellon faculty in 1989. Dr. O'Hallaron works on tools and applications for the Internet and high-performance distributed systems. While at Carnegie Mellon he has helped lead the teams that developed the iWarp computer system with Intel, the Fx parallelizing Fortran compiler, the Archimedes tool chain for developing large-scale finite element simulations on parallel computers, and the NSF Grand Challenge Quake Project to model earthquake-induced ground motion in the Los Angeles basin. In 1998 the CMU School of Computer Science awarded him and the other members of the Quake Project the Allen Newell Medal for Research Excellence.

Related publications
P. Dinda and D. O'Hallaron, "An Evaluation of Linear Models for Host Load Prediction," Proc. 8th IEEE Symposium on High-Performance Distributed Computing (HPDC-8), August 1999, to appear.
B. Lowekamp, D. O'Hallaron, and T. Gross, "Direct Network Queries for Discovering Network Resource Properties in a Distributed Environment," Proc. 8th IEEE Symposium on High-Performance Distributed Computing (HPDC-8), August 1999, to appear.
M. Aeschlimann, P. Dinda, L. Kallivokas, J. Lopez, B. Lowekamp, and D. O'Hallaron, "Preliminary Report on the Design of a Framework for Distributed Visualization," Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'99), June 1999, Las Vegas, NV, invited paper.
D. O'Hallaron, J. Shewchuk, and T. Gross, "Architectural Implications of a Family of Irregular Computations," Fourth International Symposium on High Performance Computer Architecture (Las Vegas, Nevada), IEEE, February 1998, 80–89.
H. Bao, J. Bielak, O. Ghattas, L. Kallivokas, D. O'Hallaron, J.R. Shewchuk, and J. Xu, "Large-scale Simulation of Elastic Wave Propagation in Heterogeneous Media on Parallel Computers," Computer Methods in Applied Mechanics and Engineering 152 (Jan. 1998), 85–102.

Additional publications
P. Dinda, B. Lowekamp, L. Kallivokas, and D. O'Hallaron, "The Case for Prediction-Based Best-Effort Real-Time Systems," Proc. of the 7th International Workshop on Parallel and Distributed Real-Time Systems (WPDRTS 1999), Lecture Notes in Computer Science 1586, San Juan, PR, Springer-Verlag, 1999, 309–318.
T. Gross, D. O'Hallaron, and J. Subhlok, "Task Parallelism in a High Performance Fortran Framework," IEEE Parallel & Distributed Technology 2, 3 (1994), 16–26.
D. Nicol and D. O'Hallaron, "Efficient Algorithms for Mapping Pipelined and Parallel Computations," IEEE Transactions on Computers 40, 3 (Mar. 1991), 295–306.


JONATHAN SHEWCHUK
Assistant Professor of Computer Science
Department of Electrical Engineering and Computer Science
University of California at Berkeley
Berkeley, CA 94720-1776

Jonathan Shewchuk received a B.Sc. in Physics and Computing Science from Simon Fraser University in 1990 and a Ph.D. in Computer Science from Carnegie Mellon University in 1997, winning the Computer Science Department's Doctoral Dissertation Award. Dr. Shewchuk's research interests include large-scale scientific computing, computational geometry (especially mesh generation and numerical robustness), numerical methods, and compilers for parallel and numerical computing. His emphasis is on implementing practical scientific computing algorithms and tools that solve real engineering problems. The products of his research include Archimedes, a software system for performing large-scale finite element simulations on parallel computers. Archimedes has been the linchpin of the Quake Project, a four-year interdisciplinary investigation of earthquake-induced ground motion in the Los Angeles Basin funded under the auspices of the High Performance Computing and Communications program. The first component of Archimedes released to the public is a triangular mesh generator named Triangle, which has achieved widespread use in many different applications in academia and industry and has been licensed for inclusion in ten commercial products. Dr. Shewchuk was recently awarded an NSF Faculty Early Career Development Award to support his work in mesh generation.

Related publications
J.R. Shewchuk, "Tetrahedral Mesh Generation by Delaunay Refinement," in Proceedings of the Fourteenth Annual Symposium on Computational Geometry (Minneapolis, Minnesota), ACM, pp. 86–95, June 1998.
D. O'Hallaron, J.R. Shewchuk, and T. Gross, "Architectural Implications of a Family of Irregular Applications," Proceedings of the Fourth International Symposium on High-Performance Computer Architecture (Las Vegas, Nevada), IEEE Press, pages 80–89, February 1998.
H. Bao, J. Bielak, O. Ghattas, L.F. Kallivokas, D.R. O'Hallaron, J.R. Shewchuk, and J. Xu, "Large-scale Simulation of Elastic Wave Propagation in Heterogeneous Media on Parallel Computers," Computer Methods in Applied Mechanics and Engineering 152(1–2):85–102, 22 January 1998.
J.R. Shewchuk, "Adaptive Precision Floating-Point Arithmetic and Fast Robust Geometric Predicates," Discrete & Computational Geometry 18(3), pp. 305–363, October 1997.
J.R. Shewchuk, "Triangle: Engineering a 2D Quality Mesh Generator and Delaunay Triangulator," in Applied Computational Geometry: Towards Geometric Engineering (M.C. Lin and D. Manocha, editors), Lecture Notes in Computer Science, volume 1148, pp. 203–222, Springer-Verlag, Berlin, May 1996.

Additional publications
J.R. Shewchuk, "A Condition Guaranteeing the Existence of Higher-Dimensional Constrained Delaunay Triangulations," in Proceedings of the Fourteenth Annual Symposium on Computational Geometry (Minneapolis, Minnesota), ACM, pp. 76–85, June 1998.
K.Z. Haigh, J.R. Shewchuk, and M.M. Veloso, "Exploiting Domain Geometry in Analogical Route Planning," Journal of Experimental and Theoretical Artificial Intelligence 9:509–541, 1997.
J.R. Shewchuk, "An Introduction to the Conjugate Gradient Method Without the Agonizing Pain," Technical Report CMU-CS-94-125, School of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, March 1994.
J.R. Shewchuk and O. Ghattas, "A Compiler for Parallel Finite Element Methods with Domain-Decomposed Unstructured Meshes," Proceedings of the Seventh International Conference on Domain Decomposition Methods in Scientific and Engineering Computing (D.E. Keyes and J. Xu, editors), volume 180 of Contemporary Mathematics, pages 445–450, American Mathematical Society, Providence, Rhode Island, October 1993.
E.J. Schwabe, G.E. Blelloch, A. Feldmann, O. Ghattas, J.R. Gilbert, G.L. Miller, D.R. O'Hallaron, J.R. Shewchuk, and S.-H. Teng, "A Separator-Based Framework for Automated Partitioning and Mapping of Parallel Algorithms for Numerical Solution of PDEs," Proceedings of the First Annual Summer Institute on Issues and Obstacles in the Practical Implementation of Parallel Algorithms and the Use of Parallel Machines in Parallel Computation (DAGS/PC '92), pages 48–62, Dartmouth Institute for Advanced Graduate Studies, June 1992.

