DOCUMENTATION OF SOFTWARE PACKAGE COMPSYN sxv3.11: PROGRAMS FOR EARTHQUAKE GROUND MOTION CALCULATION USING COMPLETE 1-D GREEN’S FUNCTIONS

by

Paul SPUDICH
U.S. Geological Survey, Menlo Park, CA 94025, USA

Lisheng XU
Institute of Geophysics, China Seismological Bureau, Beijing 100018, PRC

Published in a CD accompanying the International Handbook of Earthquake and Engineering Seismology, International Association of Seismology and Physics of the Earth's Interior, Academic Press, 2002.

Date of this documentation: July 9, 2002


INTRODUCTION
DISCLAIMERS
    DISCLAIMER 1 – THE ACCURACY OF THIS SOFTWARE IS NOT WARRANTED
    DISCLAIMER 2 – WE CANNOT SOLVE ALL YOUR PROBLEMS
CONVENTIONS IN THIS DOCUMENT
    FONT CONVENTIONS
    SOURCES, RECEIVERS, AND RECIPROCITY
    EXAMPLE INPUT FILES DIFFER FROM SAMPLE DATA FILES
    LONG FILE NAMES ARE NOT REQUIRED
OVERVIEW OF THE APPLICATIONS
    STRENGTHS AND WEAKNESSES OF THE COMPSYN PACKAGE
    APPLICATION FLOWCHART – WHAT GOES INTO WHAT
    BRIEF DESCRIPTION OF THE APPLICATIONS
        OLSON
        XOLSON
        TFAULT
        SLIP
        SEESLO
VALIDATIONS
    WHOLESPACE TESTS
    POINT AND FINITE FAULTS IN LAYERED MEDIA
        Comparisons with Bouchon
        PEER Validations
        Other Tests
SITUATIONS IN WHICH COMPSYN CAN FAIL
    OBSERVERS VERY CLOSE TO A SURFACE FAULT RUPTURE
    THE PROBLEM OF EQUAL FAULT AND OBSERVER DEPTHS
    INADEQUATE SAMPLING FOR OBSERVERS VERY CLOSE TO A FAULT
NOTES ON THE FORTRAN CODE
    SYSTEM REQUIREMENTS
    POSSIBLE PECULIARITIES OF THE FORTRAN
        Upper / Lower Case Portability Problem
        BLOCK DATA Problem
        Expected Numerical Accuracy of the Calculations
        Vestigial Comment Lines
        Disabled Feature of the Code
        Comment Lines – Nonportable Debug
        Comment Lines – Portable Debug
        Compilation Warnings and Options That You Should Ignore
    GENERAL STYLE
        File Naming Conventions – Stems and Extensions
    TYPICAL STYLE OF USER-PREPARED DATA FILES
OLSON DETAILED INSTRUCTIONS
    MATHEMATICAL DEVELOPMENT
    HOW TO PREPARE AN OLSON RUN
        OLSON User-Prepared Data File *.OLD
        How to Set Up a Problem
        OLSON Terminal Session
    TRIAL RUNS THE USER SHOULD PERFORM
        Trial Run 1 – Generating and Checking the Grid
        Trial Run 2 – Verifying Numerical Stability
        Trial Run 3 – Estimating Computation Time
        Trial Run 4 – Testing the Adequacy of nk2
    HOW THE GRID IS PROGRESSIVELY TRUNCATED, AND A LVZ WARNING
    SETTING OLSON DIMENSIONS
    IDENTITIES OF OUTPUT VARIABLES
XOLSON DETAILED INSTRUCTIONS
    XOLSON FUNCTION
    XOLSON DIMENSIONS – WHAT TO DO IF XOLSON DOES NOT FIT IN RAM
        Dimension Parameters and How to Set Them
    XOLSON TERMINAL SESSION
TFAULT DETAILED INSTRUCTIONS
    USER-PREPARED DATA FILE (.TFD FILE)
    TFAULT DIMENSIONS
    TFAULT TERMINAL SESSION
SLIP DETAILED INSTRUCTIONS
    SLIP USER-PREPARED DATA FILE (.SLD FILE)
    SUGGESTIONS FOR SPECIFYING SLIP MODELS
    SLIP TERMINAL SESSION
    SLIP DIMENSIONS
SEESLO DETAILED INSTRUCTIONS
    SEESLO TERMINAL SESSION
    FORMAT OF THE SEESLO OUTPUT TIME SERIES
WHAT TO DO WHEN COMPSYN FAILS
    EXECUTION PROBLEMS
        If They Do Not Fit in Your Computer's Memory
        Fatal Errors, Warnings, and Notes
        Finding Errors Using the Output Print Files
    WHAT TO DO IF YOUR SEISMOGRAMS ARE UGLY
        Always try these steps first
        Energy arriving before P or after the last expected surface wave
        Pulse in velocity or a step in displacement at or shortly after t=0
        High frequency oscillations following impulsive phases
        High frequency ringing that extends through the entire seismogram
        If nothing else works
REFERENCES


INTRODUCTION
The COMPSYN package contains FORTRAN applications for calculating synthetic ground motion seismograms for hypothetical fault ruptures occurring on faults of finite spatial extent. Also included in the package are applications for displaying the seismograms, and test input and output files. The COMPSYN applications use the numerical techniques of Spudich and Archuleta (1987) to evaluate the representation theorem integrals on a fault surface. The Green's functions for wave propagation are calculated using the Discrete Wavenumber / Finite Element (DWFE) method of Olson et al. (1984). The applications assume that the Earth model is defined in a 3-dimensional Cartesian space, and that the Earth structure is a function of one dimension (z, depth) and has a free surface at z = 0.

These applications have two primary strengths. First, the Green's functions include the complete response of the Earth structure, so that all P and S waves, surface waves, leaky modes, and near-field terms are included in the calculated seismograms. The main omission is that anelastic attenuation cannot be modelled using these codes. Second, the codes are computationally fairly fast compared to 3-dimensional codes, so that the user can simulate ground motions from many hypothetical rupture models in minimal time.

DISCLAIMERS

Disclaimer 1 – The accuracy of this software is not warranted
The software described in this document has been developed for research purposes. The authors have tried to ensure that the basic physics and algorithms of the software are correct. The authors have implemented some tests within the software to verify that the user's input is reasonable. However, because the software was developed as a research tool, there are many possible uses of the software for which it has not been checked, and there are many possible combinations of input parameters that have not been tested. In addition, there are some algorithms in the software that give results whose accuracy is controlled by user input, and some algorithms used in this software package produce more accurate results than do other algorithms in this package. The software is known to be inaccurate in some cases (see documentation below). It is the complete responsibility of the users of this software to verify that the software produces results that are sufficiently accurate to be acceptable to the user.

Although this software has been used by the US Geological Survey and the China Seismological Bureau, no warranty, expressed or implied, is made by the USGS or the United States Government or the China Seismological Bureau as to the accuracy and functioning of the software and related program material, nor shall the fact of distribution constitute any such warranty, and no responsibility is assumed by the USGS or the CSB in connection therewith.

The user is strongly encouraged to use the software to attempt to duplicate published theoretical seismograms calculated for extended earthquake sources. In this way the user will obtain a better appreciation for the accuracy or inaccuracy of the software.

Disclaimer 2 – We cannot solve all your problems
The users will undoubtedly discover errors in the codes, mistakes in the documentation, and other problems. We hope that the users will notify us of such errors. We will try to offer suggestions to solve user problems when we are notified, but owing to limitations on our time we cannot promise to supply replacement code that repairs every error brought to our attention, and we cannot promise to review user input files to find user mistakes. Our release of this software is not an implicit promise to maintain the software forever. We apologize in advance for our probable inability to respond to all queries. If the codes malfunction, we ask the user to be sure to read the relevant parts of the documentation thoroughly before contacting us. In addition, the user should look for error reports and solutions that might be posted at http://quake.usgs.gov/~spudich/compsyn.html before contacting us about problems.

CONVENTIONS IN THIS DOCUMENT

Font Conventions
• user input from the terminal is Courier bold
• user input from a file is Courier
• application output is Helvetica
• CR means carriage return

Sources, Receivers, and Reciprocity
Because of the form of the representation theorem used, the terms "source" and "receiver" are ambiguous. Hence, we use the term "observer" to refer to the locations (usually on the Earth's surface) where we will ultimately want the extended-source seismograms calculated, and we use the term "fault" to refer to the places in the Earth's interior where we will ultimately place the earthquake rupture. Because OLSON uses Green's function reciprocity, it calls the observer locations the "sources", and it calls the points at depth where the fault will be the "receivers".

Example Input Files Differ From Sample Data Files
Examples of inputs are given below. Often these examples differ from the sample data files that might be supplied with this software package. This difference occurs because the sample data files sometimes do not illustrate some feature of the software, so in this document we have chosen different input to illustrate the desired feature.

Long File Names are Not Required
In the document below we sometimes use example file names like "some_crazy_model.sld". The version of DOS found in Windows NT 4.0 accepts long file names. File names are not required to be this long, and if your operating system does not accept such long names, don't use them.

OVERVIEW OF THE APPLICATIONS

Strengths and Weaknesses of the COMPSYN Package
There are two ways the COMPSYN package differs from most other software applications for calculating ground motions caused by finite sources in vertically varying media, and these differences give the COMPSYN package strengths and weaknesses compared to the other methods.

First, the COMPSYN package uses the discrete-wavenumber finite-element method of Olson et al. (1984) (software application OLSON) to calculate Green's functions for the medium. The strengths of the OLSON method are:
• it calculates the complete response of an arbitrarily complicated Earth structure
• it is computationally efficient for complicated structures.
The weaknesses of the OLSON code are:
• it is inefficient for a simple Earth structure consisting of a few homogeneous layers, since the algorithm used to handle a complicated structure is applied to the simple structure as well
• it does not include the effects of anelastic attenuation
• it shares with other wavenumber integration methods the inaccurate calculation of results for sources and observers at the same depth.
Consequently, if you want to calculate ground motions up to 20 Hz in an Earth model that consists of one layer over a halfspace, COMPSYN will be at best inefficient and at worst impossibly slow. Homogeneous-layer methods, such as reflectivity (Kennett, 1983), would be much more efficient for such a calculation. However, if you want to calculate complete synthetics for a model consisting of a series of gradients in velocity, as might occur in a deep sedimentary basin, COMPSYN will be competitive with or faster than a reflectivity-based code that uses many layers to simulate the velocity gradients.

Second, the COMPSYN package uses a crudely adaptive integration technique when integrating the product of the slip model and the Green's functions over the fault surface (see Spudich and Archuleta, 1987, pp. 231-236, 251). The density of sample points on the fault is proportional to frequency. This integration scheme obviates the need to perform some sort of time-domain interpolation of the theoretical Green's functions (see Spudich and Archuleta, 1987, pp. 235-237). Time-domain interpolation methods have been used by most investigators, such as Hartzell and Heaton (1983), Wald et al. (1991), and other papers by these authors and their colleagues. Of course, in many papers there is no interpolation of the Green's functions at all; a simple point-source summation is performed, which is the least accurate of the possible methods if used outside its permissible frequency band (see Spudich and Archuleta, 1987, pp. 248-251).

The strengths of the COMPSYN integration method are:
• it is the most accurate of the methods that use explicit calculation of Green's functions
• it is equally accurate for waves having different phase velocities (i.e., S waves and surface waves)
• it is optimally efficient at every frequency
• it is very efficient if you want to calculate seismograms for many slip models in the same Earth structure.

The weaknesses of the COMPSYN integration method are:
• it is computationally slow
• it uses large disk storage for the Green's functions
• it does not adapt its quadrature to sample densely the portions of the fault nearest the observer, which causes inaccuracies for observers very close to surface rupture.
Before using COMPSYN, the user should be sure to read the sections Validations and Situations in Which COMPSYN Can Fail.
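The frequency-proportional sampling idea can be illustrated with a short sketch. This is not COMPSYN's actual quadrature: the function name, the 5-points-per-wavelength figure, and the use of the S velocity are our illustrative assumptions, though the floor of 100 points echoes the minimum used in the PEER validation runs described later.

```python
import math

def quadrature_points(fault_length_km, freq_hz, beta_km_s,
                      pts_per_wavelength=5, n_min=100):
    """Illustrative count of integration points along one fault dimension."""
    if freq_hz <= 0.0:
        return n_min
    # Shortest S wavelength at this frequency; sampling a fixed number of
    # points per wavelength makes point density grow linearly with frequency.
    wavelength_km = beta_km_s / freq_hz
    n = math.ceil(pts_per_wavelength * fault_length_km / wavelength_km)
    return max(n_min, n)
```

At low frequencies the floor dominates; at 10 Hz a 30 km fault in a 3.5 km/s medium already needs several hundred points per dimension, which illustrates why the cost of the fault integration grows with the maximum frequency requested.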

Application Flowchart – What Goes Into What
The sequence in which the applications are run, and the related input and output files, are shown in the flowcharts in Figures COMPSYN-1 and COMPSYN-2.

PROGRAM FLOW – COMPLETE 1D SYNTHETIC PACKAGE, p. 1

OLSON
  reads:   *.OLD – user-prepared file containing Earth velocity structure and various run parameters for OLSON
  creates: *.OLH – various output parameters
           *.OLO – Green's functions in frequency, wavenumber, and depth domain
           *.OLP – print file summarizing run parameters and results

XOLSON
  reads:   the OLSON output files
  creates: *.XOO – Green's functions in frequency, wavenumber, and depth domain, with order rearranged
           *.XOP – print file summarizing run parameters and results

TFAULT
  reads:   *.XOO
           *.TFD – user-prepared file containing fault geometry definition, observer locations, and sampling parameters
  creates: *.TFO – traction Green's functions evaluated on the fault for various observer locations
           *.TFP – print file summarizing run parameters and results

(continued on p. 2)

Figure COMPSYN-1. Flowchart of the first half of the COMPSYN package, showing the sequence in which applications are run, and showing which files are created and/or used by which applications. Boxes with dashed borders are user-prepared text files. Boxes with heavy black borders are applications. Boxes with light black borders are files created by an application. File names are shown as *.xxy, where xx shows the identity of the creating application and y shows the type of file (see Tables 1 and 2 below).

PROGRAM FLOW – COMPLETE 1D SYNTHETIC PACKAGE, p. 2 (continued)

SLIP
  reads:   *.TFO
           *.SLD – user-prepared file containing a particular earthquake slip distribution
  creates: *.SLO – spectrum of ground motion seismograms
           *.SLM – spectrum of slip distribution on fault
           *.SLP – print file summarizing run parameters and results

SEESLO
  reads:   *.SLO
  creates: *nn.PS – PostScript files showing plots of calculated seismograms
           *.SEP – print file summarizing run parameters and results
           *(X|Y|Z)(V|D)nn – formatted X, Y, or Z component velocity (V) or displacement (D) time series
Figure COMPSYN-2. Flowchart of the second half of the COMPSYN package, showing the sequence in which applications are run, and showing which files are created and/or used by which applications. Boxes with dashed borders are user-prepared text files. Boxes with heavy black borders are applications. Boxes with light black borders are files created by an application. File names are shown as *.xxy, where xx shows the identity of the creating application and y shows the type of file (see Tables 1 and 2 below).
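The *.xxy naming convention above lends itself to a small lookup table. The sketch below (in Python; the table and the helper name are ours, with descriptions condensed from the flowcharts) maps a file extension to the application that creates it:

```python
# Hypothetical helper: map a COMPSYN file extension to (creator, contents),
# following Figures COMPSYN-1 and COMPSYN-2. "user" marks user-prepared files.
COMPSYN_EXTENSIONS = {
    "OLD": ("user", "Earth velocity structure and OLSON run parameters"),
    "OLH": ("OLSON", "various output parameters"),
    "OLO": ("OLSON", "Green's functions in frequency, wavenumber, depth domain"),
    "OLP": ("OLSON", "print file summarizing run parameters and results"),
    "XOO": ("XOLSON", "Green's functions with order rearranged"),
    "XOP": ("XOLSON", "print file summarizing run parameters and results"),
    "TFD": ("user", "fault geometry, observer locations, sampling parameters"),
    "TFO": ("TFAULT", "traction Green's functions evaluated on the fault"),
    "TFP": ("TFAULT", "print file summarizing run parameters and results"),
    "SLD": ("user", "particular earthquake slip distribution"),
    "SLO": ("SLIP", "spectrum of ground motion seismograms"),
    "SLM": ("SLIP", "spectrum of slip distribution on fault"),
    "SLP": ("SLIP", "print file summarizing run parameters and results"),
    "SEP": ("SEESLO", "print file summarizing run parameters and results"),
}

def creator(filename):
    """Return the application (or 'user') that creates a file, by extension."""
    ext = filename.rsplit(".", 1)[-1].upper()
    return COMPSYN_EXTENSIONS[ext][0]
```

For example, creator("some_crazy_model.sld") identifies the file as user-prepared.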

Brief Description Of The Applications

OLSON
OLSON calculates Green's functions in the wavenumber and frequency domain for laterally homogeneous velocity structures which consist of piecewise linear functions of depth with possible discontinuities. OLSON is a modified version of the DWFE code of Olson et al. (1984). OLSON is the most computationally time-consuming application in the COMPSYN package, but it only has to be run once. Although the goal of the COMPSYN package is to calculate seismograms for a fault having finite extent, the geometry of the extended source does not have to be defined before running OLSON. OLSON simply calculates Green's functions. However, ultimately we need to know the Green's functions evaluated for surface observers as functions of position on the desired finite-sized fault. These functions will be determined in TFAULT by Bessel-transforming the OLSON output.

XOLSON
XOLSON is a utility program that rearranges the contents of OLSON output files. XOLSON is fairly fast and it only has to be run once.

TFAULT
TFAULT reads the user's definition of a fault plane and the output files (.XOO files) created by XOLSON. It Bessel-transforms the XOLSON output from the wavenumber-frequency domain to the space-frequency domain, and it evaluates these Green's functions on a user-defined grid of points on the fault plane. An example of the output of TFAULT is shown in Figure 12 of Spudich and Archuleta (1987). TFAULT can be computationally time consuming.

SLIP
SLIP reads a rupture model from the .SLD file, and it reads the TFAULT output files. SLIP forms the dot product of the slip and Green's functions (e.g., Figure 13b of Spudich and Archuleta, 1987) and SLIP integrates this function over the fault to produce the spectra of ground motions at the observer locations.

SEESLO
SEESLO Fourier-transforms the seismic spectra to the time domain, filters the seismograms according to user-input parameters, plots the seismograms, and writes the filtered velocity and displacement time series to ASCII output files for convenient user post-processing.

VALIDATIONS
In this section we summarize the tests we have specifically conducted on the current or earlier versions of COMPSYN. The reader is invited to perform additional tests and send us the results. In the following we use LOH to mean "layer-over-a-halfspace."

Wholespace tests
COMPSYN can be tested in a wholespace mode by burying the observer far from the free surface in a uniform medium (specified in the .OLD file). (However, it is necessary to be aware of the problem with sources and observers at the same depth, described later.) We have done numerous tests verifying that COMPSYN gives the proper result for a vertical Haskell model in a wholespace. We have tested the ability of COMPSYN to handle dipping faults by creating a test having a heterogeneous slip model observed at a nearby observer, and by rotating the entire problem to verify that the same result (after rotation) is obtained for dips of 5 and 80 degrees.

Point and Finite Faults in Layered Media

Comparisons with Bouchon
We have done several comparisons with results published by Bouchon (one of which is a test problem accompanying this software release). These comparisons have not been extremely precise because Bouchon's results appear as rather small plots of displacement. However, we have achieved visually pleasing comparisons with his vertical Haskell model in a LOH model shown in Figures 3, 4, and 5 of Bouchon (1979a), and with his model of a Haskell source on a dipping thrust fault in a halfspace (Figure 5 of Bouchon, 1979b). We have also achieved good visual agreement with his expanding circular uniform rupture in a LOH structure in Bouchon (1982).

PEER Validations
We have achieved excellent agreement with three test problems, LOH.1, LOH.2, and LOH.4, published by Day (2001). LOH.4 is described in Day (2001), but the results were emailed to us by Day. In all cases we compared our COMPSYN results with electronic figures. The velocity model in all three cases was a layer over a halfspace. The source in LOH.1 was a point dislocation, in LOH.2 the source was a circularly expanding rupture on a vertically dipping fault having uniform slip, and in LOH.4 the same type of source was placed on a fault with a 40 degree dip. None of these faults ruptured the surface, so they did not test one of the main vulnerabilities of the COMPSYN package, the problem of sources and receivers at the same depth. The results of the comparisons are shown below. In all COMPSYN calculations, fmax = fcut = 10 Hz was used in the .OLD file, and the minimum numbers of points in the horizontal and downdip directions were set to 100 in both the .TFD and .SLD files. All results were convolved with a symmetric Gaussian with a time constant of 0.06 s.
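The Gaussian smoothing step just mentioned can be reproduced with a short sketch. This assumes "time constant" means the τ in exp(-(t/τ)²); the function names, the 4τ truncation, and the unit-area normalization (which preserves static displacement levels) are our own choices, not part of COMPSYN.

```python
import math

def gaussian_kernel(tau, dt):
    """Symmetric Gaussian exp(-(t/tau)^2) sampled at interval dt,
    truncated at 4*tau and normalized to unit area."""
    half = int(math.ceil(4.0 * tau / dt))
    g = [math.exp(-((i * dt) / tau) ** 2) for i in range(-half, half + 1)]
    s = sum(g)
    return [v / s for v in g]

def smooth(trace, tau, dt):
    """Convolve a sampled seismogram with the Gaussian; same length out."""
    g = gaussian_kernel(tau, dt)
    half = len(g) // 2
    out = []
    for n in range(len(trace)):
        acc = 0.0
        for k, gk in enumerate(g):
            m = n + k - half
            if 0 <= m < len(trace):
                acc += gk * trace[m]
        out.append(acc)
    return out
```

Because the kernel has unit area, smoothing a constant (static) level leaves it unchanged away from the trace edges, e.g. smooth([1.0] * 200, 0.06, 0.005) stays at 1.0 mid-trace.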


[Figure: "COMPSYN, analytic, and UCB/LLNL results for LOH-1" – radial, transverse, and vertical velocity (m/s) versus time (0–10 s).]

FIGURE VALIDATION-1. PEER LOH.1 test problem. Comparison of ground motions in a LOH model from a point source. See Day (2001) for details. COMPSYN result is the red line. Other lines are results from Day (2001). Dotted line is the analytical result (probably a frequency-wavenumber calculation), and the dashed line is the result of a 3D finite difference calculation by Shawn Larsen and Douglas Dreger at LLNL and UCB, respectively.


[Figure: "COMPSYN and UCB/LLNL results for LOH-2" – radial, transverse, and vertical velocity (m/s) versus time (0–10 s).]

FIGURE VALIDATION-2. PEER LOH.2 test problem. Comparison of ground motions in a LOH model from a vertically dipping fault with uniform slip and a circularly expanding rupture. See Day (2001) for details. COMPSYN result is the red line. Black line, taken from Day (2001), is the result of a 3D finite difference calculation by Shawn Larsen and Douglas Dreger at LLNL and UCB, respectively.


[Figure: "COMPSYN and UCB/LLNL for LOH-4" – radial, transverse, and vertical velocity (m/s) versus time (0–10 s).]

FIGURE VALIDATION-3. PEER LOH.4 test problem. Comparison of ground motions in a LOH model from a thrust fault dipping at 40 degrees, with uniform slip and a circularly expanding rupture. See Day (2001) for details. COMPSYN result is the red line. Black line, taken from Day (2001), is the result of a 3D finite difference calculation by Shawn Larsen and Douglas Dreger at LLNL and UCB, respectively.

Other Tests
The previous tests exercised COMPSYN in a uniform or layered medium. We also compared COMPSYN to the results of Spudich and Ascher (1983) for a point source in a medium with vertical gradients. The comparison published in that paper shows the results of an early version of COMPSYN; we have subsequently repeated the test with very good agreement.

13

SITUATIONS IN WHICH COMPSYN CAN FAIL

Observers very close to a surface fault rupture
The COMPSYN package has two currently known significant limitations, both of which make it difficult to calculate static displacements very close to a fault that ruptures the Earth's surface. To see these problems, you can create a test in which two observers are placed on the free surface on opposite sides of a fault that breaks the surface. If the fault length is L, place the receivers at distances of ±L/100 from the fault, and compare the input fault slip to the calculated difference in static offset of the two observers. If you follow the procedure in How to Set Up a Problem, your calculated displacement difference will be much smaller than the input fault dislocation. We do not have much experience with these problems, and can only offer vague suggestions for increased accuracy. We have found that greatly increasing NK2 and FMAX (leaving FCUT unchanged) in the .OLD file can sometimes solve this problem. Unfortunately, these problems make the calculation of near-fault motions somewhat problematic, but other investigators using wavenumber integration methods have had to face similar problems. For example, studying the 1999 Chi-Chi, Taiwan, earthquake, Ma et al. (2001) synthesized near-fault motions by summing motions from a large fault with coarse sampling and a small, near-station fault with fine sampling. Zeng and Chen (2001) used a separate code to calculate static offsets, which were added to their dynamic motions. We can only give you a few suggestions, and you will have to depend on your own ingenuity to get a good result for your particular problem.

The problem of equal fault and observer depths
Hisada (1996) gives a good description of the cause of problems that occur in wavenumber integration codes when sources and observers are at the same depth. In this situation, the seismic response of the medium (terms like U_φ^mk(z,t) in equation (1) below) decays very slowly with wavenumber k, so the integral over wavenumber in (1) performed in TFAULT might not converge. This problem sometimes occurs in OLSON for receiver depths that are less than 6 grid points (in the mesh used by OLSON) from the point-force depth. We were tempted to disable the ability of OLSON to allow buried observers (point forces) so that an unwary user would be less likely to calculate bad results, but we have left this option in the code, hoping this warning will suffice. One manifestation of this problem is a pulse in velocity (and a resulting step in displacement seismograms) that appears almost exactly at time t = 0, well before the P wave arrival. This problem might be alleviated by rerunning OLSON with a higher RMAX and/or a higher NK2. If you raise RMAX, be sure to raise NK2 according to the instructions in How to Set Up a Problem. There may be other manifestations of this problem that we have not recognized. If your shallowest OLSON receiver is within the top 6 grid points, OLSON will not automatically stop (because of the slow convergence of the seismic response as a function of wavenumber). Rather, OLSON will calculate all wavenumbers up to the value of NK2 that you input in the .OLD file, and it will give a warning to watch for signs of a nonconvergent wavenumber integral.

Inadequate sampling for observers very close to a fault
Please look at Figure 22 in Spudich and Archuleta (1987). Because Green's function terms go like 1/r, 1/r^2, and 1/r^4, the traction Green's functions will peak very sharply at points on the fault that are very close to an observer. There is nothing in TFAULT and SLIP to check that you have sampled the fault sufficiently densely to give a good quadrature result for observers close to the fault. In both TFAULT and SLIP you should select the minimum number of samples in the U and V directions so that the distance between samples is less than the distance to the observer. It would be easy for the interested user to modify these codes to provide denser sampling near the observer.

NOTES ON THE FORTRAN CODE

The FORTRAN codes in this package (except OLSON) were originally developed to run on a Digital Equipment Corporation PDP-11/70 computer running the RSX operating system in 1980-1982. Because that computer had only 128Kb (not Mb) of memory, the codes were written to use very little RAM and to use copious disk i/o. Consequently, the codes are less efficient than they could be because of the large amount of disk i/o they perform. However, they are unlimited in the size of the problem they can handle (except OLSON). The interested user might be able to modify the codes to run much faster by writing fewer, bigger disk records in TFAULT and modifying SLIP accordingly.

In the mid-1980s the codes were moved to Digital Equipment Corporation 'VAX'-series computers running the VMS operating system. To adapt these codes for the IASPEI Handbook, they were moved to a Windows NT 4.0 system and extensively modified using the Digital Equipment Corporation Visual FORTRAN Compiler v6.0. The current version of the code is written to run in line-oriented operating systems, like UNIX, VMS, or DOS.

Most of the code in this package is standard ANSI FORTRAN 77 and should compile without difficulty. There is some non-standard FORTRAN, but most compilers recognize these nonstandard features. We have verified that this code operates properly on a VMS and a Windows system (in a DOS window). We have not checked these codes on a UNIX system. However, other users have ported earlier versions of this code to UNIX and LINUX systems, requiring either no or minor modifications of the code. We note that the version of DOS available under Windows NT 4.0 is far superior to the DOS available under Windows 98 or Windows ME. In addition, Windows NT is more stable than Windows 98, so DOS users are encouraged to use Windows NT.

System Requirements
To run the example problems, you will need the following:

• a FORTRAN compiler, FORTRAN 77 or higher
• a line-oriented operating system, like DOS, VMS, UNIX, or LINUX
• at least 95 Mb of free RAM
• about 250 Mb of free disk space
• a method to view black-and-white postscript files, e.g. a printer and/or Ghostview or other postscript viewer software

Currently the applications OLSON and XOLSON require about 43 Mb and 95 Mb, respectively, of RAM, although both applications can be made larger or smaller by changing dimension parameters. These parameters are explained for each application later in this document. All other applications in the COMPSYN package use 2 Mb or less.

POSSIBLE PECULIARITIES OF THE FORTRAN

Upper / Lower Case Portability Problem
Neither DOS nor VMS is case-sensitive, but UNIX and LINUX are. The primary problems with case will occur in the names of input and output files. These codes might under some circumstances create output files with names that contain mixed case (e.g. "test1.PS"). Also, these codes might try to read files with names in different cases than the examples in this documentation.

BLOCK DATA Problem
These codes have been compiled with many compilers, and only the Lahey F77L3 and LF95 compilers (so far) have objected to our use of DATA statements to initialize variables in COMMON blocks. Code like the following, which follows our style,

      COMMON /LUNS/ INDAT, KTR
      DATA INDAT/1/, KTR/5/

must be changed to:

      BLOCK DATA
      COMMON /LUNS/ INDAT, KTR
      DATA INDAT/1/, KTR/5/
      END

in order to generate no warnings using the Lahey compilers. However, despite the compilation warnings, the LF95 compiler will link and execute the programs properly (see, however, the section on accuracy, below).

Expected Numerical Accuracy of the Calculations
There are at least two types of inaccuracies that can occur in these codes. Consider, for example, a hypothetical subroutine which uses a numerical integration technique, for example a Simpson's rule integration, to integrate a continuous function. If the user of the subroutine does not allow the subroutine to sample the continuous

function sufficiently densely, the subroutine will produce an inaccurate answer. However, if the subroutine is coded so that it does not lose precision, running the subroutine using two different compilers should produce exactly the same (inaccurate) answers (here, "exactly" means agreement to machine precision). However, if the subroutine is sloppily coded with no regard for maintaining precision, then the same subroutine run on two different compilers might produce different results, because the compilers might assume different values for the least significant portion of low-precision numbers.

The following comparisons using different compilers give some indication of the level of precision problems we have found in the COMPSYN package. We have successfully compiled, linked, and run these codes with the

• Microsoft Powerstation Fortran v4.0 run on Windows NT 4.0 (MS)
• Lahey/Fujitsu Fortran 95 Release 5.60a run on Windows 98 (LF)
• Digital Visual Fortran Compiler Version 6.0 (Update A) on Windows NT 4.0 (DF)

where we use the abbreviations MS, LF, and DF in the tables below. Using these three different compilers with various compilation options, we have compared the computed results for the example problems included in the COMPSYN package. A summary of the differences between results computed with various compilers and options is given in Table NOTES-1 below. In this table each row corresponds to an example problem. Each column corresponds to the difference in results calculated using two different compilers. If ts1 is the time series calculated by compiler 1, ts2 is that from compiler 2, and rms(ts1) is the rms value of time series ts1, then the value shown in the table is log10( rms(ts1-ts2)/rms(ts1) ). For all example problems, the .XV output time series (from SEESLO) was used in the calculations. The specific compiler options, indicated by the abbreviations MS, DF, DFno, ..., LFap, etc., are given in Table NOTES-2.
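The comparison measure just defined can be sketched in a few lines. This is an illustrative helper of ours, not part of COMPSYN:

```python
import math

def log10_normalized_rms_diff(ts1, ts2):
    """Compute log10( rms(ts1 - ts2) / rms(ts1) ), the measure tabulated
    in Table NOTES-1 for comparing results from two compilers."""
    n = len(ts1)
    rms_diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(ts1, ts2)) / n)
    rms_ref = math.sqrt(sum(a ** 2 for a in ts1) / n)
    return math.log10(rms_diff / rms_ref)

ts1 = [1.0, -2.0, 3.0, -4.0]
ts2 = [a * (1.0 + 1.0e-3) for a in ts1]  # disagreement of 1 part in 1000
print(round(log10_normalized_rms_diff(ts1, ts2), 1))  # -> -3.0
```

A value near -7 indicates agreement at single-precision machine level; a value near -3 corresponds to the "1 part in 1000" disagreement discussed below.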
Basically, we were comparing optimized and non-optimized code, and also codes which did not store variables in registers for added precision.

Table NOTES-1. Difference (1) between time series computed using two different compilers ((Compiler 1 results) minus (Compiler 2 results))

Test                MS - LF   MS - DF3   LF - DF0   LF - DFno   LF - LFap   DF0 - DFfc
Compsyn example 1   exact     -3.0       -3.0       -5.5        -3.0        -5.4
Compsyn example 2   exact     -7.1       -5.0       -7.1        -4.0

(1) LOG10[ RMS(time series 1 - time series 2)/RMS(time series 1) ]

Single precision accuracy should lead to numbers around -7 for the log10 of the normalized difference, but you can see that, for example, the results using LF and DF can disagree by 1 part in 1000 for some of the COMPSYN examples. This probably indicates some poor programming practices (somewhere) that lead to loss of precision in some of the codes. Sorry!! Remember – you use these codes at your own risk!!

Table NOTES-2. Definition of Abbreviations in Table NOTES-1

Abbreviation   Optimization (switch)   Consistent arithmetic (switch)   Underflow handling
LF             yes (-o1)               no (-nap)                        default (-ntrap)
LFap           yes (-o1)               yes (-ap)                        default (-ntrap)
DF0            yes (/opt)              no (/nofltcon)                   /fpe:0 (1)
DF3            yes (/opt)              no (/nofltcon)                   /fpe:3 (2)
DFfc           yes (/opt)              yes (/fltcon)                    /fpe:0 (1)
DFno           no (/noopt)             no (/nofltcon)                   /fpe:0 (1)

(1) Denormalized values are set to zero. (2) Denormalized values are not altered.

Vestigial Comment Lines
Many of the codes have copious comment lines in them. These comments have accreted over many years, and we cannot guarantee that they correctly describe the current function of the codes. If the comment lines disagree with this document, this document is probably correct.

Disabled Features of the Code
We disabled several capabilities of the codes while preparing this version of the software package. These capabilities were disabled because they were either too confusing or insufficiently well tested to allow a user to use them without great frustration and error. Often we disabled a feature simply by "commenting out" (i.e. adding a 'C' in column 1) the lines of code that ask the user whether she wants the code to do something. If you use this package heavily, you might want to read all the comment lines just to see what moribund features lurk inside the code. However, you should test any feature you revive.

Comment Lines – Nonportable Debug
The user might find lines of code that begin with the character D. These are "debug" lines. The DEC-supplied FORTRAN compilers allow the user to specify certain lines of code as "debug" lines by placing a D in column 1. The compiler has an option which allows these lines either to be compiled along with the rest of the code or ignored, as the user wishes. Since this is a nonstandard feature, we have attempted to remove all of these lines. We may have missed a few, though. If the user finds any, the user may safely comment them out, or even delete them from the code entirely.

Comment Lines – Portable Debug


Another variant of "debug" comment lines are lines that have the character strings "C%nn" or "C!nn" in them, where nn is 01, 02, ..., 99. They are manipulated by another program we wrote (not supplied with this package). Lines corresponding to a particular value of nn do a particular task, usually used for debugging. For example:

               1         2         3         4         5         6         7         8
      12345678901234567890123456789012345678901234567890123456789012345678901234567890
      SUBROUTINE JUNK
C
C!01__OFF: write the value of I
C!02__ON: write the value of J
C
      DO 100 I = 1,100
C%01  WRITE (6,6000) I
C%01  6000 FORMAT (' THE VALUE OF I IS ', I20)
      J = 2*I
      WRITE (6,6001) J                                                      C%02
 6001 FORMAT (' THE VALUE OF J IS ', I20)                                   C%02
  100 CONTINUE
      RETURN
      END

Lines beginning C!nn tell the status of lines containing C%nn. In the above case, C!01__OFF says all C%01 lines are inactive, and C!02__ON says that all C%02 lines are active in subroutine JUNK. When lines are inactive, C%nn appears in columns 1-4, and when lines are active, C%nn appears in columns 77-80. Users of the COMPSYN package should note that our intention has been that C%nn lines are used for debug and testing purposes only, and that all C%nn lines should be inactive in the COMPSYN package. No C%nn lines should be required for the proper functioning of the codes. If you find active C%nn lines in the code, please notify us.

Compilation Warnings and Options That You Should Ignore
When you compile these routines,

• do not use array bounds checking
• do not cause the calculation to stop if an underflow is detected
• ignore warnings of unused variables
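The program that toggles C%nn lines is not supplied with the package. A hypothetical re-implementation of the convention described above (inactive: tag in columns 1-4; active: tag in columns 77-80) might look like the following sketch; all names here are ours:

```python
def toggle_debug_lines(lines, nn, activate):
    """Activate or deactivate C%nn debug lines in fixed-form FORTRAN
    source.  Inactive lines start with 'C%nn'; active lines carry the
    tag at the end of the line (columns 77-80)."""
    tag = "C%" + nn
    out = []
    for line in lines:
        stripped = line.rstrip()
        if activate and line.startswith(tag):
            # move the tag from the front to columns 77-80
            code = line[len(tag):].lstrip()
            out.append(code.ljust(76) + tag)
        elif not activate and stripped.endswith(tag) and not line.startswith(tag):
            # move the tag from the end back to columns 1-4
            code = stripped[:-len(tag)].rstrip()
            out.append(tag + " " + code)
        else:
            out.append(line)
    return out
```

Note that a real tool would have to preserve fixed-form column positions of the statement itself (this sketch left-justifies the reactivated code), which is one reason the original authors kept their utility separate from the package.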

Both the Microsoft Powerstation FORTRAN v4.0 and the Digital Visual Fortran v6.0 compilers properly compile statement functions, but their array-bound checking option and their unused-variable detection mechanism get confused by statement functions. For example, in the following sample FORTRAN code, GUK is a statement function:

      PROGRAM JUNK
C
      REAL GUK, A, B
C
C     GUK is a statement function, not an array.
C     The following statement defines it
      GUK(A,B) = A + B
C
      DO 100 I = 1,100
      C = 2*I
      D = 3*I
C     if you compile JUNK with array bounds checking, you might get an error
C     message "array bound exceeded" when you execute the next statement
      E = GUK(C,D)
C     E now has the value 2*I+3*I
  100 CONTINUE
      END

If you compile your FORTRAN with array-bound checking activated as a compiler option, your executable file might fail during execution with an incorrect "array bounds exceeded" error. The problem is that some compilers fail to realize that statement functions are not arrays. The only solution to this problem is to refrain from using the array-bounds checking option when you compile routines having statement functions. Also, some compilers will warn you that GUK, A, and B are unused variables. However, don't delete these variables!!! You will be sorry!! There are in fact a few genuinely unused variables lurking in the codes, but we decided not to delete them because they were generally arguments of subroutines, and deleting them would require numerous changes.

Under some circumstances, the finite element algorithm in OLSON will experience underflows when the wave functions become very small. We believe that this situation does not significantly affect the accuracy of the calculated seismograms, and typically we run OLSON without checking for underflows, but the user should verify the accuracy of the resulting seismograms personally. We note that the largest differences in Table NOTES-1 above occurred somewhere in the COMPSYN package, which might be related to underflows in OLSON.

General Style

File Naming Conventions – Stems and Extensions
We have tried to adopt a consistent file-naming convention. Most file names follow this format:

      stem.xx{d, o, p, ...}

where the "stem" is the string before the period character, and the extension is the character string after the period. In the above example, stem is a character string selected by the user, xx is a two-character code indicating the name of the application that created the file, and the last character (d, o, p, ...) indicates the use of the file. For example, application SLIP prompts the user for a stem. If the user gives the stem banana, application SLIP creates an output file called banana.slo. Typically, the last character has the following meaning:

Table 1. Usual meaning of the final character in a file name

final character   meaning
D                 Data file – user-prepared text input data
O                 Output file – text or binary file of numeric output results, typically for input into some other application
P                 Print file – a text file containing application output the user can use to monitor the proper functioning of the application and to inspect for errors in the input data or other unusual conditions
H                 Header file (generated by OLSON only). Binary file containing miscellaneous parameters needed by XOLSON.
M                 Spectrum of slip function evaluated on the fault (see Spudich and Archuleta, 1987, Figure 13a). Not typically useful.

Typically, the application code xx has the following meaning:

Table 2. Usual meaning of the first two characters in a file name suffix

characters   application that creates file
OL           OLSON
XO           XOLSON
TF           TFAULT
SL           SLIP
SE           SEESLO
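Tables 1 and 2 together let you decode any COMPSYN file name. As a sketch (our own helper, not part of the package), the convention can be expressed as:

```python
# Lookup tables transcribed from Tables 1 and 2 of the documentation
APP_BY_PREFIX = {"OL": "OLSON", "XO": "XOLSON", "TF": "TFAULT",
                 "SL": "SLIP", "SE": "SEESLO"}
USE_BY_FINAL = {"D": "data", "O": "output", "P": "print",
                "H": "header", "M": "slip spectrum"}

def classify(filename):
    """Return (creating application, file use) for a stem.xxc file name."""
    stem, ext = filename.rsplit(".", 1)
    app = APP_BY_PREFIX.get(ext[:2].upper(), "unknown")
    use = USE_BY_FINAL.get(ext[2:3].upper(), "unknown")
    return app, use

print(classify("banana.slo"))  # -> ('SLIP', 'output')
```

So banana.slo is an output file written by SLIP, banana.old would be a user-prepared data file for OLSON, and so on.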

Typical Style Of User-Prepared Data Files
Most of the user-prepared data files used by these applications are templates which show the user the proper locations for the input numbers. For example, a few records from a data file might look like this:

nk1: 1
nk2: 1500
ksk: 1
velocity model......end with a negative number
depth,alpha,beta,rho    last depth defines zmax, bottom of grid
0., 1.7, 0.4, 1.8,
0.4, 1.8, 0.7, 1.8,
5., 5.65, 3.2, 2.5,
11., 5.85, 3.3, 2.8,
11., 6.6, 3.7, 2.8,
12., 7.2, 4.15, 2.8,
155., 7.2, 4.15, 2.8,
-1.

The above 13 lines might be an exact copy of a data file. Each software application is written to find the relevant numbers in the file and to skip the other information. In a line of the form 'some string: some numbers', the colon (or equal sign) is a search character. The application finds the colon and then reads the numbers following the colon, ignoring the string before the colon. In fact, the user can alter the length of the string before the colon without affecting the proper reading of the input numbers. In the data files, commas are used as field terminators, and empty fields are read as zero. NOTE – one of the most common user errors is to omit a comma!!!
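The reading convention just described (search for the colon or equal sign, then read comma-terminated fields, with empty fields read as zero) can be mimicked loosely in a few lines. This sketch is ours, not the actual FORTRAN reading code:

```python
def read_numbers(record):
    """Loose mimic of the template-file convention: skip everything up to
    the first ':' or '=', then read comma-separated numeric fields; an
    empty field reads as zero."""
    for sep in (":", "="):
        if sep in record:
            record = record.split(sep, 1)[1]
            break
    return [float(f) if f.strip() else 0.0 for f in record.split(",")]

print(read_numbers("nk2: 1500"))                  # -> [1500.0]
print(read_numbers("depth: 0., 1.7, 0.4, 1.8"))   # -> [0.0, 1.7, 0.4, 1.8]
print(read_numbers("3.3, , 1"))                   # empty field reads as zero
```

The last example shows why a forgotten comma is such a common error: dropping a comma silently merges two fields instead of producing a zero.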

OLSON DETAILED INSTRUCTIONS

Mathematical Development
The application OLSON is a version of the Olson et al. (1984) code DWFE, with minor modifications by us. Following Olson et al. (1984), a general expression for displacement u in a cylindrically symmetric medium is given by

   u(r,φ,z,t) = Σ_m ∫₀^∞ (k/2π) [ U_z^mk(z,t) R_k^m(r,φ) + U_r^mk(z,t) S_k^m(r,φ)
                                  + U_φ^mk(z,t) T_k^m(r,φ) ] dk                      (1)

and its vertical derivative u′ ≡ du/dz is

   u′(r,φ,z,t) = Σ_m ∫₀^∞ (k/2π) [ U′_z^mk(z,t) R_k^m(r,φ) + U′_r^mk(z,t) S_k^m(r,φ)
                                   + U′_φ^mk(z,t) T_k^m(r,φ) ] dk                    (2)

where R, S, and T are vector surface harmonics defined in Olson et al. (1984) equation 2.5, (r,φ,z) are cylindrical coordinates, k is horizontal wavenumber, and m is angular order. OLSON solves for the expansion coefficients U_z^mk(z,t), U′_z^mk(z,t), U_r^mk(z,t), ..., U′_φ^mk(z,t). These expansion coefficients are obtained by using a time-stepping finite element method to solve the coupled ordinary differential equations (ODEs), Olson et al.'s equations B11a. This set of ODEs is solved once for each choice of wavenumber k and angular order m. Because we are ultimately concerned with double couples and not more complicated sources, the only m values we need to consider are m = 0 and m = 1. For a particular choice of m, we need to solve the coupled ODEs for discrete values k_n of wavenumber, given by k_n such that J_m(k_n R) = 0, where R is some maximum epicentral distance, discussed later. Hence, for m = 0 (called "vertical motions" in OLSON), the program solves the coupled ODEs for each k_n given by J_0(k_n R) = 0.

Once a particular k_n and m are chosen, the OLSON program solves the ODEs using a finite element method. To do this, the program chooses a grid of points in depth z. The grid spacing is variable, and the spacing at depth z is chosen to be 1/6 of the minimum shear wavelength at depth z (more on this later). The total depth to which the grid extends is also a function of k_n, with the maximum depth diminishing for increasing k_n (more on this later). Once the grid is chosen, the program steps through time determining the expansion coefficients U_z^mk(z,t), U′_z^mk(z,t), U′_r^mk(z,t), ..., U′_φ^mk(z,t). All this proceeds according to the Olson et al. (1984) paper.

We modified DWFE to take the temporal Fourier transform of the expansion coefficients U_z^mk(z,t), U′_z^mk(z,t), U′_r^mk(z,t), ..., U′_φ^mk(z,t) after they are obtained in the time domain, and these Fourier transforms are the actual output of OLSON in the .OLO files.
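For illustration (not COMPSYN code), the discrete wavenumbers k_n for the m = 0 case can be approximated using McMahon's asymptotic expansion for the zeros of J0; OLSON computes these zeros internally, and a production code would refine the approximation by root polishing:

```python
import math

def j0_zero(n):
    """Approximate n-th positive zero of the Bessel function J0 using the
    leading terms of McMahon's asymptotic expansion.  Accurate to a few
    parts in a thousand even for n = 1, and much better for larger n."""
    a = (n - 0.25) * math.pi
    return a + 1.0 / (8.0 * a)

def wavenumbers(rmax_km, nk1, nk2):
    """Sketch of the m = 0 wavenumber grid: k_n such that J0(k_n * R) = 0,
    with R = rmax.  The spacing approaches pi/rmax asymptotically."""
    return [j0_zero(n) / rmax_km for n in range(nk1, nk2 + 1)]

ks = wavenumbers(50.0, 1, 3)
# the first three zeros of J0 are near 2.405, 5.520, and 8.654
```

Dividing consecutive zero spacings by rmax shows why the wavenumber spacing dk tends to π/rmax, which is the dk used in the nk2 rule of thumb later in this document.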


A simplified flowchart of the OLSON program is:

SIMPLIFIED FLOWCHART FOR APPLICATION OLSON

read data (.OLD file)
loop on m = 0, 1
    loop on wavenumbers nk = nk1 to nk2
        find k_n such that J_m(k_n R) = 0
        remove deep part of grid that is not needed for this wavenumber
        use finite element method to obtain expansion coefficients U for this k_n and m
        Fourier transform the U's
        write them to the .OLO file
    end loop on wavenumbers
end loop on m

Figure OLSON-1. Flowchart for OLSON.


How to Prepare an OLSON Run
It is necessary to execute a few preliminary runs of OLSON in order to adjust all the input parameters properly. We first present a description of the data parameters; after the description we present the section "Trial Runs the User Should Perform" and another section, "How to Set Up a Problem." The user should read all these sections before attempting to run OLSON.

OLSON User-Prepared Data File *.OLD
In this section we will explain the variables in an OLSON data file and the method for choosing their values. Here is the input file for the circular model:

TEST: CIRCULAR MODEL
fmax: 10.0
fcut: 10.0
tmax: 3.5
dt: 0.008
tfade: 0.5
rmax: 50.0
nk1: 1
nk2: 603
ksk: 1
velocity model......end with a negative number
depth, alpha, beta, rho:
0.0, 4.0, 2.3, 2.6,
1.5, 5.5, 3.2, 2.8,
4.5, 6.3, 3.65, 2.9,
30, 6.35, 3.67, 2.95,
-1
min and max desired receiver depths, and integer grid increment
3.3, 4.57, 1
desired depth of point force source: 0.
components to be calculated (V=vert, H=horiz, B=both): B

Detailed explanation:

TEST: CIRCULAR MODEL
rec 1: title – a text string descriptive of the run, which gets printed in a variety of places and is passed to subsequent programs.

fmax: 10.0
rec 2: fmax – the "resolving frequency" of the finite element grid (Olson's omega max divided by 2π). It is the maximum frequency at which you want the

calculation to be accurate. Note that it is NOT the Nyquist frequency. fmax is the single most important input parameter because it controls the accuracy and the computation time. Computational effort goes as fmax CUBED. fmax controls the grid spacing in depth, and indirectly controls the time step. At depth z the wavelength of a shear wave is (shear velocity)/frequency. The grid spacing is chosen so that there are 6 grid points per shear wavelength at frequency fmax, i.e. grid spacing goes like vs(z)/(6*fmax). If the results of a calculation look messy or oscillatory, one possible remedy is to increase fmax. (Although Olson says that the errors in the calculation are acceptably small for frequencies up to fmax, we have found that what he calls "acceptably small" is sometimes not small enough for our extended source calculations.) See the section "Trial Runs the User Should Perform" for more information on the meaning of fmax.

fcut: 10.0
rec 3: fcut – cut-off frequency. The results of the calculation are saved from frequency = 0 to frequency = fcut. OLSON was initially written with fcut ≥ fmax in mind, but subsequent use seems to indicate that the results are cleaner if fcut ≤ fmax, i.e. you only save a portion of the response that is allegedly calculated accurately. OLSON will warn you if you set fcut < fmax, but this option is perfectly acceptable. See the section "Trial Runs the User Should Perform" for more information on the meaning of fcut.

tmax: 3.5
rec 4: tmax – total length (seconds) of the calculated time series. Because all the Greens functions that get calculated are ultimately added together with source delays included, they must all be defined over a common time span. Hence, this time must equal or exceed the sum of the extended source duration and the duration of the Greens functions. This will be clarified in an example below.
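The grid-spacing rule quoted for fmax (6 grid points per shear wavelength) can be illustrated with a small sketch of ours (not part of the package):

```python
def grid_spacing_km(vs_km_s, fmax_hz):
    """Grid spacing rule quoted in the text: 6 grid points per shear
    wavelength at frequency fmax, i.e. spacing ~ vs(z) / (6 * fmax)."""
    return vs_km_s / (6.0 * fmax_hz)

# circular-model example: vs = 2.3 km/s at the surface, fmax = 10 Hz
print(round(grid_spacing_km(2.3, 10.0), 4))  # -> 0.0383 km
```

Because the spacing shrinks linearly with fmax while the time step must shrink with the spacing, this is one way to see why the computational effort grows so quickly (as fmax cubed) with the resolving frequency.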
Note that computational effort and the amount of storage required are proportional to tmax**2, so it should be kept as small as possible. See also the notes on dt regarding setting tmax. If you find that the extended source seismograms calculated by SLIP/SEESLO have noncausal noise before the hypocentral P wave, your tmax might be too small.

dt: 0.008
rec 5: dt – length of time step for finite element calculation. Computational effort is proportional to 1/dt. (Note that fmax is not the Nyquist frequency and thus dt is not 1/(2*fmax). See the section "Trial Runs the User Should Perform" for more information on the relationship between dt, fmax, and fcut.) Time step dt must be less than some number derived from the Courant stability criterion, which in practice is about 0.9 * (minimum P wave travel time across any element of the finite element grid). A good

initial guess is dt = 0.15 * vs / (fmax * vp), where vp and vs are the P and S velocities at the shallowest part of the model. The program checks your choice of dt and warns you if it is inappropriate. If dt is too large, instability in the finite element solution grows rapidly as the time stepping progresses, and OLSON halts execution on a floating point overflow or other nonsubtle error.

A second consideration in choosing dt is related to the ultimate amount of output storage and to the use of an FFT in the program, which requires that the time series contain a number of points which is an integer power of 2. You can reduce the computation time of programs XOLSON and SLIP by inputting dt here in a way that minimizes the number of frequencies you must save in the output files. This is done in the following way. As mentioned earlier, the program calculates some U(zi,tk), which is an expansion coefficient for all grid points zi and times tk, where tk = (k-1)*dt, 0 ≤ tk ≤ tmax. These U are then Fourier transformed into the frequency domain, so that we have U(zi,fj), where fj is the jth frequency and df = fj - fj-1 = 1/(total length of the time series calculated, which equals or exceeds tmax). Hence the number of frequencies stored is nf = (fcut/df)+1, and it is apparent that the required storage space is related to tmax through df. (For more information on df and storage, see the notes on OLSON dimensions below.) For optimum storage efficiency, you should choose dt so that (tmax/dt)+1 is slightly less than some integer power of 2. The reason for this is as follows: let's say you choose tmax = 100 s and dt = 1 s. The program will calculate 101 time steps, but since 101 is not an integer power of 2, the program will pad the calculated time series out to 128 points, so that the time series that is Fourier transformed is actually 128 s long, and the df is correspondingly smaller.
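The padding arithmetic just described can be sketched as follows (an illustrative helper of ours, not COMPSYN code):

```python
import math

def padded_length(tmax_s, dt_s):
    """Number of samples the FFT will actually use: (tmax/dt)+1 points,
    padded up to the next integer power of 2, as described in the text."""
    npts = int(round(tmax_s / dt_s)) + 1
    return 2 ** math.ceil(math.log2(npts))

def df_hz(tmax_s, dt_s):
    """Frequency spacing after padding: 1 / (padded duration)."""
    return 1.0 / (padded_length(tmax_s, dt_s) * dt_s)

# the example from the text: tmax = 100 s, dt = 1 s gives 101 points,
# which is padded out to 128
print(padded_length(100.0, 1.0))  # -> 128
```

Choosing dt so that (tmax/dt)+1 lands just under a power of 2 keeps the zero padding, and hence the number of stored frequencies nf = (fcut/df)+1, as small as possible.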
Hence, to avoid excessive padding with zeros and diminution of df, choose dt as indicated above. In this example, a better choice of dt would have been about 100/126 s, so that (tmax/dt)+1 = 127, slightly less than 128.

tfade: 0.5
rec 6: tfade – the last tfade seconds of the calculated time series are linearly faded to zero to avoid a truncation phase which wraps around to the beginning of the seismograms when you do an extended source calculation. tfades that are about 10% of the total duration of your time series seem to work reasonably well. If you find that the extended source seismograms calculated by SLIP/SEESLO have noncausal noise before the hypocentral P wave, your tfade might be too small.

rmax: 50.0
rec 7: rmax – This is the R which appears in the equations above. Computing effort and storage are linear in rmax. rmax controls the particular wavenumbers chosen by the program, with the spacing of the wavenumbers dk approaching π/rmax asymptotically. The effect of rmax on the seismograms is to introduce a reflecting boundary at an epicentral distance of rmax. Hence rmax should be chosen large enough so that the reflections from the

reflecting boundary arrive after time tmax at the observer having the largest epicentral distance. Let rg be the largest epicentral distance for which you want Green's functions. If you don't want P waves reflected from the artificial boundary at distance rmax to arrive at an observer at radius rg before time tmax, a useful expression for rmax is rmax = (vpg * tmax + rg)/2, where vpg is the greatest P velocity in your earth model. This, of course, is approximate and is on the conservative (expensive) side. In many cases it may be that the reflected P waves are very small and could be allowed to interfere with the outgoing waves, although the reflected S and surface waves may still be too big. Hence, you could replace vpg with vsg (the maximum S velocity) in the above formula. In addition, you may not care if the reflected waves interfere with the portion of the time series being linearly faded, in which case rmax could be even smaller. An accurate calculation of the true reflection time would also be smaller than that used above because vpg is an overestimate of the speed of the reflected wave.

nk1: 1
nk2: 603
ksk: 1
rec 8: nk1
rec 9: nk2
rec 10: kskip – These control the indices of the wavenumbers over which OLSON loops. The program chooses wavenumbers k_n such that J_m(k_n R) = 0, where R = rmax, m = 0 or 1, and the actual loop index that appears in the program is n = nk1, nk2, kskip (see Figure OLSON-1 above). Thus, nk1 and nk2 are the lower and upper desired wavenumber indices, and kskip is the increment in n, following the normal rules of FORTRAN. For a test run you may want to set nk1 = nk2 (as explained later), but for the real run you will want to set nk1 = 1, kskip = 1, and nk2 according to the following rule of thumb. The maximum wavenumber used, corresponding to nk2, is given to a good approximation by kmax = (nk2-1)*dk, where dk = π/rmax.
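The rmax and nk2 rules of thumb quoted in this section can be sketched numerically. These are illustrative helpers of ours, with our own names and kilometre/second units:

```python
def rmax_conservative(vpg_km_s, tmax_s, rg_km):
    """Conservative rmax from the text: reflections from the artificial
    boundary at rmax reach the farthest observer (rg) only after tmax,
    i.e. rmax = (vpg * tmax + rg) / 2."""
    return (vpg_km_s * tmax_s + rg_km) / 2.0

def nk2_rule_of_thumb(fmax_hz, rmax_km, cmin_km_s):
    """nk2 = (2 * fmax * rmax / cmin) + 1, so that kmax = (nk2-1)*pi/rmax
    reaches 2*pi*fmax/cmin, covering phase velocities down to cmin.
    (If fcut < fmax, fcut may be used in place of fmax.)"""
    return int(2.0 * fmax_hz * rmax_km / cmin_km_s) + 1

# hypothetical numbers: vpg = 6.35 km/s, tmax = 3.5 s, farthest observer 20 km
print(rmax_conservative(6.35, 3.5, 20.0))  # -> 21.1125 km
print(nk2_rule_of_thumb(10.0, 50.0, 2.0))  # -> 501
```

Note that these are starting points: the text explains when smaller rmax (replacing vpg by vsg, or letting reflections land in the faded tail) or a program-truncated nk2 is acceptable.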
Let's say that you want to include in the calculation all waves with phase velocities between infinity and some minimum phase velocity cmin (to get the complete medium response we usually choose cmin to be 0.8 times the Rayleigh wave velocity at the top of the Earth model). Because cmin = 2π*fmax/kmax, or kmax = 2π*fmax/cmin, we have nk2 = (2*fmax*rmax/cmin)+1 (note: if fcut < fmax, you can replace fmax by fcut in this formula). In certain cases your choice of kmax may be superseded by the program. As indicated in the OLSON flowchart above, the program determines the bottom of the grid for each wavenumber, and in general the bottom of the grid moves upward (smaller z) for progressively higher wavenumber. Suppose the shallowest (least z) grid point you wish to save is at depth zmin (more on this below). If the bottom of the grid becomes shallower than zmin for some wavenumber, then OLSON will automatically stop. This is the best possible conclusion to the computation, because in this case you are certain that the complete medium response is included in your

seismograms. A good way to observe this behavior is to run the code with nk1=1, nk2=(the largest number you think you will need, which would be 651 in this example), and kskip=50. (This type of test run will also give you a good estimate of the computation time that will be required when you set kskip=1.) In the event that the program stops itself automatically, your input value of nk2 is irrelevant.

WARNING - If you are saving any of the shallowest 7 grid points, the program will never stop automatically. As the wavenumber loop (nk) increments, the bottom of the grid will become progressively shallower, but it never becomes shallower than the 7th grid point. In this case OLSON terminates when nk reaches your choice of nk2, which is perfectly acceptable if your choice of nk2 is intelligent.

WARNING - If you use surface sources and receivers, you might have to use a very high nk2 to make your seismograms look good, because the Bessel series of equation 1 converges very slowly for grid points very near the surface. You can learn more about this problem in Hisada (1995). See the section SITUATIONS IN WHICH COMPSYN CAN FAIL.

velocity model......end with a negative number

rec 11: dummy record. OLSON skips it.

rec set 12: depth, alpha, beta, rho - the velocity and density model. If you want to use a velocity model with a low velocity zone, please read the section How the Grid is Progressively Truncated, and a LVZ Warning first. The first record is a dummy record, skipped by OLSON, followed by as many records as needed with the following content: depth (km), P velocity (km/s), S velocity (km/s), and density (gm/cm^3), separated by commas. Each record represents a corner point of the piecewise-linear velocity- and density-depth functions. To introduce discontinuities in material properties, input two successive records with the same depth but different velocities and/or densities. To introduce a uniform layer, input two successive records with different depths and identical velocities and densities.

NOTE - THE DEEPEST POSITIVE DEPTH (30 km in the above example) DEFINES ZMAX, THE INITIAL BOTTOM OF THE GRID. Energy is reflected off the bottom of the grid for low wavenumbers, so the bottom of the grid must be placed deeply enough that a P wave which is generated at the surface, reflected from the bottom of the grid, and received at the deepest saved grid point (more on this below) arrives after tmax. In some cases this P wave may be small, and it may be possible to set zmax so that the reflected S wave, rather than the P wave, does not interfere in the seismograms (this saves computing time).

last record of the set: to signify the end of input of the velocity model, enter a record having negative depth with dummy values for velocities and density.

min and max desired receiver depths, and integer grid increment
3.3, 4.57, 1

record set 13 - A "receiver" is a grid point (node) for which the computed results will be saved (written to the .OLO file). You must save calculated results for grid points that span the total depth range of the extended fault that will be defined in TFAULT (see Figure TFAULT-2 below). The two records above specify the block of grid points for which results are written to the .OLO file. The first record ("min and max desired ...") is a dummy record ignored by OLSON. The second record ("3.3, 4.57, 1") contains variables ZGMIN, ZNNMX, NNSKIP. OLSON writes to the .OLO file the computed results from every NNSKIPth grid point, starting at the grid point just shallower than depth ZGMIN and continuing down to the grid point just deeper than depth ZNNMX. To make an output file that TFAULT can use, you must set these parameters so that OLSON saves results from at least 2 receiver depths, even if you want to use a point dislocation source in SLIP. In this example, every grid point from the node shallower than 3.3 km to the node deeper than 4.57 km will be saved. It is recommended that you save the results from a few grid points that are shallower and deeper than the desired top and bottom, respectively, of your fault, in case you decide to alter your fault depth slightly. If you want to save results from every grid point between the min and max receiver depths (3.3 and 4.57 km, above), enter 1 for the integer grid increment (as above). If you enter 3 (for example) rather than 1, then every third grid point result will be saved.
Your choice of NNSKIP does not affect the accuracy of the OLSON output, but it will affect the accuracy of the spatial integrals performed in program SLIP. If you use NNSKIP > 1, the traction surfaces will be undersampled for frequencies greater than FMAX/NNSKIP. The size of the .OLO file is proportional to 1/NNSKIP. We usually use NNSKIP = 1, but we have had some acceptable results with NNSKIP = 2, especially when FMAX exceeds FCUT. If you want to use NNSKIP > 1, you should test the accuracy of the program SLIP output. NOTE - be sure to read the section Situations in which Compsyn Can Fail, which describes some of the problems that can occur when sources and receivers are at the same depth.

desired depth of point force source:
0.

Here you input the depth of the point force source used in OLSON. Recall that TFAULT/SLIP work in the reciprocal geometry, so if you want to use TFAULT and SLIP to calculate seismograms for observers at depth Z, you must enter depth Z here. OLSON applies point forces at depth Z, and

then SLIP uses Green's function reciprocity. NOTE - be sure to read the section Situations in which Compsyn Can Fail, which describes some of the problems that can occur when sources and receivers are at the same depth.

components to be calculated (V=vert, H=horiz, B=both):B

rec 15: comp - enter 'V' to do verticals only (m=0), 'H' to do horizontals only (m=1), and 'B' to do both ('B' is the normal case).

How to Set Up a Problem

This section gives a brief description of the chain of thought that goes into creating a complete model. In this example we have shown how to create a model that runs efficiently with relatively little storage. This setup procedure is not guaranteed to give you a beautiful result every time, but it will give you a result good enough that only minor additional adjustments will be needed. As you become more familiar with OLSON, you will see additional ways to adjust your input models for more efficient calculations. The model below might differ from the model in sample files supplied with this software. Suppose we wish to calculate ground motions at 3 observers located near a vertical fault (dip = 90°), in the geometry shown in Figures OLSON-2 and OLSON-3:

[Figure OLSON-2. Map view of an example. The fault is 10 km long, with observer 1 at (-1, 1), observer 2 at (5, 1), and observer 3 at (11, 0) in the (X, Y) plane.]

[Figure OLSON-3. Vertical section (not to scale) showing the location of the fault surface at depth in a hypothetical example. The rupture area has dip=90°, spans 5 to 10 km depth, and the hypocenter (*) lies within it.]

In the fault model, the hypocenter is at x=2, z=8 km, and the rupture velocity is 2.5 km/s everywhere, so rupture fronts are circular. Let us assume that the longest rise time on the fault is 0.5 s. This fault is embedded in the following velocity model (Figure OLSON-4):

[Figure OLSON-4. Hypothetical velocity structure. The P velocity increases linearly from 5.0 km/s at the surface to 7.0 km/s at 10 km depth, where it jumps to 8.0 km/s; the S velocity increases linearly from 2.89 km/s to 4.05 km/s over the same interval, jumping to 4.62 km/s at 10 km depth.]

You must execute the following procedure. This procedure will usually give appropriate OLSON input values, but you might have to modify the values after you try the trial runs described later and after you look at the seismograms. You might want to set up a spreadsheet to automate these calculations.

The most important things to decide first are fmax and fcut in OLSON. fmax controls the accuracy and cost of the OLSON calculation, and fcut controls the ultimate bandwidth of the seismograms and the amount of disk storage required. For this calculation we choose fmax = fcut = 1.0 Hz. Because OLSON is the most computationally time-consuming of the codes in COMPSYN, you should be careful in your choice of fmax. Sometimes you can make your seismograms more beautiful by selecting fmax ~ 2*fcut. This improves the accuracy without increasing the storage required.

Now we determine the latest time of arrival at any observer from any point on the fault. Since the fault is buried fairly deeply and our observers are at small epicentral ranges, we expect few surface waves to be in the Green's functions, so the following argument assumes that the latest arrivals will be S waves. For other geometries, the latest arrivals might be surface waves, and you will have to include the slow propagation of the surface waves when you determine the latest time of arrival. Since

rupture proceeds away from observer 1, clearly the latest arrival at any observer will be the S wave that arrives at observer 1 from the last point on the fault to rupture. The last point on the fault stops slipping at time tlast = (largest distance rupture travels) / (rupture velocity) + rise time = sqrt(8² + 3²) km / 2.5 km/s + 0.5 s = 3.9 s (= tlast). S waves from this last point reach observer 1 at time tslast = tlast + distance/(S velocity) = 3.9 + 14.9/3.5 = 8.2 s. Since the fault is buried fairly deeply and our observers are at small epicentral ranges, we expect few surface waves to be in the Green's functions, so we could set tmax and tfade so that (tmax - tfade) is just a little greater than tslast. We set tmax = 11 s and tfade = 2 s, so that fading begins at 9 s, which is after the last S wave at the largest distance. tfade should be at least 10-20% of tmax.

To set rmax, we first estimate rgr, the largest epicentral distance for which we will need to calculate a Green's function, which is the largest horizontal distance between a point on the fault and an observer. That distance is about 11 km, which is the separation between observer #1 and the end of the fault at u=10. We now calculate rmax so that the reflected P wave from rmax does not arrive at an epicentral range of 11 km before time tmax. This is given approximately by the formula rmax = 0.5 * ((max P vel)*tmax + rgr) = 0.5 * (8.0*11 + 11) = 49.5 km.

We now determine the maximum wavenumber of interest, kmax. The lowest shear velocity in the model is 2.89 km/s. Let us assume we want waves whose phase velocity cmin is 0.75 of the lowest shear velocity. (This value usually includes all surface waves, since the phase velocity of the fundamental Rayleigh wave is 0.92 of the lowest shear velocity.) Then cmin is 2.16 km/s, and since c = omega/k, we have kmax = 2π*fcut/cmin = 2.91. The increment dk in wavenumber is approximately π/rmax, or 0.0635, so the total number of wavenumbers we will need is nk2 = kmax/dk = 46.
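These estimates are exactly the sort of arithmetic the text suggests automating in a spreadsheet; a minimal Python sketch using only the example's numbers:

```python
import math

# Timing and wavenumber bookkeeping for the worked example. All numbers
# come from the text: hypocenter offsets (8, 3) km within the fault,
# rupture velocity 2.5 km/s, rise time 0.5 s, a 14.9 km path to observer 1
# at S velocity 3.5 km/s, max P velocity 8.0 km/s, and rgr = 11 km.
tlast = math.hypot(8.0, 3.0) / 2.5 + 0.5   # last point stops slipping, ~3.9 s
tslast = tlast + 14.9 / 3.5                # last S arrival at observer 1, ~8.2 s

tmax, tfade = 11.0, 2.0                    # fading starts at 9 s > tslast
rgr = 11.0                                 # largest epicentral distance, km
rmax = 0.5 * (8.0 * tmax + rgr)            # reflected-P criterion: 49.5 km

cmin = 0.75 * 2.89                         # minimum phase velocity, km/s
kmax = 2.0 * math.pi * 1.0 / cmin          # fcut = 1 Hz
dk = math.pi / rmax                        # wavenumber increment
nk2 = round(kmax / dk)                     # number of wavenumbers: 46
print(round(tlast, 1), round(tslast, 1), rmax, nk2)
```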
To estimate zmax, we need to know the time it takes a P wave to go from the surface down to zmax and back up again to the point on the fault. We set zmax so that this reflection arrives after time tmax. Let the depth of the deepest point on the fault be zd. The reflection time is approximately trefl = (2*zmax - zd)/(P velocity), so zmax = ((P velocity)*tmax + zd)/2 = (8.0*11 + 10)/2 = 49 km. The grid spacing dz will on average be 1/6 of a shear wavelength at fmax, or (S vel)/(6*fmax) = about 0.75 km, so we will have about zmax/dz grid points, which is about 65. The time step dt should be about 0.9 of the minimum P wave transit time across any material in the grid (this tends to occur in regions of the grid where there is a low vs/vp ratio, or in places where the materials are for some reason thinner than their neighbors). Let's guess that the material having the minimum P wave transit time will be found near the surface, in the region of minimum shear velocity. The width of the smallest material will then be about vs/(6*fmax) = 0.48 km, and 0.9 of the P transit time across this material will be about dt = 0.0864 s. If we use this for our time step dt, then the total number of time steps required will be about tmax/dt = 127, and the next larger integer which is a power of 2 is 128, which is acceptably close to 127 so that we get minimal padding with zeros. As described in the section on OLSON earlier, the calculated time series get padded out with zeros until they contain a number of points which is a power of 2. Since we now know that we will be dealing with 128 point FFTs, we know that the total duration of the time series that ultimately are transformed will be 128*dt, which is about 11.06 s, so the frequency interval df will be 1/11.06 s = 0.0904 Hz, and the number of frequencies we will have to store to get up to frequency fcut is fcut/df+1 = 12. If we had set tmax = 12 seconds instead of 11, we would have generated a 139 point time series, which then would have been padded out to 256 points, meaning that the total length of the padded time series would have been 22 s; the resulting frequency interval would have been 0.045 Hz, requiring OLSON output for about 22 frequencies to be stored instead of 12. In addition, the larger tmax would have required rmax to be larger, which would have made dk smaller, thus forcing us to calculate more wavenumbers, further increasing computation time and storage. In addition, zmax would have to have been larger, requiring more grid points and computation time. So you see it is advantageous to keep parameters as small as possible, since the size of the problem can grow rapidly. Finally, because our fault spans the depth interval 5.0 to 10.0 km, we place receivers over a slightly wider range of depths, 4.6 to 10.4 km, and we save every grid point. See Figure TFAULT-2 for an illustration of the choice of receiver depths. This chain of logic leads to the following .OLD file:

OLSON data file in "How to set up a problem" section
fmax: 1.0
fcut: 1.0
tmax: 11.
dt: 0.0864
tfade: 2.
rmax: 49.5
nk1: 1
nk2: 46
ksk: 1
velocity model......end with a negative number
depth, alpha, beta, rho:
0.0, 5.0, 2.89, 2.6,
10., 7.0, 4.05, 2.8,
10., 8.0, 4.62, 2.9,
49., 8.0, 4.62, 2.9,
-1
min and max desired receiver depths, and integer grid increment
4.6, 10.4, 1,
desired depth of point force source:
0.
components to be calculated (V=vert, H=horiz, B=both):B
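The zmax, dt, and FFT bookkeeping above can be scripted as a cross-check; in this sketch dt comes out as 0.0867 s rather than 0.0864 s only because the text rounds the thinnest material width to 0.48 km before multiplying:

```python
import math

# Remaining bookkeeping for the worked example, following the text: zmax
# from the vertical P-wave bounce criterion, dz and dt from the thinnest
# material, then FFT padding and the number of frequencies written to disk.
fmax = fcut = 1.0          # Hz
tmax = 11.0                # s
zd = 10.0                  # deepest fault point, km
vp_max, vp_surf, vs_min = 8.0, 5.0, 2.89   # km/s

zmax = (vp_max * tmax + zd) / 2.0        # 49 km: bottom reflection after tmax
dz_min = vs_min / (6.0 * fmax)           # thinnest material, ~0.48 km
dt = 0.9 * dz_min / vp_surf              # ~0.0867 s (text rounds to 0.0864)
nstep = math.ceil(tmax / dt)             # ~127 time steps
nfft = 2 ** math.ceil(math.log2(nstep))  # padded FFT length: 128
df = 1.0 / (nfft * dt)                   # frequency increment, ~0.09 Hz
nfreq = int(fcut / df) + 1               # frequencies saved up to fcut: 12
print(zmax, round(dt, 4), nfft, round(df, 4), nfreq)
```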

OLSON Terminal Session

Here is an example of an OLSON terminal session, showing the keyboard inputs (i.e. input to logical unit 5) and the OLSON outputs to the screen (logical unit 6).

ENTER THE NAME OF THE INPUT DATA FILE(*.OLD SUGGESTED):

OLSON.OLD

ENTER A STEM NAME FOR ALL OUTPUT FILES(CR=OLSON):

weasel

If you enter weasel, OLSON creates output files weasel.olh, weasel.olo, and weasel.olp. If you enter CR, OLSON creates output files olson.olo, olson.olh, and olson.olp. After you enter the above two inputs, OLSON writes a lot of informative output to the terminal and to the print (.OLP) file. We comment on some of it below:

CIRCULAR MODEL
LOGICAL UNIT NUMBER OF GREENS FUNCTION OUTPUT FILE= 3
GRID SPACING BASED ON A RESOLVING FREQUENCY FMAX= 10.0 HZ
CUTOFF FREQUENCY FOR TRUNCATING THE SPECTRUM OF EACH WAVENUMBER TIME SERIES= 10.0 HZ
SEISMOGRAMS CONSISTING OF 437 TIME STEPS WILL BE GENERATED FROM TMIN= 0.0 S TO TMAX= 3.5 S WITH A TIME STEP OF DT= 0.008 S
THE LAST 0.50 S WILL BE LINEARLY FADED TO 0.
FFT WILL BE PADDED TO 512 POINTS, CORRESPONDING TO A 4.10 S TIME SERIES.
FREQUENCY INCREMENT = 0.2441 , 42 FREQUENCIES WILL BE SAVED
CYLINDER RADIUS RMAX= 50.00 KM.
WAVENUMBER SUMMATION OVER INDICES NK1= 1 TO NK2=603 WITH A SKIP INTERVAL= 1

Note above that in the .OLD file we have asked for 602 wavenumbers, but as we will see below, we don't need all of them.

MODEL PARAMETERS INPUT.....
     DEPTH  ALPHA  BETA  RHO
 1    0.00   4.00  2.30  2.60
 2    1.50   5.50  3.20  2.80
 3    4.50   6.30  3.65  2.90
 4   30.0    6.35  3.67  2.95
 5   -1.00   0.00  0.00  0.00

GRID GENERATED FROM 0.0 KM TO 30.0 KM DEPTH

NUMBER OF MATERIALS= 503

The above lines tell you that the grid has 504 grid points and 503 materials (zones between grid points). The following line is important. It compares your input DT (0.008 s) to the time step necessary for stability of the finite element calculation (0.009 s).

DT=0.008 S SHOULD BE .LT. 0.9*DTPMIN=0.009 S
MINIMUM P TRAVEL TIME ACROSS A MATERIAL IS 0.010 S, AVERAGE IS 0.010 S

You should check that the minimum P travel time is comparable to the average. If one material (grid volume) is unusually small, the minimum P travel time will be significantly less than the average. You should avoid this situation because the time step for stability is controlled by the minimum P travel time across any material. If this occurs, look at the grid printed out in the .OLP file and try to adjust the depths of points in your velocity-depth function to fix the problem.
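A sketch of this uniformity check, using a hypothetical material list in the (DZ, ALPHA) form of the .OLP grid printout (the 0.005 km material is deliberately too thin):

```python
# Hypothetical version of the P transit-time uniformity check. Each
# material is (thickness DZ in km, P velocity ALPHA in km/s); a minimum
# transit time far below the average signals a needlessly small stable dt.
materials = [(0.038, 4.019), (0.039, 4.058), (0.039, 4.097), (0.005, 4.100)]

transit = [dz / alpha for dz, alpha in materials]
tmin, tavg = min(transit), sum(transit) / len(transit)
if tmin < 0.5 * tavg:
    print("warning: one material is much thinner than its neighbors; "
          "adjust discontinuity depths to avoid a needlessly small dt")
```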

36

NNREC1= 69 NNSKIP= 1 ZNNMX= 4.57
AFTER SCANZ, 20 RECEIVERS REMAIN......
REC. NUMBER   NODE NUMBER   DEPTH(KM)
 1   69   3.4372077
 2   70   3.4953840
 3   71   3.5537057
 4   72   3.6121733
 5   73   3.6707871
 6   74   3.7295475
 7   75   3.7884548
 8   76   3.8475091
 9   77   3.9067113
10   78   3.9660614
11   79   4.0255599
12   80   4.0852070
13   81   4.1450033
14   82   4.2049494
15   83   4.2650452
16   84   4.3252912
17   85   4.3856874
18   86   4.4462352
19   87   4.5069342
20   88   4.5677676

The above shows the depths of the nodes (grid points) for which you want the results saved. A "receiver" is a node (grid point) whose results are written to the .OLO file.

FORCE SOURCE REQUESTED AT Z= 0.00 KM DEPTH
WILL BE APPLIED TO NODE NUMBER 1 AT Z= 0.00 KM

This is the depth of the single force source that you selected in the .OLD file.

VERTICAL BOUNCE TIME= 9.52 S SHOULD EXCEED TMAX= 3.50

The vertical bounce time is the time required for a P wave to go vertically from the surface to the bottom of the grid and back up to the deepest receiver point (4.567 km, above). It should exceed TMAX, which it does here.

COMPONENTS REQUESTED (V=VERT,H=HORIZ,B=BOTH): B

For each wavenumber OLSON writes one line of output telling how many nodes and receivers are in the grid. As wavenumber increases, the waves are confined to a progressively shallower part of the grid, and the deeper parts of the grid are discarded by OLSON because there are no seismic waves in them. (Physically, higher wavenumbers correspond to slower phase velocity waves, which bottom more shallowly in the model.) At some point, some of the nodes discarded from the grid are receiver nodes. This is correct and desirable. Receivers that are not within the top 6 grid points might all be discarded from the grid at some point in the calculation, depending on your choice of nk2. Then OLSON stops. This is the ideal conclusion for buried receivers.

FOR NK = 1 THERE ARE 504 NODES AND 20 RECEIVERS LEFT IN THE GRID
FOR NK = 2 THERE ARE 504 NODES AND 20 RECEIVERS LEFT IN THE GRID
FOR NK = 3 THERE ARE 504 NODES AND 20 RECEIVERS LEFT IN THE GRID

37

FOR NK = 4 THERE ARE 504 NODES AND 20 RECEIVERS LEFT IN THE GRID
FOR NK = 5 THERE ARE 504 NODES AND 20 RECEIVERS LEFT IN THE GRID

. . . 332 lines removed here . . .

FOR NK = 338 THERE ARE 95 NODES AND 20 RECEIVERS LEFT IN THE GRID
FOR NK = 339 THERE ARE 94 NODES AND 19 RECEIVERS LEFT IN THE GRID
FOR NK = 340 THERE ARE 93 NODES AND 18 RECEIVERS LEFT IN THE GRID
FOR NK = 341 THERE ARE 92 NODES AND 17 RECEIVERS LEFT IN THE GRID
FOR NK = 342 THERE ARE 91 NODES AND 16 RECEIVERS LEFT IN THE GRID
FOR NK = 343 THERE ARE 90 NODES AND 15 RECEIVERS LEFT IN THE GRID
FOR NK = 344 THERE ARE 89 NODES AND 14 RECEIVERS LEFT IN THE GRID
FOR NK = 345 THERE ARE 87 NODES AND 12 RECEIVERS LEFT IN THE GRID
FOR NK = 346 THERE ARE 86 NODES AND 11 RECEIVERS LEFT IN THE GRID
FOR NK = 347 THERE ARE 85 NODES AND 10 RECEIVERS LEFT IN THE GRID
FOR NK = 348 THERE ARE 84 NODES AND 9 RECEIVERS LEFT IN THE GRID
FOR NK = 349 THERE ARE 83 NODES AND 8 RECEIVERS LEFT IN THE GRID
FOR NK = 350 THERE ARE 82 NODES AND 7 RECEIVERS LEFT IN THE GRID
FOR NK = 351 THERE ARE 80 NODES AND 5 RECEIVERS LEFT IN THE GRID
FOR NK = 352 THERE ARE 79 NODES AND 4 RECEIVERS LEFT IN THE GRID
FOR NK = 353 THERE ARE 78 NODES AND 3 RECEIVERS LEFT IN THE GRID
FOR NK = 354 THERE ARE 77 NODES AND 2 RECEIVERS LEFT IN THE GRID
FOR NK = 355 THERE ARE 76 NODES AND 1 RECEIVERS LEFT IN THE GRID

Note above that by the 356th wavenumber, all receivers had been removed from the grid and there was no need to continue the calculation, so OLSON stopped. It then reminds the user to run XOLSON. Sometimes your choice of NK2 stops OLSON, in which case you get the message:

NOTE ... PROGRAM FINISHED AT NK2 = (some number) WITH RECEIVERS STILL LEFT IN THE GRID. NORMAL EXIT, BUT WATCH FOR SIGNS OF NONCONVERGENT WAVENUMBER INTEGRATION IN THE RESULTS. Now run XOLSON.

If your choice of NK2 is high enough, your result should contain all the surface waves. However, see the section SITUATIONS IN WHICH COMPSYN CAN FAIL regarding nonconvergent wavenumber integrations. This message is just to alert you to a possible problem; it does not mean the problem has actually occurred. This exit message is acceptable. For more information on this conclusion of execution, please see the explanation of nk1, nk2, and kskip above, where the .OLD file is explained.

Trial Runs the User Should Perform

OLSON solves a differential equation by a time-stepping finite element calculation. The user has to choose input parameters that make the calculation both stable and accurate. One type of inaccuracy in finite element calculations is "grid dispersion." When the wave equation is solved by a time-stepping finite element method, low-frequency waves travel at the correct speed in the grid, but high-frequency waves travel at a different, incorrect speed. This phenomenon is called "grid dispersion." When you input fmax, you are telling OLSON that you want waves in the frequency band 0-fmax to travel at the correct speed (waves with frequency > fmax will travel at the wrong speed). OLSON then creates a calculational grid that satisfies your request.

You choose dt to make the calculation stable. In an unstable calculation, numerical errors grow exponentially with every time step, so ultimately the errors become so huge that they cause numerical overflows and the calculation stops (typically with some strange error like square root of a negative number). To guarantee a stable calculation, dt must be chosen to satisfy the Courant stability condition. The documentation suggests that you choose dt = 0.15*vs / (fmax*vp), where vp and vs are the P and S velocities in the shallowest part of the model. Because vs/vp is about 0.7, dt is about 0.1/fmax. Thus, the Nyquist frequency is about 5*fmax. OLSON is calculating results in the frequency band 0 - Fnyq, but only the band 0 - fmax is accurate (low grid dispersion). Because OLSON calculates results in the band 0 - Fnyq, you must also tell OLSON what band you want to write to disk. fcut controls this: OLSON writes to disk the results in the frequency band 0-fcut. The documentation recommends that you input fcut .le. fmax.

OLSON automatically generates a grid of mesh points and "materials" (a "material" is the region between two grid points; OLSON also uses the term "node" to mean a grid point, and it sometimes uses "mesh" to mean the grid). You must inspect this grid in order to set some necessary parameters. Also, you must test the finite-element time-stepping algorithm to ensure that it will be stable for your selected parameters. Consequently, you should follow the procedure described in this section.

Trial Run 1 – Generating and Checking the Grid

This trial run is highly recommended.
First, try running OLSON for only one time step (tmax=dt) and with nk1 = nk2 = 1. This run will cause the code to set up and print (to the .OLP file) the finite element grid ("mesh") it will use, as shown below.

Partial printout of grid from a .OLP file:

MATERIAL NO.  DEPTH   DZ    RHO   ALPHA  BETA   DTP    TP
  1    0.00  0.038  2.603  4.019  2.312  0.010  0.010
  2    0.04  0.039  2.608  4.058  2.335  0.010  0.019
  3    0.08  0.039  2.613  4.097  2.358  0.010  0.029
  4    0.12  0.039  2.618  4.136  2.382  0.010  0.038
  5    0.16  0.040  2.623  4.176  2.405  0.010  0.048
  6    0.20  0.040  2.629  4.216  2.429  0.010  0.057
  7    0.24  0.041  2.634  4.256  2.454  0.010  0.067
  8    0.28  0.041  2.640  4.297  2.478  0.010  0.076
  9    0.32  0.042  2.645  4.338  2.503  0.010  0.086
 10    0.36  0.042  2.651  4.380  2.528  0.010  0.096
 11    0.40  0.042  2.656  4.422  2.553  0.010  0.105
 12    0.44  0.043  2.662  4.465  2.579  0.010  0.115
 13    0.49  0.043  2.668  4.508  2.605  0.010  0.124
 14    0.53  0.044  2.673  4.551  2.631  0.010  0.134
 15    0.57  0.044  2.679  4.595  2.657  0.010  0.143
 .
 .
 .
503   29.81  0.061  2.950  6.350  3.670  0.010  4.850

You should verify that the bottom of the grid is deep enough that a P wave generated at the surface (observer/point force location) and reflected from the bottom of the grid does not arrive at your deepest selected receiver before time TMAX. OLSON will check for this condition, print the vertical P wave bounce time, and print a warning if the bounce time does not exceed TMAX. However, OLSON will not stop execution on this warning, because in some cases you may find this problem acceptable. An example of such a printed message is:

VERTICAL BOUNCE TIME= 9.52 S SHOULD EXCEED TMAX= 3.50

If you want to estimate the vertical P wave bounce time yourself, you should use the column TP in the .OLP file (see example above). This column shows the vertical P wave travel time between the surface and the bottom of the corresponding material. In the above example you can see that a P wave travels from the bottom of the 503rd material at 29.81 km depth to the surface in 4.85 s. Suppose you wanted to use a fault that spanned the depth interval 0.10 to 0.50 km. Then the earliest reflected P wave would be the P that started at the bottom of the fault (0.50 km), reflected from the bottom of the grid, and went to the surface. Its total travel time would be 4.85 + (4.85 – 0.134) = 9.566 s. Thus, if you wanted to calculate finite source seismograms that lasted longer than 9.566 seconds after the earthquake origin time, a reflected P from the bottom of the grid would arrive in your time window of interest. In this case, make the bottom of the grid (ZMAX) deeper.

You should also check that no material in the grid has a P wave transit time that is substantially shorter than the average. The first place to look in the .OLP file is a printed line like this:

MINIMUM P TRAVEL TIME ACROSS A MATERIAL IS 0.010 S, AVERAGE IS 0.010 S

Sometimes unusually short P transit times occur when you input a literal discontinuity in seismic velocity or density at some depth. When you do this, you should check the thickness of the materials (zones between neighboring grid points) adjacent to the discontinuity (look at the depths of the grid points in the .OLP file). Occasionally, one of the materials is much thinner than the others, which means that the time step required for stability will be unnecessarily small and your CPU time will be unnecessarily large. If you find a material that is significantly thinner than its neighbors, you can try varying the depth of the discontinuity slightly to get a more uniform material thickness near the discontinuity.

While inspecting the .OLP file, you should check the list of remaining receiver depths. These are the grid points (receivers) at which the results of the calculation will be saved. You should verify that these depths span your desired depth interval.

Trial Run 2 – Verifying Numerical Stability

This trial run is highly recommended. To ensure that you have set your time step, dt, small enough for stability, you should do a second run of OLSON. In this run, set nk1 = nk2 = the largest value of nk that you think you will want, and set dt and tmax to their desired values. The finite element algorithm is most likely to go unstable at the largest wavenumbers, and if OLSON successfully executes all the time steps for the largest wavenumber, you are assured of avoiding instabilities for all lower wavenumbers. If the calculation is unstable, OLSON typically terminates abruptly in some unpleasant error state (e.g. numerical overflow, sqrt domain error, etc.). If this happens, make dt smaller and try again.

Note – for some values of wavenumber, the finite element algorithm in OLSON will experience underflows when the wave functions become very small. We believe that this situation does not significantly affect the accuracy of the calculated seismograms, and typically we run OLSON without checking for underflows, but the user should verify the accuracy of the resulting seismograms personally. We note that in Table NOTES-1 above the largest errors were somewhere in the COMPSYN package, which might be related to underflows in OLSON.

Trial Run 3 – Estimating Computation Time

A third test allows you to estimate CPU time for a big job. To do this, set dt, tmax, nk1, and nk2 to their desired values, but set kskip = 50, and run OLSON with these parameters. This will show you how the grid is truncated as a function of wavenumber (look in the .OLP file), and it will also tell you that the real run with kskip = 1 will require about 50 times as much CPU time. At this time you can also determine whether your choice of nk2 is reasonable.
The program may stop at some value of nk < nk2 if the grid truncation causes the bottom of the grid to become shallower than the shallowest grid point (receiver depth) you want to save (the ideal situation), or you may have to cause OLSON to stop with your choice of nk2.

Trial Run 4 – Testing the Adequacy of nk2

This trial run is necessary only if you are somewhat paranoid. Let's call the nk2 estimated from the simple equations above nk2e. If you want to check whether nk2e is high enough, run OLSON with nk1 = nk2e+1 and nk2 = nk2e+10. Then run TFAULT, SLIP, and SEESLO with the output from this run of OLSON. If the seismograms are almost zero, then the wavenumbers above nk2e contribute nothing, so they do not have to be calculated.
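The "almost zero" judgment can be made quantitative with a simple amplitude comparison; the seismogram arrays below are made-up placeholders for real SEESLO output:

```python
import math

# Hypothetical Trial Run 4 check: compare the RMS amplitude of seismograms
# built only from wavenumbers nk2e+1..nk2e+10 against the full result.
def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

full = [1.0, -0.8, 0.5, -0.2]          # seismogram from nk = 1..nk2e (made up)
tail = [1e-4, -2e-4, 5e-5, -1e-4]      # seismogram from the 10 extra wavenumbers

if rms(tail) < 0.01 * rms(full):
    print("wavenumbers above nk2e contribute negligibly; nk2e was adequate")
```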

How the Grid is Progressively Truncated, and a LVZ Warning

The user should carefully check the results of the calculation if the velocity-depth function contains a low velocity zone (LVZ). We have run OLSON a few times with models having LVZs, and we expect that OLSON calculates these results correctly, but we have never verified this. In particular, the grid truncation, which is done in subroutine BNDRY, should be checked. As OLSON loops over wavenumber index NK,

progressively higher values of wavenumber k (variable XK) are used. If frequencies from 0 to fmax are present in waves having wavenumber k, then the maximum phase velocity present is cmax = 2π*fmax/k, and progressively higher wavenumbers correspond to progressively lower maximum phase velocities. If we call the region between two grid points a "material", the variable NMAT counts the number of materials in the grid. Material thickness DZ at depth z is set in subroutine ELEMNT to be β(z)/(6*fmax), where β(z) is the local shear wave velocity. Subroutine BNDRY scans the materials from the bottom up and compares the material thickness with ZLAM5 = 2π/(5*k). If the length ZLAM5 is less than the thickness of the material at the bottom of the grid, the material is removed from the grid. An algebraic restatement of this criterion is that if β(z) > 6*cmax/5, the node (grid point) at depth z is removed from the grid. Physically, a wave with phase velocity c is confined to depths where β(z) < c, so subroutine BNDRY removes nodes for which β(z) > 6*cmax/5, where waves with phase velocity cmax cannot exist.

BEWARE: the algorithm in BNDRY assumes your point source is located above the LVZ. If you use a point source inside the LVZ, the grid truncation algorithm will discard grid points incorrectly! OLSON is currently set to call BOMOUT if your model has an LVZ, but you can remove the call to BOMOUT and try to run your LVZ model. The BOMOUT is there just to remind you that you are extending the code beyond the limits we have tested.
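The truncation criterion can be sketched as follows (the subroutine names are OLSON's; the numerical values are illustrative only):

```python
import math

# Sketch of the grid-truncation test in subroutine BNDRY described above:
# a bottom material of thickness dz = beta/(6*fmax) (set in ELEMNT) is
# stripped once ZLAM5 = 2*pi/(5*k) drops below it, which is equivalent to
# beta > 6*cmax/5 with cmax = 2*pi*fmax/k.
def keep_material(beta, fmax, k):
    dz = beta / (6.0 * fmax)            # material thickness at this depth
    zlam5 = 2.0 * math.pi / (5.0 * k)
    return zlam5 >= dz                  # False -> material removed from grid

# A beta = 4.62 km/s material survives at low wavenumber but not at high:
print(keep_material(4.62, 1.0, 0.1), keep_material(4.62, 1.0, 3.0))
```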

Setting OLSON Dimensions

Dimensions of the important arrays are set in the PARAMETER statement in OLSON.INC. You might have to change the array dimensions if the arrays are too big to fit into RAM on your computer, or if you want to run a bigger problem than the current array dimensions allow. The parameters controlling dimensions (and the arrays affected) are:

NDDIM = max allowed number of grid points (receivers) whose results may be saved
NTDIM = max allowed time steps in the finite element calculation (arrays TEMPD, TEMPG)
NODIM = max number of nodes (grid points). It is a good idea to leave a little margin for error when you set NODIM, i.e. set NODIM=402 if you want to run a 400 node (grid point) problem.
NFDIM = max number of frequencies to be saved (array XPOSE)
NKDIM = dimension of arrays XK0 and XK1, which hold the zeros of the Bessel functions. These are used only for temporary storage of the zeros before writing to unit nfunit.
NKSTOR = the number of wavenumbers that are stored in one call to subroutine XCRETE
NFFT = the number of elements in the arrays which get Fourier transformed (PD, PG, PDGFFT). NFFT must be a power of 2 and must be .ge. 2*NSTEPT, where NSTEPT = TMAX/DT + 1 is the total number of time steps.

NTDIM: set NTDIM large enough to accommodate the number of time steps (NSTEPT) in your calculation.

NFFT: to set NFFT, pick the next integer which is a power of 2 and which equals or exceeds NSTEPT, then set NFFT to double this integer.

NFDIM is the maximum number of frequencies you can store. When we discussed DT above, we showed how it is related to NF, the number of frequencies calculated. If NFDIM is too small for the actual NF chosen by the program, the program will stop with an error message. If NFDIM in the PARAMETER statement is much greater than NF, a lot of storage will be wasted, although the program will run. NFDIM is used only for final frequency domain storage, in array XPOSE, so NFDIM must be set ≥ the number of frequencies which will ultimately be stored, NUMFQ.

Two moderate sized arrays are dimensioned as follows:

TEMPG(NTDIM,NDDIM,3)
TEMPD(NTDIM,NDDIM,3)

The most gigantic array is the complex array XPOSE:

COMPLEX XPOSE (10, NKSTOR, NDDIM, NFDIM)

To make this program run efficiently, you should determine the largest array you can fit into RAM on your computer. Then set NKSTOR (in the PARAMETER statement) so that 2 * 10 * NKSTOR * NDDIM * NFDIM is equal to that largest array size.
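The sizing arithmetic above can be collected into one small helper (a sketch with our own names, assuming 4-byte REALs; this is not part of the package):

```python
def olson_dims(tmax, dt, nddim, nfdim, ram_bytes):
    """Sketch of the dimension arithmetic described above.
    Returns (nstept, nfft, nkstor) for given TMAX, DT, NDDIM, NFDIM
    and a RAM budget in bytes for the XPOSE array."""
    nstept = int(tmax / dt) + 1              # total time steps
    p = 1
    while p < nstept:                        # next power of 2 >= NSTEPT
        p *= 2
    nfft = 2 * p                             # doubled, per the text
    # XPOSE is COMPLEX (10, NKSTOR, NDDIM, NFDIM): 2 reals per element.
    # Pick NKSTOR so 2*10*NKSTOR*NDDIM*NFDIM reals (4 bytes each,
    # an assumption) fit the RAM budget.
    nkstor = ram_bytes // (4 * 2 * 10 * nddim * nfdim)
    return nstept, nfft, nkstor
```

For example, TMAX = 64 s with DT = 0.0625 s gives NSTEPT = 1025 and NFFT = 4096.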

Identities of Output Variables

The expansion coefficients that ultimately get written out to the .OLO file are known by various names in various places, as summarized in the following table.

TABLE OLSON-1: Variables containing expansion coefficients

Spudich and Ascher (1983)   OLSON          Olson et al. (1984)   TFAULT

"horizontals", m=1
U_z^1                       TEMPD(1,*,*)   a_1S                  A(1)
U'_z^1                      TEMPG(1,*,*)   a'_1S                 A(4)
U_r^1                       TEMPD(2,*,*)   a_3S                  A(2)
U'_r^1                      TEMPG(2,*,*)   a'_3S                 A(5)
U_φ^1                       TEMPD(3,*,*)   a_5T                  A(3)
U'_φ^1                      TEMPG(3,*,*)   a'_5T                 A(6)

"verticals", m=0
U_z^0                       TEMPD(1,*,*)   a_1R                  A(7)
U'_z^0                      TEMPG(1,*,*)   a'_1R                 A(9)
U_r^0                       TEMPD(2,*,*)   a_3R                  A(8)
U'_r^0                      TEMPG(2,*,*)   a'_3R                 A(10)

Note that the motions of the medium are completely described by these 10 independent expansion coefficients, at least for point force and double couple sources. In TFAULT we will integrate linear combinations of these against Bessel function kernels to get various physical quantities such as strain or traction. These coefficients are functions of wavenumber, depth, and frequency, and are stored for output in the XPOSE array. However, because these coefficient arrays can be very large (i.e. they can be much bigger than the available RAM), they are stored in the XPOSE array for only a limited number of wavenumbers (NKSTOR) at a time, and only these limited sets of wavenumbers are written out by subroutine XCRETE. In other words, if you choose nk1=1, nk2=482, and nkstor (set in the parameter statements) = 50, the first call to XCRETE will write out the expansion coefficients for wavenumbers 1 to 50, the second will write 51 to 100, etc., and the last will write 451 to 482. Ultimately these have to be regrouped so that all wavenumbers are in adjacent storage, and this is done by program XOLSON.
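The block structure of the .OLO file can be sketched as follows (our illustration of the grouping only, not the Fortran i/o):

```python
def xcrete_blocks(nk1, nk2, nkstor):
    """Return the (first, last) wavenumber indices written by each
    successive call to XCRETE, given blocks of NKSTOR wavenumbers.
    Sketch of the grouping described in the text."""
    blocks = []
    lo = nk1
    while lo <= nk2:
        hi = min(lo + nkstor - 1, nk2)   # last block may be partial
        blocks.append((lo, hi))
        lo = hi + 1
    return blocks
```

With nk1=1, nk2=482, and nkstor=50 this reproduces the example in the text: blocks (1, 50), (51, 100), ..., (451, 482).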

XOLSON DETAILED INSTRUCTIONS

XOLSON Function

Program XOLSON is a utility program which reads the OLSON output files *.OLO and *.OLH and rearranges the output arrays into a form suitable for TFAULT. As explained earlier, OLSON writes the expansion coefficients in blocks of NKSTOR wavenumbers, each block containing the results for all frequencies. XOLSON regroups the expansion coefficients into blocks containing the results for NFDIM frequencies and all wavenumbers. XOLSON does this by scanning repeatedly through the *.OLO file, reading subsets of the OLSON output, placing those numbers into an array named BIG, and rearranging the numbers in BIG. After you have successfully run XOLSON, you may safely delete the .OLO and .OLH files – you will never need them again.

XOLSON Dimensions – What To Do If XOLSON Does Not Fit in RAM

The most likely problem you might encounter is that array BIG can be a very big array, depending on the dimensions. If XOLSON runs properly with the dimensions that are currently set, you can ignore the following discussion about arrays. If XOLSON does not fit into your RAM, you must change the dimensions. Do not be discouraged!! Do not wail, gnash your teeth and rend your clothes!! XOLSON can handle an arbitrarily large output from OLSON and run properly on a computer with as little as 128Kb (yes, Kb, not Mb) of RAM if you set the dimensions of XOLSON properly. The array dimensions are set in the PARAMETER statement, and they are currently:

PARAMETER (NFDIM=4, NDDIM=200, NKDIM=1500, NHEAD=3000,
1          NKWDIM=3000)
C
COMPLEX BIG (10, NKDIM, NDDIM, NFDIM)
DIMENSION HEADER(NHEAD), IKLAST(NDDIM), LASTK(NDDIM),
1          XK0(NKWDIM), XK1(NKWDIM)

Dimension Parameters and How to Set Them

NFDIM - should be ≤ NFDIM in OLSON. The larger NFDIM is set, the less cpu time and i/o time XOLSON will require to run, but the more memory XOLSON will need. The maximum size of NFDIM is limited by the amount of memory you have on your machine. Using the above dimensions XOLSON.EXE uses about 94.2 Mb of RAM and will run on a computer having 128 Mb of RAM. If you raise NFDIM to 10, XOLSON.EXE uses about 234.9 Mb of RAM and will fit on a computer with 256 Mb of RAM.

NDDIM - this number should equal or exceed the number of grids saved from OLSON.

NKDIM - should be ≥ NK2 in the OLSON output file.

NHEAD - should equal NHEAD in OLSON. This is the length of the header array, and it should not be changed under normal circumstances in either program. If you change NHEAD in OLSON.INC, BE SURE TO CHANGE IT IN ALL THE OTHER CODES!

NKWDIM – this is the length of the XK0 and XK1 arrays, which contain the zeros of the Bessel functions. This should be ≥ NK2 in the OLSON output.

The program creates two output files, written to units LP and IOUT (set in a DATA statement). The program prompts you at the terminal for an output stem name for the transposed, regrouped wavenumber output, and it writes two output files, one named "your_output_stem.XWV" and a print file named "your_output_stem.XOP".
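Array BIG dominates XOLSON's memory use, and its footprint is easy to estimate (a sketch assuming single-precision COMPLEX, 8 bytes per element; our helper, not part of the package):

```python
def big_bytes(nkdim, nddim, nfdim, bytes_per_complex=8):
    """Memory footprint of COMPLEX BIG(10, NKDIM, NDDIM, NFDIM),
    assuming single-precision COMPLEX (8 bytes per element)."""
    return 10 * nkdim * nddim * nfdim * bytes_per_complex
```

With the dimensions above (NKDIM=1500, NDDIM=200, NFDIM=4), BIG alone is 96,000,000 bytes, in line with the roughly 94.2 Mb total quoted above; raising NFDIM to 10 gives 240,000,000 bytes.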

XOLSON Terminal Session

>XOLSON
ENTER THE STEM NAME OF THE OLSON OUTPUT FILES(CR=OLSON): CR

(You should enter only the stem of the OLSON output file names because there are two OLSON output files, *.OLO and *.OLH. If you answer CR here, XOLSON looks for OLSON.OLH and OLSON.OLO. If you answer guk here, XOLSON looks for guk.OLH and guk.OLO).

ENTER DESIRED STEM NAME FOR THE XOLSON OUTPUT FILE(CR=XOLSON):

some_desired_name

(The program creates two output files, written to units LP and IOUT (set in a DATA statement). The program prompts you at the terminal for an output stem name for the transposed, regrouped wavenumber output, and it writes two output files, one named "some_desired_name.XWV" and a print file named "some_desired_name.XOP".)

1000 HEADER WORDS READ, 355 ZEROS READ
NK1 = 1 NK2 = 603 NNTOT1 = 20

(Below XOLSON writes an output record containing the results for NFDIM (=10) frequencies. The .OLO file contains the results for 42 frequencies, so five records are written, the last of which contains the results for only 2 frequencies.)

WRITING OUTPUT RECORD FOR IF1=  1 IF2= 10 NFTOP= 10
WRITING OUTPUT RECORD FOR IF1= 11 IF2= 20 NFTOP= 10
WRITING OUTPUT RECORD FOR IF1= 21 IF2= 30 NFTOP= 10
WRITING OUTPUT RECORD FOR IF1= 31 IF2= 40 NFTOP= 10
WRITING OUTPUT RECORD FOR IF1= 41 IF2= 50 NFTOP=  2
IF2=50 .GE. NUMFQ= 42
NORMAL EXIT

Now run TFAULT

After you have successfully run XOLSON, you may safely delete the .OLO and .OLH files – you will never need them again. The .XOO file contains all the information that was in the .OLO and .OLH files.

TFAULT DETAILED INSTRUCTIONS

TFAULT reads the frequency-wavenumber domain output from XOLSON (the .XOO file) to calculate frequency domain Green's functions for traction on a user-defined fault surface for a set of observer locations. The fault surface geometry is shown in Figure TFAULT-1. Unfortunately, this geometry is not exactly the same as the ISOSYN geometry, which contains a parameter ZF that is not used here.


[Figure TFAULT-1 appears here: a diagram of the fault plane showing the x, y, z axes, the fault coordinates (u, v) with origin (u=0, v=0), the dip angle, the slip s = d+ - d-, and an observer at (xobs, yobs).]

Figure TFAULT-1. Definition of fault geometry for program TFAULT. The fault is a plane, and the intersection of this plane with the Earth’s surface (z = 0 km) determines the x axis. Note that the dip of the fault may exceed 90°; TFAULT requires 0° < dip < 180°. The (u,v) coordinate system lies in the fault plane, where u is the coordinate along strike (parallel to the x axis), and v is the downdip coordinate. The u-v origin coincides with the x-y-z origin of coordinates. NOTE that this coordinate system differs from the ISOSYN coordinate system; there is no parameter ZF in this coordinate system. The tractions are calculated on a grid of points on the fault plane, where the sample point spacing in u is du and the spacing in v is dv. This relationship of TFAULT sample points to OLSON depths is shown in Figure TFAULT-2.


[Figure TFAULT-2 appears here: a cross-section in y and z showing the OLSON grid points, the OLSON "receivers" written to the .OLO file, the maximum depth zmax, and two candidate fault surfaces spanning vmin to vmax, one acceptable and one forbidden.]

Figure TFAULT-2. Figure showing the relationship between receiver depths in OLSON and the permissible extent of faulting in TFAULT. Coordinates y, z, and v are shown in Figure TFAULT-1. In OLSON seismic reciprocity is used. Point horizontal and vertical forces (green and pink arrows) are applied at the Earth's surface. OLSON solves for the Earth response on a grid of points in depth, shown by black dots. Red dots are OLSON "receivers", grid points for which the results of the calculation are written to the .OLO file. All fault surfaces used in TFAULT must lie entirely within the depth range of the OLSON receivers (red dots). Fault surface 1 (purple line segment) is forbidden for input into TFAULT because its minimum v coordinate, vmin, defines a fault whose top is shallower than the shallowest OLSON receiver. Fault surface 2 (orange line segment) is acceptable to TFAULT because both the top and bottom of the fault lie within the interval of OLSON receivers.

These spacings vary as a function of frequency and depth, so that fewer sample points are used at depths and frequencies where the traction function varies more slowly as a function of position on the fault (see Spudich and Archuleta, 1987, Figure 21, pp. 233-234 and pp. 256-262). The sample point spacing is determined using the following algorithm. For a particular frequency f and downdip coordinate v, the wavelength of an S wave at that depth and frequency is λ(z) = β(z)/f. TFAULT sets the along-strike sample spacing du and the downdip sample spacing dv using a user-input number n of samples per shear wavelength, i.e. du = β(z)/(nf). Since the shear wavelength is inversely proportional to frequency and proportional to local shear velocity, the sample spacing becomes more dense with increasing frequency and generally less dense with increasing depth (except when there is a low velocity zone present). Of course, at f=0 a shear wavelength is infinite, so the sample spacing at zero frequency would also be infinite if we applied this algorithm uniformly for all frequencies. To avoid this problem of infinite du and dv, the user also specifies a minimum allowable number of sample points in the horizontal (u) and downdip (v) extents of the fault surface.

To calculate the traction at a desired fault point (u,v), TFAULT determines the corresponding values of epicentral radius r and depth z. It then linearly interpolates between the discrete expansion coefficients in the .XOO file, U_z^m(k, z, ω), U'_z^m(k, z, ω), U_r^m(k, z, ω), ..., U'_φ^m(k, z, ω), for the value of depth z corresponding to the desired downdip coordinate v. It then applies the Bessel transform to these interpolated values.

Typically, at high frequency it is not so important to calculate an accurate traction, because the highest frequencies in the ultimate seismograms are likely to be attenuated by a filter that the user will probably apply to the seismograms. Also, in TFAULT the computational effort for each frequency is proportional to the square of that frequency. Thus, calculation time can be reduced if the user is willing to accept a less accurate result at high frequency. TFAULT offers the possibility of sampling the traction functions less densely at high frequency. The user may specify a frequency f2 and an exponent m; for f > f2, the sample spacing on the fault is proportional to β(z) (f/f2)^(-m) / n. Thus, for example, if m=0, the sample spacing does not change for f > f2.
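Our reading of the spacing rule can be sketched as follows (a hypothetical helper; the f > f2 branch is reconstructed from a garbled formula in the source, so treat it as an assumption):

```python
def sample_spacing(beta, f, n, f2, m):
    """Sketch of the TFAULT sample-spacing rule as we read it:
    du = beta/(n*f) for 0 < f <= f2, and for f > f2 the spacing is
    relaxed by the factor (f/f2)**m, so m=0 freezes the spacing at
    its f2 value and m=1 recovers the full-accuracy rule."""
    if f <= f2:
        return beta / (n * f)
    return beta * (f / f2) ** (-m) / (n * f2)
```

The two branches agree at f = f2, and with m = 1 the high-frequency branch reduces to beta/(n*f), consistent with the text's remark that m controls how much accuracy is sacrificed above f2.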

User-Prepared Data File (.TFD File)

In addition to reading a .XOO file, TFAULT reads a user-prepared data file. Here is an example of such a file:

Minimum allowable number of samples/cycle in J tables [cr=30]:30
Least and greatest horizontal coordinate of fault plane (km):-0.6,0.6
Least and greatest downdip coordinate of fault plane (km):3.5,4.5
Fault plane dip (degrees, 0 < dip < 180): 90.
Real number of sample points per wavelength: 8.,
Minimum number of sample points in horizontal direction: 20,
Minimum number of sample points in downdip direction: 20,
Minimum, intermediate, and maximum frequencies to do: 0., 10., 10.,
Power of frequency to use above intermed freq for sample density: .8
List of observer coordinates (km): 1.,1.73, -5.2, 3.8

A line-by-line explanation of the input follows:

Minimum allowable number of samples/cycle in J tables [cr=30]:30

(To speed the calculation, TFAULT does not explicitly calculate a Bessel function every time it is needed. Instead, TFAULT calculates look-up tables of Bessel functions and performs linear interpolation to determine an approximate value. Currently these look-up tables are 15000 elements long. The problem geometry controls the total number of cycles of the Bessel function that are fit into the look-up tables. TFAULT automatically selects a sample density based on the total table length of 15000 elements. This sample density is usually much more than 30 samples per cycle. Here you are telling TFAULT the lowest acceptable number of samples per cycle of the Bessel function you will allow in the look-up tables. Since the Bessel functions are oscillatory, you specify the minimum sampling in samples per cycle. The minimum recommended number of samples is 30 per cycle, which gives a maximum error in the approximate Bessel function value of less than 1%. This is the value we usually use. If the sample density falls below your input number (30 in this case), TFAULT will warn you and stop execution. At some sufficiently high input number, you may have to increase the dimensions of the look-up table arrays. See *** in the TFAULT Terminal Session section below for an example. Modern-day computers might be fast enough that the look-up tables could be replaced by actual function evaluations without significant speed penalty.)

Least and greatest horizontal coordinate of fault plane (km): -0.6,0.6

(These are umin and umax, illustrated in Figure TFAULT-1 above. These define the horizontal extent of the fault. Note that umin must be less than or equal to umax, but umin can be greater than 0, and umax can be less than 0. It is a good idea to choose umin and umax so that they define a fault surface that is larger than the rupture model defined in the .SLD data file for SLIP.)

Least and greatest downdip coordinate of fault plane (km): 3.5,4.5

(These are vmin and vmax, illustrated in Figure TFAULT-1 above. These define the downdip extent of the fault. Note that vmin should be less than vmax, and both of them must be non-negative. If you want to use a point source at coordinates (up, vp), you must still enter vmin < vp < vmax.
Because your fault might dip, the depth z corresponding to downdip coordinate v is z = v*sin(dip). You must be sure to choose vmin and vmax so that the corresponding depths lie inside the range of depths you selected for receivers in OLSON. Both vmin and vmax must be nonnegative. It is a good idea to choose vmin and vmax so that they define a fault surface that is slightly larger than the rupture model defined in the .SLD data file for SLIP.)

Fault plane dip (degrees, 0 < dip < 180): 90.

(Enter dip as defined by Figure TFAULT-1 above. Note that the dip of the fault may exceed 90°; TFAULT requires 0° < dip < 180°. We have little experience with very shallow dipping faults, and you should check the results of such calculations. We have never tested the code for dips less than 5° or greater than 175°.)
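The depth constraint above can be checked before running TFAULT (our helper for pre-checking input; it is not built into the codes, which read these values from the .TFD and .XOO files):

```python
import math

def fault_depths_ok(vmin, vmax, dip_deg, zmin_olson, zmax_olson):
    """Check that the depths z = v*sin(dip) spanned by
    vmin <= v <= vmax lie within the OLSON receiver depth range."""
    s = math.sin(math.radians(dip_deg))   # positive for 0 < dip < 180
    ztop, zbot = vmin * s, vmax * s
    return zmin_olson <= ztop and zbot <= zmax_olson
```

For example, with OLSON receivers spanning 3 to 5 km depth, a vertical fault with vmin=3.5 and vmax=4.5 km is acceptable, but the same v range on a 30° dipping fault reaches depths as shallow as 1.75 km and is not.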

Real number of sample points per wavelength: 8.,

(This controls the density of samples of the traction function in the horizontal (u) direction (see Spudich and Archuleta, 1987, Figure 21). This is specified as the number of samples per wavelength of a shear wave. 6 is barely adequate, 8 is better, and 10-12 is usually good. Calculation time and output volume are proportional to the square of this number, so if the calculation time and storage are available, you should use a number like 20.)

Minimum number of sample points in horizontal direction: 20,

(This parameter sets a lower limit on the number of samples of the traction in the horizontal direction, and hence an upper limit on the sample interval du, so that TFAULT always uses at least this number of samples, regardless of frequency and depth. You should always use at least 10. 20 is usually pretty good, but when we want a beautiful result we use 100. The calculation time and storage are proportional to the square of this number. If your seismograms calculated by TFAULT look rather noisy, this number might be too low. BEWARE that if one of your observers is very close to the fault, you might have to make this number very high to get an accurate integration. You should choose this number so that the spacing between sample points is smaller than the distance from the observer to the fault plane. For example, if the observer is 100 m from the fault, you will have to choose this number so that your sample spacing is less than 100 m. See the section Situations in which Compsyn Can Fail. Unfortunately, TFAULT does not check this. Also, TFAULT is not smart enough to make the sample spacing variable depending on distance to the observer. Sorry.)

Minimum number of sample points in downdip direction: 20,

(This parameter is the analogous lower limit on the number of samples of the traction in the downdip direction, and hence an upper limit on the sample interval dv. The same advice, and the same BEWARE about observers very close to the fault, apply as for the horizontal direction above.)

Minimum, intermediate, and maximum frequencies to do: 0., 10., 10.,

(f1, f2, and f3, 0 ≤ f1 < f2 ≤ f3. TFAULT calculates traction surfaces for frequencies between f1 and f3. Set f3 equal to the highest frequency saved in the .OLO file, unless computation times are too large, in which case you should reduce f3. For f > f2, the sample spacing on the fault is proportional to β(z) (f/f2)^(-m) / n, where m is input below. If there is some frequency fup above which you can tolerate a less accurate answer (for example, fup might be the corner of a low-pass filter you will apply in SEESLO), then you can set f2 = fup < f3 and set m < 1 to speed calculations. We do not have much experience with setting the intermediate frequency f2, and often we use f2 = f3. We do not have much experience with calculations in which f1 > 0.)

Power of frequency to use above intermed freq for sample density: .8

(This number is the exponent m in the above comment. This number should be ≥ 0.0 and ≤ 1.0. 0.5 is reasonable, but if f2 = f3, then this number has no effect.)

List of observer coordinates (km): 1.,1.73, -5.2, 3.8

(Enter pairs of numbers that are the x and y coordinates of the desired observer locations (xobs and yobs in Figure TFAULT-1 above). You may enter up to IOBDIM (currently 20) observer locations.)

TFAULT Dimensions

TFAULT dimensions are set in TFAULT.INC. They are:

NKDIM – the wavenumbers k_n such that J_m(k_n R) = 0, where R is some maximum epicentral distance, are held in array XK0 for m=0 and XK1 for m=1. If you run problems needing very large wavenumbers (for example, you use very low surficial shear velocities), then you might need to raise NKDIM.

NUDIM – maximum allowed samples on the fault in the u direction. You might have to raise this dimension if you use a particularly long fault or a particularly high frequency.

IZDIM – maximum allowed number of depths to save out of OLSON. You might have to raise this number if you are using a fault with large downdip extent or high frequency.

IOBDIM – maximum allowed number of observers. May be changed by the user.


NTABLE – maximum allowed number of samples in the Bessel function lookup tables TJ0 and TJ1. You might have to raise this number if you need very large wavenumbers (for example, you use very low surficial shear velocities).

NHEAD – number of elements in the header of the .TFO file. It is very unlikely the user will need to change this number, but if you use more than 250 observers it might be necessary. If you change NHEAD in TFAULT.INC, BE SURE TO CHANGE IT IN ALL THE OTHER CODES!

IFDIM – only used for restarting calculations. Controls the number of frequencies that are read on restart. It is unlikely the user will need to change this, and we can give no guidance on how to change it.
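For a rough sense of how big NKDIM must be: the zeros of J_m grow approximately like McMahon's asymptotic formula j_{m,n} ≈ (n + m/2 − 1/4)π, so the wavenumbers k_n = j_{m,n}/R can be estimated as below (our sketch for sizing arrays only; the codes use properly computed Bessel zeros):

```python
import math

def approx_bessel_zero_wavenumbers(m, R, count):
    """Estimate the first `count` wavenumbers k_n with J_m(k_n R) = 0,
    using McMahon's large-n approximation j_{m,n} ~ (n + m/2 - 1/4)*pi.
    Good enough for estimating how large NKDIM must be, not for use
    in place of true zeros."""
    return [(n + m / 2.0 - 0.25) * math.pi / R for n in range(1, count + 1)]
```

Since consecutive zeros are roughly π apart, the number of wavenumbers needed up to some kmax is about kmax*R/π, which is the quantity NKDIM must accommodate.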

TFAULT Terminal Session

>TFAULT
Enter the name of the input data file(*.TFD suggested): TFAULT.TFD

(Here you enter the name of the user-prepared file, including the extension (.TFD), specifying the fault geometry (described above). Note that the .TFD file contains the names of the desired input .XOO file and the desired output print file (.TFP) and output traction file (.TFO).)

Enter the desired STEM name of the output .TFO and .TFP files (CR=TFAULT): SOMENAME

(This input causes TFAULT to create two output files: SOMENAME.TFO, into which are placed the traction Green's functions evaluated on your desired fault plane, and SOMENAME.TFP, which is a "print" file that reports the input values and the progress of the code. The .TFP file is a useful document of the run. The .TFO file is an input into SLIP.)

Enter the name of the XOLSON output file (CR=XOLSON.XOO): SOMENAME.XOO

(This input causes TFAULT to look for SOMENAME.XOO, which should be an output file from XOLSON that you have previously created. This .XOO file contains the frequency-wavenumber version of the Green's functions.)

OBSERVER # 1 X= 1.00 Y= 1.73 ACTUAL LIMITS OF U WILL BE -0.60 AND 0.60

(Here TFAULT is identifying the observer locations it uses and the extent of the fault it creates. In this case, only one observer is being done.)

FOR ASSEMBLING BESSEL TABLES, LARGEST ANTICIPATED RANGE IS 2.36,
LARGEST WAVENUMBER IS 6.27 FOR LARGEST ARG OF 0.162E+02
NUMBER OF SAMPLES IN EACH TABLE IS 15000 FOR AN ARG INCREMENT OF 0.108E-02 AND 5000 POINTS PER BESSEL CYCLE

(*** For computational speed TFAULT does not calculate a Bessel function every time it is needed. Rather, TFAULT creates a look-up table of the Bessel function evaluated at 15000 (in this case) points, and linear interpolation on the look-up table is used to get Bessel function values. The above statements report that the Bessel function J(x) will be sampled over the range x = 0 to x = 16.2, with 15000 sample points in this interval and an increment dx = 0.108E-02, yielding about 5000 points per cycle of J(x). 5000 points per cycle is dense sampling and should yield a good interpolant. The above numbers are for an observer at very close distance, so the argument of the Bessel function never gets very large. If you use much more distant observers and go to higher wavenumbers (nk2), your largest argument could be much larger, leading to many fewer samples per cycle. Then the interpolation of Bessel functions will be less accurate, and your seismograms will be less accurate. We usually use at least 30 samples per Bessel cycle. If you need more samples per cycle, raise dimension NTABLE in TFAULT.FOR and recompile/relink.)

DOING FREQUENCY NUMBER 1 OF 42 FREQUENCY = 0.012 HZ
DOING FREQUENCY NUMBER 2 OF 42 FREQUENCY = 0.244 HZ
DOING FREQUENCY NUMBER 3 OF 42 FREQUENCY = 0.488 HZ

. . .
DOING FREQUENCY NUMBER 42 OF 42 FREQUENCY = 10.01 HZ

Normal exit. Now run SLIP.

TFAULT reminds you that the next step is to run SLIP.
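The samples-per-cycle figure reported in the session above can be reproduced with one line of arithmetic (our sketch, using the fact that one cycle of J(x) is about 2π at large argument):

```python
import math

def samples_per_cycle(ntable, argmax):
    """Given a lookup table of ntable points covering Bessel arguments
    0..argmax, return (table increment dx, approximate samples per
    cycle of J(x)), taking one cycle as 2*pi."""
    dx = argmax / ntable
    return dx, 2.0 * math.pi / dx
```

For NTABLE = 15000 and a largest argument of 16.2, this gives dx ≈ 0.108E-02 and roughly 5800 samples per cycle, matching the "5000 POINTS PER BESSEL CYCLE" order of magnitude printed by TFAULT.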

SLIP DETAILED INSTRUCTIONS

The traction surfaces calculated by TFAULT depend only on the local velocity and density structure and the relative positions of the fault and the observers. These surfaces are completely independent of the particular rupture model that may characterize an earthquake on the fault. Hence, the user needs to calculate only one traction surface per fault-observer pair, and this traction surface may be used repeatedly by SLIP to generate seismograms for specific models of rupture on the fault. The coordinate system used by SLIP is the same as that of TFAULT, with u and v having the same meaning as in TFAULT (Figure SLIP-1).


[Figure SLIP-1 appears here: the fault plane bounded by umin, umax, vmin, and vmax as defined in TFAULT, the area of the slip model defined in the .SLD file, and the area of integration, drawn in the x, y, z and (u, v) coordinates of Figure TFAULT-1.]

Figure SLIP-1. Geometry of the fault used in SLIP. The blue area is the region of the fault for which tractions were calculated in TFAULT. The green area is the region of the fault for which a slip model is defined in the .SLD file. The pink area is the region of the fault over which the surface integral is calculated.

A rupture model is specified in terms of several parameters. First, SLIP assumes that the basic form of the slip velocity time-function is the same at all points of the fault, e.g. it may be a boxcar function, or a decaying exponential, etc. However, various parameters characterizing that function are functions of position on the fault: the slip velocity function's duration, its amplitude, and the time it initiates. To specify these quantities on the fault surface, we use the following approach. First we choose a set of horizontal (constant v) lines which are equally spaced on the fault plane in v, the downdip coordinate. For each of these lines, we specify 4 quantities at a set of points distributed equally or unequally along the line. The 4 quantities are rupture time (the time rupture initiates at the point), duration (the time the slip velocity is nonzero, if the slip function is a boxcar, or the 1/e time of the exponential, if the slip velocity is a decaying exponential), the amplitude of the strike-slip component of slip velocity, and the amplitude of the dip-slip component of slip velocity. Once these quantities are specified along each constant v line, they can be derived as necessary at any other point by bilinear interpolation. Since the slip has a strike-slip and a dip-slip component, it is a 2-component vector. To obtain ground displacements we use the representation theorem of Spudich (1980) to calculate ground motions:

u_k(x, ω) = ∫ from vmin to vmax ∫ from umin to umax  s(u, v, ω) · T_k(u, v, ω; x) du dv,      (SLIP-1)

where x is the position of the observer, u and v are coordinates on the fault plane (see Figure SLIP-1), k = 1, 2, or 3 denotes the x, y, or z direction, u_k(x, ω) is the Fourier transform of the k-component of displacement at observer location x and angular frequency ω, s(u, v, ω) is the Fourier transform of the two-component slip vector at point (u,v) on the fault, and T_k(u, v, ω; x) is the Fourier transform of the two-component traction vector at u and v on the fault caused by a point impulsive force in the k-direction at observer location x. Note that this form of the representation uses Green's function reciprocity. This representation theorem says that we must dot the two components of slip with the two components of the traction vector on the fault plane (calculated by TFAULT), and do the surface integral of this dot product over the desired part of the fault plane. This surface integral is done once per frequency, and it is done as a series of line integrals along lines of constant v (not the same ones the slip model is specified on), which are then summed. The spacing of the sample points for this quadrature in both u and v is variable, and as in TFAULT it is set in relation to a shear wavelength at the appropriate frequency and depth.
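For one frequency and one component k, equation (SLIP-1) reduces to a weighted sum of dot products over fault points. A crude uniform-grid sketch is below; SLIP's actual quadrature uses variable, wavelength-dependent spacing and summed line integrals, so this is an illustration of the integrand only:

```python
def slip_integral(s, T, us, vs):
    """Crude Riemann quadrature of (SLIP-1) on a uniform (u, v) grid
    of cell centers, for one frequency and one component k.
    s(u, v) and T(u, v) each return a 2-component complex vector
    (strike-slip and dip-slip parts of slip and traction)."""
    du = us[1] - us[0]
    dv = vs[1] - vs[0]
    total = 0.0 + 0.0j
    for v in vs:
        for u in us:
            sv = s(u, v)
            tv = T(u, v)
            # dot the 2-component slip with the 2-component traction
            total += (sv[0] * tv[0] + sv[1] * tv[1]) * du * dv
    return total
```

For constant unit strike slip and constant traction 2 on a 2 km by 2 km patch, the sum is simply slip times traction times area.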

SLIP User-Prepared Data File (.SLD File)

The user's rupture model is defined in this file. Here is an example file. Note that this example is unrealistic and oversimplified, for the purpose of illustrating the input.

Title: Crazy Rupture Model
Minimum, intermed, and maximum frequencies to do: 0.,8.,10.,
Power of freq to use for sampling above intermed freq: 0.8
Type of slip velocity function (boxcar, exponential):B
Horizontal limits of integration on fault plane: 1., 5.,
Downdip limits of integration on fault plane: 2., 4.,
Desired ratio of local rupture vel to local shear vel: 0.9
==================OBSERVER INFORMATION=======================
x     y     points/wvln   min u samples   min v samples
-------------------------------------------------------
0., 0., 8., 20, 10,
=====================SLIP MODEL==============================
Factor to multiply risetime by to determine slip duration: 1.2
Rupture times below based on rupture/shear vel ratio: 0.45
Desired moment (cgs, CR=whatever model below gives): 1.35e21
----downdip coord = 1.,
uu: -5., 7.,
tr: 0., 6.,
rt: 0., 0.,
ss: 0., 0.,
ds: 0., 0.,
----downdip coord = 3.,
uu: -5., -2., 4., 7.,
tr: 0., 1.5, 4.5, 6.,
rt: 0., 2., 2., 0.,
ss: -20., -20., -10., 0.,
ds: 0., 10., 10., 0.,
----downdip coord = 5.,
uu: -5., -4., 6., 7.,
tr: 0., 0.5, 5.5, 6.,
rt: 0., 2., 2., 0.,

ss: -20., -18., -12., 0.,
ds: 0., 20., 30., 0.,
----downdip coord = 7.,
uu: -5., -2., 4., 7.,
tr: 0., 1.5, 4.5, 6.,
rt: 0., 2., 2., 0.,
ss: -20., -20., -10., 0.,
ds: 0., 10., 10., 0.,
----downdip coord = 9.,
uu: -5., 7.,
tr: 0., 6.,
rt: 0., 0.,
ss: 0., 0.,
ds: 0., 0.,
(NO TRAILING BLANK LINE – be careful!!)

Explanation of .SLD input

Title: Crazy Rupture Model

(A title describing the rupture model that will be plotted in DISPLAY8.)

Minimum, intermed, and maximum frequencies to do: 0.,8.,10.,

(f1, f2, and f3, 0 ≤ f1 < f2 ≤ f3. SLIP performs the integration over the fault for frequencies between f1 and f3. Set f3 no higher than the highest frequency saved in the .TFO file. For f > f2, the sample density on the fault is proportional to (f/f2)^m, where m is input below. f2 and m are intended to reduce computation time. If you will taper the high frequency part of the spectra in SEESLO, there is no reason to calculate the highest frequencies as accurately as the lowest frequency, particularly since most of the computational effort is required by the high frequencies. You can set f2 equal to the high frequency corner (ful) in your SEESLO input, and set 0 < m < 1. However, we do not have much experience with setting the intermediate frequency f2, and often we use f2 = f3. We do not have much experience with calculations in which f1 > 0.)

Power of freq to use for sampling above intermed freq: 0.8

(m, see explanation of above line. 0