Forward Modelling and Inversion of Multi-Source TEM Data

D. W. Oldenburg¹*, E. Haber², and R. Shekhtman¹
¹ University of British Columbia, Department of Earth & Ocean Sciences
² Emory University, Atlanta, GA

SUMMARY

We present a practical formulation for forward modeling and inverting time domain data arising from multiple transmitters. The underpinning of our procedure is the ability to factor the forward modeling matrix and then solve the system using direct methods. We first formulate Maxwell's equations in terms of the magnetic field H to obtain a symmetric forward modeling operator. The problem is discretized using a finite volume technique in space and backward Euler in time. The MUMPS software package is used to carry out a Cholesky decomposition of the forward operator, with the work distributed over an array of processors. The forward modeling is then quickly carried out using the factored operator. The time savings are considerable: they make simulations of large ground or airborne data sets feasible and greatly increase our ability to solve the 3D electromagnetic inverse problem in a reasonable time. The ability to use direct methods also greatly enhances our ability to carry out survey design.

INTRODUCTION

In previous research (Haber, Oldenburg, & Shekhtman, 2007) we developed an inversion algorithm that allowed us to invert data from a single, or a very few, transmitters. Unfortunately the computational demands of that algorithm were too large to invert typical ground or airborne surveys acquired from many source locations. The principal difficulty is the time required to solve the forward problem. Simulating data that arise from multiple sources can be computationally onerous because each transmitter requires that Maxwell's equations be solved. Usually this is done with iterative (e.g. CG-type) algorithms, and hence the computation time increases linearly with the number of transmitter locations. However, for surveys with a large number of sources, significant increases in efficiency can be had if the forward modeling matrix is factored. Factorization involves large computations and significant memory requirements, but once it is accomplished, solving the factored system with a different right hand side proceeds very quickly. The idea of decomposing the matrix system and solving many right hand sides for different sources is not new (Dey & Morrison, 1979), and small problems have been solved in this manner. However, the matrices for 3D TEM problems have generally been considered too large to contemplate this approach. Over the last decade, advances in mathematics and computing science have resulted in factorization algorithms that can be implemented on large scale computing systems (Amestoy, Guermouche, L'Excellent, & Pralet, 2006). The efficacy of this approach depends upon the time required to factor the matrix compared to the time required to solve the matrix system using iterative solvers. We use the MUMPS codes and distribute the computation over many processors.

MAXWELL'S EQUATIONS IN THE TIME DOMAIN

The forward model consists of Maxwell's equations in time, where the permeability is fixed but the electrical conductivity can be highly discontinuous. To be more specific, we write Maxwell's equations as

    ∇ × E^(i) + µ ∂H^(i)/∂t = 0,                        (1a)
    ∇ × H^(i) − σ E^(i) − ε ∂E^(i)/∂t = s_r^(i)(t),     (1b)
    i = 1, . . . , N,                                   (1c)

over a domain Ω × [0, t_f], where E^(i) and H^(i) are the electric and magnetic fields that correspond to the source s_r^(i), µ is the permeability, σ is the conductivity, and ε is the permittivity. The equations are given with boundary and initial conditions

    n × H^(i) = 0,           (2a)
    H^(i)(0, x) = H_0^(i),   (2b)
    E^(i)(0, x) = 0,         (2c)
although other boundary and initial conditions could be used.

Solving the forward problem

As previously discussed in (Haber & Ascher, 2001; Haber, Ascher, Aruliah, & Oldenburg, 2000), consistent discretization of Maxwell's equations leads to a near-singular system. This is due to the rich null space of the curl operator. In our previous work we turned to the A-φ formulation to stabilize the system. While this approach is highly effective for iterative methods, it is less effective for direct methods, for two main reasons. First, the resulting system is not symmetric, so one must use an LU factorization rather than a Cholesky factorization; this doubles the memory required for the solution of the system. Second, the number of unknowns is larger, which adds still more memory to the factorization. Keeping this in mind, we now work directly with the fields. Using backward Euler discretization in time with step size δt, we obtain

    ∇ × E_{i+1} + µ (H_{i+1} − H_i)/δt = 0,                       (3)
    ∇ × H_{i+1} − σ E_{i+1} − ε (E_{i+1} − E_i)/δt = s_{r,i+1}.   (4)

Eliminating the electric field E_{i+1} from Maxwell's equations, we obtain an equation for the magnetic field H_{i+1}:

    ∇ × (σ + δt⁻¹ε)⁻¹ ∇ × H_{i+1} + (µ/δt) H_{i+1} = rhs_{i+1}.   (5)
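To make the elimination explicit (the following algebra is our reconstruction from equations (3) and (4), including the form of rhs_{i+1}, which the text leaves implicit): solving (4) for E_{i+1} gives

    E_{i+1} = (σ + δt⁻¹ε)⁻¹ (∇ × H_{i+1} + δt⁻¹ε E_i − s_{r,i+1}),

and substituting this into (3) yields (5) with

    rhs_{i+1} = (µ/δt) H_i + ∇ × (σ + δt⁻¹ε)⁻¹ (s_{r,i+1} − δt⁻¹ε E_i).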

The above system can be near-singular when the conductivity is small and the time steps are large. To stabilize it, we recall that we also have ∇ · (µH) = 0 and therefore we can rewrite the system as

    ∇ × (σ + δt⁻¹ε)⁻¹ ∇ × H_{i+1} − µ ∇ µ⁻² (σ + δt⁻¹ε)⁻¹ ∇ · (µ H_{i+1}) + (µ/δt) H_{i+1} = rhs_{i+1}.   (6)

The new symmetric term −µ ∇ µ⁻² (σ + δt⁻¹ε)⁻¹ ∇ · (µ H_{i+1}) stabilizes the system without changing the solution. Upon discretization in space, we obtain the linear system

    A(σ) H_{i+1} := (C(σ, δt) + M) H_{i+1} = rhs,   (7)

where the matrix C(σ, δt) is a discretization of the differential operator in (6). The matrix is symmetric positive definite (SPD) and depends on the conductivity and the time step δt. The matrix M depends on µ and δt. We now make an important observation which motivates our approach: if the same time step δt is used, the linear system (7) is identical for all times and all sources. Thus, assuming that we decompose the system as

    A(σ) = L Lᵀ,   (8)

this factorization can be used to solve all of the linear systems. Matrix factorization is an expensive computational process. If the number of sources and/or time steps is small, we may not benefit from this work, and iterative methods for the solution of the system can be superior. However, when the same forward modeling matrix must be solved for many times and many sources, the decomposition is greatly superior to iterative techniques. The benefits are amplified when one proceeds to the inverse problem, since the same factorization can be used for the computation of the gradient as well as for the solution of the linear system which arises at each Gauss-Newton iteration.

Software for Matrix Factorization

In recent years there has been a growing effort to obtain scalable matrix factorizations on parallel machines. We use the Multifrontal Massively Parallel Solver (MUMPS) package by the CERFACS group (Amestoy et al., 2006). MUMPS is a package for solving systems of linear equations of the form Ax = b, where the matrix A is sparse and can be unsymmetric, symmetric positive definite, or general symmetric. MUMPS uses a multifrontal technique, which is a direct method based on either the LU or the LDLᵀ factorization of the matrix. It exploits parallelism arising both from sparsity in the matrix A and from dense factorization kernels. The main features of the MUMPS package include the solution of the transposed system, input of the matrix in assembled format (distributed or centralized) or elemental format, error analysis, iterative refinement, scaling of the original matrix, and return of a Schur complement matrix. Finally, MUMPS is available in various arithmetics (real or complex, single or double precision). The software is written in Fortran 90. The parallel version of MUMPS requires MPI for message passing and makes use of the BLAS, BLACS, and ScaLAPACK libraries. We have tested MUMPS on a cluster of PCs under Linux. MUMPS distributes the work tasks among the processors, but an identified processor (the host) is required to perform most of the analysis phase, distribute the incoming matrix to the other processors (slaves) in the case where the matrix is centralized, and collect the solution.
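To make the computational pattern concrete, the sketch below shows the factor-once/solve-many idea in Python. SciPy's sparse LU (SuperLU) stands in for the MUMPS Cholesky factorization used in the paper, and the operator A and right-hand-side routine are synthetic placeholders, not the authors' code.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def time_step_all_sources(A, H0_list, make_rhs, n_steps):
    """Backward Euler stepping of (7), reusing one factorization of A
    for every source and every time step (constant delta t)."""
    solve = spla.splu(A.tocsc()).solve      # expensive: factor A once
    histories = []
    for H in H0_list:                       # loop over sources
        steps = [H]
        for k in range(n_steps):            # loop over time steps
            H = solve(make_rhs(H, k))       # cheap: triangular solves only
            steps.append(H)
        histories.append(steps)
    return histories

# Synthetic SPD system standing in for C(sigma, dt) + M on a tiny 1D "mesh".
n = 500
A = (sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) + sp.eye(n)).tocsc()
H0_list = [np.random.rand(n) for _ in range(3)]   # three "transmitters"
fields = time_step_all_sources(A, H0_list, lambda H, k: H, n_steps=10)
```

In production one would call MUMPS through its own interface so that the LLᵀ factors are distributed over the processors, but the reuse pattern is identical.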

IMPLEMENTATION OF THE FORWARD MODELING

To obtain insight regarding the advantages of the decomposition, we consider a generic problem in airborne geophysics. The transmitter is a square loop, 20 meters on a side, flown at 50 meters elevation. Three components of dB/dt are acquired at the center of the loop. The waveform is a step-off, and 35 time channels, equispaced in logarithmic time between 10⁻⁶ and 10⁻² seconds, are measured. Data are collected every 20 meters along 20 east-west lines, for a total of 1000 transmitters. The acquisition geometry and the model, a conductive block in a resistive halfspace, are shown in Figure 1. In the numerical forward modeling, the initial fields are the steady state fields and our time stepping begins at 10⁻⁶ seconds, a decade before the first measurement time. Consequently, 35,000 solutions of Maxwell's equations (that is, 35 time steps for each of 1000 transmitters) are required to simulate the data. For the volume discretization we use a structured rectangular grid that divides the earth into 70³ cells. With our previous code, which used iterative techniques, the solution time for a single Maxwell system is about 2 minutes. The total time is therefore 2 × 35,000/NP minutes, where NP is the number of processors. In our case, using a dual quad-core Intel Xeon E5410, NP = 8, so the forward modelling time would be approximately 8750 minutes, or 6 days.

We now turn to the MUMPS factorization. Our equations are to be time stepped between T1 = 10⁻⁶ and T2 = 10⁻² seconds. The time step δt is determined by the smallest time needed and is µT1, where µ is a constant; here we use µ = 1, so the number of time steps is N_δt = (T2 − T1)/δt = 10⁴. The factorization time t_f depends upon the number of CPUs and the amount of available memory; for our configuration t_f is 250 s. With the factored system, the time required to solve a forward problem, t_s, is 0.6 s. The total number of forward solutions is N_TX × N_δt, where N_TX is the number of transmitters. Thus the total time taken to solve our problem is

    T_total = N_TX N_δt t_s + t_f.   (9)

The numbers are listed in the first row of Table 1. Unfortunately the total time is still prohibitive. The difficulty arises because of the excessive number of time steps; keeping the same δt for the entire modeling requires 10⁷ forward solutions. This difficulty can be circumvented by dividing the total time interval into subintervals, each of which has a constant δt. One additional factorization is required for each subinterval. The example here can be solved using 5 factorizations and 34 time steps. As per Table 1, this reduces the total forward modeling time to 2.1E4 s, or about 5.6 hours. The simulated responses for the three components of dB/dt are shown in Figure 2.

Figure 1: The acquisition geometry and model. A conductive block of resistivity 1 Ω-m is buried in a 100 Ω-m halfspace. The top of the block is ? meters below the surface. The locations of 1000 transmitters along east-west flight lines are shown. Data are obtained at the center of each transmitter.

    N_TX    NP    N_δt    t_f    t_s    N_f    T_total
    1000    8     10⁴     250    0.6    1      6.0E6
    1000    8     34      250    0.6    5      2.1E4

Table 1: N_TX is the number of transmitters; NP is the number of processors; N_δt is the number of time steps; t_f is the time (s) required to factor the matrix; t_s is the solution time (s) for a forward problem once the matrix has been factorized; N_f is the number of matrix factorizations; T_total is the time (s) taken to solve the complete problem.
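A quick check of equation (9) against Table 1 follows; note that generalizing (9) to N_f factorizations, T_total = N_TX N_δt t_s + N_f t_f, is our reading of the table rather than a formula stated explicitly in the text.

```python
def total_time(n_tx, n_dt, t_s, t_f, n_f=1):
    """Total forward modeling time: n_tx*n_dt solves plus n_f factorizations."""
    return n_tx * n_dt * t_s + n_f * t_f

print(total_time(1000, 10**4, 0.6, 250, n_f=1))  # ~6.0E6 s, first row
print(total_time(1000, 34, 0.6, 250, n_f=5))     # ~2.1E4 s, second row
```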

Figure 2: The simulated responses for the three components of dB/dt.

Inversion

The real benefit of using decomposition in our forward modeling becomes apparent when we treat the inverse problem. Our inversion algorithm (Haber et al., 2007) is based upon a Gauss-Newton procedure in which a model m = ln σ is sought. The forward modeling is generically written as A(m)u = q, where A is the forward modeling matrix, u are the fields, and q is the right hand side containing the source terms. A is a large block bidiagonal matrix of the form

        [ Ã(m)                   ]
        [  I    Ã(m)             ]
    A = [        ···    ···      ]        (10)
        [               I   Ã(m) ]

where Ã(m) is the discretized matrix in equation (7). At each Gauss-Newton iteration, the equation

    (J(m)ᵀ J(m) + β Wᵀ W) s = −g(m)        (11)

is solved to obtain a perturbation s from the current model m. W is a sparse regularization matrix, β is a constant, and g(m) is the gradient of the objective function. The sensitivity matrix is available as J(m) = −Q A(m)⁻¹ G(m, u), where Q is an interpolation matrix that extracts the simulated data from the computed fields and G(m, u) is a known sparse matrix:

    G(m, u) = ∂[A(m)u]/∂m.        (12)
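Since J(m) = −Q A(m)⁻¹ G(m, u) is never formed explicitly, products with J and Jᵀ are computed with solves against the stored factors. A minimal sketch follows, again with SciPy's SuperLU standing in for MUMPS and with Q and G left abstract (the class and method names are ours):

```python
import scipy.sparse.linalg as spla

class Sensitivity:
    """Implicit products with J = -Q A^{-1} G, reusing the forward factors."""
    def __init__(self, A, Q, G):
        self.lu = spla.splu(A.tocsc())  # same factorization as the forward solve
        self.Q, self.G = Q, G

    def J_times(self, v):
        # J v = -Q A^{-1} (G v): one forward solve per product
        return -(self.Q @ self.lu.solve(self.G @ v))

    def JT_times(self, w):
        # J^T w = -G^T A^{-T} (Q^T w); for the SPD A of equation (7),
        # A^{-T} = A^{-1}, so the same factors serve both products
        return -(self.G.T @ self.lu.solve(self.Q.T @ w, trans='T'))
```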

We note that G requires the fields for all transmitters and for all times. Although these have been computed to evaluate the misfit and gradient, they must be stored or recomputed in order to apply J or Jᵀ to a vector. The system in equation (11) is solved using a preconditioned conjugate gradient (CG) solver. This requires that the matrix on the left be applied to a vector many times, and each application of J or Jᵀ to a vector requires that a forward modeling be carried out. The implications of the factorization of the forward modeling matrix now become apparent. First, the computation of the gradient requires a forward modeling. Second, even with a good preconditioner, N_CG, the number of CG iterations needed to obtain a good solution, is 10-20. Third, once the perturbation s is calculated, a line search is required to find the scale factor α that updates the model as m_{n+1} = m_n + αs. If N_CG = 15, the number of forward modelings is 31, plus any required for the line search. If there are multiple transmitters, then a complete forward modelling can involve thousands of solutions of Maxwell's equations, and hence the matrix factorization is highly important.

As an example, we invert data obtained from nine transmitter loops on the surface of a flat earth. The data are three components of dB/dt on a 12 × 12 surface grid. The layout of the loops, the data locations, and the location of the buried block are shown in Figure 3. Data are recorded at 14 time channels between 10⁻⁵ and 10⁻³ seconds. The modelling time was divided so that three factorizations were required for a complete forward modelling. Gaussian noise was added to the 54,432 data prior to inversion. The inversion mesh was 34 × 34 × 34. The time for a single factorization of the matrix was 6 seconds, and the time for a complete forward modelling over all times and all transmitters was 28 seconds. Approximately 22 forward modelings were needed for each Gauss-Newton iteration, and each iteration took approximately 10 minutes. XX iterations were required to successfully complete the inversion. A cross-sectional image of the recovered conductivity is shown in Figure 4.

Figure 3: The acquisition geometry and model. A conductive block of resistivity 1 Ω-m is buried in a 100 Ω-m halfspace. The top of the block is 8 meters below the surface. The locations of the 9 transmitters are shown, as well as the locations of the receivers.
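The per-iteration bookkeeping above can be summarized in a small cost model. This is our reading of the text, not the authors' code: one forward modeling for the gradient, two per CG iteration (Jv and Jᵀv), plus line-search evaluations; N_CG = 10 is an assumed value chosen to reproduce the roughly 22 modelings quoted for the example.

```python
def gauss_newton_iteration_cost(t_forward, n_cg, n_line_search=1):
    """Forward-modeling count and wall time for one Gauss-Newton iteration."""
    n_modelings = 1 + 2 * n_cg + n_line_search
    return n_modelings, n_modelings * t_forward

n_fm, secs = gauss_newton_iteration_cost(t_forward=28.0, n_cg=10)
print(n_fm, secs / 60)   # 22 modelings, ~10 minutes, consistent with the example
```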

CONCLUSIONS

We have demonstrated the benefits of solving multi-source time domain electromagnetic problems by decomposing the matrix and using direct solvers. The advantages over traditional approaches that use iterative solvers depend upon the number of transmitters, the size of the discretized problem, the number of processors, and the amount of memory available. Optimum use of this approach requires dividing the global time interval of interest into subintervals. If the factorization for each subinterval is stored, then the Gauss-Newton iterations for the inverse problem can be carried out efficiently.

Figure 4: The model recovered from inversion of the time domain data. (a) xy section; (b) xz section.

ACKNOWLEDGMENTS

This work was supported by NSERC CRDPJ 340409-06 and by the MITEM (Multi-Source Inversion of Time Domain Electromagnetic Data) consortium partners: Newmont Mining Corp, CAMECO Corp, CVRD, Xstrata Plc, Teck Cominco Ltd, and Barrick Gold Corp. The second author was supported by NSF CMG grant xx and DOE grant yy.

REFERENCES

Amestoy, P. R., Guermouche, A., L'Excellent, J.-Y., & Pralet, S., 2006, Hybrid scheduling for the parallel solution of linear systems: Parallel Computing, 32, 136-156.

Dey, A., & Morrison, H. F., 1979, Resistivity modeling for arbitrarily shaped three-dimensional structures: Geophysics, 44, 753-780.

Haber, E., & Ascher, U., 2001, Fast finite volume simulation of 3D electromagnetic problems with highly discontinuous coefficients: SIAM Journal on Scientific Computing, 22, 1943-1961.

Haber, E., Ascher, U., Aruliah, D., & Oldenburg, D., 2000, Fast simulation of 3D electromagnetic problems using potentials: Journal of Computational Physics, 163, 150-171.

Haber, E., Oldenburg, D., & Shekhtman, R., 2007, Inversion of time domain three-dimensional electromagnetic data: Geophysical Journal International, 171, 550-564.
