IAMG2015 - Proceedings
F1303
Bayesian facies inversion, using spatial resampling and cosimulation with model response data
G. VALAKAS¹ and K. MODIS¹*
¹ School of Mining and Metallurgical Engineering, National Technical University of Athens, Athens, Greece, [email protected]
* presenting author
Abstract
Inverse problems involving the characterization of hydraulic properties of groundwater flow systems are mathematically ill-posed because they have multiple solutions and are sensitive to small changes in the data. In the framework of McMC methods for nonlinear optimization, and under an iterative spatial resampling transition kernel, we present an algorithm for narrowing the prior and thus producing improved proposal realizations. To achieve this goal, we cosimulate the facies distribution conditionally to facies observations and normal scores transformed hydrologic response measurements, assuming a linear coregionalization model. The approach works by creating an importance sampling effect that steers the process to selected areas of the prior. The effectiveness of our approach is demonstrated by an example application on a synthetic underdetermined inverse problem in aquifer characterization.
1 Introduction
Numerical models of groundwater flow and mass transport are important tools for predicting the behavior of a hydrogeological system. Nevertheless, in order to produce reliable hydrologic predictions, the parameter values that determine the response of the system must be appropriately chosen for a specific aquifer. Direct measurements of hydrologic parameters, however, are scarce and fraught with uncertainty. An alternative is to use a two-step approach in which, first, the geological facies are modeled and, second, they are populated with heterogeneous hydraulic and transport parameters. This approach is flexible and allows modeling structures at different scales (Mariethoz et al., 2009). Therefore, before the geostatistical estimation of a soil property such as conductivity, knowledge of the geological formations should be taken into account, due to the complexity of soil types as natural entities (Ibanez and Saldana, 2007; Modis and Sideri, 2013). To address the problem of parameter uncertainty, additional data such as head measurements can be of great help. However, even though conditioning models to direct state data is addressed very efficiently by most geostatistical algorithms (e.g. Journel and Huijbregts, 1978), the incorporation of indirect or system response measurements can be difficult. To do so, hydrologic models can be used in applications "opposite" or "inverse" to their original use, i.e., parameter values are treated as system unknowns and are determined by extracting information from observations of system-response variables (Kitanidis and Vomvoris, 1983). This procedure is commonly called inverse problem solving. However, the subsurface reservoir is normally very heterogeneous due to complex geologic processes and physical or chemical reactions, which makes subsurface characterization a demanding task.
On the other hand, a crucial issue is that the problem of identifying every block from sparse head observations is underdetermined, i.e., there are many solutions that are consistent with the data. The ambiguity is largely due to the scarcity of the data but is also inherent in the mathematics of typical inverse problems: a small range of values in the observed head is consistent with a larger range of conductivity values (Kitanidis, 2007). This characteristic is known as ill-posedness and results in nonuniqueness of the solution of the inverse problem. From a Bayesian perspective, the essence of the ill-posedness is that the likelihood of the data is not sufficient to characterize the probability distribution of facies, and thus the solution consists of a
ISBN 978-3-00-050337-5 (DVD)
posterior ensemble of models that fit the data up to a certain degree and are a subset of a prior distribution. The objective is to approximate this posterior distribution given a certain prior and a likelihood function. A possible approach for these problems is the use of Markov chain Monte Carlo (McMC) techniques. This is an alternative way to achieve the same results without resorting to an optimization problem, but rather by sampling a multivariate probability distribution that converges to the posterior. McMC methods generate model realizations that match the state observations, while reproducing the prior statistics and obeying Bayes' rule. These requirements are only partly fulfilled by most gradient-based optimization techniques (Gomez-Hernandez et al., 1997; Mariethoz et al., 2010). In the above context, different proposal methods have been used in order to sample the posterior. Oliver et al. (1997) create a McMC by updating one grid node of a simulated realization at each step. Fu and Gomez-Hernandez (2008) improve the efficiency of the method by updating many grid nodes at the same time, introducing the Blocking McMC (BMcMC) method that induces local perturbations by successively resimulating a whole block of the realization. Mariethoz et al. (2010) propose a modified Gibbs sampler (iterative spatial resampling - ISR) as a general transition kernel that preserves any spatial structure of the prior produced by conditional simulation and allows dealing with both Bayesian inversion and optimization aspects, while Hansen et al. (2012) present a theoretical background for using the method. In this study, and under the ISR framework, we present a formula for steering the sampling process by cosimulation with system response variables (C-ISR). We demonstrate the effectiveness of our approach in comparison with ISR, by an example application on a synthetic case in aquifer characterization.
2 Methodology
2.1 Bayesian framework for inversion
Consider a hydrologic system g described by a parameter set m = {m1, …, mn}. While the forward problem is to predict the data values g(m), the inverse problem aims to identify the model parameters m according to a set of observations d = {d1, …, dp}. The solution m = g⁻¹(d) of the inverse problem is usually a difficult task, mainly because the unknowns (m) are in general more numerous than the data (d). In a Bayesian framework, the parameters m are represented by a random function (RF) K(x) = {K1(x1), …, Kn(xn)}. Solving the inverse problem means finding a posterior ensemble of models that fit the data up to a certain precision, specified by a likelihood function. This ensemble must certainly be a subset of the prior distribution f(m). The likelihood function f(d|m) is a goodness-of-fit measure and defines the probability of the measurements, given a certain model from the prior distribution. Then, the posterior density function, which describes the solution ensemble of models, is given by:
f(m|d) = f(d|m) f(m) / f(d)   (1)
where f(d) is an appropriate normalization constant. Under this point of view, the probability density functions represent states of knowledge, and Bayesian statistics can be regarded as a mechanism for making inferences on the basis of incomplete information (Kitanidis 2007). It is exactly for this reason that the unknown function, which in our case is the aquifer structure, is modelled as a random function.
2.1.1 McMC methods for sampling the posterior
As seen in the previous section, the general solution of an inverse problem is a probability distribution over the model space. It is only when this probability distribution is very simple that analytic techniques can be used to characterize it (Tarantola 2005). For more general probability distributions, one needs to perform an extensive exploration of the model space. In the usual case of a large number of dimensions, this exploration cannot be systematic. Alternatively, well-designed random explorations avoid entrapment in local likelihood maxima and thus can solve many complex problems.
These random methods are called Monte Carlo methods. However, if not appropriately modified, Monte Carlo methods sparsely sample the local maxima of the posterior distribution (Mosegaard and Tarantola 1995). The McMC techniques, on the other hand, preferentially visit the regions of model space where the posterior density is high. The basic idea behind these methods is to perform a random walk that would normally sample some initial probability distribution and then, using a probabilistic rule, to modify the walk by accepting or rejecting samples in such a way that the produced samples are representative of the target distribution. In inversion problems, McMC techniques generate candidate models from the prior distribution and use an acceptance criterion to accept or reject the candidate model in consideration of the likelihood function. A candidate model m* in a Markov chain is generated by modifying the previous model mi through the addition of a random perturbation. The asymptotic behavior of a Markov chain is governed by its transition kernel, that is, the probability density function of the transition from a model mi to a new model m*.
2.1.2 Iterative spatial resampling
Different transition kernels have been proposed in previous studies to investigate Markov chains applied to spatially dependent variables, as stated in the introduction. In this work, since our aim is to condition on the state measurements, we use ISR to create a chain of dependent realizations by obtaining a random subset ri of each member model mi and imposing it as conditioning data to generate the next candidate model m*. The perturbation mechanism Q(m*|mi) of the chain works as follows:
1. Generate an initial model m1 = {K11(x1), …, Kn1(xn)} as a realization of the RF K(x) discretized on a grid with n nodes, using a geostatistical simulation algorithm, and evaluate its likelihood L(m1).
2. Iterate on i:
   a. Select randomly a subset ri = {Kαi(xα), α = 1, …, l} of the previous model, where l is the number of conditioning data used to generate the next candidate model m*.
   b. Generate a proposal realization m* using the same geostatistical simulation, under the conditioning data ri.
   c. Evaluate L(m*).
   d. Accept or reject the candidate model m*. If accepted, set mi+1 = m*.
The performance of the method depends on the criterion for candidate model selection (section 2.3) and the time of chain interruption. The number of conditioning data should be large enough to maintain a certain dependency between consecutive members of the chain, but it cannot be too high, in order to avoid artifacts in the simulation. The measurements, if any, are added to the subset r in each iteration.
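As an illustration, the transition kernel above can be sketched in a few lines. The conditional "simulator" below is a deliberately trivial stand-in (it redraws the non-conditioned nodes independently); a real application would call a conditional geostatistical simulation routine here, and the likelihood function is likewise a placeholder:

```python
import numpy as np

rng = np.random.default_rng(0)

def isr_step(model, simulate, likelihood, frac=0.01):
    # One ISR transition: condition on a random subset of the current model
    # and accept the proposal only if its likelihood does not decrease.
    n = model.size
    l = max(1, int(frac * n))                      # number of conditioning nodes
    idx = rng.choice(n, size=l, replace=False)     # random subset r_i
    proposal = simulate(idx, model[idx])           # candidate m* conditioned on r_i
    if likelihood(proposal) >= likelihood(model):  # acceptance rule of section 2.3
        return proposal
    return model

# Toy demonstration: the stand-in simulator redraws non-conditioned nodes.
target = np.zeros(50)

def simulate(idx, values):
    m = rng.standard_normal(50)
    m[idx] = values                                # honor the conditioning data
    return m

def likelihood(m):
    return float(np.exp(-np.mean((m - target) ** 2)))

m = simulate(np.array([], dtype=int), np.array([]))
L0 = likelihood(m)
for _ in range(200):
    m = isr_step(m, simulate, likelihood, frac=0.1)
```

Because only non-decreasing likelihoods are accepted, the chain improves monotonically; a richer acceptance rule is discussed in section 2.3.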
2.2 Facies representation with a truncated Gaussian variable
A convenient way to cosimulate the facies distribution with the reference variables is to represent the former by one or more truncated Gaussian variables. Consider a standard Gaussian RF Z(x), where x ∈ R³, with variogram γ(h). Let (D1, …, Dν) be a partition of R into ν disjoint subdomains. A categorical random field with ν categories (facies) is obtained by putting
∀ x ∈ R³, I(x) = i if and only if Z(x) ∈ Di   (2)
while the indicator random field for each facies Fi is defined as:
∀ x ∈ R³, IFi(x) = 1 if I(x) = i, and IFi(x) = 0 otherwise   (3)
In order to transform between the Gaussian RF and the facies indicators, a proper set of ν−1 truncation thresholds ti has to be defined such that:
IFi(x) = 1 ⇔ ti−1 ≤ Z(x) ≤ ti, where t1 ≤ t2 ≤ … ≤ ti−1 ≤ ti ≤ ti+1 ≤ … ≤ tν−1   (4)
Approximating the proportion of a particular facies Fi at point x by the probability of having this facies at that point:
PFi(x) = P(facies at point x = Fi) = E{IFi(x)}   (5)
We assume that K(x) can be described as a function of the Gaussian RF by the following relation:
K(x) = ∑i=1,…,ν ki IFi(x) = ∑i=1,…,ν ki · IsTrue(ti−1 ≤ Z(x) ≤ ti) = ξ(Z(x))   (6)
where ν is the number of distinct facies, ki is the value of the parameter in facies i, and IsTrue() is a function that returns 1 if its argument is true and 0 otherwise.
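A minimal sketch of the truncation rule in equations (2)-(6): a standard Gaussian variable is mapped to facies indices through ν−1 thresholds, and then to parameter values ki. The thresholds below are illustrative placeholders, and the conductivity values of the synthetic example (section 3) stand in for ki; a real application would use a spatially correlated Gaussian field rather than independent draws:

```python
import numpy as np

thresholds = np.array([-1.0, 0.0, 1.0])          # t_1, t_2, t_3 for nu = 4 facies
k = 10.0 ** np.array([-3.5, -4.5, -5.5, -6.5])   # k_i per facies (cf. section 3)

rng = np.random.default_rng(1)
Z = rng.standard_normal(10000)                   # stand-in for a correlated field
facies = np.searchsorted(thresholds, Z)          # I(x) = i  <=>  t_{i-1} <= Z(x) <= t_i
K = k[facies]                                    # K(x) = xi(Z(x)), equation (6)
```

The expected facies proportions follow directly from the Gaussian CDF at the thresholds, which is how equation (5) is honored in practice.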
2.3 Likelihood optimization
In an optimization context, instead of sampling the posterior, the objective is to reach an optimal solution which maximizes the likelihood function. Therefore, the search strategy is not to sample the prior distribution uniformly, but rather to sample the space where the likelihood function is maximized, in order to quickly obtain an approximation of the posterior. If we consider the case of independent, identically distributed Gaussian uncertainties, then the likelihood function of equation (1) describing the experimental uncertainties degenerates into:
L(m) ∝ exp( −∑i=1,…,p (g(m)i − di)² / (2pσ²) )   (7)
where σ² expresses the variance of epistemic and measurement errors. In order to avoid entrapment in local maxima and to explore particular regions of the prior distribution under the McMC framework, the optimization process requires a large number of independent samples of the prior distribution. Thus, sampling the posterior using McMC techniques raises two important issues: the criterion to accept a state model as a member of a chain, and the criterion to accept a model as a member of the posterior ensemble. Concerning the first issue, Mosegaard and Tarantola (1995) propose a modified version of the Metropolis algorithm, while Mariethoz et al. (2010) propose to simply accept a candidate model m* if
L(m*) ≥ L(mi)   (8)
where mi is the current model (section 2.4). Concerning the second issue, it is well known that the convergence of the chain must be reached before performing any sampling. In addition, to ensure uniform sampling of the posterior, as previously stated, many independent samples of the prior are required. Tarantola (2005) notes that this is difficult to attain, due to the emptiness of large-dimensional spaces. To overcome this problem, Mosegaard and Tarantola (1995) suggest keeping only one model every μ samples, where μ should be large enough for the chain to forget the previously accepted models. After a large number of iterations, the optimal solution will be reached. We adopt the procedure of Mariethoz et al. (2010), who apply ISR using independent Markov chains (see section 2.4). Each chain is interrupted by a stochastic criterion inspired by the rejection sampling method (von Neumann, 1951), with probability:
P(m*) = L(m*) / L(m)max   (9)
where L(m)max denotes the highest likelihood value. The resulting models belong to the posterior ensemble. This way, the optimization process is more efficient, reducing the computational cost by skipping unnecessary forward solutions.
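The three ingredients of this section, the likelihood (7), the acceptance rule (8) and the stopping rule (9), can be sketched as follows. Note that with this likelihood a model whose RMSE equals σ has L = exp(−1/2) ≈ 0.607, the supremum value used later in the synthetic example:

```python
import numpy as np

def likelihood(g_m, d, sigma=0.05):
    # Gaussian likelihood of equation (7):
    # L(m) ∝ exp( -sum_i (g(m)_i - d_i)^2 / (2 p sigma^2) )
    g_m, d = np.asarray(g_m, float), np.asarray(d, float)
    p = d.size
    return float(np.exp(-np.sum((g_m - d) ** 2) / (2 * p * sigma ** 2)))

# Supremum of L reached when RMSE = sigma: exp(-1/2) ≈ 0.607.
L_MAX = float(np.exp(-0.5))

def accept(L_star, L_current):
    # Acceptance rule of equation (8).
    return L_star >= L_current

def interrupt(L_star, rng):
    # Stochastic stopping rule of equation (9), rejection-sampling style.
    return rng.random() <= L_star / L_MAX
```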
2.4 Using cosimulation to improve the search strategy
As seen in the previous sections, when inverting under the McMC framework, hydraulic measurements can only be used as indirect data to evaluate the prior models and drive the search path. This inconvenience is due to the nonlinear relation between the hydraulic measurements and the unknown parameters. On the other hand, the well-known cokriging and cosimulation methods rely on a linear predictor approach and use covariance and cross-covariance functions derived from a first-order approximation; therefore, they often result in unacceptable solutions when the multivariate distribution of the integrated variables is not Gaussian (Yeh et al., 1995). In this paper, although the problem is not linear, we propose to use cosimulation with the reference data as auxiliary variable, in order to improve the search path in the Markov chains. More specifically, each proposed model m* is produced by cosimulating the truncated Gaussian variable Z, which represents the facies distribution, with the normal scores transformation of the reference variable. The realizations so produced belong to a subset of the original prior, that is, the prior described solely by the truncated Gaussian variogram. Our method relies on the approximation that, temporarily, the p × 1 vector of normal scores transformed hydrologic measurements is related to Z through a linear coregionalization model. Thus, a narrower prior is created by utilization of the reference variable. In order to obtain one sample from an interrupted Markov chain, we design an ever-improving Markov chain that accepts new members under condition (8). The chain is interrupted following the stochastic stopping criterion (9). The C-ISR algorithm proceeds in the following steps:
1. Generate an initial model m1 = {K11(x1), …, Kn1(xn)} = {ξ(Z11(x1)), …, ξ(Zn1(xn))} as a realization of the RF Z(x), discretized on a grid with n nodes, using geostatistical cosimulation with the normal scores transformed reference variable, and evaluate its likelihood L(m1).
2. Iterate on i:
   a. Select randomly a subset ri = {ξ(Zαi(xα)), α = 1, …, l} of the previous model, where l is the number of conditioning data used to generate the next candidate model m*.
   b. Generate a proposal realization m* using the same geostatistical cosimulation, under the conditioning data ri.
   c. Evaluate L(m*).
   d. If L(m*) ≥ L(mi), accept the candidate model m* and set mi+1 = m*.
   e. Decide whether or not to interrupt the chain:
      i. Compute P(m*) = L(m*) / L(m)max
      ii. Draw u from U[0,1]
      iii. If u ≤ P(m*), interrupt the chain; else continue the chain using mi+1
Hence, the above means of successive linearizations is used to move from one model to another at each step of the optimization procedure in a Markov chain. The approach works by creating an importance sampling effect that steers the process to selected areas of the prior, thus improving convergence. Therefore, by incorporating indirect data to narrow the prior distribution, our approach promises to allow the full utilization of measurements in achieving the best possible site characterization. The effectiveness of the proposed formulation is shown next, using a synthetic example.
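Putting the steps together, one interrupted chain of C-ISR might be sketched as below. Here `cosimulate` and `forward` are hypothetical interfaces standing in for the geostatistical cosimulation (with the normal-scores-transformed heads) and the flow model g(m), respectively; the toy implementations at the bottom exist only so the sketch runs:

```python
import numpy as np

def c_isr_chain(cosimulate, forward, d, sigma=0.05, frac=0.01, max_iter=500, seed=0):
    # One interrupted Markov chain of the C-ISR algorithm (sketch).
    rng = np.random.default_rng(seed)
    d = np.asarray(d, float)
    p = d.size
    L_max = np.exp(-0.5)                       # supremum of eq. (7) at RMSE = sigma

    def L(m):
        return float(np.exp(-np.sum((forward(m) - d) ** 2) / (2 * p * sigma ** 2)))

    m = cosimulate(np.array([], dtype=int), np.array([]))   # step 1: initial model
    Lm = L(m)
    for _ in range(max_iter):                  # step 2: iterate on i
        idx = rng.choice(m.size, size=max(1, int(frac * m.size)), replace=False)
        m_star = cosimulate(idx, m[idx])       # 2a-2b: proposal conditioned on r_i
        L_star = L(m_star)                     # 2c
        if L_star >= Lm:                       # 2d: acceptance, eq. (8)
            m, Lm = m_star, L_star
            if rng.random() <= Lm / L_max:     # 2e: interruption, eq. (9)
                break
    return m, Lm

# Toy stand-ins (illustrative only): independent redraw plus identity "heads".
rng_sim = np.random.default_rng(42)

def cosimulate(idx, values):
    m = rng_sim.standard_normal(100)
    m[idx] = values                            # honor the conditioning subset
    return m

def forward(m):
    return m[:9]                               # nine "head" observation nodes

m_post, L_post = c_isr_chain(cosimulate, forward, d=np.zeros(9), sigma=0.5, frac=0.05)
```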
3. Synthetic Example
3.1. Materials and methods
Our proposed inversion method is demonstrated by means of a synthetic aquifer example using a pump test. A simple two-dimensional synthetic flow system is set up for that purpose, and the finite
element software package COMSOL™ 3.4 (COMSOL Multiphysics User's Guide, 2005), controlled via a MATLAB™ 7.8 script, was employed. We consider a square zone of side 100 m in a confined 2D aquifer, with a given spatial distribution of four distinct facies discretized in 100×100 nodes, as shown in Figure 1a. Hydraulic conductivity values of 10^-3.5, 10^-4.5, 10^-5.5 and 10^-6.5 m/s are assigned to facies A, B, C and D, respectively. Using this spatial structure of facies as reference, we can produce the hydraulic head observations required in our example. An injection well recharging 0.001 m³/s is set at the lower left corner of the field and a pumping well extracting 0.001 m³/s at the upper right corner (Figure 1b). The hydraulic potential governing the flow through the aquifer zone and the surrounding area can be represented by the 2D pressure head distribution H (m), which obeys Darcy's law:
∇·(−K(x)∇H) = Q ⇒ ∇·(−ξ(Z(x))∇H) = Q and H = bμ(x) on the model boundary μ,
where Z(x) is a standard Gaussian RF representing the facies distribution, ξ(·) is defined in (6) and Q (m³/s) represents aquifer recharge. Considering the change in head due to pumping and applying the superposition principle, we derive the following equation for δH:
∇·(−ξ(Z(x))∇δH) = q and δH = 0 on the model boundary μ,
where q represents the sources and sinks due to the pump test. The modeling domain's boundary μ is assumed to be a square of side 12,000 m, where any effects of pumping are negligible. Also, the area outside the square zone is considered to have a roughly constant hydraulic conductivity of 10^-5 m/s.
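For readers without access to the finite element package, the forward problem can be mimicked with a crude finite-difference solver. The sketch below assumes a uniform grid of spacing h, harmonic-mean inter-block conductivities and zero-head Dirichlet boundaries; it is an illustrative stand-in, not the solver used in the paper:

```python
import numpy as np

def solve_head(K, q, h=1.0, n_iter=5000):
    # Jacobi iteration for div(K grad H) = -q with H = 0 on the boundary.
    H = np.zeros_like(K)
    # Inter-block conductivities: harmonic means between neighbouring cells.
    Ke = 2 * K[1:-1, 1:-1] * K[1:-1, 2:] / (K[1:-1, 1:-1] + K[1:-1, 2:])
    Kw = 2 * K[1:-1, 1:-1] * K[1:-1, :-2] / (K[1:-1, 1:-1] + K[1:-1, :-2])
    Kn = 2 * K[1:-1, 1:-1] * K[:-2, 1:-1] / (K[1:-1, 1:-1] + K[:-2, 1:-1])
    Ks = 2 * K[1:-1, 1:-1] * K[2:, 1:-1] / (K[1:-1, 1:-1] + K[2:, 1:-1])
    for _ in range(n_iter):
        H[1:-1, 1:-1] = (Ke * H[1:-1, 2:] + Kw * H[1:-1, :-2]
                         + Kn * H[:-2, 1:-1] + Ks * H[2:, 1:-1]
                         + q[1:-1, 1:-1] * h * h) / (Ke + Kw + Kn + Ks)
    return H

# Homogeneous medium with a single source at the centre of a small grid.
K = np.ones((21, 21))
q = np.zeros((21, 21))
q[10, 10] = 1.0
H = solve_head(K, q, n_iter=3000)
```

Jacobi iteration converges slowly on fine grids; a production version would use a sparse direct or multigrid solver, but the physics of the heterogeneous-K forward map is the same.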
Figure 1. True facies distribution in the aquifer zone, with the facies observations shown as circles (left). The reference heads, with the head observations shown as black crosses (right).
Prior information on the aquifer structure consists of a set of 16 facies observations. Using this information together with the 9 head measurements as shown in Figure 1, we perform two different approaches applying ISR to steer the search for posterior distribution samples: using sequential indicator simulation (SIS) to generate proposal models on the one hand and cosimulation of a truncated Gaussian variable with the reference data on the other hand. For the cosimulation we use the algorithm by Emery and Silva (2009).
3.2. Variography
Using the available measurements, variogram models were defined for the state variables. The head measurements were transformed into Normal scores, on which variogram analysis was performed. A Gaussian model with a sill of 1 and an isotropic range of 41.5 m was adopted for the transformed data.
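The Normal scores transform mentioned above can be sketched as a rank-to-Gaussian-quantile mapping. This is a simplification: ties and declustering weights, which a production implementation would handle, are ignored here:

```python
import numpy as np
from statistics import NormalDist

def normal_scores(values):
    # Rank-based normal scores transform: map the empirical quantiles of the
    # data to standard Gaussian quantiles.
    v = np.asarray(values, dtype=float)
    ranks = np.argsort(np.argsort(v))          # 0 .. n-1
    p = (ranks + 0.5) / v.size                 # plotting positions in (0, 1)
    return np.array([NormalDist().inv_cdf(pi) for pi in p])

# Order is preserved and the median maps to 0.
s = normal_scores([3.0, 1.0, 2.0])
```

Variogram analysis is then performed on the transformed values, which are standard Gaussian by construction.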
For the needs of the truncated Gaussian simulation, a standardized Gaussian variable with the characteristics of Figure 2 was defined. A trial-and-error procedure using the program VMODEL (Emery and Silva, 2009) was applied to determine the variogram model of this variable, which represents the facies distribution. Finally, an exponential model with a sill of 1 and an isotropic range of 49 m was adopted. The experimental and model variograms and the cross-variogram of the truncated Gaussian variable and the transformed heads are displayed in Figure 3. A linear coregionalization model between the truncated Gaussian variable and the Normal scores of the head observations was defined using the iterative algorithm proposed by Emery (2010). In this semi-automated procedure, the sills of the simple variograms have unit value, while the sills of the cross-variograms remain free of constraints. The indicator model variograms of the facies are shown on the diagonal of Figure 4, under the assumption of an intrinsic coregionalization model (Journel and Huijbregts, 1978). Exponential variogram models were used for facies A, B, C and D, with sills 0.0625, 0.25, 0.25 and 0.125 and isotropic ranges 14.8 m, 19.2 m, 24.8 m and 29.3 m, respectively. A graphical comparison between the true experimental variograms and the variograms derived from the truncated Gaussian and coregionalization models, respectively, is shown in Figure 4. It is apparent that the fitting is satisfactory.
Figure 2. Truncation rule, showing contact relationship, proportions and Gaussian thresholds associated with the facies.
Figure 3. Sample (points) and linear model of coregionalization (solid lines) between truncated Gaussian variable and Normal scores transformed reference data.
3.3. Results and Discussion
For both the C-ISR and ISR approaches, we ran 150 independent Markov chains. We used the likelihood function of equation (7) with σ = 0.05 m, which can reasonably correspond to the head measurement error. The supremum value of L is set to 0.607, which, according to equation (7), corresponds to a RMSE of 0.05. The percentage of resampled nodes used to generate the next candidate model m* is set to 1%. C-ISR is generally faster (76 forward model runs on average in each chain vs. 128 for ISR) and more accurate (average RMSE of 0.0984 vs. 0.1157 for ISR, and average true-facies similarity of 80.82% vs. 71.74% for ISR).
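The two performance measures quoted here can be computed as follows (a sketch; node-wise comparison against the reference facies field is assumed for the similarity score):

```python
import numpy as np

def rmse(simulated, observed):
    # Root-mean-square misfit between simulated and observed heads.
    s, o = np.asarray(simulated, float), np.asarray(observed, float)
    return float(np.sqrt(np.mean((s - o) ** 2)))

def facies_similarity(candidate, reference):
    # Fraction of grid nodes where the candidate facies equal the reference.
    c, r = np.asarray(candidate), np.asarray(reference)
    return float(np.mean(c == r))
```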
Figure 4. Experimental and model variograms of facies indicators.
More specifically, Figure 5 shows that C-ISR produces smaller RMS errors on average. The minimum RMSE values reached are 0.0598 and 0.0890 for C-ISR and ISR, respectively. Also, as seen from the right tail of the ISR RMSE distribution, our method avoids entrapment in local maxima, in contrast to ISR. Furthermore, Figure 6 shows the evolution of 100 randomly selected optimizations under the two approaches. The average slope of the curves on the left is higher, showing that C-ISR converges faster. The likelihood of the posterior models produced by C-ISR is also better, as seen from the ending points of the ensemble lines in Figure 6a, compared to those of Figure 6b.
Figure 5. RMSE distribution of posterior samples for C-ISR (green) and ISR (black).
Figure 7 shows the best resulting models of facies distribution reached by C-ISR and ISR respectively, after the 150 individual optimizations. Compared to the true facies field (Figure 1a), C-ISR results in a much better approximation (87.93% similarity) than ISR (78.79% similarity). Also, as a final remark,
our proposed method conforms better to the contacts and proportions of facies in the true field. For example, in Figure 7b the contact between facies B and D is not allowed by the truncation rule.
Figure 6. Convergence of 100 individual optimizations for C-ISR (a) and ISR (b).
Figure 7. Optimal results for facies field from C-ISR (a) and ISR (b). Similarity to true facies is 87.93% and 78.79% respectively.
4. Conclusion
We presented an algorithm (C-ISR) to improve posterior sampling in McMC optimization under the Bayesian framework for inversion, using cosimulation with system response data. This algorithm works in combination with ISR and relies on the approximation that, temporarily, the vector of normal scores transformed hydrologic measurements is related to the truncated variable Z through a linear coregionalization model. This process of successive linearizations creates an importance sampling effect and speeds up the convergence of the Markov chains. Our approach is illustrated by a synthetic aquifer inversion example, using a pump test. We applied C-ISR against ISR based solely on SIS. The results of 150 individual optimizations for both approaches show that C-ISR requires fewer forward model runs and, as a result, is faster. It is also more reliable, since it produces smaller RMS errors and explores the prior space more effectively, avoiding entrapment in local maxima.
References
COMSOL Multiphysics User's Guide (2005). Version 3.2, COMSOL AB, Stockholm (Sweden).
Emery, X. and D.A. Silva (2009). Conditional co-simulation of continuous and categorical variables for geostatistical applications. Computers & Geosciences 35 (6), pp. 1234-1246.
Emery, X. (2010). Iterative algorithms for fitting a linear model of coregionalization. Computers & Geosciences 36 (9), pp. 1150-1160.
Fu, J. and J. Gomez-Hernandez (2008). Preserving spatial structure for inverse stochastic simulation using blocking Markov chain Monte Carlo method. Inverse Problems in Science and Engineering 16 (7), pp. 865-884.
Gomez-Hernandez, J., A. Sahuquillo, and J. Capilla (1997). Stochastic simulation of transmissivity fields conditional to both transmissivity and piezometric data - I. Theory. Journal of Hydrology 203 (1-4), pp. 162-174.
Hansen, T. M., K. S. Cordua, and K. Mosegaard (2012). Inverse problems with non-trivial priors: Efficient solution through sequential Gibbs sampling. Computational Geosciences 16, pp. 593-611.
Ibáñez, J.J. and A. Saldaña (2007). Continuum versus discrete spatial soil pattern analysis. In P. V. Krasilnikov, editor, Geostatistics and Soil Geography. Nauka, Moscow (Russia), pp. 109-120.
Journel, A.G. and C.J. Huijbregts (1978). Mining Geostatistics. Academic Press, London (UK).
Kitanidis, P. K. (2007). On stochastic inverse modeling. Geophysical Monograph Series 171, pp. 19-30.
Kitanidis, P. K. and E. G. Vomvoris (1983). A geostatistical approach to the inverse problem in groundwater modeling (steady state) and one-dimensional simulations. Water Resources Research 19 (3), pp. 677-690.
Mariethoz, G., P. Renard, and J. Caers (2010). Bayesian inverse problem and optimization with iterative spatial resampling. Water Resources Research 46, W11530.
Mariethoz, G., P. Renard, F. Cornaton and O. Jaquet (2009). Truncated Plurigaussian Simulations to Characterize Aquifer Heterogeneity. Ground Water 47 (1), pp. 13-24.
Modis, K. and D. Sideri (2013). Geostatistical Simulation of Hydrofacies Heterogeneity of the West Thessaly Aquifer Systems in Greece. Natural Resources Research 22 (2), pp. 123-138.
Mosegaard, K. and A. Tarantola (1995). Monte Carlo sampling of solutions to inverse problems. Journal of Geophysical Research 100 (B7), pp. 12431-12447.
Oliver, D., L. Cunha, and A. Reynolds (1997). Markov chain Monte Carlo methods for conditioning a log-permeability field to pressure data. Mathematical Geology 29 (1), pp. 61-91.
Tarantola, A. (2005). Inverse Problem Theory and Methods for Model Parameter Estimation. Society for Industrial and Applied Mathematics, Philadelphia (USA).
von Neumann, J. (1951). Various techniques used in connection with random digits. In Monte Carlo Method, National Bureau of Standards Applied Mathematics Series 12, pp. 36-38.
Yeh, T.-C. J., A. L. Gutjahr, and M. Jin (1995). An iterative cokriging-like technique for groundwater flow modeling. Ground Water 33 (1), pp. 33-41.