SPE-182609-MS

Quantifying Value of Information Using Ensemble Variance Analysis

Jincong He, Pallav Sarma, Eric Bhark, Shusei Tanaka, Bailian Chen, Xian-Huan Wen, and Jairam Kamath, Chevron Energy Technology Company

Copyright 2017, Society of Petroleum Engineers

This paper was prepared for presentation at the SPE Reservoir Simulation Conference held in Montgomery, TX, USA, 20–22 February 2017.

This paper was selected for presentation by an SPE program committee following review of information contained in an abstract submitted by the author(s). Contents of the paper have not been reviewed by the Society of Petroleum Engineers and are subject to correction by the author(s). The material does not necessarily reflect any position of the Society of Petroleum Engineers, its officers, or members. Electronic reproduction, distribution, or storage of any part of this paper without the written consent of the Society of Petroleum Engineers is prohibited. Permission to reproduce in print is restricted to an abstract of not more than 300 words; illustrations may not be copied. The abstract must contain conspicuous acknowledgment of SPE copyright.

Abstract

Data acquisition programs such as surveillance and pilots play an important role in minimizing subsurface risks and improving decision quality for reservoir management. To support design optimization and investment justification of these programs, it is crucial to be able to quantify the expected uncertainty reduction and the value of information (VOI) attainable from a given design. This problem is challenging because the data from the acquisition program are uncertain at the time of the analysis. In this paper a method called ensemble variance analysis (EVA) is proposed. Based on a multi-Gaussian assumption between the observation data and the objective function, the EVA method quantifies the expected uncertainty reduction from covariance information that is estimated from an ensemble of simulations. The result of EVA can then be used with a decision tree to quantify the VOI of a given data acquisition program. The proposed method has several novel features compared to existing methods. Firstly, the EVA method considers the data-objective function relationship directly; therefore, it can handle nonlinear forward models and an arbitrary number of parameters. Secondly, for cases when the multi-Gaussian assumption between the data and the objective function does not hold, the EVA method still provides a lower bound on the expected uncertainty reduction, which is useful as a conservative estimate of surveillance/pilot performance. Finally, EVA also provides an estimate of the shift in the mean of the objective function distribution, which is crucial for VOI calculation. In this paper, the EVA workflow for quantifying expected uncertainty reduction is described. The result from EVA is benchmarked against recently proposed rigorous sampling methods, and the capacity of the method for VOI quantification is demonstrated for a pilot analysis problem using a field-scale reservoir model.

Introduction

The development of oil and gas reservoirs is associated with substantial risk, as the subsurface condition is highly uncertain. Data acquisition programs such as surveillance and pilots are routinely conducted in the hope of minimizing subsurface uncertainties and improving decision quality. However, these programs themselves involve significant capital investment. For example, an onshore water-flood pilot with its associated facilities can easily cost tens of millions of dollars. The cost quickly multiplies when the pilot is offshore or when the physics is more complicated than a simple water-flood. Therefore, it is crucial to be able to evaluate the effectiveness and quantify the value of a data acquisition program so that different designs can be compared and the best investment decision can be made.


Traditionally, the effectiveness of pilots/surveillance is estimated based on experience and subjective engineering judgement (Ballin et al., 2005; Gerhardt et al., 1989; Koninx et al., 2001). The effectiveness of the data in reducing uncertainty is characterized by heuristically specifying the chance of "good news" or "bad news" from the data acquisition, and the posterior distribution of the objective function given this "good news" or "bad news". Ranking of different designs and calculation of the value of information based on such subjective estimates can be very unreliable. In the past few years, more and more researchers have focused on using reservoir simulation and probabilistic forecasts to rigorously quantify the effectiveness of different measurement data in reducing uncertainty. The major challenge in estimating the effectiveness of data acquisition programs is that the data are uncertain at the time of the analysis. This is different from traditional history matching, where the data are known.

Methods using reservoir simulations to quantify expected uncertainty reduction largely fall into two categories. Methods in the first category employ exhaustive history matching using multiple realizations of the measurement data. For example, Walker and Lane (2007) proposed a workflow for quantifying the value of time-lapse (4D) seismic data by considering multiple plausible reservoir models and investigating the divergence (similar to the concept of posterior uncertainty) in predicted economic outcomes when conditioned to each of the reservoir models. Cameron (2013) estimated the optimal sensor locations for plume-size surveillance in a CO2 sequestration project under geological uncertainty. Multiple plausible geological realizations are generated in his work to characterize the geological uncertainty, and deterministic history matching runs are performed assuming each realization to be true. The effectiveness of the sensor locations is evaluated by the expected prediction error of the plume size. More recently, He et al. (2016a) introduced a method called proxy-based pilot analysis (PBPA) to analyze the uncertainty reduction of reservoir performance by pilot measurement data. In the PBPA method, multiple realizations of the pilot data are generated and, for each realization, rejection sampling is performed to condition the model uncertainty and to obtain a posterior distribution of the reservoir performance. The expected uncertainty reduction is then defined as the difference between the prior uncertainty and the mean of the posterior uncertainty. The entire process is accelerated by the use of proxies. Methods based on exhaustive history matching are theoretically rigorous as they are usually free of assumptions on the forward model or the distribution of uncertainties. However, they can be very expensive because of the large number of simulations required for the exhaustive history matching.

Methods in the second category rely on different assumptions to simplify the problem and reduce the computational cost. For example, Landa (1997) investigated the resolution of different dynamic measurements on the uncertainty (variance) of the reservoir parameters.
In that work, the measurements are assumed to be linear with respect to the parameters, and a formula is proposed to estimate the resolution (reduction in variance) using the gradient of this linear relationship. Moore and Doherty (2005) used a similar approach to investigate how the variance of subsurface measurements reduces the predictive error (posterior uncertainty) of some objective function for groundwater modeling applications. In that work, both the measurement and the objective function are assumed to be linear functions of the subsurface parameters, and a formula is developed to calculate the expected reduction in predictive variance given the measurement using the gradients of both the measurement and the objective function with respect to the parameters. More recently, Le and Reynolds (2014) and Le et al. (2014) investigated the expected uncertainty reduction of reservoir performance metrics by surveillance operations using the theory of mutual information. The mutual information, which characterizes the expected uncertainty reduction in terms of information entropy, is estimated based on a multi-Gaussian assumption between the measurement and the objective function.

Besides the amount of expected uncertainty reduction, another important metric of the performance of a data acquisition program is its value of information (VOI). VOI, defined as the amount of money a decision maker would be willing to pay for the information, characterizes how information helps improve decision quality and project economics.


As pointed out by Barros et al. (2015), a data acquisition program is worthless, regardless of the reduction in uncertainty obtainable by history matching the acquired data, if its outcome will not change the full-field development decision. Barros et al. (2015) presented a workflow based on exhaustive history matching using the ensemble Kalman filter to quantify VOI in the context of closed-loop reservoir management. More recently, Chen et al. (2016) extended the work in He et al. (2016a) and presented an approach to calculate the VOI based on the result from the PBPA method.

In this work, we propose the use of a new methodology called ensemble variance analysis (EVA). EVA estimates the expected uncertainty reduction based on the assumption that the objective function and the measurement(s) jointly follow a multi-Gaussian distribution. The variance and covariance required in the formula are estimated from an ensemble of simulations. Compared with existing approaches, the EVA method has several distinct features. Firstly, rather than making assumptions on the forward model, as is done in Landa (1997) and Moore and Doherty (2005), the EVA method considers the data-objective function relationship directly. Therefore, it can handle nonlinear forward models and an arbitrary number of parameters. This idea is similar to that used in direct forecasting (Satija and Caers, 2015) or in data-space inversion (Sun et al., 2016). Secondly, the EVA method has an important theoretical appeal: it is proven that when the multi-Gaussian assumption is violated, the result from EVA provides an upper bound for the expected posterior variance. In other words, the EVA method always provides a conservative estimate (lower bound) of the expected uncertainty reduction, with the estimate being exact when the multi-Gaussian assumption is satisfied. Finally, under the multi-Gaussian assumption, EVA also provides a quick approach to estimating the shift in the mean of the objective function distribution given different data realizations, thereby enabling the quantification of VOI. In this work, we demonstrate the capacity of the EVA method for a pilot analysis problem, although the same workflow can be used to analyze surveillance and other arbitrary data acquisition programs.

The paper is organized as follows. We first present the formulation of the pilot analysis problem and its various components. Next, the theory of EVA is presented with an example application to quantify the expected uncertainty reduction from different pilot designs. The expected uncertainty reduction predicted by EVA is then benchmarked against the PBPA method using rigorous rejection sampling. Last, we present an approach to combine EVA and decision analysis to quantify VOI and demonstrate the methodology for a synthetic waterflood pilot project. We conclude the paper with a discussion of the limitations of the methodology and possible future development directions.

Problem Formulation

The formulation of the pilot analysis problem in this paper closely follows that presented in He et al. (2016a). In this formulation, a pilot (or, more generally, any data acquisition program) consists of four components and two performance metrics. The four components are (1) the uncertainty characterization, (2) the objective function, (3) the measurement data and (4) the design alternatives. The two performance metrics of program success are (1) the expected uncertainty reduction and (2) the VOI.

Uncertainty Characterization

Pilot measurements and the performance of the full-field development are both strongly influenced by various subsurface uncertainties. Uncertainty characterization for a pilot analysis problem involves two steps. The first step is to identify the subsurface uncertainties that impact the objective function or the pilot measurements, and to then construct a set of parameters that represents these uncertainties. The second step is to characterize the distributions and possible correlations of the uncertainty parameters. A proper uncertainty characterization is a crucial step for the analysis as it defines the problem to be solved. However, in this work we focus only on analyzing the measurement efficiency given an uncertainty characterization.


We denote the vector of uncertainty parameters as m ∈ R^(n_m×1), where n_m is the number of uncertainty parameters, and the prior distribution of m (i.e., the distribution before the measurement) as P(m).

Objective Functions

The objective function is any uncertain quantity that we would like to resolve through the pilot. For example, the objective function can be the oil production cumulative (OPC), the water production cumulative (WPC) or the recovery factor (RF) at the end of the full-field development. It can also be one of the uncertainty parameters such as porosity, permeability, etc. Mathematically, the objective function is a function of the uncertainty parameters m and is denoted as J(m).

Measurement Data

The vector of measurement data that will be obtained from the pilot is denoted by d ∈ R^(n_d×1), where n_d is the number of data points. The measurement data set can be a time series, such as well bottom-hole pressure at different times. It can also be a quantity derived from the pilot data, such as water/tracer breakthrough time, pressure derivatives, etc., or it can be a vector that combines any of the different measurements mentioned above. Measurement data should be both measurable in the field and obtainable from the simulation model.

An important distinction should be made between three different concepts related to measurement data. Observed data are what we will measure in the field (e.g., from the gauge) and are denoted simply as d. True data are the measurement data given the state of nature of the reservoir and perfect measurement techniques, and are denoted as d_true. Finally, simulated data are what our simulation model predicts the measurement data to be, and are denoted as d_sim. Since the true data will never be known exactly, in this work we are mostly interested in the observed data d and the simulated data d_sim. The difference between d_sim and d is denoted by the vector e ∈ R^(n_d×1) and is referred to as the error, that is

d = d_sim + e    (1)

The error in Eq. 1 includes two components, modeling error and measurement error. Modeling error represents the discrepancy between the state of nature of the reservoir and the numerical model. Example sources of modeling error include numerical diffusion, ignored physics (e.g., non-Darcy flow), incorrect uncertainty characterization (e.g., excessive use of global multipliers that ignore local variability), etc. Measurement error represents the discrepancy between the measurement we obtain from the gauge and the true reservoir response. Example sources of measurement error include gauge accuracy (usually a small contribution compared with the other sources), tidal effects, measurement limitations (e.g., gauge depth is not the same as datum depth), etc. The error reflects our confidence in the measurement data: the larger the error, the smaller the uncertainty reduction will be.

Design

The last component of the problem is the design of the pilot. The design of the pilot is characterized by a number of design parameters denoted as u ∈ R^(n_u×1). For example, the design parameters for a water injection pilot would include the location of the wells, the duration of the pilot, the injection rate and the water treatment facility capacity. The design of the pilot impacts the measurement data d that would be obtained and ultimately the value of information from the project. Therefore, the design should be optimized to obtain the maximum VOI. For the purpose of this paper, we will not delve into the topic of pilot optimization and will only consider the performance analysis of one or several given pilot designs.


Expected Uncertainty Reduction

One of the key functions of a pilot is to reduce the uncertainty of the objective function. The uncertainty of the objective function can be quantified in several ways. For example, it is quantified by the difference between P90 and P10 (the 90th and the 10th percentiles) in He et al. (2016a), and by the information entropy in Le et al. (2014). In this paper, we simply use the variance and the standard deviation of the objective function to quantify the uncertainty. The variance of the objective function J before the pilot is denoted as σ²_J (where σ_J itself represents the standard deviation), and the variance of the objective function after the pilot as σ²_J|d. Because the pilot data d are uncertain at the time of pilot analysis, σ²_J|d is also an uncertain quantity depending on the value of d. The expected posterior uncertainty is denoted as E_d[σ²_J|d]. The expected uncertainty reduction from the pilot data is the difference between the prior uncertainty and the expected posterior uncertainty, i.e., σ²_J − E_d[σ²_J|d].

Value of Information

Expected uncertainty reduction presents only a partial picture of pilot effectiveness, as it does not consider how the uncertainty resolution changes the full-field development decision. As pointed out by Barros et al. (2014), a pilot is valuable only if it has the potential to change the full-field development decision. In order to quantify the impact of the pilot data on decision quality and compare the benefit of the pilot against its cost, decision analysis needs to be performed to calculate the value of information for the pilot. Barros et al. (2014) presented a workflow to evaluate VOI in the setting of closed-loop reservoir management based on exhaustive history matching, which can be computationally prohibitive in practice. In this paper, we propose the use of the EVA method and decision tree analysis for efficient VOI calculation.

Ensemble Variance Analysis

In order to quantify the expected uncertainty reduction over uncertain pilot data, we need to quantify E_d[σ²_J|d]. In addition, in order to quantify the value of information, we need an efficient way to calculate μ_J|d for any d. The EVA method provides an efficient way to estimate these two quantities.

Multi-Gaussian Assumption

The basic assumption underlying the EVA method is that the measurement data d and the objective function J jointly follow a multi-Gaussian distribution. Mathematically this assumes

[J, d]ᵀ ~ N(μ, Σ)    (2)

with

μ = [μ_J, μ_d]ᵀ,  Σ = [Σ_JJ  Σ_Jd; Σ_dJ  Σ_dd]    (3)

where Σ ∈ R^((n_d+1)×(n_d+1)) is the covariance matrix, Σ_JJ ∈ R and Σ_dd ∈ R^(n_d×n_d) are the covariance matrices for J and d, respectively, and Σ_Jd ∈ R^(1×n_d) is the cross-covariance matrix between J and d. We note here that the multi-Gaussian assumption applies directly to the objective function J and the observed data d. It does not pose an explicit requirement on the distribution of the uncertainty parameters m. In addition, unlike many other methods that assume a linear model in their theory, in EVA there is no explicit requirement on the forward models J(m) and d(m). Therefore, EVA can potentially handle a nonlinear forward model with a large number of parameters.
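Before applying the conditioning formulas below, it can be worth checking how Gaussian the ensemble outputs actually look. The following is a minimal diagnostic sketch under our own naming conventions (J for the objective-function samples, D_sim for the simulated data); it is an illustration, not part of the EVA method itself.

```python
# Quick diagnostic sketch (not from the paper): marginal skewness of the
# ensemble outputs as a rough indicator of how well the multi-Gaussian
# assumption in Eqs. 2-3 might hold. Values near zero suggest symmetry.
import numpy as np
from scipy.stats import skew

def gaussianity_check(J, D_sim):
    """J: (n_s,) objective-function samples; D_sim: (n_s, n_d) simulated data."""
    return skew(np.asarray(J)), skew(np.asarray(D_sim), axis=0)
```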

Posterior Variance

Under the multi-Gaussian assumption, the posterior variance of the objective function given a realization of the observation data d is given by the following equation (Eaton and Eaton, 1983):

σ²_J|d = Σ_JJ − Σ_Jd Σ_dd⁻¹ Σ_dJ    (4)

One important point to note in Eq. 4 is that the posterior variance is independent of d under the multi-Gaussian assumption. This allows us to write the expected posterior variance as follows:

E_d[σ²_J|d] = Σ_JJ − Σ_Jd Σ_dd⁻¹ Σ_dJ    (5)

Eq. 5 shows that the expected posterior variance equals the prior variance minus the term Σ_Jd Σ_dd⁻¹ Σ_dJ, which characterizes the expected uncertainty reduction. A more intuitive explanation of Eq. 5 can be obtained by rearranging it into Eq. 6, where the left-hand side is the expected variance reduction as a percentage of the original variance, and the right-hand side, in the case when d is a scalar, is the squared correlation coefficient between the objective function and the measurement:

(Σ_JJ − E_d[σ²_J|d]) / Σ_JJ = (Σ_Jd Σ_dd⁻¹ Σ_dJ) / Σ_JJ    (6)

Therefore, Eqs. 5 and 6 indicate that the better the measurement data correlate with the objective function, the more uncertainty is expected to be reduced.

Upper Bound of Expected Uncertainty Reduction

It has in fact been proven in Harville (2003) that the following holds for arbitrary distributions of J and d:

E_d[Var(J|d)] ≤ Σ_JJ − Σ_Jd Σ_dd⁻¹ Σ_dJ    (7)

with equality holding when the multi-Gaussian assumption is valid. The implication of Eq. 7 is that the expected posterior variance estimated by the EVA method in Eq. 5 is always an upper bound of the actual expected posterior variance. In other words, regardless of the distribution types, the EVA method always provides a lower bound of the expected uncertainty reduction.

Posterior Mean

With the multi-Gaussian assumption, the posterior mean of the objective function J given observed data d can be written as (Eaton and Eaton, 1983):

μ_J|d = μ_J + Σ_Jd Σ_dd⁻¹ (d − μ_d)    (8)

Eq. 8 indicates that the shift in the mean of the objective function distribution given the observed data depends on how well the observed data correlate with the objective function and on how far the observed data are from the prior mean. Note that because the observation data d are uncertain at the time of the analysis, the posterior mean μ_J|d is also an uncertain quantity that depends on the value of d. However, it is interesting to note that the expected value of the conditional mean of J, over all plausible data d, is the same as the prior mean of J, i.e.,

E_d[μ_J|d] = μ_J    (9)

In fact, as proven in Le et al. (2014), Eq. 9 holds even without the multi-Gaussian assumption.
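The conditioning step in Eqs. 4 and 8 is straightforward to implement once the covariance blocks are available. The following is a minimal numpy sketch under our own naming conventions; it is an illustration, not the paper's code.

```python
# Minimal sketch of Gaussian conditioning (Eqs. 4 and 8). s_Jd is the
# cross-covariance row of Eq. 3 stored as a 1-D vector; S_dd is assumed
# to already include the error covariance per Eq. 11.
import numpy as np

def condition_on_data(mu_J, mu_d, S_JJ, s_Jd, S_dd, d_obs):
    """Posterior mean (Eq. 8) and variance (Eq. 4) of J given observed d_obs."""
    w = np.linalg.solve(S_dd, s_Jd)      # S_dd^{-1} S_dJ without explicit inverse
    var_post = S_JJ - s_Jd @ w           # Eq. 4: does not depend on d_obs
    mu_post = mu_J + (d_obs - mu_d) @ w  # Eq. 8: shifts with the observed data
    return mu_post, var_post
```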


Ensemble Evaluation

In order to evaluate the expected uncertainty reduction and the posterior mean in Eq. 5 and Eq. 8, one needs to evaluate the covariance matrices Σ_JJ, Σ_Jd and Σ_dd. They can be estimated empirically from an ensemble of n_r simulations that sample the prior distributions of the uncertainty parameters. Denoting the objective function and the simulated data vector from the i-th simulation as J_i and d_sim,i, the covariance of the objective function is given by

Σ_JJ = (1/(n_r − 1)) ∑_{i=1}^{n_r} (J_i − J̄)²    (10)

where J̄ is the mean value of the objective function over all simulations. Evaluating Σ_dd and Σ_Jd is more complicated because what the simulator outputs is d_sim rather than d. In order to evaluate Σ_dd and Σ_Jd, a model for the error e needs to be established. In this work, we assume the error for each data point to be independent Gaussian, such that its covariance is given by a diagonal matrix denoted as Σ_e. In addition, we also assume that the distribution of e is independent of the distributions of the objective function J and the simulated data d_sim. The covariance matrices are then derived as follows:

Σ_dd = (1/(n_r − 1)) ∑_{i=1}^{n_r} (d_sim,i − d̄_sim)(d_sim,i − d̄_sim)ᵀ + Σ_e    (11)

Σ_Jd = (1/(n_r − 1)) ∑_{i=1}^{n_r} (J_i − J̄)(d_sim,i − d̄_sim)ᵀ    (12)

where d̄_sim is the mean value of the simulated data vector over all simulations. We note that the error assumption and the procedure in Eqs. 10, 11 and 12 are similar to those commonly used in the ensemble Kalman filter (EnKF) (Aanonsen et al., 2009; Nævdal et al., 2005) or the ensemble smoother (Emerick and Reynolds, 2013; Van Leeuwen and Evensen, 1996) for history matching problems.

EVA Workflow Using Simulation

The workflow to calculate the expected uncertainty reduction from simulation results is summarized below.

• Step 1. Generate n_s design points in the uncertainty parameter space honoring the prior distribution of the parameters.
• Step 2. Perform a reservoir simulation on each of the design points.
• Step 3. Collect the vector of simulated data d_sim,i and the objective function J_i from each simulation i.
• Step 4. Calculate the covariance matrices using Eqs. 10, 11 and 12.
• Step 5. Evaluate Eq. 5 to obtain the expected uncertainty reduction.
• Step 6 (for VOI). For each possible realization of the simulated data, evaluate Eq. 8 to obtain the posterior mean.
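As a concrete illustration of Steps 3–6, the following numpy sketch assumes the objective-function values and simulated data have been collected into arrays J and D_sim; the function name, array layout and the use of each ensemble member as a plausible data realization are our own choices, not the paper's implementation.

```python
# Illustrative sketch of EVA Steps 4-6 (not the paper's code).
# J: (n_s,) objective-function values; D_sim: (n_s, n_d) simulated data,
# one row per simulation; err_std: measurement-error standard deviations.
import numpy as np

def eva_from_ensemble(J, D_sim, err_std):
    """Expected posterior variance (Eq. 5) and per-realization posterior means (Eq. 8)."""
    n_s = len(J)
    dJ = J - J.mean()
    dD = D_sim - D_sim.mean(axis=0)
    S_JJ = dJ @ dJ / (n_s - 1)                                        # Eq. 10
    S_dd = dD.T @ dD / (n_s - 1) + np.diag(np.asarray(err_std) ** 2)  # Eq. 11
    s_Jd = dJ @ dD / (n_s - 1)                                        # Eq. 12
    w = np.linalg.solve(S_dd, s_Jd)
    var_post = S_JJ - s_Jd @ w                                        # Eqs. 4/5
    mu_post = J.mean() + dD @ w                                       # Eq. 8 per plausible d
    return var_post, mu_post
```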

EVA Workflow Using Proxies

In the EVA workflow using simulation, the method requires that the ensemble sampled from the prior uncertainty distributions (before the pilot) be run through the simulator. However, for some cases this may not be possible using simulation alone. This is especially true with discrete parameters. For example, the uncertainty in relative permeability curves is theoretically continuous, but in practice it is usually characterized by a limited number of different curve tables, and using simulation alone it is impossible to sample between the different curve tables. In these cases, numerical proxies, which are numerical functions that approximate the behavior of the system, can be built for both the objective function and the measurement data (He et al., 2016b) to enable interpolation between discrete levels and to provide inputs for EVA. The modified workflow is summarized below.

• Step 1. Generate n_t design points in the uncertainty parameter space.
• Step 2. Perform a reservoir simulation on each of the design points.
• Step 3.1. Construct proxies for the objective function and the measurement data.
• Step 3.2. Generate n_s design points in the uncertainty parameter space honoring the prior distribution of the parameters.
• Step 3.3. Evaluate the proxies for the vector of simulated data d_sim,i and the value of the objective function J_i for each of the n_s points.
• Step 4. Calculate the covariance matrices using Eqs. 10, 11 and 12.
• Step 5. Evaluate Eq. 5 to obtain the expected uncertainty reduction.
• Step 6 (for VOI). For each possible realization of the simulated data, evaluate Eq. 8 to obtain the posterior mean.
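As an illustration of Steps 3.1–3.3, a sketch using scikit-learn's NuSVR (a wrapper of the LIBSVM ν-SVR used later in the paper) is shown below; the hyperparameter values mirror those reported in the convergence analysis section, while the names and shapes are our own assumptions.

```python
# Sketch of the proxy variant (Steps 3.1-3.3), not the paper's code.
# X_train: (n_t, n_m) training designs; J_train: (n_t,) objective values;
# D_train: (n_t, n_d) simulated data; X_prior: (n_s, n_m) prior samples.
import numpy as np
from sklearn.svm import NuSVR

def proxy_eva_inputs(X_train, J_train, D_train, X_prior, nu=1e-10, C=1e10):
    """Fit one nu-SVR per response, then evaluate on the prior samples."""
    proxy_J = NuSVR(nu=nu, C=C).fit(X_train, J_train)
    proxy_d = [NuSVR(nu=nu, C=C).fit(X_train, D_train[:, k])
               for k in range(D_train.shape[1])]
    J = proxy_J.predict(X_prior)
    D_sim = np.column_stack([p.predict(X_prior) for p in proxy_d])
    return J, D_sim  # feed these to the ensemble evaluation of Eqs. 10-12
```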

Example Application

Model Setup

We now illustrate the application of the EVA method for a synthetic pilot analysis problem. The reservoir model considered in this example is the Brugge model that was also applied in He et al. (2016a). The model contains a total of 60,048 active cells. Figure 1 shows the initial oil saturation for the top layer of the reservoir, where the aquifer and the oil zone are separated by a sharp oil-water contact. The reservoir model contains nine layers, which are divided into two units: Unit 1 includes the upper five layers and Unit 2 includes the lower four layers.

Figure 1—Simulation model and well locations for the Brugge reservoir. Background shows the initial oil saturation for the first layer

The water-oil relative permeability curves in the simulation model are given by the following equations:

k_rw = k_rw⁰ [(S_w − S_wc)/(1 − S_wc − S_or)]^e_w,  k_ro = k_ro⁰ [(1 − S_w − S_or)/(1 − S_wc − S_or)]^e_o    (13)

where S_wc and S_or are the irreducible water saturation and residual oil saturation, respectively, k_rw⁰ and k_ro⁰ are the end points for the water and oil relative permeability curves, respectively, and e_w and e_o are the exponents for the water and oil relative permeability curves, respectively. While the value of S_or is uncertain (see Table 1), the value of S_wc is constant and set to 0.1. The oil and water viscosities are set to 3 cp and 1 cp, respectively.
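For reference, Eq. 13 can be evaluated directly; the sketch below uses arbitrary parameter values picked from within the uncertainty ranges listed in Table 1 (the specific numbers are illustrative only).

```python
# Sketch of the Corey-type relative permeability curves in Eq. 13.
# Parameter values below are arbitrary picks inside the Table 1 ranges.
import numpy as np

def rel_perm(Sw, Swc=0.1, Sor=0.15, krw0=0.75, kro0=0.9, ew=2.5, eo=2.5):
    """Water and oil relative permeability versus water saturation."""
    Swn = np.clip((Sw - Swc) / (1.0 - Swc - Sor), 0.0, 1.0)  # normalized Sw
    return krw0 * Swn ** ew, kro0 * (1.0 - Swn) ** eo

Sw = np.linspace(0.1, 0.85, 50)
krw, kro = rel_perm(Sw)
```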

Table 1—Summary of uncertainty for Case 1

#  | Variable      | Description                    | Minimum | Maximum
1  | SORW (S_or)   | Residual oil saturation        | 0.1     | 0.2
2  | KRWRO (k_rw⁰) | Water rel. perm. end point     | 0.6     | 0.9
3  | KROCW (k_ro⁰) | Oil rel. perm. end point       | 0.8     | 1
4  | WEXP (e_w)    | Water rel. perm. exponent      | 1       | 4
5  | OWEXP (e_o)   | Oil rel. perm. exponent        | 1       | 4
6  | RCOMP         | Rock compressibility           | 3E-06   | 4E-06
7  | WOC           | Oil-water contact              | 5575    | 5580
8  | PERM1         | Layers 1-5 perm. multiplier    | 0.5     | 5
9  | PERM2         | Layers 6-9 perm. multiplier    | 0.5     | 5
10 | PORO1         | Layers 1-5 porosity multiplier | 0.6     | 1.5
11 | PORO2         | Layers 6-9 porosity multiplier | 0.85    | 1.15

Table 1 summarizes the 11 uncertainty parameters and their respective ranges considered in this model. They include imbibition parameters such as relative permeability exponents and end points, as well as static parameters such as permeability and porosity multipliers. We note that this reservoir model and the uncertainty characterization are the same as those defining Case 2 in He et al. (2016a), where they were used to evaluate the performance of the proxy-based pilot analysis (PBPA) method. This intentional similarity facilitates the benchmarking study presented in a later section.

The objective function for uncertainty reduction is the field cumulative oil production (OPC) after 18 years of production with waterflood. As shown in Figure 1, the waterflood development scenario consists of drilling 20 producers in the oil zone and 10 injectors along the oil-water contact. The pilot project is planned to start two years before commencement of full-field development and involves one injector (Well I07) and one producer (Well P15). The data to be collected from the pilot include monthly bottom-hole pressure (BHP) and water cut (WCT) from the producer P15. The standard deviations for the BHP and the WCT measurements are defined as 50 psi and 2%, respectively. The data collected are divided into three sets, and uncertainty reduction is considered for each set separately. The first data set, referred to as "bhp_p15", contains monthly BHP data from P15 for two years, which amounts to 24 data points. The second data set, referred to as "wct_p15", contains monthly WCT data from P15 for two years, which also amounts to 24 data points. The third data set, referred to as "bhp_wct_p15", is a combination of the first two data sets and therefore has 48 data points. For both the pilot and the full-field development period, producers are controlled at a constant liquid production rate of 3,000 stb/day, and injectors are controlled at a constant liquid injection rate of 4,000 stb/day.

EVA Result Using Simulations

We first consider the scenario (referred to as Scenario 1) in which all parameters in Table 1 are uniformly distributed between their minimum and maximum values. 200 samples are generated using a space-filling design (Yeten et al., 2005) and a reservoir simulation run is performed for each of the samples. Figure 2 shows the distributions of the parameters (normalized to be between -1 and 1), the objective function and example measurement data points. It is seen that while the parameter distributions are clearly non-Gaussian, the objective function is close to Gaussian.

Figure 2—Distributions of parameters, objective function and data for Scenario 1


Following the EVA workflow, the expected uncertainty reduction is quantified for the three data sets separately. For each time step in a data set, the expected uncertainty reduction via Eq. 5 is evaluated using all data points from the beginning time to that particular time step. Repeating this process for all time steps in a data set, we can show how the posterior uncertainty is reduced over time by the associated data set. Figure 3 shows the posterior uncertainty (in terms of the standard deviation of the objective function) over time for the three data sets. It is clear that bhp_p15 is expected to reduce the uncertainty slowly over time, while wct_p15 is expected to rapidly reduce the uncertainty within the first 100 days. It is also observed that the combined data set bhp_wct_p15 is expected to reduce uncertainty more than either of the individual data sets. In this case, in fact, the expected uncertainty reduction from bhp_wct_p15 is larger than the arithmetic sum of the expected uncertainty reductions from the two individual data sets. As demonstrated in He et al. (2016a), the expected uncertainty reduction from a combined data set can be smaller than, equal to, or larger than the arithmetic sum of the expected uncertainty reductions from the individual data sets.

Figure 3—Expected uncertainty reduction in objective function over time

Just as Figure 3 describes the expected uncertainty reduction for the objective function OPC, the same process can be repeated for each uncertainty parameter, because a parameter can also be viewed as an objective function for uncertainty reduction. Figures 4(a) and 4(b) show the time-dependent expected posterior uncertainty, as a percentage of the prior uncertainty, for the 11 parameters using the wct_p15 and bhp_p15 data sets, respectively. These two figures show that the WCT data at P15 are expected to reduce the uncertainty of WEXP rapidly and by the largest amount, while the BHP data at P15 are expected to best resolve the uncertainty of PERM2. We note that because the distributions of the parameters tend not to be Gaussian (in this case they are uniform), the estimates of the expected uncertainty reduction for the parameters may be inaccurate compared with those for the objective function OPC, as we will demonstrate later in the benchmarking study. Therefore, Figure 4 is most useful when interpreted qualitatively. The estimates of expected uncertainty reduction in Figures 3 and 4 can also be interpreted through Eq. 7: when the Gaussian assumption is violated, the plotted posterior uncertainty is an upper bound, so these results always provide a conservative estimate (minimum amount) of the expected uncertainty reduction.


Figure 4—Expected posterior uncertainty of parameters given WCT or BHP data

Figure 4 provides a useful tool to analyze the capability of different data sets for resolving different uncertainty parameters. A more complete picture can be obtained by comparing the result of Figure 4 to the sensitivity of the objective function. For this, Figure 5 shows the sensitivities of the objective function on the left, as calculated by the entropy test method (see He et al. (2016a) for more detail), and the percentage of expected uncertainty reduction from the wct_p15 and the bhp_p15 data sets in the middle and on the right, respectively. From Figure 5(a) it is clear that the objective function is most sensitive to the parameters WEXP and PORO2 (these are called heavy hitters) and insensitive to PERM2. On the other hand, Figures 5(b) and 5(c) show that the wct_p15 data set is most capable of resolving WEXP, which is a heavy hitter for the objective function, while the bhp_p15 data set is most capable of resolving PERM2, which is not a heavy hitter. This provides a physical explanation of Figure 3, in which the wct_p15 data set is expected to be more effective than bhp_p15 in reducing objective function uncertainty.

Figure 5—Heavy-hitter alignment between the objective function and the measurement data

Convergence Analysis Using Proxies

The results presented in the previous section are based on estimation of the covariance information using an ensemble of 200 simulations. When using this method, it is important to verify that the number of simulations is sufficient for the EVA workflow results to converge. One way to numerically validate convergence is to gradually increase the number of simulations and identify the convergence point.


However, this would require a large number of simulations, which can be overly expensive. In this work, we use proxies for the validation procedure. Proxies are numerical functions that approximate the behavior of the system but are fast to evaluate. Therefore, they can readily be used to evaluate the convergence of the EVA method at low computational cost. Various types of proxies exist, including analytical proxies such as linear, quadratic and polynomial models, as well as numerical proxies such as kriging, splines, artificial neural networks and support vector regression. See Castellini et al. (2010) and Yeten et al. (2005) for detailed reviews.

In this paper we use a support-vector regression proxy, ν-SVR, from the library LIBSVM (Chang and Lin, 2011), which is capable of capturing highly nonlinear relationships in the training data set. Two user-defined parameters in ν-SVR are ν, which represents the tolerance of the mismatch between the true response and the proxy response for the training data, and C, which represents the amount of penalty applied to the mismatch. For our analyses we set ν = 1e-10 and C = 1e10. These represent a small tolerance and a large penalty to honor the training data exactly, given that they are computer experiments and assumed to be error-free. The ν-SVR proxies are constructed for each of the objective functions and each of the data points. The proxies are constructed using 200 training simulations generated by a space-filling design within an 11-dimensional parameter space. The quality of the ν-SVR proxies is tested with ten-fold cross validation (Geisser, 1993). The correlation coefficient between the true values and the predicted values for the objective function proxy is 0.98, and the correlation coefficient for the measurement data proxies, averaged over all data points, is 0.96, indicating that the proxy accuracy is high in this case.

Figure 6 compares the proxy-based versus simulation-based EVA results. The solid line depicts the EVA expected uncertainty reduction generated from the ensemble of 200 reservoir simulations, i.e., the same as presented in Figure 3. The crosses represent EVA results generated from an ensemble of 200 proxy evaluations located at the exact same points in parameter space as the reservoir simulations. As seen in Figure 6, because of the high quality of the proxy, the results using 200 proxy evaluations are effectively the same as those using simulations. The circles in Figure 6 represent the EVA results generated from an ensemble of 50,000 proxy evaluations. Because of the large size of the ensemble, these are considered converged results. It can be seen in Figure 6 that results using only a 200-member ensemble match the converged results reasonably well. Therefore we presume for this case that the 200-member ensemble is sufficiently large to capture the covariance accurately.
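The ten-fold cross-validation check described above can be sketched as follows, again with scikit-learn's NuSVR standing in for LIBSVM; X and y are hypothetical training arrays.

```python
# Sketch of the ten-fold cross-validation quality check (our illustration).
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.svm import NuSVR

def proxy_cv_correlation(X, y, folds=10):
    """Correlation between true responses and cross-validated proxy predictions."""
    y_hat = cross_val_predict(NuSVR(nu=1e-10, C=1e10), X, y, cv=folds)
    return np.corrcoef(y, y_hat)[0, 1]
```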

Figure 6—Expected uncertainty reduction in objective function over time


Finally, we note that besides validating the uncertainty reduction convergence of the proposed EVA workflow, the above analysis also supports the use of the ν-SVR proxies in the next section, where the EVA method is benchmarked against the recently proposed PBPA method, which, as indicated by its name, requires the use of proxies.

Benchmarking EVA with PBPA

Proxy-Based Pilot Analysis

The EVA method relies on the assumption of a multi-Gaussian distribution for the objective function and the measurement data. In order to evaluate the impact of this assumption on results, we need to benchmark EVA against a statistically rigorous method that does not rely on distribution assumptions. One of the few methods that satisfy this requirement is the proxy-based pilot analysis (PBPA) method recently proposed by He et al. (2015, 2016a). Given the prior uncertainty distribution(s) and the pilot design, the basic idea behind the PBPA method is to generate multiple plausible realizations of the pilot measurement data and to then perform one probabilistic history matching run for each plausible data realization (assuming it to be true) to obtain the corresponding posterior distribution. The expected posterior uncertainty is then simply the expected value of the uncertainty metric over all plausible posterior distributions. In He et al. (2016a), the probabilistic history matching step is performed using the rejection sampling method. Because of the large number of reservoir simulations required, the use of numerical proxies is necessary to make the method computationally feasible. An illustration of the entire PBPA workflow is provided in Figure 7, modified from He et al. (2016a).

Figure 7—Schematic of the proxy-based pilot analysis (PBPA) workflow. Modified from He et al. (2016a)

The PBPA method does not rely on any particular distribution assumptions. When the proxies are accurate, the PBPA method will yield the theoretically correct posterior distributions and expected posterior uncertainties. Therefore, its result can be used as a benchmark for the EVA result.

Setup of the Two Methods

In order to ensure that results from the PBPA and the EVA methods are comparable, both methods use the ν-SVR proxy constructed in the previous section as the true forward model, so that inaccuracy due to proxies is ruled out.


For the EVA, 50,000 samples from the ν-SVR proxies are used to construct the covariance matrices to additionally rule out any inaccuracy related to covariance convergence. Further, the error model for the two methods should be the same to ensure that both are solving the same Bayesian problem. In the EVA method, the error e is assumed to follow an independent Gaussian distribution. In the PBPA method, rejection sampling as originally proposed in He et al. (2016a) was based on filters and, therefore, requires modification to accommodate the same error model. Accordingly, the acceptance probability of a model with simulated data d_sim in rejection sampling in the PBPA method is defined as

P_accept(d_sim) = exp(−(1/2)(d_sim − d_obs)ᵀ Σ_e⁻¹ (d_sim − d_obs))    (14)

Benchmarking Result for Non-Gaussian Cases

The results of PBPA and EVA are compared for two non-Gaussian scenarios. Scenario 1 follows the previous sections where, as shown in Figure 2, all uncertainty parameters have uniform distributions. In Scenario 2, as shown in Figure 8, all uncertainty parameters can only take values of -1, 0 or 1 and follow uniform discrete distributions. As depicted in Figures 2 and 8, although the parameter distributions are highly non-Gaussian, the objective function distributions still resemble Gaussian distributions. For the measurement data, the WCT distribution resembles a Gaussian distribution while the BHP distribution displays noticeable skewness.
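For concreteness, the acceptance test of Eq. 14 can be sketched as below; the variable names and the treatment of one plausible data realization as true are our own illustrative choices.

```python
# Minimal sketch of the modified rejection sampling in Eq. 14 (illustrative).
# D_sim: (n, n_d) simulated data for n prior samples; J: (n,) objective values;
# d_obs: one plausible data realization treated as true; err_std: per-point
# standard deviations of the independent Gaussian error.
import numpy as np

rng = np.random.default_rng(0)

def rejection_sample(J, D_sim, d_obs, err_std):
    """Posterior samples of J conditional on d_obs under the Eq. 14 test."""
    r = (D_sim - d_obs) / err_std                    # standardized misfit
    p_accept = np.exp(-0.5 * np.sum(r * r, axis=1))  # Eq. 14 with diagonal S_e
    return J[rng.random(len(J)) < p_accept]
```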

Figure 8—Distributions of parameters, objective function and data for Scenario 2

Figure 9 shows the time-dependent expected uncertainty reduction for the objective function calculated using the PBPA and EVA methods. Results from Scenario 1 and Scenario 2 are shown on the left and right, respectively, and results using data sets wct_p15 and bhp_p15 are shown in blue and red, respectively. For both scenarios and for both data sets, the EVA results (dashed lines with markers) follow the trends of the PBPA results (solid lines) reasonably well. The match for wct_p15 is better than for bhp_p15, possibly due to the skewness in the distribution of the bhp_p15 data shown in Figures 2 and 8. More importantly, the comparison clearly shows that the EVA method always provides a conservative estimate (minimum amount) of the expected uncertainty reduction, a behavior guaranteed by Eq. 7.


Figure 9—Comparison of expected uncertainty reduction for the objective function by EVA and PBPA

Figure 10 shows the expected uncertainty reductions for the model parameters calculated using the PBPA and EVA methods in Scenario 1. Results using wct_p15 are shown in Figure 10(a), while results using bhp_p15 are shown in Figure 10(b). The same comparison for Scenario 2 is shown in Figure 11. The inaccuracy of the EVA relative to the PBPA method is noticeable in Figures 10 and 11. This is expected because the parameter distributions are highly non-Gaussian in both scenarios. However, it is also shown that the EVA method consistently provides a lower estimate of the expected uncertainty reduction compared with the PBPA results. More importantly, it preserves the ranking and the relative magnitude of the heavy hitters.

Figure 10—Comparison of expected uncertainty reduction for parameters from wct_p15 and bhp_p15 by EVA and PBPA for Scenario 1


Figure 11—Comparison of expected uncertainty reduction for parameters from wct_p15 and bhp_p15 by EVA and PBPA for Scenario 2

Last, we note that the PBPA method fails to produce a result for the case when both sets of data are used. The primary reason is that as more data are used with tighter tolerances (smaller error), the posterior uncertainty becomes lower and the posterior distribution becomes more concentrated, making it difficult for random samples to be accepted in the rejection sampling process. The number of samples required grows rapidly as the number of parameters increases or the tolerance decreases. This is a known issue with the PBPA method (He et al., 2015), and Chen et al. (2016) proposed the use of MCMC in place of rejection sampling to alleviate the problem. The EVA method, on the other hand, is not sensitive to the number of parameters or to the error tolerance. This is one major advantage of the EVA method over the PBPA method.

Summary of the Benchmarking Study

Our observations from the benchmarking study are summarized as follows:

• Even when the parameters are strongly non-Gaussian, common objective functions and measurement data derived from reservoir simulation may resemble Gaussian distributions.
• The expected uncertainty reduction quantified by the EVA method matched the corresponding benchmark results from the PBPA method well.
• The inaccuracy of EVA results increases as the problem deviates from the multi-Gaussian assumption (e.g., when non-Gaussian uncertainty parameters are considered as the objective functions, or when the measurement data set is non-Gaussian).
• The expected uncertainty reduction predicted by the EVA method is always lower than that of the PBPA method, as guaranteed by Eq. 7.
• Despite some inaccuracies, the expected uncertainty reduction for model parameters predicted by EVA preserves the ranking and the relative magnitude of the heavy hitters when compared to the PBPA method.

Additionally, Table 2 presents a qualitative comparison of the EVA and PBPA methods relative to practical aspects. For example, the EVA method is based on a multi-Gaussian assumption and is guaranteed to provide a conservative estimate (lower bound) of the expected uncertainty reduction. On the other hand, the PBPA method uses a rigorous sampler (rejection sampling) and does not rely on distribution assumptions or linearity, although it does rely heavily on the availability and accuracy of the proxies. In practice, when the quality of the proxies is not as good as in this benchmarking study, PBPA is not necessarily more accurate than EVA. Further, the PBPA method may fail in practice for problems with a large number of parameters,


due to the challenge of constructing accurate proxies, or may fail when the acceptance tolerance is so tight that the rejection sampler cannot efficiently sample the concentrated posterior distribution.

Table 2—Comparison of EVA and PBPA for predicting expected uncertainty reduction

Aspect | EVA | PBPA
Major assumption | Objective function and measurement data jointly follow a multi-Gaussian distribution | Proxies for the objective function and measurement data are accurate
Theoretical appeal | Estimates of expected uncertainty reduction are guaranteed to be conservative | Does not rely on assumptions about distributions or linearity
Accuracy | Accurate when the multi-Gaussian assumption is valid | Accurate when the proxies are accurate
Robustness | Robust, with predictable behavior | May fail due to lack of posterior samples when the tolerance is low or the filtering criteria are tight
Handling a large number of model parameters | Not sensitive to the number of parameters | Not suitable for problems with > 30 parameters, as proxy quality would be low
Handling a large number of data points | Needs more simulations for the covariance matrices to converge; can be alleviated by performing order reduction on the data | Number of proxies needed increases with the number of data points, resulting in higher computational cost

As a general rule of thumb, when the number of parameters is small and high-quality proxies are easy to obtain for both the objective function and the measurement, PBPA is preferred for its accuracy and theoretical rigor. When obtaining quality proxies is a challenge, such as in cases with many model parameters, EVA may be a more viable alternative.

Value of Information

The studies in the previous sections involve only one pilot project performance metric, the expected uncertainty reduction from pilot data. This metric provides only a partial picture of pilot performance, as it does not take into account factors such as the full-field development decision that the pilot data are going to inform. As noted in Barros et al. (2014), a pilot would not have any monetary value if it does not have the potential to change the full-field decision. VOI, which is the value of the pilot based on its potential to improve the full-field decision, is a more complete metric characterizing the economic feasibility of the pilot. In order to quantify VOI, a decision analysis problem should be formulated that considers the full-field decision alternatives, their associated revenues and costs, as well as the cost of the pilot(s).

Setup of the Decision Analysis Problem

Figure 12 shows the decision tree for the pilot VOI problem. A decision tree depicts the components and the structure of the decision problem in chronological order from left to right. There are three kinds of nodes in a decision tree: decision nodes, chance nodes and end nodes. Decision nodes (shown as blue squares) represent a decision to be made, with branches for the possible alternatives to choose from. Chance nodes (shown as black circles) represent events with random outcomes, with branches for the possible outcomes. An end node (shown as a black triangle) represents the net present value of a scenario whose corresponding uncertainties and decisions are given by the path that reaches that node.


Figure 12—Decision tree for the pilot value of information problem

Figure 12 depicts the decision analysis problem for the Brugge injection pilot. From left to right, the first decision is whether or not to execute a pilot. If we decide to execute the pilot, the data from the pilot will be an event of uncertain outcome. Assuming that we have obtained a certain data outcome from the pilot, we are then faced with a decision node for whether or not to execute the full-field waterflood development based on the information obtained from the pilot. This decision node is followed by a chance node that represents the uncertain outcome of the full-field waterflood development. Following each branch of the chance nodes is an end node that represents the net present value (NPV) of the entire project (pilot plus full-field development). Note that because NPV is the primary value measurement for the purpose of decision analysis, it is defined as our objective function J. For simplicity, in this example we ignore the uncertainty in the primary depletion (business as usual) case. The NPV at the end node is defined as follows:

NPV = ∑_i [p_o Q_o,i − p_wp Q_wp,i − p_wi Q_wi,i] / (1 + d)^i − C_ffd − C_pilot    (15)

where Q_o,i, Q_wp,i and Q_wi,i are the cumulative oil production, water production and water injection, respectively, for year i; p_o, p_wp and p_wi are the oil price, produced water cost and injected water cost, respectively; d is the annual discount rate; C_ffd is the discounted project cost of the full-field development (ffd = pd, wf, where pd stands for primary depletion and wf for waterflood); and C_pilot is the incremental cost of the pilot compared to the no-pilot case. The values chosen for the economic parameters are summarized in Table 3. Note that the incremental cost of the pilot differs depending on the full-field development option chosen. If waterflood is chosen, both pilot wells can be reused as part of the full-field development and thus the incremental cost is assumed to be zero. If primary depletion is chosen, then only the producer can be reused as part of the full-field development, and the incremental cost in this case is set to $10MM.

Table 3—Summary of economic parameters

Parameter | Value
p_o | $45/bbl
p_wp | $2/bbl
p_wi | $1/bbl
d | 10%
C_wf | $3B
C_pd | $200MM
C_pilot for WF | $0
C_pilot for PD | $10MM
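A sketch of the end-node NPV of Eq. 15 using the Table 3 parameter values is given below; the yearly production and injection arrays are hypothetical inputs.

```python
# Sketch of the end-node NPV in Eq. 15 with Table 3 economics (illustrative).
# q_o, q_wp, q_wi: yearly oil/water-production/water-injection volumes in bbl.
def npv(q_o, q_wp, q_wi, C_ffd, C_pilot, p_o=45.0, p_wp=2.0, p_wi=1.0, d=0.10):
    """Discounted cash flows minus development and incremental pilot costs."""
    cash = sum((p_o * qo - p_wp * qwp - p_wi * qwi) / (1.0 + d) ** (i + 1)
               for i, (qo, qwp, qwi) in enumerate(zip(q_o, q_wp, q_wi)))
    return cash - C_ffd - C_pilot
```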

Using the economic parameters above, the prior distributions of the NPV for both the primary depletion (PD) and waterflood (WF) alternatives are shown in Figure 13. The mean NPV of the WF alternative ($1.126B) is slightly higher than that of the PD alternative ($1.095B), although the standard deviation of the WF alternative is larger, representing greater project risk.

Figure 13—Prior distribution of NPV for PD and WF scenarios

To simplify the discussion, it is assumed that the impact of the pilot on the full-field (PD or WF) recovery is negligible. This is usually a good assumption because a pilot is typically localized and spans only a short duration compared with the full-field development. This assumption allows us to use the same set of simulation models and the same prior distributions for the pilot branch and the no-pilot branch. It also allows us to isolate the value of information from the impact of different pilot implementations, so that the result is easier to interpret. In the case where the impact of the pilot on full-field recovery is not negligible, two sets of experimental design (ED) simulations would need to be run for the pilot and no-pilot branches to estimate the respective distributions.

Solving the Decision Tree with EVA

The detailed solution process for a decision tree can be found in Howard and Abbas (2015). Basically, the decision tree is solved backwards from the end nodes to the root node. The value of a chance node is the certainty equivalence of all its branches. Certainty equivalence (CE) is the deterministic value that a decision maker deems equivalent to an event with a probabilistic outcome. Using an exponential utility function, the certainty equivalence of a chance node equals the expected value of its branches penalized by the variance among the branches, i.e.,

CE = E[v] − (α/2) Var[v]    (16)

where v denotes the value of the branches and α is the risk aversion coefficient that characterizes the risk preference of the decision maker, with α > 0 indicating risk aversion and α = 0 indicating risk neutrality. In this work, we use α = 0.1/σ_J = 5.1 × 10⁻¹⁰ $⁻¹. Using Eq. 16 we can evaluate all chance nodes in the decision tree, and the value of a decision node is then simply the maximum of the values of its branches.


Finally, the VOI of the pilot is quantified by the difference between the values of the "Pilot" branch and the "No-Pilot" branch.

The inputs required in the solution process include the probability of each branch of every chance node and the value of every end node. The EVA workflow can be used to provide these inputs efficiently. Firstly, the uncertainty of the pilot data outcome is described by an ensemble of n_s simulations in EVA; therefore, there will be n_s branches for the pilot data chance node. Because the n_s simulations in EVA are equally probable, these branches have the same probability of 1/n_s. In order to evaluate the full-field waterflood outcome chance node conditional on a pilot data realization d_i using Eq. 16, we need the posterior mean (μ_J|d_i) and variance (σ²_J|d_i) of the NPV. These are given by Eq. 8 and Eq. 4, respectively, in the EVA workflow. Both the simulation-based EVA and the proxy-based EVA can be used to provide input to the decision tree evaluation. The input for the results presented here is derived from the simulation-based EVA using 200 simulations; therefore, there are N = n_s = 200 branches for each measurement data chance node.

Decision Analysis Results

The result of the decision tree analysis is shown in red in Figure 14. On the no-pilot branch, the CEs for the PD and WF alternatives are $1.095B and $1.116B, respectively. Therefore, without the pilot the WF alternative is marginally better and would be selected, corresponding to a no-pilot branch value of $1.116B. On the pilot branch, however, the full-field decision depends on the outcome of the pilot. As shown in Figure 15, out of the 200 branches of plausible data realizations, the WF alternative is chosen 59% of the time and PD is chosen for the remaining 41% of the branches. For this latter 41% of the branches, the pilot data indicate an unfavorable WF, as the CE for the WF alternative, after conditioning to the data, is lower than that for the PD alternative. In other words, the information from the pilot allows the decision maker to avoid a low-side waterflood development outcome, increasing the value of the pilot branch to $1.173B. The VOI (net benefit after considering the pilot cost) is $56.9MM.
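Putting these pieces together, the following sketch shows one way the EVA outputs could feed the tree evaluation; it is a simplified illustration (the incremental pilot cost is assumed to be folded into the inputs), not the paper's implementation.

```python
# Simplified sketch of the backward tree evaluation (our illustration).
# mu_wf_post: (n_s,) posterior WF NPV means from Eq. 8; var_wf_post: the
# single posterior variance from Eq. 4; npv_pd: deterministic PD NPV;
# mu_wf, var_wf: prior WF NPV mean/variance; alpha: risk aversion coefficient.
import numpy as np

def voi_from_eva(mu_wf_post, var_wf_post, npv_pd, mu_wf, var_wf, alpha):
    ce = lambda mu, var: mu - 0.5 * alpha * var                 # Eq. 16
    v_no_pilot = max(npv_pd, ce(mu_wf, var_wf))                 # decide on prior info
    v_branch = np.maximum(npv_pd, ce(mu_wf_post, var_wf_post))  # decide per data branch
    v_pilot = ce(v_branch.mean(), v_branch.var())               # CE over equally likely branches
    return v_pilot - v_no_pilot
```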

Figure 14—Decision tree showing the VOI solution
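For illustration, the sketch below (a simplified Python illustration, not the authors' implementation; mu_post, sig2_post, and the dollar figures are placeholders standing in for the output of Eqs. 8 and 4) shows how the pilot-branch value and the VOI could be assembled from per-branch posterior NPV moments:

```python
import numpy as np

def ce(values, alpha, probs=None):
    """Mean-variance certainty equivalence (Eq. 16) of a chance node,
    with equal branch probabilities unless supplied."""
    v = np.asarray(values, dtype=float)
    p = np.full(v.size, 1.0 / v.size) if probs is None else np.asarray(probs)
    m = float(np.sum(p * v))
    return m - 0.5 * alpha * float(np.sum(p * (v - m) ** 2))

alpha = 5.1e-10          # risk aversion coefficient ($^-1), as in the paper
ce_pd = 1.095e9          # CE of the PD alternative (insensitive to pilot data)

# Placeholder posterior NPV moments, one pair per plausible data
# realization (ns = 200 branches of the pilot data chance node).
rng = np.random.default_rng(0)
mu_post = 1.116e9 + 2.0e8 * rng.standard_normal(200)
sig2_post = np.full(200, (1.5e8) ** 2)

ce_wf = mu_post - 0.5 * alpha * sig2_post  # WF CE for each data branch
best = np.maximum(ce_wf, ce_pd)            # full-field decision per branch
v_pilot = ce(best, alpha)                  # value of the pilot data node
voi_gross = v_pilot - 1.116e9              # vs. best no-pilot alternative (WF)
```

The option to switch to PD on unfavorable data branches is exactly what makes v_pilot exceed the no-pilot value in the analysis above.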


Figure 15—Percentage of change in full-field decision before and after pilot

The same process can be applied to other pilot designs to quantify their VOI so that the best pilot alternative can be selected. As an example, we consider two additional pilot designs, named Pilot 2 and Pilot 3 (the design analyzed above is referred to as Pilot 1). Pilot 2 is identical to Pilot 1 except that well pair P05 and I01 is used as the pilot wells instead of P15 and I07. Pilot 3 is identical to Pilot 1 except that data are collected for only six months instead of two years. The VOI workflow described above was applied to these two additional pilot alternatives, and Table 4 summarizes the results. Pilot 1 clearly has the highest VOI. The VOI of Pilot 2 is low because there is no water breakthrough in well P05 in any of the 200 runs; the WCT data from this well therefore contribute nothing to uncertainty reduction. The VOI of Pilot 3 is quite close to that of Pilot 1 even though it takes only one quarter of the duration; in terms of pilot optimization, Pilot 3 may therefore be the preferable design.

Table 4—Summary of VOI results for three different pilot design alternatives

Plan No.    Pilot Wells    Duration    VOI (MM$)
Pilot 1     P15, I07       2 years     56.9
Pilot 2     P05, I01       2 years     17.8
Pilot 3     P15, I07       6 months    53.1

Conclusion and Discussion

In this paper we proposed a method called ensemble variance analysis (EVA) to quantify the expected uncertainty reduction and value of information for data acquisition projects such as a pilot. Based on a multi-Gaussian assumption, the EVA method exploits the correlation between the objective function and the multi-dimensional data series to estimate the expected uncertainty reduction. The result of EVA was benchmarked against the reference solution from the recently proposed PBPA method. It was shown that the EVA method provides reasonably accurate estimates of the expected uncertainty reduction of the objective function. In addition, the expected uncertainty reduction estimated by EVA is always guaranteed to be conservative. Furthermore, the expected uncertainty reduction from EVA preserves the ranking and the relative magnitude of the heavy-hitters. We reiterate that in the benchmarking study the proxies are assumed to be perfect so that the PBPA result can be used as the reference solution. In reality, the quality of proxies is often questionable and, in such cases, PBPA is not necessarily more accurate than the EVA method. The EVA method additionally has several distinct advantages compared with PBPA. First, it does not necessarily require the use of proxies (although it can work with proxy results).


Second, it is not sensitive to the number of parameters in the model. Last, it is more robust than the PBPA method for cases with tight tolerances.

We also demonstrated the use of the EVA results for decision analysis. EVA provides estimates of the mean and variance of each plausible posterior distribution, which can then be used as input to a decision tree to evaluate the value of information. The VOI estimates from the EVA method reveal the performance differences among pilot designs and can thus be used as an important metric for optimal design selection.

In this work, these methodologies were demonstrated using the Brugge model, a synthetic case modeled after a real field. The method is also being applied to several field cases. Gaps in the workflow and lessons learned relative to practical application are summarized below and may serve as topics for further investigation. We note that many of the issues below are not specific to the EVA method.

• In this work, the uncertainty characterization of the problem (i.e., the 11 uncertainty parameters and their ranges and distributions) is taken as a given. In field applications, proper characterization/parameterization of uncertainty in the model is a key challenge. In many data acquisition projects, the data collected are localized to one or a few wells with a limited radius of investigation, whereas the objective function that the program tries to inform usually concerns the entire field. A traditional uncertainty characterization that uses global multipliers can lead to artificially high correlation between local data and global objectives and, therefore, overestimate the expected uncertainty reduction and value of information. A good uncertainty characterization for the purpose of pilot evaluation is one that properly captures this local-global relationship; this is a topic requiring further investigation.

• In this work, the error in the measurement data is taken as a given. As is apparent in the EVA equations, the error term can have a strong influence on the result: the larger the error, the less confidence we have in the measurement data and the smaller the expected uncertainty reduction (see the sketch following this list). In field applications, this error is hard to estimate as it is a combination of many different factors, as discussed previously. Furthermore, the independent Gaussian assumption on the error distribution, which is also made in many other studies of ensemble-based methods (Emerick and Reynolds, 2013; He et al., 2013; Wen and Chen, 2006), may not always be appropriate. Further investigation is required into the proper characterization of this error.

• The applications in this work use relatively simple well control strategies in which injectors and producers operate on constant rate control. In field applications, well control strategies can be considerably more complex. One example is the use of guide-rate balancing (Guyaguler et al., 2008), in which the target production rate is dynamically allocated to each producer at each simulation time step. This type of dynamic control results in a situation where the well controls for each simulation in the ensemble may differ, and differences in controls across ensemble members would add significant noise to the data-objective function relationship and mask the true physics. Another example is when producers operate on minimum tubing-head pressure (THP) control, resulting in well shut-ins as reservoir pressure declines. In this case, the same well across different simulations in the ensemble may have different shut-in times. This could lead to missing data in the data matrix, making it impossible for the method to proceed. Further investigation is required to understand how to properly treat dynamic well control strategies. For now, we recommend that modeling of a pilot be performed as a controlled experiment; in other words, dynamic constraints should be removed to the maximum extent possible to ensure that all members of the ensemble operate under the same well controls.
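To illustrate the second point, the toy sketch below (our illustration, using the scalar Gaussian-conditioning form that underlies EVA-style updates; all numbers are made up) shows how inflating the measurement-error variance shrinks the expected reduction in the variance of the objective function:

```python
# Scalar Gaussian-conditioning illustration: the reduction in the
# variance of J from observing one datum d is cov(J,d)^2 / (var(d) + var_err),
# so a larger error variance yields a smaller expected uncertainty reduction.
var_j = 1.0    # prior variance of the objective function J (made-up)
cov_jd = 0.8   # covariance between J and the datum d (made-up)
var_d = 1.0    # variance of the noise-free datum (made-up)

for var_err in (0.0, 0.5, 2.0, 10.0):
    reduction = cov_jd**2 / (var_d + var_err)  # var_j - var(J | d)
    print(f"error variance {var_err:5.1f} -> variance reduction {reduction:.3f}")
```

As var_err grows, the reduction tends to zero: an arbitrarily noisy measurement informs nothing, which is why a defensible error model is essential for a credible VOI estimate.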


Acknowledgement

This work benefited tremendously from inspiring discussions with Dr. Ning Liu, Dr. Gary Reedy, Dr. Kaveh Dehghani, Dr. Sebastien Strebelle, Dr. Sarah Vitel, Dr. Robin Hui, Dr. Yanbin Zhang, Dr. Matthieu Rousset, Dr. Zhiming Wang, and Mr. Weijun Zhu.

References

Aanonsen S.I., Oliver D.S., Reynolds A.C., and Valles B. 2009. Ensemble Kalman filter in reservoir engineering - a review. SPE Journal 14(3): 393–412.
Ballin P.R., Ward G.S., Whorlow C.V., Khan T., et al. 2005. Value of Information for a 4D-Seismic Acquisition Project. SPE Latin American and Caribbean Petroleum Engineering Conference, Society of Petroleum Engineers.
Barros E., Jansen J., and Van den Hof P. 2014. Value of information in closed-loop reservoir management. ECMOR XIV - 14th European Conference on the Mathematics of Oil Recovery.
Barros E., Jansen J., and Van den Hof P. 2015. Proxy-Based Workflow for A Priori Evaluation of Data Acquisition Programs. Computational Geosciences 20(3): 737–749.
Cameron D.A. 2013. Optimization and Monitoring of Geological Carbon Storage Operations. Ph.D. thesis, Stanford University.
Castellini A., Gross H., Zhou Y., He J., and Chen W. 2010. An iterative scheme to construct robust proxy models. Proceedings of the 12th European Conference on the Mathematics of Oil Recovery, Oxford, UK.
Chang C.C. and Lin C.J. 2011. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST) 2(3): 27.
Chen B., He J., Wen X.H., Chen W., and Reynolds A. 2016. Pilot Design Analysis using Proxies and Markov Chain Monte Carlo Method. ECMOR XIV - 14th European Conference on the Mathematics of Oil Recovery.
Eaton M.L. 1983. Multivariate statistics: a vector space approach. Wiley, New York.
Emerick A.A. and Reynolds A.C. 2013. Investigation of the sampling performance of ensemble-based methods with a simple reservoir model. Computational Geosciences 17(2): 325–350.
Geisser S. 1993. Predictive Inference, volume 55. CRC Press.
Gerhardt J., Haldorsen H., et al. 1989. On the value of information. Offshore Europe, Society of Petroleum Engineers.
Guyaguler B., Byer T.J., et al. 2008. A new rate-allocation-optimization framework. SPE Production & Operations 23(4): 448–457.
Harville D. 2003. The expected value of a conditional variance: An upper bound. Journal of Statistical Computation and Simulation 73(8): 609–612.
He J., Sarma P., and Durlofsky L.J. 2013. Reduced-order flow modeling and geological parameterization for ensemble-based data assimilation. Computers & Geosciences 55: 54–69.
He J., Xie J., Sarma P., Wen X.H., Chen W., and Kamath J. 2015. Model-Based A Priori Evaluation of Surveillance Programs Effectiveness using Proxies (SPE paper 173229). SPE Reservoir Simulation Symposium, Houston, Texas, USA.
He J., Xie J., Sarma P., Wen X.H., Chen W., and Kamath J. 2016a. Proxy-Based Workflow for A Priori Evaluation of Data Acquisition Programs. SPE Journal 21(4): 1400–1412.
He J., Xie J., Wen X.H., and Chen W. 2016b. An alternative proxy for history matching using proxy-for-data approach and reduced order modeling. Journal of Petroleum Science and Engineering 146: 392–399.
Howard R.A. and Abbas A.E. 2015. Foundations of Decision Analysis. Prentice Hall.
Koninx J.P.M. et al. 2001. Value of information: From cost cutting to value creation. Journal of Petroleum Technology 53(4): 84–92.
Landa J.L. 1997. Reservoir parameter estimation constrained to pressure transients, performance history and distributed saturation data. Ph.D. thesis, Stanford University.
Le D.H. and Reynolds A.C. 2014. Optimal choice of a surveillance operation using information theory. Computational Geosciences 18(3-4): 505–518.
Le D.H., Reynolds A.C., et al. 2014. Estimation of Mutual Information and Conditional Entropy for Surveillance Optimization. SPE Journal 19(4): 648–661.
Moore C. and Doherty J. 2005. Role of the calibration process in reducing model predictive error. Water Resources Research 41(5).
Nævdal G., Johnsen L.M., Aanonsen S.I., and Vefring E.H. 2005. Reservoir monitoring and continuous model updating using ensemble Kalman filter. SPE Journal 10(1): 66–74.
Satija A. and Caers J. 2015. Direct forecasting of subsurface flow response from non-linear dynamic data by linear least-squares in canonical functional principal component space. Advances in Water Resources 77: 69–81.


Sun W., Durlofsky L., and Hui M. 2016. Production Forecasting and Uncertainty Quantification for a Naturally Fractured Reservoir using a New Data-Space Inversion. ECMOR XV - 15th European Conference on the Mathematics of Oil Recovery.
Van Leeuwen P.J. and Evensen G. 1996. Data assimilation and inverse methods in terms of a probabilistic formulation. Monthly Weather Review 124(12): 2898–2913.
Walker G.J. and Lane H.S. 2007. Assessing the Accuracy of History-Match Predictions and the Impact of Time-Lapse Seismic Data: A Case Study for the Harding Reservoir (SPE paper 106019). SPE Reservoir Simulation Symposium, Houston, TX.
Wen X.H. and Chen W.H. 2006. Real-Time Reservoir Model Updating Using Ensemble Kalman Filter with Confirming Option. SPE Journal 11(4): 431–442.
Yeten B., Castellini A., Guyaguler B., and Chen W.H. 2005. A comparison study on experimental design and response surface methodologies. SPE Reservoir Simulation Symposium.
