Statistics 580
Monte Carlo Methods

Introduction

Statisticians have used simulation to evaluate the behaviour of complex random variables whose precise distribution cannot be evaluated exactly by theory. For example, suppose that a statistic T based on a sample x1, x2, ..., xn has been formulated for testing a certain hypothesis; the test procedure is to reject H0 if T(x1, x2, ..., xn) ≥ c. Since the exact distribution of T is unknown, the value of c has been determined such that the asymptotic type I error rate is α = .05 (say), i.e., Pr(T ≥ c | H0 is true) = .05 as n → ∞. We can study the actual small-sample behavior of T by the following procedure:

a. generate x1, x2, ..., xn from the appropriate distribution, say, the Normal distribution, and compute Tj = T(x1, x2, ..., xn),

b. repeat N times, yielding a sample T1, T2, ..., TN,

c. compute the proportion of times Tj ≥ c as an estimate of the error rate, i.e.,

   α̂(N) = (1/N) Σ_{j=1}^N I(Tj ≥ c) = #(Tj ≥ c)/N,

   where I(·) is the indicator function,

d. note that α̂(N) converges almost surely to α, the type I error rate.
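As a concrete illustration of steps (a)–(d), here is a minimal Python sketch. The statistic T (a one-sided t-type statistic), the sample size n = 10, and the asymptotic cutoff c = 1.645 are illustrative choices, not part of the notes.

```python
import numpy as np

# Illustrative settings (not from the notes): a one-sided test of H0: mu = 0
# based on T = sqrt(n) * xbar / s, using the asymptotic 5% cutoff c = 1.645.
rng = np.random.default_rng(1)
n, N, c = 10, 10_000, 1.645

rejections = 0
for _ in range(N):
    x = rng.normal(loc=0.0, scale=1.0, size=n)   # step (a): sample under H0
    T = np.sqrt(n) * x.mean() / x.std(ddof=1)    # compute the statistic T
    rejections += (T >= c)                        # step (c): count T >= c

alpha_hat = rejections / N                        # estimate of the type I error rate
se = np.sqrt(alpha_hat * (1 - alpha_hat) / N)     # binomial standard error
print(f"estimated type I error: {alpha_hat:.4f} (SE {se:.4f})")
```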
Results of such a study may provide evidence to support the asymptotic theory even for small samples. Similarly, we may study the power of the test under different alternatives using Monte Carlo sampling. The study of Monte Carlo methods involves learning about (i) methods available to perform step (a) above correctly and efficiently, and (ii) how to perform step (c) above efficiently, by incorporating "variance reduction" techniques to obtain more accurate estimates.

In 1908, Monte Carlo sampling was used by W. S. Gossett, who published under the pseudonym "Student", to approximate the distribution of z = √n(x̄ − µ)/s, where (x1, x2, ..., xn) is a random sample from a Normal distribution with mean µ and variance σ², and s² is the sample variance. He writes: "Before I had succeeded in solving my problem analytically, I had endeavoured to do so empirically [i.e., by simulation]. The material used was a ... table containing the height and left middle finger measurements of 3000 criminals.... The measurements were written out on 3000 pieces of cardboard, which were then very thoroughly shuffled and drawn at random ... each consecutive set of 4 was taken as a sample ... [i.e., n above is = 4] ... and the mean [and] standard deviation of each sample determined.... This provides us with two sets of ... 750 z's on which to test the theoretical results arrived at. The height and left middle finger ... table was chosen because the distribution of both was approximately normal..."
The object of many simulation experiments in statistics is the estimation of an expectation of the form E[g(X)], where X is a random vector. Suppose that f(x) is the joint density of X. Then

θ = E[g(X)] = ∫ g(x) f(x) dx.
Thus, virtually any Monte Carlo procedure may be characterized as a problem of evaluating an integral (a multidimensional one, as in this case). For example, if T(x) is a statistic based on a random sample x, computing the following quantities involving functions of T may be of interest:

the mean,             θ1 = ∫ T(x) f(x) dx,
the tail probability, θ2 = ∫ I(T(x) ≤ c) f(x) dx,
or the variance,      θ3 = ∫ (T(x) − θ1)² f(x) dx = ∫ T(x)² f(x) dx − θ1²,

where I(·) is the indicator function.
Crude Monte Carlo

As outlined in the introduction, applications of simulation in Statistics involve the computation of an integral of the form ∫ g(x) f(x) dx, where g(x) is a function of a statistic computed on a random sample x. A simple estimate of the above integral can be obtained by using the following simulation algorithm:

1. Generate X1, X2, ..., XN.
2. Compute g(X1), g(X2), ..., g(XN).
3. Estimate θ̂(N) = (1/N) Σ_{j=1}^N g(Xj).

Here N is the simulation sample size, and note that

• θ̂(N) → θ almost surely;

• if var(g(X)) = σ², the usual estimate of σ² is

  s² = Σ_{j=1}^N (g(Xj) − θ̂)² / (N − 1);

• thus S.E.(θ̂) = s/√N, and for large N the Central Limit Theorem could be used to construct approximate confidence bounds for θ:

  [θ̂(N) − zα/2 s/√N,  θ̂(N) + zα/2 s/√N];

• clearly, the error in θ̂ is O(N^(−1/2)), i.e., its standard error is proportional to 1/√N, and it depends on the value of s².
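A minimal Python sketch of this algorithm follows; the choice g(x) = x² with X ∼ N(0, 1), for which θ = 1, is only an illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000
z_alpha2 = 1.96                       # z_{alpha/2} for a 95% interval

X = rng.normal(size=N)                # 1. generate X_1, ..., X_N
gX = X**2                             # 2. compute g(X_j); here g(x) = x^2, so theta = E[X^2] = 1
theta_hat = gX.mean()                 # 3. estimate theta_hat(N)

s = gX.std(ddof=1)                    # estimate of sigma = sd(g(X))
se = s / np.sqrt(N)                   # S.E.(theta_hat) = s / sqrt(N)
ci = (theta_hat - z_alpha2 * se, theta_hat + z_alpha2 * se)
print(f"theta_hat = {theta_hat:.4f}, SE = {se:.4f}, 95% CI = ({ci[0]:.4f}, {ci[1]:.4f})")
```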
Efficiency of Monte Carlo Simulation

Suppose as usual that it is required to estimate θ = E[g(X)]. Then the standard simulation algorithm as detailed previously gives the estimate

θ̂(N) = (1/N) Σ_{j=1}^N g(Xj)

with standard error s(θ̂) = s/√N. One way to measure the quality of the estimator θ̂(N) is by the half-width, HW, of the confidence interval for θ. For a fixed α it is given by

HW = zα/2 √( Var(g(X)) / N ).

To increase the precision of the estimate, HW needs to be made small, but sometimes this is difficult to achieve. This may be because Var(g(X)) is too large, or because too much computational effort is required to simulate each g(Xj) so that the simulation sample size N cannot be made large, or both. If the effort required to produce one value of g(Xj) is E, the total effort for estimation of θ by a Monte Carlo experiment is N × E. Since achieving a precision of HW requires a sample size of at least

N ≥ (zα/2 / HW)² × Var(g(X)),

the quantity Var(g(X)) × E is proportional to the total effort required to achieve a fixed precision. This quantity may be used to compare the efficiency of different Monte Carlo methods. When the E's are similar for two methods, the method with the lower variance of the estimator is considered superior. As a result, statistical techniques are often employed to increase simulation efficiency by decreasing the variability of the simulation output used to estimate θ. The techniques used to do this are usually called variance reduction techniques.
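The quantity Var(g(X)) × E can be estimated directly from a simulation. The sketch below is illustrative: it compares crude sampling of g(u) = e^(−u²), u ∼ U(0, 1), with the antithetic-pair average ½{g(u) + g(1 − u)} introduced later in these notes, timing each method to approximate E.

```python
import numpy as np, time

rng = np.random.default_rng(3)
N = 200_000
g = lambda u: np.exp(-u**2)           # integrand; theta = integral of g over (0,1)

def crude(n):
    # one g-evaluation per replicate
    return g(rng.uniform(size=n))

def antithetic_pair(n):
    # average g(u) and g(1-u) per replicate (the antithetic idea; roughly twice the work)
    u = rng.uniform(size=n)
    return 0.5 * (g(u) + g(1.0 - u))

for name, method in [("crude", crude), ("antithetic pair", antithetic_pair)]:
    t0 = time.perf_counter()
    vals = method(N)
    E = (time.perf_counter() - t0) / N        # effort per replicate (seconds)
    var = vals.var(ddof=1)                    # variance of one replicate's output
    print(f"{name:16s} Var = {var:.3e}  E = {E:.2e} s  Var*E = {var*E:.3e}")
```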
Variance Reduction Techniques
Monte Carlo variance reduction methods attempt to reduce S.E.(θ̂), that is, to obtain more accurate estimates without increasing N. These techniques are called variance reduction techniques. When considering variance reduction techniques, one must take into account the cost (in terms of difficulty level) of implementing such a procedure and weigh it against the gain in precision of estimates from incorporating such a procedure. Before proceeding to a detailed description of available methods, a brief example is used to illustrate what is meant by variance reduction.

Suppose the parameter to be estimated is an upper percentage point of the standard Cauchy distribution: θ = Pr(X > 2), where

f(x) = 1/{π(1 + x²)},   −∞ < x < ∞,

is the Cauchy density. To estimate θ, let g(x) = I(x > 2), and express θ = Pr(X > 2) as the integral ∫_{−∞}^∞ I(x > 2) f(x) dx. Note that x is a scalar here. Using crude Monte Carlo, θ is estimated by generating N Cauchy variates x1, x2, ..., xN and computing the proportion that exceed 2, i.e.,

θ̂ = (1/N) Σ_{i=1}^N I(xi > 2) = #(xi > 2)/N.

Note that since N θ̂ ∼ Binomial(N, θ), the variance of θ̂ is given by θ(1 − θ)/N. Since θ can be computed directly by analytical integration, i.e.,

θ = 1 − F(2) = 1/2 − π⁻¹ tan⁻¹(2) ≈ .1476,
the variance of the crude Monte Carlo estimate θ̂ is given approximately as 0.126/N. In order to construct a more efficient Monte Carlo method, note that with the substitution y = 2/x,

θ = ∫_2^∞ 1/{π(1 + x²)} dx = ∫_0^1 2/{π(4 + y²)} dy = ∫_0^1 g(y) dy.

This particular integral is in a form where it can be evaluated using crude Monte Carlo by sampling from the uniform distribution U(0, 1). If u1, u2, ..., uN are generated from U(0, 1), an estimate of θ is

θ̃ = (1/N) Σ_{j=1}^N g(uj).

For comparison with the estimate of θ obtained from crude Monte Carlo, the variance of θ̃ can again be computed analytically by evaluating the integral

var(θ̃) = (1/N) ∫_0^1 {g(x) − θ}² f(x) dx,

where f(x) is the density of U(0, 1). This gives var(θ̃) ≈ 9.4 × 10⁻⁵/N, a variance reduction of about 1340. (A short numerical sketch of this comparison follows the list below.) In the following sections, several methods available for variance reduction will be discussed. Among these are
• stratified sampling
• control variates
• importance sampling
• antithetic variates
• conditioning swindles
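Before turning to these methods, here is the numerical sketch of the Cauchy example promised above (the sample size N is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100_000

# Crude Monte Carlo: proportion of Cauchy draws exceeding 2.
x = rng.standard_cauchy(size=N)
theta_hat = np.mean(x > 2)                      # per-draw variance ~ theta(1-theta) ~ 0.126

# Transformed estimator: theta = int_0^1 g(y) dy with g(y) = 2 / (pi (4 + y^2)).
u = rng.uniform(size=N)
g = 2.0 / (np.pi * (4.0 + u**2))
theta_tilde = g.mean()                          # per-draw variance ~ 9.4e-5

print(f"true value       : {0.5 - np.arctan(2) / np.pi:.4f}")
print(f"crude estimate   : {theta_hat:.4f}  (sample var of indicator = {np.var(x > 2, ddof=1):.3e})")
print(f"transformed est. : {theta_tilde:.4f}  (sample var of g(U)     = {g.var(ddof=1):.3e})")
```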
Stratified Sampling

Consider again the estimation of

θ = E[g(X)] = ∫ g(x) f(x) dx.
For stratified sampling, we first partition the domain of integration S into m disjoint subsets Si, i = 1, 2, ..., m. That is, Si ∩ Sj = ∅ for all i ≠ j and ∪_{i=1}^m Si = S. Define θi as

θi = E[g(X) I(X ∈ Si)] = ∫_{Si} g(x) f(x) dx
for i = 1, 2, ..., m. Then

θ = ∫_S g(x) f(x) dx = Σ_{i=1}^m ∫_{Si} g(x) f(x) dx = Σ_{i=1}^m θi.
The motivation behind this method is to be able to sample more from regions of S that contribute more variability to the estimator of θ, rather than spread the samples evenly across the whole region S. This idea is the reason why stratified sampling is often used as a general sampling technique; for example, instead of random sampling from the entire state, one might use stratified sampling based on subregions such as counties or districts, sampling more from those regions that contribute more information about the quantity being estimated. Define

pi = ∫_{Si} f(x) dx   and   fi(x) = f(x)/pi.

Then

Σ_{i=1}^m pi = 1   and   ∫_{Si} fi(x) dx = 1.

By expressing

gi(x) = g(x) if x ∈ Si,   and   gi(x) = 0 otherwise,

we can show that

θi = ∫_{Si} pi g(x) [f(x)/pi] dx = pi ∫_S gi(x) fi(x) dx,

and the crude Monte Carlo estimator of θi is then

θ̂i = pi Σ_{j=1}^{Ni} gi(xj) / Ni,
where x1, ..., x_{Ni} is a random sample from the distribution with density fi(x) on Si, with variance given by

Var(θ̂i) = pi² σi² / Ni,

where σi² = Var(g(X) | X ∈ Si). Thus the combined estimator of θ is

θ̃ = Σ_{i=1}^m θ̂i = Σ_{i=1}^m (pi/Ni) Σ_{j=1}^{Ni} gi(xj),

with variance given by

Var(θ̃) = Σ_{i=1}^m pi² σi² / Ni.
For the best stratification, we need to choose the Ni optimally, i.e., by minimizing Var(θ̃) with respect to Ni, i = 1, 2, ..., m. That is, minimize the Lagrangian

Σ_{i=1}^m pi² σi² / Ni + λ (N − Σ_{i=1}^m Ni).

Equating partial derivatives with respect to Ni to zero, it can be shown that the optimum occurs when

Ni = pi σi N / Σ_{i=1}^m pi σi

for N = Σ_{i=1}^m Ni. Thus, after the strata are chosen, the sample sizes Ni should be selected
proportional to pi σi. Of course, since the σi are unknown, they need to be estimated by some method, such as from past data or, in the absence of any information, by conducting a pilot study: small samples of points are taken from each Si and the variances are estimated from these samples. This allows the sample sizes for the stratified sample to be allocated in such a way that more points are sampled from those strata where g(x) shows the most variation.

Example: Evaluate I = ∫_0^1 e^(−x²) dx using stratified sampling.

Since in this example f(x) is the U(0, 1) density, we will use 4 equispaced partitions of the interval (0, 1), giving p1 = p2 = p3 = p4 = .25. To conduct a pilot study to determine the strata sample sizes, we first estimated the strata variances of g(x), σi² = var(g(x) | x ∈ Si) for i = 1, 2, 3, 4, by a simulation experiment: we generated 10000 U(0, 1) variables u and computed the standard deviation of e^(−u²) for the u's falling in each of the intervals (0, .25), (.25, .5), (.5, .75), and (.75, 1).
[Figure: plot of exp(−x²) for x in (0, 1).]
These came out to be σi = .018, .046, .061, and .058. Since p1 = p2 = p3 = p4 and Ni ∝ σi / Σ σi, we get σi / Σ σi = .0984, .2514, .3333, and .3169, implying that for N = 10000, N1 = 984, N2 = 2514, N3 = 3333, and N4 = 3169.
We will round these and use N1 = 1000, N2 = 2500, N3 = 3500, and N4 = 3000 as the sample sizes for our four strata. A MC experiment using stratified sampling for obtaining an estimate of I and its standard error can now be carried out as follows. For each stratum (mi, m_{i+1}), with m1 = 0, m2 = .25, m3 = .50, m4 = .75, and m5 = 1.0, generate Ni numbers uj ∼ U(mi, m_{i+1}) for j = 1, ..., Ni and compute xj = e^(−uj²) for the Ni uj's. Then compute the estimates

θ̂i = Σ xj / Ni
σ̂i² = Σ (xj − x̄)² / (Ni − 1)

for i = 1, 2, 3, 4. Pool these estimates to obtain θ̃ = Σ θ̂i / 4 and var(θ̃) = (1/4²) Σ σ̂i² / Ni. We obtained

θ̃ = 0.7470707   and   var(θ̃) = 2.132326e−007.

A crude MC estimate of I is obtained by ignoring the strata and using the 10000 U(0, 1) variables to compute the mean and variance of e^(−uj²). We obtained

θ̂ = 0.7462886   and   var(θ̂) = 4.040211e−006.

Thus the variance reduction is 4.040211e−006 / 2.132326e−007 ≈ 19.
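The stratified experiment above can be reproduced with a short Python sketch (a minimal illustration of the same steps: pilot study, allocation proportional to pi σi, and pooling; random seeds and the exact Ni will differ from the figures quoted above):

```python
import numpy as np

rng = np.random.default_rng(5)
g = lambda x: np.exp(-x**2)
edges = np.array([0.0, 0.25, 0.50, 0.75, 1.0])      # equispaced strata, p_i = 1/4
p = np.diff(edges)

# Pilot study: estimate sigma_i = sd(g(U) | U in stratum i) from 10000 U(0,1) draws.
u_pilot = rng.uniform(size=10_000)
sigma = np.array([g(u_pilot[(u_pilot >= lo) & (u_pilot < hi)]).std(ddof=1)
                  for lo, hi in zip(edges[:-1], edges[1:])])

# Allocate N_i proportional to p_i * sigma_i (the p_i are equal here).
N = 10_000
Ni = np.round(N * p * sigma / np.sum(p * sigma)).astype(int)

# Stratified estimate: theta = sum_i p_i * mean_i, var = sum_i p_i^2 s_i^2 / N_i.
means, variances = [], []
for (lo, hi), n_i in zip(zip(edges[:-1], edges[1:]), Ni):
    x = g(rng.uniform(lo, hi, size=n_i))
    means.append(x.mean())
    variances.append(x.var(ddof=1))
theta_strat = np.sum(p * np.array(means))
var_strat = np.sum(p**2 * np.array(variances) / Ni)

# Crude Monte Carlo with the same total sample size, for comparison.
x = g(rng.uniform(size=N))
theta_crude, var_crude = x.mean(), x.var(ddof=1) / N

print(f"stratified: {theta_strat:.6f} (var {var_strat:.3e})")
print(f"crude     : {theta_crude:.6f} (var {var_crude:.3e})")
print(f"variance reduction ~ {var_crude / var_strat:.1f}")
```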
Control Variates
Suppose it is required to estimate θ = E(g(x)) = ∫ g(x) f(x) dx, where f(x) is a joint density as before. Let h(x) be a function "similar" to g(x) such that E(h(x)) = ∫ h(x) f(x) dx = τ is known. Then θ can be written as

θ = ∫ [g(x) − h(x)] f(x) dx + τ = E[g(x) − h(x)] + τ

and the crude Monte Carlo estimator of θ now becomes

θ̃ = (1/N) Σ_{j=1}^N [g(xj) − h(xj)] + τ,

with

var(θ̃) = [var(g(x)) + var(h(x)) − 2 cov(g(x), h(x))] / N.

If var(h(x)) ≈ var(g(x)), which is plausible because h mimics g, then there is a reduction in variance if

corr(g(x), h(x)) > 1/2.

In general, θ can be written as θ = E[g(x) − β{h(x) − τ}], in which case the crude Monte Carlo estimator of θ is

θ̃ = (1/N) Σ_{j=1}^N [g(xj) − β{h(xj) − τ}].
As a trivial example, consider a Monte Carlo study to estimate θ = E(median(x1, x2, ..., xn)), where x = (x1, x2, ..., xn)' are i.i.d. random variables from some distribution. If µ = E(x̄) is known, then it might be a good idea to use x̄ as a control variate. Crude Monte Carlo is performed on the difference {median(x) − x̄} rather than on median(x), giving the estimator

θ̃ = (1/N) Σ {median(x) − x̄} + µ

with

var(θ̃) = var{median(x) − x̄} / N.
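A minimal sketch of this control-variate idea, using an Exponential(1) population (an illustrative choice for which µ = E(x̄) = 1 is known) and β = 1 as in the display above:

```python
import numpy as np

rng = np.random.default_rng(6)
n, N = 11, 20_000          # sample size per replicate, number of MC replicates
mu = 1.0                   # known E(xbar) for an Exponential(1) population

samples = rng.exponential(scale=1.0, size=(N, n))
med  = np.median(samples, axis=1)        # g(x): the sample median
xbar = samples.mean(axis=1)              # h(x): the control variate, E(h) = mu

theta_crude = med.mean()                             # crude MC estimate of E[median]
theta_cv    = np.mean(med - xbar) + mu               # control-variate estimate (beta = 1)

se_crude = med.std(ddof=1) / np.sqrt(N)
se_cv    = (med - xbar).std(ddof=1) / np.sqrt(N)
print(f"crude           : {theta_crude:.4f} (SE {se_crude:.4f})")
print(f"control variate : {theta_cv:.4f} (SE {se_cv:.4f})")
```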
The basic problem in using the method is identifying a suitable variate to use as a control variable. Generally, if there is more than one candidate for a control variate, a possible way to combine them is to find an appropriate linear combination of them. Let Z = g(x) and let Wi = hi(x) for i = 1, ..., p be the possible control variates, where, as above, it is required to estimate θ = E(Z) and the E(Wi) = τi are known. Then θ can be expressed as

θ = E[g(x) − {β1(h1(x) − τ1) + ... + βp(hp(x) − τp)}] = E[Z − β1(W1 − τ1) − ... − βp(Wp − τp)].
This is a generalization of the application with a single control variate given above, i.e., if p = 1 then θ = E[Z − β(W − τ )].
An unbiased estimator of θ in this case is obtained by averaging observations of Z − β(W − τ) as before, and β is chosen to minimize var(θ̂). This amounts to regressing observations of Z on the observations of W. In the general case, this leads to multiple regression of Z on W1, ..., Wp. Note, however, that the Wi's are random, and therefore the estimators of θ and var(θ̂) obtained by standard regression methods may not be unbiased. However, if the estimates β̂i are found in a preliminary experiment and then held fixed, unbiased estimators can be obtained.

To illustrate this technique, again consider the example of estimating θ = P(X > 2) = 1/2 − ∫_0^2 f(x) dx, where X ∼ Cauchy. Expanding f(x) = 1/{π(1 + x²)} suggests the control variates x² and x⁴, i.e., h1(x) = x², h2(x) = x⁴. A small regression experiment based on samples drawn from U(0, 2) gave

θ̂ = 1/2 − [f(x) + 0.15(x² − 8/3) − 0.025(x⁴ − 32/5)],

with var(θ̂) ≈ 6.3 × 10⁻⁴/N.

Koehler and Larntz (1980) consider the likelihood ratio (LR) statistic for testing goodness-of-fit in multinomials:

H0: pi = pi0,   i = 1, ..., k,

where (X1, ..., Xk) ∼ mult(n, p) with p = (p1, ..., pk)'. The chi-squared statistic is given by

χ² = Σ_{j=1}^k (xj − n pj0)² / (n pj0),

and the LR statistic by

G² = 2 Σ_{j=1}^k xj log{xj / (n pj0)}.
The problem is to estimate θ = E(G²) by Monte Carlo sampling, and they consider using the χ² statistic as a control variate for G², as it is known that E(χ²) = k − 1 and that G² and χ² are often highly correlated. Then θ = E(G² − χ²) + (k − 1), and the Monte Carlo estimator is given by

θ̃ = (1/N) Σ {G² − χ²} + (k − 1),

and an estimate of var(θ̃) is given by

var̂(θ̃) = var{G² − χ²} / N.
Now, since in general var(G²) and var(χ²) are not necessarily similar in magnitude, an improved estimator may be obtained by considering

θ = E[G² − β(χ² − (k − 1))],

and the estimator is

θ̃_β = (1/N) Σ {G² − β̂ χ²} + β̂ (k − 1),
where β̂ is an estimate of β obtained through an independent experiment. To do this, compute simulated values of the G² and χ² statistics from a number of independent samples (say, 100) obtained under the null hypothesis, then regress the G² values on the χ² values to obtain β̂. An estimate of var(θ̃_β) is given by

var̂(θ̃_β) = var{G² − β̂ χ²} / N.
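A minimal sketch of this procedure; the multinomial dimension k, the cell probabilities, and the sample sizes are illustrative choices, with β̂ obtained from a small pilot regression and then held fixed:

```python
import numpy as np

rng = np.random.default_rng(7)
k, n, N = 5, 50, 20_000
p0 = np.full(k, 1.0 / k)               # H0 cell probabilities (illustrative)
expected = n * p0

def g2_chi2(counts):
    # Likelihood-ratio and Pearson statistics; 0*log(0) terms are treated as 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(counts > 0, counts * np.log(counts / expected), 0.0)
    G2 = 2.0 * terms.sum(axis=-1)
    X2 = ((counts - expected) ** 2 / expected).sum(axis=-1)
    return G2, X2

# Pilot experiment: estimate beta by regressing G2 on chi-square.
pilot = rng.multinomial(n, p0, size=100)
G2_p, X2_p = g2_chi2(pilot)
beta_hat = np.cov(G2_p, X2_p)[0, 1] / np.var(X2_p, ddof=1)

# Main experiment with beta_hat held fixed.
counts = rng.multinomial(n, p0, size=N)
G2, X2 = g2_chi2(counts)
theta_crude = G2.mean()
theta_cv = np.mean(G2 - beta_hat * X2) + beta_hat * (k - 1)   # E(chi2) = k - 1 under H0

print(f"crude E[G2] estimate     : {theta_crude:.4f} (var {G2.var(ddof=1)/N:.3e})")
print(f"control-variate estimate : {theta_cv:.4f} (var {(G2 - beta_hat*X2).var(ddof=1)/N:.3e})")
```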
Importance Sampling
Again, the interest is in estimating θ = E(g(x)) = ∫ g(x) f(x) dx, where f(x) is a joint density. Importance sampling attempts to reduce the variance of the crude Monte Carlo estimator θ̂ by changing the distribution from which the actual sampling is carried out. Recall that in crude Monte Carlo the sample is generated from the distribution of x (with density f(x)). Suppose that it is possible to find a distribution H(x) with density h(x) with the property that g(x)f(x)/h(x) has small variability. Then θ can be expressed as

θ = ∫ g(x) f(x) dx = ∫ g(x) [f(x)/h(x)] h(x) dx = E_H[φ(x)]
where φ(x) = g(x)f(x)/h(x) and the expectation is with respect to H. Thus, to estimate θ, the crude Monte Carlo estimator of φ(x) with respect to the distribution H could be used, i.e.,

θ̃ = (1/N) Σ_{j=1}^N φ(xj),

where x1, ..., xN are sampled from H. The variance of this estimator is

var(θ̃) = (1/N) ∫ {φ(x) − θ}² h(x) dx = (1/N) ∫ [f(x)g(x)/h(x)]² h(x) dx − θ²/N.

Thus var(θ̃) will be smaller than var(θ̂) if

∫ [f(x)g(x)/h(x)]² h(x) dx ≤ ∫ [g(x)]² f(x) dx,

since

var(θ̂) = (1/N) ∫ [g(x)]² f(x) dx − θ²/N.
It can be shown, using the Cauchy-Schwarz inequality for integrals, that the above inequality holds if

h(x) = |g(x)| f(x) / ∫ |g(x)| f(x) dx,

i.e., if h(x) ∝ g(x)f(x). For an extreme example, if h(x) = g(x)f(x)/θ, then var(θ̃) = 0! The choice of an appropriate density h is usually difficult and depends on the problem at hand. A slightly better idea is obtained by looking at the expression for the left hand side of the above inequality a little differently:

∫ [f(x)g(x)/h(x)]² h(x) dx = ∫ [g(x)² f(x)] [f(x)/h(x)] dx.

To ensure a reduction in var(θ̃), this suggests that h(x) should be chosen such that h(x) > f(x) for large values of g(x)² f(x) and h(x) < f(x) for small values of g(x)² f(x). Since sampling is now done from h(x), θ̃ will be more efficient if h(x) is selected so that regions of x for which g(x)² f(x) is large are more frequently sampled than when using f(x). By choosing the sampling distribution h(x) this way, the probability mass is redistributed according to the relative importance of x as measured by g(x)² f(x). This gives the name importance sampling to this method, since important x's are sampled more often, and h(x) is called the importance function.

Example 1:
Evaluate θ = Φ(x) = ∫_{−∞}^x (1/√(2π)) e^(−y²/2) dy = ∫_{−∞}^∞ I(y < x) φ(y) dy, where φ(y) is the standard Normal density. A density curve similar in shape to φ(y) is the logistic density

f(y) = π exp(−πy/√3) / [√3 (1 + exp(−πy/√3))²]

with mean 0 and variance 1. The cdf of this distribution is

F(y) = 1 / (1 + exp(−πy/√3)).
A crude Monte Carlo estimate of θ is

θ̂ = (1/N) Σ_{i=1}^N I(zi < x) = #(zi < x)/N,

where z1, ..., zN is a random sample from the standard normal. An importance sampling estimator of θ can be obtained by considering

θ = ∫_{−∞}^∞ [I(y < x) φ(y) / f(y)] f(y) dy = k ∫_{−∞}^∞ [φ(y)/f(y)] [I(y < x) f(y)/k] dy,

where k is determined such that f(y) I(y < x)/k is a density, i.e., f(y)/k is a density over (−∞, x):

k = 1 / (1 + exp(−πx/√3)).
The importance sampling estimator of θ is

θ̃ = (k/N) Σ_{i=1}^N φ(yi)/f(yi),

where y1, ..., yN is a random sample from the density f(y) I(y < x)/k. Sampling from this density is easily done through the method of inversion: generate a variate ui from U(0, 1) and transform using

yi = −(√3/π) log[ {1 + exp(−πx/√3)}/ui − 1 ].
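A minimal sketch of Example 1; the evaluation point x = −2 and the simulation size are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(8)
N, x = 50_000, -2.0
c = np.pi / np.sqrt(3.0)

phi = lambda y: np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)            # standard normal density
f   = lambda y: c * np.exp(-c * y) / (1 + np.exp(-c * y)) ** 2    # logistic density, mean 0, var 1

# Crude Monte Carlo: proportion of standard normal draws below x.
theta_crude = np.mean(rng.normal(size=N) < x)

# Importance sampling: sample from f(y) truncated to (-inf, x) by inversion.
k = 1.0 / (1.0 + np.exp(-c * x))                                  # k = F(x)
u = rng.uniform(size=N)
y = -(1.0 / c) * np.log((1.0 + np.exp(-c * x)) / u - 1.0)         # inversion formula
theta_is = (k / N) * np.sum(phi(y) / f(y))

print(f"crude: {theta_crude:.6f}  importance: {theta_is:.6f}  (true Phi(-2) ~ 0.02275)")
```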
Earlier, it was argued that one may expect to obtain variance reduction if h(x) is chosen to be similar in shape to g(x)f(x). One way to do this is to attempt to select h(x) so that both h(x) and g(x)f(x) are maximized at the same value. With this choice, if h is chosen from the same family of distributions as f, one might expect to gain substantial variance reduction. For example, if f is multivariate normal, then h might also be taken to be multivariate normal but with a different mean and/or covariance matrix.

Example 2:
Evaluate θ = E[g(X)] = E[X⁴ e^(X²/4) I(X ≥ 2)] where X ∼ N(0, 1). Choose h to be also Normal with mean µ but the same variance 1. Since h is maximized at µ, the above suggests a good choice for µ to be the value that maximizes g(x)f(x):

µ = arg max_x g(x)f(x) = arg max_x x⁴ e^(x²/4) I(x ≥ 2) × e^(−x²/2)/√(2π) = arg max_{x≥2} x⁴ e^(−x²/4) = √8.

Then

θ̃ = (1/N) Σ_{j=1}^N yj⁴ e^(yj²/4 − √8 yj + 4) I(yj ≥ 2),

where the yj are generated from N(√8, 1).
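A minimal sketch of Example 2 (sample size illustrative). Note that the crude estimator, which would average X⁴ e^(X²/4) I(X ≥ 2) over draws X ∼ N(0, 1), has infinite variance here since g(x)² f(x) ∝ x⁸ on [2, ∞), so importance sampling is essentially required:

```python
import numpy as np

rng = np.random.default_rng(9)
N = 100_000
mu = np.sqrt(8.0)

# Importance sampling: y_j ~ N(sqrt(8), 1); weight f(y)/h(y) = exp(-sqrt(8)*y + 4).
y = rng.normal(loc=mu, scale=1.0, size=N)
terms = np.where(y >= 2, y**4 * np.exp(y**2 / 4 - mu * y + 4.0), 0.0)
theta_is = terms.mean()
se_is = terms.std(ddof=1) / np.sqrt(N)

print(f"importance sampling estimate: {theta_is:.4f} (SE {se_is:.4f})")
```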
A recommended procedure for selecting an appropriate h(x), given gL ≤ g(x) ≤ gU for all x, is to choose h(x) so that, for all x,

h(x) ≥ (g(x) − gL) / (gU − gL).

While this does not guarantee variance reduction, it can be used to advantage when crude Monte Carlo would require large sample sizes to obtain a required accuracy. This usually occurs when g(x) is unbounded, i.e., gU ≡ ∞, in the region of integration. In this case, we may choose h(x) such that φ(x) = f(x)g(x)/h(x) is bounded.
Example 3:
Consider the evaluation of

θ = ∫_0^1 x^(α−1) e^(−x) dx,   0 < α < 1,

for which the integrand g(x) = x^(α−1) e^(−x) is unbounded at x = 0. Following the guideline above, we want h(x) > 1 (say) for x < .1 (say) and h(x) < 1 for x > .1. Suppose we choose h(x) = α x^(α−1). A plot of h(x) for α = .6, with the U(0, 1) density f(x) = I_(0,1)(x) drawn as a dashed line, shows clearly that h(x) would give more weight to those values of x < .1, the values for which g(x) gets larger, than would f(x), which assigns equal weight to all values of x. We also see that

θ = ∫_0^1 x^(α−1) e^(−x) dx = ∫_0^1 φ(x) α x^(α−1) dx,

where φ(x) = e^(−x)/α is bounded at x = 0.
Example 4: A more complex application
In sequential testing, the likelihood ratio test statistic λ is used to test

H0: the population density is f0(x)   vs.   HA: the population density is f1(x),

where

λ = f1(x1, ..., xm)/f0(x1, ..., xm) = ∏_{j=1}^m f1(xj)/f0(xj) = exp[ Σ_{j=1}^m log{f1(xj)/f0(xj)} ] = exp[ Σ_{j=1}^m zj ]

and zj = log{f1(xj)/f0(xj)}. The decision rule is:

Reject H0 if Σ zj ≥ b,
Accept H0 if Σ zj ≤ a,
continue sampling if a < Σ zj < b.

The quantity to be estimated is θ, the probability under H0 that the test rejects, i.e., that Σ zj reaches b before a. Importance sampling generates the observations yi under the alternative density f1 rather than under f0, with weights given by the ratio f0(yi)/f1(yi). When f0 and f1 are normal densities with means µ0 and µ1 = −µ0, the algebra simplifies to

f0(yi)/f1(yi) = e^(2µ0 yi).
Thus the importance sampling estimator reduces to Σ I(yi ≥ b) exp(2µ0 yi), where the yi are sampled under HA. The choice of −µ0 for µ1 is optimal in the sense that it minimizes Var(θ̃) asymptotically. The following results are from Siegmund (1976):

−µ0     a     b     θ̃          {N Var(θ̂)}^(1/2)   {N Var(θ̃)}^(1/2)   R.E.
 .0    any   any    .5          .5                  .5                     1
 .125   −5     5    .199        .399                .102                  15
 .25    −5     5    .0578       .233                .0200                136
 .50    −5     5    .00376      .0612               .00171              1280
 .125   −9     9    .0835       .277                .0273                102
 .25    −9     9    .00824      .0904               .00207              1910
 .50    −9     9    .0000692    .00832              .0000311            7160
More on Importance Sampling

For estimating

θ = E[g(X)] = ∫ g(x) f(x) dx = ∫ g(x) [f(x)/h(x)] h(x) dx

we used

θ̃ = (1/N) Σ_{j=1}^N g(xj) w(xj),

where w(xj) = f(xj)/h(xj) and the xj's are generated from H. This estimator θ̃ is called the importance sampling estimator with unstandardized weights. For estimating a posterior mean, for example, f is known only up to a constant. In that case the problem will involve estimating the constant of integration as well, using Monte Carlo. That is,

θ = E[g(X)] = ∫ g(x) f(x) dx / ∫ f(x) dx = [ ∫ g(x) {f(x)/h(x)} h(x) dx ] / [ ∫ {f(x)/h(x)} h(x) dx ]

is estimated by the importance sampling estimator with standardized weights, θ̃*:

θ̃* = [ (1/N) Σ_{j=1}^N g(xj) w(xj) ] / [ (1/N) Σ_{j=1}^N w(xj) ] = Σ_{j=1}^N g(xj) w*(xj),

where

w*(xj) = w(xj) / Σ_{j=1}^N w(xj)

and the xj's are generated from H as before. Obtaining the standard error of θ̃* involves using an approximation, based either on Fieller's theorem or on the Delta method. To derive this, the following shorthand notation is introduced: gj ≡ g(xj), wj ≡ w(xj), and zj ≡ gj wj. In the new notation,

θ̃* = [ (1/N) Σ_{j=1}^N gj wj ] / [ (1/N) Σ_{j=1}^N wj ] = z̄ / w̄.
Now from the central limit theorem we have

(z̄, w̄)' ≈ N( (µz, µw)',  (1/n) [ σz²  σzw ; σzw  σw² ] ),

where µz, µw, σz², σzw and σw² are all expectations derived with respect to H. It is required to estimate µz/µw, and the standard error of this estimate would be the standard error of θ̃*. To estimate µz/µw, either the Delta method or Fieller's theorem could be used. The Delta method gives

z̄/w̄ ≈ N( µz/µw,  (1/(n µw²)) [ σz² − (2µz/µw) σzw + (µz²/µw²) σw² ] ),

and the standard error of z̄/w̄ is given by

s.e.(z̄/w̄) = (1/(√n w̄)) [ s_z² − 2 (z̄/w̄) s_zw + (z̄²/w̄²) s_w² ]^(1/2),

where s_z² = Σ (zi − z̄)²/(n − 1), s_zw = Σ (zi − z̄)(wi − w̄)/(n − 1), and s_w² = Σ (wi − w̄)²/(n − 1).
Rather than using variance reduction to measure the effectiveness of an importance sampling strategy under a selected h(x), the following quantity, known as the effective sample size, has been suggested as a measure of efficiency. When using unstandardized weights this measure is given by

Ñ(g, f) = N / (1 + s_w²).

Clearly, this reduces to N when h is exactly equal to f and is smaller than N otherwise. It can thus be interpreted to mean that N samples must be used in the importance sampling procedure using h to obtain the same precision as estimating with Ñ samples drawn exactly from f (i.e., using crude Monte Carlo). Thus it measures the loss in precision from approximating f with h, and is a useful measure for comparing different importance functions. When using standardized weights an equivalent measure is

Ñ(g, f) = N / (1 + cv²),

where cv = s_w/w̄ is the coefficient of variation of the weights. From the above results, for an importance sampling procedure to succeed some general requirements must be met:

• the weights must be bounded;
• the coefficient of variation of the weights should be small;
• infinite variance of the weights must be avoided;
• max wi / Σ wi should be small, i.e., for calculating the standard error of the importance sampling estimate the central limit theorem must hold. Since the central limit theorem relies on the average of similar quantities, this condition ensures that a few large weights will not dominate the weighted sum.

Since standardized weights can be used even when f is completely determined, the following criterion could be used to choose between the "unstandardized" and "standardized" estimators. The mean squared error of the standardized estimator is smaller than that of the unstandardized estimator, i.e., MSE(θ̃*) < MSE(θ̃), if

Corr(zj, wj) > cv(wj) / (2 cv(zj)),

where zj = gj wj as before.
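A minimal sketch of the standardized-weight estimator, its Delta-method standard error, and the effective sample size. The target f (a standard normal known only up to its normalizing constant), the importance density h = N(0, 2²), and g(x) = x² are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(10)
N = 50_000

# Target f known only up to a constant (an unnormalized standard normal),
# importance density h = N(0, 2^2); g(x) = x^2, so theta = E_f[X^2] = 1.
f_unnorm = lambda x: np.exp(-x**2 / 2)
h_pdf    = lambda x: np.exp(-x**2 / 8) / np.sqrt(8 * np.pi)

x = rng.normal(scale=2.0, size=N)
w = f_unnorm(x) / h_pdf(x)                  # unstandardized weights w_j
z = x**2 * w                                # z_j = g_j * w_j

theta_std = z.sum() / w.sum()               # standardized-weight estimator z_bar / w_bar

# Delta-method standard error of z_bar / w_bar.
zbar, wbar = z.mean(), w.mean()
s2_z, s2_w = z.var(ddof=1), w.var(ddof=1)
s_zw = np.cov(z, w, ddof=1)[0, 1]
se = (1 / (np.sqrt(N) * wbar)) * np.sqrt(s2_z - 2 * (zbar / wbar) * s_zw
                                         + (zbar / wbar) ** 2 * s2_w)

# Effective sample size based on the coefficient of variation of the weights.
cv = w.std(ddof=1) / wbar
n_eff = N / (1 + cv**2)

print(f"theta (standardized weights) = {theta_std:.4f}, SE = {se:.4f}")
print(f"cv of weights = {cv:.3f}, effective sample size = {n_eff:.0f}")
```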
Antithetic Variates
Again, the problem of estimating θ = ∫ g(x) f(x) dx is considered. In the control variate and importance sampling variance reduction methods, the averaging of the function g(x) done in crude Monte Carlo was replaced by the averaging of g(x) combined with another function. The knowledge about this second function brought about the variance reduction of the resulting Monte Carlo method. In the antithetic variate method, two estimators are found by averaging two different functions, say, g1 and g2. These two estimators are such that they are negatively correlated, so that the combined estimator has smaller variance. Suppose that θ̂1 is the crude Monte Carlo estimator based on g1 and a sample of size N, and θ̂2 that based on g2 and a sample of size N. Then let the combined estimator be
θ̃ = (1/2)(θ̂1 + θ̂2),

where θ̂1 = Σ g1(xj)/N and θ̂2 = Σ g2(xj)/N. Then

Var(θ̃) = (1/4)[ Var(θ̂1) + Var(θ̂2) + 2 cov(θ̂1, θ̂2) ]
        = (1/4)[ σ1²/N + σ2²/N + 2ρ σ1 σ2/N ],

where ρ = corr(θ̂1, θ̂2). A substantial reduction in Var(θ̃) over Var(θ̂1) will be achieved if ρ ≪ 0. If this is so, then g2 is said to be an antithetic variate for g1. The usual way of generating values for g2 so that the estimate θ̂2 will have high negative correlation with θ̂1
is by using the uniform variates 1 − u, from the same u's used to generate samples for θ̂1, to generate the samples for θ̂2. In practice this is achieved by using the streams uj and 1 − uj to generate two samples from the required distribution by inversion. The following example will help clarify the idea. Consider estimation of θ = E[g(u)] where u is uniformly distributed between 0 and 1. Then for any function g(·),

θ = E(g(u)) = ∫_0^1 g(u) du,

so (1/N) Σ g(uj) is an unbiased estimator of θ for a random sample u1, ..., uN. But since 1 − u is also uniformly distributed on (0, 1), (1/N) Σ g(1 − uj) is also an unbiased estimator of θ. If g(x) is a monotonic function in x, g(u) and g(1 − u) are negatively correlated and thus are "antithetic variates." In general, let

θ̂1 = (1/N) Σ_{j=1}^N g(F1⁻¹(Uj1), F2⁻¹(Uj2), ..., Fm⁻¹(Ujm)),
θ̂2 = (1/N) Σ_{j=1}^N g(F1⁻¹(1 − Uj1), F2⁻¹(1 − Uj2), ..., Fm⁻¹(1 − Ujm)),

where F1, F2, ..., Fm are the c.d.f.'s of X1, X2, ..., Xm and the Uj1, ..., Ujm are i.i.d. U(0, 1). Then it can be shown that, if g is a monotone function, cov(θ̂1, θ̂2) ≤ 0.
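A minimal sketch of the antithetic-variate idea; the monotone function g(u) = e^u, for which θ = e − 1, is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(11)
N = 50_000
g = np.exp                                   # monotone g, so g(U) and g(1-U) are negatively correlated

u = rng.uniform(size=N)
theta1 = g(u).mean()                         # crude estimator based on U
theta2 = g(1.0 - u).mean()                   # estimator based on 1 - U (same uniforms)
theta_anti = 0.5 * (theta1 + theta2)         # combined antithetic estimator

# Compare variances per pair of g-evaluations: two independent draws vs the antithetic pair.
pairs_anti = 0.5 * (g(u) + g(1.0 - u))
u2 = rng.uniform(size=N)
pairs_indep = 0.5 * (g(u) + g(u2))

print(f"antithetic estimate {theta_anti:.5f} (true {np.e - 1:.5f})")
print(f"var per pair: independent {pairs_indep.var(ddof=1):.3e}, antithetic {pairs_anti.var(ddof=1):.3e}")
print(f"corr(g(U), g(1-U)) = {np.corrcoef(g(u), g(1.0 - u))[0, 1]:.3f}")
```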