
Applied Mathematical Sciences, Vol. 9, 2015, no. 50, 2477 - 2491 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2015.52104

Asymptotic Property of Semiparametric Bootstrapping Kriging Variance in Deterministic Simulation

Elmanani Simamora
Department of Mathematics, State University of Medan, North Sumatera, Indonesia

Subanar
Department of Mathematics, Gadjah Mada University, Yogyakarta, Indonesia

Sri Haryatmi Kartiko
Department of Mathematics, Gadjah Mada University, Yogyakarta, Indonesia

Copyright © 2015 Elmanani Simamora, Subanar and Sri Haryatmi Kartiko. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The plug-in kriging variance underestimates the true kriging variance. This underestimation occurs because the plug-in kriging predictor ignores the randomness of the errors, i.e. the uncertainty of the outputs, at the locations of the observed data. A corrected kriging variance is proposed using a semiparametric bootstrapping procedure. Simulation results for an increasing number of observed I/O data locations show three properties: (i) the generic estimate of the kriging variance, the semiparametric bootstrapping kriging variance, is always larger than the plug-in kriging variance; (ii) the estimates of both estimators decline towards zero; and (iii) the condition number of the correlation matrix increases, so the matrix may become ill-conditioned. One cause of ill-conditioning is rounding error in expensive computation, which decreases the accuracy of the estimates and can even lead to loss of the solution of the kriging equation system. Under the assumption that this computational aspect (ill-conditioning) is ignored, it can be shown analytically that both estimators, the plug-in kriging variance and the generic estimator, are asymptotically consistent to zero.

Keywords: Kriging, Variance, Bootstrapping, Semiparametric, Asymptotic

Corresponding author: Elmanani Simamora, Department of Mathematics, State University of Medan, North Sumatera, Indonesia. E-mail: [email protected]

1. Introduction

In the kriging model, the model parameters are usually not known with certainty; instead they are estimated from observed I/O data (the sample). The model parameters are collected in a vector $\psi = [\beta, \sigma^2, \theta]^T$, while $\hat{\psi} = [\hat{\beta}, \hat{\sigma}^2, \hat{\theta}]^T$ denotes their estimators; here $\beta$, $\sigma^2$ and $\theta$ denote the regression, process-variance and correlation parameters, respectively. Plugging $\hat{\psi}$ into the kriging predictor yields the Empirical Best Linear Unbiased Predictor (EBLUP); the resulting plug-in kriging variance then underestimates the true kriging variance. Den Hertog et al. [1] show this analytically in a conditional-expectation framework. We present this evidence again in the next section, using a simpler expectation-based description of the true kriging variance. The underestimation arises because the kriging predictor is an exact interpolator in deterministic simulation, so it ignores the randomness of the errors, i.e. the uncertainty of the outputs, at the locations of the observed data. Several researchers in deterministic simulation have corrected this underestimation using parametric bootstrapping to account for output uncertainty at the observed data locations, e.g. [1], [2], [3]; for a general discussion of bootstrapping they refer to [4]. Their purpose is to provide a generic estimator of the kriging variance (which is typically unknown) that is closer to the truth, showing the plug-in formula to be misleading. Unfortunately, their generic estimators differ greatly from the plug-in kriging variance when the estimated process variance $\hat{\sigma}^2$ is very large. In Simamora et al. [5], [6] we proposed a new procedure that generates output uncertainty at the observed data locations using semiparametric bootstrapping in deterministic simulation; for a general discussion of bootstrapping we refer to [7], [8], [9]. In [6] we comparatively studied the parametric and semiparametric bootstrapping procedures.
Essentially, we showed that the semiparametric bootstrapping procedure outperforms the parametric one. That study measured the performance of both procedures through (i) the estimates produced by the two generic estimators, and (ii) the coverage probability and length of the resulting confidence-interval estimates.

This paper is a continuation of Simamora et al. [5], [6] and is presented in two parts: a simulation part and an analytic part. The simulation part considers the behaviour of the results as the observed I/O data increase; the properties considered are (i) the generic estimate (the semiparametric bootstrapping kriging variance) versus the plug-in kriging variance, (ii) the trend of the estimates of both estimators, and (iii) the condition number of the correlation matrix. Normally, increasing the size of the observed data increases the condition number of the correlation matrix, so the matrix may become ill-conditioned. One cause of ill-conditioning is rounding error in expensive computation, which decreases the accuracy of the estimates and can even lead to loss of the solution of the kriging equation system. Lophaven et al. [10], [11] offer a regularization of an ill-conditioned correlation matrix so that the condition number can be suppressed when the observed I/O data are quite large, while Zimmermann [12] bounds the maximum likelihood estimator of the correlation-function parameters when the condition number tends to infinity. In the analytic part we assume that the computational aspects are not considered, so the treatment is more natural. Letting the size of the observed I/O data tend to infinity, we show that both the plug-in and the generic kriging variance estimators are consistent to zero. Letting the bootstrap sample size tend to infinity reduces estimation uncertainty, i.e. yields the ideal bootstrap estimate (see [4], p. 50-53).

This paper is arranged as follows. Section 1 provides the research background. Section 2 summarizes the kriging model (including notation and terminology) and derives the formulas of the true kriging variance, which is the Best Linear Unbiased Predictor (BLUP), and of the plug-in kriging variance. Section 3 introduces the generic-estimator algorithm for the kriging variance from [6]. Section 4 presents the simulation results. Section 5 presents several propositions for the analytic study. Lastly, we draw conclusions.

2. Kriging Models

Nowadays, the kriging method is widely used in various fields, such as environmental science, hydrology, geostatistics, engineering, operations research and economics. The term krigeage (kriging) was coined by Matheron (1963) in honour of its inventor, Prof. D. G. Krige, a mining engineer from South Africa (see [13], p. 50). Sacks et al. (1989) apply kriging to deterministic simulation models, where kriging is an exact interpolator without observation error (nugget effect) in the observed I/O data. Several symbols and terms in this section are taken from [10], [14]. The output (response) $y(x)$ is a realization of the stochastic process $Y$:

\[ Y(x) = \sum_{i=0}^{p} \beta_i g_i(x) + Z(x), \tag{1} \]

where $x$ is a $d$-dimensional input variable, $g(x) = [g_0(x) \; \cdots \; g_p(x)]^T$ is a vector of selected regression functions and $\beta = [\beta_0 \; \cdots \; \beta_p]^T$ is the regression parameter. Model (1) is the sum of the linear regression model $\beta^T g(x)$ and the stochastic process $Z(x)$. Assume that $Z(x)$ satisfies $E[Z(x)] = 0$ and $E[Z(x)^2] = \sigma^2$.


The covariance between two different input variables $x$ and $t$ is written $C(x, t) = \sigma^2 R(x, t)$, with $R(x, t)$ the correlation between $x$ and $t$. Lophaven et al. [10] provide several correlation-function models; in this paper we choose the Gaussian correlation function

\[ R(x, t, \theta) = \exp\Big( -\sum_{i=1}^{d} \theta_i (x_i - t_i)^2 \Big). \tag{2} \]

Expanding over the $n$ design locations $X_n = [x_1 \; \cdots \; x_n]^T$ of the observed input data produces the regression-function matrix $G_{n \times (p+1)} = [g(x_1) \; \cdots \; g(x_n)]^T$ and the correlation matrix $R_{n \times n} = [R(x_i, x_j, \theta)]_{i,j=1}^{n}$, respectively. For an untried point $x_0 \notin X_n$, the correlation vector of each $Z(x_i)$, $x_i \in X_n$, with $Z(x_0)$ is $r(x_0) = [R(x_1, x_0, \theta) \; \cdots \; R(x_n, x_0, \theta)]^T$. The kriging prediction at $x_0$ is

\[ \hat{y}(x_0) = \lambda^T Y_X = \lambda^T (G\beta + Z), \tag{3} \]
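As an illustration (ours, not part of the paper, which uses the MATLAB DACE toolbox), the Gaussian correlation model (2) and the correlation vector $r(x_0)$ can be sketched in Python; the helper name `gauss_corr` is an assumption of this sketch:

```python
import numpy as np

def gauss_corr(X, T, theta):
    """Gaussian correlation (2): R(x, t) = exp(-sum_i theta_i (x_i - t_i)^2),
    evaluated between every row of X and every row of T."""
    X, T = np.atleast_2d(X), np.atleast_2d(T)
    diff2 = (X[:, None, :] - T[None, :, :]) ** 2      # shape (m, n, d)
    return np.exp(-diff2 @ np.asarray(theta, dtype=float))

# correlation matrix R of an equispaced 1-D design and the vector r(x0)
Xn = np.linspace(0.0, 10.0, 5).reshape(-1, 1)
R = gauss_corr(Xn, Xn, [1.0])                    # n x n, unit diagonal, symmetric
r0 = gauss_corr(Xn, [[1.1115]], [1.0]).ravel()   # correlations with an untried x0
```

`R` is symmetric with a unit diagonal, which is the structure exploited throughout Section 5.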

where   [ 1

 n ]T is weight vector, YX  [ y( x1 ) y( xn )]T is observed output data and Z  [Z ( x1 ) Z ( xn )]T as error vector in n -experiment location. Kriging prediction error is stated as yˆ ( x0 )  y( x0 ) , with y( x0 )   T g ( x0 )  Z ( x0 ) . Kriging predictors are called BLUP if they minimize Mean Squared Error Prediction (MSPE), min E[( yˆ ( x0 )  y( x0 ))2 ]





min  2 1   T R  2 T r ( x0 ) 

(4)

under one similarity constraint condition

E[ yˆ ( x0 )  y( x0 )]  0 GT   g ( x0 )  0 .

(5)

For the optimization problem (4) with the single constraint (5), we introduce the Lagrangian function

\[ \mathcal{L}(\lambda, \gamma) = \sigma^2 \big( 1 + \lambda^T R \lambda - 2 \lambda^T r(x_0) \big) + \gamma^T \big( G^T \lambda - g(x_0) \big), \tag{6} \]

where $\gamma$ denotes the vector of Lagrange multipliers. Let $\lambda^*$ be the solution of (4) and $\gamma^*$ the corresponding Lagrange-multiplier vector; the first-order necessary conditions of the optimization problem (see [15]) produce

\[ \nabla_{\lambda} \mathcal{L}(\lambda^*, \gamma^*) = 2\sigma^2 \big( R \lambda^* - r(x_0) \big) + G \gamma^* = 0, \qquad G^T \lambda^* - g(x_0) = 0. \]

This last form is generally called the kriging equation system in [16]. Rewritten in matrix form it is

\[ \begin{bmatrix} R & G \\ G^T & 0 \end{bmatrix} \begin{bmatrix} \lambda^* \\ \dfrac{\gamma^*}{2\sigma^2} \end{bmatrix} = \begin{bmatrix} r(x_0) \\ g(x_0) \end{bmatrix}. \tag{7} \]

The solution of (7) is

\[ \frac{\gamma^*}{2\sigma^2} = (G^T R^{-1} G)^{-1} \big( G^T R^{-1} r(x_0) - g(x_0) \big), \qquad \lambda^* = R^{-1} \Big( r(x_0) - G \frac{\gamma^*}{2\sigma^2} \Big). \tag{8} \]

Substituting (8) into (3), the kriging predictor can be derived as

\[ \hat{y}(x_0) = \lambda^{*T} Y_X = \beta^T g(x_0) + r^T(x_0) R^{-1} (Y_X - G\beta). \tag{9} \]

The MSPE of the kriging prediction $\hat{y}(x_0)$ is

\[ \mathrm{MSPE}(\hat{y}(x_0)) = \sigma^2 \big( 1 + \lambda^{*T} R \lambda^* - 2 \lambda^{*T} r(x_0) \big) = \sigma^2 \big( 1 + u^T (G^T R^{-1} G)^{-1} u - r^T(x_0) R^{-1} r(x_0) \big), \tag{10} \]

where $u = G^T R^{-1} r(x_0) - g(x_0)$. Given the chosen correlation-function model (2), the kriging predictor uses the maximum likelihood method to estimate $\psi$, where the log-likelihood of the model parameters is

\[ \mathcal{L}(\psi) = -\frac{n}{2} \ln \sigma^2 - \frac{1}{2} \ln |R| - \frac{1}{2\sigma^2} \big( Y_X - G\beta \big)^T R^{-1} \big( Y_X - G\beta \big). \tag{11} \]
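For fixed correlation parameters $\theta$, maximising (11) over $\beta$ and $\sigma^2$ has well-known closed forms, $\hat{\beta} = (G^T R^{-1} G)^{-1} G^T R^{-1} Y_X$ and $\hat{\sigma}^2 = (Y_X - G\hat{\beta})^T R^{-1} (Y_X - G\hat{\beta})/n$. The paper computes these via the DACE toolbox; the sketch below (the function name `profiled_mle` is ours) only illustrates the algebra:

```python
import numpy as np

def profiled_mle(Y, G, R):
    """Closed-form ML estimates of beta and sigma^2 for a fixed correlation
    matrix R, obtained by maximising the log-likelihood (11)."""
    n = len(Y)
    RiG = np.linalg.solve(R, G)      # R^{-1} G without forming the inverse
    RiY = np.linalg.solve(R, Y)
    beta = np.linalg.solve(G.T @ RiG, G.T @ RiY)
    resid = Y - G @ beta
    sigma2 = float(resid @ np.linalg.solve(R, resid)) / n
    return beta, sigma2

# with R = I and a constant trend this reduces to the sample mean and variance
beta, sigma2 = profiled_mle(np.array([1.0, 2.0, 3.0, 4.0]),
                            np.ones((4, 1)), np.eye(4))
```

Solving with `np.linalg.solve` rather than forming $R^{-1}$ explicitly is the standard numerically safer choice when $R$ is poorly conditioned, which matters for the condition-number discussion in Section 4.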

1

MLE method is used to discover optimum estimators of correlation function model parameter. Maximum likelihood estimators (MLEs), ˆ  [ˆ , ˆ ,ˆ]T , of log-likehood of (11) produces rather complicated calculation. We use MATLAB DACE Toolbox from [10]. Plugging-in MLE ˆ into kriging predictor (9) produces ˆ 1 Y  Gˆ , yˆ ( x0 )Plug-in  yˆ ( x0 , ˆ )  ˆ T g ( x0 )  rˆT ( x0 )R (12) X



while plugging-in MLE ˆ into (10) MSPE  yˆ ( x0 ) Plug-in  MSPE yˆ n ( x0n ), ˆ  mPlug-in











ˆ 1G )1ˆ  rˆT ( x )R ˆ 1rˆ( x ) ,  ˆ 2 1  ˆT (GT R 0 0 makes MSPE (12), which is called plug-in kriging variance, underestimated.

(13)
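As a sketch (ours, not the paper's DACE implementation): with a constant trend $g(x) = 1$ (ordinary kriging) and $\theta$ taken as given in place of the MLE $\hat{\theta}$, the plug-in predictor (12) and plug-in variance (13) can be computed as follows:

```python
import numpy as np

def plugin_kriging(x0, Xn, Y, theta):
    """Plug-in kriging predictor (12) and plug-in kriging variance (13),
    constant trend g(x) = 1; theta plays the role of the MLE theta-hat."""
    def corr(A, B):
        d2 = (np.atleast_2d(A)[:, None, :] - np.atleast_2d(B)[None, :, :]) ** 2
        return np.exp(-d2 @ np.asarray(theta, dtype=float))
    Xn = np.atleast_2d(Xn)
    n = len(Xn)
    R = corr(Xn, Xn)
    G = np.ones((n, 1))
    RiG, RiY = np.linalg.solve(R, G), np.linalg.solve(R, Y)
    beta = np.linalg.solve(G.T @ RiG, G.T @ RiY)           # GLS/ML beta-hat
    resid = Y - G @ beta
    sigma2 = float(resid @ np.linalg.solve(R, resid)) / n  # ML sigma^2-hat
    r0 = corr(Xn, np.atleast_2d(x0)).ravel()
    Rir0 = np.linalg.solve(R, r0)
    yhat = beta[0] + r0 @ np.linalg.solve(R, resid)        # eq. (12)
    u = (G.T @ Rir0).item() - 1.0                          # u-hat with g(x0) = 1
    m_plug = sigma2 * (1.0 + u**2 / (G.T @ RiG).item() - r0 @ Rir0)  # eq. (13)
    return float(yhat), float(m_plug)
```

At a design point the predictor interpolates exactly and (13) collapses to zero; between design points it is positive. This exactness is the interpolation property the paper exploits in Section 5.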


Proposition 1. Let $\mathrm{MSPE}(\hat{y}(x_0))$ be the true kriging variance at the untried point $x_0$, which is generally unknown. Plugging $\hat{\psi}$ into the kriging predictor produces an underestimated (biased) estimator of $\mathrm{MSPE}(\hat{y}(x_0))$.

Proof. Note that $\mathrm{MSPE}(\hat{y}(x_0)) = E[(\hat{y}(x_0) - y(x_0))^2]$. Rewriting part of the equation gives

\[ \begin{aligned} \mathrm{MSPE}(\hat{y}(x_0)) &= E\big[ (\hat{y}(x_0) - \hat{y}(x_0)_{\text{Plug-in}})^2 + 2 (\hat{y}(x_0) - \hat{y}(x_0)_{\text{Plug-in}}) (\hat{y}(x_0)_{\text{Plug-in}} - y(x_0)) + (\hat{y}(x_0)_{\text{Plug-in}} - y(x_0))^2 \big] \\ &= E\big[ (\hat{y}(x_0) - \hat{y}(x_0)_{\text{Plug-in}})^2 \big] + 2 E\big[ \hat{y}(x_0) - \hat{y}(x_0)_{\text{Plug-in}} \big] E\big[ \hat{y}(x_0)_{\text{Plug-in}} - y(x_0) \big] + E\big[ (\hat{y}(x_0)_{\text{Plug-in}} - y(x_0))^2 \big]. \end{aligned} \]

Because the kriging predictor is unbiased, $E[\hat{y}(x_0)_{\text{Plug-in}} - y(x_0)] = 0$, producing

\[ \mathrm{MSPE}(\hat{y}(x_0)) = E\big[ (\hat{y}(x_0) - \hat{y}(x_0)_{\text{Plug-in}})^2 \big] + \mathrm{MSPE}\big( \hat{y}(x_0) \big)_{\text{Plug-in}} \ge \mathrm{MSPE}\big( \hat{y}(x_0) \big)_{\text{Plug-in}}. \quad \square \]

3. Algorithm of the Generic Estimator of the Kriging Variance

In this paper we adopt the directly-semiparametric bootstrapping algorithm from [6] without modification, because this study is a continuation of [5], [6].

Directly-Semiparametric Bootstrapping Algorithm
1. Based on the observed I/O data, determine the MLE $\hat{\psi} = [\hat{\beta}, \hat{\sigma}^2, \hat{\theta}]^T$ to estimate the distribution of $Y(x)$.
2. Based on step 1, determine $\hat{\Sigma} = \hat{\sigma}^2 \hat{R} = L_n L_n^T$, where $L_n$ is the $n \times n$ lower triangular matrix of the Cholesky decomposition.
3. Determine $U = L_n^{-1} Y_X = [u_1 \; \cdots \; u_n]^T$, called the decorrelation transformation.
4. Determine $\bar{U} = [\bar{u}_1 \; \cdots \; \bar{u}_n]^T$, where $\bar{u}_i = u_i - \frac{1}{n} \sum_{j=1}^{n} u_j$.
5. Sample $U^{*b}_{(n+1) \times 1}$ from step 4 (with replacement), for $b = 1, 2, \ldots, B$.
6. Determine
\[ \hat{\Sigma}_{n+1} = \begin{bmatrix} \hat{\Sigma} & \hat{c} \\ \hat{c}^T & \hat{\sigma}^2 \end{bmatrix}, \qquad \text{where } \hat{c} = \hat{\sigma}^2 \hat{r}(x_0), \]
to get $L_{n+1}$ by Cholesky decomposition.
7. Sample $[Y_X^{*b}, y^{*b}(x_0)]^T = L_{n+1} U^{*b}_{n+1}$.
8. Using the bootstrap sample $Y_X^{*b}$, fit a kriging model.
9. Based on step 8, determine the kriging prediction at $x_0$, say $\hat{y}^{*b}(x_0)$.
10. Based on the $B$ bootstrap samples, calculate the generic estimator by the formula

\[ \mathrm{MSPE}\big( \hat{y}^{*}(x_0) \big) = m_{\text{SPB}} = \frac{1}{B} \sum_{b=1}^{B} \big( \hat{y}^{*b}(x_0) - y^{*b}(x_0) \big)^2. \tag{14} \]
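The ten steps above can be sketched as follows. This is our illustrative Python rendition, not the paper's MATLAB DACE implementation: $\hat{\psi}$ is profiled in closed form for a constant trend, and, to keep the sketch short, the re-fit in step 8 reuses the fixed $\theta$ instead of re-maximising the likelihood for every bootstrap replicate.

```python
import numpy as np

rng = np.random.default_rng(0)

def semiparam_bootstrap_mspe(x0, Xn, Y, theta, B=1000):
    """Directly-semiparametric bootstrap estimate m_SPB of eq. (14);
    theta plays the role of the MLE theta-hat (step 1)."""
    def corr(A, B_):
        d2 = (np.atleast_2d(A)[:, None, :] - np.atleast_2d(B_)[None, :, :]) ** 2
        return np.exp(-d2 @ np.asarray(theta, dtype=float))
    Xn = np.atleast_2d(Xn)
    n = len(Xn)
    R = corr(Xn, Xn)
    G = np.ones((n, 1))
    RiG = np.linalg.solve(R, G)
    beta = np.linalg.solve(G.T @ RiG, G.T @ np.linalg.solve(R, Y))
    resid = Y - G @ beta
    sigma2 = float(resid @ np.linalg.solve(R, resid)) / n
    L = np.linalg.cholesky(sigma2 * R)        # step 2: Sigma-hat = L L^T
    U = np.linalg.solve(L, Y)                 # step 3: decorrelation
    Uc = U - U.mean()                         # step 4: centring
    r0 = corr(Xn, np.atleast_2d(x0)).ravel()
    S = np.empty((n + 1, n + 1))              # step 6: extended Sigma-hat
    S[:n, :n] = sigma2 * R
    S[:n, n] = S[n, :n] = sigma2 * r0
    S[n, n] = sigma2
    L1 = np.linalg.cholesky(S)
    mspe = 0.0
    for _ in range(B):
        Ustar = rng.choice(Uc, size=n + 1, replace=True)   # step 5
        Ystar = L1 @ Ustar                                 # step 7: recorrelate
        Yb, y0b = Ystar[:n], Ystar[n]
        # steps 8-9: kriging prediction at x0 from the bootstrap outputs
        bb = np.linalg.solve(G.T @ RiG, G.T @ np.linalg.solve(R, Yb))
        yhat_b = bb[0] + r0 @ np.linalg.solve(R, Yb - G @ bb)
        mspe += (yhat_b - y0b) ** 2                        # step 10
    return mspe / B                                        # eq. (14)
```

The key difference from parametric bootstrapping is step 5: the centred decorrelated residuals are resampled with replacement rather than drawn from a fitted Gaussian, so no distributional assumption is imposed on them.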

4. Simulation Results

Simulation optimization requires considerable computation time, because the kriging predictor computes the MLEs numerically with a complicated constrained-maximization algorithm. Considering this, we plot only two experimental designs of observed I/O data, where the data size increases, for inputs of one, two and three dimensions. All input-point experimental designs follow [6], but with increased design sizes. In this simulation, the accuracy of the solution of the kriging system (7) is influenced by the condition number of the correlation matrix $\hat{R}$. The magnitude of the condition number $\kappa$ is due to errors in the matrix elements, long algorithms and rounding (see [17]). If $\kappa$ is relatively large, the kriging system (7) produces an inaccurate solution or even no solution at all. One of the considerations in this paper is the accumulated rounding, called rounding error, in expensive simulation, which may leave the correlation matrix $\hat{R}$ ill-conditioned. The properties observed in this simulation, as the size of the observed I/O data increases, are (i) the generic estimate versus the plug-in kriging variance, (ii) the trend of the estimates of both estimators, and (iii) the condition number of the correlation matrix.

4.1. Input Points with One Dimension

The experimental design used in [6] for one-dimensional input points is an equispaced sampling strategy over the input space $[0, 10]$. The multi-modal test function is taken from [1],

\[ f(x) = 0.0579 x^4 - 1.11 x^3 + 6.845 x^2 - 14.007 x + 2, \]

with initial value $\theta_0 = 1$ and correlation-parameter space $(0, 20]$. The experimental-design sizes of the observed I/O data are $n = 4$ and $n = 9$, with untried point $x_0 = 1.1115$. For bootstrap sample size $B = 10000$, the simulation in Figure 1 shows that (i) the generic estimate $m_{\text{SPB}}$ is always larger than the plug-in kriging variance $m_{\text{Plug-in}}$, (ii) the estimates of both estimators decline towards zero, and (iii) the observed I/O data $n = 4$ produce $\kappa \approx 1$, while $n = 9$ produces $\kappa = 41.7926$.
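The growth of the condition number $\kappa$ with the design size can be illustrated directly. This is our sketch, with $\theta = 1$ an arbitrary illustrative value rather than the paper's MLE:

```python
import numpy as np

def cond_gauss(n, theta=1.0, lo=0.0, hi=10.0):
    """2-norm condition number of the Gaussian correlation matrix (2)
    for an equispaced n-point design on [lo, hi]."""
    x = np.linspace(lo, hi, n)
    R = np.exp(-theta * (x[:, None] - x[None, :]) ** 2)
    return np.linalg.cond(R)

kappas = [cond_gauss(n) for n in (4, 9, 17, 33)]  # grows with the design size
```

Denser designs make neighbouring rows of $R$ more alike, so $\kappa$ grows monotonically, which is the mechanism behind the ill-conditioning reported in this section.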


Figure 1. Plots of the generic estimator vs the plug-in kriging variance for one-dimensional inputs and bootstrap sample size $B = 10000$. (a) Observed I/O data $n = 4$, (b) observed I/O data $n = 9$.

4.2. Input Points with Two Dimensions

As in the experimental design for one-dimensional inputs, we use the unmodified observed I/O data from [6]. The experimental-design sizes of the observed I/O data are $n = 25$ and $n = 41$, with untried point $x_0 = [-4.5 \; 4.5]$. The input space is $(x_{1i}, x_{2i}) \in [-5, 10] \times [0, 15] \subset \mathbb{R}^2$, with response vector $Y_X$ given by the Branin test function

\[ y(x_{1i}, x_{2i}) = \Big( x_{2i} - \frac{5.1}{4\pi^2} x_{1i}^2 + \frac{5}{\pi} x_{1i} - 6 \Big)^2 + 10 \Big( 1 - \frac{1}{8\pi} \Big) \cos x_{1i} + 10, \]

with initial values $\theta_0 = [2 \; 2]$, $\text{lob} = [10^{-1} \; 10^{-1}]$ and $\text{upb} = [10 \; 10]$. For the same bootstrap sample size as for one-dimensional inputs, Figure 2 shows the same behaviour as the one-dimensional case. The observed I/O data $n = 25$ and $n = 41$ produce $\kappa = 1.0972 \times 10^{4}$ and $\kappa = 7.992 \times 10^{12}$, respectively.
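For reference, the Branin function above in code form (our sketch):

```python
import numpy as np

def branin(x1, x2):
    """Branin test function on [-5, 10] x [0, 15] (Section 4.2)."""
    return ((x2 - 5.1 / (4.0 * np.pi**2) * x1**2 + 5.0 / np.pi * x1 - 6.0) ** 2
            + 10.0 * (1.0 - 1.0 / (8.0 * np.pi)) * np.cos(x1) + 10.0)
```

Its three global minima all attain approximately 0.397887, for example at $(\pi, 2.275)$, which makes it a convenient sanity check for any surrogate fitted to it.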


Figure 2. Plots of the generic estimator vs the plug-in kriging variance for two-dimensional inputs and bootstrap sample size $B = 10000$. (a) Observed I/O data $n = 25$, (b) observed I/O data $n = 41$.


4.3. Input Points with Three Dimensions

Following the experimental locations of [6], for three-dimensional inputs the sizes of the observed I/O data are $n = 32$ and $n = 200$, with untried point $x_0 = [0.5 \; 0.5 \; 0.5]$. The multi-modal Hartman 3 test function, which has four local minima and one global minimum, is

\[ f(x) = -\sum_{i=1}^{4} \alpha_i \exp\Big( -\sum_{j=1}^{3} K_{ij} (x_j - H_{ij})^2 \Big), \]

where $\alpha = (1.0, 1.2, 3.0, 3.2)^T$,

\[ K = \begin{bmatrix} 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \end{bmatrix}, \qquad H = 10^{-4} \begin{bmatrix} 3689 & 1170 & 2673 \\ 4699 & 4387 & 7470 \\ 1091 & 8732 & 5547 \\ 381 & 5743 & 8828 \end{bmatrix}, \]

with $x \in [0,1] \times [0,1] \times [0,1] \subset \mathbb{R}^3$, $\theta_0 = [1 \; 1 \; 1]$, $\text{lob} = [10^{-1} \; 10^{-1} \; 10^{-1}]$ and $\text{upb} = [10 \; 10 \; 10]$. The treatment is the same as for one- and two-dimensional inputs for the bootstrap samples; Figure 3 shows the same behaviour as the one- and two-dimensional cases. The observed I/O data $n = 32$ and $n = 200$ produce $\kappa = 1.506 \times 10^{5}$ and $\kappa = 5.1071 \times 10^{15}$, respectively.
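For reference, the Hartman 3 function with the constants above (our sketch):

```python
import numpy as np

ALPHA = np.array([1.0, 1.2, 3.0, 3.2])
K = np.array([[3.0, 10.0, 30.0],
              [0.1, 10.0, 35.0],
              [3.0, 10.0, 30.0],
              [0.1, 10.0, 35.0]])
H = 1e-4 * np.array([[3689.0, 1170.0, 2673.0],
                     [4699.0, 4387.0, 7470.0],
                     [1091.0, 8732.0, 5547.0],
                     [ 381.0, 5743.0, 8828.0]])

def hartman3(x):
    """Hartman 3 test function on [0, 1]^3: four local minima, one global."""
    x = np.asarray(x, dtype=float)
    return float(-np.sum(ALPHA * np.exp(-np.sum(K * (x - H) ** 2, axis=1))))
```

The global minimum is approximately $-3.8628$ at $x \approx (0.1146, 0.5556, 0.8525)$, a convenient check that the constants are transcribed correctly.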


Figure 3. Plots of generic estimators vs plug-in kriging variance where inputs have three dimensions and bootstrap sample size B  10000 . (a) Observed I/O data n  32 , (b) Observed I/O data n  200 .

5. Analytic Study

In this section we assume that the computational aspect is not considered, so the treatment is more natural. This means that the elements of the correlation matrix $\hat{R}$, which is symmetric and positive definite (SPD), are assumed free of rounding, so the inverse of $\hat{R}$ always exists. In that case the kriging system (7) always has a solution as the size of the I/O data tends to infinity. The main purpose is to prove that both the plug-in kriging variance and the generic estimator are asymptotically consistent to zero. The proof proceeds through several tiered (hierarchical) propositions.

5.1. Asymptotic Property of the Plug-in Kriging Variance

To prove that the plug-in kriging variance is asymptotically consistent to zero, tiered propositions are made.

Proposition 2. Let $X_n = [x_1 \; \cdots \; x_n]^T$ be the $n$ design locations in the input space $\Omega$, where $x_i \in X_n \subset \Omega \subset \mathbb{R}^d$, and let $x_0^n \notin X_n$ be an untried point in $\Omega$. Then

\[ \lim_{n \to \infty} \hat{R}_n^{-1} \hat{r}_n(x_0^n) = e_i, \]

where $e_i$ is a canonical vector of the identity matrix $I_\infty$ and $i \in I_\infty$ is an index in the limiting index set.

Proof. As the number of design locations $n \to \infty$, there is an $i \in I_\infty$ such that $x_0^n \in X_\infty$. Then $\hat{r}(x_0^n)$ becomes the $i$-th column of $\hat{R}_\infty$, and because $\hat{R}_\infty^{-1} \hat{R}_\infty = I_\infty$,

\[ \lim_{n \to \infty} \hat{R}_n^{-1} \hat{r}_n(x_0^n) = [0, \ldots, 0, 1, 0, \ldots]^T = e_i. \quad \square \]
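Proposition 2 can be checked numerically: when $x_0$ coincides with a design point $x_i$, $r(x_0)$ is exactly the $i$-th column of $R$, so $R^{-1} r(x_0)$ is the canonical vector $e_i$. This is our sketch, with $\theta = 1$ an illustrative value:

```python
import numpy as np

x = np.linspace(0.0, 10.0, 9)                  # design locations
R = np.exp(-(x[:, None] - x[None, :]) ** 2)    # Gaussian correlation, theta = 1
i = 3
r0 = np.exp(-(x - x[i]) ** 2)                  # r(x0) with x0 = x_i: i-th column of R
w = np.linalg.solve(R, r0)                     # equals the canonical vector e_i
```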

Proposition 3. By Proposition 2,

1. $\displaystyle \lim_{n \to \infty} u_n = \lim_{n \to \infty} \big( G^T \hat{R}_n^{-1} \hat{r}_n(x_0^n) - g(x_0^n) \big) = 0$;
2. $\displaystyle \lim_{n \to \infty} \mathrm{MSPE}\big( \hat{y}(x_0^n), \hat{\psi} \big) = \lim_{n \to \infty} m_{\text{Plug-in}} = 0$.

Proof.
1. By Proposition 2, $\lim_{n \to \infty} \hat{R}_n^{-1} \hat{r}_n(x_0^n) = e_i$, creating

\[ \lim_{n \to \infty} u_n = \lim_{n \to \infty} \big( G^T \hat{R}_n^{-1} \hat{r}_n(x_0^n) - g(x_0^n) \big) = G^T \lim_{n \to \infty} \hat{R}_n^{-1} \hat{r}_n(x_0^n) - g(x_0^n) = G^T e_i - g(x_0^n) = g(x_0^n) - g(x_0^n) = 0. \quad \square \]

2. By Proposition 3.1,

\[ \begin{aligned} \lim_{n \to \infty} \mathrm{MSPE}\big( \hat{y}(x_0^n), \hat{\psi} \big) &= \lim_{n \to \infty} m_{\text{Plug-in}} = \lim_{n \to \infty} \hat{\sigma}^2 \big( 1 + u_n^T (G^T \hat{R}_n^{-1} G)^{-1} u_n - \hat{r}_n^T(x_0^n) \hat{R}_n^{-1} \hat{r}_n(x_0^n) \big) \\ &= \hat{\sigma}^2 \Big( 1 + \lim_{n \to \infty} u_n^T (G^T \hat{R}_n^{-1} G)^{-1} u_n - \lim_{n \to \infty} \hat{r}_n^T(x_0^n) \hat{R}_n^{-1} \hat{r}_n(x_0^n) \Big) \\ &= \hat{\sigma}^2 \Big( 1 - \lim_{n \to \infty} \hat{r}_n^T(x_0^n) \cdot \lim_{n \to \infty} \hat{R}_n^{-1} \hat{r}_n(x_0^n) \Big). \end{aligned} \]

Because $\lim_{n \to \infty} \hat{R}_n^{-1} \hat{r}_n(x_0^n) = e_i$ and $\lim_{n \to \infty} \hat{r}_n(x_0^n) = \hat{R}_{:,i}$ (the $i$-th column of $\hat{R}$), it is obtained that

\[ \lim_{n \to \infty} \mathrm{MSPE}\big( \hat{y}(x_0^n) \big)_{\text{Plug-in}} = \hat{\sigma}^2 \big( 1 - \hat{R}_{:,i}^T e_i \big) = \hat{\sigma}^2 \big( 1 - \hat{R}_{ii} \big) = \hat{\sigma}^2 (1 - 1) = 0. \quad \square \]

Based on this final proposition, the plug-in kriging variance is shown to be asymptotically consistent to zero.

5.2. Asymptotic Property of the Generic Estimator

In the same way as above, tiered propositions are made to prove that the generic estimator is asymptotically consistent to zero.

Proposition 4. If $[Y_X^{*b}, y^{*b}(x_0^n)]^T = L_{n+1}^{*b} U_{n+1}^{*b}$ is a multivariate quasi-bootstrap sample, where $L_{n+1}^{*b}$ is obtained from the Cholesky decomposition of

\[ \hat{\Sigma}_{n+1}^{*b} = \begin{bmatrix} \hat{\Sigma}^{*b} & \hat{c}^{*b} \\ (\hat{c}^{*b})^T & (\hat{\sigma}^2)^{*b} \end{bmatrix}, \]

it produces

1. $\displaystyle \lim_{n \to \infty} \hat{y}^{*b}(x_0^n) = \sum_{j=1}^{\infty} l_{ij}^{*b} u_j^{*b}$;
2. $\displaystyle \lim_{n \to \infty} y^{*b}(x_0^n) = \sum_{j=1}^{\infty} l_{ij}^{*b} u_j^{*b} + \lim_{n \to \infty} l_{(n+1)(n+1)}^{*b} u_{n+1}^{*b}$

for some $i \in I_\infty$.

Proof.
1. If $Y_{X_n}^{*b}$ is the observed response of the $b$-th bootstrap, then the kriging predictor at the untried point $x_0^n$ is

\[ \hat{y}^{*b}(x_0^n) = (\hat{\beta}^{*b})^T g(x_0^n) + \big( \hat{r}_n^{*b}(x_0^n) \big)^T \big( \hat{R}_n^{*b} \big)^{-1} \big( Y_{X_n}^{*b} - G \hat{\beta}^{*b} \big). \]


As the number of design locations $n \to \infty$,

\[ \begin{aligned} \lim_{n \to \infty} \hat{y}^{*b}(x_0^n) &= \lim_{n \to \infty} \Big[ (\hat{\beta}^{*b})^T g(x_0^n) + \big( \hat{r}_n^{*b}(x_0^n) \big)^T \big( \hat{R}_n^{*b} \big)^{-1} \big( Y_{X_n}^{*b} - G \hat{\beta}^{*b} \big) \Big] \\ &= (\hat{\beta}^{*b})^T g(x_0^n) + \lim_{n \to \infty} \big( \hat{r}_n^{*b}(x_0^n) \big)^T \big( \hat{R}_n^{*b} \big)^{-1} \cdot \lim_{n \to \infty} \big( Y_{X_n}^{*b} - G \hat{\beta}^{*b} \big). \end{aligned} \]

Because $\lim_{n \to \infty} ( \hat{r}_n^{*b}(x_0^n) )^T ( \hat{R}_n^{*b} )^{-1} = e_i^T$,

\[ \begin{aligned} \lim_{n \to \infty} \hat{y}^{*b}(x_0^n) &= (\hat{\beta}^{*b})^T g(x_0^n) + e_i^T \lim_{n \to \infty} \big( Y_{X_n}^{*b} - G \hat{\beta}^{*b} \big) \\ &= (\hat{\beta}^{*b})^T g(x_0^n) + e_i^T \lim_{n \to \infty} Y_{X_n}^{*b} - e_i^T G \hat{\beta}^{*b} \\ &= (\hat{\beta}^{*b})^T g(x_0^n) + e_i^T \lim_{n \to \infty} Y_{X_n}^{*b} - (\hat{\beta}^{*b})^T g(x_0^n) \\ &= e_i^T \lim_{n \to \infty} Y_{X_n}^{*b} = e_i^T \lim_{n \to \infty} L_n^{*b} U_n^{*b} = \sum_{j=1}^{\infty} l_{ij}^{*b} u_j^{*b}. \quad \square \end{aligned} \]

2. Note that

\[ \begin{bmatrix} Y_{X_n}^{*b} \\ y^{*b}(x_0^n) \end{bmatrix} = L_{n+1}^{*b} U_{n+1}^{*b} = \begin{bmatrix} l_{11}^{*b} & 0 & \cdots & 0 \\ l_{21}^{*b} & l_{22}^{*b} & & 0 \\ \vdots & & \ddots & \vdots \\ l_{(n+1)1}^{*b} & l_{(n+1)2}^{*b} & \cdots & l_{(n+1)(n+1)}^{*b} \end{bmatrix} \begin{bmatrix} u_1^{*b} \\ u_2^{*b} \\ \vdots \\ u_{n+1}^{*b} \end{bmatrix}, \]

which produces

\[ y^{*b}(x_0^n) = \big[ l_{(n+1)1}^{*b} \; \cdots \; l_{(n+1)(n+1)}^{*b} \big] \begin{bmatrix} u_1^{*b} \\ \vdots \\ u_{n+1}^{*b} \end{bmatrix} = \sum_{j=1}^{n+1} l_{(n+1)j}^{*b} u_j^{*b}. \]

Based on the proof of Proposition 2, $\lim_{n \to \infty} [ l_{(n+1)1}^{*b} \; \cdots \; l_{(n+1)n}^{*b} ]$ equals the $i$-th row of $\lim_{n \to \infty} L_n^{*b}$, producing

\[ \lim_{n \to \infty} y^{*b}(x_0^n) = \sum_{j=1}^{\infty} l_{ij}^{*b} u_j^{*b} + \lim_{n \to \infty} l_{(n+1)(n+1)}^{*b} u_{n+1}^{*b}. \quad \square \]

Proposition 5. If $y^{*b}(x_0^n) = [ l_{(n+1)1}^{*b} \; \cdots \; l_{(n+1)(n+1)}^{*b} ] \, [ u_1^{*b} \; \cdots \; u_{n+1}^{*b} ]^T = \sum_{j=1}^{n+1} l_{(n+1)j}^{*b} u_j^{*b}$, then $\displaystyle \lim_{n \to \infty} l_{(n+1)(n+1)}^{*b} u_{n+1}^{*b} = 0$.


Proof. Recall the equation for the main-diagonal elements of the Cholesky factor $L_{n+1}$ of a matrix $A$,

\[ l_{jj} = \Big( A_{jj} - \sum_{k=1}^{j-1} l_{jk}^2 \Big)^{1/2}. \]

Applying the same rule to $\hat{\Sigma}_{n+1}^{*b}$ gives

\[ l_{(n+1)(n+1)}^{*b} = \Big( (\hat{\sigma}^2)^{*b} \hat{R}_{(n+1)(n+1)}^{*b} - \sum_{k=1}^{n} \big( l_{(n+1)k}^{*b} \big)^2 \Big)^{1/2}. \]

Because $\hat{R}_{(n+1)(n+1)}^{*b} = 1$ and $\lim_{n \to \infty} \sum_{k=1}^{n} ( l_{(n+1)k}^{*b} )^2 = (\hat{\sigma}^2)^{*b}$, this produces

\[ \lim_{n \to \infty} l_{(n+1)(n+1)}^{*b} u_{n+1}^{*b} = \lim_{n \to \infty} \Big( (\hat{\sigma}^2)^{*b} - \sum_{k=1}^{n} \big( l_{(n+1)k}^{*b} \big)^2 \Big)^{1/2} u_{n+1}^{*b} = 0. \quad \square \]
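The diagonal recurrence used in the proof can be verified against a numerical Cholesky factorization. This is our sketch with an arbitrary symmetric positive definite matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
A = A @ A.T + 6.0 * np.eye(6)      # an arbitrary symmetric positive definite matrix
L = np.linalg.cholesky(A)

# l_jj = (A_jj - sum_{k<j} l_jk^2)^(1/2), the recurrence used in the proof
j = 5
ljj = np.sqrt(A[j, j] - np.sum(L[j, :j] ** 2))
```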

Proposition 6. If the number of design locations $n \to \infty$ and $B \to \infty$, the following holds:

\[ \lim_{\substack{n \to \infty \\ B \to \infty}} \mathrm{MSPE}\big( \hat{y}^{*}(x_0^n) \big) = \lim_{\substack{n \to \infty \\ B \to \infty}} \frac{1}{B} \sum_{b=1}^{B} \big( \hat{y}^{*b}(x_0^n) - y^{*b}(x_0^n) \big)^2 = 0. \]

Proof. Note that

\[ \lim_{\substack{n \to \infty \\ B \to \infty}} \mathrm{MSPE}\big( \hat{y}^{*}(x_0^n) \big) = \lim_{B \to \infty} \frac{1}{B} \sum_{b=1}^{B} \lim_{n \to \infty} \big( \hat{y}^{*b}(x_0^n) - y^{*b}(x_0^n) \big)^2. \]

By Proposition 4,

\[ \lim_{n \to \infty} \big( \hat{y}^{*b}(x_0^n) - y^{*b}(x_0^n) \big)^2 = \Big( \sum_{j=1}^{\infty} l_{ij}^{*b} u_j^{*b} - \sum_{j=1}^{\infty} l_{ij}^{*b} u_j^{*b} - \lim_{n \to \infty} l_{(n+1)(n+1)}^{*b} u_{n+1}^{*b} \Big)^2 = \Big( \lim_{n \to \infty} l_{(n+1)(n+1)}^{*b} u_{n+1}^{*b} \Big)^2, \]

and using Proposition 5 it is finally obtained that $\displaystyle \lim_{n, B \to \infty} \mathrm{MSPE}\big( \hat{y}^{*}(x_0^n) \big) = 0$. $\square$

Based on the last proposition, it is shown that the generic estimator of the kriging variance, the semiparametric bootstrapping kriging variance, is asymptotically consistent to zero.


6. Conclusion

From the results of the expensive simulation (simulation optimization) with increasing observed I/O data, it can be concluded that (i) the generic estimates of the kriging variance are always larger than the plug-in kriging variance, (ii) the estimates of both estimators decrease towards zero, and (iii) the condition number of the correlation matrix increases. This condition number is influenced by the amount of rounding in the optimization solution and by the input dimension; the simulation results for inputs with more than one dimension show relatively larger condition numbers, so the correlation matrix can become ill-conditioned. Under the assumption that this computational aspect is ignored, it is shown analytically, through tiered (hierarchical) propositions, that both estimators are asymptotically consistent to zero.

References

[1] D. den Hertog, J. P. C. Kleijnen and A. Y. D. Siem, The correct Kriging variance estimated by bootstrapping, Journal of the Operational Research Society, 57 (2006), no. 4, 400-409. http://dx.doi.org/10.1057/palgrave.jors.2601997

[2] J. P. C. Kleijnen and E. Mehdad, Conditional simulation for efficient global optimization, Proceedings of the 2013 Winter Simulation Conference, IEEE Press (2013), 969-979. http://dx.doi.org/10.1109/wsc.2013.6721487

[3] J. P. C. Kleijnen, Simulation-optimization via Kriging and bootstrapping: a survey, Journal of Simulation, 8 (2014), no. 4, 241-250. http://dx.doi.org/10.1057/jos.2014.4

[4] B. Efron and R. J. Tibshirani, An Introduction to the Bootstrap, Chapman & Hall, New York, 1993.

[5] E. Simamora, Subanar and S. H. Kartiko, The procedure of kriging variance estimation based on semiparametric bootstrapping in deterministic simulation, International Journal of Applied Mathematics and Statistics, 52 (2014), no. 7, 99-110.

[6] E. Simamora, Subanar and S. H. Kartiko, A comparative study of parametric and semiparametric bootstrapping in deterministic simulation, accepted, International Journal of Applied Mathematics and Statistics, 2015.

[7] A. R. Solow, Bootstrapping correlated data, Mathematical Geology, 17 (1985), no. 7, 769-775. http://dx.doi.org/10.1007/bf01031616

[8] L. Schelin and S. Sjöstedt-de Luna, Kriging prediction intervals based on semiparametric bootstrap, Mathematical Geosciences, 42 (2010), no. 8, 985-1000. http://dx.doi.org/10.1007/s11004-010-9302-9

[9] N. Iranpanah, A. Mansourian, B. Tashayo and F. Haghighi, Spatial semi-parametric bootstrap method for analysis of kriging predictor of random field, Procedia Environmental Sciences, 3 (2011), 81-86. http://dx.doi.org/10.1016/j.proenv.2011.02.015

[10] S. N. Lophaven, H. B. Nielsen and J. Søndergaard, DACE: A MATLAB Kriging Toolbox, Version 2.0, IMM, Technical University of Denmark, Lyngby, 2002.

[11] S. N. Lophaven, H. B. Nielsen and J. Søndergaard, Aspects of the Matlab Toolbox DACE, Informatics and Mathematical Modelling, Technical University of Denmark, DTU, 2002.

[12] R. Zimmermann, Asymptotic behavior of the likelihood function of covariance matrices of spatial Gaussian processes, Journal of Applied Mathematics, 2010 (2010), 1-17. http://dx.doi.org/10.1155/2010/494070

[13] A. Forrester, A. Sóbester and A. Keane, Engineering Design via Surrogate Modelling: A Practical Guide, John Wiley & Sons, 2008. http://dx.doi.org/10.1002/9780470770801

[14] J. Sacks, W. J. Welch, T. J. Mitchell and H. P. Wynn, Design and analysis of computer experiments, Statistical Science, 4 (1989), 409-423. http://dx.doi.org/10.1214/ss/1177012413

[15] J. Nocedal and S. J. Wright, Numerical Optimization, Springer, New York, 1999. http://dx.doi.org/10.1007/b98874

[16] N. A. C. Cressie, Statistics for Spatial Data, Wiley, New York, 1993.

[17] I. C. F. Ipsen, Numerical Matrix Analysis: Linear Systems and Least Squares, SIAM, Philadelphia, 2009. http://dx.doi.org/10.1137/1.9780898717686

Received: February 17, 2015; Published: March 26, 2015