Statistical Papers (2008) 49:249–262
DOI 10.1007/s00362-006-0010-y
REGULAR ARTICLE

Improved estimators of common variance of p-populations when kurtosis is known

Honest W. Chipoyera · Eshetu Wencheko

Received: 20 June 2005 / Revised: 6 June 2006 / Published online: 19 April 2007 © Springer-Verlag 2007

Abstract  The pooled variance $S^2_{pooled}$ of $p$ samples presumed to have been obtained from $p$ populations having common variance $\sigma^2$ has invariably been adopted as the default estimator of $\sigma^2$. In this paper, alternative estimators of the common population variance are developed. These estimators are biased but have lower mean-squared error values than $S^2_{pooled}$. Their comparative merit over the unbiased estimator is explored using relative efficiency, a ratio of mean-squared error values.

Keywords  Hessian · Kurtosis · Mean-squared error · Non-singular matrix · Pooled sample variance · Positive definite matrix · Relative efficiency

1 Introduction

Consider $p$ random samples $(X_{11},\dots,X_{n_1 1}), (X_{12},\dots,X_{n_2 2}),\dots,(X_{1p},\dots,X_{n_p p})$ obtained from $p$ independent populations having respective distributions $D_{X_1}(\cdot;\sigma^2), D_{X_2}(\cdot;\sigma^2),\dots,D_{X_p}(\cdot;\sigma^2)$, where $\sigma^2$ is the common variance of the $p$ distributions. Traditionally, when calculating an estimator of $\sigma^2$, one starts off with respective sample sizes $n_1, n_2,\dots,n_p$ from the $p$ populations to form a combined sample of size $N = \sum_{j=1}^{p} n_j$, with respective unbiased sample variances $S_j^2$ given by

$$S_j^2 = \frac{1}{n_j-1}\sum_{k=1}^{n_j}\left(X_{kj}-\bar{X}_j\right)^2, \qquad n_j \ge 2. \tag{1}$$

H. W. Chipoyera (B)
Department of Statistics and Operations Research, University of Limpopo, P. Bag X1106, Sovenga 0727, Republic of South Africa
e-mail: [email protected]

E. Wencheko
Department of Statistics, Addis Ababa University, P.O. Box 1176, Addis Ababa, Ethiopia

Using these unbiased sample variances one then calculates the pooled sample variance $S^2_{pooled}$, which has invariably been treated as the default estimator for $\sigma^2$, where

$$S^2_{pooled} = S^2_p = \frac{1}{N-p}\sum_{j=1}^{p}(n_j-1)S_j^2. \tag{2}$$
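As a quick numerical illustration of (1) and (2) (a minimal Python sketch by the editors, not part of the original paper; the sample values are made up):

```python
import numpy as np

def pooled_variance(samples):
    """Pooled sample variance (2): a weighted average of the
    unbiased sample variances (1) with weights (n_j - 1)/(N - p)."""
    p = len(samples)
    N = sum(len(s) for s in samples)
    # np.var with ddof=1 is the unbiased sample variance S_j^2 of (1)
    return sum((len(s) - 1) * np.var(s, ddof=1) for s in samples) / (N - p)

x1 = [1.0, 2.0, 3.0]              # n_1 = 3, S_1^2 = 1.0
x2 = [2.0, 4.0, 6.0, 8.0]         # n_2 = 4, S_2^2 = 20/3
print(pooled_variance([x1, x2]))  # (2*1 + 3*20/3) / 5 = 4.4
```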

Remark 1  A major reason why $S^2_{pooled}$ has been taken as the default estimator of $\sigma^2$ is that it is an unbiased estimator of $\sigma^2$.

In this paper, two estimators of the common variance $\sigma^2$ are developed.

1. The first estimator, which we denote by $S^2_{w_*.pooled}$, is developed as follows. The estimator has the form $S^2_{w.pooled} = wS^2_{pooled}$, $w \in (0,1]$, where the weight $w$ is a function of the sample sizes $n_1, n_2,\dots,n_p$ and the kurtosis $\alpha_4$. For every choice of the factor $w \ne 1$ the estimator $S^2_{w.pooled}$ is a biased estimator of $\sigma^2$. The performance of the biased estimators $S^2_{w.pooled}$ relative to $S^2_{pooled}$ can be studied by comparing the mean-squared error (MSE) of the biased estimators with the variance of $S^2_{pooled}$. This class of biased estimators of the variance can be expressed as
$$C_w(S^2_{w.pooled}; n, w) = \left\{ S^2_{w.pooled} \,\middle|\, S^2_{w.pooled} = wS^2_{pooled},\ w \in (0,1) \right\}.$$

Remark 2  Amongst the estimators in the class $C_w$, the one with the lowest value of $MSE(S^2_{w.pooled})$ is denoted by $S^2_{w_*.pooled}$.

2. The second class of biased estimators of $\sigma^2$ is given by
$$C_\omega(S^2_{\omega.ws}; \omega) = \left\{ S^2_{\omega.ws} = \sum_{j=1}^{p}\omega_j S_j^2,\ \omega_j > 0 \right\}.$$
Observe that an estimator from this class has the form
$$S^2_{\omega.ws} = \omega_1 S_1^2 + \omega_2 S_2^2 + \cdots + \omega_p S_p^2 = \sum_{j=1}^{p}\omega_j S_j^2 = \omega^T V, \tag{3}$$


where $\omega = (\omega_1,\dots,\omega_p)^T$ (i.e. $\omega$ is a non-zero column vector of the weights) and $V = (S_1^2,\dots,S_p^2)^T$ (i.e. $V$ is a column vector of the unbiased sample variances). We seek the vector $\omega = \omega_*$ which minimises $MSE(S^2_{\omega.ws})$, leading to an improved estimator of $\sigma^2$ given by $S^2_{\omega_*.ws} = \omega_*^T V$. (The motivation for the notation $S^2_{\omega_*.ws}$ comes from the fact that the estimator is a weighted sum, hence the subscript $ws$.)

2 The theoretical results

The theoretical results leading to more efficient estimators of the common variance of $p$ independent populations are discussed here. The approach taken in this paper can be viewed, just like in the paper by Wencheko and Wijekoon (2005) and the two forthcoming pieces of work by Chipoyera and Wencheko (2006), as an extension of the works by Kleffe (1985) and Searls and Intarapanich (1990). We now consider the theoretical results for the two estimators of $\sigma^2$.

2.1 Results for $S^2_{w_*.pooled}$

It is a well-known fact that the variance of the unbiased sample variance constructed from the $j$th sample, $S_j^2$, is given by

$$\mathrm{var}(S_j^2) = \frac{1}{n_j}\left(\mu_4 - \frac{n_j-3}{n_j-1}\,\sigma^4\right). \tag{4}$$

Consequently, the mean-squared error of the pooled variance $S^2_p$ is given by

$$MSE(S^2_{pooled}) = \mathrm{var}(S^2_p) = \sum_{j=1}^{p}\left(\frac{n_j-1}{N-p}\right)^2 \mathrm{var}(S_j^2)
= \sum_{j=1}^{p}\left(\frac{n_j-1}{N-p}\right)^2 \frac{1}{n_j}\left(\mu_4 - \frac{n_j-3}{n_j-1}\,\sigma^4\right)
= \left[\frac{\alpha_4}{(N-p)^2}\sum_{j=1}^{p}\frac{(n_j-1)^2}{n_j} + \frac{2}{N-p}\right]\sigma^4, \tag{5}$$

where $\alpha_4$ is the kurtosis coefficient, which equals zero for normal distributions.

On the other hand, the MSE of a biased estimator $S^2_{w.pooled} = wS^2_{pooled}$ is

$$MSE(S^2_{w.pooled}) = \mathrm{var}(wS^2_p) + \left(E[wS^2_p] - \sigma^2\right)^2 = w^2\,\mathrm{var}(S^2_p) + (w-1)^2\sigma^4. \tag{6}$$


Differentiation of (6) with respect to $w$ gives

$$\frac{d\,MSE(S^2_{w.pooled})}{dw} = 2w\,\mathrm{var}(S^2_p) + 2(w-1)\sigma^4 \tag{7}$$

and

$$\frac{d^2\,MSE(S^2_{w.pooled})}{dw^2} = 2\left(\mathrm{var}(S^2_p) + \sigma^4\right) > 0. \tag{8}$$

Since the second derivative is positive, we obtain the value of $w$ for which $MSE(S^2_{w.pooled})$ is minimum by setting (7) to zero; this value is

$$w_* = \frac{\sigma^4}{\mathrm{var}(S^2_p) + \sigma^4} \in (0,1). \tag{9}$$

From results (9) and (6), we thus obtain

$$w_* = \left[\frac{\alpha_4}{(N-p)^2}\sum_{j=1}^{p}\frac{(n_j-1)^2}{n_j} + \frac{2}{N-p} + 1\right]^{-1} \tag{10}$$

and

$$MSE(S^2_{w_*.pooled}) = w_*^2\,\mathrm{var}(S^2_p) + (w_*-1)^2\sigma^4
= \left\{ w_*^2\left[\frac{\alpha_4}{(N-p)^2}\sum_{j=1}^{p}\frac{(n_j-1)^2}{n_j} + \frac{2}{N-p}\right] + (w_*-1)^2\right\}\sigma^4. \tag{11}$$

Lemma 1  The relative efficiency of $S^2_{w_*.pooled} = w_* S^2_{pooled}$ is

$$RE\left(S^2_{w_*.pooled}\right) = 1 + \frac{\mathrm{var}(S^2_p)}{\sigma^4} = 1 + \frac{2}{N-p} + \frac{\alpha_4}{(N-p)^2}\sum_{j=1}^{p}\frac{(n_j-1)^2}{n_j} > 1. \tag{12}$$

Remark 3  We note that the relative efficiency $RE(S^2_{w_*.pooled})$ is always greater than 1; thus, the estimator $S^2_{w_*.pooled}$ of $\sigma^2$ is always more efficient than the pooled variance $S^2_p$.

From the results (5), (10) and (11), we note that for data that come from normal distributions (i.e. when $\alpha_4 = 0$), the quantities $MSE(S^2_{pooled})$, $w_*$ and $MSE(S^2_{w_*.pooled})$ do not depend on the individual sample sizes $n_1, n_2,\dots,n_p$; rather, they depend only on the overall combined sample size $N$ (and the number of populations $p$).
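The closed forms (5), (10) and (12) are easy to evaluate numerically. The short Python sketch below (the editors' own, not the authors') reproduces, for the normal case $\alpha_4 = 0$ with $p = 2$, $n_1 = n_2 = 5$ and $\sigma = 50$, the values that appear in Table 1:

```python
def sum_term(ns):
    """sum_j (n_j - 1)^2 / n_j, the sum appearing in (5), (10) and (12)."""
    return sum((n - 1) ** 2 / n for n in ns)

def pooled_mse(ns, alpha4, sigma):
    """MSE(S^2_pooled) from (5)."""
    N, p = sum(ns), len(ns)
    return (alpha4 / (N - p) ** 2 * sum_term(ns) + 2 / (N - p)) * sigma ** 4

def w_star(ns, alpha4):
    """Optimal shrinkage factor w* from (10)."""
    N, p = sum(ns), len(ns)
    return 1 / (alpha4 / (N - p) ** 2 * sum_term(ns) + 2 / (N - p) + 1)

def re_w_star(ns, alpha4):
    """Relative efficiency (12): 1 + var(S^2_p)/sigma^4."""
    N, p = sum(ns), len(ns)
    return 1 + alpha4 / (N - p) ** 2 * sum_term(ns) + 2 / (N - p)

print(pooled_mse([5, 5], 0.0, 50))  # 1562500.0
print(w_star([5, 5], 0.0))          # 0.8
print(re_w_star([5, 5], 0.0))       # 1.25
```

Substituting $\alpha_4 = 3$ instead recovers the exponential column of Table 1 (e.g. `pooled_mse([5, 5], 3.0, 50)` gives 3,437,500).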


2.2 Results for $S^2_{\omega.ws}$

Let $S^2_{\omega.ws} = \omega_1 S_1^2 + \omega_2 S_2^2 + \cdots + \omega_p S_p^2 = \omega^T V$. Then, since the samples are independent,

$$MSE(S^2_{\omega.ws}) = \mathrm{var}(\omega^T V) + (\mathbf{1}^T\omega - 1)^2\sigma^4
= \sum_{k=1}^{p}\omega_k^2\,\mathrm{var}(S_k^2) + \left(\sum_{k=1}^{p}\omega_k - 1\right)^2\sigma^4. \tag{13}$$

Differentiating partially with respect to $\omega_i$, $i = 1,\dots,p$, yields

$$\frac{\partial MSE(S^2_{\omega.ws})}{\partial\omega_i} = 2\omega_i\,\mathrm{var}(S_i^2) + 2\left(\sum_{k=1}^{p}\omega_k - 1\right)\sigma^4 \tag{14}$$

and

$$\frac{\partial^2 MSE(S^2_{\omega.ws})}{\partial\omega_i\,\partial\omega_j} = \begin{cases} 2\,\mathrm{var}(S_i^2) + 2\sigma^4, & \text{if } i = j \\ 2\sigma^4, & \text{if } i \ne j. \end{cases} \tag{15}$$

Consequently,

$$\frac{\partial MSE(S^2_{\omega.ws})}{\partial\omega}
= \begin{pmatrix} 2\omega_1\,\mathrm{var}(S_1^2) + 2\sigma^4\left(\sum_{k=1}^{p}\omega_k - 1\right) \\ \vdots \\ 2\omega_p\,\mathrm{var}(S_p^2) + 2\sigma^4\left(\sum_{k=1}^{p}\omega_k - 1\right) \end{pmatrix}
= 2\begin{pmatrix} \sigma^4+\sigma^2_{S_1^2} & \sigma^4 & \dots & \sigma^4 \\ \sigma^4 & \sigma^4+\sigma^2_{S_2^2} & \dots & \sigma^4 \\ \vdots & \vdots & \ddots & \vdots \\ \sigma^4 & \sigma^4 & \dots & \sigma^4+\sigma^2_{S_p^2} \end{pmatrix}\begin{pmatrix}\omega_1\\ \omega_2\\ \vdots\\ \omega_p\end{pmatrix} - 2\sigma^4\begin{pmatrix}1\\ 1\\ \vdots\\ 1\end{pmatrix}
= 2\sigma^4\left(A\omega - \mathbf{1}\right), \tag{16}$$


where

$$A = \begin{pmatrix} 1+\sigma^2_{S_1^2}/\sigma^4 & 1 & \dots & 1 \\ 1 & 1+\sigma^2_{S_2^2}/\sigma^4 & \dots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \dots & 1+\sigma^2_{S_p^2}/\sigma^4 \end{pmatrix}
= \begin{pmatrix} 1+\frac{\alpha_4}{n_1}+\frac{2}{n_1-1} & 1 & \dots & 1 \\ 1 & 1+\frac{\alpha_4}{n_2}+\frac{2}{n_2-1} & \dots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \dots & 1+\frac{\alpha_4}{n_p}+\frac{2}{n_p-1} \end{pmatrix}, \tag{17}$$

$\mathbf{1} = (1,1,\dots,1)^T$ is the column vector of ones of length $p$, and $\sigma^2_{S_i^2} = \mathrm{var}(S_i^2)$, $i = 1,2,\dots,p$.

Remark 4  $A$ is clearly a full-rank (and hence non-singular) matrix; when we equate (16) to the $p \times 1$ zero vector and solve the equation, we get

$$\omega_* = A^{-1}\mathbf{1}. \tag{18}$$

From (15), we obtain the Hessian matrix $H$,

$$H = 2\begin{pmatrix} \sigma^4+\sigma^2_{S_1^2} & \sigma^4 & \dots & \sigma^4 \\ \sigma^4 & \sigma^4+\sigma^2_{S_2^2} & \dots & \sigma^4 \\ \vdots & \vdots & \ddots & \vdots \\ \sigma^4 & \sigma^4 & \dots & \sigma^4+\sigma^2_{S_p^2} \end{pmatrix}, \tag{19}$$

which is a $p \times p$ positive definite matrix; thus $\omega_* = A^{-1}\mathbf{1}$ minimises $MSE(S^2_{\omega.ws})$. Hence, we conclude that among the class of estimators $S^2_{\omega.ws} = \omega^T V$, the estimator

$$S^2_{\omega_*.ws} = \mathbf{1}^T A^{-1} V \tag{20}$$

has the least mean-squared error amongst the class of estimators that are linear combinations of the individual unbiased sample variances $S_1^2,\dots,S_p^2$ obtained from $p$ populations with common variance $\sigma^2$. The mean-squared error of $S^2_{\omega_*.ws}$ is given by

$$MSE(S^2_{\omega_*.ws}) = \sigma^4\left[\sum_{j=1}^{p}\omega_{j*}^2\left(\frac{\alpha_4}{n_j}+\frac{2}{n_j-1}\right) + (\omega_*^T\mathbf{1}-1)^2\right]. \tag{21}$$
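As a numerical check (the editors' own Python/NumPy sketch, not from the paper), (17), (18) and (21) can be evaluated directly. For $p = 2$, $n_1 = n_2 = 5$ and $\alpha_4 = 3$, the resulting relative efficiency matches the value 1.5500 reported in Table 1:

```python
import numpy as np

def omega_star_re(ns, alpha4):
    """Optimal weights (18) and relative efficiency MSE(S^2_p)/MSE(S^2_{w*.ws})."""
    ns = np.asarray(ns, dtype=float)
    N, p = ns.sum(), len(ns)
    v = alpha4 / ns + 2.0 / (ns - 1.0)   # var(S_j^2) / sigma^4
    A = np.ones((p, p)) + np.diag(v)     # the matrix A of (17)
    w = np.linalg.solve(A, np.ones(p))   # omega* = A^{-1} 1, cf. (18)
    mse_ws = (w ** 2 * v).sum() + (w.sum() - 1.0) ** 2           # (21)/sigma^4
    mse_pooled = (alpha4 / (N - p) ** 2 * ((ns - 1) ** 2 / ns).sum()
                  + 2.0 / (N - p))                                # (5)/sigma^4
    return w, mse_pooled / mse_ws

w, re = omega_star_re([5, 5], 3.0)
print(np.round(w, 4))   # [0.3226 0.3226]
print(round(re, 4))     # 1.55
```

Note that $\sigma^4$ cancels in the ratio, illustrating Remark 5 below that the relative efficiency does not depend on $\sigma^2$.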


Table 1  Impact of sample sizes and $\alpha_4$ on $S^2_{pooled}$, $S^2_{w_*.pooled}$ and $S^2_{\omega_*.ws}$ ($p = 2$, $n_2 = N - n_1$)

| Distribution | Statistic | N = 10, n1 = 3 | N = 10, n1 = 4 | N = 10, n1 = 5 | N = 100, n1 = 10 | N = 100, n1 = 25 | N = 100, n1 = 50 |
|---|---|---|---|---|---|---|---|
| Expo(θ), α4 = 3 | MSE(S²pooled) | 3,459,821 | 3,442,383 | 3,437,500 | 315,190 | 315,077 | 315,051 |
|  | MSE(S²w*.pooled) | 2,227,011 | 2,219,773 | 2,217,742 | 300,058 | 299,956 | 299,932 |
|  | MSE(S²ω*.ws) | 2,222,222 | 2,218,677 | 2,217,742 | 299,985 | 299,942 | 299,932 |
|  | RE(S²w*.pooled) | 1.5530 | 1.5510 | 1.5500 | 1.0504 | 1.0504 | 1.0504 |
|  | RE(S²ω*.ws) | 1.5570 | 1.5520 | 1.5500 | 1.0507 | 1.0505 | 1.0504 |
| N(µ, σ²), α4 = 0 | MSE(S²pooled) | 1,562,500 | 1,562,500 | 1,562,500 | 127,551 | 127,551 | 127,551 |
|  | MSE(S²w*.pooled) | 1,250,000 | 1,250,000 | 1,250,000 | 125,000 | 125,000 | 125,000 |
|  | MSE(S²ω*.ws) | 1,250,000 | 1,250,000 | 1,250,000 | 125,000 | 125,000 | 125,000 |
|  | RE(S²w*.pooled) | 1.2500 | 1.2500 | 1.2500 | 1.0200 | 1.0200 | 1.0200 |
|  | RE(S²ω*.ws) | 1.2500 | 1.2500 | 1.2500 | 1.0200 | 1.0200 | 1.0200 |
| U(a, b), α4 = −1.2 | MSE(S²pooled) | 803,571 | 810,547 | 812,500 | 52,495 | 52,541 | 52,551 |
|  | MSE(S²w*.pooled) | 712,025 | 717,497 | 719,027 | 52,058 | 52,103 | 52,113 |
|  | MSE(S²ω*.ws) | 706,763 | 716,146 | 719,027 | 51,993 | 52,088 | 52,113 |
|  | RE(S²w*.pooled) | 1.1286 | 1.1297 | 1.1300 | 1.0084 | 1.0084 | 1.0084 |
|  | RE(S²ω*.ws) | 1.1370 | 1.1318 | 1.1300 | 1.0097 | 1.0087 | 1.0084 |

Thus, the relative efficiency of $S^2_{\omega_*.ws}$ with respect to $S^2_{pooled}$ is given by

$$RE(S^2_{\omega_*.ws}) = \frac{MSE(S^2_p)}{MSE(S^2_{\omega_*.ws})}
= \frac{\dfrac{\alpha_4}{(N-p)^2}\displaystyle\sum_{j=1}^{p}\frac{(n_j-1)^2}{n_j} + \dfrac{2}{N-p}}{\displaystyle\sum_{j=1}^{p}\omega_{j*}^2\left(\frac{\alpha_4}{n_j}+\frac{2}{n_j-1}\right) + (\omega_*^T\mathbf{1}-1)^2}. \tag{22}$$

Remark 5  From (12) and (22), we observe that the relative efficiency expressions do not depend on the parameter $\sigma^2$.

2.3 Impact of kurtosis and sample sizes on relative efficiency of the two estimators

The impact of the kurtosis and the sample sizes on mean-squared error and relative efficiency for the case $p = 2$ is investigated with the help of the S-PLUS program in the Appendix. Table 1 shows the results obtained for the case $\sigma = 50$. The results in Table 1 tell us the following.

1. Mean-squared error and relative efficiency values are influenced mainly by the combined sample size $N$; the heterogeneity or homogeneity of the individual sample sizes plays very little part in influencing these quantities.


2. Larger values of $\alpha_4$ lead to:
• higher relative efficiency values for a given set of fixed sample sizes $n_1$ and $n_2$;
• less robust estimators, as evidenced by the mean-squared error values (i.e. estimators that vary "wildly" as $\alpha_4 \to 3$);
• slightly higher relative efficiency values as the sample sizes $n_1$ and $n_2$ become more and more heterogeneous.

3. Smaller values of $\alpha_4$ (i.e. flatter distributions, $\alpha_4 \to -3$):
• give fairly more robust estimators than those for distributions with larger values of $\alpha_4$, as evidenced by the mean-squared error values of the three estimators;
• do well for more homogeneous sample sizes ($n_1 \approx n_2$); this is apparent from the fact that the relative efficiency rises as the sample sizes become more homogeneous.

3 Applications of $S^2_{w_*.pooled}$ and $S^2_{\omega_*.ws}$

When $p = 1$, $S^2_{w_*.pooled}$ and $S^2_{\omega_*.ws}$ coincide; the corresponding results are given in the article by Wencheko and Chipoyera (submitted to Statistical Papers).

We illustrate the applications of $S^2_{w_*.pooled}$ and $S^2_{\omega_*.ws}$ as estimators of $\sigma^2$ for the case $p = 2$ when the underlying population distributions are

1. firstly, $N(\mu_A, \sigma^2)$ and $N(\mu_B, \sigma^2)$, respectively, and
2. both $Expo(\theta)$.

S-PLUS programs for simulating $M$ random samples are given in the Appendix. The first program generates two sets of samples: one set from Population A of size $n_A$ and the other from Population B of size $n_B$, so that $N = n_A + n_B$; here we make use of the fact that for a normal probability distribution the kurtosis is known to be zero, i.e. $\alpha_4 = 0$. In the second program, again two sets of samples (of sizes $n_A$ and $n_B$) are generated from the exponential distribution $Expo(\theta)$. The programs calculate

• estimates of the mean-squared errors of $S^2_{pooled}$, $S^2_{w_*.pooled}$ and $S^2_{\omega_*.ws}$;
• the mean absolute deviation from $\sigma^2$ of each of the sample estimates of $\sigma^2$; and
• estimates of the relative efficiency of $S^2_{w_*.pooled}$ and $S^2_{\omega_*.ws}$ over $S^2_{pooled}$.

After setting $\sigma = 50$, the results obtained are given in Table 2 and Figs. 1, 2, 3 and 4. The box-plots clearly show that for small $N$, the incidence of large outlier values is more pronounced for $S^2_{pooled}$ than for $S^2_{w_*.pooled}$ and $S^2_{\omega_*.ws}$. However,

Fig. 1  Box-plot of estimates of $\sigma^2$ from $N(\mu_i, \sigma^2)$ when $n_A = 3$ and $n_B = 7$

Fig. 2  Box-plot of estimates of $\sigma^2$ from $N(\mu_i, \sigma^2)$ when $n_A = 10$ and $n_B = 90$

as $N$ becomes large, the advantage of $S^2_{w_*.pooled}$ and $S^2_{\omega_*.ws}$ over $S^2_{pooled}$ becomes more and more diluted.

4 Concluding remarks

The two estimators developed in this paper are more reliable than the classical pooled variance. In particular, the authors of this paper make the following recommendations.

1. In general, if one is using samples where the combined sample size $N$ is small, it is advisable to use $S^2_{\omega.ws}$ as the estimator for $\sigma^2$.


Fig. 3  Box-plot of estimates of $\sigma^2 = 1/\theta^2$ from $Expo(\theta)$ when $n_A = 3$ and $n_B = 7$

Fig. 4  Box-plot of estimates of $\sigma^2$ from $Expo(\theta)$ when $n_A = 10$ and $n_B = 90$

2. For a fixed combined sample size $N$, as $p \to N$ (so that $N - p \to 0$ and the individual sample sizes $n_j$, $j = 1,\dots,p$, become small), the relative efficiency values of the two estimators increase drastically. The use of $S^2_{\omega.ws}$ is encouraged in such situations.
3. Since the value of the kurtosis $\alpha_4$ plays a pivotal role in influencing the relative efficiency of the estimators, it is highly recommended that, before deciding which estimator of $\sigma^2$ to use, one calculates the coefficients of kurtosis or at least draws histograms. If the coefficients are each greater than zero, the use of $S^2_{\omega_*.ws}$ over the others is highly recommended, while if the coefficients are negative, there is not much harm incurred if one uses $S^2_{pooled}$.
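The Monte Carlo comparison behind these recommendations can be sketched in Python (the editors' reproduction of the idea of the S-PLUS programs in the Appendix, under illustrative settings: $p = 2$, $n_A = 3$, $n_B = 7$, $\sigma = 50$, normal data so $\alpha_4 = 0$; the seed and replication count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(12345)
M, na, nb, sigma = 20000, 3, 7, 50.0
alpha4 = 0.0                      # kurtosis coefficient for normal data
N, p = na + nb, 2

# Shrinkage factor w* of (10)
s = (na - 1) ** 2 / na + (nb - 1) ** 2 / nb
w = 1.0 / (alpha4 / (N - p) ** 2 * s + 2.0 / (N - p) + 1.0)

# M replications of the two samples and of the estimators
xa = rng.normal(0.0, sigma, size=(M, na))
xb = rng.normal(0.0, sigma, size=(M, nb))
s2a = xa.var(axis=1, ddof=1)      # unbiased sample variances, (1)
s2b = xb.var(axis=1, ddof=1)
pooled = ((na - 1) * s2a + (nb - 1) * s2b) / (N - p)   # (2)
shrunk = w * pooled                                    # S^2_{w*.pooled}

mse_pooled = np.mean((pooled - sigma ** 2) ** 2)
mse_shrunk = np.mean((shrunk - sigma ** 2) ** 2)
print(mse_shrunk < mse_pooled)            # True
print(round(mse_pooled / mse_shrunk, 2))  # estimated RE, near the exact 1.25
```

With $M$ this large, the estimated relative efficiency lands close to the theoretical value $1 + 2/(N-p) = 1.25$ from (12).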


Table 2  Summary statistics for $S^2_{pooled}$, $S^2_{w_*.pooled}$ and $S^2_{\omega_*.ws}$

| Distribution / attributes | Statistic | S²pooled | S²w*.pooled | S²ω*.ws |
|---|---|---|---|---|
| N(µi, 50²), nA = 3, nB = 7 | MSE | 1,542,332 | 1,236,080 | 1,236,080 |
|  | Standard deviation | 1243 | 994 | 994 |
|  | Mean absolute deviation | 984 | 932 | 932 |
|  | Relative efficiency | 1.000 | 1.248 | 1.248 |
| N(µi, 50²), nA = 10, nB = 30 | MSE | 330,090 | 309,657 | 309,657 |
|  | Standard deviation | 575 | 546 | 546 |
|  | Mean absolute deviation | 450 | 444 | 444 |
|  | Relative efficiency | 1.000 | 1.066 | 1.066 |
| N(µi, 50²), nA = 10, nB = 90 | MSE | 129,587 | 125,546 | 125,546 |
|  | Standard deviation | 360 | 353 | 353 |
|  | Mean absolute deviation | 288 | 284 | 284 |
|  | Relative efficiency | 1.000 | 1.033 | 1.033 |
| Expo(1/50), nA = 3, nB = 7 | MSE | 6,561,647 | 3,323,860 | 3,274,026 |
|  | Standard deviation | 2558 | 1646 | 1631 |
|  | Mean absolute deviation | 1667 | 1482 | 1478 |
|  | Relative efficiency | 1.000 | 1.974 | 2.004 |
| Expo(1/50), nA = 10, nB = 30 | MSE | 1,383,537 | 1,178,998 | 1,175,579 |
|  | Standard deviation | 1177 | 1043 | 1042 |
|  | Mean absolute deviation | 887 | 866 | 865 |
|  | Relative efficiency | 1.000 | 1.173 | 1.177 |
| Expo(1/50), nA = 10, nB = 90 | MSE | 522,352 | 488,221 | 488,175 |
|  | Standard deviation | 723 | 688 | 688 |
|  | Mean absolute deviation | 569 | 566 | 566 |
|  | Relative efficiency | 1.000 | 1.070 | 1.070 |

Appendix: S-PLUS program

M <- 1000                 # number of simulated sample pairs
na <- 4
nb <- 16
sigma <- 50
theta <- 1/sigma
mua <- 1000
mub <- 1400
vector1 <- c(1, 1)

# Program for normally distributed data
alpha4 <- 0
# Matrix A of (17) for p = 2
a11 <- 1 + alpha4/na + 2/(na - 1)
a21 <- 1
a12 <- 1
a22 <- 1 + alpha4/nb + 2/(nb - 1)
A <- cbind(c(a11, a21), c(a12, a22))
Ainv <- solve(A)
vect <- t(vector1)
# Generate M pairs of normal samples, one column per replication
Nrnuma <- rnorm(na, mean = mua, sd = sigma)
Nrnumb <- rnorm(nb, mean = mub, sd = sigma)
for (j in 2:M) {
  Nraj <- rnorm(na, mean = mua, sd = sigma)
  Nrbj <- rnorm(nb, mean = mub, sd = sigma)
  Nrnuma <- cbind(Nrnuma, Nraj)
  Nrnumb <- cbind(Nrnumb, Nrbj)
}
# Unbiased sample variances per replication
Nva <- (stdev(Nrnuma[, 1]))^2
Nvb <- (stdev(Nrnumb[, 1]))^2
for (j in 2:M) {
  Nva <- cbind(Nva, (stdev(Nrnuma[, j]))^2)
  Nvb <- cbind(Nvb, (stdev(Nrnumb[, j]))^2)
}
NVmatrix <- rbind(Nva, Nvb)
# Optimal shrinkage factor w* of (10)
Nw <- 1/(1 + (alpha4/na)*((na - 1)/(na + nb - 2))^2
           + (alpha4/nb)*((nb - 1)/(na + nb - 2))^2
           + 2*(nb - 1)/((na + nb - 2)^2) + 2*(na - 1)/((na + nb - 2)^2))
NS2pooled <- 1
NS2pws <- 1
for (j in 1:M) {
  NS2pooled[j] <- ((na - 1)*Nva[j] + (nb - 1)*Nvb[j])/(na + nb - 2)
  NS2pws[j] <- vect %*% Ainv %*% NVmatrix[, j]
}
NS2pw <- Nw * NS2pooled
# Estimated MSEs and relative efficiencies
NMSES2pooled <- mean((NS2pooled - sigma^2)^2)
NMSES2pw <- mean((NS2pw - sigma^2)^2)
NMSES2pws <- mean((NS2pws - sigma^2)^2)
NRES2pw <- NMSES2pooled/NMSES2pw
NRES2pws <- NMSES2pooled/NMSES2pws
boxplot(NS2pooled, NS2pw, NS2pws)
# Mean absolute deviations from sigma^2
NmabsS2pooled <- mean(abs(NS2pooled - sigma^2))
NmabsS2pw <- mean(abs(NS2pw - sigma^2))
NmabsS2pws <- mean(abs(NS2pws - sigma^2))
Nmad <- cbind(NmabsS2pooled, NmabsS2pw, NmabsS2pws)

# Program for exponentially distributed data
alpha4 <- 3
# Recompute the matrix A of (17) with the exponential kurtosis value
e11 <- 1 + alpha4/na + 2/(na - 1)
e22 <- 1 + alpha4/nb + 2/(nb - 1)
EA <- cbind(c(e11, 1), c(1, e22))
EAinv <- solve(EA)
Ernuma <- rexp(na, rate = theta)
Ernumb <- rexp(nb, rate = theta)
for (j in 2:M) {
  Eraj <- rexp(na, rate = theta)
  Erbj <- rexp(nb, rate = theta)
  Ernuma <- cbind(Ernuma, Eraj)
  Ernumb <- cbind(Ernumb, Erbj)
}
Eva <- (stdev(Ernuma[, 1]))^2
Evb <- (stdev(Ernumb[, 1]))^2
for (j in 2:M) {
  Eva <- cbind(Eva, (stdev(Ernuma[, j]))^2)
  Evb <- cbind(Evb, (stdev(Ernumb[, j]))^2)
}
EVmatrix <- rbind(Eva, Evb)
Ew <- 1/(1 + (alpha4/na)*((na - 1)/(na + nb - 2))^2
           + (alpha4/nb)*((nb - 1)/(na + nb - 2))^2
           + 2*(nb - 1)/((na + nb - 2)^2) + 2*(na - 1)/((na + nb - 2)^2))
ES2pooled <- 1
ES2pws <- 1
for (j in 1:M) {
  ES2pooled[j] <- ((na - 1)*Eva[j] + (nb - 1)*Evb[j])/(na + nb - 2)
  ES2pws[j] <- vect %*% EAinv %*% EVmatrix[, j]
}
ES2pw <- Ew * ES2pooled
EMSES2pooled <- mean((ES2pooled - sigma^2)^2)
EMSES2pw <- mean((ES2pw - sigma^2)^2)
EMSES2pws <- mean((ES2pws - sigma^2)^2)
ERES2pw <- EMSES2pooled/EMSES2pw
ERES2pws <- EMSES2pooled/EMSES2pws
boxplot(ES2pooled, ES2pw, ES2pws)
EmabsS2pooled <- mean(abs(ES2pooled - sigma^2))
EmabsS2pw <- mean(abs(ES2pw - sigma^2))
EmabsS2pws <- mean(abs(ES2pws - sigma^2))

References

Chipoyera HW, Wencheko E (2006) Towards more efficient estimators of the mean vector and variance-covariance matrix. Stat Pap (in press)
Kleffe J (1985) Some remarks on improving unbiased estimators by multiplication with a constant. In: Calinski T, Klonecki W (eds) Linear statistical inference. Cambridge, Massachusetts, pp 150–161
Searls DT, Intarapanich P (1990) A note on an estimator for the variance that utilizes the kurtosis. Am Stat 44:295–296
Wencheko E, Chipoyera HW. Estimation of the variance when kurtosis is known. Stat Pap (submitted)
Wencheko E, Wijekoon P (2005) Improved estimation of the mean in one-parameter exponential families with known coefficient of variation. Stat Pap 46:101–115
