Journal of Statistical Theory and Applications Volume 1, Number 2, 2002, pp. 143-147

General Estimator for a Finite Population Variance Using Multivariate Auxiliary Information

Walid Abu-Dayyeh, Department of Statistics, Yarmouk University, Irbid, Jordan

M. S. Ahmed, Department of Mathematics and Statistics, Sultan Qaboos University, Muscat, Oman. Email: [email protected]

Abstract

Isaki (1983), Arcos and Rueda (1997), and Ahmed et al. (2000) suggested estimators of a finite population variance that use multivariate auxiliary information. In this paper we propose a general class of such estimators and present its properties.

Keywords: Finite Population, Ratio and Regression Estimators, Auxiliary Information.

1. Introduction

Suppose $U = \{U_1, U_2, \dots, U_N\}$ is a finite population, and let $y_i$ and $x_{ji}$ be the $i$-th measurements of the study variable $y$ and the auxiliary variable $x_j$ respectively, $j = 1, 2, \dots, k$. We wish to estimate the finite population variance $S_0^2 = (N-1)^{-1}\sum_{i \in U}(y_i - \bar{Y})^2$ of the study variable $y$, where $\bar{Y}$ is the finite population mean.

The $k$ auxiliary variables are known for the whole population. The finite population variance of $x_j$ is $S_j^2 = (N-1)^{-1}\sum_{i \in U}(x_{ji} - \bar{X}_j)^2$, where $\bar{X}_j$ is the population mean of $x_j$ ($j = 1, 2, \dots, k$). A simple random sample $s$ of size $n$ is drawn from $U$; the respective sample means and variances of $y$ and $x_j$ are $\bar{y} = n^{-1}\sum_{i \in s} y_i$, $\bar{x}_j = n^{-1}\sum_{i \in s} x_{ji}$, $s_0^2 = (n-1)^{-1}\sum_{i \in s}(y_i - \bar{y})^2$, and $s_j^2 = (n-1)^{-1}\sum_{i \in s}(x_{ji} - \bar{x}_j)^2$.
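As a small numerical illustration of these definitions, the population and sample quantities can be computed as follows. This is only a sketch on synthetic data: the population sizes, variable names, and distributions are hypothetical, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite population (illustrative only, not the paper's data):
# study variable y and k = 2 auxiliary variables x1, x2.
N, n = 500, 50
x = rng.gamma(shape=3.0, scale=2.0, size=(N, 2))
y = 4.0 * x[:, 0] + 2.0 * x[:, 1] + rng.normal(0.0, 2.0, N)

# Finite population variances, with the (N - 1)^{-1} convention of the paper.
S2_0 = y.var(ddof=1)            # S_0^2
S2_x = x.var(axis=0, ddof=1)    # S_j^2, j = 1, ..., k

# SRSWOR: a simple random sample of size n drawn without replacement.
idx = rng.choice(N, size=n, replace=False)
s2_0 = y[idx].var(ddof=1)           # s_0^2
s2_x = x[idx].var(axis=0, ddof=1)   # s_j^2
```

With `ddof=1`, NumPy's `var` uses the same $(m-1)^{-1}$ divisor as the definitions above for both the population and the sample quantities.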

Incorporating the auxiliary information, Arcos and Rueda (1997) proposed the estimator $d_1$ for $S_0^2$,

$d_1 = s_0^2 \prod_{j=1}^{k}\left(\frac{S_j^2}{s_j^2}\right)^{\alpha_j}$, where $\alpha_1, \dots, \alpha_k \in \mathbb{R}$.   (1.1)

* On leave from Jahangirnagar University, Dhaka, Bangladesh.


They also discussed its properties under the multivariate normal distribution. Ahmed et al. (2000) proposed another estimator

$d_2 = s_0^2\left(\omega_0 + \sum_{j=1}^{k}\omega_j \frac{S_j^2}{s_j^2}\right)$, where $\omega_0, \omega_1, \dots, \omega_k \in \mathbb{R}$ and $\sum_{j=0}^{k}\omega_j = 1$.   (1.2)

Now, we propose the following general class of estimators for $S_0^2$:

$\hat{S}_g^2 = g(s_0^2, u) = g(t)$, where $u = (u_1, u_2, \dots, u_k)'$ and $u_j = \frac{s_j^2}{S_j^2}$.   (1.3)

Here $g$ is a function such that $g(S_0^2, e) = S_0^2$, where $e = (1, 1, \dots, 1)'$ is a column vector of order $k$, satisfying the following regularity conditions:

i. For any sample size, $t$ assumes values in a bounded, closed, convex subset $H$ of $(k+1)$-dimensional real space containing the point $T = (S_0^2, e)$.

ii. The function $g(t)$ is continuous and bounded in $H$.

iii. The first- and second-order partial derivatives of $g(t)$ exist and are continuous and bounded in $H$.

This is a wide class of estimators, and a large number of estimators, including (1.1) and (1.2), belong to it. Some further members of class (1.3) are

$t_1 = s_0^2 + B'(u - e)$   (1.4)

$t_2 = s_0^2 \prod_{i=1}^{k} u_i^{a_i}$   (1.5)

$t_3 = s_0^2\left(2 - \prod_{i=1}^{k} u_i^{a_i}\right)$   (1.6)

where $B$ is a vector of constants and the $a_i$ are real constants. It is noted that the estimator $d_1$ is the same as $t_2$ (with $a_j = -\alpha_j$).
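The estimators above can be sketched numerically as follows. The data are hypothetical, and the constants `alpha`, `omega`, `a`, and `B` are arbitrary illustrative choices, not the optimum values derived later in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population with k = 2 auxiliary variables (illustrative only).
N, n, k = 400, 40, 2
x = rng.gamma(3.0, 2.0, size=(N, k))
y = 3.0 * x.sum(axis=1) + rng.normal(0.0, 1.5, N)

S2_x = x.var(axis=0, ddof=1)                 # S_j^2
idx = rng.choice(N, size=n, replace=False)
s2_0 = y[idx].var(ddof=1)                    # s_0^2
s2_x = x[idx].var(axis=0, ddof=1)            # s_j^2
u = s2_x / S2_x                              # u_j = s_j^2 / S_j^2

# Arbitrary (non-optimal) constants, for illustration only.
alpha = np.array([0.5, 0.5])                 # alpha_j in (1.1)
omega = np.array([0.2, 0.4, 0.4])            # omega_0..omega_k in (1.2); sums to 1
a = np.array([-0.5, -0.5])                   # a_i in (1.5) and (1.6)
B = np.array([-0.3, -0.3]) * s2_0            # coefficient vector in (1.4)

d1 = s2_0 * np.prod((S2_x / s2_x) ** alpha)               # (1.1)
d2 = s2_0 * (omega[0] + np.sum(omega[1:] * S2_x / s2_x))  # (1.2)
t1 = s2_0 + B @ (u - 1)                                   # (1.4), regression type
t2 = s2_0 * np.prod(u ** a)                               # (1.5)
t3 = s2_0 * (2.0 - np.prod(u ** a))                       # (1.6)

# d1 coincides with t2 when a_j = -alpha_j:
assert np.isclose(d1, s2_0 * np.prod(u ** (-alpha)))
```

Note that $t_2 + t_3 = 2 s_0^2$ by construction, and that with `a = -alpha` the product estimator `t2` reproduces `d1` exactly.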

2. Biases and Mean Square Errors Under SRSWOR

Define $R_j = \frac{S_0^2}{S_j^2}$, $\mu_{rs}(z,t) = N^{-1}\sum_{i \in U}(z_i - \bar{Z})^r (t_i - \bar{T})^s$, and $\lambda_{zt} = \frac{\mu_{22}(z,t)}{\mu_{20}(z,t)\,\mu_{02}(z,t)}$, for $z, t = y, x_1, x_2, \dots, x_k$.

Then, writing $\theta = n^{-1} - N^{-1}$, under SRSWOR

$V(s_j^2) = V_{jj} = \theta S_j^4(\lambda_{jj} - 1)$ and $\mathrm{Cov}(s_j^2, s_{j'}^2) = V_{jj'} = \theta S_j^2 S_{j'}^2(\lambda_{jj'} - 1)$,

$C_{jj'} = \theta(\lambda_{jj'} - 1)$, $C_j = \sqrt{C_{jj}}$, $\rho_{jj'} = \frac{C_{jj'}}{C_j C_{j'}}$, and $b_j = \frac{V_{0j}}{V_{jj}}$, $b = (b_1, b_2, \dots, b_k)'$.

For the SRSWOR sampling design, let $A = (a_{jj'})$ be the $k \times k$ matrix with $a_{jj'} = C_{jj'}$, and let $G = (g_{01}, g_{02}, \dots, g_{0k})'$ with $g_{0j} = C_{0j}$ for $j = 1, 2, \dots, k$.

Theorem 2.1: For large $n$, the bias and mean square error of $\hat{S}_g^2$ are given by

$B\left(\hat{S}_g^2\right) = S_0^2\left[h'G + \tfrac{1}{2}E\left(\varepsilon' h^{(2)}\varepsilon\right)\right]$

$\mathrm{MSE}\left(\hat{S}_g^2\right) = S_0^4\left(C_0^2 + 2h'G + h'Ah\right)$

where $\varepsilon$, $h = h^{(1)}(e)$ and $h^{(2)} = h^{(2)}(e)$ are defined in the proof.

Proof: Let $\varepsilon_0 = \frac{s_0^2}{S_0^2} - 1$, $\varepsilon_j = u_j - 1$ and $\varepsilon = (\varepsilon_1, \varepsilon_2, \dots, \varepsilon_k)'$. Then $E(\varepsilon_j) = 0$, $E(\varepsilon_j^2) = C_{jj}$, $E(\varepsilon_0 \varepsilon_j) = C_{0j}$, $E(\varepsilon\varepsilon') = A$, and $E(\varepsilon_0\varepsilon) = G$.

Expanding $g(t)$ about the point $t = T$ in a second-order Taylor series, we have

$\hat{S}_g^2 = g(T) + g_0(T)(s_0^2 - S_0^2) + (u - e)' g_1(T) + \tfrac{1}{2}\left\{(s_0^2 - S_0^2)^2 g_{00}(t^*) + (u - e)' g_{11}(t^*)(u - e) + 2(u - e)' g_{01}(t^*)(s_0^2 - S_0^2)\right\}$

$\qquad = S_0^2(1 + \varepsilon_0)\left\{1 + \varepsilon' h^{(1)}(e) + \tfrac{1}{2}\varepsilon' h^{(2)}(e)\varepsilon + \tfrac{1}{6}\sum_{i,j,r}\varepsilon_i\varepsilon_j\varepsilon_r h_{ijr}(u^*)\right\}$   (2.1)

where single and double subscripts on $g$ and $h$ denote first- and second-order partial derivatives, $u_j^* = 1 + \psi_j(u_j - 1)$ with $0 < \psi_j < 1$, and $u^* = (u_1^*, u_2^*, \dots, u_k^*)'$.

Since $g(T) = S_0^2$, $g_0(T) = 1$, $s_0^2 - S_0^2 = S_0^2\varepsilon_0$, and $u - e = \varepsilon$, up to second order of approximation (2.1) gives

$\hat{S}_g^2 - S_0^2 = S_0^2\left(\varepsilon_0 + \varepsilon' h^{(1)}(e) + \varepsilon_0\,\varepsilon' h^{(1)}(e) + \tfrac{1}{2}\varepsilon' h^{(2)}(e)\varepsilon\right)$   (2.2)

where we write $h = h^{(1)}(e)$ and $h^{(2)} = h^{(2)}(e)$. Hence the bias is

$B\left(\hat{S}_g^2\right) = E\left(\hat{S}_g^2 - S_0^2\right) = S_0^2\left[h'G + \tfrac{1}{2}E\left(\varepsilon' h^{(2)}\varepsilon\right)\right]$

Again, to the same order, $\left(\hat{S}_g^2 - S_0^2\right)^2 = S_0^4\left(\varepsilon_0^2 + 2\varepsilon_0\varepsilon'h + h'\varepsilon\varepsilon'h\right)$, so the mean square error is

$\mathrm{MSE}\left(\hat{S}_g^2\right) = S_0^4\left(C_0^2 + 2h'G + h'Ah\right)$   (2.3)

Theorem 2.2: For large $n$, the optimum choice of $h$ is

$h_{opt} = -A^{-1}G$   (2.4)

and the minimum mean square error is

$\min \mathrm{MSE}\left(\hat{S}_g^2\right) = S_0^4\left(C_0^2 - G'A^{-1}G\right)$   (2.5)

Proof: The proof is straightforward. It is observed that the optimum choice of $\alpha_j$ in (1.1) is $\alpha_{j,opt} = h_{j,opt} R_j$, and the optimum choice of $\omega_j$ in (1.2) is $\omega_{j,opt} = h_{j,opt}$.

3. Numerical Illustrations and Simulation

We present here the mean square errors and biases of the different estimators using a real population data set (Ahmed et al. (1999)), and we have carried out a simulation study of their performance. Here $y$ is the number of cultivators, $x_1$ is the area of the village, and $x_2$ is the number of households in the village. From the data (Ahmed et al. (1999)) we computed $N = 332$, $\lambda_{00} - 1 = 9.23$, $\lambda_{11} - 1 = 8.49$, $\lambda_{22} - 1 = 7.06$, $\lambda_{01} - 1 = 0.14$, $\lambda_{02} - 1 = 0.24$, $\lambda_{12} - 1 = 5.07$, $S_0^2 = 695135.6$, $S_1^2 = 19465.2$, $S_2^2 = 11881$.

In Table 1 we present the bias and mean square error of the different estimators for this population, and in Table 2 the simulated bias and mean square error based on 30,000 repeated samples.

Table 1: MSE and bias of the different estimators for the population data

Estimator    MSE              Bias
d1           3.73732×10^10    -60.437
d2           3.73732×10^10     86.74
t1           3.73732×10^10      0
t3           3.73732×10^10     33.15

Table 2: Simulated MSE and bias of the different estimators

Estimator    MSE               Bias
d1           4.62266×10^10      91.551
d2           3.458326×10^10     62.014
t1           2.977064×10^10     -3.257
t3           3.786842×10^10     38.15


From Table 1, the mean square errors are the same for all the estimators, but the biases differ: the estimator $t_1$ is unbiased, since it is a regression-type estimator, and $t_3$ has the next smallest bias. The simulation results in Table 2 suggest that $t_1$ attains the minimum MSE together with the smallest absolute bias.
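A simulation of this kind can be sketched as follows. The population and the constants are hypothetical and non-optimised, so the numbers will not match Tables 1-2; the sketch only shows the mechanics of estimating empirical bias and MSE over repeated SRSWOR samples.

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo sketch in the spirit of Table 2 (hypothetical population,
# arbitrary constants; not the paper's village data).
N, n, reps = 400, 40, 2000
x = rng.gamma(3.0, 2.0, size=(N, 2))
y = 3.0 * x.sum(axis=1) + rng.normal(0.0, 2.0, N)

S2_0 = y.var(ddof=1)                 # true finite population variance S_0^2
S2_x = x.var(axis=0, ddof=1)         # S_j^2

a = np.array([-0.5, -0.5])           # exponents for t2, chosen arbitrarily
B = np.array([-0.5, -0.5]) * S2_0    # coefficients for t1, chosen arbitrarily

t1_vals, t2_vals = [], []
for _ in range(reps):
    idx = rng.choice(N, size=n, replace=False)   # SRSWOR sample
    s2_0 = y[idx].var(ddof=1)
    u = x[idx].var(axis=0, ddof=1) / S2_x        # u_j = s_j^2 / S_j^2
    t1_vals.append(s2_0 + B @ (u - 1))           # (1.4)
    t2_vals.append(s2_0 * np.prod(u ** a))       # (1.5)

for name, vals in (("t1", np.array(t1_vals)), ("t2", np.array(t2_vals))):
    bias = vals.mean() - S2_0
    mse = ((vals - S2_0) ** 2).mean()
    print(f"{name}: bias={bias:.1f}  MSE={mse:.3e}")
```

With optimised coefficients (Theorem 2.2) rather than the arbitrary constants used here, the regression-type form would be expected to perform at least as well as the product-type form, consistent with the Table 2 findings.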

REFERENCES

Ahmed, M. S., Raman, M. S. and Hossain, M. I., Some competitive estimators of finite population variance using multivariate auxiliary information. Journal of Information and Management Sciences, Vol. 11, No. 1, pp. 49-54, 1999.

Arcos, C. A. and Rueda, G. M., Variance estimation using auxiliary information: an almost unbiased multivariate ratio estimator. Metrika, Vol. 45, pp. 171-178, 1997.

Isaki, C. T., Variance estimation using auxiliary information. Journal of the American Statistical Association, Vol. 78, pp. 117-123, 1983.

Olkin, I., Multivariate ratio estimation for finite populations. Biometrika, Vol. 45, pp. 154-165, 1958.

Srivastava, S. K., An estimate of the mean of a finite population using several auxiliary variables. Jour. Ind. Stat. Assoc., Vol. 3, pp. 189-194, 1965.

Srivastava, S. K., An estimator using auxiliary information in sample surveys. Cal. Stat. Assoc. Bull., Vol. 16, pp. 121-132, 1967.