DOI: 10.5351/CKSS.2011.18.2.155
Communications of the Korean Statistical Society 2011, Vol. 18, No. 2, 155–164
Ratio-Cum-Product Estimators of Population Mean Using Known Population Parameters of Auxiliary Variates

Rajesh Tailor (a), Rajesh Parmar (a), Jong-Min Kim (1,b), Ritesh Tailor (c)

a School of Studies in Statistics, Vikram University
b Statistics, Division of Science and Mathematics, University of Minnesota-Morris
c Institute of Wood Science and Technology
Abstract
This paper suggests two ratio-cum-product estimators of the finite population mean using the known coefficients of variation and coefficients of kurtosis of the auxiliary characters. The bias and mean squared error of the proposed estimators are derived under a large sample approximation. It is shown that the estimators suggested by Upadhyaya and Singh (1999) are particular cases of the suggested estimators. Almost unbiased ratio-cum-product versions of the suggested estimators have also been obtained using the Jackknife technique given by Quenouille (1956). An empirical study is carried out to demonstrate the performance of the suggested estimators.
Keywords: Ratio-cum-product estimator, population mean, coefficient of variation, coefficient of kurtosis, bias, mean squared error.
1. Introduction

The use of auxiliary information is a well-established practice for increasing the efficiency of estimators. When the population mean of an auxiliary variate is known, many estimators of population parameters of the study variate have been discussed in the literature. When the correlation between the study variate and the auxiliary variate is positive (high), the ratio method of estimation (Cochran, 1940) is used. On the other hand, if the correlation is negative, the product method of estimation (Robson, 1957; Murthy, 1967) is preferred. Although in practice the coefficient of variation (CV) of an auxiliary variate is seldom known exactly, Sisodia and Dwivedi (1981) suggested a modified ratio estimator for the population mean of the study variate that makes use of it. Later, Upadhyaya and Singh (1999) derived further ratio- and product-type estimators using the coefficient of variation and the coefficient of kurtosis of the auxiliary variate. Singh (1967) utilized information on two auxiliary variates x_1 and x_2 and suggested a ratio-cum-product estimator for the population mean. Singh and Tailor (2005) utilized the known correlation coefficient \rho_{x_1 x_2} between the auxiliary variates x_1 and x_2. Singh and Tailor (2005) motivated the authors to suggest ratio-cum-product estimators of the population mean that utilize the coefficients of variation C_{x_1} and C_{x_2} and the coefficients of kurtosis \beta_2(x_1) and \beta_2(x_2) of the auxiliary variates, besides their population means \bar{X}_1 and \bar{X}_2.

Let U = \{U_1, U_2, \ldots, U_N\} be a finite population of N units. Suppose two auxiliary variates x_1 and x_2 are observed on U_i (i = 1, 2, \ldots, N), where x_1 is positively and x_2 is negatively correlated with the study variate y. A simple random sample of size n (n < N) is drawn by simple random sampling without replacement (SRSWOR) from the population U to estimate the population mean \bar{Y} of the study character y, when the population means \bar{X}_1 = \sum_{i=1}^{N} x_{1i}/N and \bar{X}_2 = \sum_{i=1}^{N} x_{2i}/N of x_1 and x_2, respectively, are known.
1 Corresponding author: Associate Professor, Statistics, Division of Science and Mathematics, University of Minnesota-Morris, Morris, MN 56267, USA. E-mail: [email protected]
The usual ratio and product estimators, given by Cochran (1940) and Robson (1957) respectively, for estimating the population mean \bar{Y} are defined as

\bar{y}_R = \bar{y}\left(\frac{\bar{X}_1}{\bar{x}_1}\right),   (1.1)

\bar{y}_P = \bar{y}\left(\frac{\bar{x}_2}{\bar{X}_2}\right).   (1.2)

Utilizing the information on the coefficients of variation (C_{x_1} and C_{x_2}) and the coefficients of kurtosis (\beta_2(x_1) and \beta_2(x_2)), Upadhyaya and Singh (1999) suggested the ratio and product estimators

\hat{\bar{Y}}_1 = \bar{y}\left(\frac{\bar{X}_1 C_{x_1} + \beta_2(x_1)}{\bar{x}_1 C_{x_1} + \beta_2(x_1)}\right),   (1.3)

\hat{\bar{Y}}_2 = \bar{y}\left(\frac{\bar{x}_2 C_{x_2} + \beta_2(x_2)}{\bar{X}_2 C_{x_2} + \beta_2(x_2)}\right),   (1.4)

\hat{\bar{Y}}_3 = \bar{y}\left(\frac{\bar{X}_1 \beta_2(x_1) + C_{x_1}}{\bar{x}_1 \beta_2(x_1) + C_{x_1}}\right),   (1.5)

\hat{\bar{Y}}_4 = \bar{y}\left(\frac{\bar{x}_2 \beta_2(x_2) + C_{x_2}}{\bar{X}_2 \beta_2(x_2) + C_{x_2}}\right).   (1.6)

To estimate \bar{Y}, Singh (1967) suggested the ratio-cum-product estimator

\hat{\bar{Y}}_5 = \bar{y}\left(\frac{\bar{X}_1}{\bar{x}_1}\right)\left(\frac{\bar{x}_2}{\bar{X}_2}\right).   (1.7)
Assuming that the correlation coefficient \rho_{x_1 x_2} between the auxiliary characters x_1 and x_2 is known, Singh and Tailor (2005) suggested the ratio-cum-product estimator of \bar{Y}

\hat{\bar{Y}}_6 = \bar{y}\left(\frac{\bar{X}_1 + \rho_{x_1 x_2}}{\bar{x}_1 + \rho_{x_1 x_2}}\right)\left(\frac{\bar{x}_2 + \rho_{x_1 x_2}}{\bar{X}_2 + \rho_{x_1 x_2}}\right).   (1.8)

To the first degree of approximation, the mean squared errors (MSE) of the estimators \bar{y}_R, \bar{y}_P, \hat{\bar{Y}}_1, \hat{\bar{Y}}_2, \hat{\bar{Y}}_3, \hat{\bar{Y}}_4, \hat{\bar{Y}}_5 and \hat{\bar{Y}}_6 are, respectively,

MSE(\bar{y}_R) = \theta \bar{Y}^2 \left[ C_y^2 + C_{x_1}^2 - 2\rho_{yx_1} C_y C_{x_1} \right],   (1.9)

MSE(\bar{y}_P) = \theta \bar{Y}^2 \left[ C_y^2 + C_{x_2}^2 + 2\rho_{yx_2} C_y C_{x_2} \right],   (1.10)

MSE(\hat{\bar{Y}}_1) = \theta \bar{Y}^2 \left[ C_y^2 + \lambda_1^2 C_{x_1}^2 - 2\rho_{yx_1} \lambda_1 C_y C_{x_1} \right],   (1.11)

MSE(\hat{\bar{Y}}_2) = \theta \bar{Y}^2 \left[ C_y^2 + \lambda_2^2 C_{x_2}^2 + 2\rho_{yx_2} \lambda_2 C_y C_{x_2} \right],   (1.12)

MSE(\hat{\bar{Y}}_3) = \theta \bar{Y}^2 \left[ C_y^2 + \gamma_1^2 C_{x_1}^2 - 2\rho_{yx_1} \gamma_1 C_y C_{x_1} \right],   (1.13)

MSE(\hat{\bar{Y}}_4) = \theta \bar{Y}^2 \left[ C_y^2 + \gamma_2^2 C_{x_2}^2 + 2\rho_{yx_2} \gamma_2 C_y C_{x_2} \right],   (1.14)

MSE(\hat{\bar{Y}}_5) = \theta \bar{Y}^2 \left[ C_y^2 + C_{x_1}^2\left(1 - 2K_{yx_1}\right) + C_{x_2}^2\left\{1 + 2\left(K_{yx_2} - K_{x_1 x_2}\right)\right\} \right]   (1.15)
and

MSE(\hat{\bar{Y}}_6) = \theta \bar{Y}^2 \left[ C_y^2 + \mu_1^* C_{x_1}^2\left(\mu_1^* - 2K_{yx_1}\right) + \mu_2^* C_{x_2}^2\left\{\mu_2^* + 2\left(K_{yx_2} - \mu_1^* K_{x_1 x_2}\right)\right\} \right],   (1.16)

where

\theta = \left(\frac{1}{n} - \frac{1}{N}\right), \quad C_y = \frac{S_y}{\bar{Y}}, \quad C_{x_i} = \frac{S_{x_i}}{\bar{X}_i}, \quad \rho_{yx_i} = \frac{S_{yx_i}}{S_y S_{x_i}},

K_{yx_1} = \rho_{yx_1}\left(\frac{C_y}{C_{x_1}}\right), \quad K_{yx_2} = \rho_{yx_2}\left(\frac{C_y}{C_{x_2}}\right), \quad K_{x_1 x_2} = \rho_{x_1 x_2}\left(\frac{C_{x_1}}{C_{x_2}}\right), \quad K_{x_2 x_1} = \rho_{x_1 x_2}\left(\frac{C_{x_2}}{C_{x_1}}\right),

\lambda_i = \frac{\bar{X}_i C_{x_i}}{\bar{X}_i C_{x_i} + \beta_2(x_i)}, \quad \gamma_i = \frac{\bar{X}_i \beta_2(x_i)}{\bar{X}_i \beta_2(x_i) + C_{x_i}}, \quad \mu_i^* = \frac{\bar{X}_i}{\bar{X}_i + \rho_{x_1 x_2}},

S_y^2 = \frac{\sum_{j=1}^{N}(y_j - \bar{Y})^2}{N - 1}, \quad S_{x_i}^2 = \frac{\sum_{j=1}^{N}(x_{ij} - \bar{X}_i)^2}{N - 1}, \quad S_{yx_i} = \frac{\sum_{j=1}^{N}(y_j - \bar{Y})(x_{ij} - \bar{X}_i)}{N - 1},

with i = 1, 2.
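For readers who prefer a computational view, the following Python sketch (not part of the original paper) evaluates the estimators (1.1)-(1.8) for a single SRSWOR sample; every numerical value below is a hypothetical placeholder, not data from the paper.

```python
# Illustrative sketch: evaluating estimators (1.1)-(1.8) for one sample.
# Known population quantities of the auxiliary variates (hypothetical values).
X1_bar, X2_bar = 4.65, 0.81          # population means of x1 and x2
Cx1, Cx2 = 0.23, 0.75                # coefficients of variation
beta2_x1, beta2_x2 = 1.56, 1.40      # coefficients of kurtosis
rho_x1x2 = 0.41                      # correlation between x1 and x2

# Sample means from one SRSWOR sample (hypothetical)
y_bar, x1_bar, x2_bar = 0.70, 4.50, 0.85

y_R = y_bar * X1_bar / x1_bar                                          # (1.1)
y_P = y_bar * x2_bar / X2_bar                                          # (1.2)
Y1 = y_bar * (X1_bar * Cx1 + beta2_x1) / (x1_bar * Cx1 + beta2_x1)     # (1.3)
Y2 = y_bar * (x2_bar * Cx2 + beta2_x2) / (X2_bar * Cx2 + beta2_x2)     # (1.4)
Y3 = y_bar * (X1_bar * beta2_x1 + Cx1) / (x1_bar * beta2_x1 + Cx1)     # (1.5)
Y4 = y_bar * (x2_bar * beta2_x2 + Cx2) / (X2_bar * beta2_x2 + Cx2)     # (1.6)
Y5 = y_bar * (X1_bar / x1_bar) * (x2_bar / X2_bar)                     # (1.7)
Y6 = (y_bar * (X1_bar + rho_x1x2) / (x1_bar + rho_x1x2)
            * (x2_bar + rho_x1x2) / (X2_bar + rho_x1x2))               # (1.8)

print(y_R, y_P, Y1, Y2, Y3, Y4, Y5, Y6)
```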
2. Proposed Estimators

Assuming that the coefficients of variation (C_{x_1} and C_{x_2}) and the coefficients of kurtosis (\beta_2(x_1) and \beta_2(x_2)) of the auxiliary variates x_1 and x_2 are known, the proposed estimators are

\hat{\bar{Y}}_7 = \bar{y}\left(\frac{\bar{X}_1 C_{x_1} + \beta_2(x_1)}{\bar{x}_1 C_{x_1} + \beta_2(x_1)}\right)\left(\frac{\bar{x}_2 C_{x_2} + \beta_2(x_2)}{\bar{X}_2 C_{x_2} + \beta_2(x_2)}\right),   (2.1)

\hat{\bar{Y}}_8 = \bar{y}\left(\frac{\bar{X}_1 \beta_2(x_1) + C_{x_1}}{\bar{x}_1 \beta_2(x_1) + C_{x_1}}\right)\left(\frac{\bar{x}_2 \beta_2(x_2) + C_{x_2}}{\bar{X}_2 \beta_2(x_2) + C_{x_2}}\right).   (2.2)

To obtain the bias and mean squared error of the proposed estimators, we write \bar{y} = \bar{Y}(1 + e_0), \bar{x}_1 = \bar{X}_1(1 + e_1), \bar{x}_2 = \bar{X}_2(1 + e_2), such that E(e_0) = E(e_1) = E(e_2) = 0 and E(e_0^2) = \theta C_y^2, E(e_1^2) = \theta C_{x_1}^2, E(e_2^2) = \theta C_{x_2}^2, E(e_0 e_1) = \theta \rho_{yx_1} C_y C_{x_1}, E(e_0 e_2) = \theta \rho_{yx_2} C_y C_{x_2} and E(e_1 e_2) = \theta \rho_{x_1 x_2} C_{x_1} C_{x_2}.

Expressing \hat{\bar{Y}}_7 in terms of the e_i's, we get

\hat{\bar{Y}}_7 = \bar{Y}(1 + e_0)\left(\frac{\bar{X}_1 C_{x_1} + \beta_2(x_1)}{\bar{X}_1(1 + e_1) C_{x_1} + \beta_2(x_1)}\right)\left(\frac{\bar{X}_2(1 + e_2) C_{x_2} + \beta_2(x_2)}{\bar{X}_2 C_{x_2} + \beta_2(x_2)}\right)
= \bar{Y}(1 + e_0)\left(\frac{\bar{X}_1 C_{x_1} + \beta_2(x_1)}{\bar{X}_1 C_{x_1} + \beta_2(x_1) + \bar{X}_1 C_{x_1} e_1}\right)\left(\frac{\bar{X}_2 C_{x_2} + \beta_2(x_2) + \bar{X}_2 C_{x_2} e_2}{\bar{X}_2 C_{x_2} + \beta_2(x_2)}\right)
= \bar{Y}(1 + e_0)(1 + \lambda_1 e_1)^{-1}(1 + \lambda_2 e_2)
= \bar{Y}(1 + e_0)\left(1 - \lambda_1 e_1 + \lambda_1^2 e_1^2 - \cdots\right)(1 + \lambda_2 e_2)
= \bar{Y}\left(1 + e_0 - \lambda_1 e_1 + \lambda_2 e_2 + \lambda_1^2 e_1^2 - \lambda_1 \lambda_2 e_1 e_2 - \lambda_1 e_0 e_1 + \lambda_2 e_0 e_2 + \cdots\right),

retaining terms up to the second degree in the e_i. Hence

\hat{\bar{Y}}_7 - \bar{Y} = \bar{Y}\left(e_0 - \lambda_1 e_1 + \lambda_2 e_2 + \lambda_1^2 e_1^2 - \lambda_1 \lambda_2 e_1 e_2 - \lambda_1 e_0 e_1 + \lambda_2 e_0 e_2\right).   (2.3)

Taking expectation on both sides of (2.3),

E(\hat{\bar{Y}}_7 - \bar{Y}) = \bar{Y} E\left(e_0 - \lambda_1 e_1 + \lambda_2 e_2 + \lambda_1^2 e_1^2 - \lambda_1 \lambda_2 e_1 e_2 - \lambda_1 e_0 e_1 + \lambda_2 e_0 e_2\right).
Substituting the values of E(e_0), E(e_1), E(e_2), E(e_1^2), E(e_0 e_1), E(e_0 e_2) and E(e_1 e_2), we get the bias of \hat{\bar{Y}}_7 as

B(\hat{\bar{Y}}_7) = \theta \bar{Y} \left[ \lambda_1 C_{x_1}^2\left(\lambda_1 - K_{yx_1}\right) + \lambda_2 C_{x_2}^2\left(K_{yx_2} - \lambda_1 K_{x_1 x_2}\right) \right].   (2.4)

To find the mean squared error of the suggested estimator \hat{\bar{Y}}_7 up to the first degree of approximation, we square (2.3), retain terms up to order n^{-1} and take expectation:

E(\hat{\bar{Y}}_7 - \bar{Y})^2 = \bar{Y}^2 E\left(e_0 - \lambda_1 e_1 + \lambda_2 e_2\right)^2,

MSE(\hat{\bar{Y}}_7) = \bar{Y}^2 E\left(e_0^2 + \lambda_1^2 e_1^2 + \lambda_2^2 e_2^2 - 2\lambda_1 e_0 e_1 + 2\lambda_2 e_0 e_2 - 2\lambda_1 \lambda_2 e_1 e_2\right).

After substituting the values of E(e_0^2), E(e_1^2), E(e_2^2), E(e_0 e_1), E(e_0 e_2) and E(e_1 e_2), the mean squared error of \hat{\bar{Y}}_7 is

MSE(\hat{\bar{Y}}_7) = \theta \bar{Y}^2 \left[ C_y^2 + \lambda_1 C_{x_1}^2\left(\lambda_1 - 2K_{yx_1}\right) + \lambda_2 C_{x_2}^2\left\{\lambda_2 + 2\left(K_{yx_2} - \lambda_1 K_{x_1 x_2}\right)\right\} \right].   (2.5)

Similarly, the bias and mean squared error of \hat{\bar{Y}}_8 are obtained as

B(\hat{\bar{Y}}_8) = \theta \bar{Y} \left[ \gamma_1 C_{x_1}^2\left(\gamma_1 - K_{yx_1}\right) + \gamma_2 C_{x_2}^2\left(K_{yx_2} - \gamma_1 K_{x_1 x_2}\right) \right],   (2.6)

MSE(\hat{\bar{Y}}_8) = \theta \bar{Y}^2 \left[ C_y^2 + \gamma_1 C_{x_1}^2\left(\gamma_1 - 2K_{yx_1}\right) + \gamma_2 C_{x_2}^2\left\{\gamma_2 + 2\left(K_{yx_2} - \gamma_1 K_{x_1 x_2}\right)\right\} \right].   (2.7)
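As an illustration of how the first-order expressions (2.4)-(2.7) can be evaluated, the following Python sketch computes them from assumed population quantities; all numerical values are hypothetical placeholders, not results from the paper.

```python
# Illustrative sketch: first-order bias and MSE of the proposed estimators,
# equations (2.4)-(2.7).  Parameter values are hypothetical placeholders.
N, n = 30, 6
Y_bar, X1_bar, X2_bar = 0.69, 4.65, 0.81
Cy, Cx1, Cx2 = 0.48, 0.23, 0.75
rho_yx1, rho_yx2, rho_x1x2 = 0.18, -0.50, 0.41
beta2_x1, beta2_x2 = 1.56, 1.40

theta = 1.0 / n - 1.0 / N
lam1 = X1_bar * Cx1 / (X1_bar * Cx1 + beta2_x1)
lam2 = X2_bar * Cx2 / (X2_bar * Cx2 + beta2_x2)
gam1 = X1_bar * beta2_x1 / (X1_bar * beta2_x1 + Cx1)
gam2 = X2_bar * beta2_x2 / (X2_bar * beta2_x2 + Cx2)
K_yx1 = rho_yx1 * Cy / Cx1
K_yx2 = rho_yx2 * Cy / Cx2
K_x1x2 = rho_x1x2 * Cx1 / Cx2

bias_Y7 = theta * Y_bar * (lam1 * Cx1**2 * (lam1 - K_yx1)
                           + lam2 * Cx2**2 * (K_yx2 - lam1 * K_x1x2))            # (2.4)
mse_Y7 = theta * Y_bar**2 * (Cy**2 + lam1 * Cx1**2 * (lam1 - 2 * K_yx1)
                             + lam2 * Cx2**2 * (lam2 + 2 * (K_yx2 - lam1 * K_x1x2)))  # (2.5)
bias_Y8 = theta * Y_bar * (gam1 * Cx1**2 * (gam1 - K_yx1)
                           + gam2 * Cx2**2 * (K_yx2 - gam1 * K_x1x2))            # (2.6)
mse_Y8 = theta * Y_bar**2 * (Cy**2 + gam1 * Cx1**2 * (gam1 - 2 * K_yx1)
                             + gam2 * Cx2**2 * (gam2 + 2 * (K_yx2 - gam1 * K_x1x2)))  # (2.7)
print(bias_Y7, mse_Y7, bias_Y8, mse_Y8)
```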
3. Efficiency Comparison

The variance of the sample mean \bar{y} under simple random sampling without replacement (SRSWOR) is

V(\bar{y}) = \left(\frac{1}{n} - \frac{1}{N}\right) S_y^2.   (3.1)

From (1.9) to (1.16), (2.5), (2.7) and (3.1) we have:

(i) MSE(\hat{\bar{Y}}_7) < MSE(\bar{y}) if

K_{yx_1} > \frac{\lambda_1}{2}  and  K_{yx_2} < \lambda_1 K_{x_1 x_2} - \frac{\lambda_2}{2}.   (3.2)
(ii) MSE(\hat{\bar{Y}}_7) < MSE(\bar{y}_R) if

K_{yx_1} < \frac{1 + \lambda_1}{2}  and  K_{yx_2} < \lambda_1 K_{x_1 x_2} - \frac{\lambda_2}{2}.   (3.3)

(iii) MSE(\hat{\bar{Y}}_7) < MSE(\bar{y}_P) if

K_{yx_1} > \left(\frac{\lambda_1}{2} - \lambda_2 K_{x_2 x_1}\right)  and  K_{yx_2} > -\left(\frac{1 + \lambda_2}{2}\right).   (3.4)
(iv) MSE(\hat{\bar{Y}}_7) < MSE(\hat{\bar{Y}}_1) if

K_{yx_2} < \lambda_1 K_{x_1 x_2} - \frac{\lambda_2}{2}.   (3.5)

(v) MSE(\hat{\bar{Y}}_7) < MSE(\hat{\bar{Y}}_2) if

K_{yx_1} > -\lambda_2 K_{x_2 x_1} + \frac{\lambda_1}{2}.   (3.6)

(vi) MSE(\hat{\bar{Y}}_7) < MSE(\hat{\bar{Y}}_3) if either

K_{yx_1} > \frac{\gamma_1 + \lambda_1}{2}, if \gamma_1 < \lambda_1, and K_{yx_2} < \lambda_1 K_{x_1 x_2} - \frac{\lambda_2}{2}   (3.7)

or

K_{yx_1} < \frac{\gamma_1 + \lambda_1}{2}, if \gamma_1 > \lambda_1, and K_{yx_2} < \lambda_1 K_{x_1 x_2} - \frac{\lambda_2}{2}.   (3.8)

(vii) MSE(\hat{\bar{Y}}_7) < MSE(\hat{\bar{Y}}_4) if either

K_{yx_2} > -\frac{\gamma_2 + \lambda_2}{2}, if \gamma_2 > \lambda_2, and K_{yx_1} > \left(\frac{\lambda_1}{2} - \lambda_2 K_{x_2 x_1}\right)   (3.9)

or

K_{yx_2} < -\frac{\gamma_2 + \lambda_2}{2}, if \gamma_2 < \lambda_2, and K_{yx_1} > \left(\frac{\lambda_1}{2} - \lambda_2 K_{x_2 x_1}\right).   (3.10)

(viii) MSE(\hat{\bar{Y}}_7) < MSE(\hat{\bar{Y}}_5) if either

K_{yx_2} > -\left(\frac{1 + \lambda_2}{2}\right), if \lambda_2 < 1, and K_{yx_1} < \frac{1 + \lambda_1}{2} - \left\{\frac{K_{x_2 x_1}(\lambda_1 \lambda_2 - 1)}{\lambda_1 - 1}\right\}   (3.11)

or

K_{yx_2} < -\left(\frac{1 + \lambda_2}{2}\right), if \lambda_2 > 1, and K_{yx_1} < \frac{1 + \lambda_1}{2} - \left\{\frac{K_{x_2 x_1}(\lambda_1 \lambda_2 - 1)}{\lambda_1 - 1}\right\}.   (3.12)

(ix) MSE(\hat{\bar{Y}}_7) < MSE(\hat{\bar{Y}}_6) if one of the following conditions is satisfied:

K_{yx_1} < \frac{\mu_1^* + \lambda_1}{2}, if \lambda_1 < \mu_1^*, and K_{yx_2} > \left\{\frac{K_{x_1 x_2}(\lambda_1 \lambda_2 - \mu_1^* \mu_2^*)}{\lambda_2 - \mu_2^*}\right\} - \frac{\lambda_2 + \mu_2^*}{2}, if \lambda_2 < \mu_2^*,   (3.13)

K_{yx_1} < \frac{\mu_1^* + \lambda_1}{2}, if \lambda_1 < \mu_1^*, and K_{yx_2} < \left\{\frac{K_{x_1 x_2}(\lambda_1 \lambda_2 - \mu_1^* \mu_2^*)}{\lambda_2 - \mu_2^*}\right\} - \frac{\lambda_2 + \mu_2^*}{2}, if \lambda_2 > \mu_2^*,   (3.14)

K_{yx_1} > \frac{\mu_1^* + \lambda_1}{2}, if \lambda_1 > \mu_1^*, and K_{yx_2} > \left\{\frac{K_{x_1 x_2}(\lambda_1 \lambda_2 - \mu_1^* \mu_2^*)}{\lambda_2 - \mu_2^*}\right\} - \frac{\lambda_2 + \mu_2^*}{2}, if \lambda_2 < \mu_2^*,   (3.15)

K_{yx_1} > \frac{\mu_1^* + \lambda_1}{2}, if \lambda_1 > \mu_1^*, and K_{yx_2} < \left\{\frac{K_{x_1 x_2}(\lambda_1 \lambda_2 - \mu_1^* \mu_2^*)}{\lambda_2 - \mu_2^*}\right\} - \frac{\lambda_2 + \mu_2^*}{2}, if \lambda_2 > \mu_2^*.   (3.16)
(x) MSE(\hat{\bar{Y}}_8) < MSE(\hat{\bar{Y}}_7) if

\frac{C_{x_1}^2}{C_{x_2}^2} < \frac{\gamma_2\left\{\gamma_2 + 2\left(K_{yx_2} - \gamma_1 K_{x_1 x_2}\right)\right\} - \lambda_2\left\{\lambda_2 + 2\left(K_{yx_2} - \lambda_1 K_{x_1 x_2}\right)\right\}}{\lambda_1 - \gamma_1}.   (3.17)
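A minimal numerical check of condition (i) is sketched below in Python, using hypothetical values of the design constants (the placeholder values of lam1 and lam2 are assumptions, not quantities taken from the paper); it verifies the two inequalities in (3.2) and the implied comparison of MSE(\hat{\bar{Y}}_7) with V(\bar{y}).

```python
# Illustrative check of the sufficient conditions in (3.2); values are hypothetical.
Cy, Cx1, Cx2 = 0.48, 0.23, 0.75
rho_yx1, rho_yx2, rho_x1x2 = 0.18, -0.50, 0.41
lam1, lam2 = 0.41, 0.30                      # lambda_i as defined in Section 1
K_yx1 = rho_yx1 * Cy / Cx1
K_yx2 = rho_yx2 * Cy / Cx2
K_x1x2 = rho_x1x2 * Cx1 / Cx2

# Condition (i): both bracketed terms of MSE(Y7_hat) - V(y_bar) are negative.
term_x1 = lam1 * Cx1**2 * (lam1 - 2 * K_yx1)
term_x2 = lam2 * Cx2**2 * (lam2 + 2 * (K_yx2 - lam1 * K_x1x2))
print(K_yx1 > lam1 / 2, K_yx2 < lam1 * K_x1x2 - lam2 / 2)
print(term_x1 + term_x2 < 0)   # equivalent to MSE(Y7_hat) < V(y_bar)
```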
It is observed that the proposed estimators \hat{\bar{Y}}_j (j = 7, 8) are biased, and bias is a disadvantage in many situations. Keeping this in view, a family of almost unbiased estimators is also proposed using the random group technique envisaged by Quenouille (1956).
4. A Family of Unbiased Estimators of the Population Mean \bar{Y} Using the Random Group Method

Suppose a simple random sample of size n = gm is drawn without replacement and split at random into g sub-samples, each of size m. Then the jackknife-type ratio-cum-product estimator of the population mean \bar{Y}, based on \hat{\bar{Y}}_7, is given as

\hat{\bar{Y}}_{7J} = \frac{1}{g} \sum_{j=1}^{g} \bar{y}'_j \left(\frac{\bar{X}_1 C_{x_1} + \beta_2(x_1)}{\bar{x}'_{1j} C_{x_1} + \beta_2(x_1)}\right)\left(\frac{\bar{x}'_{2j} C_{x_2} + \beta_2(x_2)}{\bar{X}_2 C_{x_2} + \beta_2(x_2)}\right),   (4.1)

where \bar{y}'_j = (n\bar{y} - m\bar{y}_j)/(n - m) and \bar{x}'_{ij} = (n\bar{x}_i - m\bar{x}_{ij})/(n - m), i = 1, 2, are the sample means based on the (n - m) units obtained by omitting the j-th group, and \bar{y}_j and \bar{x}_{ij} (i = 1, 2; j = 1, 2, \ldots, g) are the sample means based on the j-th sub-sample of size m = n/g.

The bias of \hat{\bar{Y}}_{7J}, up to the first degree of approximation, is easily obtained as

B(\hat{\bar{Y}}_{7J}) = \bar{Y}\,\frac{N - n + m}{N(n - m)} \left[ \lambda_1 C_{x_1}^2\left(\lambda_1 - K_{yx_1}\right) + \lambda_2 C_{x_2}^2\left(K_{yx_2} - \lambda_1 K_{x_1 x_2}\right) \right].   (4.2)

From (2.4) and (4.2) we have

\frac{B(\hat{\bar{Y}}_7)}{B(\hat{\bar{Y}}_{7J})} = \frac{(N - n)(n - m)}{n(N - n + m)}  or  B(\hat{\bar{Y}}_7) - \frac{(N - n)(n - m)}{n(N - n + m)} B(\hat{\bar{Y}}_{7J}) = 0,   (4.3)

\Rightarrow \lambda^* B(\hat{\bar{Y}}_7) - \delta^* \lambda^* B(\hat{\bar{Y}}_{7J}) = 0   (4.4)

for any scalar \lambda^*, where

\delta^* = \frac{(N - n)(n - m)}{n(N - n + m)}.

From (4.4), we have

\lambda^* E(\hat{\bar{Y}}_7 - \bar{Y}) - \delta^* \lambda^* E(\hat{\bar{Y}}_{7J} - \bar{Y}) = 0  or  \lambda^* E(\hat{\bar{Y}}_7 - \bar{y}) - \delta^* \lambda^* E(\hat{\bar{Y}}_{7J} - \bar{y}) = 0

or

E\left[ \lambda^* \hat{\bar{Y}}_7 - \lambda^* \delta^* \hat{\bar{Y}}_{7J} - \bar{y}\{\lambda^*(1 - \delta^*) - 1\} \right] = \bar{Y}.   (4.5)

Thus we get a general family of almost unbiased ratio-cum-product estimators of \bar{Y} as

\hat{\bar{Y}}_{7u} = \bar{y}\{1 - \lambda^*(1 - \delta^*)\} + \lambda^* \hat{\bar{Y}}_7 - \lambda^* \delta^* \hat{\bar{Y}}_{7J}.   (4.6)

Remark 1. For \lambda^* = 0, \hat{\bar{Y}}_{7u} yields the usual unbiased estimator \bar{y}, while \lambda^* = (1 - \delta^*)^{-1} gives an almost unbiased estimator of \bar{Y} as

\hat{\bar{Y}}_{7u}^* = \frac{g(N - n + m)}{N}\,\bar{y} \left(\frac{\bar{X}_1 C_{x_1} + \beta_2(x_1)}{\bar{x}_1 C_{x_1} + \beta_2(x_1)}\right)\left(\frac{\bar{x}_2 C_{x_2} + \beta_2(x_2)}{\bar{X}_2 C_{x_2} + \beta_2(x_2)}\right) - \frac{(N - n)(g - 1)}{Ng} \sum_{j=1}^{g} \bar{y}'_j \left(\frac{\bar{X}_1 C_{x_1} + \beta_2(x_1)}{\bar{x}'_{1j} C_{x_1} + \beta_2(x_1)}\right)\left(\frac{\bar{x}'_{2j} C_{x_2} + \beta_2(x_2)}{\bar{X}_2 C_{x_2} + \beta_2(x_2)}\right),   (4.7)
which is the jackknifed version of the proposed estimator \hat{\bar{Y}}_7. Different suitable values of \lambda^* provide many almost unbiased estimators from (4.6).
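The random-group construction in (4.1) is summarized in the following Python sketch, which uses simulated arrays and hypothetical population quantities purely for illustration; the sample values and seeds are assumptions, not data from the paper.

```python
# Illustrative sketch of the random-group jackknife estimator in (4.1).
import numpy as np

rng = np.random.default_rng(0)
g, m = 3, 4                      # g random groups of size m, so n = g*m
n = g * m
y  = rng.normal(0.7, 0.1, n)     # hypothetical sample of the study variate
x1 = rng.normal(4.6, 0.8, n)     # hypothetical samples of the auxiliary variates
x2 = rng.normal(0.8, 0.3, n)

X1_bar, X2_bar = 4.65, 0.81      # known population means (hypothetical)
Cx1, Cx2 = 0.23, 0.75
beta2_x1, beta2_x2 = 1.56, 1.40

groups = rng.permutation(n).reshape(g, m)          # random split into g sub-samples
terms = []
for j in range(g):
    keep = np.setdiff1d(np.arange(n), groups[j])   # drop the j-th group
    y_j, x1_j, x2_j = y[keep].mean(), x1[keep].mean(), x2[keep].mean()
    terms.append(y_j
                 * (X1_bar * Cx1 + beta2_x1) / (x1_j * Cx1 + beta2_x1)
                 * (x2_j * Cx2 + beta2_x2) / (X2_bar * Cx2 + beta2_x2))
Y7_J = np.mean(terms)                              # estimator (4.1)
print(Y7_J)
```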
5. An Optimum Estimator in the Family \hat{\bar{Y}}_{7u}

The family of almost unbiased estimators \hat{\bar{Y}}_{7u} at (4.6) can be expressed as

\hat{\bar{Y}}_{7u} = \bar{y} - \lambda^* \bar{y}_1,   (5.1)

where \bar{y}_1 = [(1 - \delta^*)\bar{y} - \bar{y}_2] and \bar{y}_2 = \hat{\bar{Y}}_7 - \delta^* \hat{\bar{Y}}_{7J}. The variance of \hat{\bar{Y}}_{7u} is given by

V(\hat{\bar{Y}}_{7u}) = V(\bar{y}) + \lambda^{*2} V(\bar{y}_1) - 2\lambda^* \mathrm{Cov}(\bar{y}, \bar{y}_1),   (5.2)

which is minimized for

\lambda^* = \frac{\mathrm{Cov}(\bar{y}, \bar{y}_1)}{V(\bar{y}_1)}.   (5.3)

Substitution of (5.3) in (5.2) yields the minimum variance of \hat{\bar{Y}}_{7u} as

\min V(\hat{\bar{Y}}_{7u}) = V(\bar{y}) - \frac{\{\mathrm{Cov}(\bar{y}, \bar{y}_1)\}^2}{V(\bar{y}_1)} = V(\bar{y})\left(1 - \rho_{01}^2\right),   (5.4)

where \rho_{01} is the correlation coefficient between \bar{y} and \bar{y}_1. From (5.4) it is clear that \min V(\hat{\bar{Y}}_{7u}) < V(\bar{y}).

To obtain the explicit expression of the variance of \hat{\bar{Y}}_{7u}, we write the following results, up to terms of order n^{-1}:

MSE(\hat{\bar{Y}}_{7J}) = \mathrm{Cov}(\hat{\bar{Y}}_7, \hat{\bar{Y}}_{7J}) = MSE(\hat{\bar{Y}}_7)   (5.5)

and

\mathrm{Cov}(\bar{y}, \hat{\bar{Y}}_7) = \mathrm{Cov}(\bar{y}, \hat{\bar{Y}}_{7J}) = \theta \bar{Y}^2 \left[ C_y^2 - \lambda_1 \rho_{yx_1} C_y C_{x_1} + \lambda_2 \rho_{yx_2} C_y C_{x_2} \right],   (5.6)

where MSE(\hat{\bar{Y}}_7) is given by (2.5). Using (2.5), (3.1) and (5.6) in (5.2), the variance of \hat{\bar{Y}}_{7u}, up to terms of order n^{-1}, is given as

V(\hat{\bar{Y}}_{7u}) = \theta \bar{Y}^2 \left[ C_y^2 + \lambda^{*2}(1 - \delta^*)^2\left(\lambda_1^2 C_{x_1}^2 + \lambda_2^2 C_{x_2}^2 - 2\rho_{x_1 x_2} C_{x_1} C_{x_2} \lambda_1 \lambda_2\right) - 2\lambda^*(1 - \delta^*)\left(\lambda_1 \rho_{yx_1} C_y C_{x_1} - \lambda_2 \rho_{yx_2} C_y C_{x_2}\right) \right],   (5.7)

which is minimized for

\lambda^* = \frac{\lambda_1 \rho_{yx_1} C_y C_{x_1} - \lambda_2 \rho_{yx_2} C_y C_{x_2}}{(1 - \delta^*)\left(\lambda_1^2 C_{x_1}^2 + \lambda_2^2 C_{x_2}^2 - 2\lambda_1 \lambda_2 \rho_{x_1 x_2} C_{x_1} C_{x_2}\right)} = \lambda^*_{\mathrm{opt}}.   (5.8)

Substitution of the value of \lambda^*_{\mathrm{opt}} in \hat{\bar{Y}}_{7u} yields the optimum estimator \hat{\bar{Y}}_{7u(\mathrm{opt})} (say). Thus the resulting minimum variance of \hat{\bar{Y}}_{7u} is given by

\min V(\hat{\bar{Y}}_{7u}) = \theta \bar{Y}^2 C_y^2 \left[ 1 - \frac{\left(\lambda_1 \rho_{yx_1} C_{x_1} - \lambda_2 \rho_{yx_2} C_{x_2}\right)^2}{\lambda_1^2 C_{x_1}^2 + \lambda_2^2 C_{x_2}^2 - 2\lambda_1 \lambda_2 \rho_{x_1 x_2} C_{x_1} C_{x_2}} \right] = V(\hat{\bar{Y}}_{7u(\mathrm{opt})}).   (5.9)
The optimum value \lambda^*_{\mathrm{opt}} of \lambda^* can be obtained quite accurately from past data or experience. Adopting a similar procedure with the proposed estimator \hat{\bar{Y}}_8 and a scalar \alpha^*, we can obtain an almost unbiased family of estimators \hat{\bar{Y}}_{8u}. Further, the variance of the proposed almost unbiased family of estimators \hat{\bar{Y}}_{8u}, to the first degree of approximation, is given by

V(\hat{\bar{Y}}_{8u}) = \theta \bar{Y}^2 \left[ C_y^2 + \alpha^{*2}(1 - \delta^*)^2\left(\gamma_1^2 C_{x_1}^2 + \gamma_2^2 C_{x_2}^2 - 2\rho_{x_1 x_2} C_{x_1} C_{x_2} \gamma_1 \gamma_2\right) - 2\alpha^*(1 - \delta^*)\left(\gamma_1 \rho_{yx_1} C_y C_{x_1} - \gamma_2 \rho_{yx_2} C_y C_{x_2}\right) \right],

which is minimized for

\alpha^* = \frac{\gamma_1 \rho_{yx_1} C_y C_{x_1} - \gamma_2 \rho_{yx_2} C_y C_{x_2}}{(1 - \delta^*)\left(\gamma_1^2 C_{x_1}^2 + \gamma_2^2 C_{x_2}^2 - 2\gamma_1 \gamma_2 \rho_{x_1 x_2} C_{x_1} C_{x_2}\right)} = \alpha^*_{\mathrm{opt}},

and the resulting minimum variance of \hat{\bar{Y}}_{8u} is obtained as

\min V(\hat{\bar{Y}}_{8u}) = \theta \bar{Y}^2 C_y^2 \left[ 1 - \frac{\left(\gamma_1 \rho_{yx_1} C_{x_1} - \gamma_2 \rho_{yx_2} C_{x_2}\right)^2}{\gamma_1^2 C_{x_1}^2 + \gamma_2^2 C_{x_2}^2 - 2\gamma_1 \gamma_2 \rho_{x_1 x_2} C_{x_1} C_{x_2}} \right] = V(\hat{\bar{Y}}_{8u(\mathrm{opt})}).
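The following Python sketch (illustrative only, with placeholder parameter values that are assumptions rather than quantities from the paper) evaluates \delta^*, the optimum scalar \lambda^*_{\mathrm{opt}} of (5.8) and the minimum variance (5.9).

```python
# Illustrative sketch: optimum scalar (5.8) and minimum variance (5.9).
N, n, g = 30, 6, 3
m = n // g
Y_bar, Cy, Cx1, Cx2 = 0.69, 0.48, 0.23, 0.75
rho_yx1, rho_yx2, rho_x1x2 = 0.18, -0.50, 0.41
lam1, lam2 = 0.41, 0.30          # lambda_i as defined in Section 1 (placeholders)

theta = 1.0 / n - 1.0 / N
delta_star = (N - n) * (n - m) / (n * (N - n + m))

num = lam1 * rho_yx1 * Cy * Cx1 - lam2 * rho_yx2 * Cy * Cx2
den = lam1**2 * Cx1**2 + lam2**2 * Cx2**2 - 2 * lam1 * lam2 * rho_x1x2 * Cx1 * Cx2
lam_star_opt = num / ((1 - delta_star) * den)                      # (5.8)

min_var = theta * Y_bar**2 * Cy**2 * (
    1 - (lam1 * rho_yx1 * Cx1 - lam2 * rho_yx2 * Cx2)**2 / den)    # (5.9)
print(delta_star, lam_star_opt, min_var)
```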
6. Empirical Study

To observe the relative performance of the different estimators of \bar{Y}, a natural population data set is considered.

• Population [Source: Steel and Torrie (1960, p. 282)]
  y: log of leaf burn in seconds,  x_1: potassium percentage,  x_2: chlorine percentage.

The required population parameters are

N = 30, n = 6, \bar{Y} = 0.6860, \bar{X}_1 = 4.6537, \bar{X}_2 = 0.8077, C_y = 0.4803, C_{x_1} = 0.2295, C_{x_2} = 0.7493, \rho_{yx_1} = 0.1794, \rho_{yx_2} = -0.4996, \rho_{x_1 x_2} = 0.4074, \beta_2(x_1) = 1.56, \beta_2(x_2) = 1.40.
To see the performance of the various estimators in comparison with \bar{y}, we calculate the percent relative efficiency (PRE) of each estimator with respect to \bar{y}, that is, the ratio of the variance of \bar{y} to the mean squared error of the estimator, multiplied by 100. The percent relative efficiencies of the estimators \bar{y}, \bar{y}_R, \bar{y}_P, \hat{\bar{Y}}_1, \hat{\bar{Y}}_2, \hat{\bar{Y}}_3, \hat{\bar{Y}}_4, \hat{\bar{Y}}_5, \hat{\bar{Y}}_6, \hat{\bar{Y}}_7, \hat{\bar{Y}}_{7(\mathrm{opt})}, \hat{\bar{Y}}_8 and \hat{\bar{Y}}_{8(\mathrm{opt})} have been computed and are presented in Table 1. The formulae for the percent relative efficiencies of the different estimators are given below:

PRE(\bar{y}_R, \bar{y}) = \frac{V(\bar{y})}{MSE(\bar{y}_R)} \times 100 = \frac{C_y^2}{C_y^2 + C_{x_1}^2 - 2\rho_{yx_1} C_y C_{x_1}} \times 100,

PRE(\bar{y}_P, \bar{y}) = \frac{V(\bar{y})}{MSE(\bar{y}_P)} \times 100 = \frac{C_y^2}{C_y^2 + C_{x_2}^2 + 2\rho_{yx_2} C_y C_{x_2}} \times 100,

PRE(\hat{\bar{Y}}_1, \bar{y}) = \frac{V(\bar{y})}{MSE(\hat{\bar{Y}}_1)} \times 100 = \frac{C_y^2}{C_y^2 + \lambda_1^2 C_{x_1}^2 - 2\rho_{yx_1} \lambda_1 C_y C_{x_1}} \times 100,
Table 1: Percent relative efficiencies of different estimators of \bar{Y} with respect to \bar{y}

Estimator   \bar{y}    \bar{y}_R   \bar{y}_P   \hat{\bar{Y}}_1   \hat{\bar{Y}}_2   \hat{\bar{Y}}_3   \hat{\bar{Y}}_4
PRE         100.00     94.62       53.33       97.21             58.74             95.39             106.06

Estimator   \hat{\bar{Y}}_5   \hat{\bar{Y}}_6   \hat{\bar{Y}}_7   \hat{\bar{Y}}_{7(opt)}   \hat{\bar{Y}}_8   \hat{\bar{Y}}_{8(opt)}
PRE         75.50             142.17            155.10            169.81                   156.96            165.14
PRE(\hat{\bar{Y}}_2, \bar{y}) = \frac{V(\bar{y})}{MSE(\hat{\bar{Y}}_2)} \times 100 = \frac{C_y^2}{C_y^2 + \lambda_2^2 C_{x_2}^2 + 2\rho_{yx_2} \lambda_2 C_y C_{x_2}} \times 100,

PRE(\hat{\bar{Y}}_3, \bar{y}) = \frac{V(\bar{y})}{MSE(\hat{\bar{Y}}_3)} \times 100 = \frac{C_y^2}{C_y^2 + \gamma_1^2 C_{x_1}^2 - 2\rho_{yx_1} \gamma_1 C_y C_{x_1}} \times 100,

PRE(\hat{\bar{Y}}_4, \bar{y}) = \frac{V(\bar{y})}{MSE(\hat{\bar{Y}}_4)} \times 100 = \frac{C_y^2}{C_y^2 + \gamma_2^2 C_{x_2}^2 + 2\rho_{yx_2} \gamma_2 C_y C_{x_2}} \times 100,

PRE(\hat{\bar{Y}}_5, \bar{y}) = \frac{V(\bar{y})}{MSE(\hat{\bar{Y}}_5)} \times 100 = \frac{C_y^2}{C_y^2 + C_{x_1}^2\left(1 - 2K_{yx_1}\right) + C_{x_2}^2\left\{1 + 2\left(K_{yx_2} - K_{x_1 x_2}\right)\right\}} \times 100,

PRE(\hat{\bar{Y}}_6, \bar{y}) = \frac{V(\bar{y})}{MSE(\hat{\bar{Y}}_6)} \times 100 = \frac{C_y^2}{C_y^2 + \mu_1^* C_{x_1}^2\left(\mu_1^* - 2K_{yx_1}\right) + \mu_2^* C_{x_2}^2\left\{\mu_2^* + 2\left(K_{yx_2} - \mu_1^* K_{x_1 x_2}\right)\right\}} \times 100,

PRE(\hat{\bar{Y}}_7, \bar{y}) = \frac{V(\bar{y})}{MSE(\hat{\bar{Y}}_7)} \times 100 = \frac{C_y^2}{C_y^2 + \lambda_1 C_{x_1}^2\left(\lambda_1 - 2K_{yx_1}\right) + \lambda_2 C_{x_2}^2\left\{\lambda_2 + 2\left(K_{yx_2} - \lambda_1 K_{x_1 x_2}\right)\right\}} \times 100,

PRE(\hat{\bar{Y}}_{7(\mathrm{opt})}, \bar{y}) = \frac{V(\bar{y})}{MSE(\hat{\bar{Y}}_{7(\mathrm{opt})})} \times 100 = \frac{1}{1 - \dfrac{\left(\lambda_1 \rho_{yx_1} C_{x_1} - \lambda_2 \rho_{yx_2} C_{x_2}\right)^2}{\lambda_1^2 C_{x_1}^2 + \lambda_2^2 C_{x_2}^2 - 2\lambda_1 \lambda_2 \rho_{x_1 x_2} C_{x_1} C_{x_2}}} \times 100,

PRE(\hat{\bar{Y}}_8, \bar{y}) = \frac{V(\bar{y})}{MSE(\hat{\bar{Y}}_8)} \times 100 = \frac{C_y^2}{C_y^2 + \gamma_1 C_{x_1}^2\left(\gamma_1 - 2K_{yx_1}\right) + \gamma_2 C_{x_2}^2\left\{\gamma_2 + 2\left(K_{yx_2} - \gamma_1 K_{x_1 x_2}\right)\right\}} \times 100,

PRE(\hat{\bar{Y}}_{8(\mathrm{opt})}, \bar{y}) = \frac{V(\bar{y})}{MSE(\hat{\bar{Y}}_{8(\mathrm{opt})})} \times 100 = \frac{1}{1 - \dfrac{\left(\gamma_1 \rho_{yx_1} C_{x_1} - \gamma_2 \rho_{yx_2} C_{x_2}\right)^2}{\gamma_1^2 C_{x_1}^2 + \gamma_2^2 C_{x_2}^2 - 2\gamma_1 \gamma_2 \rho_{x_1 x_2} C_{x_1} C_{x_2}}} \times 100.
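The percent relative efficiency calculations can be scripted directly from these formulae; the Python sketch below (not part of the original paper) uses the population parameters listed above for two of the estimators. Small differences from Table 1 may remain because of rounding of the reported parameters.

```python
# Illustrative sketch of two of the PRE calculations of Section 6,
# using the population parameters reported for the Steel and Torrie data.
Cy, Cx1, Cx2 = 0.4803, 0.2295, 0.7493
rho_yx1, rho_yx2, rho_x1x2 = 0.1794, -0.4996, 0.4074
X1_bar, X2_bar = 4.6537, 0.8077
beta2_x1, beta2_x2 = 1.56, 1.40

lam1 = X1_bar * Cx1 / (X1_bar * Cx1 + beta2_x1)
lam2 = X2_bar * Cx2 / (X2_bar * Cx2 + beta2_x2)
K_yx1, K_yx2 = rho_yx1 * Cy / Cx1, rho_yx2 * Cy / Cx2
K_x1x2 = rho_x1x2 * Cx1 / Cx2

# MSE factors up to the common multiplier theta * Ybar^2
mse_yR = Cy**2 + Cx1**2 - 2 * rho_yx1 * Cy * Cx1
mse_Y7 = (Cy**2 + lam1 * Cx1**2 * (lam1 - 2 * K_yx1)
          + lam2 * Cx2**2 * (lam2 + 2 * (K_yx2 - lam1 * K_x1x2)))

print("PRE(y_R) =", 100 * Cy**2 / mse_yR)   # ratio estimator
print("PRE(Y7)  =", 100 * Cy**2 / mse_Y7)   # proposed estimator
```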
Table 1 shows that the suggested estimators \hat{\bar{Y}}_7 (or \hat{\bar{Y}}_{7(\mathrm{opt})}) and \hat{\bar{Y}}_8 (or \hat{\bar{Y}}_{8(\mathrm{opt})}), with \lambda^* = \lambda^*_{\mathrm{opt}} and \alpha^* = \alpha^*_{\mathrm{opt}}, are more efficient than the usual unbiased estimator \bar{y}, the ratio estimator \bar{y}_R, the product estimator \bar{y}_P, and the ratio-cum-product estimators suggested by Singh (1967) and Singh and Tailor (2005), with considerable gain in efficiency. Thus, if the coefficients of variation (C_{x_1} and C_{x_2}) and the coefficients of kurtosis (\beta_2(x_1) and \beta_2(x_2)) of the auxiliary variates x_1 and x_2 are known, both estimators are recommended for use in practice.
Acknowledgements

The authors are thankful to the referee for valuable suggestions regarding the improvement of the paper.
References

Cochran, W. G. (1940). The estimation of the yields of the cereal experiments by sampling for the ratio of grain to total produce, The Journal of Agricultural Science, 30, 262–275.

Murthy, M. N. (1967). Sampling Theory and Methods, Statistical Publishing Society, Calcutta.

Quenouille, M. H. (1956). Notes on bias in estimation, Biometrika, 43, 353–360.

Robson, D. S. (1957). Application of multivariate polykays to the theory of unbiased ratio-type estimation, Journal of the American Statistical Association, 52, 511–522.

Singh, H. P. and Tailor, R. (2005). Estimation of finite population mean using known correlation coefficient between auxiliary characters, Statistica, 65, 407–418.

Singh, M. P. (1967). Ratio-cum-product method of estimation, Metrika, 12, 34–42.

Sisodia, B. V. S. and Dwivedi, V. K. (1981). A modified ratio estimator using coefficient of variation of auxiliary variable, Journal of the Indian Society of Agricultural Statistics, 33, 13–18.

Steel, R. G. D. and Torrie, J. H. (1960). Principles and Procedures of Statistics, McGraw-Hill, New York.

Upadhyaya, L. N. and Singh, H. P. (1999). Use of transformed auxiliary variable in estimating the finite population mean, Biometrical Journal, 41, 627–636.

Received June 2010; Accepted January 2011