Columbia International Publishing
Journal of Applied Mathematics and Statistics (2016) Vol. 3 No. 2 pp. 59-79
doi:10.7726/jams.2016.1005
Research Article
Estimation of Common Variance in Normal Distribution Using Linex Loss Function
Binod Kumar Singh1*
Received: 30 July 2015; Published online: 14 May 2016
© The author(s) 2016. Published with open access at www.uscip.us
Abstract
The Linex loss function is employed in the analysis of several central statistical estimation and prediction problems. It rises exponentially on one side of zero and almost linearly on the other side of zero, and it is used for both overestimation and underestimation. Preliminary tests of significance have been used in various fields of investigation, viz. analysis of variance, regression analysis, agriculture, medicine and environmental science. In this paper the author proposes a pooled estimator for the common variance in the normal distribution and studies its properties under the Linex loss function. The author also proposes a preliminary test estimator for the variance in the normal distribution and studies its properties; the results are found to agree with Adke et al. (1987). A preliminary test estimator for the variance using the Linex loss function is also suggested.
Keywords: Mean Square Error (MSE); Linex Loss Function; Preliminary Test Estimator (PTE)
1. Introduction
While estimating in practical situations, asymmetric loss functions are often preferred over the squared error loss function, as the former are more appropriate in many estimation problems. The Linex (linear-exponential) loss function, a type of asymmetric loss function, is employed in the analysis of several central statistical estimation and prediction problems. It rises exponentially on one side of zero and almost linearly on the other side of zero: the exponential term dominates for errors of one sign, the loss behaves nearly linearly for large errors of the opposite sign, and the loss vanishes when there is no estimation error (Δ = 0). In the context of real estate assessment, Varian (1975) proposed an asymmetric loss function called the Linex (linear-exponential) loss function as
*Corresponding e-mail: [email protected], [email protected]
1 University of Petroleum & Energy Studies, Dehradun, India
L(a, Δ) = exp(aΔ) − cΔ − 1,  −∞ < Δ < ∞  (1.1)
Here a, c > 0 and Δ = θ̂ − θ. Also, here θ is the population mean and θ̂ is the estimate of θ.
Zellner (1986) points out that if c = a, then equation (1.1) is minimized at Δ = 0. With this restriction, equation (1.1) reduces to
L(a, Δ) = exp(aΔ) − aΔ − 1,  −∞ < Δ < ∞  (1.2)
Zellner (1986) also points out that for negative values of a the Linex loss retains its linear-exponential character, and for small values of |a| the Linex loss is nearly symmetric and approximately proportional to the squared error loss; for larger values of a it is quite asymmetric. The other form of the Linex loss function is
L(a, Δ) = b(exp(aΔ) − aΔ − 1),  −∞ < Δ < ∞,  a ≠ 0, b > 0  (1.3)
Here a and b are the shape and scale parameters respectively. If a → 0, the Linex loss reduces to the squared error loss.
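The behaviour of (1.3) can be illustrated with a minimal numerical sketch (the function name `linex_loss` is ours, introduced only for illustration):

```python
import math

def linex_loss(delta, a, b=1.0):
    """Linex loss L(a, Delta) = b(exp(a*Delta) - a*Delta - 1), a != 0, b > 0."""
    return b * (math.exp(a * delta) - a * delta - 1.0)

# Asymmetry: for a > 0, errors with Delta > 0 are penalized exponentially,
# errors with Delta < 0 roughly linearly.
over = linex_loss(2.0, a=1.0)    # exp(2) - 3
under = linex_loss(-2.0, a=1.0)  # exp(-2) + 1
print(over, under)

# For small |a| the loss is nearly proportional to squared error a^2*Delta^2/2.
small = linex_loss(0.5, a=0.01)
print(small, (0.01 ** 2 / 2) * 0.5 ** 2)
```

For a = 1 the loss at Δ = 2 is roughly four times the loss at Δ = −2, which is the asymmetry the paper exploits.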
For a > 0 the Linex loss rises exponentially for over-estimation and almost linearly for under-estimation; for a < 0 the behaviour is reversed. The Linex loss function is thus useful in situations where overestimation is more serious than underestimation, or vice-versa. For example, in dam construction an underestimation of the peak water level is usually much more serious than an overestimation. Similarly, in reliability estimation the overestimation of the reliability function or of the average life is more serious than underestimation; for an exponential lifetime with mean θ, the reliability function at time t is R(t) = P[X > t] = exp{−t/θ}, and here the overestimation of the reliability function can have more marked consequences than underestimation. The Linex loss function can also be used in the following situations: (i) the cost of arriving at the airport 10 min early is quite different from that of arriving 10 min late; (ii) the loss of booking a lecture room that is 10 seats too big for your class is different from that of a room that is 10 seats too small. When the restrictions considered are suspected (as in the multicollinearity problem, many researchers have also considered cases where exact or stochastic linear restrictions on the unknown parameters are assumed to hold), one may combine the unrestricted and restricted estimators to obtain new estimators with better performance, which leads to the preliminary test estimator (PTE). The preliminary test approach to estimation was first proposed by Bancroft (1944) and has since been studied by many authors, such as Judge and Bock (1978), Ahmed (1992), Saleh and Kibria (1993), Billah and Saleh (1998), Ahmed and Rahbar (2000), Kim and Saleh (2003), Srivastava (1976) and Kibria (2004). In much theoretical research work, the error terms in linear models are assumed to be normally and independently distributed. The question of how the performance of the PTE changes under non-normally distributed disturbances has received attention in the literature in various contexts. Kibria and Saleh (2004) introduced the preliminary test ridge estimator, and Arashi and Tabatabaey (2008) considered Stein-type estimators in linear models with multivariate Student-t errors. Huntsberger (1955), Goodman (1953), Bhattacharya and Srivastava (1974), Katti (1962), Hirano (1966), Hirano (1984) and Pandey and Srivastava (1985) used preliminary test estimators in different distributions. Sclove, Morris and Radhakrishnan (1972) studied the non-optimality of preliminary-test estimators for the mean of a multivariate normal distribution.
Let x₁, x₂, ..., x_{n₁} be a random sample of size n₁ from a normal distribution having probability density function
f(x; μ₁, σ₁²) = (1/(σ₁√(2π))) exp{−(1/2)((x − μ₁)/σ₁)²},  −∞ < x < ∞, −∞ < μ₁ < ∞, σ₁² > 0  (1.4)
Also, let y₁, y₂, ..., y_{n₂} be another random sample of size n₂ from a normal distribution having probability density function
f(y; μ₂, σ₂²) = (1/(σ₂√(2π))) exp{−(1/2)((y − μ₂)/σ₂)²},  −∞ < y < ∞, −∞ < μ₂ < ∞, σ₂² > 0  (1.5)
The estimates for μ₁, μ₂, σ₁², σ₂² are x̄, ȳ, s₁² and s₂² respectively, where
x̄ = (1/n₁) Σᵢ xᵢ,  ȳ = (1/n₂) Σⱼ yⱼ,
s₁² = (1/(n₁ − 1)) Σᵢ (xᵢ − x̄)²,  s₂² = (1/(n₂ − 1)) Σⱼ (yⱼ − ȳ)².
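The estimates in (1.5) can be computed directly; the data below are purely illustrative:

```python
def mean_and_var(sample):
    """Sample mean and the unbiased variance with divisor n-1, as in (1.5)."""
    n = len(sample)
    m = sum(sample) / n
    return m, sum((x - m) ** 2 for x in sample) / (n - 1)

x = [4.1, 5.0, 3.8, 4.6, 5.2, 4.4]   # illustrative sample of size n1 = 6
y = [6.9, 7.4, 6.5, 7.1]             # illustrative sample of size n2 = 4
xbar, s1_sq = mean_and_var(x)
ybar, s2_sq = mean_and_var(y)
print(xbar, s1_sq, ybar, s2_sq)
```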
Bancroft and Han (1985) published a note on pooled variance, and Pandey and Malik (1994) and Pandey and Srivastava (2001) suggested an estimator for the common variance as Ta′ = a₁s₁² + a₂s₂² + a₃s₁s₂, where σ₁² = σ₂² = σ². Here a₁, a₂ and a₃ are suitable constants such that a₁ + a₂ + a₃ ≠ 1. In section 2, the author considers the estimator Ta′ = a₁s₁² + a₂s₂² + a₃s₁s₂ under the general criterion a₁ + a₂ + a₃ ≠ 1 and studies its properties under the Linex loss function. In section 3, the author proposes a preliminary test estimator for σ₁² in the case of two normal populations and discusses its properties; the results are found to agree with Adke et al. (1987). In section 4, the author proposes a preliminary test estimator for the variance in a normal distribution N(μ, σ₁²) in the general class of estimators using the Linex loss function. The improved estimator for σ₁² under the Linex loss function in the class of estimators Y₅ = c₅s₁² is
Y₅ = (v₁/2a)(1 − exp(−2a/(v₁ + 2))) s₁²,  where v₁ = n₁ − 1.
In the case of two normal distributions N(μ₁, σ²) and N(μ₂, σ²), where σ² is the common variance, the Linex loss function can again be used, and the improved estimator is
σ̂² = ((v₁ + v₂)/2a)(1 − exp(−2a/(v₁ + v₂ + 2))) S²,
where
S² = ((n₁ − 1)s₁² + (n₂ − 1)s₂²)/(n₁ + n₂ − 2) = (v₁s₁² + v₂s₂²)/(v₁ + v₂)  (1.6)
For the squared error loss it will be
σ̂² = ((n₁ − 1)s₁² + (n₂ − 1)s₂²)/(v₁ + v₂ + 2)  (1.7)
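A small sketch of the pooled Linex estimator (1.6), with a check that as a → 0 it tends to the square-error form (1.7) (the function name is ours, for illustration):

```python
import math

def pooled_linex_variance(s1_sq, s2_sq, n1, n2, a):
    """Pooled estimator (1.6) under Linex loss; a is the Linex shape parameter."""
    v1, v2 = n1 - 1, n2 - 1
    S_sq = (v1 * s1_sq + v2 * s2_sq) / (v1 + v2)   # pooled S^2
    factor = ((v1 + v2) / (2 * a)) * (1 - math.exp(-2 * a / (v1 + v2 + 2)))
    return factor * S_sq

# As a -> 0 the factor tends to (v1+v2)/(v1+v2+2), giving (1.7).
est = pooled_linex_variance(2.0, 3.0, 8, 6, 1e-8)
limit = ((8 - 1) * 2.0 + (6 - 1) * 3.0) / (8 + 6)   # (1.7): v1 + v2 + 2 = 14
print(est, limit)
```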
In practical situations, when σ₁² = σ₂² = σ², the pooled estimator Ta′ suggested by Pandey and Malik (1994) can be considered; if the variances are not equal and one is interested in estimating σ₁², then a preliminary test estimator may be proposed. The new estimator Y₇ for σ₁² can be defined as follows:
Y₇ = a₁s₁² + a₂s₂² + a₃s₁s₂  if σ₁² = σ₂² is accepted,
Y₇ = (v₁/2a)(1 − exp(−2a/(v₁ + 2))) s₁²  otherwise.  (1.8)
2. Estimation of Common Variance in Normal Distribution in General Class of Estimator under Linex Loss Function
Pandey and Malik (1994) considered a general class of estimators for the common variance σ² as
Ta′ = a₁s₁² + a₂s₂² + a₃s₁s₂  (2.1)
Ta′ = a₁s₁² + a₂s₂²  if a₃ = 0,
or equivalently Ta′ = s₁²(a₁ + a₂ s₂²/s₁²).
Also, E(Ta′) = (a₁ + a₂ + a₃c₁c₂)σ²  (2.2)
MSE(Ta′) = [a₁²k + a₂²r + a₃² + 1 + 2a₁a₃c₂g₁ + 2a₁a₂ + 2a₂a₃c₁g₂ − 2(a₁ + a₂ + a₃c₁c₂)]σ⁴  (2.3)
where
c₁ = √(2/(n₁ − 1)) Γ(n₁/2)/Γ((n₁ − 1)/2),  c₂ = √(2/(n₂ − 1)) Γ(n₂/2)/Γ((n₂ − 1)/2),
E(sᵢ) = cᵢσ,  E(sᵢ²) = σ²  (2.4)
E(sᵢ³) = (nᵢ/(nᵢ − 1)) cᵢσ³,  E(sᵢ⁴) = ((nᵢ + 1)/(nᵢ − 1)) σ⁴  (2.5)
g₁ = (2/(n₁ − 1))^{3/2} Γ((n₁ + 2)/2)/Γ((n₁ − 1)/2)  (2.6)
g₂ = (2/(n₂ − 1))^{3/2} Γ((n₂ + 2)/2)/Γ((n₂ − 1)/2)  (2.7)
Here k = 1 + m₁, r = 1 + m₂, with m₁ = 2/(n₁ − 1), m₂ = 2/(n₂ − 1)  (2.8)
If a₃ = (1 − a₁ − a₂)/(c₁c₂), then the estimator Ta′ is unbiased.
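The moment constants in (2.4)–(2.5) can be checked by simulation; the sample size, σ and seed below are arbitrary choices for illustration:

```python
import math, random

random.seed(1)
n, sigma = 6, 1.0
# c of (2.4): E(s) = c * sigma
c = math.sqrt(2 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

N = 100_000
s_vals = []
for _ in range(N):
    xs = [random.gauss(0.0, sigma) for _ in range(n)]
    m = sum(xs) / n
    s_vals.append(math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1)))
e_s = sum(s_vals) / N
e_s4 = sum(s ** 4 for s in s_vals) / N
# (2.5): E(s^4) = (n+1)/(n-1) * sigma^4 = 1.4 for n = 6
print(e_s, c, e_s4, (n + 1) / (n - 1))
```

The Monte Carlo averages should be close to the analytic constants, up to simulation error.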
Minimizing MSE(Ta′) w.r.t. a₁, a₂, a₃ gives a₁ = p₁p⁻¹, a₂ = p₂p⁻¹, a₃ = p₃p⁻¹, where
p = kr − kc₁²g₂² − 1 + 2c₁c₂g₁g₂ − rc₂²g₁²
p₁ = (r − c₁²g₂²) − (1 − c₁²c₂g₂) + c₂g₁(c₁g₂ − rc₂c₁)
p₂ = k(1 − c₁²c₂g₂) − (1 − c₁c₂g₁g₂) + c₂g₁(c₁c₂ − c₂g₁)
p₃ = k(rc₁c₂ − c₁g₂) − (c₁c₂ − c₂g₁) + (c₁g₂ − rc₂g₁)
Therefore, the MMSE (Minimum Mean Square Error) estimator is
Ta′ = (p₁s₁² + p₂s₂² + p₃s₁s₂)/p
with
MSE(Ta′) = [1 − (p₁ + p₂ + p₃c₁c₂)/p] σ⁴  (2.9)
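The weights of (2.9) solve the 3×3 normal equations obtained by differentiating (2.3); the matrix formulation below is our restatement of (2.3), solved numerically by Cramer's rule as a sketch:

```python
import math

def mmse_weights(n1, n2):
    """Minimize a'Qa - 2b'a + 1 (the bracket of (2.3)) over (a1, a2, a3)."""
    def c(n):   # E(s)/sigma, from (2.4)
        return math.sqrt(2 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)
    def g(n):   # E(s^3)/sigma^3, from (2.5)
        return (n / (n - 1)) * c(n)
    k, r = (n1 + 1) / (n1 - 1), (n2 + 1) / (n2 - 1)
    c1, c2, g1, g2 = c(n1), c(n2), g(n1), g(n2)
    Q = [[k, 1, c2 * g1], [1, r, c1 * g2], [c2 * g1, c1 * g2, 1]]
    b = [1, 1, c1 * c2]

    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

    p = det3(Q)             # this is the p of (2.9)
    sol = []
    for j in range(3):      # Cramer's rule: p1/p, p2/p, p3/p
        M = [row[:] for row in Q]
        for i in range(3):
            M[i][j] = b[i]
        sol.append(det3(M) / p)
    return sol

a1, a2, a3 = mmse_weights(6, 6)
print(a1, a2, a3, a1 + a2 + a3)   # note a1 + a2 + a3 != 1
```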
The invariant form of the Linex loss function for the estimator Ta′ = a₁s₁² + a₂s₂² + a₃s₁s₂ is
L(a, Δ) = exp{a((a₁s₁² + a₂s₂² + a₃s₁s₂)/σ² − 1)} − a((a₁s₁² + a₂s₂² + a₃s₁s₂)/σ² − 1) − 1
with Δ = Ta′/σ² − 1.
To second order in a, the corresponding risk is
R(a, Δ) = (a²/2)[a₁²k + a₂²r + a₃² + 1 + 2a₁a₂ + 2a₁a₃c₂g₁ + 2a₂a₃c₁g₂ − 2(a₁ + a₂ + a₃c₁c₂)] + O(a³)  (2.10)
Minimizing (2.10) w.r.t. a₁, a₂ and a₃ gives a₁ = p₁′p⁻¹, a₂ = p₂′p⁻¹, a₃ = p₃′p⁻¹, where p, p₁, p₂, p₃ are as in (2.9) and
p₁′ = p₁(1 − a/2)  (2.11)
p₂′ = p₂(1 − a/2)  (2.12)
p₃′ = p₃(1 − a/2)  (2.13)
Putting the values of a₁, a₂, a₃ in (2.10), the risk of Ta′ is calculated. The relative efficiency of Ta′ under MSE and Linex loss is given in Tables 2.1 to 2.3. The tables show that for smaller values of a, n₁ and n₂, and when a₁ + a₂ + a₃ ≠ 1, the estimator performs better under the Linex loss function.
3. Preliminary Test Estimator for σ₁² in Normal Distribution
Bancroft (1944) introduced the preliminary test estimator (PTE). Let x₁, x₂, ..., x_{n₁} be n₁ observations taken from the normal distribution N(μ₁, σ₁²). For the class of estimators Y₄ = c₄s₁², the improved estimator under the mean square error (MSE) criterion is
Y₄ = (n₁ − 1)s₁²/(n₁ + 1)  with  MSE(Y₄) = 2σ₁⁴/(n₁ + 1) < MSE(s₁²) = 2σ₁⁴/(n₁ − 1).
In practical situations it is possible that σ₁² = σ₂², and if H₀: σ₁² = σ₂² is accepted, then the estimator Ta′ will be considered (Pandey and Malik (1994)). The proposed preliminary test estimator for σ₁² (σ₁² > σ₂²) is
σ̂₁² = a₁s₁² + a₂s₂² + a₃s₁s₂  if H₀: σ₁² = σ₂² is accepted,
σ̂₁² = Y₄ = (n₁ − 1)s₁²/(n₁ + 1)  otherwise.  (3.1)
or
σ̂₁² = a₁s₁² + a₂s₂² + a₃s₁s₂  if s₁²/s₂² ≤ λ,
σ̂₁² = (n₁ − 1)s₁²/(n₁ + 1)  if s₁²/s₂² > λ.  (3.2)
Here λ is the critical value of F at ν₁ = n₁ − 1, ν₂ = n₂ − 1 degrees of freedom. Since the distributions of s₁² and s₂² are independent,
f(s₁², s₂²) = [((n₁ − 1)/(2σ₁²))^{(n₁−1)/2} ((n₂ − 1)/(2σ₂²))^{(n₂−1)/2} / (Γ((n₁ − 1)/2) Γ((n₂ − 1)/2))] (s₁²)^{(n₁−3)/2} (s₂²)^{(n₂−3)/2} exp{−(n₁ − 1)s₁²/(2σ₁²) − (n₂ − 1)s₂²/(2σ₂²)}  (3.3)
Put s₁²/s₂² = u and s₂² = v; then
f(u, v) = [((n₁ − 1)/(2σ₁²))^{(n₁−1)/2} ((n₂ − 1)/(2σ₂²))^{(n₂−1)/2} / (Γ((n₁ − 1)/2) Γ((n₂ − 1)/2))] u^{(n₁−3)/2} v^{(n₁+n₂)/2 − 2} exp{−v[(n₁ − 1)u/(2σ₁²) + (n₂ − 1)/(2σ₂²)]}  (3.4)
Thus,
σ̂₁² = a₁uv + a₂v + a₃v√u  if u ≤ λ,
σ̂₁² = kuv  if u > λ,  (3.5)
where S = a₁uv + a₂v + a₃v√u and k = (n₁ − 1)/(n₁ + 1). Hence
MSE(σ̂₁²) = ∫₀^∞ ∫_λ^∞ (kuv − σ₁²)² f(u, v) du dv  (3.6)
  + ∫₀^∞ ∫₀^λ (S − σ₁²)² f(u, v) du dv  (3.7)
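The decision rule in (3.2)/(3.5) can be sketched directly; the critical value λ must be supplied by the user (the value 3.0 below is an assumed, approximate 5% point of F(7, 11), used only for illustration):

```python
import math

def pte_variance(s1_sq, s2_sq, n1, a1, a2, a3, lam):
    """Preliminary test estimator (3.2) for sigma_1^2 (illustrative sketch)."""
    if s1_sq / s2_sq <= lam:             # H0: sigma1^2 = sigma2^2 accepted
        return a1 * s1_sq + a2 * s2_sq + a3 * math.sqrt(s1_sq * s2_sq)
    return (n1 - 1) * s1_sq / (n1 + 1)   # improved estimator Y4

accepted = pte_variance(2.0, 1.5, 8, 0.4, 0.4, 0.1, 3.0)   # ratio 1.33 <= 3
rejected = pte_variance(9.0, 1.5, 8, 0.4, 0.4, 0.1, 3.0)   # ratio 6 > 3
print(accepted, rejected)
```

The weights a₁, a₂, a₃ here are arbitrary illustrative values, not the optimal r₁/r, r₂/r, r₃/r derived below.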
Expanding the squares in (3.6)–(3.7) and integrating first over v, every term reduces to a single integral in u of the form
∫ u^{(n₁−3)/2 + m} [u(n₁ − 1)σ₂² + (n₂ − 1)σ₁²]^{−((n₁+n₂)/2 − 1 + j)} du,
taken over (λ, ∞) for the terms arising from (3.6) and over (0, λ) for the terms arising from (3.7).  (3.8)
Hence, substituting the values of the component integrals A, B, C, D, E, F, G, H, I, J, K, L, M (see annexure), the above expression can be written as
MSE(σ̂₁²) = σ₁⁴[A + B + 4C + a₁²D + a₂²E + a₃²F + G + 2a₁a₂H + 2a₁a₃I + 2a₂a₃J − 2a₁K − 2a₂L − 2a₃M]  (3.9)
Minimizing MSE(σ̂₁²) w.r.t. a₁, a₂, a₃ gives
a₁ = r₁/r,  a₂ = r₂/r,  a₃ = r₃/r,
where
r = DEF − DJ² − H²F + 2HIJ − I²E
r₁ = KEF − KJ² − HLF + HMJ + ILJ − IME
r₂ = DEF − DMJ − KMF + KIJ + IHM + I²L
r₃ = DEM − DLF + H²MF + HIL + KHJ − IKE
The Relative Efficiency (RE) of the estimator σ̂₁² with respect to Y₄ is calculated for σ₁²/σ₂² = 0.6, 0.8, 1.0, 1.2, α = 5%, 10%, n₁ = 8, 13, 21, 25, 31 and n₂ = 5, 8, 12, 16, 20, 25 in Tables 3.1 to 3.8. Since the aim is the relative efficiency of σ̂₁², the larger values of n₁ are considered. The tables show that the preliminary test estimator σ̂₁² performs better than the improved estimator Y₄ if 0.6 ≤ σ₁²/σ₂² ≤ 1.2, the level of significance is small and the sample size is small.
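The comparison reported in Tables 3.1–3.8 can be imitated by simulation; the weights and the critical value below are illustrative choices (λ = 3.79 is an assumed, approximate 5% point of F(7, 7)), not the optimal r₁/r, r₂/r, r₃/r:

```python
import math, random

random.seed(7)

def sample_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

n1, n2, lam = 8, 8, 3.79     # lam: assumed ~5% point of F(7, 7)
a1, a2, a3 = 0.4, 0.4, 0.1   # illustrative weights
N = 20_000
se_pte = se_y4 = 0.0
for _ in range(N):           # true sigma1^2 = sigma2^2 = 1
    s1 = sample_var([random.gauss(0, 1) for _ in range(n1)])
    s2 = sample_var([random.gauss(0, 1) for _ in range(n2)])
    if s1 / s2 <= lam:                       # H0 accepted
        est = a1 * s1 + a2 * s2 + a3 * math.sqrt(s1 * s2)
    else:                                    # H0 rejected: use Y4
        est = (n1 - 1) * s1 / (n1 + 1)
    se_pte += (est - 1.0) ** 2
    se_y4 += ((n1 - 1) * s1 / (n1 + 1) - 1.0) ** 2
mse_pte, mse_y4 = se_pte / N, se_y4 / N
print(mse_pte, mse_y4)   # MSE(Y4) should be near 2/(n1+1) = 0.2222
```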
4. Preliminary Test Estimator for the Variance in a Normal Distribution with N(μ, σ₁²) in the General Class of Estimator using Linex Loss Function
Pandey and Singh (1977a) and Pandey and Singh (1977b) considered the class of estimators Y₅ = c₅s₁² for estimating σ₁² in the normal distribution and defined the invariant form of the Linex loss function as
L(a, Δ) = exp{a(c₅s₁²/σ₁² − 1)} − a(c₅s₁²/σ₁² − 1) − 1  (4.1)
which has risk
R(a, Δ) = e^{−a} E[exp(ac₅s₁²/σ₁²)] − ac₅ + a − 1  (4.2)
The value of c₅ for which the risk is minimum is [Pandey (1997)]
c₅ = (v₁/2a)(1 − exp(−2a/(v₁ + 2))),  where v₁ = n₁ − 1  (4.3)
If |a| → 0, the Linex loss reduces to the squared error loss; thus
c₅ → (n₁ − 1)/(n₁ + 1)  (4.4)
The improve estimator for σ12 is 2a v1 v1 2 2 1 e s1 2a 2 v 2a 4a 2 1 1 1 s1 2a v1 2 2v1 22
Y5
(4.5)
The risk under the invariant form of Linex loss function is 2a v R(Y5 ) 1 1 e v1 2 1 a 2
(4.6)
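Equation (4.6) can be verified numerically: using E[exp(t χ²_ν/ν)] = (1 − 2t/ν)^{−ν/2}, the risk (4.2) evaluated at c₅ of (4.3) equals the closed form (4.6), and nearby multipliers do worse:

```python
import math

def risk(c, a, v1):
    # Risk (4.2) of the estimator c*s1^2 under the invariant Linex loss,
    # with E[exp(t * chi2_v / v)] = (1 - 2t/v)^(-v/2) and E(s1^2/sigma1^2) = 1.
    return math.exp(-a) * (1 - 2 * a * c / v1) ** (-v1 / 2) - a * c + a - 1

a, v1 = 0.5, 9
c_star = (v1 / (2 * a)) * (1 - math.exp(-2 * a / (v1 + 2)))    # (4.3)
r_star = risk(c_star, a, v1)
r_46 = a - ((v1 + 2) / 2) * (1 - math.exp(-2 * a / (v1 + 2)))  # (4.6)
print(r_star, r_46)
```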
In the case of two samples of sizes n₁ and n₂ drawn from two independent normal populations, assuming σ² is the common variance, the pooled estimator for σ² is
S² = ((n₁ − 1)s₁² + (n₂ − 1)s₂²)/(n₁ + n₂ − 2)  (under the mean square error criterion)  (4.7)
Pandey and Singh (1977) considered the improved estimator for the variance in the class Y₆ = C₆S² as
Y₆ = ((n₁ − 1)s₁² + (n₂ − 1)s₂²)/(n₁ + n₂)  with  MSE(Y₆) = 2σ⁴/(n₁ + n₂)  (4.8)
The improved estimator under the Linex loss function is proposed as
Y₆ = ((v₁ + v₂)/2a)(1 − exp(−2a/(v₁ + v₂ + 2))) S².  Here C₆ = ((v₁ + v₂)/2a)(1 − exp(−2a/(v₁ + v₂ + 2)))  (4.9)
Combining equation (4.5) and the estimator Ta′, the new estimator for σ₁² can be defined as
Y₇ = a₁s₁² + a₂s₂² + a₃s₁s₂  if σ₁² = σ₂² is accepted,
Y₇ = (v₁/2a)(1 − exp(−2a/(v₁ + 2))) s₁²  otherwise.  (4.10)
The properties of the above estimator under Linex loss function will be studied in the future.
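A sketch of Y₇ in (4.10); λ and all weights are user-supplied (the values below are illustrative only):

```python
import math

def y7(s1_sq, s2_sq, n1, a, a1, a2, a3, lam):
    """Estimator Y7 of (4.10): pooled form if H0: sigma1^2 = sigma2^2 is
    accepted (F ratio at most lam), else the Linex-improved (4.5)."""
    if s1_sq / s2_sq <= lam:             # H0 accepted: pooled form
        return a1 * s1_sq + a2 * s2_sq + a3 * math.sqrt(s1_sq * s2_sq)
    v1 = n1 - 1                          # H0 rejected: Linex-improved (4.5)
    return (v1 / (2 * a)) * (1 - math.exp(-2 * a / (v1 + 2))) * s1_sq

linex_branch = y7(4.0, 1.0, 8, 0.5, 0.4, 0.4, 0.1, 3.0)    # ratio 4 > 3
pooled_branch = y7(1.2, 1.0, 8, 0.5, 0.4, 0.4, 0.1, 3.0)   # ratio 1.2 <= 3
print(linex_branch, pooled_branch)
```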
5. Conclusion
Pandey and Malik (1994) proposed a pooled estimator for the common variance in the normal distribution. Here the author considers this estimator under the general criterion a₁ + a₂ + a₃ ≠ 1 and studies its properties under the Linex loss function. The relative efficiency of the estimator under MSE and Linex loss is calculated; it is observed that for smaller values of a, n₁ and n₂, and when a₁ + a₂ + a₃ ≠ 1, the estimator performs better under the Linex loss function. The author also proposes a preliminary test estimator for σ₁² in the case of two normal populations and discusses its properties; the results agree with Adke et al. (1987). It is observed that the preliminary test estimator σ̂₁² performs better than the improved estimator if 0.6 ≤ σ₁²/σ₂² ≤ 1.2, the level of significance is small and the sample size is also small. The author further proposes a preliminary test estimator for the variance in a normal distribution N(μ, σ₁²) in the general class of estimators using the Linex loss function, and highlights the significance of the Linex loss function with a few practical examples.

6. Scope for Further Research
In this paper an estimator has been proposed for further study. The author has suggested a preliminary test estimator for the variance in the normal distribution in the general class of estimators using the Linex loss function; the properties of this estimator can be studied under the Linex loss function.
7. References
Adke, S.R., Waikar, V.B. and Schuurman, F.J. (1987). A two stage shrinkage test estimator for the mean of an exponential distribution, Comm. Stat. Theo. Meth., 16, 1821-1834. http://dx.doi.org/10.1080/03610928708829474
Ahmed, S.E. (1992). Shrinkage preliminary test estimation in multivariate normal distributions, J. Stat. Comput. Simul., 43, 177-195. http://dx.doi.org/10.1080/00949659208811437
Ahmed, S.E. and Rahbar, M.H. (2000). Shrinkage and pretest nonparametric estimation of regression parameters for censored data with multiple observations at each level of covariate, Biometrics, 42, 511-525. http://dx.doi.org/10.1002/1521-4036(200008)42:43.0.CO;2-I
Arashi, M. and Tabatabaey, S.M.M. (2008). Stein-type improvement under stochastic constraints: Use of multivariate Student-t model in regression, Statist. Probab. Lett., 78, 2142-2153. http://dx.doi.org/10.1016/j.spl.2008.02.003
Bancroft, T.A. (1944). On biases in estimation due to the use of preliminary test of significance, Ann. Math. Stat., 15, 190-204. http://dx.doi.org/10.1214/aoms/1177731284
Bancroft, T.A. and Han, C.P. (1985). A note on pooling variance, Jour. Amer. Stat. Assoc., 78, 981-983. http://dx.doi.org/10.1080/01621459.1983.10477049
Bhattacharya, S.K. and Srivastava, V.K. (1974). A preliminary test procedure in life testing, Jour. Amer. Stat. Assoc., 69, 726-729. http://dx.doi.org/10.1080/01621459.1974.10480195
Billah, B. and Saleh, A.K.Md.E. (1998). Conflict between pretest estimators induced by three large sample tests under a regression model with Student t-error, Statistician, 47, 593-606. http://dx.doi.org/10.1111/1467-9884.00157
Goodman, L.A. (1953). A simple method for improving some estimators, Ann. Math. Stat., 24, 114-117. http://dx.doi.org/10.1214/aoms/1177729089
Hirano, K. (1966). Estimation procedure based on preliminary test, shrinkage technique and information criterion, Ann. Math. Stat., 29, 21-34. http://dx.doi.org/10.1007/BF02532771
Hirano, K. (1984). A preliminary test procedure for the scale parameter of exponential distribution when the selected parameter is unknown, Ann. Inst. Stat. Math., 36, 1-9. http://dx.doi.org/10.1007/BF02481948
Huntsberger, D.V. (1955). A generalization of a preliminary testing procedure of pooling data, Ann. Math. Stat., 26, 734-743. http://dx.doi.org/10.1214/aoms/1177728431
Judge, G.G. and Bock, M.E. (1978). The statistical implications of pre-test and Stein-rule estimators in econometrics, North-Holland Publishing Company, Amsterdam.
Katti, S.K. (1962). Use of some a priori knowledge in the estimation of means from double samples, Biometrics, 18, 139-147. http://dx.doi.org/10.2307/2527452
Kibria, B.M.G. (2004). Performance of the shrinkage preliminary test ridge regression estimators based on the conflicting of W, LR and LM tests, J. Stat. Comput. Simul., 74, 703-810. http://dx.doi.org/10.1080/0094965031000120181
Kibria, B.M.G. and Saleh, A.K.Md.E. (2004). Preliminary test ridge regression estimators with Student's t-error and conflicting test statistics, Metrika, 59, 105-124. http://dx.doi.org/10.1007/s001840300273
Kim, H.M. and Saleh, A.K.Md.E. (2003). Preliminary test estimates of the parameters of simple linear model with measurement error, Metrika, 57, 223-251.
Pandey, B.N. (1997). Test estimator of the scale parameter of the exponential distribution using Linex loss function, Comm. Stat. Theo. Meth., 26, 2191-2200. http://dx.doi.org/10.1080/03610929708832041
Pandey, B.N. and Srivastava, R. (1985). On shrinkage estimation of the exponential scale parameter, IEEE Trans. Reliab., R-32, 224-226. http://dx.doi.org/10.1109/TR.1985.5222124
Pandey, B.N. and Malik, H.J. (1994). Some improved estimates of common variance of two populations, Comm. Stat. Theo. Meth., 23 (10), 3019-3035. http://dx.doi.org/10.1080/03610929408831430
Pandey, B.N. and Singh, B.P. (1977). On the estimation of population variance in a normal distribution, Jour. Scientific Research, B.H.U., 27, 221-225.
Pandey, B.N. and Singh, J. (1977a). Estimation of variance of normal population using a priori information, Jour. Ind. Stat. Assoc., 15, 141-150.
Pandey, B.N. and Singh, J. (1977b). A note on the estimation of variance in exponential density, Sankhya, 39, 294-298.
Pandey, B.N. and Srivastava, A.K. (2001). Estimation of variance using asymmetric loss function, IAPQR, 26 (2), 109-123.
Saleh, A.K.Md.E. and Kibria, B.M.G. (1993). Performances of some new preliminary test ridge regression estimators and their properties, Comm. Stat. Theo. Meth., 22, 2747-2764. http://dx.doi.org/10.1080/03610929308831183
Sclove, S.L., Morris, C. and Radhakrishnan, R. (1972). Non-optimality of preliminary-test estimators for the mean of a multivariate normal distribution, Ann. Math. Stat., 43 (5), 1481-1490.
Srivastava, S.R. (1976). A preliminary test estimator for the variance of a normal distribution, Jour. Ind. Stat. Assoc., 19, 107-111.
Varian, H.R. (1975). A Bayesian approach to real estate assessment, in Studies in Bayesian Econometrics and Statistics in Honor of L. J. Savage, Eds. S.E. Fienberg and A. Zellner, North-Holland, Amsterdam, 195-208.
Zellner, A. (1986). Bayesian estimation and prediction using asymmetric loss function, Jour. Amer. Stat. Assoc., 81, 446-451. http://dx.doi.org/10.1080/01621459.1986.10478289
8. Annexure
8.1 Tables

Table 2.1 Relative Efficiency of the estimator Ta′ under MSE and Linex loss function when a₁ + a₂ + a₃ ≠ 1 and a = 0.1
n₁\n₂      2      4      6
2        1.07   1.12   1.17
4        1.12   1.17   1.22
6        1.17   1.22   1.28

Table 2.2 Relative Efficiency of the estimator Ta′ under MSE and Linex loss function when a₁ + a₂ + a₃ ≠ 1 and a = 0.2
n₁\n₂      2      4      6
2        1.17   1.29   1.44
4        1.29   1.44   1.62
6        1.44   1.62   1.86

Table 2.3 Relative Efficiency of the estimator Ta′ under MSE and Linex loss function when a₁ + a₂ + a₃ ≠ 1 and a = 0.3
n₁\n₂      2      4      6
2        1.31   1.57   1.99
4        1.57   1.99   2.68
6        1.99   2.69   4.18

Table 3.1 Relative Efficiency of estimator σ̂₁² w.r.t. Y₄ for σ₁²/σ₂² = 0.6 and α = 5%
n₂\n₁      8      13     21     25     31
5        1.419  1.272  1.174  1.146  1.119
8               1.472  1.303  1.257  1.207
12              1.738  1.477  1.406  1.331
16                     1.653  1.535  1.453
20                     1.829  1.705  1.576
25                                   1.730
Table 3.2 Relative Efficiency of estimator σ̂₁² w.r.t. Y₄ for σ₁²/σ₂² = 0.8 and α = 5%
n₂\n₁      8      13     21     25     31
5        1.406  1.264  1.169  1.141  1.119
8               1.447  1.289  1.246  1.201
12              1.679  1.446  1.381  1.311
16                     1.598  1.512  1.421
20                     1.748  1.641  1.527
25                                   1.662
Table 3.3 Relative Efficiency of estimator σ̂₁² w.r.t. Y₄ for σ₁²/σ₂² = 1.00 and α = 5%
n₂\n₁      8      13     21     25     31
5        1.393  1.256  1.165  1.136  1.111
8               1.420  1.273  1.233  1.190
12              1.654  1.406  1.347  1.286
16                     1.528  1.454  1.375
20                     1.642  1.552  1.459
25                                   1.558
Table 3.4 Relative Efficiency of estimator σ̂₁² w.r.t. Y₄ for σ₁²/σ₂² = 1.20 and α = 5%
n₂\n₁      8      13     21     25     31
5        1.381  1.248  1.159  1.136  1.110
8               1.394  1.257  1.219  1.179
12              1.557  1.368  1.315  1.259
16                     1.464  1.399  1.330
20                     1.549  1.473  1.393
25                                   1.464
Table 3.5 Relative Efficiency of estimator σ̂₁² w.r.t. Y₄ for σ₁²/σ₂² = 0.6 and α = 10%
n₂\n₁      8      13     21     25     31
5        1.399  1.260  1.160  1.142  1.115
8               1.447  1.183  1.247  1.201
12              1.639  1.365  1.388  1.283
16                     1.591  1.531  1.404
20                     1.232  1.676  1.436
25                                   1.630
Table 3.6 Relative Efficiency of estimator σ̂₁² w.r.t. Y₄ for σ₁²/σ₂² = 0.8 and α = 10%
n₂\n₁      8      13     21     25     31
5        1.329  1.248  1.112  1.135  1.110
8               1.411  1.236  1.229  1.187
12              1.611  1.385  1.348  1.231
16                     1.495  1.465  1.356
20                     1.632  1.587  1.420
25                                   1.560
Table 3.7 Relative Efficiency of estimator σ̂₁² w.r.t. Y₄ for σ₁²/σ₂² = 1.00 and α = 10%
n₂\n₁      8      13     21     25     31
5        1.363  1.237  1.137  1.129  1.106
8               1.379  1.226  1.211  1.173
12              1.542  1.375  1.309  1.205
16                     1.431  1.398  1.307
20                     1.520  1.479  1.403
25                                   1.495
Table 3.8 Relative Efficiency of estimator σ̂₁² w.r.t. Y₄ for σ₁²/σ₂² = 1.20 and α = 10%
n₂\n₁      8      13     21     25     31
5        1.349  1.227  1.146  1.124  1.01
8               1.354  1.230  1.196  1.161
12              1.495  1.326  1.279  1.229
16                     1.409  1.352  1.272
20                     1.484  1.417  1.346
25                                   1.408
8.2 Formulas
The quantities A, B, C, ..., M appearing in (3.9) are the component integrals obtained on expanding (3.6)–(3.7) and integrating first over v. Writing C₀ for the normalizing constant of (3.4) and
β(u) = u(n₁ − 1)σ₂² + (n₂ − 1)σ₁²,
each of them has the form
C₀ Γ((n₁ + n₂)/2 − 1 + j) ∫ u^{(n₁−3)/2 + m} [β(u)/(2σ₁²σ₂²)]^{−((n₁+n₂)/2 − 1 + j)} du.
Here A, B and C arise from the k²u²v², σ₁⁴ and kuvσ₁² terms of (3.6) and are taken over (λ, ∞); D, E and F arise from the a₁²u²v², a₂²v² and a₃²uv² terms, H, I and J from the product terms a₁a₂uv², a₁a₃u^{3/2}v² and a₂a₃u^{1/2}v², K, L and M from the linear terms a₁uvσ₁², a₂vσ₁² and a₃u^{1/2}vσ₁², and G from the σ₁⁴ term of (3.7), all taken over (0, λ).