J. Stat. Appl. Pro. 3, No. 3, 369-378 (2014)

Journal of Statistics Applications & Probability (An International Journal)
http://dx.doi.org/10.12785/jsap/030308
On The Estimation of R = P(Y > X) for a Class of Lifetime Distributions by Transformation Method

Surinder Kumar* and Mayank Vaish
Department of Applied Statistics, School for Physical Sciences, Babasaheb Bhimrao Ambedkar University, Lucknow-226025, India
* Corresponding author e-mail: [email protected]

Received: 24 Sep. 2014, Revised: 15 Oct. 2014, Accepted: 18 Oct. 2014
Published online: 1 Nov. 2014

Abstract: The problem of estimating R = P(Y > X), vis-a-vis both point and interval estimation, is considered for a class of lifetime distributions. The transformation method plays the central role in obtaining these estimators.

Keywords: Class of lifetime distributions, Stress-Strength reliability, Transformation method, MLE, UMVUE, Confidence interval.

1 Introduction

A considerable amount of work in the literature deals with inferential problems related to R = P(Y > X), which represents the reliability of an item of random strength Y subject to a random stress X. For a brief review, one may refer to Church and Harris (1970)[6], Enis and Geisser (1971)[9], Downton (1973)[8], Tong (1974, 1975)[12,13], Kelly, Kelly and Schucany (1976)[10], Sathe and Shah (1981)[11], Chao (1982)[5], Awad and Gharraf (1986)[1], Chaturvedi and Surinder (1999)[4], and Chaturvedi and Sharma (2007)[3]. The purpose of the present paper is several-fold. We consider a class of lifetime distributions proposed by Chaturvedi and Rani (1997)[2], which covers many lifetime distributions as special cases. In Section 2, the MLE and UMVUE of R are derived. In Section 3, we construct a confidence interval for R. The transformation method plays the major role in deriving the MLE, UMVUE and confidence interval for R.

2 MLE and UMVUE of R = P(Y > X) for a class of lifetime distributions

Chaturvedi and Rani (1997)[2] defined the following class of distributions:

f(x; \theta, a, b, c) = \frac{c\, x^{ac-1}}{\theta^{ab}\, \Gamma(a)} \exp\!\left(-x^{c}/\theta^{b}\right); \qquad x, \theta, a, b, c > 0,    (1)

where \theta is assumed to be unknown and a, b, c are known constants. For different choices of a, b and c, the pdfs of many continuous distributions are obtained as special cases, among them the one-parameter exponential, gamma, generalized gamma, Weibull, half-normal, Rayleigh, Chi and Maxwell failure distributions. Let the random variables X and Y follow the class of lifetime distributions given in (1).
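As an illustration of the transformation that drives all of the results below, the following sketch (Python with NumPy/SciPy; the function and parameter names are ours, not the paper's) draws variates from a member of the class (1) by generating \varepsilon \sim \mathrm{Gamma}(a, \mathrm{scale} = \theta^{b}) and setting X = \varepsilon^{1/c}, then checks two special cases against their standard forms.

```python
import numpy as np
from scipy import stats

def sample_class(n, a, b, c, theta, rng):
    """Draw n variates from the class (1): if eps ~ Gamma(shape=a, scale=theta**b),
    then X = eps**(1/c) has pdf c*x**(a*c-1)*exp(-x**c/theta**b) / (theta**(a*b)*Gamma(a))."""
    eps = rng.gamma(shape=a, scale=theta**b, size=n)
    return eps ** (1.0 / c)

rng = np.random.default_rng(0)

# a = b = 1, c = 2: Rayleigh-type case, X**2 should be exponential with scale theta
x = sample_class(100_000, a=1, b=1, c=2, theta=2.0, rng=rng)
print(stats.kstest(x**2, stats.expon(scale=2.0).cdf).pvalue)        # large p-value expected

# a = 1, b = c = 3: Weibull case with shape c and scale theta
y = sample_class(100_000, a=1, b=3, c=3, theta=1.5, rng=rng)
print(stats.kstest(y, stats.weibull_min(3, scale=1.5).cdf).pvalue)  # large p-value expected
```

This is only a sanity check of the reparameterisation; it is not needed for the derivations that follow.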



Theorem 1: The MLE of R = P(Y > X) is given by

\tilde{R} = \left(\frac{a_1\bar{\eta}}{a_2\bar{\varepsilon} + a_1\bar{\eta}}\right)^{a_1} \frac{1}{B(a_1, a_2)}\; {}_2F_1\!\left(a_1,\, 1-a_1;\; a_1;\; \frac{a_1\bar{\eta}}{a_2\bar{\varepsilon} + a_1\bar{\eta}}\right),    (2)


where \bar{\varepsilon} = \frac{1}{n_1}\sum_{i=1}^{n_1} X_i^{c_1} = \bar{T}_X (say) and \bar{\eta} = \frac{1}{n_2}\sum_{j=1}^{n_2} Y_j^{c_2} = \bar{T}_Y (say).

Proof: Consider the transformation \varepsilon = x^{c} in (1); we get

f(\varepsilon; a, \lambda) = \frac{\varepsilon^{a-1}}{\lambda^{a}\,\Gamma(a)} \exp(-\varepsilon/\lambda); \qquad \varepsilon, a, \lambda > 0,    (3)

which is a gamma distribution with \lambda = \theta^{b}. Now let \varepsilon and \eta be two independent random variables following gamma distributions with parameters (a_1, \lambda_1) and (a_2, \lambda_2) respectively, where \varepsilon = x^{c_1} and \eta = y^{c_2}. Thus, for R = P(\eta > \varepsilon), we have

R = P\!\left(\frac{\eta}{\varepsilon} > 1\right),

or

P\!\left(\frac{\eta/\lambda_2}{\varepsilon/\lambda_1} + 1 > \frac{\lambda_1}{\lambda_2} + 1\right),

or

P\!\left(\frac{\varepsilon/\lambda_1}{\eta/\lambda_2 + \varepsilon/\lambda_1} < \frac{\lambda_2}{\lambda_1 + \lambda_2}\right).

Here the random variable z = \frac{\varepsilon/\lambda_1}{\eta/\lambda_2 + \varepsilon/\lambda_1} has a beta distribution with pdf

f(z; a_1, a_2) = [B(a_1, a_2)]^{-1} z^{a_1-1} (1-z)^{a_2-1},

so that

R = I_{\lambda_2/(\lambda_1+\lambda_2)}(a_1, a_2).    (4)

On using the relationship between the incomplete beta function and the hypergeometric series, we get

R = \left(\frac{\lambda_2}{\lambda_1+\lambda_2}\right)^{a_1} \frac{1}{B(a_1, a_2)}\; {}_2F_1\!\left(a_1,\, 1-a_1;\; a_1;\; \frac{\lambda_2}{\lambda_1+\lambda_2}\right).    (5)

On replacing \lambda_1 and \lambda_2 by their respective MLEs, i.e., \tilde{\lambda}_1 = \bar{\varepsilon}/a_1 and \tilde{\lambda}_2 = \bar{\eta}/a_2, in (5), we get the MLE of R = P(\eta > \varepsilon), where \varepsilon and \eta are independent gamma random variables. The reliability for the random strength Y and random stress X, i.e., R = P(Y > X), and its MLE are given by (6) and (7) respectively:

R = \left(\frac{\theta_2^{b_2}}{\theta_1^{b_1} + \theta_2^{b_2}}\right)^{a_1} \frac{1}{B(a_1, a_2)}\; {}_2F_1\!\left(a_1,\, 1-a_1;\; a_1;\; \frac{\theta_2^{b_2}}{\theta_1^{b_1} + \theta_2^{b_2}}\right),    (6)

\tilde{R} = \left(\frac{a_1\bar{\eta}}{a_2\bar{\varepsilon} + a_1\bar{\eta}}\right)^{a_1} \frac{1}{B(a_1, a_2)}\; {}_2F_1\!\left(a_1,\, 1-a_1;\; a_1;\; \frac{a_1\bar{\eta}}{a_2\bar{\varepsilon} + a_1\bar{\eta}}\right),    (7)

where \bar{\varepsilon} = \frac{1}{n_1}\sum_{i=1}^{n_1} X_i^{c_1} = \bar{T}_X (say) and \bar{\eta} = \frac{1}{n_2}\sum_{j=1}^{n_2} Y_j^{c_2} = \bar{T}_Y (say). Hence the theorem follows.

Corollary 1.
1. For a = b = c = 1,

\tilde{R} = \frac{\bar{T}_Y}{\bar{T}_X + \bar{T}_Y},


where \bar{T}_Y = \frac{1}{n_2}\sum_{j=1}^{n_2} Y_j and \bar{T}_X = \frac{1}{n_1}\sum_{i=1}^{n_1} X_i, which is the MLE of R for the exponential distribution.

2. For b = c = 1 and a a positive integer,

\tilde{R} = \left(\frac{a_1\bar{Y}}{a_2\bar{X} + a_1\bar{Y}}\right)^{a_1} \frac{1}{B(a_1, a_2)}\; {}_2F_1\!\left(a_1,\, 1-a_1;\; a_1;\; \frac{a_1\bar{Y}}{a_2\bar{X} + a_1\bar{Y}}\right),

where \bar{X} and \bar{Y} are the sample means, which is the MLE of R for the gamma distribution.

3. For b = c = \alpha,

\tilde{R} = \left(\frac{a_1\bar{T}_Y}{a_2\bar{T}_X + a_1\bar{T}_Y}\right)^{a_1} \frac{1}{B(a_1, a_2)}\; {}_2F_1\!\left(a_1,\, 1-a_1;\; a_1;\; \frac{a_1\bar{T}_Y}{a_2\bar{T}_X + a_1\bar{T}_Y}\right),

where \bar{T}_Y = \frac{1}{n_2}\sum_{j=1}^{n_2} Y_j^{\alpha} and \bar{T}_X = \frac{1}{n_1}\sum_{i=1}^{n_1} X_i^{\alpha}, which is the MLE of R for the generalized gamma distribution.

4. For a = 1 and b = c,

\tilde{R} = \frac{\bar{T}_Y}{\bar{T}_X + \bar{T}_Y},

where \bar{T}_Y = \frac{1}{n_2}\sum_{j=1}^{n_2} Y_j^{c} and \bar{T}_X = \frac{1}{n_1}\sum_{i=1}^{n_1} X_i^{c}, which is the MLE of R for the Weibull distribution.

5. For a = 1/2, b = c = 2,

\tilde{R} = \left(\frac{\bar{T}_Y}{\bar{T}_X + \bar{T}_Y}\right)^{1/2} \frac{1}{\pi}\; {}_2F_1\!\left(\tfrac{1}{2},\, \tfrac{1}{2};\; \tfrac{1}{2};\; \frac{\bar{T}_Y}{\bar{T}_X + \bar{T}_Y}\right),

where \bar{T}_Y = \frac{1}{n_2}\sum_{j=1}^{n_2} Y_j^{2} and \bar{T}_X = \frac{1}{n_1}\sum_{i=1}^{n_1} X_i^{2}, which is the MLE of R for the half-normal distribution.

6. For a = b = 1 and c = 2,

\tilde{R} = \frac{\bar{T}_Y}{\bar{T}_X + \bar{T}_Y},

where \bar{T}_Y = \frac{1}{n_2}\sum_{j=1}^{n_2} Y_j^{2} and \bar{T}_X = \frac{1}{n_1}\sum_{i=1}^{n_1} X_i^{2}, which is the MLE of R for the Rayleigh distribution.

7. For a = \alpha/2, b = 1 and c = 2,

\tilde{R} = \left(\frac{\bar{T}_Y}{\bar{T}_X + \bar{T}_Y}\right)^{\alpha/2} \frac{1}{B(\alpha/2, \alpha/2)}\; {}_2F_1\!\left(\tfrac{\alpha}{2},\, 1-\tfrac{\alpha}{2};\; \tfrac{\alpha}{2};\; \frac{\bar{T}_Y}{\bar{T}_X + \bar{T}_Y}\right),

where \bar{T}_Y = \frac{1}{n_2}\sum_{j=1}^{n_2} Y_j^{2} and \bar{T}_X = \frac{1}{n_1}\sum_{i=1}^{n_1} X_i^{2}, which is the MLE of R for the Chi distribution.

8. For a = 3/2, b = 1 and c = 2,

\tilde{R} = \left(\frac{\bar{T}_Y}{\bar{T}_X + \bar{T}_Y}\right)^{3/2} \frac{8}{\pi}\; {}_2F_1\!\left(\tfrac{3}{2},\, -\tfrac{1}{2};\; \tfrac{3}{2};\; \frac{\bar{T}_Y}{\bar{T}_X + \bar{T}_Y}\right),

where \bar{T}_Y = \frac{1}{n_2}\sum_{j=1}^{n_2} Y_j^{2} and \bar{T}_X = \frac{1}{n_1}\sum_{i=1}^{n_1} X_i^{2}, which is the MLE of R for Maxwell's failure distribution.
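The MLE of Theorem 1 is straightforward to evaluate numerically. The sketch below (Python with SciPy; the function and variable names are ours) does not evaluate the hypergeometric form (7) directly; instead it plugs the MLEs \tilde{\lambda}_1 = \bar{\varepsilon}/a_1 and \tilde{\lambda}_2 = \bar{\eta}/a_2 into the incomplete-beta representation (4), R = I_{\lambda_2/(\lambda_1+\lambda_2)}(a_1, a_2), of which (7) is a rewriting, using SciPy's regularized incomplete beta function.

```python
import numpy as np
from scipy.special import betainc

def mle_R(x, y, a1, c1, a2, c2):
    """Plug-in MLE of R = P(Y > X) for the class (1).

    x, y   : observed samples of the stress X and the strength Y
    a1, c1 : known constants of the X distribution
    a2, c2 : known constants of the Y distribution
    """
    eps_bar = np.mean(np.asarray(x, dtype=float) ** c1)   # T_X bar
    eta_bar = np.mean(np.asarray(y, dtype=float) ** c2)   # T_Y bar
    lam1 = eps_bar / a1                                   # MLE of lambda_1 = theta_1**b1
    lam2 = eta_bar / a2                                   # MLE of lambda_2 = theta_2**b2
    # R_tilde = I_{lam2/(lam1+lam2)}(a1, a2), the regularized incomplete beta function
    return betainc(a1, a2, lam2 / (lam1 + lam2))

# Exponential case (a = b = c = 1 for both samples, Corollary 1.1):
# the MLE reduces to T_Y_bar / (T_X_bar + T_Y_bar).
rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=50)
y = rng.exponential(scale=2.0, size=60)
print(mle_R(x, y, a1=1, c1=1, a2=1, c2=1))
print(np.mean(y) / (np.mean(x) + np.mean(y)))             # same value
```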


Theorem 2: The UMVUE of R = P(Y > X) is given by

\hat{R} = \frac{1}{B[a_1,(n_1-1)a_1]\, B[a_2,(n_2-1)a_2]} \sum_{i=0}^{a_2-1} \binom{a_2-1}{i} \frac{(-1)^{i}}{(n_2-1)a_2+i} \sum_{j=0}^{(n_1-1)a_1-1} (-1)^{j} \binom{(n_1-1)a_1-1}{j} B\big[(n_2-1)a_2+i+1,\, a_1+j\big] \left(\frac{T_Y}{T_X}\right)^{a_1+j}, \quad \text{if } T_Y < T_X,

\hat{R} = \frac{1}{B[a_1,(n_1-1)a_1]\, B[a_2,(n_2-1)a_2]} \sum_{i=0}^{a_2-1} \binom{a_2-1}{i} \frac{(-1)^{i}}{(n_2-1)a_2+i} \sum_{j=0}^{(n_2-1)a_2+i} (-1)^{j} \binom{(n_2-1)a_2+i}{j} B\big[(n_1-1)a_1,\, a_1+j\big] \left(\frac{T_X}{T_Y}\right)^{j}, \quad \text{if } T_X < T_Y,    (8)

where T_X = \sum_{i=1}^{n_1} X_i^{c_1} and T_Y = \sum_{j=1}^{n_2} Y_j^{c_2}. The sums are finite when the indicated upper limits are non-negative integers; otherwise they run to infinity.

Proof: Consider the transformation \varepsilon = x^{c} in (1); we get

f(\varepsilon; a, \lambda) = \frac{\varepsilon^{a-1}}{\lambda^{a}\,\Gamma(a)} \exp(-\varepsilon/\lambda),

which is a gamma distribution with \lambda = \theta^{b}. Now, to obtain the UMVUE of P(\eta > \varepsilon), we first need the UMVUEs of the densities f(\varepsilon; a_1, \lambda_1) and f(\eta; a_2, \lambda_2), namely \hat{f}(\varepsilon; a_1, \lambda_1) and \hat{f}(\eta; a_2, \lambda_2). The former is given by

\hat{f}(\varepsilon; a_1, \lambda_1) = \frac{1}{B(a_1, (n_1-1)a_1)}\, \frac{\varepsilon^{a_1-1}}{(n_1\bar{\varepsilon})^{a_1}} \left(1 - \frac{\varepsilon}{n_1\bar{\varepsilon}}\right)^{(n_1-1)a_1-1}; \qquad 0 < \varepsilon < n_1\bar{\varepsilon}.    (9)

Replacing \varepsilon by \eta and n_1 by n_2 in (9), we get the UMVUE of f(\eta; a_2, \lambda_2). Now let \varepsilon and \eta be two independent random variables following gamma distributions with parameters (a_1, \lambda_1) and (a_2, \lambda_2) respectively, where \varepsilon = x^{c_1} and \eta = y^{c_2}. Then

\hat{R} = \widehat{P(\eta > \varepsilon)} = \int_0^{\infty} \int_{\varepsilon}^{\infty} \hat{f}(\eta; a_2, \lambda_2)\, \hat{f}(\varepsilon; a_1, \lambda_1)\, d\eta\, d\varepsilon

= \frac{1}{B[a_1,(n_1-1)a_1]\, B[a_2,(n_2-1)a_2]} \int_0^{n_1\bar{\varepsilon}} \int_{\varepsilon}^{n_2\bar{\eta}} \frac{\eta^{a_2-1}}{(n_2\bar{\eta})^{a_2}} \left(1 - \frac{\eta}{n_2\bar{\eta}}\right)^{(n_2-1)a_2-1} \frac{\varepsilon^{a_1-1}}{(n_1\bar{\varepsilon})^{a_1}} \left(1 - \frac{\varepsilon}{n_1\bar{\varepsilon}}\right)^{(n_1-1)a_1-1} d\eta\, d\varepsilon.

Putting 1 - \frac{\eta}{n_2\bar{\eta}} = z in the inner integral,

= \frac{1}{B[a_1,(n_1-1)a_1]\, B[a_2,(n_2-1)a_2]} \int_0^{n_1\bar{\varepsilon}} \int_0^{1-\varepsilon/(n_2\bar{\eta})} z^{(n_2-1)a_2-1} (1-z)^{a_2-1}\, \frac{\varepsilon^{a_1-1}}{(n_1\bar{\varepsilon})^{a_1}} \left(1 - \frac{\varepsilon}{n_1\bar{\varepsilon}}\right)^{(n_1-1)a_1-1} dz\, d\varepsilon

= \frac{1}{B[a_1,(n_1-1)a_1]\, B[a_2,(n_2-1)a_2]} \sum_{i=0}^{a_2-1} \binom{a_2-1}{i} \frac{(-1)^{i}}{(n_2-1)a_2+i} \int_0^{\min(n_1\bar{\varepsilon},\, n_2\bar{\eta})} \left(1 - \frac{\varepsilon}{n_2\bar{\eta}}\right)^{(n_2-1)a_2+i} \frac{\varepsilon^{a_1-1}}{(n_1\bar{\varepsilon})^{a_1}} \left(1 - \frac{\varepsilon}{n_1\bar{\varepsilon}}\right)^{(n_1-1)a_1-1} d\varepsilon.


Now consider the case n_1\bar{\varepsilon} > n_2\bar{\eta}. Putting 1 - \frac{\varepsilon}{n_2\bar{\eta}} = z,

= \frac{1}{B[a_1,(n_1-1)a_1]\, B[a_2,(n_2-1)a_2]} \sum_{i=0}^{a_2-1} \binom{a_2-1}{i} \frac{(-1)^{i}}{(n_2-1)a_2+i} \int_0^{1} z^{(n_2-1)a_2+i}\, \frac{[n_2\bar{\eta}(1-z)]^{a_1-1}}{(n_1\bar{\varepsilon})^{a_1}} \left(1 - \frac{n_2\bar{\eta}(1-z)}{n_1\bar{\varepsilon}}\right)^{(n_1-1)a_1-1} n_2\bar{\eta}\, dz

= \frac{1}{B[a_1,(n_1-1)a_1]\, B[a_2,(n_2-1)a_2]} \sum_{i=0}^{a_2-1} \binom{a_2-1}{i} \frac{(-1)^{i}}{(n_2-1)a_2+i} \sum_{j=0}^{(n_1-1)a_1-1} (-1)^{j} \binom{(n_1-1)a_1-1}{j} \left(\frac{n_2\bar{\eta}}{n_1\bar{\varepsilon}}\right)^{a_1+j} \int_0^{1} (1-z)^{a_1+j-1} z^{(n_2-1)a_2+i}\, dz

= \frac{1}{B[a_1,(n_1-1)a_1]\, B[a_2,(n_2-1)a_2]} \sum_{i=0}^{a_2-1} \binom{a_2-1}{i} \frac{(-1)^{i}}{(n_2-1)a_2+i} \sum_{j=0}^{(n_1-1)a_1-1} (-1)^{j} \binom{(n_1-1)a_1-1}{j} B\big[(n_2-1)a_2+i+1,\, a_1+j\big] \left(\frac{n_2\bar{\eta}}{n_1\bar{\varepsilon}}\right)^{a_1+j}; \quad \text{if } n_2\bar{\eta} < n_1\bar{\varepsilon}.    (10)

Similarly, for the case n_2\bar{\eta} > n_1\bar{\varepsilon}, we get

= \frac{1}{B[a_1,(n_1-1)a_1]\, B[a_2,(n_2-1)a_2]} \sum_{i=0}^{a_2-1} \binom{a_2-1}{i} \frac{(-1)^{i}}{(n_2-1)a_2+i} \sum_{j=0}^{(n_2-1)a_2+i} (-1)^{j} \binom{(n_2-1)a_2+i}{j} B\big[(n_1-1)a_1,\, a_1+j\big] \left(\frac{n_1\bar{\varepsilon}}{n_2\bar{\eta}}\right)^{j}; \quad \text{if } n_2\bar{\eta} > n_1\bar{\varepsilon}.    (11)

Now, to obtain the UMVUE for the class of distributions, substitute n_1\bar{\varepsilon} = \sum_{i=1}^{n_1} X_i^{c_1} = T_X and n_2\bar{\eta} = \sum_{j=1}^{n_2} Y_j^{c_2} = T_Y in (10) and (11), and the Theorem follows.
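When the shape constants a_1 and a_2 are positive integers, the double sums in (8) are finite and can be evaluated directly. The sketch below (Python with SciPy; all names are ours) is one possible transcription of (8), keeping the beta-function factor inside the j-sum as in the derivation of (10) and (11).

```python
import numpy as np
from scipy.special import beta, comb

def umvue_R(x, y, a1, c1, a2, c2):
    """UMVUE of R = P(Y > X) from (8), assuming a1 and a2 are positive integers."""
    n1, n2 = len(x), len(y)
    TX = float(np.sum(np.asarray(x, dtype=float) ** c1))
    TY = float(np.sum(np.asarray(y, dtype=float) ** c2))
    const = 1.0 / (beta(a1, (n1 - 1) * a1) * beta(a2, (n2 - 1) * a2))
    total = 0.0
    for i in range(a2):                              # i = 0, ..., a2 - 1
        w_i = comb(a2 - 1, i) * (-1.0) ** i / ((n2 - 1) * a2 + i)
        if TY < TX:
            for j in range((n1 - 1) * a1):           # j = 0, ..., (n1 - 1)a1 - 1
                total += (w_i * (-1.0) ** j * comb((n1 - 1) * a1 - 1, j)
                          * beta((n2 - 1) * a2 + i + 1, a1 + j)
                          * (TY / TX) ** (a1 + j))
        else:
            for j in range((n2 - 1) * a2 + i + 1):   # j = 0, ..., (n2 - 1)a2 + i
                total += (w_i * (-1.0) ** j * comb((n2 - 1) * a2 + i, j)
                          * beta((n1 - 1) * a1, a1 + j)
                          * (TX / TY) ** j)
    return const * total

# Exponential case (a = b = c = 1 for both samples): (8) reduces to Corollary 2.1.
rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=15)
y = rng.exponential(scale=2.0, size=20)
print(umvue_R(x, y, a1=1, c1=1, a2=1, c2=1))
```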

Corollary 2.
1. For a = b = c = 1,

\hat{R} = \sum_{j=0}^{n_1-2} (-1)^{j} \frac{\Gamma(n_1)\Gamma(n_2)}{\Gamma(n_1-j-1)\Gamma(n_2+j+1)} \left(\frac{T_Y}{T_X}\right)^{1+j}, \quad \text{if } T_Y < T_X,

\hat{R} = \sum_{j=0}^{n_2-1} (-1)^{j} \frac{\Gamma(n_1)\Gamma(n_2)}{\Gamma(n_2-j)\Gamma(n_1+j)} \left(\frac{T_X}{T_Y}\right)^{j}, \quad \text{if } T_Y > T_X,

where T_Y = \sum_{j=1}^{n_2} Y_j and T_X = \sum_{i=1}^{n_1} X_i, which is the UMVUE of R for the exponential distribution.


2. For b = c = 1 and a a positive integer,

\hat{R} = \frac{1}{B[a_1,(n_1-1)a_1]\, B[a_2,(n_2-1)a_2]} \sum_{i=0}^{a_2-1} \binom{a_2-1}{i} \frac{(-1)^{i}}{(n_2-1)a_2+i} \sum_{j=0}^{(n_1-1)a_1-1} (-1)^{j} \binom{(n_1-1)a_1-1}{j} B\big[(n_2-1)a_2+i+1,\, a_1+j\big] \left(\frac{T_Y}{T_X}\right)^{a_1+j}, \quad \text{if } T_Y < T_X,

\hat{R} = \frac{1}{B[a_1,(n_1-1)a_1]\, B[a_2,(n_2-1)a_2]} \sum_{i=0}^{a_2-1} \binom{a_2-1}{i} \frac{(-1)^{i}}{(n_2-1)a_2+i} \sum_{j=0}^{(n_2-1)a_2+i} (-1)^{j} \binom{(n_2-1)a_2+i}{j} B\big[(n_1-1)a_1,\, a_1+j\big] \left(\frac{T_X}{T_Y}\right)^{j}, \quad \text{if } T_X < T_Y,

where T_Y = \sum_{j=1}^{n_2} Y_j and T_X = \sum_{i=1}^{n_1} X_i, which is the UMVUE of R for the gamma distribution.

3. For b = c = \alpha, the same two expressions hold with T_Y = \sum_{j=1}^{n_2} Y_j^{\alpha} and T_X = \sum_{i=1}^{n_1} X_i^{\alpha}, which is the UMVUE of R for the generalized gamma distribution.

4. For a = 1 and b = c,

\hat{R} = \sum_{j=0}^{n_1-2} (-1)^{j} \frac{\Gamma(n_1)\Gamma(n_2)}{\Gamma(n_1-j-1)\Gamma(n_2+j+1)} \left(\frac{T_Y}{T_X}\right)^{1+j}, \quad \text{if } T_Y < T_X,

\hat{R} = \sum_{j=0}^{n_2-1} (-1)^{j} \frac{\Gamma(n_1)\Gamma(n_2)}{\Gamma(n_2-j)\Gamma(n_1+j)} \left(\frac{T_X}{T_Y}\right)^{j}, \quad \text{if } T_Y > T_X,

where T_Y = \sum_{j=1}^{n_2} Y_j^{c} and T_X = \sum_{i=1}^{n_1} X_i^{c}, which is the UMVUE of R for the Weibull distribution.

5. For a = b = 1 and c = 2, the same two expressions as in case 4 hold with T_Y = \sum_{j=1}^{n_2} Y_j^{2} and T_X = \sum_{i=1}^{n_1} X_i^{2}, which is the UMVUE of R for the Rayleigh distribution.


6. For a = \alpha/2, b = 1 and c = 2,

\hat{R} = \frac{1}{B\big[\tfrac{\alpha}{2}, \tfrac{(n_1-1)\alpha}{2}\big]\, B\big[\tfrac{\alpha}{2}, \tfrac{(n_2-1)\alpha}{2}\big]} \sum_{i=0}^{\frac{\alpha}{2}-1} \binom{\frac{\alpha}{2}-1}{i} \frac{(-1)^{i}}{\frac{(n_2-1)\alpha}{2}+i} \sum_{j=0}^{\frac{(n_1-1)\alpha}{2}-1} (-1)^{j} \binom{\frac{(n_1-1)\alpha}{2}-1}{j} B\Big[\tfrac{(n_2-1)\alpha}{2}+i+1,\, \tfrac{\alpha}{2}+j\Big] \left(\frac{T_Y}{T_X}\right)^{\frac{\alpha}{2}+j}, \quad \text{if } T_Y < T_X,

\hat{R} = \frac{1}{B\big[\tfrac{\alpha}{2}, \tfrac{(n_1-1)\alpha}{2}\big]\, B\big[\tfrac{\alpha}{2}, \tfrac{(n_2-1)\alpha}{2}\big]} \sum_{i=0}^{\frac{\alpha}{2}-1} \binom{\frac{\alpha}{2}-1}{i} \frac{(-1)^{i}}{\frac{(n_2-1)\alpha}{2}+i} \sum_{j=0}^{\frac{(n_2-1)\alpha}{2}+i} (-1)^{j} \binom{\frac{(n_2-1)\alpha}{2}+i}{j} B\Big[\tfrac{(n_1-1)\alpha}{2},\, \tfrac{\alpha}{2}+j\Big] \left(\frac{T_X}{T_Y}\right)^{j}, \quad \text{if } T_X < T_Y,

where T_Y = \sum_{j=1}^{n_2} Y_j^{2} and T_X = \sum_{i=1}^{n_1} X_i^{2}, which is the UMVUE of R for the Chi distribution.
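As a quick numerical illustration of the closed forms in Corollary 2 (cases 1, 4 and 5), the following self-contained sketch (Python; the function name and the simulation settings are ours) implements \hat{R} for the exponential case and checks by simulation that its average over repeated samples is close to the true R = \theta_Y/(\theta_X + \theta_Y), as unbiasedness requires.

```python
import numpy as np
from math import lgamma

def umvue_R_closed(TX, TY, n1, n2):
    """Closed form of R-hat from Corollary 2 (cases 1, 4, 5), with TX = sum X_i^c, TY = sum Y_j^c."""
    if TY < TX:
        js = np.arange(n1 - 1)                    # j = 0, ..., n1 - 2
        logc = np.array([lgamma(n1) + lgamma(n2) - lgamma(n1 - j - 1) - lgamma(n2 + j + 1) for j in js])
        return float(np.sum((-1.0) ** js * np.exp(logc) * (TY / TX) ** (1 + js)))
    js = np.arange(n2)                            # j = 0, ..., n2 - 1
    logc = np.array([lgamma(n1) + lgamma(n2) - lgamma(n2 - j) - lgamma(n1 + j) for j in js])
    return float(np.sum((-1.0) ** js * np.exp(logc) * (TX / TY) ** js))

rng = np.random.default_rng(3)
theta_x, theta_y, n1, n2 = 1.0, 2.0, 10, 12
true_R = theta_y / (theta_x + theta_y)            # P(Y > X) for independent exponentials

est = [umvue_R_closed(rng.exponential(theta_x, n1).sum(),
                      rng.exponential(theta_y, n2).sum(), n1, n2)
       for _ in range(20_000)]
print(true_R, float(np.mean(est)))                # the two numbers should be close
```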

3 Interval estimation of R = P(Y > X)

Theorem 3: A confidence interval for R = P(Y > X) is given by

P\left[\, I_{\frac{(a_1\bar{T}_Y/a_2\bar{T}_X)F_{1-\gamma_2}}{(a_1\bar{T}_Y/a_2\bar{T}_X)F_{1-\gamma_2}+1}}(a_1, a_2) \;<\; R \;<\; I_{\frac{(a_1\bar{T}_Y/a_2\bar{T}_X)F_{\gamma_1}}{(a_1\bar{T}_Y/a_2\bar{T}_X)F_{\gamma_1}+1}}(a_1, a_2) \,\right] = 1 - \gamma,    (12)

where \bar{T}_X = \frac{1}{n_1}\sum_{i=1}^{n_1} X_i^{c_1} and \bar{T}_Y = \frac{1}{n_2}\sum_{j=1}^{n_2} Y_j^{c_2}.

Proof: Let \varepsilon and \eta be two independent random variables following gamma distributions with parameters (a_1, \lambda_1) and (a_2, \lambda_2) respectively, where \varepsilon = x^{c_1} and \eta = y^{c_2}. Denote

\lambda = \frac{\lambda_2}{\lambda_1}; \qquad \tilde{\lambda} = \frac{a_1\bar{\eta}}{a_2\bar{\varepsilon}}.    (13)

Now, it is well known that

\frac{2 n_1 \bar{\varepsilon}}{\lambda_1} \sim \mathrm{Gamma}(n_1 a_1, 2) \equiv \chi^2_{2 n_1 a_1}.

Similarly,

\frac{2 n_2 \bar{\eta}}{\lambda_2} \sim \mathrm{Gamma}(n_2 a_2, 2) \equiv \chi^2_{2 n_2 a_2},

where \chi^2_{\nu} denotes the Chi-square distribution with \nu degrees of freedom. Hence,

\frac{\tilde{\lambda}}{\lambda} = \frac{2 n_2 \bar{\eta}/(2 n_2 \lambda_2 a_2)}{2 n_1 \bar{\varepsilon}/(2 n_1 \lambda_1 a_1)} = \frac{\chi^2_{2 n_2 a_2}/2 n_2 a_2}{\chi^2_{2 n_1 a_1}/2 n_1 a_1} \sim F(2 n_2 a_2,\, 2 n_1 a_1),    (14)

where F(\nu_1, \nu_2) denotes Snedecor's F-distribution with \nu_1 and \nu_2 degrees of freedom. Analogously,

\frac{\lambda}{\tilde{\lambda}} \sim F(2 n_1 a_1,\, 2 n_2 a_2).


For any \delta, denote by F_{\delta} = F_{\delta}(2 n_1 a_1, 2 n_2 a_2) the 1 - \delta quantile (i.e., the upper \delta cut-off point) of the F(2 n_1 a_1, 2 n_2 a_2) distribution. The 1 - \delta quantile of the F(2 n_2 a_2, 2 n_1 a_1) distribution is related to F_{\delta} as follows:

F_{\delta}(2 n_2 a_2, 2 n_1 a_1) = [F_{1-\delta}(2 n_1 a_1, 2 n_2 a_2)]^{-1}.

Let \gamma_1 and \gamma_2 be non-negative numbers such that \gamma_1 + \gamma_2 = \gamma. Then

P\left[\, \tilde{\lambda} F_{1-\gamma_2} < \lambda < \tilde{\lambda} F_{\gamma_1} \,\right] = 1 - \gamma.    (15)

Recall now that R = I_{\lambda/(\lambda+1)}(a_1, a_2). Since I_z(a, b) is an increasing function of z for any a, b, so is I_{\lambda/(\lambda+1)}(a_1, a_2) as a function of \lambda. Hence (15) implies that

P\left[\, I_{\frac{\tilde{\lambda} F_{1-\gamma_2}}{\tilde{\lambda} F_{1-\gamma_2}+1}}(a_1, a_2) < R < I_{\frac{\tilde{\lambda} F_{\gamma_1}}{\tilde{\lambda} F_{\gamma_1}+1}}(a_1, a_2) \,\right] = 1 - \gamma.    (16)

The confidence interval (16) was originally derived by Constantine et al. (1986)[7]. Now, in (16) replace \tilde{\lambda} = \frac{a_1\bar{\eta}}{a_2\bar{\varepsilon}} and \lambda_1 = \theta_1^{b_1}, \lambda_2 = \theta_2^{b_2}. Thus R = I_{\lambda_2/(\lambda_1+\lambda_2)}(a_1, a_2) = I_{\theta_2^{b_2}/(\theta_1^{b_1}+\theta_2^{b_2})}(a_1, a_2), and the confidence interval for R is given by

P\left[\, I_{\frac{(a_1\bar{\eta}/a_2\bar{\varepsilon})F_{1-\gamma_2}}{(a_1\bar{\eta}/a_2\bar{\varepsilon})F_{1-\gamma_2}+1}}(a_1, a_2) < R < I_{\frac{(a_1\bar{\eta}/a_2\bar{\varepsilon})F_{\gamma_1}}{(a_1\bar{\eta}/a_2\bar{\varepsilon})F_{\gamma_1}+1}}(a_1, a_2) \,\right] = 1 - \gamma,    (17)

where \bar{\varepsilon} = \frac{1}{n_1}\sum_{i=1}^{n_1} X_i^{c_1} = \bar{T}_X (say) and \bar{\eta} = \frac{1}{n_2}\sum_{j=1}^{n_2} Y_j^{c_2} = \bar{T}_Y (say), and the Theorem follows.

Corollary 3.
1. For a = b = c = 1 in (17), we get the confidence interval for the one-parameter exponential distribution,

P\left[\, \frac{\tilde{\lambda} F_{1-\gamma_2}}{\tilde{\lambda} F_{1-\gamma_2}+1} < R < \frac{\tilde{\lambda} F_{\gamma_1}}{\tilde{\lambda} F_{\gamma_1}+1} \,\right] = 1 - \gamma,

where \tilde{\lambda} = \bar{T}_Y/\bar{T}_X.
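A numerical sketch of Theorem 3 (Python with SciPy; the function name and the equal-tails choice \gamma_1 = \gamma_2 = \gamma/2 are ours): it forms \tilde{\lambda} = a_1\bar{T}_Y/(a_2\bar{T}_X), takes the required F quantiles, and maps the endpoints through the regularized incomplete beta function as in (17).

```python
import numpy as np
from scipy.special import betainc
from scipy.stats import f as fdist

def confidence_interval_R(x, y, a1, c1, a2, c2, gamma=0.05):
    """Equal-tailed (1 - gamma) confidence interval for R = P(Y > X) from (17)."""
    n1, n2 = len(x), len(y)
    TX_bar = np.mean(np.asarray(x, dtype=float) ** c1)
    TY_bar = np.mean(np.asarray(y, dtype=float) ** c2)
    lam_tilde = (a1 * TY_bar) / (a2 * TX_bar)
    # F_delta(2*n1*a1, 2*n2*a2) is the (1 - delta) quantile; take gamma1 = gamma2 = gamma/2
    F_g1  = fdist.ppf(1 - gamma / 2, 2 * n1 * a1, 2 * n2 * a2)   # F_{gamma_1}
    F_1g2 = fdist.ppf(gamma / 2, 2 * n1 * a1, 2 * n2 * a2)       # F_{1 - gamma_2}
    lower = betainc(a1, a2, lam_tilde * F_1g2 / (lam_tilde * F_1g2 + 1))
    upper = betainc(a1, a2, lam_tilde * F_g1 / (lam_tilde * F_g1 + 1))
    return lower, upper

# Rayleigh example (a = b = 1, c = 2 for both samples): X^2 and Y^2 are exponential.
rng = np.random.default_rng(4)
x = np.sqrt(rng.exponential(scale=1.0, size=30))
y = np.sqrt(rng.exponential(scale=3.0, size=40))
print(confidence_interval_R(x, y, a1=1, c1=2, a2=1, c2=2, gamma=0.05))
```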
