Entropy 2014, 16, 443-454; doi:10.3390/e16010443
ISSN 1099-4300, www.mdpi.com/journal/entropy
OPEN ACCESS / Article

Entropy Estimation of Generalized Half-Logistic Distribution (GHLD) Based on Type-II Censored Samples

Jung-In Seo and Suk-Bok Kang *

Department of Statistics, Yeungnam University, Dae-dong, Gyeongsan 712-749, Korea; E-Mail: [email protected]

* Author to whom correspondence should be addressed; E-Mail: [email protected]; Tel.: +82-53-810-2323.

Received: 21 October 2013; in revised form: 5 December 2013 / Accepted: 16 December 2013 / Published: 2 January 2014
Abstract: This paper derives the entropy of a generalized half-logistic distribution based on Type-II censored samples, obtains some entropy estimators by using Bayes estimators of an unknown parameter in the generalized half-logistic distribution based on Type-II censored samples, and compares these estimators in terms of the mean squared error and the bias through Monte Carlo simulations.

Keywords: entropy; expected Bayesian estimation; generalized half-logistic distribution; Type-II censored sample
1. Introduction

Shannon [1], who proposed information theory to quantify information, introduced statistical entropy. Several researchers have since discussed inference about entropy. Baratpour et al. [2] developed the entropy of upper record values and provided several upper and lower bounds for this entropy by using the hazard rate function. Abo-Eleneen [3] suggested an efficient method for the simple computation of entropy in progressively Type-II censored samples. Kang et al. [4] derived estimators for the entropy function of a double exponential distribution based on multiply Type-II censored samples by using maximum likelihood estimators (MLEs) and approximate MLEs (AMLEs). Arora et al. [5] obtained the MLE of the shape parameter in a generalized half-logistic distribution (GHLD) based on Type-I progressive censoring with varying failure rates. Kim et al. [6] proposed Bayes estimators of the shape parameter and the reliability function for the GHLD based on progressively
Type-II censored data under various loss functions. Seo et al. [7] developed an entropy estimation method for upper record values from the GHLD. Azimi [8] derived the Bayes estimators of the shape parameter and the reliability function for the GHLD based on Type-II doubly censored samples.

The cumulative distribution function (cdf) and probability density function (pdf) of a random variable X with the GHLD are, respectively:

F(x) = 1 - \left( \frac{2e^{-x}}{1 + e^{-x}} \right)^{\lambda}   (1)

and:

f(x) = \lambda \left( \frac{2e^{-x}}{1 + e^{-x}} \right)^{\lambda} \frac{1}{1 + e^{-x}}, \quad x > 0, \ \lambda > 0   (2)

where λ is the shape parameter. As a special case, if λ = 1, then this is the half-logistic distribution.

The rest of this paper is organized as follows: Section 2 develops an entropy estimation method with Bayes estimators of the shape parameter in the GHLD based on Type-II censored samples. Section 3 assesses the validity of the proposed method by using Monte Carlo simulations, and Section 4 concludes.

2. Entropy Estimation

Let X be a random variable with the cdf F(x) and the pdf f(x). Then, the differential entropy of X is defined by Cover and Thomas [9] as:

H(f) = -\int_{-\infty}^{\infty} f(x) \log f(x) \, dx   (3)
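As a quick numerical check of Equations (1) and (2), the cdf can be inverted in closed form: for u in (0, 1), setting t = (1 - u)^{1/λ} gives the quantile Q(u) = log((2 - t)/t), which is also convenient for simulating GHLD samples later. A minimal sketch (the function names are illustrative, not from the paper):

```python
import math

def ghld_cdf(x, lam):
    # Equation (1): F(x) = 1 - (2 e^{-x} / (1 + e^{-x}))^lambda
    return 1.0 - (2.0 * math.exp(-x) / (1.0 + math.exp(-x))) ** lam

def ghld_pdf(x, lam):
    # Equation (2): f(x) = lambda (2 e^{-x} / (1 + e^{-x}))^lambda / (1 + e^{-x})
    g = 2.0 * math.exp(-x) / (1.0 + math.exp(-x))
    return lam * g ** lam / (1.0 + math.exp(-x))

def ghld_quantile(u, lam):
    # Inverse of Equation (1): t = (1 - u)^{1/lambda}, x = log((2 - t)/t)
    t = (1.0 - u) ** (1.0 / lam)
    return math.log((2.0 - t) / t)

lam = 2.5
# Round trip: F(Q(u)) recovers u
for u in (0.1, 0.5, 0.9):
    assert abs(ghld_cdf(ghld_quantile(u, lam), lam) - u) < 1e-12
# The pdf matches the numerical derivative of the cdf
x, eps = 1.0, 1e-6
deriv = (ghld_cdf(x + eps, lam) - ghld_cdf(x - eps, lam)) / (2 * eps)
assert abs(deriv - ghld_pdf(x, lam)) < 1e-8
```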
Let X_{1:n}, X_{2:n}, ..., X_{n:n} be the order statistics of random samples X_1, X_2, ..., X_n from the continuous pdf f(x). In the conventional Type-II censoring scheme, r is assumed to be known in advance, and the experiment is terminated as soon as the r-th item fails (r ≤ n). Park [10] provided the entropy of a continuous probability distribution based on Type-II censored samples in terms of the hazard function, h(x) = f(x)/(1 - F(x)), as:

H(f) = \sum_{i=1}^{r} \left[ 1 - \log(n - i + 1) - \int_{-\infty}^{\infty} f_{i:n}(x) \log h(x) \, dx \right]   (4)

where f_{i:n}(x) is the pdf of the i-th order statistic, X_{i:n}. He also derived a nonparametric estimator of the entropy in Equation (4) as:

H_m = \sum_{i=1}^{r} \left[ \log \left\{ \frac{n}{2m} (x_{i+m:n} - x_{i-m:n}) \right\} - \log(n - i + 1) \right] - n \left( 1 - \frac{r}{n} \right) \left[ 1 - \log \left( 1 - \frac{r}{n} \right) \right]   (5)

where m is a positive integer smaller than r/2, referred to as the window size; x_{i-m:n} = x_{1:n} for i - m < 1 and x_{i+m:n} = x_{r:n} for i + m > r.

Theorem 1. The entropy of the GHLD based on Type-II censored samples is:

H(f) = \sum_{i=1}^{r} \left[ 1 - \log(n - i + 1) - I \right]   (6)
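The spacing-based estimator H_m in Equation (5) is straightforward to compute from a sorted Type-II censored sample. A minimal sketch, assuming the boundary conventions stated after Equation (5) (the function name is illustrative); since H_m depends on the data only through spacings, it is exactly invariant under location shifts and adds r log a under scaling by a > 0, which gives a deterministic check:

```python
import math, random

def entropy_estimate(x, n, m):
    # H_m of Equation (5): x = [x_{1:n}, ..., x_{r:n}] is the sorted
    # Type-II censored sample, m the window size (positive integer < r/2).
    r = len(x)
    total = 0.0
    for i in range(1, r + 1):
        lo = x[max(i - m, 1) - 1]    # x_{i-m:n} = x_{1:n} when i - m < 1
        hi = x[min(i + m, r) - 1]    # x_{i+m:n} = x_{r:n} when i + m > r
        total += math.log(n / (2.0 * m) * (hi - lo)) - math.log(n - i + 1)
    if r < n:                        # correction term for the censored portion
        total -= n * (1.0 - r / n) * (1.0 - math.log(1.0 - r / n))
    return total

random.seed(1)
n, r, m = 50, 30, 4
x = sorted(random.expovariate(1.0) for _ in range(n))[:r]
base = entropy_estimate(x, n, m)
# Location shift leaves H_m unchanged; scaling by a adds r*log(a)
assert abs(entropy_estimate([xi + 5.0 for xi in x], n, m) - base) < 1e-9
a = 3.0
assert abs(entropy_estimate([a * xi for xi in x], n, m) - (base + r * math.log(a))) < 1e-9
```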
where:

I = \log \lambda - \frac{\Gamma(n+1)}{\Gamma(n-i+1)} \sum_{j=1}^{\infty} \frac{2^{-j} \, \Gamma(n - i + j/\lambda + 1)}{j \, \Gamma(n + j/\lambda + 1)}   (7)

Proof. Let u = 1 - F(x). Then:

I = \int_{0}^{\infty} f_{i:n}(x) \log h(x) \, dx
  = \frac{\Gamma(n+1)}{\Gamma(i)\Gamma(n-i+1)} \int_{0}^{1} u^{n-i} (1-u)^{i-1} \log \left[ \lambda \left( 1 - \frac{1}{2} u^{1/\lambda} \right) \right] du
  = \log \lambda + \frac{\Gamma(n+1)}{\Gamma(i)\Gamma(n-i+1)} \int_{0}^{1} u^{n-i} (1-u)^{i-1} \log \left( 1 - \frac{1}{2} u^{1/\lambda} \right) du
  = \log \lambda - \frac{\Gamma(n+1)}{\Gamma(i)\Gamma(n-i+1)} \sum_{j=1}^{\infty} \frac{2^{-j}}{j} \int_{0}^{1} u^{n-i+j/\lambda} (1-u)^{i-1} \, du
  = \log \lambda - \frac{\Gamma(n+1)}{\Gamma(n-i+1)} \sum_{j=1}^{\infty} \frac{2^{-j} \, \Gamma(n - i + j/\lambda + 1)}{j \, \Gamma(n + j/\lambda + 1)}   (8)

by using:

\log(1 - z) = -\sum_{j=1}^{\infty} \frac{z^j}{j} \quad \text{for } |z| < 1   (9)

where the last equality of Equation (8) follows from the beta integral \int_{0}^{1} u^{n-i+j/\lambda} (1-u)^{i-1} \, du = \Gamma(n-i+j/\lambda+1)\Gamma(i)/\Gamma(n+j/\lambda+1).
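Equation (7) can be checked numerically: the truncated series (which converges geometrically because of the 2^{-j} factor) should agree with direct quadrature of the integral appearing in the proof. A sketch using only the standard library, with math.lgamma to avoid overflow in the gamma ratios (function names and test values n = 10, i = 3, λ = 2 are arbitrary illustrations):

```python
import math

def I_series(n, i, lam, terms=200):
    # Equation (7): I = log(lam) - Gamma(n+1)/Gamma(n-i+1)
    #   * sum_j 2^{-j} Gamma(n-i+j/lam+1) / (j Gamma(n+j/lam+1))
    s = 0.0
    for j in range(1, terms + 1):
        s += 2.0 ** (-j) / j * math.exp(
            math.lgamma(n - i + j / lam + 1) - math.lgamma(n + j / lam + 1))
    return math.log(lam) - math.exp(
        math.lgamma(n + 1) - math.lgamma(n - i + 1)) * s

def I_quadrature(n, i, lam, steps=200_000):
    # Midpoint-rule integration of the integral in the proof:
    # I = C * \int_0^1 u^{n-i} (1-u)^{i-1} log(lam (1 - u^{1/lam}/2)) du
    c = math.exp(math.lgamma(n + 1) - math.lgamma(i) - math.lgamma(n - i + 1))
    du = 1.0 / steps
    total = 0.0
    for k in range(steps):
        u = (k + 0.5) * du
        total += u ** (n - i) * (1 - u) ** (i - 1) * math.log(
            lam * (1 - 0.5 * u ** (1 / lam)))
    return c * total * du

assert abs(I_series(10, 3, 2.0) - I_quadrature(10, 3, 2.0)) < 1e-6
```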
Note that entropy Equation (6) depends on the shape parameter λ. The following subsections estimate λ by using two Bayesian inference methods: one makes use of a conjugate prior; the other places distributions on the hyperparameters, that is, the parameters of the prior.

2.1. Bayes Estimation

Theorem 2. A natural conjugate prior for the shape parameter λ is the gamma prior, given by:

\pi(\lambda | \alpha, \beta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} \lambda^{\alpha - 1} e^{-\beta \lambda}, \quad \alpha, \beta > 0   (10)

Then, the Bayes estimator of λ under the squared error loss function (SELF) is obtained as:

\hat{\lambda}_S = \frac{r + \alpha}{\beta + h}   (11)

where:

h = (n - r) \log \frac{1 + e^{-x_{r:n}}}{2 e^{-x_{r:n}}} + \sum_{i=1}^{r} \log \frac{1 + e^{-x_{i:n}}}{2 e^{-x_{i:n}}}   (12)
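The statistic h in Equation (12) and the resulting estimators are easy to compute from a simulated Type-II censored GHLD sample. A simulation sketch (the inverse-transform sampler follows from inverting Equation (1); the hyperparameter values α = β = 0.5 and the sample sizes are arbitrary illustrations, not from the paper):

```python
import math, random

def h_statistic(x_censored, n):
    # Equation (12): h = (n - r) log((1 + e^{-x_r})/(2 e^{-x_r}))
    #                  + sum_i log((1 + e^{-x_i})/(2 e^{-x_i}))
    r = len(x_censored)
    g = lambda x: math.log((1.0 + math.exp(-x)) / (2.0 * math.exp(-x)))
    return (n - r) * g(x_censored[-1]) + sum(g(xi) for xi in x_censored)

def bayes_estimate(x_censored, n, alpha, beta):
    # Equation (11): posterior mean under the gamma prior and SELF
    r = len(x_censored)
    return (r + alpha) / (beta + h_statistic(x_censored, n))

def ghld_sample(lam):
    # Inverse-transform sampling from Equation (1)
    t = (1.0 - random.random()) ** (1.0 / lam)
    return math.log((2.0 - t) / t)

random.seed(7)
lam, n, r = 1.5, 2000, 1500
x = sorted(ghld_sample(lam) for _ in range(n))[:r]   # Type-II censoring at r

mle = r / h_statistic(x, n)            # MLE from Equation (13)
bayes = bayes_estimate(x, n, alpha=0.5, beta=0.5)
assert abs(mle - lam) < 0.25           # close to the true shape parameter
assert abs(bayes - lam) < 0.25
```

With a large observed fraction r/n, the Bayes estimate and the MLE nearly coincide, since the prior contributes only α and β against r and h.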
Proof. The likelihood function based on the Type-II censoring scheme is written as:

L(\lambda) = \frac{n!}{(n-r)!} \prod_{i=1}^{r} f(x_{i:n}) \, [1 - F(x_{r:n})]^{n-r}
           = \frac{n!}{(n-r)!} \, \lambda^{r} \left( \frac{2e^{-x_{r:n}}}{1 + e^{-x_{r:n}}} \right)^{\lambda(n-r)} \prod_{i=1}^{r} \left( \frac{2e^{-x_{i:n}}}{1 + e^{-x_{i:n}}} \right)^{\lambda} \frac{1}{1 + e^{-x_{i:n}}}   (13)

Since L(\lambda) \propto \lambda^{r} e^{-\lambda h} with h as in Equation (12), it is easy to obtain the MLE of λ as \hat{\lambda} = r/h from Equation (13). From the gamma prior in Equation (10) and the likelihood function in Equation (13), the posterior distribution of λ is obtained as:

\pi(\lambda | x) = \frac{L(\lambda) \pi(\lambda | \alpha, \beta)}{\int_{\lambda} L(\lambda) \pi(\lambda | \alpha, \beta) \, d\lambda} = \frac{(\beta + h)^{r+\alpha}}{\Gamma(r + \alpha)} \lambda^{r + \alpha - 1} e^{-(\beta + h)\lambda}   (14)

which is a gamma distribution with the parameters (r + α, β + h). Here, the Bayes estimator of λ under the SELF is the posterior mean, (r + α)/(β + h), and therefore, this completes the proof.

2.2. E-Bayesian Estimation

Han [11] introduced a Bayesian estimation method referred to as the expected Bayesian (E-Bayesian) estimation method to estimate a failure probability. The present paper uses this method to obtain another Bayes estimator of the shape parameter λ. In Subsection 2.1, the gamma prior is supposed to obtain the Bayes estimator of λ. Hence, according to Han [12], the following prior distributions for the hyperparameters α and β are considered:

\pi_1(\alpha, \beta) = \frac{2(c - \beta)}{c^2}, \quad 0 < \alpha < 1, \ 0 < \beta < c   (15)

\pi_2(\alpha, \beta) = \frac{1}{c}, \quad 0 < \alpha < 1, \ 0 < \beta < c   (16)

\pi_3(\alpha, \beta) = \frac{2\beta}{c^2}, \quad 0 < \alpha < 1, \ 0 < \beta < c   (17)

which are decreasing, constant and increasing functions of β, respectively.

Theorem 3. For the prior distributions (15), (16) and (17), the corresponding E-Bayes estimators of λ under the SELF are, respectively:

\hat{\lambda}_{ES1} = \frac{2}{c^2} \left( r + \frac{1}{2} \right) \left[ (c + h) \log \left( 1 + \frac{c}{h} \right) - c \right]   (18)

\hat{\lambda}_{ES2} = \frac{1}{c} \left( r + \frac{1}{2} \right) \log \left( 1 + \frac{c}{h} \right)   (19)

\hat{\lambda}_{ES3} = \frac{2}{c^2} \left( r + \frac{1}{2} \right) \left[ c - h \log \left( 1 + \frac{c}{h} \right) \right]   (20)
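The closed forms in Equations (18)-(20) can be verified against brute-force numerical integration of \hat{\lambda}_S = (r + α)/(β + h) over each hyperprior. A sketch (the values r = 10, h = 8, c = 2 are arbitrary test inputs satisfying 0 < c < h):

```python
import math

def e_bayes_closed(r, h, c):
    # Equations (18)-(20)
    L = math.log(1.0 + c / h)
    k = r + 0.5
    return (2.0 / c**2 * k * ((c + h) * L - c),
            1.0 / c * k * L,
            2.0 / c**2 * k * (c - h * L))

def e_bayes_numeric(r, h, c, steps=400):
    # Midpoint-rule double integration of lambda_S = (r + a)/(b + h)
    # against pi_1, pi_2, pi_3 over 0 < a < 1, 0 < b < c
    da, db = 1.0 / steps, c / steps
    sums = [0.0, 0.0, 0.0]
    for ia in range(steps):
        a = (ia + 0.5) * da
        for ib in range(steps):
            b = (ib + 0.5) * db
            lam_s = (r + a) / (b + h)
            weights = (2.0 * (c - b) / c**2, 1.0 / c, 2.0 * b / c**2)
            for k in range(3):
                sums[k] += lam_s * weights[k] * da * db
    return tuple(sums)

closed = e_bayes_closed(10, 8.0, 2.0)
numeric = e_bayes_numeric(10, 8.0, 2.0)
assert all(abs(u - v) < 1e-4 for u, v in zip(closed, numeric))
assert closed[2] < closed[1] < closed[0]   # ordering of Theorem 4 for c < h
```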
Proof. For the prior distribution (15):

\hat{\lambda}_{ES1} = \int_{\alpha} \int_{\beta} \hat{\lambda}_S \, \pi_1(\alpha, \beta) \, d\beta \, d\alpha
 = \frac{2}{c^2} \int_{0}^{1} \int_{0}^{c} (r + \alpha) \frac{c - \beta}{\beta + h} \, d\beta \, d\alpha
 = \frac{2}{c^2} \left[ (c + h) \log \frac{c + h}{h} - c \right] \int_{0}^{1} (r + \alpha) \, d\alpha
 = \frac{2}{c^2} \left( r + \frac{1}{2} \right) \left[ (c + h) \log \left( 1 + \frac{c}{h} \right) - c \right]   (21)

Similarly:

\hat{\lambda}_{ES2} = \int_{\alpha} \int_{\beta} \hat{\lambda}_S \, \pi_2(\alpha, \beta) \, d\beta \, d\alpha = \frac{1}{c} \left( r + \frac{1}{2} \right) \log \left( 1 + \frac{c}{h} \right)   (22)

and:

\hat{\lambda}_{ES3} = \int_{\alpha} \int_{\beta} \hat{\lambda}_S \, \pi_3(\alpha, \beta) \, d\beta \, d\alpha = \frac{2}{c^2} \left( r + \frac{1}{2} \right) \left[ c - h \log \left( 1 + \frac{c}{h} \right) \right]   (23)
The following theorem presents the relationships among the E-Bayes estimators \hat{\lambda}_{ESi} (i = 1, 2, 3).

Theorem 4. For 0 < c < h, the E-Bayes estimators satisfy:

\hat{\lambda}_{ES3} < \hat{\lambda}_{ES2} < \hat{\lambda}_{ES1}   (24)

and:

\lim_{h \to \infty} \hat{\lambda}_{ES1} = \lim_{h \to \infty} \hat{\lambda}_{ES2} = \lim_{h \to \infty} \hat{\lambda}_{ES3}   (25)
Proof. From Equations (18)-(20):

\hat{\lambda}_{ES1} - \hat{\lambda}_{ES2} = \hat{\lambda}_{ES2} - \hat{\lambda}_{ES3} = \frac{2}{c^2} \left( r + \frac{1}{2} \right) \left[ \left( \frac{c}{2} + h \right) \log \left( 1 + \frac{c}{h} \right) - c \right]   (26)

For 0 < c < h:

\left( \frac{c}{2} + h \right) \log \left( 1 + \frac{c}{h} \right) - c
 = \left( \frac{c}{2} + h \right) \sum_{i=1}^{\infty} \frac{(-1)^{i-1}}{i} \left( \frac{c}{h} \right)^{i} - c
 = \frac{c^2 + 2ch}{2h} - \frac{c^3 + 2c^2 h}{4h^2} + \frac{c^4 + 2c^3 h}{6h^3} - \cdots - c
 = c \left( \frac{c^2}{12h^2} - \frac{c^3}{12h^3} + \frac{3c^4}{40h^4} - \frac{c^5}{15h^5} + \cdots \right)
 = c \left[ \frac{c^2}{12h^2} \left( 1 - \frac{c}{h} \right) + \frac{3c^4}{40h^4} \left( 1 - \frac{8c}{9h} \right) + \cdots \right]
 > 0   (27)
by using:

\log(1 + z) = \sum_{j=1}^{\infty} (-1)^{j-1} \frac{z^j}{j} \quad \text{for } |z| < 1   (28)

That is:

\hat{\lambda}_{ES1} - \hat{\lambda}_{ES2} = \hat{\lambda}_{ES2} - \hat{\lambda}_{ES3} > 0 \quad \text{for } 0 < c < h   (29)
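The equal-gap identity in Equation (26), the strict ordering in Equation (29), and the common limit in Equation (25) are easy to confirm numerically from the closed forms of Equations (18)-(20). A short sketch (r = 10, c = 2 and the h values are arbitrary test inputs with 0 < c < h):

```python
import math

def e_bayes(r, h, c):
    # Closed-form E-Bayes estimators, Equations (18)-(20)
    L = math.log(1.0 + c / h)
    k = r + 0.5
    return (2.0 / c**2 * k * ((c + h) * L - c),
            1.0 / c * k * L,
            2.0 / c**2 * k * (c - h * L))

r, c = 10, 2.0
for h in (3.0, 10.0, 100.0):          # any h with 0 < c < h
    es1, es2, es3 = e_bayes(r, h, c)
    d1, d2 = es1 - es2, es2 - es3
    assert abs(d1 - d2) < 1e-9        # equal gaps, Equation (26)
    assert d1 > 0                     # strict ordering, Equation (29)

# Common limit, Equation (25): the gaps vanish as h grows
es1, es2, es3 = e_bayes(r, 1e6, c)
assert es1 - es3 < 1e-6
```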