Statistics: A Journal of Theoretical and Applied Statistics
ISSN: 0233-1888 (Print), 1029-4910 (Online). Journal homepage: http://www.tandfonline.com/loi/gsta20

Estimating a linear parametric function of a doubly censored exponential distribution
Yogesh Mani Tripathi, Constantinos Petropoulos, Farha Sultana & Manoj Kumar Rastogi
Statistics, 52:1, 99-114. DOI: 10.1080/02331888.2017.1344242. https://doi.org/10.1080/02331888.2017.1344242
Published online: 05 Jul 2017.
Estimating a linear parametric function of a doubly censored exponential distribution

Yogesh Mani Tripathi^a, Constantinos Petropoulos^b, Farha Sultana^a and Manoj Kumar Rastogi^c

^a Department of Mathematics, Indian Institute of Technology, Patna, India; ^b Department of Mathematics, University of Patras, Rio, Greece; ^c National Institute of Pharmaceutical Education and Research, Hajipur, India
ABSTRACT
For an arbitrary strictly convex loss function, we study the problem of estimating a linear parametric function μ + kσ, where k is a known constant, when a doubly censored sample is available from a two-parameter exponential E(μ, σ) population. We establish the inadmissibility of the best affine equivariant (BAE) estimator by deriving an improved estimator. We provide various implications for quadratic and linex loss functions in detail. Improvements are obtained for the absolute value loss function as well. Further, a new class of estimators improving upon the BAE estimator is derived using the Kubokawa method. This class is shown to include some benchmark estimators from the literature.

ARTICLE HISTORY
Received 13 June 2016; Accepted 1 June 2017

KEYWORDS
Brewster–Zidek estimator; censored samples; inadmissibility; equivariant estimator

MSC 2010
62C99; 62F10
1. Introduction

In many life testing studies, one often focuses on deriving efficient estimation procedures for the unknown parameters of an underlying population. It is also of interest to estimate important functions of these parameters, such as quantiles, the hazard rate function, and failure probabilities. Censoring is customary in such studies and is a well-accepted mechanism for observing data. Meeker and Escobar [1] describe in detail various applications of censoring in survival analysis and related areas. Estimation problems based on censored samples have found widespread attention in the literature; in the past, several authors have analysed different statistical models based on censored samples, and Lawless [2] and Meeker and Escobar [1] discuss a variety of results in this direction.

The exponential distribution is widely used as a model in reliability studies involving lifetimes of mechanical and electronic components, survival times in critical diseases, etc. Initial studies of probabilistic and statistical properties of the exponential distribution in the presence of censoring were made by Epstein and Sobel [3,4] and Epstein [5]. Important applications of the exponential distribution in statistical theory are also explored in detail in the treatise by Balakrishnan and Basu [6].

In this article, we consider estimation of a linear function of the unknown parameters of a two-parameter exponential distribution when the sample is type II doubly censored. Such censoring is widely used in several areas of statistical inference, including life testing and reliability analysis. It occurs when the values of some extreme observations/lifetimes are not recorded, and can be treated as a combination of left and right censoring. In particular, left censoring occurs in situations where the first few lifetimes are not observed.
CONTACT: C. Petropoulos, [email protected]
© 2017 Informa UK Limited, trading as Taylor & Francis Group

One may refer to the recent treatise of Balakrishnan and Cramer [7] for further details on double censoring. Several examples of type II doubly censored data abound in reliability and life testing experiments. Khan et al. [8] describe how studying the effect of a pesticide on the lifetimes of a prescribed number of termites may lead to doubly censored observations, in the sense that the survival times of the first few and the last few observations are often influenced by factors other than the pesticide itself. In clinical trials, disease progression studies of patients suffering from a deadly disease often lead to doubly censored observations. Here left censoring may occur because the infection may precede the beginning of the study, so only the lifetimes of subjects that fail after the beginning of the study are recorded. In such studies the survival time of a subject during disease progression is usually followed until a diagnostic test or until the subject is lost to follow-up (see, for instance, [9]), so right censoring arises during the follow-up period. It is natural to incorporate this information on the observations to make adequate inference about the unknown quantities of interest.

Several researchers have discussed type II double censoring in the literature. Sindhu et al. [10] obtained different Bayes estimates of the unknown shape parameter of a Burr II distribution and compared the estimates numerically using simulations. Feroze and Aslam [11] obtained related results for a Burr X distribution. Pak et al. [12] considered estimation of the scale parameter of a Rayleigh distribution and obtained its maximum likelihood estimate based on type II doubly censored data.

In this paper, estimation of a linear parametric function of the parameters of a two-parameter exponential distribution is taken up based on doubly censored data. Suppose that X1, X2, ..., Xn denote ordered observations from a two-parameter exponential distribution with density function given by

f_X(x; μ, σ) = (1/σ) e^{−(x−μ)/σ},  x > μ, μ ≥ 0, σ > 0.  (1)
Here, the location parameter μ and the scale parameter σ are unknown. Suppose that the first (a − 1) observations are censored and the test is terminated at the instant the bth failure is observed. Then the resulting data are commonly known as doubly type II censored and are given by

Xa ≤ Xa+1 ≤ · · · ≤ Xb,  where 1 ≤ a ≤ b ≤ n.

We consider estimation of μ + kσ under a loss function of the form L((d − μ − kσ)/σ), where k is a nonnegative constant and the function L(t) is strictly convex for −∞ < t < ∞. It is mentioned in [13] that (Xa, V) forms a complete and sufficient statistic for (μ, σ), where V = −(n − a)Xa + Xa+1 + · · · + (n − b + 1)Xb. The respective probability densities are

f_{Xa}(x) = (A/σ) [1 − e^{−(x−μ)/σ}]^{a−1} [e^{−(x−μ)/σ}]^{n−a+1},  x > μ > 0, σ > 0,  (2)

and

f_V(v) = (1/(σ^{b−a} Γ(b − a))) e^{−v/σ} v^{b−a−1},  v > 0, σ > 0,  (3)

where

A = n! / ((a − 1)! (n − a)!).  (4)
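To make the setup concrete, the following Python sketch (our own illustration with hypothetical parameter values, not code from the paper) simulates a doubly type II censored sample from E(μ, σ) and forms the complete sufficient statistic (Xa, V):

```python
import random

def doubly_censored_stats(n, a, b, mu, sigma, rng):
    """Simulate the ordered sample X_(1) <= ... <= X_(n) from E(mu, sigma),
    keep only X_(a), ..., X_(b), and return the sufficient statistic (Xa, V)
    with V = -(n - a) Xa + X_{a+1} + ... + (n - b + 1) X_b."""
    x = sorted(mu + rng.expovariate(1.0 / sigma) for _ in range(n))
    xa = x[a - 1]                                   # a-th smallest observation
    middle = sum(x[a:b - 1])                        # X_{a+1}, ..., X_{b-1} with weight 1
    v = -(n - a) * xa + middle + (n - b + 1) * x[b - 1]
    return xa, v

rng = random.Random(2023)
xa, v = doubly_censored_stats(n=20, a=2, b=18, mu=0.5, sigma=1.0, rng=rng)
```

Since V equals the positive linear combination of the spacings above Xa, it is strictly positive, and V/σ follows the gamma law (3) with shape b − a.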
This estimation problem has attracted attention among researchers in the recent past and is closely related to quantile estimation. The case a = 1 and b = n corresponds to the complete sample situation. Estimation of μ + kσ corresponds to estimation of the pth quantile, 0 < p < 1, of the exponential distribution given in (1) provided k = −ln(1 − p). In life testing studies it is particularly important, and of interest as well, to derive procedures for making inference on quantiles; in such investigations the lifetime of a product is often considered a quality characteristic.
So inference on the mean residual lifetime, or any other percentile lifetime, is of great practical interest to reliability practitioners. We refer to Chakraborti and Li [14] and Keating et al. [15] for further details on quantile estimation. Furthermore, in clinical trials one of the main objectives is to analyse the effect of a treatment on the survival times of the subjects under study. For instance, it may be of interest what percentage of subjects survive up to a specific time period. Accordingly, large or small values of the index k can be taken into consideration.

Based on doubly censored samples, Elfessi [13] derived improved estimators of the scale parameter σ and the location parameter μ under the squared error loss function. He also considered estimation of μ + kσ and obtained an improved estimator under the squared error loss. In each case, the author showed that the improved estimators are better than the corresponding minimum risk equivariant estimators. Madi [16] obtained further improvements over the best affine equivariant (BAE) estimator of the scale parameter σ under squared error and entropy losses. We note that the estimators obtained by Madi stem from the work of Brewster [17], who initially obtained smooth improved procedures for σ in the complete sampling case, a = 1 and b = n. Concerning the estimation of quantiles, Petropoulos and Kourouklis [18] derived estimators improving upon the BAE estimator under an arbitrary strictly convex loss function in the complete sampling case; squared error and linex loss functions are studied there in detail for a = 1 and b = n. Using the method of Kubokawa [19], the authors also obtained a new procedure under the scale invariant squared error loss which improves upon the BAE estimator. One may also refer to Marchand and Strawderman [20,21] for further applications of the Kubokawa method to different estimation problems.
In Section 2, we first present some preliminary results. Then, in Theorem 2.1, for large values of k, we present an estimator which improves upon the BAE estimator of μ + kσ under an arbitrary strictly convex loss function. In consequence, this result not only encompasses the findings of Elfessi but also generalizes the condition on k to strictly convex loss functions. It should be noticed that our dominance result is based on the method adopted in [18,22,23]. We present two applications of this result and obtain the corresponding improved estimators with respect to the linex and absolute value loss functions. To the best of our knowledge, this estimation problem has not been studied with respect to these loss functions in the literature before. It is of further interest to investigate and expand upon these findings for small values of k. In the process, we obtain a procedure dominating the BAE estimator under the squared error loss. We further extend this result to the linex loss function and derive improved estimators (Theorems 2.3 and 2.4). In Section 3, a new class of estimators is derived using the integral expression of risk difference (IERD) approach of Kubokawa under the scale invariant squared error loss function. We show that this class includes the generalized Bayes estimator of μ + kσ with respect to a uniform prior over the parameter space {(μ, σ): 0 < μ < x, σ > 0}, as well as some other known estimators of μ + kσ. Finally, some concluding remarks are given in Section 4.
2. Preliminaries and inadmissibility results

In this section we first discuss some useful properties of basic estimators. We then prove the main dominance result and present various interesting implications of it later in the section. Let a doubly censored random sample Xa, Xa+1, ..., Xb be available from an exponential E(μ, σ) distribution and, based on such a sample, consider estimating μ + kσ with respect to a loss of the form L((d − μ − kσ)/σ), where L(t) denotes an arbitrary strictly convex function on (−∞, ∞). Thus we work with loss functions characterized by L ≥ 0, L′(t) < 0 on (−∞, 0), L′(t) > 0 on (0, ∞) and L(0) = 0. As a consequence the function L(t) is differentiable almost everywhere, and we assume that the interchange of derivative and integral involving L(t) is allowed. The risk functions of the estimators considered in this paper depend on (μ, σ) only through μ/σ, so without loss of generality we take the parameter σ to be one. We further observe that the problem of estimating μ + kσ remains invariant under the affine group of transformations G = {g_{a1,b1}: g_{a1,b1}(x) = a1 x + b1, a1 > 0, −∞ < b1 < ∞}, and an equivariant estimator has the form δc = Xa + cV, c > 0.
The risk of such an estimator is independent of (μ, σ) and is a strictly convex function of c. In the following lemma we obtain the minimum risk equivariant estimator of μ + kσ. We adopt the following notation for the sake of brevity: E_{μ,σ}(·) denotes expectation under μ and σ; for example, E_{0,1}(·) denotes expectation under μ = 0 and σ = 1. Likewise E_μ(·) and E_σ(·) denote expectation under μ and σ respectively.

Lemma 2.1: For an arbitrary strictly convex loss function L(t), the risk function of δc is convex in c and is uniquely minimized at c = c0 satisfying

E_{0,1}[V* L′(U + c0 V* − k)] = 0,  (5)

where U = (Xa − μ)/σ and V* = V/σ. The estimator δ0 with

δ0 = Xa + c0 V,  (6)

is the BAE estimator of μ + kσ with respect to the loss L(t).

Remark 2.1: For L(t) = t², Equation (5) yields c0 = (k − β)/(b − a + 1), where β = Σ_{i=1}^{a} 1/(n − i + 1). The corresponding estimator is denoted by δ01.

Remark 2.2: For L(t) = e^{pt} − pt − 1, p ≠ 0, it is seen that c0 is the solution of the equation

E_{0,1}[(e^{p(U + c0 V* − k)} − 1) V*] = 0.  (7)

After simple calculations we have

c0 = (1/p)[1 − (e^{−pk} n! Γ(n − a − p + 1) / ((n − a)! Γ(n − p + 1)))^{1/(b−a+1)}],  (8)

provided n + 1 > a + p. We denote this estimator by δ02.

Remark 2.3: Let L(t) = |t|. In this case the corresponding c0 is obtained by solving the equation

E_{0,1}[V*(1 − 2F_U(k − c0 V*))] = 0,  (9)
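As a numerical illustration (our own sketch, not from the paper), the BAE constants of Remarks 2.1 and 2.2 can be computed as follows; the gamma functions in the linex case are evaluated on the log scale via `math.lgamma`:

```python
import math

def c0_quadratic(n, a, b, k):
    """BAE constant under squared error loss: c0 = (k - beta)/(b - a + 1)."""
    beta = sum(1.0 / (n - i + 1) for i in range(1, a + 1))
    return (k - beta) / (b - a + 1)

def c0_linex(n, a, b, k, p):
    """BAE constant under linex loss, valid for p != 0 and n + 1 > a + p."""
    if p == 0 or n + 1 <= a + p:
        raise ValueError("need p != 0 and n + 1 > a + p")
    # Q = n! Gamma(n-a-p+1) / ((n-a)! Gamma(n-p+1)), computed via lgamma
    log_q = (math.lgamma(n + 1) + math.lgamma(n - a - p + 1)
             - math.lgamma(n - a + 1) - math.lgamma(n - p + 1))
    # c0 = (1/p) * (1 - (e^{-pk} Q)^{1/(b-a+1)})
    return (1.0 - math.exp((-p * k + log_q) / (b - a + 1))) / p
```

As a consistency check, the linex constant approaches the quadratic one as p → 0, since the linex loss behaves like p²t²/2 near zero.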
where F_U(·) denotes the distribution function of U.

We establish our results for k satisfying certain conditions. We first discuss the case when k is large.

2.1. Dominance results for large k

In this section we establish the inadmissibility of the estimator δ0 under the loss function L(t) when k is large. Notice that this estimation problem remains invariant under the scale group of transformations. In order to improve δ0, we consider estimators of the form

δψ = Vψ(W),  (10)

for a measurable function ψ(·) and W = Xa/V. Next we discuss a lemma which will be useful in proving the main result presented in Theorem 2.1.

Lemma 2.2: Suppose that Y is a random variable with density f(y) supported on [0, ∞). Let ρ(y) be a function that changes sign once from negative to positive on the real line, so that ρ(y) < 0 for y < y0 and ρ(y) > 0 for y > y0. If h(y) is an increasing function with nonnegative values such that

∫_0^∞ ρ(y) f(y) h(y) dy = 0,  (11)

then

∫_0^∞ ρ(y) f(y) dy ≤ 0.  (12)

Proof: We have

0 = ∫_0^∞ ρ(y) f(y) h(y) dy = ∫_0^{y0} ρ(y) f(y) h(y) dy + ∫_{y0}^∞ ρ(y) f(y) h(y) dy
  ≥ h(y0) ∫_0^{y0} ρ(y) f(y) dy + h(y0) ∫_{y0}^∞ ρ(y) f(y) dy
  = h(y0) ∫_0^∞ ρ(y) f(y) dy.

Thus the proof is completed.
In the next theorem we establish the main dominance result for the estimator δ0 under the convex loss function L((d − μ − kσ)/σ). The improving estimator is of Stein type (see [18,23]).

Theorem 2.1: Consider estimation of μ + kσ under an arbitrary strictly convex loss function L(t) with properties L(t) ≥ 0 on (−∞, ∞), L(0) = 0, L′(t) < 0 on (−∞, 0) and L′(t) > 0 on (0, ∞). Let the random variable Y follow a gamma G(b − a + 1, σ) distribution and let b1 be the unique solution of the equation

E_{σ=1}[Y L′(b1 Y − k)] = 0.  (13)

Then the estimator δψ0 = Vψ0(W), W = Xa/V, dominates the BAE estimator δ0 provided

E_{0,1}[V L′(Xa + b1 V − k)] < 0,  (14)

and ψ0 is given by

ψ0(W) = W + b1(W(n − a + 1) + 1),  if b1 W(n − a + 1) < c0 − b1,
ψ0(W) = W + c0,                    if b1 W(n − a + 1) ≥ c0 − b1.

Proof: The risk function of the estimator δψ is written as

R(δψ, μ) = E_{μ,1}[E_{μ,1}{L(Vψ(W) − μ − k) | W = w}].  (15)

We now analyse the conditional risk function g(ψ, μ) = E_{μ,1}{L(Vψ(W) − μ − k) | W = w}. Since g(ψ, μ) is strictly convex in ψ, it is minimized at ψ = c(μ, w) such that

E_{μ,1}{L′(Vc(μ, w) − μ − k) V | W = w} = 0.  (16)

In the process we show that

c(μ, w) ≤ c(0, w) for μ > 0.  (17)
Observe that for μ > 0 we have vc(μ, w) − μ − k ≤ 0; otherwise monotonicity of L′(·) would contradict (16). Now since 0 < μ < vw, it is seen that (17) will hold true if

E_{0,1}{L′(Vc(μ, w) − wV − k) V | W = w} ≤ 0.  (18)

Using the argument

0 = E_{μ,1}{L′(Vc(μ, w) − μ − k) V | W = w} ≥ E_{μ,1}{L′(Vc(μ, w) − Vw − k) V | W = w},

we conclude that (18) indeed holds true, and so inequality (17) is established. In consequence we observe that

∫_0^∞ L′(c(0, w)v − k) v^{b−a+1} e^{−v(w(n−a+1)+1)} [1 − e^{−wv}]^{a−1} dv = 0.

Subsequently, substituting y = v(w(n − a + 1) + 1) and simplifying, we have

∫_0^∞ L′(c(0, w) y/(w(n − a + 1) + 1) − k) y^{b−a+1} e^{−y} [1 − e^{−wy/(w(n−a+1)+1)}]^{a−1} dy = 0.  (19)

Now, using Lemma 2.2, the above expression yields

∫_0^∞ L′(c(0, w) y/(w(n − a + 1) + 1) − k) y^{b−a+1} e^{−y} dy ≤ 0.  (20)

From (13) and (20) we find that

c(0, w) ≤ b1 (w(n − a + 1) + 1).  (21)

Further, (5) and (14) ensure that

b1 < c0.  (22)

Now from (17), (21) and (22) we conclude that P_μ(ψ0 ≠ w + c0) > 0 for all μ > 0. Moreover, since L(·) is strictly convex in ψ, we have

E_{μ,1}{L(Vψ0(W) − μ − k) | W = w} < E_{μ,1}{L(V(w + c0) − μ − k) | W = w}.

In consequence, from (15) it is seen that R(δψ0, μ) < R(δ0, μ) for all μ > 0. This completes the proof of the theorem.
An observation is now presented in the remark given below.

Remark 2.4: The estimator δψ0 in Theorem 2.1 can be rewritten as follows (here Y = (n − a + 1)Xa + V, so that the first branch of ψ0 gives Vψ0(W) = Xa + b1 Y):

δ* = δ1 = Xa + b1 Y,  if 0 < W < (c0 b1^{−1} − 1)/(n − a + 1),
δ* = δ0 = Xa + c0 V,  elsewhere.

Considering the problem of testing H0: μ = 0 vs H1: μ ≠ 0, δψ0 is a 'testimator' choosing between δ1 and δ0, depending on whether or not the test for H0, with acceptance region 0 < Xa/V < (c0 b1^{−1} − 1)/(n − a + 1), accepts H0. We notice by (13) that b1 Y is the best scale equivariant estimator (b.e.e.) of κσ under the loss L((d − κσ)/σ) when μ = 0. So the gain in estimating θ = μ + κσ by δ* is achieved by better estimation of κσ (when μ = 0), because then the estimator is δ1 and not the b.e.e. Otherwise (μ ≠ 0), we estimate θ = μ + κσ by δ0.

It is of interest to revisit the findings of Elfessi [13], who considered estimation of μ + kσ under the loss function L(t) = t². Recall that in this case c0 = (k − β)/(b − a + 1). Furthermore, Equation (13) gives b1 = k/(b − a + 2) and (14) implies that k > β(b − a + 2). Now we have the following result, which is a consequence of the previous theorem.

Corollary 2.1: The BAE estimator δ0(Xa, V) of μ + kσ is inadmissible with respect to the scale invariant squared error loss function when k > β(b − a + 2). The improving estimator is given by δψ* where

ψ*(W) = W + (k/(b − a + 2))(W(n − a + 1) + 1),  if kW(n − a + 1) < (k − β(b − a + 2))/(b − a + 1),
ψ*(W) = W + (k − β)/(b − a + 1),                 if kW(n − a + 1) ≥ (k − β(b − a + 2))/(b − a + 1).
Lemma 2.3: For the linex loss L(t) = e^{pt} − pt − 1, p ≠ 0, Equation (13) yields

b1 = (1/p)(1 − e^{−pk/(b−a+2)}).  (24)

Lemma 2.4: For the linex loss, the condition (14) on k is obtained as

k > (b − a + 2) h(p),  where h(p) = −(1/p) ln[(n − a)! Γ(n − p + 1) / (n! Γ(n − a − p + 1))],  p < n − a + 1.  (27)
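As a concrete illustration of Corollary 2.1, the following Python sketch (function names and argument values are our own, not the authors') computes the Stein-type improved estimate of μ + kσ under squared error loss; since the first branch applies exactly when it is the smaller of the two, the piecewise rule reduces to a pointwise minimum:

```python
def improved_estimate(xa, v, n, a, b, k):
    """Stein-type estimate of mu + k*sigma from Corollary 2.1.

    Meaningful when k > beta*(b - a + 2); otherwise the minimum always
    selects the BAE branch and the BAE estimate is returned unchanged."""
    beta = sum(1.0 / (n - i + 1) for i in range(1, a + 1))
    c0 = (k - beta) / (b - a + 1)          # BAE constant, quadratic loss
    w = xa / v
    psi_bae = w + c0
    psi_stein = w + (k / (b - a + 2)) * (w * (n - a + 1) + 1)
    # psi* equals the smaller branch: the Stein branch is active iff it is < psi_bae
    return v * min(psi_bae, psi_stein)
```

For n = 20, a = 2, b = 18, k = 1.9 the Stein branch is active only for very small W = Xa/V, which is where the improvement over the BAE estimator occurs.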
Proof: In this case Equation (14) implies that E_{0,1}[V(e^{p(Xa + b1 V − k)} − 1)] < 0 (for p < 0 the inequality is reversed, and the same conclusion follows). That is,

e^{−pk} E_{0,1}(e^{pXa}) E_{0,1}(V e^{p b1 V}) − E(V) < 0,

or

e^{−pk} [n! Γ(n − a − p + 1) / ((n − a)! Γ(n − p + 1))] (b − a)/(1 − p b1)^{b−a+1} − (b − a) < 0.

Now, utilizing (24) in the above expression, Equation (27) is obtained after some simplification.

Remark 2.6: Let the loss function be the linex loss L(t) = e^{pt} − pt − 1, p ≠ 0. Then c0 is given by (8), b1 is given by (24) and the condition on k is as obtained in (27). We now analyse h(p) with respect to p, p < n − a + 1. Recall that

h(p) = −(1/p) ln[(n − a)! Γ(n − p + 1) / (n! Γ(n − a − p + 1))]
     = −(1/p)[ln((n − a)!/n!) + Σ_{ν=n−a+1}^{n} ln(ν − p)].  (28)

Differentiating this with respect to p, we have

h′(p) = (1/p²)[ln((n − a)!/n!) + Σ_{ν=n−a+1}^{n} ln(ν − p)] + (1/p) Σ_{ν=n−a+1}^{n} 1/(ν − p) ≥ 0,  for p < n − a + 1.

Thus h(p) is increasing in p. Now from (28) we have

lim_{p→−∞} h(p) = 0,  lim_{p→n−a+1} h(p) = ∞,  lim_{p→0} h(p) = Σ_{ν=n−a+1}^{n} 1/ν.

Finally, from (27) it is observed that for any given k > 0, the estimator δψ0 dominates the BAE estimator δ0 with c0 given in (8) under the linex loss function provided p < p0, where p0 is defined by k = (b − a + 2)h(p0).

Next we pursue the following two lemmas related to the absolute value loss L(t) = |t|.

Lemma 2.5: In the case L(t) = |t|, Equation (13) yields

b1 = k / Med(Y*),  (29)

where Y* has a gamma G(b − a + 2, 1) distribution.
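The constant b1 of Lemma 2.5 can be evaluated numerically. The following Python sketch (our own, with hypothetical values) computes Med(Y*) by bisection on the Erlang form of the gamma CDF, which is available in closed form because the shape b − a + 2 is an integer, and checks it against the bounds of Choi [24] quoted in Remark 2.7:

```python
import math

def erlang_cdf(x, shape):
    """CDF of a gamma G(shape, 1) with integer shape (Erlang):
    P(Y <= x) = 1 - exp(-x) * sum_{j < shape} x^j / j!"""
    term, s = 1.0, 1.0
    for j in range(1, shape):
        term *= x / j
        s += term
    return 1.0 - math.exp(-x) * s

def gamma_median(shape, tol=1e-10):
    """Median of G(shape, 1) by bisection on the Erlang CDF."""
    lo, hi = 0.0, shape + 40.0 * math.sqrt(shape)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if erlang_cdf(mid, shape) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# b1 of Lemma 2.5 for n = 20, a = 2, b = 18 (so b - a + 2 = 18) and k = 1
med = gamma_median(18)
b1 = 1.0 / med
```

The computed median lies strictly between b − a + 1 + 2/3 and b − a + 1 + ln 2, in agreement with Choi's inequalities.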
Proof: Here L′(t) = −1 for t < 0 and L′(t) = 1 for t > 0. Accordingly, E_{σ=1}[Y L′(b1 Y − k)] = 0 implies that

−(1/Γ(b − a + 1)) ∫_0^{k/b1} y · y^{b−a} e^{−y} dy + (1/Γ(b − a + 1)) ∫_{k/b1}^∞ y · y^{b−a} e^{−y} dy = 0.

Equivalently, the above expression can be rewritten as

−(1/Γ(b − a + 2)) ∫_0^{k/b1} y^{b−a+1} e^{−y} dy + (1/Γ(b − a + 2)) ∫_{k/b1}^∞ y^{b−a+1} e^{−y} dy = 0.

It is seen that (29) easily follows from the above expression.

Lemma 2.6: In the case L(t) = |t|, the condition on k, using Equation (14), is obtained as

E_{0,1}[V{1 − 2F_U(k − b1 V)}] < 0,  (30)

where F_U(·) is the distribution function of U.

Proof: For the loss L(t) = |t|, inequality (14) reads

−∫_0^∞ ∫_0^{k−b1 v} v f_{U,V}(u, v) du dv + ∫_0^∞ ∫_{k−b1 v}^∞ v f_{U,V}(u, v) du dv < 0,  (31)

which is re-expressed as (30).

Remark 2.7: Consider estimating μ + kσ under the loss function L(t) = |t|, with c0 given by (9) and b1 given by (29). Observe that a condition on k is derived from the fact that c0 − b1 > 0, where b1 = k/Med(Y*). Equivalently, c0 > k/Med(Y*) implies that

k < c0 Med(Y*).  (32)

Now, for Y* following a G(b − a + 2, 1) distribution, a result of Choi [24] gives

b − a + 1 + 2/3 < Med(Y*) ≤ min{b − a + 1 + ln 2, b − a + 1 + 2/3 + (2(b − a + 1) + 2)^{−1}}.

Thus (32) applies if

k < c0 (b − a + 1 + 2/3),  1 ≤ a ≤ b ≤ n.  (33)

Now we have the following result.

Corollary 2.2: The estimator δψ0 dominates the BAE estimator δ0 for estimating μ + kσ under the absolute value loss function provided k satisfies (33), where c0 = c0(k) is given by (9).

We next discuss the case when k is small and obtain estimators improving upon δ0.
2.2. Dominance results for small k

In this section we obtain dominance results for small k. Elfessi [13] obtained improved estimators of μ + kσ under the squared error loss for small values of k, namely 0 ≤ k ≤ β/(b − a + 2). In what follows, we establish some new improved estimators of μ + kσ for small k under the loss function L(·). For the quadratic loss function we prove the following result.

Theorem 2.2: The BAE estimator δ01(Xa, V) of μ + kσ is inadmissible under the quadratic loss L(t) = t² if k < (b − a + 2)/(n − 2a + 2), n + 2 > 2a. The improving estimator is given by δψ** where

ψ**(w) = k(1 + w(n − 2(a − 1)))/(b − a + 2),  if 0 < w < (k − β(b − a + 2))/((b − a + 1)(k(n − 2(a − 1)) − (b − a + 2))),
ψ**(w) = w + (k − β)/(b − a + 1),             elsewhere.

Proof: As in the proof of Theorem 2.1, the minimizing value c(μ, w) of the conditional risk satisfies (16) for μ > 0, which can be written as

E_{h1*}{L′(Vc(μ, w) − μ − k)} = 0,  (34)

where the expectation is taken with respect to the density function h1*(v) defined as

h1*(v) ∝ v^{b−a+1} [1 − e^{−(wv−μ)}]^{a−1} e^{−v(w(n−a+1)+1)},  v > μ/w.  (35)

Now observe that for the density h2*(v) given by

h2*(v) ∝ v^{b−a+1} e^{wv(a−1)} e^{−v(w(n−a+1)+1)},  v > 0,  (36)

the ratio h1*(v)/h2*(v) is decreasing in v. Then, using a result of Lehmann and Romano [25], Equation (34) yields

0 ≤ E_{h1*}{L′(Vc(μ, w) − k)} < E_{h2*}{L′(Vc(μ, w) − k)},  (37)

where the last expectation is taken with respect to the density function h2*(v). Thus for L(t) = t² we have E_{h2*}{Vc(μ, w) − k} > 0, which yields

c(μ, w) > k(1 + w(n − 2(a − 1)))/(b − a + 2).

Now define ψ**(w) = max{k(1 + w(n − 2(a − 1)))/(b − a + 2), w + c0} and observe that c(μ, w) > ψ**(w) ≥ w + c0. Further, the inequality between ψ** and w + c0 is strict on a set of positive measure. Thus by strict convexity we have

E_{μ,1}{L(δψ** − μ − k) | W = w} < E_{μ,1}{L(δ01 − μ − k) | W = w},  (38)

on the set 0 < w < (k − β(b − a + 2))/((b − a + 1)(k(n − 2(a − 1)) − (b − a + 2))). This completes the proof of the theorem.
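A minimal Python sketch of the small-k estimator of Theorem 2.2 (our own names and example values, not from the paper); since the alternative branch applies exactly when it exceeds the BAE branch, the piecewise rule reduces to a pointwise maximum:

```python
def improved_estimate_small_k(xa, v, n, a, b, k):
    """Stein-type estimate of mu + k*sigma from Theorem 2.2 (small k),
    intended for k < (b - a + 2)/(n - 2a + 2) with n + 2 > 2a."""
    beta = sum(1.0 / (n - i + 1) for i in range(1, a + 1))
    c0 = (k - beta) / (b - a + 1)
    w = xa / v
    psi_bae = w + c0
    psi_alt = k * (1 + w * (n - 2 * (a - 1))) / (b - a + 2)
    # for small k the improvement takes the LARGER branch
    return v * max(psi_bae, psi_alt)
```

In contrast with Corollary 2.1, where the improvement truncates the BAE estimate from above, here it truncates from below, which is why the branch selection is a maximum rather than a minimum.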
Next we establish a dominance result for the linex loss L(t) = e^{pt} − pt − 1 and obtain an estimator improving over the BAE estimator δ02. First we treat the case where the loss parameter p is negative and prove the following result.

Theorem 2.3: The BAE estimator δ02(Xa, V) of μ + kσ is inadmissible under the linex loss L(t) = e^{pt} − pt − 1 for negative p provided

0 ≤ k ≤ min{(b − a + 2)/(n − 2a + 2), −(1/p) ln[(n − a)! Γ(n − p + 1)/(n! Γ(n − a − p + 1))]},  n + 2 > 2a,

and is improved by the estimator δψ̂ where

ψ̂(w) = k(1 + w(n − 2(a − 1)))/(b − a + 2),  if 0 < w ≤ ((b − a + 2)/(p{k(n − 2(a − 1)) − (b − a + 2)}))[1 − (e^{−pk} n! Γ(n − a − p + 1)/((n − a)! Γ(n − p + 1)))^{1/(b−a+1)}],
ψ̂(w) = w + c0,  otherwise,

with c0 given by (8).

Proof: For p < 0 we have L′(t) = p(e^{pt} − 1) ≤ p²t, so (37) again yields E_{h2*}{Vc(μ, w) − k} > 0 and hence c(μ, w) > k(1 + w(n − 2(a − 1)))/(b − a + 2). Then define ψ̂(w) = max{k(1 + w(n − 2(a − 1)))/(b − a + 2), w + c0} with c0 given by (8). The rest of the proof is the same as that of Theorem 2.2. The range of w, and the condition on k, are obtained as follows:

k(1 + w(n − 2(a − 1)))/(b − a + 2) > w + c0

implies that

k + w{k(n − 2(a − 1)) − (b − a + 2)} > ((b − a + 2)/p)[1 − (e^{−pk} n! Γ(n − a − p + 1)/((n − a)! Γ(n − p + 1)))^{1/(b−a+1)}].

Since k > 0, the above inequality holds true if

w{k(n − 2(a − 1)) − (b − a + 2)} ≥ ((b − a + 2)/p)[1 − (e^{−pk} n! Γ(n − a − p + 1)/((n − a)! Γ(n − p + 1)))^{1/(b−a+1)}].

Thus it is seen that

w ≤ ((b − a + 2)/(p{k(n − 2(a − 1)) − (b − a + 2)}))[1 − (e^{−pk} n! Γ(n − a − p + 1)/((n − a)! Γ(n − p + 1)))^{1/(b−a+1)}],

provided k is as mentioned in the statement.
The result for positive p is derived next.

Theorem 2.4: The BAE estimator δ02(Xa, V) of μ + kσ is inadmissible under the linex loss L(t) = e^{pt} − pt − 1 for positive p provided

0 ≤ k ≤ min{(b − a + 2)/(n − 2a + 2), −(1/p) ln[(n − a)! Γ(n − p + 1)/(n! Γ(n − a − p + 1))]},  n + 2 > 2a,

and is improved by the estimator δψ̂* where

ψ̂*(w) = k(1 + w(n − 2(a − 1)))/(b − a + 2),  if w > ((b − a + 2)/(p(k(n − 2(a − 1)) − (b − a + 2))))[1 − (e^{−pk} n! Γ(n − a − p + 1)/((n − a)! Γ(n − p + 1)))^{1/(b−a+1)}],
ψ̂*(w) = w + (k − β)/(b − a + 1),  otherwise.

Proof: In this case we will show that

c(μ, w) < k(1 + w(n − 2(a − 1)))/(b − a + 2).

In order that this holds, from (37) we must have

∫_0^∞ L′(vk(1 + w(n − 2(a − 1)))/(b − a + 2) − k) v^{b−a+1} e^{wv(a−1)} e^{−v(w(n−a+1)+1)} dv ≥ 0.  (39)

Now for p ≥ 0, L′(t) = p(e^{pt} − 1) ≥ p²t, so the last inequality holds if

∫_0^∞ (vk(1 + w(n − 2(a − 1)))/(b − a + 2) − k) v^{b−a+1} e^{wv(a−1)} e^{−v(w(n−a+1)+1)} dv ≥ 0.  (40)

Now observe that

∫_0^∞ (vk(1 + w(n − 2(a − 1)))/(b − a + 2) − k) v^{b−a+1} e^{wv(a−1)} e^{−v(w(n−a+1)+1)} dv
≥ ∫_{μ/w}^∞ (vk(1 + w(n − 2(a − 1)))/(b − a + 2) − k) v^{b−a+1} [1 − e^{−(wv−μ)}]^{a−1} e^{−v(w(n−a+1)+1)} dv = 0.

Thus it is established that (40) holds true. The rest of the proof is the same as that of the preceding theorem.

For illustrative purposes, we have numerically computed the risk functions of the estimators δ01, δψ*, δψ**, δ02^(−0.1), δψ̂^(−0.1), δ02^(0.1) and δψ̂*^(0.1) using Monte Carlo simulations. The superscripts on the last four estimators represent the value of the parameter p in the linex loss. In Table 1 the risk values are presented for an arbitrarily chosen sample size n = 20, where a and b are taken as 2 and 18 respectively. It has been observed that the risk values of the different estimators depend upon μ and σ only through μ/σ, so they are computed for different configurations of μ. The respective values of k are chosen so that the conditions of the corresponding results hold true. From the tabulated risks we observe that the respective minimum risk equivariant estimators are uniformly improved by the proposed Stein-type estimators. As expected, the performance of the Stein-type estimators is quite good in the neighbourhood of μ = 0.
Table 1. Risk values of different estimators of μ + kσ for n = 20 (a = 2, b = 18).

  μ     δ01       δψ*       δ01       δψ**      δ02^(−0.1)  δψ̂^(−0.1)  δ02^(0.1)  δψ̂*^(0.1)
        (k=1.9)   (k=1.9)   (k=0.1)   (k=0.1)   (k=0.01)    (k=0.01)    (k=0.01)   (k=0.01)
 0.0    0.19939   0.18535   0.00504   0.00363   0.00584     0.00417     0.00557    0.00383
 0.1    0.19525   0.18758   0.00520   0.00512   0.00581     0.00572     0.00547    0.00530
 0.2    0.19717   0.19597   0.00543   0.00533   0.00580     0.00571     0.00559    0.00548
 0.3    0.19243   0.19094   0.00539   0.00505   0.00589     0.00582     0.00566    0.00555
 0.4    0.20083   0.19381   0.00557   0.00549   0.00591     0.00586     0.00568    0.00557
 0.5    0.19889   0.19430   0.00546   0.00536   0.00578     0.00571     0.00588    0.00567
 0.6    0.20227   0.19507   0.00537   0.00532   0.00575     0.00569     0.00588    0.00577
 0.7    0.19394   0.19236   0.00521   0.00501   0.00591     0.00582     0.00592    0.00571
 0.8    0.19765   0.19340   0.00528   0.00517   0.00584     0.00576     0.00599    0.00593
 0.9    0.19412   0.19341   0.00519   0.00511   0.00573     0.00562     0.00605    0.00604
 1.0    0.19533   0.19510   0.00502   0.00502   0.00569     0.00565     0.00553    0.00553
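A Monte Carlo scheme of the kind used for Table 1 can be sketched as follows (a simplified reconstruction with our own names, seed and replication count, not the authors' code); it estimates the risks of the BAE estimator δ01 and the Stein-type estimator of Corollary 2.1 with σ fixed at 1:

```python
import random

def estimate_risks(n=20, a=2, b=18, k=1.9, mu=0.0, reps=20000, seed=7):
    """Monte Carlo risks of the BAE estimator d01 and the Stein-type
    estimator of Corollary 2.1 under scale invariant squared error loss."""
    rng = random.Random(seed)
    beta = sum(1.0 / (n - i + 1) for i in range(1, a + 1))
    c0 = (k - beta) / (b - a + 1)
    b1 = k / (b - a + 2)
    sum_bae = sum_stein = 0.0
    for _ in range(reps):
        x = sorted(mu + rng.expovariate(1.0) for _ in range(n))
        xa = x[a - 1]
        v = -(n - a) * xa + sum(x[a:b - 1]) + (n - b + 1) * x[b - 1]
        w = xa / v
        d_bae = xa + c0 * v
        d_stein = v * min(w + c0, w + b1 * (w * (n - a + 1) + 1))
        sum_bae += (d_bae - mu - k) ** 2
        sum_stein += (d_stein - mu - k) ** 2
    return sum_bae / reps, sum_stein / reps

risk_bae, risk_stein = estimate_risks()
```

With these (hypothetical) settings the estimated BAE risk lands near the first column of Table 1; the Stein-type modification differs from the BAE estimator only on the event of very small W, so over a finite number of replications its estimated risk is close to, and not larger than, the BAE risk up to simulation noise.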
3. A class of improving estimators

In this section we derive improving estimators of μ + kσ under the loss function L(t) = t² using the IERD approach of Kubokawa [19] (see also [26]). Consider estimators of the form δφ = Xa + Vφ(W) for an absolutely continuous function φ(·), where W = Xa/V. We prove the following result.

Theorem 3.1: Assume that the following conditions hold for the function φ(y):

(i) φ(y) is nondecreasing;
(ii) lim_{y→∞} φ(y) = c0;
(iii) φ(y) ≥ φ0(y), where

φ0(y) = [∫_0^∞ ∫_0^{vy} (k − u)[1 − e^{−u}]^{a−1} e^{−u(n−a+1)} e^{−v} v^{b−a} du dv] / [∫_0^∞ ∫_0^{vy} [1 − e^{−u}]^{a−1} e^{−u(n−a+1)} e^{−v} v^{b−a+1} du dv].

Then the estimator δφ improves upon δ0 for estimating μ + kσ under the scale invariant squared error loss function.

Proof: Without loss of generality, we take σ = 1. The risk difference of the estimators δφ and δ0 is

RD = 2 ∫_0^∞ ∫_{μ/v}^∞ {∫_w^∞ L′(wv + vφ(y) − μ − k) v φ′(y) dy} f(w, v) dw dv
   = 2 ∫_0^∞ φ′(y) {∫_0^∞ ∫_{μ/v}^y L′(wv + vφ(y) − μ − k) v f(w, v) dw dv} dy.

Using the transformation U = WV − μ, V = V, we get

RD = 2 ∫_0^∞ φ′(y) {∫_0^∞ ∫_0^{vy−μ} L′(u + vφ(y) − k) v f(u, v) du dv} dy.

This risk difference will be nonnegative provided

∫_0^∞ ∫_0^{vy−μ} L′(u + vφ(y) − k) v f(u, v) du dv ≥ 0.

This is exactly the same as

∫_0^∞ ∫_0^{vy−μ} L′(u + vφ(y) − k) v [1 − e^{−u}]^{a−1} e^{−u(n−a+1)} e^{−v} v^{b−a−1} du dv ≥ 0.  (41)
Now for L(t) = t² we want, for all y > 0, μ > 0, that

∫_0^∞ ∫_0^{vy−μ} (u + vφ(y) − k)[1 − e^{−u}]^{a−1} e^{−u(n−a+1)} e^{−v} v^{b−a} du dv ≥ 0.  (42)

For Equation (42) to hold true, it suffices to show that for every v > 0,

∫_0^{vy−μ} (u + vφ(y) − k)[1 − e^{−u}]^{a−1} e^{−u(n−a+1)} du ≥ 0.  (43)

Equivalently,

vφ(y) ≥ φ(y; μ) = [∫_0^{vy−μ} (k − u)[1 − e^{−u}]^{a−1} e^{−u(n−a+1)} du] / [∫_0^{vy−μ} [1 − e^{−u}]^{a−1} e^{−u(n−a+1)} du] = E_μ(k − U).

Using Lemma 3.4.2 in [25], it is verified that E_μ(k − U) ≤ E_0(k − U), that is, φ(y; μ) ≤ φ(y; 0) in the above equation, so (43) holds true if

∫_0^{vy} (u + vφ(y) − k)[1 − e^{−u}]^{a−1} e^{−u(n−a+1)} du ≥ 0.

In that case, Equation (42) holds true if

∫_0^∞ ∫_0^{vy} (u + vφ(y) − k)[1 − e^{−u}]^{a−1} e^{−u(n−a+1)} e^{−v} v^{b−a} du dv ≥ 0.

That means

φ(y) ≥ φ0(y),  for all y > 0.

This completes the proof of the theorem.
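For concreteness, φ0 can be evaluated numerically. The following Python sketch (grid sizes and cutoffs are our own choices, not from the paper) approximates the double integrals by nested quadrature; it can be checked against the limit φ0(y) → c0 = (k − β)/(b − a + 1) as y → ∞, which follows from conditions (ii) and (iii):

```python
import math

def phi0(y, n, a, b, k, nv=400, nu=400):
    """Numerically evaluate phi_0(y) of Theorem 3.1 (rough sketch)."""
    ucap = 40.0 / (n - a + 1)          # e^{-u(n-a+1)} negligible beyond this
    vmax = (b - a) + 40.0              # v^{b-a} e^{-v} negligible beyond this

    def inner(v):
        # trapezoidal rule on u in (0, min(v*y, ucap)); returns the plain
        # and the (k - u)-weighted integrals of the u-kernel
        hi = min(v * y, ucap)
        h = hi / nu
        s0 = s1 = 0.0
        for i in range(nu + 1):
            u = i * h
            g = (1.0 - math.exp(-u)) ** (a - 1) * math.exp(-u * (n - a + 1))
            wgt = 0.5 if i in (0, nu) else 1.0
            s0 += wgt * g
            s1 += wgt * (k - u) * g
        return s0 * h, s1 * h

    hv = vmax / nv
    num = den = 0.0
    for j in range(1, nv + 1):
        v = (j - 0.5) * hv             # midpoint rule in v
        s0, s1 = inner(v)
        num += s1 * math.exp(-v) * v ** (b - a)
        den += s0 * math.exp(-v) * v ** (b - a + 1)
    return num / den
```

For large y the u-integrals no longer depend on y, and the ratio reduces to (k − E0(U))/(b − a + 1) = c0, which the quadrature reproduces to within its discretization error.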
Lemma 3.1: The estimator δφ0 = Xa + Vφ0 (W) dominates the BAE estimator of μ + kσ under the loss L(t) = t 2 . Proof: Obviously (ii) and (iii) of Theorem 3.1 hold true, it suffices to show that φ0 (y) is nondecreasing in y. Let y1 < y2 , we have to show that φ0 (y1 ) < φ0 (y2 ). ∞ vy But, φ0 (y) = Ey ((k − U)/V), where f (u, v; y) = [1 − e−u ]a−1 e−u(n−a+1) e−v v b−a+1 / 0 0 [1 − e−u ]a−1 e−u(n−a+1) e−v v b−a+1 du dv, 0 < u < vy, v > 0. Also, φ0 (y) = Ey ((k − U)/V) = EEy ((k − U)/v | V = v). It is easily verified that f (u | v; y2 )/f (u | v; y1 ) is nondecreasing in u, so using Lemma 3.4.2 in [25] it is proved that Ey1 ((k − U)/v | V = v) ≤ Ey2 ((k − U)/v | V = v), the latter means that φ0 (y1 ) < φ0 (y2 ). Remark 3.1: δφ0 is the generalized Bayes estimator of μ + kσ with respect to the prior distribution π(μ, σ ) =
1 , σ
0 < μ < x, 0 < σ < ∞.
Remark 3.2: The estimator δ_{ψ*} (see Corollary 2.1) is a member of the class of estimators δ_φ. Observe that conditions (i) and (ii) hold for δ_{ψ*}. It is further observed that for k > β(b − a + 2) we have φ(w) = (k/(b − a + 2))(w(n − a + 1) + 1) for 0 < w < (k − β(b − a + 2))/(k(b − a + 1)(n − a + 1)). Elfessi [13] proved that for estimating σ, the estimator d_S = Vψ_S(W) with
\[
\psi_S(W) = \min\left\{\frac{1}{b-a+1},\ \frac{W(n-a+1)+1}{b-a+2}\right\}
\]
improves upon the best equivariant estimator V/(b − a + 1). Madi [16] obtained further improvements via the Brewster–Zidek type estimator d_{BZ} = Vψ_{BZ} (see [17]), where
\[
\psi_{BZ} = \frac{\int_0^\infty \int_0^{vy} [1-e^{-u}]^{a-1}\, e^{-u(n-a+1)}\, e^{-v}\, v^{b-a}\, du\, dv}{\int_0^\infty \int_0^{vy} [1-e^{-u}]^{a-1}\, e^{-u(n-a+1)}\, e^{-v}\, v^{b-a+1}\, du\, dv}.
\]
Following the proof of Proposition 2.4 in [27], it is verified that ψ_S(W) ≥ ψ_{BZ}, so for the given range of w we have φ(w) ≥ kψ_{BZ}(w) > φ₀(w). Thus condition (iii) also holds for the estimator δ_{ψ*}.
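The inequality ψ_S(W) ≥ ψ_{BZ} invoked above can also be checked numerically. The following sketch (illustrative only; the parameter values n, a, b, the grid of y values, and the integration grid are arbitrary choices, not taken from the paper) approximates ψ_{BZ}(y) by a midpoint rule and compares it with ψ_S(y).

```python
import math

def psi_bz(y, a, b, n, nv=400, nu=400, vmax=40.0):
    """Midpoint-rule approximation of psi_BZ(y): the numerator kernel carries
    v^(b-a), the denominator kernel v^(b-a+1); u runs over (0, v*y), v over (0, inf)."""
    num = den = 0.0
    dv = vmax / nv
    for i in range(nv):
        v = (i + 0.5) * dv
        du = (v * y) / nu
        inner = 0.0  # integral over u of [1-e^{-u}]^{a-1} e^{-u(n-a+1)}
        for j in range(nu):
            u = (j + 0.5) * du
            inner += (1.0 - math.exp(-u)) ** (a - 1) * math.exp(-u * (n - a + 1)) * du
        num += inner * math.exp(-v) * v ** (b - a) * dv
        den += inner * math.exp(-v) * v ** (b - a + 1) * dv
    return num / den

def psi_s(w, a, b, n):
    """Elfessi's weight: min{1/(b-a+1), (w(n-a+1)+1)/(b-a+2)}."""
    return min(1.0 / (b - a + 1), (w * (n - a + 1) + 1.0) / (b - a + 2))

a, b, n = 3, 8, 10  # illustrative values
results = {}
for y in (0.02, 0.05, 0.1, 0.3):
    results[y] = (psi_s(y, a, b, n), psi_bz(y, a, b, n))
    print(y, results[y])
```

On each grid point the computed ψ_S dominates ψ_{BZ}, and ψ_{BZ} stays between its limiting values 1/(b + 1) (as y → 0) and 1/(b − a + 1) (as y → ∞), consistent with the discussion above.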
4. Concluding remarks

In this paper we studied the problem of estimating a linear parametric function of a two-parameter exponential E(μ, σ) distribution in the presence of double censoring. We showed that the BAE estimator δ₀ of μ + kσ is inadmissible under an arbitrary strictly convex loss function when k satisfies certain conditions. We then studied two applications of this result and derived estimators improving upon δ₀ with respect to the linex and absolute value loss functions. A numerical comparison of these estimators was carried out using Monte Carlo simulation. Further, using the Kubokawa method, we established a dominance result under the quadratic loss function and showed that the resulting class includes a generalized Bayes estimator and the Brewster–Zidek type estimator of μ + kσ. Note that for k = 0 the construction of Section 2.2 yields no improvement for estimating μ, since the part of the estimator other than δ₀ vanishes. In the near future we would like to study the problem of estimating the parameter μ under double censoring.
Disclosure statement

No potential conflict of interest was reported by the authors.
Acknowledgments

We are grateful to a referee for his encouragement and constructive suggestions that led to significant improvements in the presentation and contents of this paper. The authors also extend their sincere thanks to the Editor for his helpful comments.
Funding

The authors Yogesh Mani Tripathi and Farha Sultana gratefully acknowledge financial support for this research work under grant SR/S4/MS:785/12 from the SERB, Department of Science & Technology, India.
ORCID

C. Petropoulos: http://orcid.org/0000-0001-5185-7037
References

[1] Meeker WQ, Escobar LA. Statistical methods for reliability data. New York: Wiley; 1998.
[2] Lawless JF. Statistical models and methods for lifetime data. New York: Wiley; 1982.
[3] Epstein B, Sobel M. Life testing. J Am Stat Assoc. 1953;48:486–502.
[4] Epstein B, Sobel M. Some theorems relevant to life testing from an exponential distribution. Ann Math Stat. 1954;25:375–381.
[5] Epstein B. Simple estimators of the parameters of exponential distributions when samples are censored. Ann Inst Statist Math. 1956;8:15–26.
[6] Balakrishnan N, Basu AP. Exponential distribution: theory, methods and applications. Amsterdam: Gordon and Breach Publishers; 1995.
[7] Balakrishnan N, Cramer E. The art of progressive censoring. New York: Birkhauser; 2014.
[8] Khan HMR, Haq MS, Provost SB. Predictive inference for future responses given a doubly censored sample from a two parameter exponential distribution. J Statist Plan Infer. 2006;136:3156–3172.
[9] Sun J. The statistical analysis of interval-censored failure time data. New York: Springer; 2006.
[10] Sindhu TN, Feroze N, Aslam M. Analysis of doubly censored Burr type II distribution: a Bayesian look. Electr J Appl Statist Anal. 2015;8:154–169.
[11] Feroze N, Aslam M. Bayesian analysis of Burr type X distribution under complete and censored samples. Int J Pure Appl Sci Technol. 2012;11:16–28.
[12] Pak A, Parham GA, Saraj M. On estimation of Rayleigh scale parameter under doubly type II censoring from imprecise data. J Data Sci. 2013;11:305–322.
[13] Elfessi A. Estimation of a linear function of the parameters of an exponential distribution from doubly censored samples. Statist Prob Lett. 1997;42:251–259.
[14] Chakraborti S, Li J. Confidence interval estimation of a normal percentile. Am Stat. 2007;61:331–336.
[15] Keating JP, Mason RL, Balakrishnan N. Percentile estimators in location-scale parameter families under absolute loss. Metrika. 2010;72:351–367.
[16] Madi MT. On the invariant estimation of an exponential scale using doubly censored data. Stat Prob Lett. 2002;56:77–82.
[17] Brewster JF. Alternative estimators for the scale parameter of the exponential distribution with unknown location. Ann Stat. 1974;2:553–557.
[18] Petropoulos C, Kourouklis S. Estimation of an exponential quantile under a general loss and an alternative estimator under quadratic loss. Ann Inst Statist Math. 2001;53:746–759.
[19] Kubokawa T. A unified approach to improving equivariant estimators. Ann Stat. 1994;22:290–299.
[20] Marchand É, Strawderman WE. Improving on the minimum risk equivariant estimator of a location parameter which is constrained to an interval or a half interval. Ann Inst Stat Math. 2005;57:129–143.
[21] Marchand É, Strawderman WE. On improving on the minimum risk equivariant estimator of a scale parameter under a lower-bound constraint. J Stat Plan Infer. 2005;134:90–101.
[22] Brewster JF, Zidek JV. Improving on equivariant estimators. Ann Stat. 1974;2:21–38.
[23] Stein C. Inadmissibility of the usual estimator for the variance of a normal distribution with unknown mean. Ann Inst Stat Math. 1964;16:155–160.
[24] Choi KP. On the medians of gamma distributions and an equation of Ramanujan. Proc Am Math Soc. 1994;121:245–251.
[25] Lehmann EL, Romano JP. Testing statistical hypotheses. New York: Springer Science & Business Media; 2005.
[26] Kubokawa T. Shrinkage and modification techniques in estimation of variance and the related problems: a review. Comm Stat Theor Meth. 1999;28:613–650.
[27] Kubokawa T. A unified approach to improving equivariant estimators. Technical Report METR 91-1, Dept. of Mathematical Engineering, Univ. Tokyo; 1991.