Economic Quality Control, Vol. 18 (2003), No. 2, 195–207
© Heldermann Verlag, ISSN 0940-5151
Bayesian Predictions for Exponentially Distributed Failure Times With One Change-Point

Y. Abdel-Aty, J. Franz, M.A.W. Mahmoud
Abstract: Suppose that a number of items are put into operation and that their lifetimes are identically exponentially distributed up to a time a, at which the operating conditions change, resulting in a different failure intensity of the exponential distribution for those items that have survived the change-point a. Predictions are made for the lifetimes of the operating items, conditional on the observed failures of a part of the items before and after the change-point a. The predictions are based on a Bayesian approach, which is briefly introduced. Numerical examples are given to illustrate the results.
1 Introduction
The exponential distribution with a "jump" at the change-point a (a > 0) is denoted by Exp(α, β, a). The probability density function (pdf) and the cumulative distribution function (cdf) of Exp(α, β, a) are

\[
f(x) = \begin{cases} \alpha e^{-\alpha x}, & 0 \le x \le a \\ \beta e^{(\beta-\alpha)a-\beta x}, & x > a \end{cases}
\tag{1}
\]

\[
F(x) = \begin{cases} 1 - e^{-\alpha x}, & 0 \le x \le a \\ 1 - e^{(\beta-\alpha)a-\beta x}, & x > a \end{cases}
\tag{2}
\]

with α > 0, β > 0 and a > 0.
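As an illustration (not from the paper), the following Python sketch evaluates the pdf (1) and the cdf (2) and draws lifetimes from Exp(α, β, a) by inverting the cdf; all function names and the parameter values in the demo are chosen for illustration only.

```python
# Minimal sketch of the change-point exponential distribution Exp(alpha, beta, a)
# defined by (1) and (2); the two branches meet continuously at x = a.
import numpy as np

def pdf(x, alpha, beta, a):
    x = np.asarray(x, dtype=float)
    return np.where(x <= a,
                    alpha * np.exp(-alpha * x),
                    beta * np.exp((beta - alpha) * a - beta * x))

def cdf(x, alpha, beta, a):
    x = np.asarray(x, dtype=float)
    return np.where(x <= a,
                    1.0 - np.exp(-alpha * x),
                    1.0 - np.exp((beta - alpha) * a - beta * x))

def sample(size, alpha, beta, a, rng=None):
    """Inverse-cdf sampling: u <= F(a) falls before the change-point, u > F(a) after it."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=size)
    f_a = 1.0 - np.exp(-alpha * a)
    before = -np.log(1.0 - u) / alpha                       # inverse of 1 - exp(-alpha*x)
    after = ((beta - alpha) * a - np.log(1.0 - u)) / beta   # inverse of the second branch
    return np.where(u <= f_a, before, after)

# continuity check at the change-point and a small simulated batch
print(cdf(0.999999, 0.5, 0.2, 1.0), cdf(1.000001, 0.5, 0.2, 1.0))
print(np.sort(sample(5, alpha=0.5, beta=0.2, a=1.0, rng=1)))
```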
The exponential distribution with one jump (change in failure intensity) has been studied by many researchers; see, for example, Matthews and Farewell (1982), Nguyen, Rogers and Walker (1984), Pham and Nguyen (1990). These papers deal with the detection of the change-point a; in contrast, in our investigation the jump point a is fixed and known. Suppose there are n items X = (X1, X2, ..., Xn) with identical lifetime distribution Exp(α, β, a) put into operation. Assume that after observation of the first r1 failure times x(1) ≤ x(2) ≤ ... ≤ x(r1) with x(r1) < a, and on the condition that there will be exactly k1 > r1 failures up to the change-point, a prediction is required for the failure times of some or all of the remaining k1 − r1 items. Next assume that r2 ordered failure times are observed with x(r2) ≥ a and a prediction of the lifetime is required for some or all of the remaining n − r2 items.
In Lawless (1971) methods are given for constructing a prediction for the failure time X(r) when the lifetime is Exp(α) and the first s (s < r) failure times of n items have been observed. In Lawless (1972) a related problem is discussed. A review of classical prediction methods may be found in Patel (1989), which covers the following lifetime distributions:

• discrete case: Poisson, Binomial and Negative Binomial
• continuous case: Normal, Lognormal, Exponential, Weibull, Gamma, Inverse Gaussian, Pareto, and IFR.

Patel also presented nonparametric predictions, among other results. In Nagaraja (1995), besides prediction intervals, the following point predictors are discussed:

• best linear unbiased predictor (BLUP), best linear invariant predictor (BLIP), maximum likelihood predictor (MLP), and Bayes predictors.

As Geisser (1993) has shown, the problem of prediction can be solved within the Bayesian framework. Several researchers have studied Bayesian prediction in the case of the exponential distribution, among them Dunsmore (1974), Geisser (1990), Lingappaiah (1978, 1979, 1986), Al-Hussaini and Jaheen (1999), Al-Hussaini (1999, 2001), and Upadhyay and Pandey (1989). The two books by Aitchison and Dunsmore (1975) and Geisser (1993) are concerned primarily with Bayes prediction.
2 Predictions For Failures Before the Change-Point
Let X(1) < X(2) < ... < X(n) denote the ordered lifetimes of the n items put into operation and consider the case that r1 items have failed up to x(r1), while the remaining n − r1 items are still operating. The density function of X(r1+s) conditional on x = (x(1), x(2), ..., x(r1)) is given by

\[
h_{X_{(r_1+s)}}(x \mid x_{(r_1)}, \alpha) = s \binom{n-r_1}{s}\,
\frac{[F(x) - F(x_{(r_1)})]^{s-1}\,[1 - F(x)]^{n-r_1-s}}{[1 - F(x_{(r_1)})]^{n-r_1}}\, f(x),
\qquad 1 \le s \le n - r_1.
\tag{3}
\]

Suppose the observed failure times are x(1) < x(2) < ... < x(r1) and we want to predict X(r1+s1) on the condition that there are exactly k1 failures before a and 1 ≤ s1 ≤ k1 − r1. For deriving a Bayes prediction, the likelihood function of α must be derived and a prior distribution of α must be specified. The likelihood function of α for given x = (x(1), x(2), ..., x(r1)) is as follows:
\[
L(\alpha; x) = \frac{k_1!}{(k_1-r_1)!}\,[1 - F(x_{(r_1)})]^{k_1-r_1} \prod_{i=1}^{r_1} f(x_{(i)})
= \frac{k_1!}{(k_1-r_1)!}\,\alpha^{r_1} e^{-\alpha u_1},
\tag{4}
\]

where u_1 = \sum_{i=1}^{r_1} x_{(i)} + (k_1 - r_1)\,x_{(r_1)}.
The prior distribution of α is selected to be a Gamma distribution with density function

\[
\pi_1(\alpha) = \frac{d^c\,\alpha^{c-1} e^{-\alpha d}}{\Gamma(c)}, \qquad c > 0,\ d > 0.
\tag{5}
\]

Then from (4) and (5) the posterior density of α is obtained:

\[
\pi_1^{*}(\alpha \mid x) = \frac{(u_1 + d)^{c+r_1}\,\alpha^{c+r_1-1} e^{-\alpha(d+u_1)}}{\Gamma(c + r_1)}.
\tag{6}
\]
Inserting the distribution and density function of Exp(α) into (3), the conditional density function of X(r1+s1) given x(1), ..., x(r1) is obtained. After elementary manipulation one arrives at the following expression:

\[
h_{X_{(r_1+s_1)}}(x \mid x, \alpha) = D_1\,\alpha \sum_{j=0}^{s_1-1} c_{1j}\, e^{-(x - x_{(r_1)})\,\alpha\, k_{1j}},
\tag{7}
\]

where

D_1 = s_1 \binom{k_1 - r_1}{s_1}, \qquad c_{1j} = \binom{s_1 - 1}{j}(-1)^j, \qquad k_{1j} = k_1 - r_1 - s_1 + j + 1.
2.1 Fixed Number of Failures Before the Change-Point
The Bayesian joint probability density of (X(r1+s1), α) is equal to the product of the posterior density (6) and the conditional density (7) of X(r1+s1). The Bayesian prediction density function of X(r1+s1) is obtained as the density of the corresponding marginal distribution of X(r1+s1) by integrating the joint density with respect to α, yielding

\[
p_1(x \mid x) = D_1 (u_1 + d)^{c+r_1} \sum_{j=0}^{s_1-1} c_{1j}\,(r_1 + c)\,[(x - x_{(r_1)})\,k_{1j} + (u_1 + d)]^{-(r_1+c+1)}.
\tag{8}
\]
The so-called predictive reliability function of X(r1+s1) is defined as follows:

\[
P(X_{(r_1+s_1)} \ge t \mid x) = 1 - \int_{x_{(r_1)}}^{t} p_1(x \mid x)\,dx
= D_1 (u_1 + d)^{c+r_1} \sum_{j=0}^{s_1-1} c_{1j}\,k_{1j}^{-1}\,[(t - x_{(r_1)})\,k_{1j} + (u_1 + d)]^{-(r_1+c)}.
\tag{9}
\]
Any prediction for X(r1+s1) with credibility level γ meets the following condition:

\[
P[L(x) < X_{(r_1+s_1)} < U(x)] \ge \gamma.
\tag{10}
\]

Assuming that symmetric error probabilities will not be far from the best solution, the lower and upper bounds L(x) and U(x) of the prediction may be determined by solving the following two equations:

\[
P[X_{(r_1+s_1)} \ge L(x)] = \frac{1+\gamma}{2}, \qquad
P[X_{(r_1+s_1)} \ge U(x)] = \frac{1-\gamma}{2}.
\tag{11}
\]

Using (9) for solving (11) numerically, we obtain L and U. (Numerical examples are given in Section 6.) Next the so-called noninformative prior is used for the parameter α. Note that this choice does not lead to a proper density function, because of the assumed unbounded parameter space. Thus we assume

\[
\pi_2(\alpha) \propto \frac{1}{\alpha}.
\tag{12}
\]

Then the posterior density of α is given by

\[
\pi_2^{*}(\alpha) = \frac{u_1^{r_1}\,\alpha^{r_1-1} e^{-\alpha u_1}}{\Gamma(r_1)},
\tag{13}
\]

and the bounds of the prediction are obtained from (9) and (11) with c = d = 0.
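To make the computation concrete, the following Python sketch (my own naming, not the authors') evaluates the predictive reliability (9) for fixed k1 and solves the two equations in (11) with a standard root finder; the search interval [x(r1), t_max] is an assumption and may need to be widened for other data.

```python
# Hedged sketch of (9) and (11): predictive reliability of X_(r1+s1) for fixed k1
# and the symmetric prediction bounds L, U.
from math import comb
from scipy.optimize import brentq

def reliability_before(t, x_obs, k1, s1, c=0.0, d=0.0):
    """P(X_(r1+s1) >= t | x) as in (9); x_obs are the first r1 ordered failure times (< a)."""
    r1 = len(x_obs)
    u1 = sum(x_obs) + (k1 - r1) * x_obs[-1]            # u1 as defined after (4)
    D1 = s1 * comb(k1 - r1, s1)
    total = 0.0
    for j in range(s1):
        c1j = comb(s1 - 1, j) * (-1) ** j
        k1j = k1 - r1 - s1 + j + 1
        total += c1j / k1j * ((t - x_obs[-1]) * k1j + u1 + d) ** (-(r1 + c))
    return D1 * (u1 + d) ** (c + r1) * total

def prediction_bounds(x_obs, k1, s1, gamma=0.95, c=0.0, d=0.0, t_max=1e6):
    """Solve (11): reliability equals (1+gamma)/2 at L and (1-gamma)/2 at U."""
    xr1 = x_obs[-1]
    L = brentq(lambda t: reliability_before(t, x_obs, k1, s1, c, d) - (1 + gamma) / 2, xr1, t_max)
    U = brentq(lambda t: reliability_before(t, x_obs, k1, s1, c, d) - (1 - gamma) / 2, xr1, t_max)
    return L, U
```

The reliability equals 1 at t = x(r1) and decreases towards 0, so each equation in (11) has exactly one root in the bracket.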
2.2 Random Number of Failures Before the Change-Point
In reality the number of failures before the change-point is, of course, a random variable denoted by K1 . Each of the items fails with identical probability before the change-point, thus it is reasonable to assume that K1 is binomially distributed:
\[
P(K_1 = k_1) = p^{*}(k_1) = \binom{n}{k_1} p^{k_1} q^{\,n-k_1}, \qquad k_1 = 0, 1, \ldots, n,
\tag{14}
\]

where p + q = 1. In Consul (1984) and Gupta and Gupta (1984) it is shown that the predictive density function of X(r1+s1) for random K1 is given by

\[
g_1(x \mid x) = \frac{1}{P(K_1 \ge s_1 + r_1)} \sum_{k_1 = s_1 + r_1}^{n} p^{*}(k_1)\, p_1(x \mid k_1),
\tag{15}
\]

where p_1(x | k_1) is the predictive density function of X(r1+s1) when k_1 is fixed. From (8), (14) and (15) we obtain

\[
g_1(x \mid x) = \frac{\displaystyle \sum_{k_1=s_1+r_1}^{n} \binom{n}{k_1} p^{k_1} q^{\,n-k_1}\, D_1 (u_1+d)^{c+r_1} \sum_{j=0}^{s_1-1} \frac{c_{1j}\,(r_1+c)}{[(x-x_{(r_1)})\,k_{1j} + (u_1+d)]^{\,r_1+c+1}}}
{1 - \displaystyle\sum_{i=0}^{s_1+r_1-1} \binom{n}{i} p^{i} q^{\,n-i}}.
\tag{16}
\]
The predictive reliability function of X(r1+s1) is immediately obtained from (16):

\[
g_1^{*}(t \mid x) = P(X_{(r_1+s_1)} \ge t \mid x)
= \frac{\displaystyle \sum_{k_1=s_1+r_1}^{n} \binom{n}{k_1} p^{k_1} q^{\,n-k_1}\, D_1 \sum_{j=0}^{s_1-1} \frac{c_{1j}}{k_{1j}\left[(t-x_{(r_1)})\frac{k_{1j}}{u_1+d} + 1\right]^{\,r_1+c}}}
{1 - \displaystyle\sum_{i=0}^{s_1+r_1-1} \binom{n}{i} p^{i} q^{\,n-i}}.
\tag{17}
\]
Thus, for a random number of failures before the change-point the prediction bounds L and U are obtained by solving (11) based on (17).
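A possible implementation of (17) simply averages the fixed-k1 reliability of (9) over the binomial weights (14) and renormalizes by P(K1 ≥ s1 + r1); the sketch below reuses the reliability_before function from the previous sketch and is illustrative only.

```python
# Hedged sketch of (17): reliability when K1 ~ Bi(n, p) is random.
from math import comb

def reliability_before_random_k1(t, x_obs, n, p, s1, c=0.0, d=0.0):
    r1 = len(x_obs)
    q = 1.0 - p
    numer = 0.0
    for k1 in range(s1 + r1, n + 1):
        weight = comb(n, k1) * p ** k1 * q ** (n - k1)            # binomial probability (14)
        numer += weight * reliability_before(t, x_obs, k1, s1, c, d)
    denom = 1.0 - sum(comb(n, i) * p ** i * q ** (n - i)          # P(K1 >= s1 + r1)
                      for i in range(s1 + r1))
    return numer / denom
```

The same root-finding step as before then yields the bounds from (11).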
3 Predictions For Failures After the Change-Point
Next it is assumed that the failures

\[
x = (x_{(1)}, \ldots, x_{(k_1)}, x_{(k_1+1)}, \ldots, x_{(r_2)})
\tag{18}
\]

are observed with x(k1) < a ≤ x(k1+1) ≤ ... ≤ x(r2). In this case a prediction for X(r2+s2) after the change-point a is of interest. For the likelihood function of β we obtain

\[
L(\beta; x) \propto [1 - F(x_{(r_2)})]^{\,n-r_2} \prod_{i=k_1+1}^{r_2} f(x_{(i)})
= \beta^{\,r_2-k_1} e^{-\beta u_2},
\tag{19}
\]

where u_2 = (n - r_2)\,x_{(r_2)} - (n - k_1)\,a + \sum_{i=k_1+1}^{r_2} x_{(i)}.
As prior distribution for the parameter β we again select a Gamma distribution with density

\[
\pi_1(\beta) = \frac{d^c\,\beta^{c-1} e^{-\beta d}}{\Gamma(c)}, \qquad c > 0,\ d > 0,
\tag{20}
\]

yielding the following posterior density of β:

\[
\pi_1^{*}(\beta) = \frac{(u_2 + d)^{c+r_2-k_1}\,\beta^{c+r_2-k_1-1} e^{-\beta(d+u_2)}}{\Gamma(c + r_2 - k_1)}.
\tag{21}
\]
The conditional density function of X(r2+s2) given X(r2) = x(r2) is

\[
h_{X_{(r_2+s_2)}}(x \mid \beta) = D_2\,
\frac{[F(x) - F(x_{(r_2)})]^{s_2-1}\,[1 - F(x)]^{n-r_2-s_2}}{[1 - F(x_{(r_2)})]^{n-r_2}}\, f(x).
\tag{22}
\]

Inserting into (22) the respective expressions of Exp(β), we obtain
\[
h_{X_{(r_2+s_2)}}(x \mid \beta) = D_2\,\beta \sum_{j=0}^{s_2-1} c_{2j}\, e^{-(x - x_{(r_2)})\,\beta\, n_j},
\tag{23}
\]

where

n_j = n - r_2 - s_2 + j + 1, \qquad c_{2j} = \binom{s_2 - 1}{j}(-1)^j, \qquad D_2 = s_2 \binom{n - r_2}{s_2}.

The Bayesian prediction density function of X(r2+s2) is obtained by integrating the product of the posterior density (21) and the conditional density (23) with respect to β:

\[
p_2(x \mid x) = D_2\,(r_2 + c - k_1)\,(u_2 + d)^{c+r_2-k_1} \sum_{j=0}^{s_2-1} \frac{c_{2j}}{[(x - x_{(r_2)})\,n_j + (u_2 + d)]^{\,r_2+c-k_1+1}}.
\tag{24}
\]
The predictive reliability function of X(r2+s2) is immediately obtained from (24):

\[
P(X_{(r_2+s_2)} \ge t \mid x) = D_2\,(u_2 + d)^{c+r_2-k_1} \sum_{j=0}^{s_2-1} \frac{c_{2j}}{n_j\,[(t - x_{(r_2)})\,n_j + (u_2 + d)]^{\,r_2+c-k_1}}.
\tag{25}
\]
The lower and upper bounds L and U of a prediction with credibility level γ for X(r2+s2) are obtained by solving (11) numerically using (25).
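For the post-change-point case the same bound-solving scheme applies, with (25) in place of (9); the sketch below (names are mine, not the paper's) computes u2 from (19) and evaluates the reliability (25).

```python
# Hedged sketch of (19) and (25): predictive reliability of X_(r2+s2) for fixed k1 and n.
from math import comb

def reliability_after(t, x_obs, n, k1, a, s2, c=0.0, d=0.0):
    """x_obs holds all r2 ordered failure times; the last r2 - k1 of them lie above a."""
    r2 = len(x_obs)
    xr2 = x_obs[-1]
    u2 = (n - r2) * xr2 - (n - k1) * a + sum(x_obs[k1:])    # u2 as defined after (19)
    D2 = s2 * comb(n - r2, s2)
    total = 0.0
    for j in range(s2):
        c2j = comb(s2 - 1, j) * (-1) ** j
        nj = n - r2 - s2 + j + 1
        total += c2j / nj * ((t - xr2) * nj + u2 + d) ** (-(r2 + c - k1))
    return D2 * (u2 + d) ** (c + r2 - k1) * total
```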
3.1 Random Number of Items Put on Operation
We now consider the case that the number of items put on operation is a random variable, denoted by N, distributed according to a truncated Poisson distribution with parameter δ:

\[
P^{*}(n) = \frac{e^{-\delta} \delta^{n}}{n!\,(1 - e^{-\delta})}, \qquad n = 1, 2, \ldots
\tag{26}
\]
The predictive density function of X(r2+s2) when N is random is given by

\[
g_3(x \mid x) = \frac{1}{P(N \ge s_2 + r_2)} \sum_{n=s_2+r_2}^{\infty} P^{*}(n)\, p_2(x \mid n; x).
\tag{27}
\]
From (24), (26) and (27) the predictive reliability function is obtained as

\[
P(X_{(r_2+s_2)} \ge t \mid x)
= \frac{\displaystyle \sum_{n=s_2+r_2}^{\infty} \frac{e^{-\delta}\delta^{n}}{n!\,(1-e^{-\delta})}\, D_2 \sum_{j=0}^{s_2-1} \frac{c_{2j}}{n_j\left[(t-x_{(r_2)})\frac{n_j}{u_2+d} + 1\right]^{\,r_2+c-k_1}}}
{1 - \displaystyle\sum_{i=1}^{s_2+r_2-1} \frac{e^{-\delta}\delta^{i}}{i!\,(1-e^{-\delta})}}.
\tag{28}
\]
Solving (11) numerically by using (28) yields the desired prediction bounds L and U .
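In practice the infinite series in (28) has to be cut off; the sketch below (illustrative, with an arbitrary cut-off n_max and recursively built Poisson weights to avoid huge factorials) mixes the fixed-n reliability of (25) over the zero-truncated Poisson weights (26).

```python
# Hedged sketch of (28): reliability when the batch size N is zero-truncated Poisson(delta).
import math

def reliability_after_random_n(t, x_obs, k1, a, s2, delta, c=0.0, d=0.0, n_max=200):
    r2 = len(x_obs)
    norm = 1.0 - math.exp(-delta)                  # truncation constant 1 - e^{-delta}
    # weights[n] = e^{-delta} * delta^n / (n! * (1 - e^{-delta})), built up step by step
    weights = {}
    w = math.exp(-delta) / norm
    for n in range(1, n_max + 1):
        w *= delta / n
        weights[n] = w
    numer = sum(weights[n] * reliability_after(t, x_obs, n, k1, a, s2, c, d)
                for n in range(s2 + r2, n_max + 1))
    denom = 1.0 - sum(weights[i] for i in range(1, s2 + r2))   # P(N >= s2 + r2)
    return numer / denom
```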
4 Prediction Before the Change-Point Concerning a Future Operation Run
In this section we discuss Bayesian prediction concerning a future batch of items Y(1) < Y(2) < ... < Y(m) put into operation, with the same distribution Exp(α, β, a) of the failure times as the first batch X(1) < X(2) < ... < X(n), assuming that the lifetimes within and between the batches are independent. The density function of Y(s) for 1 ≤ s ≤ m is

\[
h_{Y_{(s)}}^{*}(y \mid \alpha, \beta) = s \binom{m}{s}\,[1 - F(y)]^{m-s}\,[F(y)]^{s-1} f(y),
\qquad 1 \le s \le m.
\tag{29}
\]

The prediction depends on the amount and type of observations available from the first sample. Suppose the observed failure times from the first sample are x(1) < x(2) < ... < x(r1) and we want to predict Y(s1) with 1 ≤ s1 ≤ k2, where k2 is the number of failures in the future batch before the change-point a. For the remainder it is assumed that the size n of the first batch and the number k1 of failed items before the change-point are fixed and known.
4.1 Fixed Number of Failures Before the Change-Point in the Future Batch
The density function of Y(s1) conditional on x = (x(1), ..., x(r1)) is obtained by inserting the corresponding expressions of Exp(α) into (29):

\[
h_{Y_{(s_1)}}^{*}(y \mid \alpha) = D_3\,\big(e^{-\alpha y}\big)^{k_2-s_1}\,\big(1 - e^{-\alpha y}\big)^{s_1-1}\,\alpha e^{-\alpha y}
\tag{30}
\]

or

\[
h_{Y_{(s_1)}}^{*}(y \mid \alpha) = D_3\,\alpha \sum_{j=0}^{s_1-1} c_{1j}\, e^{-\alpha k_{2j} y},
\tag{31}
\]

where k_{2j} = k_2 - s_1 + j + 1 and D_3 = s_1 \binom{k_2}{s_1}.

The Bayesian prediction density function of Y(s1) is obtained as in the previous cases, yielding

\[
p_3(y \mid x) = D_3\,(u_1 + d)^{c+r_1} \sum_{j=0}^{s_1-1} \frac{c_{1j}\,(r_1 + c)}{[y\,k_{2j} + (u_1 + d)]^{\,r_1+c+1}}.
\tag{32}
\]

From the density function (32) the predictive reliability function of Y(s1) is obtained:

\[
P(Y_{(s_1)} \ge t \mid x) = \sum_{j=0}^{s_1-1} \frac{D_3\,c_{1j}}{k_{2j}\left[\frac{t\,k_{2j}}{u_1+d} + 1\right]^{\,r_1+c}}.
\tag{33}
\]
The lower and upper prediction bounds L and U with credibility level γ are obtained as numerical solution of (11) using the prediction reliability function (33).
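Since (33) has the same structure as (9), the bounds for Y(s1) follow from the same root-finding approach; the short sketch below (names are mine) evaluates (33), with u1 taken from the first sample as in (4).

```python
# Hedged sketch of (33): reliability of Y_(s1) in the future batch for fixed k2.
from math import comb

def reliability_future_before(t, u1, r1, k2, s1, c=0.0, d=0.0):
    D3 = s1 * comb(k2, s1)
    total = 0.0
    for j in range(s1):
        c1j = comb(s1 - 1, j) * (-1) ** j
        k2j = k2 - s1 + j + 1
        total += c1j / (k2j * (t * k2j / (u1 + d) + 1.0) ** (r1 + c))
    return D3 * total
```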
4.2 Random Number of Failures Before the Change-Point in the Future Batch
Next assume, more realistically, that the number of failures before the change-point in the future batch is a random variable, denoted by K2, binomially distributed with parameters m and p:

\[
P(K_2 = k_2) = p^{*}(k_2) = \binom{m}{k_2} p^{k_2} q^{\,m-k_2}, \qquad k_2 = 0, 1, \ldots, m.
\tag{34}
\]

The predictive density function of Y(s1) is

\[
g_4(y \mid x) = \frac{\displaystyle \sum_{k_2=s_1}^{m} p^{*}(k_2)\, p_3(y \mid x, k_2)}{P(K_2 \ge s_1)},
\tag{35}
\]

where p_3(y | x, k_2) is the predictive density function of Y(s1) when the number of failures before the change-point is fixed and given by k_2. With (32) and (34) we obtain from (35)

\[
g_4(y \mid x) = \frac{\displaystyle \sum_{k_2=s_1}^{m} \binom{m}{k_2} p^{k_2} q^{\,m-k_2}\, D_3 (u_1+d)^{c+r_1} \sum_{j=0}^{s_1-1} \frac{c_{1j}\,(r_1+c)}{[y\,k_{2j} + (u_1+d)]^{\,r_1+c+1}}}
{1 - \displaystyle\sum_{i=0}^{s_1-1} \binom{m}{i} p^{i} q^{\,m-i}}.
\tag{36}
\]

With the density (36) the predictive reliability function of Y(s1) is immediately obtained:

\[
g_4^{*}(t \mid x) = P(Y_{(s_1)} \ge t \mid x)
= \frac{\displaystyle \sum_{k_2=s_1}^{m} \binom{m}{k_2} p^{k_2} q^{\,m-k_2}\, D_3 \sum_{j=0}^{s_1-1} \frac{c_{1j}}{k_{2j}\left[\frac{t\,k_{2j}}{u_1+d} + 1\right]^{\,r_1+c}}}
{1 - \displaystyle\sum_{i=0}^{s_1-1} \binom{m}{i} p^{i} q^{\,m-i}}.
\tag{37}
\]
By means of (37) and solving (11) numerically the prediction bounds L and U are obtained.
5 Prediction After the Change-Point Concerning a Future Operation Run
In this case we want to make a prediction for a failure Y(s2) after the change-point, i.e., Y(s2) > a and 1 ≤ s2 ≤ m − k2. It is assumed that from the first batch only the r2 − k1 failure times x(i) > a, i = k1+1, ..., r2, together with the n − r2 items still operating at x(r2), are available. The conditional density function corresponding to (1) yields the likelihood function

\[
L(x; a, \beta) \propto \beta^{\,r_2-k_1} e^{-\beta u_2},
\tag{38}
\]

where x = (x_{(k_1+1)}, \ldots, x_{(r_2)}) and u_2 = (n - r_2)\,x_{(r_2)} - (n - k_1)\,a + \sum_{i=k_1+1}^{r_2} x_{(i)}, as in (19).

In the following subsections the prediction problem is solved for different cases of k_2 and m.
5.1 Predictions For Fixed Batch-Size and Fixed Number of Failed Items After Change-Point
The density function of Y(s2) conditional on x(1), ..., x(r2) is obtained analogously to (30), with the corresponding expressions of Exp(β):

\[
h_{Y_{(s_2)}}^{*}(y \mid \beta) = D_4\,\big(e^{-\beta(y-a)}\big)^{m-k_2-s_2}\,\big(1 - e^{-\beta(y-a)}\big)^{s_2-1}\,\beta e^{-\beta(y-a)}
\tag{39}
\]

or

\[
h_{Y_{(s_2)}}^{*}(y \mid \beta) = D_4\,\beta \sum_{j=0}^{s_2-1} c_{2j}\, e^{-\beta m_j (y-a)},
\tag{40}
\]

where m_j = m - k_2 - s_2 + j + 1 and D_4 = s_2 \binom{m - k_2}{s_2}.

The Bayesian prediction density function of Y(s2) is obtained as the marginal density with respect to the joint distribution of (Y(s2), β):

\[
p_4(y \mid x) = D_4\,(u_2 + d)^{c+r_2-k_1} \sum_{j=0}^{s_2-1} \frac{c_{2j}\,(r_2 + c - k_1)}{[(y - a)\,m_j + (u_2 + d)]^{\,r_2+c-k_1+1}}.
\tag{41}
\]

Thus, the predictive reliability function of Y(s2) is as follows:

\[
P(Y_{(s_2)} \ge t \mid x) = \sum_{j=0}^{s_2-1} \frac{D_4\,c_{2j}}{m_j\left[\frac{(t-a)\,m_j}{u_2+d} + 1\right]^{\,r_2+c-k_1}}.
\tag{42}
\]
The lower and upper prediction bounds L and U for credibility level γ for Y(s2) are obtained by solving (11) numerically using (42).
5.2 Predictions For Random Batch-Size and Fixed Number of Failed Items After Change-Point
Assume now that the size of the future batch is a random variable, denoted by M, distributed according to a truncated Poisson distribution with parameter δ:

\[
P^{*}(m) = \frac{e^{-\delta} \delta^{m}}{m!\,(1 - e^{-\delta})}, \qquad m = 1, 2, \ldots
\tag{43}
\]

Then the predictive density function of Y(s2) is

\[
g_6(y \mid x) = \frac{\displaystyle \sum_{m=s_2}^{\infty} P^{*}(m)\, p_4(y \mid m)}{P(M \ge s_2)}.
\tag{44}
\]

Analogously to the previous cases, the predictive reliability function of Y(s2) is obtained from (43) and (44):

\[
P(Y_{(s_2)} \ge t \mid x)
= \frac{\displaystyle \sum_{m=s_2}^{\infty} \frac{e^{-\delta}\delta^{m}}{m!\,(1-e^{-\delta})}\, D_4 \sum_{j=0}^{s_2-1} \frac{c_{2j}}{m_j\left[\frac{(t-a)\,m_j}{u_2+d} + 1\right]^{\,r_2+c-k_1}}}
{1 - \displaystyle\sum_{i=1}^{s_2-1} \frac{e^{-\delta}\delta^{i}}{i!\,(1-e^{-\delta})}}.
\tag{45}
\]

By solving (11) numerically using (45), the bounds L and U for a given credibility level γ are obtained.
6 Numerical Examples
For the examples the change-point was set to a = 1.0 and the size of the batch to n = 20. For the Gamma prior distribution of α we selected the parameter values (c, d) = (0.05, 0.1) and for the Gamma prior distribution of β the parameter values (c, d) = (2, 10). The following values were generated:

distribution parameters: α = 0.5 and β = 0.2

lifetimes of the items in the batch:

x(1) = 0.134910    x(2) = 0.247596    x(3) = 0.258166    x(4) = 0.302317    x(5) = 0.329222
x(6) = 0.365428    x(7) = 0.607163    x(8) = 1.476456    x(9) = 1.720737    x(10) = 2.345809
x(11) = 2.979479   x(12) = 3.378253   x(13) = 3.754769   x(14) = 4.437662   x(15) = 6.459277
x(16) = 6.613639   x(17) = 6.915906   x(18) = 8.140099   x(19) = 9.982992   x(20) = 11.213757

Table 1: The generated lifetimes of the items put on operation.

For the examples, the following numbers of available observations were used: r1 = 5, r2 = 17.

Predictions for X(6), X(7)

Based on x = (x(1), ..., x(5)), predictions with credibility level γ = 0.95 for X(6) and X(7) are calculated, first for fixed k1 = 7 and subsequently for a random number K1 of failures before the change-point, where K1 ∼ Bi(20, 0.4). For the prior distribution of α the noninformative prior with c = d = 0 and the Gamma prior with c = 0.05 and d = 0.1 were used. The results are given in Table 2.

                      c     d     L_X(6)    U_X(6)    L_X(7)    U_X(7)
k1 = 7                0     0     0.334122  1.38266   0.39134   2.98829
                      0.05  0.1   0.334235  1.42177   0.393958  3.08538
K1 random, p = 0.4    0     0     0.332097  1.374985  0.353712  1.9880
                      0.05  0.1   0.332216  1.41462   0.35474   2.0498
Table 2: The prediction bounds L_X(i)(x) and U_X(i)(x) for i = 6, 7.

Predictions for X(18), X(20)

Based on x = (x(1), ..., x(17)), predictions with credibility level γ = 0.95 for X(18) and X(20) are calculated, for given k1 = 7 first with fixed batch size n = 20 and subsequently with random batch size N, where N ∼ Po(10), i.e., δ = 10. For the prior distribution of β the noninformative prior with c = d = 0 and the Gamma prior with c = 2.0 and d = 10.0 were used. The results are given in Table 3.
                      c    d     L_X(18)   U_X(18)     L_X(20)   U_X(20)
k1 = 7                0    0     6.95624   14.015208   8.4688    35.9221
                      2    10    6.9565    13.8426     8.4961    34.9858
N random, δ = 10      0    0     6.97074   23.876249   7.61137   31.50620
                      2    10    6.9711    23.5310     7.6237    30.8096

Table 3: The prediction bounds L_X(i)(x) and U_X(i)(x) for i = 18, 20.

Predictions for Y(1), Y(8)

Based on x = (x(1), ..., x(5)), predictions with credibility level γ = 0.95 for Y(1) and Y(8) are calculated, first for fixed k2 = 8 and subsequently for a random number K2 of failures before the change-point, where K2 ∼ Bi(20, 0.4) and m = 20. For the prior distribution of α the noninformative prior with c = d = 0 and the Gamma prior with c = 0.05 and d = 0.1 were used. The results are given in Table 4.

                      c     d     L_Y(1)    U_Y(1)    L_Y(8)     U_Y(8)
k2 = 8                0     0     0.001225  0.26336   0.30636    3.9271
                      0.05  0.1   0.001275  0.273138  0.31959    4.0666
K2 random, p = 0.4    0     0     0.001246  0.311515  0.1819926  2.9413
                      0.05  0.1   0.001298  0.323149  0.189825   3.0473

Table 4: The prediction bounds L_Y(i)(x) and U_Y(i)(x) for i = 1, 8.

Predictions for Y(9), Y(20)

Based on x = (x(1), ..., x(17)), predictions with credibility level γ = 0.95 for Y(9) and Y(20) are calculated, for given k2 = 8 first with fixed batch size m = 20, then with a random number K2 ∼ Bi(20, 0.4) of failures before the change-point, and finally with random batch size M, where M ∼ Po(10), i.e., δ = 10. For the prior distribution of β the noninformative prior with c = d = 0 and the Gamma prior with c = 2.0 and d = 10.0 were used. The results are given in Table 5.

                      c    d     L_Y(9)    U_Y(9)    L_Y(20)   U_Y(20)
k2 = 8                0    0     1.00935   2.61829   6.4254    40.7746
                      2    10    1.0101    2.7316    6.5898    39.1305
K2 random, p = 0.4    0    0     1.00998   2.8686    4.39919   31.5401
                      2    10    1.0100    2.8246    4.4972    30.4508
M random, δ = 10      0    0     1.012118  3.61522   4.09650   31.8731
                      2    10    1.0122    3.5579    4.1803    30.7925
Table 5: The prediction bounds L_Y(i)(x) and U_Y(i)(x) for i = 9, 20.
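As a plausibility check (my own computation, not the authors'), feeding the data of Table 1 with r1 = 5, k1 = 7, s1 = 1, γ = 0.95 and the noninformative prior into the bound solver sketched in Section 2 should roughly reproduce the first row of Table 2.

```python
# Reuses reliability_before / prediction_bounds from the Section 2 sketch.
x_obs = [0.134910, 0.247596, 0.258166, 0.302317, 0.329222]   # first r1 = 5 failures of Table 1
L, U = prediction_bounds(x_obs, k1=7, s1=1, gamma=0.95, c=0.0, d=0.0)
print(L, U)   # expected close to 0.3341 and 1.3827, cf. the first row of Table 2
```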
References

[1] Aitchison, J. and Dunsmore, I. R. (1975). Statistical Prediction Analysis. Cambridge University Press, Cambridge.
[2] Al-Hussaini, E. K. (1999). Bayesian prediction under a mixture of two exponential components model based on type I censoring. J. Appl. Statist. Science 8, 173-185.
[3] Al-Hussaini, E. K. (2001). Prediction: Advances and new research. In: Mathematics and the 21st Century. World Scientific Publishing Co., 223-345.
[4] Al-Hussaini, E. K. and Jaheen, Z. F. (1999). Parametric prediction bounds for the future median of the exponential distribution. Statistics 32, 267-275.
[5] Consul, P. C. (1984). On the distributions of order statistics for a random sample size. Statistica Neerlandica 38, 249-256.
[6] Dunsmore, I. R. (1974). The Bayesian predictive distribution in life testing models. Technometrics 16, 455-460.
[7] Geisser, S. (1990). On hierarchical Bayes procedures for predicting simple exponential survival. Biometrics 46, 225-230.
[8] Geisser, S. (1993). Predictive Inference: An Introduction. Chapman and Hall, London.
[9] Gupta, D. and Gupta, R. C. (1984). On the distribution of order statistics for a random sample size. Statistica Neerlandica 38, 13-19.
[10] Lawless, J. F. (1971). A prediction problem concerning samples from the exponential distribution with application in life testing. Technometrics 13, 725-730.
[11] Lawless, J. F. (1972). On prediction intervals for samples from the exponential distribution and prediction limits for system survival. Sankhya B 34, 1-14.
[12] Lingappaiah, G. S. (1978). Bayesian approach to the prediction problem in the exponential population. IEEE Trans. Rel. R-27, 222-225.
[13] Lingappaiah, G. S. (1979). Bayesian approach to prediction and the spacings in the exponential distributions. Ann. Instit. Statist. Math. 31, 319-401.
[14] Lingappaiah, G. S. (1986). Bayes prediction in exponential life testing when sample size is a random variable. IEEE Trans. Rel. 35, 106-110.
[15] Matthews, D. E. and Farewell, V. T. (1982). On testing for a constant hazard against a change point alternative. Biometrics 38, 463-468.
[16] Nagaraja, H. N. (1995). Prediction problems. In: The Exponential Distribution: Theory and Applications. Eds. N. Balakrishnan and A. P. Basu, Gordon and Breach, New York, 139-163.
[17] Nguyen, H. T., Rogers, G. S. and Walker, E. A. (1984). Estimation in change point hazard rate model. Biometrika 71, 299-304.
[18] Patel, J. K. (1989). Prediction intervals - A review. Communic. Statist. Theory Meth. 18, 2393-2465.
[19] Pham, D. T. and Nguyen, H. T. (1990). Strong consistency of the maximum likelihood estimator in the change point hazard rate model. Statistics 21, 203-216.
[20] Upadhyay and Pandey (1989). Prediction limits for an exponential distribution: A Bayes predictive distribution approach. IEEE Trans. Rel. 38, 599-602.
Yahia Abdel-Aty
Department of Mathematics, Faculty of Science
Al-Azhar University, Nasr City
Cairo 11884, Egypt
and
Institut für Mathematische Stochastik
Technische Universität Dresden
D-01062 Dresden, Germany

Jürgen Franz
Institut für Mathematische Stochastik
Technische Universität Dresden
D-01062 Dresden, Germany

M. A. W. Mahmoud
Department of Mathematics, Faculty of Science
Al-Azhar University, Nasr City
Cairo 11884, Egypt