Quality & Quantity (2007) 41:251–263 DOI 10.1007/s11135-006-9001-2

© Springer 2006

One-Sample Bayesian Predictive Interval of Future Ordered Observations for the Pareto Distribution

JONG-WUU WU¹,*, SHU-FEI WU² and CHIN-MEI YU²

¹ Department of Applied Mathematics, National Chiayi University, Chiayi City 60004, Taiwan, R.O.C.
² Department of Statistics, Tamkang University, Tamsui, Taipei 25137, Taiwan, R.O.C.

Abstract. Nigm et al. (2003, Statistics 37: 527–536) proposed a Bayesian method for obtaining a predictive interval of a future ordered observation Y(j) (r < j ≤ n) based on the right type II censored sample Y(1) < Y(2) < · · · < Y(r) from the Pareto distribution. If some of Y(1) < · · · < Y(r−1) are missing or erroneous owing to recording or transcription mistakes, Nigm et al.'s method may not be an appropriate choice. Moreover, the conditional probability density function (p.d.f.) of the ordered observation Y(j) (r < j ≤ n) given Y(1) < Y(2) < · · · < Y(r) is the same as the conditional p.d.f. of Y(j) given Y(r) alone. We therefore propose an alternative Bayesian method that obtains predictive intervals of future ordered observations based only on Y(r), and we compare the lengths of the predictive intervals produced by the method of Nigm et al. (2003) and by our proposed method. Numerical examples illustrate these results.

Key words: right type II censored sample, Bayesian predictive interval, Pareto distribution, one-sample problem.

1. Introduction

In much of the reliability literature, the exponential distribution is widely used as a model for lifetime data. This distribution is characterized by a constant failure rate, say λ. In a population of components, however, the λ-values may vary because of small fluctuations in manufacturing tolerances, so that a component selected at random can be regarded as belonging to a random subpopulation (see McNolty and Doyle, 1980). Suppose the lifetime of a particular component has an exponential distribution with failure rate λ and location parameter (guarantee time) µ, and let λ follow a Gamma distribution with scale parameter c > 0 and shape parameter a > 0. Then the failure time Y of a component selected at

Author for correspondence: Jong-Wuu Wu, Department of Applied Mathematics, National Chiayi University, Chiayi City 60004, Taiwan, R.O.C. Tel.: (886)-5-2717876; Fax: (886)-5-2717869; E-mail: [email protected]

random from such a mixed population has a Pareto distribution of the second kind (see Engelhardt et al., 1986; Johnson et al., 1994) with probability density function (p.d.f.)

$$f(y) \equiv f(y\mid\mu,a,c) = \int_0^\infty \frac{c^a}{\Gamma(a)}\lambda^{a-1}e^{-c\lambda}\,\lambda e^{-\lambda(y-\mu)}\,d\lambda = \frac{c^a}{\Gamma(a)}\int_0^\infty \lambda^{a} e^{-\lambda(y-\mu+c)}\,d\lambda = \frac{a}{c}\left(1+\frac{y-\mu}{c}\right)^{-(a+1)}, \quad y>\mu,\ c>0,\ a>0. \qquad (1)$$

The corresponding cumulative distribution function (c.d.f.) and survival function are

$$F(y\mid\mu,a,c) = 1-\left(1+\frac{y-\mu}{c}\right)^{-a}, \quad y>\mu,\ c>0,\ a>0 \qquad (2)$$

and R(y) = 1 − F(y|µ, a, c), y > µ, c > 0, a > 0, respectively. When µ = 0, a Pareto type II distribution with parameters (c, a) is usually denoted by ParII(c, a) (it is sometimes called the Lomax distribution with parameters c and a; see, e.g., Johnson et al., 1994). This is also a Pearson type VI distribution.

In life testing experiments, the experimenter may not always be in a position to observe the lifetimes of all the items put on test, because of time limitation and/or other restrictions (such as money and material resources) on data collection. Suppose, for instance, that out of n items put on life test, only the first r lifetimes Y(1) < Y(2) < · · · < Y(r) have been observed and the lifetimes of the remaining n − r components remain unobserved or missing. This type of censoring is known as right type II censoring.

A few authors have studied Bayesian predictive bounds of future observations based on the Pareto distribution as in (2). Nigm and Hamdy (1987) gave predictive bounds, using a Bayes approach, for later order statistics of a sample from the Pareto distribution with µ = c. Arnold and Press (1989) also discussed prediction problems from a Bayes viewpoint. Bayesian prediction bounds based on type I censoring from a finite mixture of Lomax components were obtained by AL-Hussaini et al. (2001). Nigm et al. (2003) obtained Bayesian predictive bounds based on the right type II censored sample Y(1) < Y(2) < · · · < Y(r) from the ParII(c, a) distribution.
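As a sanity check on the gamma-mixture derivation above, the following sketch (illustrative parameter values a = 3 and c = 0.5, not taken from the paper) draws lifetimes by first drawing a Gamma-distributed rate and then an exponential lifetime, and compares the empirical survival fractions with the analytic ParII(c, a) survival function (1 + y/c)^(−a) for µ = 0:

```python
import math
import random

def sample_mixed_exponential(a, c, n, seed=0):
    """Draw lifetimes: rate lambda ~ Gamma(shape=a, rate=c), then
    Y | lambda ~ Exponential(lambda); marginally Y ~ ParII(c, a)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        lam = rng.gammavariate(a, 1.0 / c)  # shape a, scale 1/c (i.e. rate c)
        out.append(rng.expovariate(lam))
    return out

def pareto2_survival(y, a, c):
    """Survival function R(y) = (1 + y/c)^(-a) of ParII(c, a), mu = 0."""
    return (1.0 + y / c) ** (-a)

ys = sample_mixed_exponential(a=3.0, c=0.5, n=200_000)
for y0 in (0.1, 0.5, 1.0):
    empirical = sum(y > y0 for y in ys) / len(ys)
    print(y0, empirical, pareto2_survival(y0, 3.0, 0.5))
```

With 200,000 draws the empirical and analytic survival probabilities agree to about two decimal places, which is the Monte Carlo accuracy one expects at this sample size.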
Although we observe the right type II censored sample Y(1) < Y(2) < · · · < Y(r), the conditional p.d.f. of the ordered observation Y(j) (r < j ≤ n) given Y(1) < Y(2) < · · · < Y(r) is the same as the conditional p.d.f. of Y(j) given Y(r) alone. Moreover, if some of Y(1) < · · · < Y(r−1) are missing or erroneous owing to recording or transcription mistakes, Nigm et al.'s method may not be an appropriate choice. So, in Section 2,


we shall propose an alternative Bayesian method to obtain predictive intervals of the jth (r < j ≤ n) ordered observation Y(j) based only on the ordered observation Y(r), assuming the ParII(c, a) distribution, in two cases. The first case is when the scale parameter c is known; the second is when both the shape parameter a and the scale parameter c are unknown. In Section 3, we give numerical examples to calculate the Bayesian predictive intervals and compare the lengths of the intervals obtained by the method of Nigm et al. (2003) and by our proposed method. Finally, some conclusions are drawn in Section 4.

2. One-sample Bayesian Predictive Interval

Suppose that n new products are put on test at a given point in time and that the survival times of the first r of these to fail are Y(1) < Y(2) < · · · < Y(r) (right type II censoring) from the ParII(c, a) distribution. The objective is to construct Bayesian predictive intervals for the survival times of the remaining (n − r) products based only on the ordered observation Y(r) of Y(1) < Y(2) < · · · < Y(r).

2.1. the first case is when the scale parameter c is known

In this section, we treat the case when the scale parameter c is known. From David (1981: p. 9), the likelihood function is

$$L(a,c\mid Y_{(r)}=y_r) \propto [1-R(y_r)]^{r-1}[R(y_r)]^{n-r} f(y_r) \propto \frac{a}{c}\left(1+\frac{y_r}{c}\right)^{-1}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l \exp(-H_{lr}\,a), \qquad (3)$$

where

$$H_{lr} = (l+n-r+1)\ln B(y_r) \quad\text{and}\quad B(y_r) = 1+\frac{y_r}{c}. \qquad (4)$$

Suppose that the prior distribution of a is adequately represented by the natural conjugate gamma(β, γ) distribution with p.d.f.

$$\pi(a\mid\beta,\gamma) \propto a^{\beta-1}\exp(-\gamma a), \quad a>0, \qquad (5)$$

for some appropriate choice of β > 0 and γ > 0. Combining (3) and (5), the posterior density of a, given y_r, is

$$\pi^*(a\mid y_r) \propto a^{\beta}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l \exp\left[-(H_{lr}+\gamma)\,a\right]. \qquad (6)$$
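Because the posterior (6) is a finite signed mixture of gamma kernels in a, its normalizing constant follows in closed form from the integral ∫₀^∞ a^β e^(−ta) da = Γ(β + 1)/t^(β+1). The sketch below (function name is ours; the numerical values mirror the Example 1 setting later in the paper, n = 20, r = 15, c = 0.5, β = 4, γ = 0.5, y(r) = 0.2099) evaluates the normalized posterior and checks numerically that it integrates to one:

```python
import math

def posterior_density(a, yr, n, r, c, beta, gamma):
    """Posterior pi*(a | y_(r)) of (6), normalized in closed form:
    each mixture term integrates to Gamma(beta+1) / (H_lr + gamma)^(beta+1)."""
    lnB = math.log(1.0 + yr / c)
    w = [math.comb(r - 1, l) * (-1) ** l for l in range(r)]
    H = [(l + n - r + 1) * lnB for l in range(r)]
    K = math.gamma(beta + 1) * sum(
        wl / (h + gamma) ** (beta + 1) for wl, h in zip(w, H)
    )
    kernel = a ** beta * sum(wl * math.exp(-(h + gamma) * a) for wl, h in zip(w, H))
    return kernel / K

# illustrative values mirroring the Example 1 setting: y_(15) = 0.2099
n, r, c, beta, gamma, yr = 20, 15, 0.5, 4.0, 0.5, 0.2099
step = 0.002
grid = [i * step for i in range(1, 20001)]  # a in (0, 40]; tail beyond is negligible
area = sum(posterior_density(a, yr, n, r, c, beta, gamma) for a in grid) * step
print(area)  # should be close to 1
```

The closed-form normalizer is what makes the predictive survival function below tractable without any numerical integration over a.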


The conditional predictive density function of Z(s) = Y(r+s), s = 1, 2, . . . , n − r, given a (see David, 1981) is

$$h(z_s\mid a,y_r) \propto [R(y_r)-R(z_s)]^{s-1}[R(z_s)]^{n-r-s}[R(y_r)]^{-(n-r)} f(z_s), \qquad (7)$$

where R(·) = 1 − F(·|0, a, c), and f(·) and F(·|0, a, c) are given by (1) and (2), respectively. Therefore, (7) can be rewritten as

$$h(z_s\mid a,y_r) = \sum_{j=0}^{s-1}\frac{a\,\delta_j}{c\,B(z_s)}\exp\left[-a\,b_j\,D(z_s)\right], \qquad (8)$$

where

$$\delta_j = (-1)^j\binom{s-1}{j},\quad b_j = n-r-s+j+1,\quad R(\cdot)=[B(\cdot)]^{-a},\quad D(z_s)=\ln\frac{B(z_s)}{B(y_r)}. \qquad (9)$$

By forming the product of (6) and (8), we obtain the joint p.d.f. of Z(s) and a given Y(r) = y_r. Integrating with respect to a, we then obtain the conditional predictive density of Z(s) given Y(r) = y_r as

$$f^*(z_s\mid y_r) = \int_0^\infty h(z_s\mid a,y_r)\,\pi^*(a\mid y_r)\,da, \quad z_s>y_r. \qquad (10)$$

It follows from (6), (8), and (10) that, for z_s > y_r,

$$f^*(z_s\mid y_r) = A^*\sum_{j=0}^{s-1}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\,\frac{\delta_j}{c\,B(z_s)}\left[\gamma+H_{lr}+b_j\ln\frac{B(z_s)}{B(y_r)}\right]^{-(\beta+2)}, \qquad (11)$$

where A* is a normalizing constant satisfying

$$\int_{y_r}^\infty f^*(z_s\mid y_r)\,dz_s = 1.$$

By using (11), the conditional predictive survival function given y_r is

$$\Pr(Z_{(s)}>v\mid y_r) = \frac{\int_v^\infty f^*(z_s\mid y_r)\,dz_s}{\int_{y_r}^\infty f^*(z_s\mid y_r)\,dz_s} = \frac{\sum_{j=0}^{s-1}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\,\delta_j\,\phi_{jl}(v)}{\sum_{j=0}^{s-1}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\,\delta_j\,\phi_{jl}(y_r)}, \qquad (12)$$


where

$$\phi_{jl}(v) = \frac{1}{b_j}\left[\gamma+H_{lr}+b_j\ln\frac{B(v)}{B(y_r)}\right]^{-(\beta+1)}.$$

Therefore,

$$\Pr(Z_{(s)}>v\mid y_r) = \frac{\sum_{j=0}^{s-1}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\,\delta_j^*\,\phi_{jl}^*(v)}{\sum_{j=0}^{s-1}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\,\delta_j^*\,\phi_{jl}^*(y_r)}, \qquad (13)$$

where

$$\delta_j^* = \frac{\delta_j}{b_j} = (-1)^j\binom{s-1}{j}(n-r-s+j+1)^{-1}, \qquad \phi_{jl}^*(v) = \left[\gamma+H_{lr}+b_j\ln\frac{B(v)}{B(y_r)}\right]^{-(\beta+1)}. \qquad (14)$$

Since φ*_jl(y_r) = (γ + H_lr)^(−(β+1)) when v = y_r, the denominator of (13) can be rewritten as

$$\sum_{j=0}^{s-1}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\,\delta_j^*\,\phi_{jl}^*(y_r) = \sum_{j=0}^{s-1}\sum_{l=0}^{r-1}\binom{r-1}{l}\binom{s-1}{j}(-1)^{l+j}\,[\,j+(n-r-s+1)\,]^{-1}\,(\gamma+H_{lr})^{-(\beta+1)}. \qquad (15)$$

Applying the identity given in Lingappaiah (1981) (or Nigm et al., 2003), which can be proved by induction,

$$\sum_{j=0}^{N}(-1)^j\binom{N}{j}(j+\varepsilon)^{-1} = \left[(N+1)\binom{N+\varepsilon}{N+1}\right]^{-1}, \qquad (16)$$

to the above sum with N = s − 1 and ε = n − r − s + 1, we then have

$$\sum_{j=0}^{s-1}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\,\delta_j^*\,\phi_{jl}^*(y_r) = \left[s\binom{n-r}{s}\right]^{-1}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l(\gamma+H_{lr})^{-(\beta+1)}.$$
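The identity (16) is easy to verify numerically for integer ε, which is the case needed here (ε = n − r − s + 1). A minimal check:

```python
import math

def lhs(N, eps):
    """Left-hand side of identity (16)."""
    return sum((-1) ** j * math.comb(N, j) / (j + eps) for j in range(N + 1))

def rhs(N, eps):
    """Right-hand side: [(N + 1) * C(N + eps, N + 1)]^(-1), integer eps >= 1."""
    return 1.0 / ((N + 1) * math.comb(N + eps, N + 1))

for N in range(8):
    for eps in range(1, 8):
        assert abs(lhs(N, eps) - rhs(N, eps)) < 1e-12
print("identity (16) verified for N < 8, 1 <= eps < 8")
```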


Substituting in (13), we have

$$\Pr(Z_{(s)}>v\mid y_r) = s\binom{n-r}{s}\left[\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l(\gamma+H_{lr})^{-(\beta+1)}\right]^{-1}\sum_{j=0}^{s-1}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\,\delta_j^*\,\phi_{jl}^*(v). \qquad (17)$$

It follows that the lower and upper bounds of a two-sided 100(1 − α)% predictive interval for Z(s) = Y(r+s), denoted by L and U, are given, respectively, by

$$1-\frac{\alpha}{2} = \Pr(Z_{(s)}>L\mid y_r) = s\binom{n-r}{s}\left[\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l(\gamma+H_{lr})^{-(\beta+1)}\right]^{-1}\sum_{j=0}^{s-1}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\,\delta_j^*\,\phi_{jl}^*(L), \qquad (18)$$

$$\frac{\alpha}{2} = \Pr(Z_{(s)}>U\mid y_r) = s\binom{n-r}{s}\left[\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l(\gamma+H_{lr})^{-(\beta+1)}\right]^{-1}\sum_{j=0}^{s-1}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\,\delta_j^*\,\phi_{jl}^*(U).$$

We use Compaq Visual Fortran V6.5 and IMSL (2000) to solve these two equations for L and U at a given 1 − α.

Remark 1

(i) For s = 1, corresponding to predicting the time Z(1) ≡ Y(r+1) of the next product to fail, (17) reduces to

$$\Pr(Z_{(1)}>v\mid y_r) = \frac{\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\left[\gamma+H_{lr}+(n-r)\ln\frac{B(v)}{B(y_r)}\right]^{-(\beta+1)}}{\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l(\gamma+H_{lr})^{-(\beta+1)}}. \qquad (17a)$$

The lower and upper bounds of a two-sided 100(1 − α)% predictive interval for Y(r+1) are then the solutions of (17a) after equating its right-hand side to 1 − α/2 and α/2, respectively.
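The paper solves these equations with Fortran/IMSL routines. As an illustration only (function names are ours, and the parameter values mirror the Example 1 setting; this is not the authors' code), the survival function (17) and a bisection search for the bounds can be sketched as follows. The search exploits the fact that the survival probability equals 1 at v = y_r and decreases monotonically to 0:

```python
import math

def survival_case1(v, yr, n, r, s, c, beta, gamma):
    """Pr(Z_(s) > v | y_(r)) from (17), scale parameter c known."""
    lnB = math.log(1.0 + yr / c)
    D = math.log((1.0 + v / c) / (1.0 + yr / c))  # ln(B(v)/B(yr))
    H = [(l + n - r + 1) * lnB for l in range(r)]
    wl = [math.comb(r - 1, l) * (-1) ** l for l in range(r)]
    denom = sum(w * (gamma + h) ** (-(beta + 1)) for w, h in zip(wl, H))
    num = 0.0
    for j in range(s):
        b = n - r - s + j + 1
        dstar = (-1) ** j * math.comb(s - 1, j) / b  # delta_j^* of (14)
        num += dstar * sum(
            w * (gamma + h + b * D) ** (-(beta + 1)) for w, h in zip(wl, H)
        )
    return s * math.comb(n - r, s) * num / denom

def bound(p, yr, n, r, s, c, beta, gamma, hi=1e6):
    """Solve Pr(Z_(s) > v | y_(r)) = p for v by bisection (survival decreases in v)."""
    lo = yr
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if survival_case1(mid, yr, n, r, s, c, beta, gamma) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# illustrative values mirroring Example 1: n=20, r=15, c=0.5, beta=4, gamma=0.5
args = dict(yr=0.2099, n=20, r=15, c=0.5, beta=4.0, gamma=0.5)
for s in range(1, 6):
    L = bound(0.975, s=s, **args)  # lower bound of the 95% predictive interval
    U = bound(0.025, s=s, **args)  # upper bound
    print(s, round(L, 5), round(U, 5))
```

The alternating binomial sums cancel heavily for large r, so a double-precision implementation loses a few digits; this is harmless at the accuracy needed for interval endpoints but worth keeping in mind.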


(ii) For s = n − r, corresponding to predicting the time Z(n−r) ≡ Y(n) of the last product to fail, (17) reduces to

$$\Pr(Z_{(n-r)}>v\mid y_r) = \frac{\sum_{j=1}^{n-r}\sum_{l=0}^{r-1}\binom{n-r}{j}\binom{r-1}{l}(-1)^{l+j+1}\left[\gamma+H_{lr}+j\ln\frac{B(v)}{B(y_r)}\right]^{-(\beta+1)}}{\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l(\gamma+H_{lr})^{-(\beta+1)}}. \qquad (18b)$$

The lower and upper bounds of a two-sided 100(1 − α)% predictive interval for Y(n) are then the solutions of (18b) after equating its right-hand side to 1 − α/2 and α/2, respectively.

2.2. the second case is when both the scale and shape parameters are unknown

In this section, we discuss the Bayesian approach to prediction when the underlying ParII(c, a) distribution has both shape and scale parameters unknown. For simplicity, we use the reparametrization k = 1/c. Then (1) can be rewritten as

$$f(y) \equiv f(y\mid a,k) = ak(1+ky)^{-(a+1)}, \quad y>0,\ a>0,\ k>0, \qquad (19)$$

and the likelihood function (3) becomes

$$L(a,k\mid y_r) \propto (ak)(1+ky_r)^{-1}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\exp(-aH_{lr}), \qquad (20)$$

where

$$H_{lr} = (l+n-r+1)\ln B(y_r) \quad\text{and}\quad B(y_r) = 1+ky_r. \qquad (21)$$

Now suppose that the joint prior density of a and k is

$$\pi(a,k) = \pi_1(a\mid k)\,\pi_2(k), \qquad (22)$$

where

$$\pi_1(a\mid k) \sim \text{Gamma}(\delta,k) \quad\text{and}\quad \pi_2(k) \sim \text{Gamma}(\beta,\gamma). \qquad (23)$$

Then we obtain

$$\pi(a,k) \propto a^{\delta-1} k^{\beta+\delta-1}\exp\left[-k(a+\gamma)\right]. \qquad (24)$$


By forming the product of (20) and (24), the posterior density of a and k is

$$\pi^*(a,k\mid y_r) \propto a^{\delta} k^{\beta+\delta}(1+ky_r)^{-1}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\exp\left[-a(k+H_{lr})-k\gamma\right]. \qquad (25)$$

Next, the predictive density function of Z(s) given Y(r) = y_r is obtained from (8) and (25), observing in (8) that 1/c = k and that B(·) and D(·) are functions of k:

$$f^{**}(z_s\mid y_r) = \int_0^\infty\!\!\int_0^\infty h(z_s\mid a,k,y_r)\,\pi^*(a,k\mid y_r)\,da\,dk \propto \sum_{j=0}^{s-1}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\,\delta_j\int_0^\infty\!\!\int_0^\infty \frac{(1+ky_r)^{-1}}{B(z_s)}\,a^{\delta+1}k^{\beta+\delta+1}\exp\left[-a\left(k+H_{lr}+b_j D(z_s)\right)-k\gamma\right]da\,dk$$

$$= A^{**}\sum_{j=0}^{s-1}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\,\delta_j\int_0^\infty \frac{k^{\beta+\delta+1}e^{-k\gamma}}{B(z_s)}\,\frac{(1+ky_r)^{-1}}{\left[k+H_{lr}+b_j D(z_s)\right]^{\delta+2}}\,dk, \quad z_s>y_r, \qquad (26)$$

where A** is the normalizing constant satisfying

$$\int_{y_r}^\infty f^{**}(z_s\mid y_r)\,dz_s = 1$$

and

$$D(z_s) = \ln\frac{B(z_s)}{B(y_r)} = \ln\frac{1+kz_s}{1+ky_r}. \qquad (27)$$

Then, by using (26), the conditional predictive survival function given y_r is

$$\Pr(Z_{(s)}>v\mid y_r) = \frac{\int_v^\infty f^{**}(z_s\mid y_r)\,dz_s}{\int_{y_r}^\infty f^{**}(z_s\mid y_r)\,dz_s} = \frac{\sum_{j=0}^{s-1}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\,\delta_j\,Q_{jl}(v)}{\sum_{j=0}^{s-1}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\,\delta_j\,Q_{jl}(y_r)}, \qquad (28)$$


where

$$Q_{jl}(v) = \int_0^\infty (1+ky_r)^{-1} k^{\beta+\delta+1} e^{-k\gamma}\int_v^\infty \frac{1}{B(z_s)}\,\frac{1}{\left[k+H_{lr}+b_j D(z_s)\right]^{\delta+2}}\,dz_s\,dk = \frac{I_{jl}(v)}{(\delta+1)\,b_j}, \qquad (29)$$

where

$$I_{jl}(v) = \int_0^\infty (1+ky_r)^{-1} k^{\beta+\delta} e^{-k\gamma}\left[k+H_{lr}+b_j D(v)\right]^{-(\delta+1)} dk. \qquad (30)$$

Therefore,

$$\Pr(Z_{(s)}>v\mid y_r) = \frac{\sum_{j=0}^{s-1}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\,\delta_j^*\,I_{jl}(v)}{\sum_{j=0}^{s-1}\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\,\delta_j^*\,I_{jl}(y_r)}, \qquad (31)$$

where δ*_j = δ_j/b_j and I_jl(v) is given by (30). We use Compaq Visual Fortran V6.5 and IMSL (2000) to find v such that Pr(Z(s) > v | y_r) = 1 − α, which determines prediction bounds for Z(s), s = 1, 2, . . . , n − r.

Remark 2

(i) For s = 1, corresponding to predicting the time Z(1) ≡ Y(r+1) of the next product to fail, (31) reduces to

$$\Pr(Z_{(1)}>v\mid y_r) = \frac{\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\,I_{0l}(v)}{\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\,I_{0l}(y_r)}, \qquad (32)$$

where

$$I_{0l}(v) = \int_0^\infty (1+ky_r)^{-1} k^{\beta+\delta} e^{-k\gamma}\left[k+H_{lr}+(n-r)D(v)\right]^{-(\delta+1)} dk, \qquad I_{0l}(y_r) = \int_0^\infty (1+ky_r)^{-1} k^{\beta+\delta} e^{-k\gamma}\left[k+H_{lr}\right]^{-(\delta+1)} dk. \qquad (33)$$

(ii) For s = n − r, corresponding to predicting the time Z(n−r) ≡ Y(n) of the last product to fail, (31) reduces to

$$\Pr(Z_{(n-r)}>v\mid y_r) = \frac{\sum_{j=1}^{n-r}\sum_{l=0}^{r-1}\binom{n-r}{j}\binom{r-1}{l}(-1)^{l+j-1}\,I_{j-1,l}(v)}{\sum_{j=1}^{n-r}\sum_{l=0}^{r-1}\binom{n-r}{j}\binom{r-1}{l}(-1)^{l+j-1}\,I_{j-1,l}(y_r)}, \qquad (34)$$


where, for j = 1, 2, . . . , n − r,

$$I_{j-1,l}(v) = \int_0^\infty (1+ky_r)^{-1} k^{\beta+\delta} e^{-k\gamma}\left[k+H_{lr}+jD(v)\right]^{-(\delta+1)} dk, \qquad I_{j-1,l}(y_r) = I_{0l}(y_r). \qquad (35)$$

Since $\sum_{j=1}^{n-r}(-1)^{j-1}\binom{n-r}{j} = 1$, we have

$$\Pr(Z_{(n-r)}>v\mid y_r) = \frac{\sum_{j=1}^{n-r}\sum_{l=0}^{r-1}\binom{n-r}{j}\binom{r-1}{l}(-1)^{l+j-1}\,I_{j-1,l}(v)}{\sum_{l=0}^{r-1}\binom{r-1}{l}(-1)^l\,I_{0l}(y_r)}, \qquad (36)$$



where I_{j−1,l}(v) and I_{0l}(y_r) are given by (35).

3. Numerical Examples

In this section, we use two numerical examples to illustrate the proposed Bayesian method based on Y(r) and the method of Nigm et al. (2003) based on Y(1) < Y(2) < · · · < Y(r) for constructing predictive intervals, and we compare the lengths of the resulting intervals.

example 1: the scale parameter c is known

In this example, we consider the data reported by Nigm et al. (2003: p. 535, Section 3.1). The data suppose that n = 20 items are put on test simultaneously, with ordered failure times:

0.0009, 0.0040, 0.0142, 0.0221, 0.0261, 0.0418, 0.0473, 0.0834, 0.1091, 0.1252, 0.1404, 0.1498, 0.1750, 0.2031, 0.2099, 0.2168, 0.2918, 0.3465, 0.4035, 0.6143.

Nigm et al. (2003) assumed that the failure times are distributed according to the Pareto distribution as in (2) with µ = 0 and c = 0.5, and that prior information about a suggested β = 4.0 and γ = 0.5 as appropriate for the prior density given in (5). They also supposed that the test is terminated when the first 15 (= r) of the ordered lifetimes are available. To compare the lengths of predictive intervals for Z(s) ≡ Y(r+s) obtained by our proposed method, based on Y(r), and by the method of Nigm et al. (2003), based on Y(1) < Y(2) < · · · < Y(r), we use both methods to find 95% predictive intervals for Z(s) ≡ Y(r+s), s = 1, 2, . . . , n − r, with n = 20 and r = 15. The results are listed in Table I.


Table I. Predictive intervals and lengths of predictive intervals for Y(r+s) based on our proposed method and Nigm et al.'s method when n = 20, r = 15

s | predictive interval, proposed method | length L1s | predictive interval, Nigm et al.'s method | length L2s
1 | (0.21074, 0.36064) | 0.14990 | (0.21078, 0.36576) | 0.15498
2 | (0.21874, 0.49443) | 0.27569 | (0.21916, 0.50396) | 0.28480
3 | (0.23562, 0.69576) | 0.46014 | (0.23691, 0.71261) | 0.47570
4 | (0.26432, 1.08886) | 0.82454 | (0.26720, 1.12263) | 0.85543
5 | (0.31889, 2.49934) | 2.18045 | (0.32499, 2.61619) | 2.29120

Note: L1s and L2s are the lengths of the predictive intervals for our proposed method and Nigm et al.'s method, respectively, s = 1, 2, . . . , 5.

From Table I, we see that our proposed method outperforms Nigm et al.'s method by yielding shorter predictive intervals for Y(r+s): 0.00508 ≤ L2s − L1s ≤ 0.11075 for s = 1, 2, . . . , 5 and r = 15, where L1s and L2s are the lengths of the predictive intervals for Y(r+s) based on our proposed method and Nigm et al.'s method, respectively.

example 2: the scale parameter c and shape parameter a are unknown

In this example, we first consider the data reported by Nigm et al. (2003: p. 535, Section 3.2). Although the data contain an erroneous ordered observation, 0.1952, due to a transcription mistake, this does not affect our proposed method. Based on our proposed method, the two-sided 95% predictive intervals for Y(16) and Y(20) are (0.19297, 0.36128) and (0.31351, 2.47016), respectively. Our proposed method again yields shorter predictive intervals than Nigm et al.'s method, whose two-sided 95% predictive intervals for Y(16) and Y(20) are (0.19304, 0.37278) and (0.32119, 3.26683), respectively.

Second, we consider another data set in which n = 20 items are put on test simultaneously, with ordered failure times:

0.0098, 0.0258, 0.0376, 0.0661, 0.0684, 0.0849, 0.0859, 0.1112, 0.1363, 0.1447, 0.1661, 0.1768, 0.1904, 0.1963, 0.2369, 0.2463, 0.2980, 0.3619, 0.3726, 0.4129.
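For the case where both parameters are unknown, the integrals I_jl of (30) have no closed form; the paper evaluates them with IMSL routines. As an illustrative sketch only (our own function names, with a crude fixed-grid quadrature in place of an adaptive integrator), the predictive survival (31) can be computed as below, using y(15) = 0.2369 from the second data set above and the prior values β = 2.3, γ = 2.0, δ = 4.0 stated in this example:

```python
import math

def survival_case2(v, yr, n, r, s, beta, gamma, delta, kmax=40.0, m=4000):
    """Pr(Z_(s) > v | y_(r)) from (31), c and a both unknown; the k-integrals
    I_jl of (30) are approximated on a uniform grid over (0, kmax]."""
    wl = [math.comb(r - 1, l) * (-1) ** l for l in range(r)]
    dstar = [(-1) ** j * math.comb(s - 1, j) / (n - r - s + j + 1) for j in range(s)]
    bj = [n - r - s + j + 1 for j in range(s)]
    h = kmax / m
    num = den = 0.0
    for i in range(1, m + 1):
        k = i * h
        lnB = math.log(1.0 + k * yr)
        D = math.log((1.0 + k * v) / (1.0 + k * yr))  # D(v) of (27)
        common = k ** (beta + delta) * math.exp(-k * gamma) / (1.0 + k * yr)
        for l in range(r):
            Hl = (l + n - r + 1) * lnB
            for j in range(s):
                c_jl = wl[l] * dstar[j]
                num += common * c_jl * (k + Hl + bj[j] * D) ** (-(delta + 1))
                den += common * c_jl * (k + Hl) ** (-(delta + 1))
    return num / den  # the quadrature weights cancel in the ratio

# illustrative values mirroring Example 2: n=20, r=15, beta=2.3, gamma=2.0, delta=4.0
args = dict(yr=0.2369, n=20, r=15, beta=2.3, gamma=2.0, delta=4.0)
for v in (0.25, 0.35, 0.50):
    print(v, survival_case2(v, s=1, **args))
```

Because the same grid is used for numerator and denominator, the quadrature error largely cancels in the ratio; a production implementation would still use an adaptive rule, as the paper does via IMSL.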

We assume that the failure times are distributed according to the Pareto distribution as in (2) with µ = 0, with both the scale parameter c and the shape parameter a unknown, and that prior information about k = 1/c and a


Table II. Predictive intervals and lengths of predictive intervals for Y(r+s) based on our proposed method and Nigm et al.'s method when n = 20, r = 15

s | predictive interval, proposed method | length L1s | predictive interval, Nigm et al.'s method | length L2s
1 | (0.23804, 0.43916) | 0.20112 | (0.23810, 0.44791) | 0.20981
2 | (0.24885, 0.61667) | 0.36782 | (0.24948, 0.63239) | 0.38291
3 | (0.27150, 0.88278) | 0.61128 | (0.27338, 0.90858) | 0.63520
4 | (0.30956, 1.40175) | 1.09219 | (0.31358, 1.44669) | 1.13311
5 | (0.38045, 3.25188) | 2.87143 | (0.38845, 3.36134) | 2.97289

Note: L1s and L2s are the lengths of the predictive intervals for our proposed method and Nigm et al.'s method, respectively, s = 1, 2, . . . , 5.

suggested that β = 2.3, γ = 2.0, and δ = 4.0 are appropriate for the joint prior density given in (24). We also suppose that the test is terminated when the first 15 (= r) of the ordered lifetimes are available. To compare the lengths of predictive intervals for Z(s) ≡ Y(r+s) obtained by our proposed method, based on Y(r), and by the method of Nigm et al. (2003), based on Y(1) < Y(2) < · · · < Y(r), we use both methods to find 95% predictive intervals for Z(s) ≡ Y(r+s), s = 1, 2, . . . , n − r, with n = 20 and r = 15. The results are listed in Table II.

From Table II, we see that our proposed method outperforms Nigm et al.'s method by yielding shorter predictive intervals for Y(r+s): 0.00869 ≤ L2s − L1s ≤ 0.10146 for s = 1, 2, . . . , 5 and r = 15, where L1s and L2s are the lengths of the predictive intervals for Y(r+s) based on our proposed method and Nigm et al.'s method, respectively.

4. Conclusions

Based only on the ordered observation Y(r) of the right type II censored sample Y(1) < Y(2) < · · · < Y(r) in a sample of size n from the Pareto distribution with c.d.f. as in (2) and µ = 0, Bayesian predictive bounds for the remaining lifetimes Y(r+s), s = 1, 2, . . . , n − r, are derived in two cases: the first when the scale parameter c is known, and the second when both the scale parameter c and the shape parameter a are unknown. The numerical examples show that the proposed method performs no worse than Nigm et al.'s method, even though it uses only the ordered observation Y(r). Furthermore, if some of Y(1) < · · · < Y(r−1) are missing or erroneous owing to recording or transcription mistakes, our proposed method is a more suitable choice than Nigm et al.'s method.


Finally, a one-sided 100(1 − α)% predictive interval (Y(r), U) for Z(s) ≡ Y(r+s) can also be obtained by solving the equation Pr(Z(s) > U | y_r) = α, s = 1, 2, . . . , n − r.

Acknowledgement

This research was partially supported by the National Science Council, R.O.C. (Plan No.: NSC93-2118-M-032-008).

References

AL-Hussaini, E. K., Nigm, A. M. & Jaheen, Z. F. (2001). Bayesian prediction based on finite mixtures of Lomax components model and type I censoring. Statistics 35(3): 259–268.
Arnold, B. C. & Press, S. J. (1989). Bayesian estimation and prediction for Pareto data. Journal of the American Statistical Association 84: 1079–1084.
Compaq Visual Fortran, Professional Edition V6.5 Intel Version and IMSL (2000). Compaq Computer Corporation.
David, H. A. (1981). Order Statistics, 2nd edn. New York: Wiley.
Engelhardt, M., Bain, L. J. & Shiue, W. K. (1986). Statistical analysis of a compound exponential failure model. Journal of Statistical Computation and Simulation 23: 229–315.
Johnson, N. L., Kotz, S. & Balakrishnan, N. (1994). Continuous Univariate Distributions, Vol. 1. New York: Wiley.
Lingappaiah, G. S. (1981). Sequential life testing with spacings, exponential model. IEEE Transactions on Reliability R-30(4): 370–374.
McNolty, F., Doyle, J. & Hansen, E. (1980). Properties of the mixed exponential failure process. Technometrics 22: 555–565.
Nigm, A. M., AL-Hussaini, E. K. & Jaheen, Z. F. (2003). Bayesian one-sample prediction of future observations under Pareto distribution. Statistics 37: 527–536.
Nigm, A. M. & Hamdy, H. I. (1987). Bayesian prediction bounds for the Pareto lifetime model. Communications in Statistics – Theory and Methods 16: 1761–1772.