American Journal of Mathematical and Management Sciences, 32:1–18, 2013
Copyright © Taylor & Francis Group, LLC
ISSN: 0196-6324 print / 2325-8454 online
DOI: 10.1080/01966324.2013.788391


BAYESIAN INFERENCE OF GENERALIZED EXPONENTIAL DISTRIBUTION BASED ON LOWER RECORD VALUES

SANKU DEY,1 TANUJIT DEY,2 MAHDI SALEHI,3 and JAFAR AHMADI3

1 Department of Statistics, St. Anthony's College, Shillong, 793001, Meghalaya, India
2 Department of Mathematics, College of William & Mary, Williamsburg, Virginia 23185, USA
3 Department of Statistics and Ordered and Spatial Data Center of Excellence, Ferdowsi University of Mashhad, P.O. Box 1159, Mashhad, 91775, Iran

SYNOPTIC ABSTRACT

This article addresses the problem of frequentist and Bayesian estimation of the parameters of the generalized exponential distribution (GED) using lower record values. The maximum likelihood estimates (MLEs) and the Bayes estimates based on lower records are derived for the unknown parameters of the GED. We consider the Bayes estimators of the unknown parameters under the assumption of gamma priors on both the shape and the scale parameters. The Bayes estimators cannot be obtained in explicit form. The Bayesian estimation of the parameters of the GED is studied with respect to both symmetric and asymmetric loss functions. We also derive two-sided Bayes probability intervals for the parameters and discuss Bayesian prediction intervals for future record values based on the observed record values. Monte Carlo simulations are performed to compare the performances of the proposed methods, and one dataset is analyzed for illustrative purposes.

Key Words and Phrases: generalized exponential distribution, maximum likelihood estimator, Bayes estimators, general entropy loss function, Bayes prediction, predictive interval.

1. Introduction

The generalized or exponentiated exponential distribution has a wide range of applications. A comprehensive amount of research has been done over the years from both frequentist and Bayesian perspectives. For cumulative development and review of this


distribution, which includes parameter estimation, prediction, and so forth, we refer readers to Gupta and Kundu (1999, 2001a, 2001b), Raqab (2002), Raqab and Ahsanullah (2001), Zheng (2002), and the references cited therein. In this article we work with the two-parameter generalized exponential distribution (GED). To define this distribution formally, a random variable X is said to follow a GED, written X ∼ GE(α, λ), if the probability density function (PDF) of X is


$$f(x;\alpha,\lambda) = \alpha\lambda\, e^{-\lambda x}\bigl(1 - e^{-\lambda x}\bigr)^{\alpha-1}, \qquad x \ge 0,\ \alpha,\lambda > 0, \tag{1}$$

where λ and α are the scale and shape parameters, respectively. The corresponding cumulative distribution function (CDF) is given by

$$F(x;\alpha,\lambda) = \bigl(1 - e^{-\lambda x}\bigr)^{\alpha}, \qquad x \ge 0. \tag{2}$$

The main contribution of this article is to present both frequentist and Bayesian methodology for estimating the unknown parameters of the GED based on lower records, and for predicting future observations from the current data. Record values are of great significance in many real-life situations, for example in industry, weather, seismology, athletics, economics, and life-testing experiments. Record values and their basic properties were first introduced by Chandler (1952), and Feller (1966) cited some examples of record values in connection with gambling problems. The theory of record values and their distributional properties has been studied extensively in the literature; see Ahsanullah (1988, 1995), Arnold and Balakrishnan (1989), Arnold, Balakrishnan, and Nagaraja (1992, 1998), Nevzorov (1987), and Kamps (1995) for reviews of various developments in the area of records. Let X1, X2, . . . be a sequence of independent and identically distributed (iid) random variables with CDF F(x) and PDF f(x), and let Yn = min{X1, X2, . . . , Xn}, n ≥ 1. The observation Xj is a lower record value of {Xn, n ≥ 1} if it is smaller than all the preceding observations; in other words, if Yj < Yj−1, j > 1. A significant amount of work has been done on statistical inference for several distributions based on lower record values. For the GED, Jaheen (2004) obtained the empirical Bayes estimate of the shape parameter based on lower record values


with known scale parameters. Sarhan and Tadj (2008) presented Bayesian inference for the unknown parameter(s) of the GE distribution based on upper record values; they studied Bayesian estimation of the parameters of the GE distribution with respect to a quadratic loss function, using uniform priors for both parameters. We use the maximum likelihood estimation (MLE) method as the frequentist part of the methodology for parameter estimation in Section 2; asymptotic confidence intervals based on the observed Fisher information matrix are also discussed in that section. Bayesian estimation of the unknown parameters is considered in Section 3. The Bayesian inference depends mainly on two ingredients: the choice of prior distribution for the parameters and the loss function used for the Bayesian computations. In this article we use gamma priors for both the scale and the shape parameters, and they are assumed to be independent of each other. A frequent choice of loss function for Bayesian inference is the squared error loss function, because it is symmetric, easy to implement, and tractable; an asymmetric loss function is also found useful in many situations. Here we use a more flexible loss function, the general entropy loss function (GELF), because different choices of the parameter involved in the loss function produce both symmetric and asymmetric losses. Details are presented in Section 3.1. The joint posterior distribution is complicated, and thus posterior sampling is not straightforward to implement. We therefore propose a Markov chain Monte Carlo (MCMC) technique, namely a Metropolis–Hastings (MH) algorithm, for posterior sampling. Besides Bayes estimates, we also obtain a two-sided Bayes probability interval as the Bayesian counterpart of the asymptotic confidence intervals in Section 3.2. Another important problem in life-testing experiments is the prediction of unknown observations based on currently available samples, sometimes referred to as informative samples. The prediction of a future record value based on given records is useful in many applications. Section 4 introduces the Bayes predictive distribution for a future record value given the informative records; besides estimating the future record, we also obtain a predictive interval, which is quite convenient in statistical applications. Section 5 describes numerical experiments via both a simulation study and a real-life data application. Section 6 contains a brief conclusion.
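Before turning to estimation, it may help to see how lower records arise from a GED sample. The following is a minimal illustrative sketch (not code from the paper): it draws GE(α, λ) variates by inverting the CDF (2) and then scans the sequence for lower records. The function names are ours.

```python
import numpy as np

def rgen_exp(n, alpha, lam, rng):
    """Draw n iid GE(alpha, lam) variates by inverting the CDF (2):
    F(x) = (1 - exp(-lam*x))**alpha  =>  x = -log(1 - u**(1/alpha)) / lam."""
    u = rng.uniform(size=n)
    return -np.log(1.0 - u ** (1.0 / alpha)) / lam

def lower_records(x):
    """Return the lower record values of the sequence x:
    the first observation and every later one smaller than all before it."""
    records, current_min = [], np.inf
    for value in x:
        if value < current_min:
            records.append(value)
            current_min = value
    return np.asarray(records)

rng = np.random.default_rng(2013)
sample = rgen_exp(1000, alpha=1.0, lam=2.0, rng=rng)
print(lower_records(sample))   # a short, decreasing sequence of record values
```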


2. Maximum Likelihood Estimation of α and λ


This section discusses the process of obtaining the MLEs of the parameters α and λ based on lower record values from the GED. Suppose we observe n lower record values X = {X_{L(1)}, X_{L(2)}, . . . , X_{L(n)}} arising from a sequence of iid GE(α, λ) random variables with PDF (1). Arnold et al. (1998) gives the likelihood function as

$$L(\alpha,\lambda \mid \mathbf{x}) = f\bigl(x_{L(n)};\alpha,\lambda\bigr)\prod_{i=1}^{n-1}\frac{f\bigl(x_{L(i)};\alpha,\lambda\bigr)}{F\bigl(x_{L(i)};\alpha,\lambda\bigr)}. \tag{3}$$

Substituting (1) and (2) in (3), we get

$$L(\alpha,\lambda \mid \mathbf{x}) = \alpha^{n}\lambda^{n}\, e^{\alpha\log\left(1-e^{-\lambda x_{L(n)}}\right)}\prod_{i=1}^{n}\frac{e^{-\lambda x_{L(i)}}}{1-e^{-\lambda x_{L(i)}}}. \tag{4}$$

The log-likelihood function is

$$\ell(\alpha,\lambda \mid \mathbf{x}) = n(\log\alpha + \log\lambda) - \lambda\sum_{i=1}^{n} x_{L(i)} + \alpha\log\bigl(1-e^{-\lambda x_{L(n)}}\bigr) - \sum_{i=1}^{n}\log\bigl(1-e^{-\lambda x_{L(i)}}\bigr). \tag{5}$$
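For readers who wish to check the derivatives below numerically, (5) translates directly into code. This is an illustrative sketch, not the authors' code; it assumes the records are stored in the order they were observed (decreasing), so that the last element is x_{L(n)}.

```python
import numpy as np

def loglik(alpha, lam, records):
    """Log-likelihood (5) of GE(alpha, lam) based on lower record values."""
    x = np.asarray(records, dtype=float)
    n, x_n = len(x), x[-1]          # x[-1] is the last (smallest) record, x_L(n)
    return (n * (np.log(alpha) + np.log(lam))
            - lam * x.sum()
            + alpha * np.log1p(-np.exp(-lam * x_n))
            - np.log1p(-np.exp(-lam * x)).sum())
```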

The resulting normal equations are given in (6) and (7), respectively:

$$\frac{\partial \ell}{\partial \alpha} = \frac{n}{\alpha} + \log\bigl(1-e^{-\lambda x_{L(n)}}\bigr) = 0, \tag{6}$$

$$\frac{\partial \ell}{\partial \lambda} = \frac{n}{\lambda} - \sum_{i=1}^{n} x_{L(i)} + \alpha\,\frac{x_{L(n)}\, e^{-\lambda x_{L(n)}}}{1-e^{-\lambda x_{L(n)}}} - \sum_{i=1}^{n}\frac{x_{L(i)}\, e^{-\lambda x_{L(i)}}}{1-e^{-\lambda x_{L(i)}}} = 0. \tag{7}$$


Solving equations (6) and (7), we get (8) and (9):

$$\hat{\alpha} = -\frac{n}{\log\bigl(1-e^{-\hat{\lambda} x_{L(n)}}\bigr)}, \tag{8}$$

$$\frac{n}{\hat{\lambda}} = \sum_{i=1}^{n} x_{L(i)} + \frac{n\, x_{L(n)}\, e^{-\hat{\lambda} x_{L(n)}}}{\bigl(1-e^{-\hat{\lambda} x_{L(n)}}\bigr)\log\bigl(1-e^{-\hat{\lambda} x_{L(n)}}\bigr)} + \sum_{i=1}^{n}\frac{x_{L(i)}\, e^{-\hat{\lambda} x_{L(i)}}}{1-e^{-\hat{\lambda} x_{L(i)}}}. \tag{9}$$

To obtain the MLEs of α and λ, note that analytic solutions of equations (8) and (9) are not possible. One can use a Newton–Raphson type method to obtain the MLE of λ, namely λ̂_MLE, from (9); once λ̂_MLE is obtained, it can be substituted into equation (8) to get the MLE of α, namely α̂_MLE. Because the MLE of the vector of unknown parameters θ = (α, λ) cannot be derived in closed form, it is not easy to derive the exact distributions of the MLEs, and hence we cannot obtain exact bounds for the parameters. The intent is therefore to use a large-sample approximation. It is known that the asymptotic distribution of the MLE θ̂ is (θ̂ − θ) → N2(0, I⁻¹(θ)) (see Lawless, 1982), where I⁻¹(θ), the inverse of the observed information matrix of the unknown parameters θ = (α, λ), can be written as

$$I^{-1}(\theta) = \begin{pmatrix} -\dfrac{\partial^2 \ell}{\partial\alpha^2} & -\dfrac{\partial^2 \ell}{\partial\alpha\,\partial\lambda} \\[2mm] -\dfrac{\partial^2 \ell}{\partial\lambda\,\partial\alpha} & -\dfrac{\partial^2 \ell}{\partial\lambda^2} \end{pmatrix}^{-1}_{(\alpha,\lambda)=(\hat\alpha_{MLE},\,\hat\lambda_{MLE})} = \begin{pmatrix} \operatorname{var}(\hat\alpha_{MLE}) & \operatorname{cov}(\hat\alpha_{MLE},\hat\lambda_{MLE}) \\ \operatorname{cov}(\hat\lambda_{MLE},\hat\alpha_{MLE}) & \operatorname{var}(\hat\lambda_{MLE}) \end{pmatrix}.$$
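Before listing the elements of I(θ), here is a sketch of the numerical route just described: bracket the root of the profile equation (9) and then substitute into (8). This is an illustrative implementation under the equations above, not the authors' code; the use of scipy.optimize.brentq and the default bracket are our assumptions and may need adjusting for other datasets.

```python
import numpy as np
from scipy.optimize import brentq

def ge_mle_from_lower_records(records, lam_bracket=(1e-6, 50.0)):
    """MLEs of (alpha, lam) for the GED from lower records, via (9) then (8)."""
    x = np.asarray(records, dtype=float)
    n, x_n = len(x), x[-1]                        # x_L(n): last (smallest) record

    def profile_score(lam):
        # Equation (9) written as  n/lam - RHS(lam) = 0.
        e_n = np.exp(-lam * x_n)
        log_n = np.log1p(-e_n)                    # log(1 - e^{-lam x_n}) < 0
        rhs = (x.sum()
               + n * x_n * e_n / ((1.0 - e_n) * log_n)
               + np.sum(x * np.exp(-lam * x) / (1.0 - np.exp(-lam * x))))
        return n / lam - rhs

    lam_hat = brentq(profile_score, *lam_bracket)
    alpha_hat = -n / np.log1p(-np.exp(-lam_hat * x_n))   # equation (8)
    return alpha_hat, lam_hat

# Demonstration input: the lower records 23, 7, 5, 3, 1 analyzed in Section 5.
print(ge_mle_from_lower_records([23.0, 7.0, 5.0, 3.0, 1.0]))
```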


The elements of I(θ) are as follows:

$$\frac{\partial^2 \ell}{\partial\alpha^2} = -\frac{n}{\alpha^2},$$

$$\frac{\partial^2 \ell}{\partial\alpha\,\partial\lambda} = \frac{\partial^2 \ell}{\partial\lambda\,\partial\alpha} = \frac{x_{L(n)}\, e^{-\lambda x_{L(n)}}}{1-e^{-\lambda x_{L(n)}}},$$

$$\frac{\partial^2 \ell}{\partial\lambda^2} = -\frac{n}{\lambda^2} - \alpha\,\frac{x_{L(n)}^{2}\, e^{-\lambda x_{L(n)}}}{\bigl(1-e^{-\lambda x_{L(n)}}\bigr)^{2}} + \sum_{i=1}^{n}\frac{x_{L(i)}^{2}\, e^{-\lambda x_{L(i)}}}{\bigl(1-e^{-\lambda x_{L(i)}}\bigr)^{2}}.$$

Using the above elements, one can derive approximate 100(1 − τ)% confidence intervals for the parameters θ = (α, λ) as

$$\hat\alpha_{MLE} \pm z_{\tau/2}\sqrt{\operatorname{var}(\hat\alpha_{MLE})} \quad\text{and}\quad \hat\lambda_{MLE} \pm z_{\tau/2}\sqrt{\operatorname{var}(\hat\lambda_{MLE})},$$

where z_{τ/2} is the upper (τ/2)th percentile of the standard normal distribution.

3. Bayesian Estimation

The goal of this section is to obtain Bayesian estimates of the unknown parameters of the GED.

3.1 Prior and Loss Function

In many practical situations, information about the shape and the scale of the sampling distribution is available independently. Because both α and λ are nonnegative, we assume two independent gamma priors for α and λ. Note that these are not conjugate priors when both α and λ are unknown; if λ is known, a gamma distribution is the conjugate prior for α. The priors for the parameters α and λ are

$$g(\alpha) \propto \alpha^{d-1} e^{-c\alpha}, \qquad \alpha > 0, \tag{10}$$

$$g(\lambda) \propto \lambda^{b-1} e^{-a\lambda}, \qquad \lambda > 0. \tag{11}$$
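A small practical note, ours rather than the paper's: (10) and (11) are written with rate parameters c and a. If one codes them with scipy, scipy.stats.gamma is parameterized by shape and scale = 1/rate, so the mapping is

```python
from scipy import stats

# Hyperparameters as chosen in the simulation study of Section 5: a = c = 4, b = d = 1.
a, b, c, d = 4.0, 1.0, 4.0, 1.0
prior_alpha = stats.gamma(d, scale=1.0 / c)   # density proportional to alpha^(d-1) * exp(-c*alpha)
prior_lam   = stats.gamma(b, scale=1.0 / a)   # density proportional to lam^(b-1)   * exp(-a*lam)
print(prior_alpha.mean(), prior_lam.mean())   # prior means d/c and b/a
```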

Here all the hyperparameters a, b, c, and d are known and nonnegative. Because of its flexibility, the assumption of independent gamma priors on the shape and scale parameters of a two-parameter lifetime distribution is reasonable; the Jeffreys (noninformative) prior is a special case of it. Independent gamma priors have been used in the Bayesian analysis of the Weibull distribution (Berger & Sun, 1993; Kundu, 2008). For Bayesian analysis of the gamma distribution, Son and Oh (2006) assumed a gamma prior on the scale parameter and an independent noninformative prior on the shape parameter, which is a special case of the gamma distribution. Based on the above priors, we obtain Bayes estimates and the corresponding predictive future observation. The loss function we consider for Bayes estimation is the general entropy loss function of the form

$$L(\hat\theta,\theta) = w\left[\left(\frac{\hat\theta}{\theta}\right)^{p} - p\ln\!\left(\frac{\hat\theta}{\theta}\right) - 1\right], \qquad w > 0,\ p \neq 0. \tag{12}$$

It was proposed by Calabria and Pulcini (1996), and its minimum occurs at θ̂ = θ. Because the value of w plays no role in the optimization of the loss function, we assume w = 1 without loss of generality. This loss function is a generalization of the entropy loss function used by several authors; for example, Dey and Liu (1992) used it with the shape parameter p = 1. If we replace ln(θ̂/θ) by (θ̂ − θ), we get the linear exponential (LINEX) loss function of the form w[exp(p(θ̂ − θ)) − p(θ̂ − θ) − 1] proposed by Zellner (1986). Following Calabria and Pulcini (1996), the Bayes estimators for the parameters α and λ of the GED under the general entropy loss function may be defined as

$$\hat\alpha_{GB} = \bigl[E(\alpha^{-p}\mid\mathbf{x})\bigr]^{-1/p} \tag{13}$$

and

$$\hat\lambda_{GB} = \bigl[E(\lambda^{-p}\mid\mathbf{x})\bigr]^{-1/p}, \tag{14}$$


respectively, provided E(α^{-p}|x) and E(λ^{-p}|x) exist and are finite. The proper choice of p is a challenging task for the analyst because it reflects the asymmetry of the loss function in a given practical situation; Calabria and Pulcini (1996) discussed how to find a solution for this type of problem.

Remarks.
1. When p = 1, the Bayes estimators (13) and (14) coincide with the Bayes estimators under the weighted squared error loss function (θ̂ − θ)²/θ. The corresponding Bayes estimator of θ is θ̂_WSB = [E(θ^{-1}|x)]^{-1}.
2. When p = −1, the Bayes estimators (13) and (14) coincide with the Bayes estimators under the squared error loss function (θ̂ − θ)². The corresponding Bayes estimator is θ̂_SB = E(θ|x).
3. When p = −2, the Bayes estimators (13) and (14) coincide with the Bayes estimators under the precautionary loss function (θ̂ − θ)²/θ̂. The corresponding Bayes estimator is θ̂_PB = [E(θ²|x)]^{1/2}.

3.2 Posterior Sampling

For Bayesian estimation, we consider an MCMC implementation of the posterior sampling, for which the joint posterior distribution is needed. Combining the prior densities (10) and (11) with the likelihood function (4) and using Bayes' theorem, the joint posterior distribution is given by

$$\pi(\alpha,\lambda\mid\mathbf{x}) = \frac{1}{J_0}\,\alpha^{n+d-1}\lambda^{n+b-1} e^{-a\lambda-\alpha c}\,\bigl(1-e^{-\lambda x_{L(n)}}\bigr)^{\alpha}\prod_{i=1}^{n}\frac{e^{-\lambda x_{L(i)}}}{1-e^{-\lambda x_{L(i)}}}, \tag{15}$$


where

$$J_0 = \int_0^{\infty}\!\!\int_0^{\infty} \alpha^{n+d-1}\lambda^{n+b-1} e^{-a\lambda-\alpha c}\,\bigl(1-e^{-\lambda x_{L(n)}}\bigr)^{\alpha}\prod_{i=1}^{n}\frac{e^{-\lambda x_{L(i)}}}{1-e^{-\lambda x_{L(i)}}}\,d\alpha\,d\lambda.$$

Integrating the joint posterior distribution with respect to the other parameter, the marginal posterior PDFs of α and λ are given by

$$\pi_1(\alpha\mid\mathbf{x}) \propto \alpha^{n+d-1} e^{-\alpha c}\int_0^{\infty} \lambda^{n+b-1} e^{-a\lambda}\,\bigl(1-e^{-\lambda x_{L(n)}}\bigr)^{\alpha}\prod_{i=1}^{n}\frac{e^{-\lambda x_{L(i)}}}{1-e^{-\lambda x_{L(i)}}}\,d\lambda \tag{16}$$

and

$$\pi_2(\lambda\mid\mathbf{x}) \propto \lambda^{n+b-1} e^{-a\lambda}\prod_{i=1}^{n}\frac{e^{-\lambda x_{L(i)}}}{1-e^{-\lambda x_{L(i)}}}\int_0^{\infty} \alpha^{n+d-1} e^{-\alpha c}\,\bigl(1-e^{-\lambda x_{L(n)}}\bigr)^{\alpha}\,d\alpha. \tag{17}$$

Because the marginal posterior PDFs of α and λ are not available in closed form, popular methods such as the Gibbs sampler are not directly applicable for posterior sampling. Instead, we use the more general and extremely powerful Metropolis–Hastings (MH) algorithm. This algorithm also constructs a Markov chain, but it does not require the full conditional distributions. The MH algorithm uses a proposal distribution; the choice of proposal is very flexible, but the chain it generates must satisfy some regularity conditions such as irreducibility, positive recurrence, and aperiodicity. A proposal distribution with the same support as the target distribution will usually satisfy these regularity conditions. For our purpose, we use a chi-square distribution as the proposal distribution. The details


of the MH algorithm are available in any standard Bayesian computation textbook.

3.3 Bayes Estimates Under General Entropy Loss Function

Once the posterior samples are generated using the MH algorithm, the Bayes estimates under the general entropy loss (GEL) function can be calculated as

$$\hat\alpha_{Bayes} = \bigl[E(\alpha^{-p}\mid\mathbf{x})\bigr]^{-1/p} \tag{18}$$

and

$$\hat\lambda_{Bayes} = \bigl[E(\lambda^{-p}\mid\mathbf{x})\bigr]^{-1/p}, \tag{19}$$

respectively, provided E(α^{-p}|x) and E(λ^{-p}|x) exist and are finite. One can also find the Bayes estimates by using

$$\hat\alpha_{Bayes} = \left(\frac{J_\alpha}{J_0}\right)^{-1/p} \quad\text{and}\quad \hat\lambda_{Bayes} = \left(\frac{J_\lambda}{J_0}\right)^{-1/p},$$

where

$$J_\alpha = \int_0^{\infty}\!\!\int_0^{\infty} \alpha^{n+d-p-1}\lambda^{n+b-1}\, e^{-a\lambda}\, e^{-\alpha\left(c-\ln\left(1-e^{-\lambda x_{L(n)}}\right)\right)}\prod_{i=1}^{n}\frac{e^{-\lambda x_{L(i)}}}{1-e^{-\lambda x_{L(i)}}}\,d\alpha\,d\lambda$$

and

$$J_\lambda = \int_0^{\infty}\!\!\int_0^{\infty} \alpha^{n+d-1}\lambda^{n+b-p-1}\, e^{-a\lambda}\, e^{-\alpha\left(c-\ln\left(1-e^{-\lambda x_{L(n)}}\right)\right)}\prod_{i=1}^{n}\frac{e^{-\lambda x_{L(i)}}}{1-e^{-\lambda x_{L(i)}}}\,d\alpha\,d\lambda.$$

None of these integrals has a closed form, and numerical techniques are needed to evaluate them.
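In practice the posterior expectations in (18) and (19) are evaluated from the MCMC draws themselves. The sketch below is illustrative only: it uses a random-walk MH move on the log scale (a simpler choice than the chi-square proposal described in Section 3.2, and not the authors' implementation), the hyperparameters of Section 5, and the lower records of Example 2 as a demonstration input.

```python
import numpy as np

def log_posterior(alpha, lam, records, a=4.0, b=1.0, c=4.0, d=1.0):
    """Log of the joint posterior (15), up to the normalizing constant J0.
    records must be in observed (decreasing) order, so records[-1] is x_L(n)."""
    x = np.asarray(records, dtype=float)
    n, x_n = len(x), x[-1]
    if alpha <= 0 or lam <= 0:
        return -np.inf
    return ((n + d - 1) * np.log(alpha) + (n + b - 1) * np.log(lam)
            - a * lam - c * alpha
            + alpha * np.log1p(-np.exp(-lam * x_n))
            - lam * x.sum()
            - np.log1p(-np.exp(-lam * x)).sum())

def mh_sample(records, n_draws=10000, burn_in=5000, step=0.25, seed=0):
    """Random-walk MH on (log alpha, log lam)."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)                                        # (log alpha, log lam)
    lp = log_posterior(*np.exp(theta), records) + theta.sum()  # + log-Jacobian of the log transform
    draws = np.empty((n_draws, 2))
    for i in range(n_draws + burn_in):
        prop = theta + step * rng.normal(size=2)
        lp_prop = log_posterior(*np.exp(prop), records) + prop.sum()
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        if i >= burn_in:
            draws[i - burn_in] = np.exp(theta)
    return draws

def gel_estimate(draws, p):
    """General entropy loss estimates (13)-(14)/(18)-(19): [E(theta^-p | x)]^(-1/p), p != 0."""
    return np.mean(draws ** (-p), axis=0) ** (-1.0 / p)

records = [23.0, 7.0, 5.0, 3.0, 1.0]        # lower records used in Example 2
post = mh_sample(records)
print(gel_estimate(post, p=1))               # weighted squared error loss (p = 1)
print(post.mean(axis=0))                     # squared error loss (p = -1)
```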


3.4 Two-Sided Bayes Probability Interval


Here we provide a two-sided Bayes probability interval (TBPI) as opposed to the confidence intervals. The Bayesian method for interval estimation is much more direct than the frequentist method based on confidence intervals. Once the marginal posterior distribution of α is obtained, a symmetric 100(1 − γ)% TBPI estimate of α, denoted by [α_L, α_U], can be obtained by solving the following two equations (for details, see Martz and Waller, 1982, pp. 208–209):

$$\int_0^{\alpha_L}\pi_1(w\mid\mathbf{x})\,dw = \frac{\gamma}{2} \quad\text{and}\quad \int_{\alpha_U}^{\infty}\pi_1(w\mid\mathbf{x})\,dw = \frac{\gamma}{2} \tag{20}$$

for the limits α_L and α_U. Similarly, a symmetric 100(1 − γ)% TBPI estimate of λ, denoted by [λ_L, λ_U], can be obtained by solving the following two equations:

$$\int_0^{\lambda_L}\pi_2(w\mid\mathbf{x})\,dw = \frac{\gamma}{2} \quad\text{and}\quad \int_{\lambda_U}^{\infty}\pi_2(w\mid\mathbf{x})\,dw = \frac{\gamma}{2} \tag{21}$$
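With MCMC output of the kind described in Section 3.2, the integrals in (20) and (21) are usually approximated by empirical quantiles of the posterior draws. A minimal sketch of this approximation (the placeholder draws below stand in for real MCMC output and are not the paper's results):

```python
import numpy as np

def tbpi(draws, gamma=0.05):
    """Symmetric two-sided Bayes probability interval from posterior draws:
    the gamma/2 and 1 - gamma/2 empirical quantiles approximate the limits in (20)-(21)."""
    lower, upper = np.quantile(draws, [gamma / 2.0, 1.0 - gamma / 2.0], axis=0)
    return lower, upper

# Usage with any (n_draws, 2) array of posterior samples of (alpha, lambda):
rng = np.random.default_rng(1)
fake_draws = rng.gamma(shape=2.0, scale=0.5, size=(10000, 2))   # placeholder draws only
print(tbpi(fake_draws))
```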

for the limits λ_L and λ_U. Numerical computation is needed to solve for the limits of the parameters.

4. Bayes Prediction of Future Records

Predicting future record values is a problem of notable interest. Several authors have studied prediction problems for record values: Dunsmore (1983) considered tolerance regions for records as well as the Bayes predictive distribution based on the exponential distribution; Arnold et al. (1998) studied the prediction of future record values related to rain or snow; and Berred (1998) studied the magnitude of a future record value based on a given sample of records. In this article we introduce the Bayes predictive distribution together with Bayes prediction bounds for lower records, using the generalized exponential distribution and a more general loss function. Suppose that we have n lower records XL(1) = x1, XL(2) = x2, . . . , XL(n) = xn from the GED. Based on such a record sample,


Bayesian prediction can be obtained for the s-th lower record value, 1 < n < s. Let Y = X_{L(s)} be the s-th lower record value. The conditional PDF of Y, given the parameters θ = (α, λ), is given (see Ahsanullah, 1995) by

$$f(y\mid\alpha,\lambda) = \frac{\bigl[H(y)-H(x_n)\bigr]^{s-n-1}}{\Gamma(s-n)}\,\frac{f(y)}{F(x_n)}, \qquad 0 < y < x_n < \infty, \tag{22}$$

where H(·) = −log F(·). Using the PDF (1) and the CDF (2) of the GE distribution, (22) becomes

$$f(y\mid\alpha,\lambda) = \frac{\alpha^{s-n}\,\lambda\, e^{-\lambda y}\bigl(1-e^{-\lambda y}\bigr)^{\alpha-1}}{\Gamma(s-n)\,\bigl(1-e^{-\lambda x_n}\bigr)^{\alpha}}\left[\log\!\left(\frac{1-e^{-\lambda x_n}}{1-e^{-\lambda y}}\right)\right]^{s-n-1}, \qquad 0 < y < x_n < \infty. \tag{23}$$

The Bayes predictive density function of Y = X_{L(s)}, given the past n lower records, can be obtained (see Arnold et al., 1998) as

$$f^{*}(y\mid\mathbf{x}) = \int_0^{\infty}\!\!\int_0^{\infty} f(y\mid\alpha,\lambda)\,\pi(\alpha,\lambda\mid\mathbf{x})\,d\alpha\,d\lambda, \tag{24}$$

where f(y|α, λ) and π(α, λ|x) are given, respectively, as in (23) and (15). The Bayes point estimate of the s-th lower record value under the general entropy loss function is given by $y^{*} = [E(y^{-p}\mid\mathbf{x})]^{-1/p}$, where $E(y^{-p}\mid\mathbf{x}) = \int_0^{x_n} y^{-p} f^{*}(y\mid\mathbf{x})\,dy$. The Bayesian prediction bounds for Y = X_{L(s)} are obtained by evaluating $\Pr(Y \ge \eta\mid\mathbf{x}) = \int_{\eta}^{x_n} f^{*}(y\mid\mathbf{x})\,dy$ for some given value of η. The 100(1 − γ)% predictive interval for Y = X_{L(s)} is obtained by evaluating both the lower limit L(x) and the upper limit U(x) that satisfy Pr(Y ≥ L(x)|x) = 1 − γ/2 and Pr(Y ≥ U(x)|x) = γ/2. As in the previous section, the lower and upper limits of the future record value can be computed numerically.
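The predictive density (24) can be approximated by averaging (23) over posterior draws, after which the prediction bounds follow by one-dimensional numerical integration and root finding. The sketch below is illustrative only: the placeholder draws stand in for MCMC output such as that of Section 3.2, the value of x_n is taken from Example 1 purely as a demonstration input, and the brackets are assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import gammaln

def record_cond_pdf(y, alpha, lam, x_n, k):
    """Conditional density (23) of the s-th lower record given X_L(n) = x_n, with k = s - n."""
    u_y, u_n = -np.expm1(-lam * y), -np.expm1(-lam * x_n)   # 1 - exp(-lam*t)
    tail = 0.0 if k == 1 else (k - 1) * np.log(np.log(u_n / u_y))
    log_f = (k * np.log(alpha) + np.log(lam) - lam * y
             + (alpha - 1.0) * np.log(u_y) - alpha * np.log(u_n)
             - gammaln(k) + tail)
    return np.exp(log_f)

def predictive_pdf(y, draws, x_n, k):
    """Monte Carlo approximation of the Bayes predictive density (24)."""
    return np.mean([record_cond_pdf(y, a, l, x_n, k) for a, l in draws])

def upper_prob(eta, draws, x_n, k):
    """Pr(Y >= eta | x): integral of the predictive density from eta to x_n."""
    return quad(predictive_pdf, eta, x_n, args=(draws, x_n, k))[0]

def predictive_interval(draws, x_n, k=1, gamma=0.05):
    """100(1 - gamma)% predictive interval for the (n + k)-th lower record."""
    lo = brentq(lambda e: upper_prob(e, draws, x_n, k) - (1 - gamma / 2), 1e-8, x_n)
    up = brentq(lambda e: upper_prob(e, draws, x_n, k) - gamma / 2, 1e-8, x_n)
    return lo, up

# Demonstration with placeholder posterior draws (NOT the paper's MCMC output):
rng = np.random.default_rng(7)
fake_draws = np.column_stack([rng.gamma(4.0, 0.5, 300), rng.gamma(4.0, 0.5, 300)])
print(predictive_interval(fake_draws, x_n=0.455782, k=1))
```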


5. Numerical Experiments and Data Analysis


In this section we present some experimental results obtained by examining the behavior of the proposed methods for different numbers of lower record values.

1. We considered three different sample sizes, n = 5, 10, 15, and used α = 1 and λ = 2.
2. We computed the MLEs of the unknown parameters of the GE distribution as well as the confidence intervals, using the method described in Section 2.
3. All results of the simulation study are based on 1,000 iterations. The performances of the estimates are compared with respect to the average biases and the mean squared errors (MSEs).
4. For Bayesian computation, we used informative priors for both α and λ; the hyperparameter values chosen for (10) and (11) were a = c = 4 and b = d = 1. We used three different values, p = −2, −1, 1, in the general entropy loss function (12) to get three different Bayes estimates under three different loss functions. As mentioned in the Remarks earlier, these three choices allow us to compare the performance of the Bayes estimates under both symmetric and asymmetric loss functions.
5. The MCMC posterior samples were generated from both (16) and (17) using the MH algorithm. For computing Bayes estimates, we used 10,000 samples after 5,000 burn-in samples. The acceptance ratio, averaged over the 1,000 iterations, was around 69% for λ and 82% for α. This ensured that the choice of proposal distribution worked reasonably well in posterior sampling.
6. To investigate the convergence of the MCMC sampling via the MH algorithm, we used the Gelman–Rubin multiple-sequence diagnostic, computed with the R package coda. For each case of α and λ, we ran two different chains with two distinct starting values of the Monte Carlo samples and obtained the potential scale reduction factor (psrf) for α and λ. If the psrf values are close to 1, the samples are taken to have converged to the stationary distribution. For both cases, using 10,000 samples, we obtained psrf values equal to 1, which was sufficient for convergence of the MCMC sampling procedure. (A minimal Python analogue of this check is sketched after this list.)
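Item 6 relies on the gelman.diag function of the R package coda. For readers who want to replicate the check without R, the standard two-chain potential scale reduction factor can be computed as follows; this is the textbook formula, not the paper's code, and the toy chains are only for illustration.

```python
import numpy as np

def psrf(chain1, chain2):
    """Potential scale reduction factor (Gelman-Rubin) for two chains of equal length."""
    chains = np.vstack([chain1, chain2])          # shape (m, n): m chains of n draws
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)               # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()         # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(42)
c1, c2 = rng.normal(size=10000), rng.normal(size=10000)   # toy chains
print(psrf(c1, c2))                                        # close to 1 suggests convergence
```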


TABLE 1. Average bias of the estimates of α and λ and their associated MSEs (in parentheses)

Estimator   n = 5             n = 10            n = 15
α̂_MLE      0.6894 (0.6210)   0.5408 (0.5702)   0.4215 (0.4789)
α̂_WSB      0.4262 (0.1931)   0.1165 (0.0348)   0.0962 (0.0236)
α̂_SB       0.5394 (0.2987)   0.2294 (0.0732)   0.1521 (0.0461)
α̂_PL       0.3719 (0.1520)   0.1084 (0.0317)   0.0611 (0.0204)
λ̂_MLE      0.8546 (2.0745)   0.6589 (1.8667)   0.4287 (1.5482)
λ̂_WSB      0.6337 (0.4507)   0.5423 (0.3425)   0.2956 (0.1745)
λ̂_SB       0.6678 (0.5143)   0.5948 (0.3812)   0.3126 (0.2549)
λ̂_PL       0.6128 (0.4378)   0.5149 (0.3257)   0.2106 (0.1342)

Note: The subscript WSB denotes the Bayes estimate under the weighted squared error loss function, SB the squared error loss function, and PL the precautionary loss function.

Tables 1 and 2 summarize the results of the simulation study. Table 1 shows that the Bayes estimates perform better than the MLEs in terms of bias and MSE. The Bayes estimates under the precautionary loss function (an asymmetric loss function) exhibit better performance than those under the symmetric loss functions. Table 2 reports the average lengths and coverage percentages of the confidence intervals and the two-sided Bayes probability intervals. Note that the average length of the TBPIs is shorter than that of the CIs, with competing coverage.

TABLE 2. Average length of the confidence interval (CI) and two-sided Bayes probability interval (TBPI) of the MLEs and the Bayes estimators of α and λ, together with the 95% coverage probabilities (in parentheses), based on three different sample sizes

n     CI_α             TBPI_α           CI_λ             TBPI_λ
5     2.1557 (93.8%)   1.2470 (94.1%)   4.0042 (97.5%)   3.6286 (96.9%)
10    1.7861 (95.8%)   1.0301 (95.9%)   2.7136 (97.6%)   2.4513 (97.2%)
15    1.7099 (96.1%)   0.8225 (96.3%)   2.3245 (97.5%)   2.0998 (97.3%)

Example 1. For illustration of the Bayes prediction, we create a synthetic dataset of five lower record values drawn from the GED with α = 1 and λ = 2. The record values are 1.118904, 0.7240964,


0.6497, 0.5960404, 0.455782. We use the same informative prior (as used before) with the hyperparameters a = c = 4, b = d = 1 for Bayesian prediction. We are interested in predicting the sixth lower record value based on the five observed lower records, as well as the 95% predictive interval for the sixth record. The predicted value is 0.2458 under the weighted squared error loss function, 0.2803 under the squared error loss function, and 0.3162 under the precautionary loss function. The 95% predictive interval for the sixth lower record is (0.1094, 0.4425).

Example 2. Consider the dataset from Linhart and Zucchini (1986) that represents the failure times of the air-conditioning system of an airplane: 23, 261, 87, 7, 120, 14, 62, 47, 225, 71, 246, 21, 42, 20, 5, 12, 120, 11, 3, 14, 71, 11, 14, 11, 16, 90, 1, 16, 52, 95. Gupta and Kundu (2003) showed that the GED fits these data reasonably well. From this complete dataset we take the set of lower record values of size 5 for illustration: 23, 7, 5, 3, 1.

1. The MLEs of α and λ are 1.9489 and 0.0800, respectively. The 95% confidence intervals of α and λ are (0.2405, 3.6574) and (0.0205, 0.1394), respectively.
2. For Bayes estimation, because there is no prior information available for this dataset, one could use the noninformative prior obtained by setting all hyperparameter values equal to zero. As in the simulation study, however, we use an informative prior with the hyperparameters a = c = 4, b = d = 1. The Bayes estimates are based on 10,000 MCMC samples after 5,000 burn-in samples.
3. The Bayes estimates of α under the weighted squared error, squared error, and precautionary loss functions are 0.9436, 0.5419, and 1.1374, respectively. The 95% two-sided Bayes probability interval of α is (0.1877, 2.2578).
4. The Bayes estimates of λ under the weighted squared error, squared error, and precautionary loss functions are 0.0710, 0.0609, and 0.0754, respectively. The 95% two-sided Bayes probability interval of λ is (0.0324, 0.1149).

6. Conclusion

This article describes the estimation of the unknown parameters of the GED based on lower record values. We have compared the MLEs


and different Bayes estimators with respect to the average biases and the mean squared errors. We have also compared the confidence intervals obtained from the asymptotic distribution of the MLEs with the two-sided Bayes probability intervals obtained from the posterior distributions. The simulation study shows that the Bayes estimates perform better than the MLEs with regard to both bias and mean squared error. The two-sided Bayes probability intervals are also shorter than the confidence intervals, with competitive coverage percentages of the true parameters. The computation of maximum likelihood estimates requires solving optimization problems; several optimization techniques are available, but it is sometimes challenging to settle on an effective and robust technique for the problem at hand, and the Bayesian approach can be a useful alternative in such circumstances. In addition to parameter estimation, we have considered Bayes prediction of the future lower record together with its predictive interval. For the numerical examples, we have used informative priors. Because Bayesian inference is sensitive to the choice of hyperparameters of informative priors, one needs to be cautious when choosing their values.

Acknowledgments

The authors would like to thank the referees and the editor who helped to substantially improve this article.

References

Ahsanullah, M. (1988). Introduction to record statistics. Needham Heights, MA: Ginn Press.
Ahsanullah, M. (1995). Record statistics. Commack, NY: Nova Science Publishers.
Arnold, B. C., & Balakrishnan, N. (1989). Relations, bounds and approximations for order statistics. Lecture Notes in Statistics (Vol. 53). New York, NY: Springer-Verlag.
Arnold, B. C., Balakrishnan, N., & Nagaraja, H. N. (1992). A first course in order statistics. New York, NY: John Wiley & Sons, Inc.
Arnold, B. C., Balakrishnan, N., & Nagaraja, H. N. (1998). Records. New York, NY: John Wiley & Sons, Inc.


Berger, J. O., & Sun, D. (1993). Bayesian analysis for the Poly-Weibull distribution. Journal of the American Statistical Association, 88, 1412–1418.
Berred, A. M. (1998). Prediction of record values. Communications in Statistics—Theory and Methods, 27, 2221–2240.
Calabria, R., & Pulcini, G. (1996). Point estimation under asymmetric loss functions for left-truncated exponential samples. Communications in Statistics—Theory and Methods, 25(3), 585–600.
Chandler, K. N. (1952). The distribution and frequency of record values. Journal of the Royal Statistical Society: Series B, 14(2), 220–228.
Dey, D. K., & Liu, P. L. (1992). On comparison of estimators in a generalized life model. Microelectronics Reliability, 32, 207–221.
Dunsmore, I. R. (1983). The future occurrence of records. Annals of the Institute of Statistical Mathematics, 35, 267–277.
Feller, W. (1966). An introduction to probability theory and its applications (Vol. 2). New York, NY: John Wiley & Sons, Inc.
Gupta, R. D., & Kundu, D. (1999). Generalized exponential distribution. Australian & New Zealand Journal of Statistics, 41, 173–188.
Gupta, R. D., & Kundu, D. (2001a). Exponentiated exponential distribution, an alternative to gamma and Weibull distributions. Biometrical Journal, 43(1), 117–130.
Gupta, R. D., & Kundu, D. (2001b). Generalized exponential distributions: Different methods of estimation. Journal of Statistical Computation and Simulation, 69(4), 315–338.
Gupta, R. D., & Kundu, D. (2003). Discriminating between Weibull and generalized exponential distributions. Computational Statistics and Data Analysis, 43, 179–196.
Jaheen, Z. F. (2004). Empirical Bayes inference for generalized exponential distribution based on records. Communications in Statistics—Theory and Methods, 33(8), 1851–1861.
Kamps, U. (1995). A concept of generalized order statistics. Journal of Statistical Planning and Inference, 48, 1–23.
Kundu, D. (2008). Bayesian inference and reliability sampling plan for Weibull distribution. Technometrics, 50, 144–154.
Lawless, J. F. (1982). Statistical models and methods for lifetime data. New York, NY: John Wiley & Sons, Inc.
Linhart, H., & Zucchini, W. (1986). Model selection. New York, NY: Wiley.
Martz, H. F., & Waller, R. A. (1982). Bayesian reliability analysis. New York, NY: Wiley.
Nevzorov, B. V. (1987). Records. Probability Theory and Applications, 32(2), 201–228.
Raqab, M. Z. (2002). Inference for generalized exponential distribution based on record statistics. Journal of Statistical Planning and Inference, 104, 339–350.
Raqab, M. Z., & Ahsanullah, M. (2001). Estimation of the location and scale parameters of the generalized exponential distribution based on order statistics. Journal of Statistical Computation and Simulation, 69, 109–124.


Sarhan, M. A., & Tadj, L. (2008). Inference using record values from generalized exponential distribution with application. Bulletin of Statistics & Economics, 2, 72–85.
Son, Y. S., & Oh, M. (2006). Bayes estimation of the two-parameter gamma distribution. Communications in Statistics—Simulation and Computation, 35, 285–293.
Zellner, A. (1986). Bayesian estimation and prediction using asymmetric loss functions. Journal of the American Statistical Association, 81, 446–451.
Zheng, G. (2002). Fisher information matrix in type-II censored data from exponentiated exponential family. Biometrical Journal, 44, 353–357.
