Hindawi Publishing Corporation Journal of Applied Mathematics Volume 2014, Article ID 198951, 10 pages http://dx.doi.org/10.1155/2014/198951
Research Article

The Beta-Lindley Distribution: Properties and Applications

Faton Merovci (1) and Vikas Kumar Sharma (2)
(1) Department of Mathematics, University of Prishtina "Hasan Prishtina", 10000 Prishtinë, Kosovo
(2) Department of Statistics, Banaras Hindu University, Varanasi 221005, India

Correspondence should be addressed to Faton Merovci; [email protected]

Received 2 May 2014; Revised 18 July 2014; Accepted 18 July 2014; Published 17 August 2014

Academic Editor: Zhihua Zhang

Copyright © 2014 F. Merovci and V. K. Sharma. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We introduce a new continuous distribution, the beta-Lindley distribution, which extends the Lindley distribution, and provide a comprehensive mathematical treatment of it. We derive the moment generating function and the kth moment, generalizing some results in the literature. Expressions for the density, moment generating function, and kth moment of the order statistics are also obtained. We further discuss estimation of the unknown model parameters in both the classical and the Bayesian setup. The usefulness of the new model is illustrated by means of two real data sets. We hope that the distribution proposed here will serve as an alternative to existing models for positive real data in many areas.
1. Introduction

In many applied sciences, such as medicine, engineering, and finance, modelling and analysing lifetime data are crucial. Several lifetime distributions have been used to model such data, and the quality of a statistical analysis depends heavily on the assumed probability model. Because of this, considerable effort has been expended in developing large classes of standard probability distributions, along with relevant statistical methodologies. However, there remain many important problems where real data do not follow any of the classical or standard probability models.

Several beta-generalized distributions have been discussed in recent years. Eugene et al. [1], Nadarajah and Gupta [2], Nadarajah and Kotz [3], and Nadarajah and Kotz [4] proposed the beta-normal, beta-Fréchet, beta-Gumbel, and beta-exponential distributions, respectively. Jones [5] discussed this general beta family, motivated by its order statistics, and showed that it has interesting distributional properties and potential for exciting statistical applications. More recently, Barreto-Souza et al. [6] proposed the beta-generalized exponential distribution, Pescim et al. [7] introduced the beta-generalized half-normal distribution, and
Cordeiro et al. [8] defined the beta-generalized Rayleigh distribution, with applications to lifetime data.

In this paper, we present a new generalization of the Lindley distribution, called the beta-Lindley distribution. The Lindley distribution was originally proposed by Lindley [9] in the context of Bayesian statistics, as a counterexample to fiducial statistics.

Definition 1. A random variable $X$ is said to have the Lindley distribution with parameter $\theta$ if its probability density function is defined as

$$f(x; \theta) = \frac{\theta^2 (1 + x)\, e^{-\theta x}}{\theta + 1}, \qquad x > 0,\ \theta > 0. \tag{1}$$

The corresponding cumulative distribution function (CDF) is

$$F(x) = 1 - \frac{\theta + 1 + \theta x}{\theta + 1}\, e^{-\theta x}, \qquad x > 0,\ \theta > 0. \tag{2}$$
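Equations (1) and (2) translate directly into code. As a quick sketch (Python is used here purely for illustration), the numerical derivative of the CDF (2) recovers the density (1):

```python
import numpy as np

def lindley_pdf(x, theta):
    # equation (1): f(x; theta) = theta^2 (1 + x) e^{-theta x} / (theta + 1)
    return theta**2 * (1 + x) * np.exp(-theta * x) / (theta + 1)

def lindley_cdf(x, theta):
    # equation (2): F(x) = 1 - ((theta + 1 + theta x)/(theta + 1)) e^{-theta x}
    return 1 - (theta + 1 + theta * x) / (theta + 1) * np.exp(-theta * x)

# sanity check: the central difference of F should agree with f
x, theta, h = 0.7, 1.5, 1e-6
num_deriv = (lindley_cdf(x + h, theta) - lindley_cdf(x - h, theta)) / (2 * h)
assert abs(num_deriv - lindley_pdf(x, theta)) < 1e-6
```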
Ghitany et al. [10] discussed various statistical properties of the Lindley distribution and showed its applicability over the exponential distribution; they found that the Lindley distribution often performs better than the exponential model. One of the main reasons to consider the Lindley distribution over the exponential distribution is its time-dependent (increasing) hazard rate. Over the last decade, the Lindley distribution has been widely used in different settings by many authors.

The rest of the paper is organized as follows. In Section 2, we introduce the beta-Lindley distribution and demonstrate its flexibility by showing the wide variety of shapes of its density, distribution, and hazard rate functions. The moments and order statistics of the beta-Lindley distribution are derived in Sections 3 and 4, respectively. In Section 5, maximum likelihood and least squares estimators, as well as Bayes estimators, are constructed for the unknown parameters of the beta-Lindley distribution. To demonstrate the applicability of the proposed distribution, two real data sets are considered in Section 6; a simulation algorithm for generating random samples from the beta-Lindley distribution is also provided there. The paper is concluded in Section 7.
2. Beta-Lindley Distribution

Let $F(x)$ denote the cumulative distribution function (CDF) of a random variable $X$. The generalized class of distributions defined by Eugene et al. [1] is generated by applying the inverse CDF to a beta-distributed random variable, giving

$$G(x) = \frac{1}{B(\alpha, \beta)} \int_0^{F(x)} t^{\alpha-1} (1-t)^{\beta-1}\, dt, \qquad 0 < \alpha, \beta < \infty. \tag{3}$$

The probability density function corresponding to $G(x)$ is given by

$$g(x) = \frac{1}{B(\alpha, \beta)}\, [F(x)]^{\alpha-1}\, [1 - F(x)]^{\beta-1}\, f(x), \tag{4}$$

where $f(x) = dF(x)/dx$ is the parent density function and $B(\alpha, \beta) = \Gamma(\alpha)\Gamma(\beta)/\Gamma(\alpha+\beta)$ is the beta function.

We now introduce the three-parameter beta-Lindley (BL) distribution by taking $F(x)$ in (3) to be the Lindley CDF (2). The CDF of the BL distribution is then

$$G(x) = \frac{1}{B(\alpha, \beta)} \int_0^{\,1 - ((\theta+1+\theta x)/(\theta+1))\, e^{-\theta x}} t^{\alpha-1} (1-t)^{\beta-1}\, dt, \qquad x > 0. \tag{5}$$

The PDF of the new distribution is given by

$$f(x) = \frac{\theta^2\, (\theta + 1 + \theta x)^{\beta-1}\, (1 + x)\, e^{-\theta\beta x}}{B(\alpha, \beta)\, (\theta+1)^{\beta}} \left[1 - \frac{\theta + 1 + \theta x}{\theta + 1}\, e^{-\theta x}\right]^{\alpha-1}. \tag{6}$$

Figure 1(a) illustrates some of the possible shapes of the PDF of the beta-Lindley distribution for selected values of the parameters $\alpha$, $\beta$, and $\theta$.

The CDF (5) can be expressed in terms of the hypergeometric function (see Cordeiro and Nadarajah [11]) as

$$G(x) = \frac{\big(1 - ((\theta+1+\theta x)/(\theta+1))\, e^{-\theta x}\big)^{\alpha}}{\alpha B(\alpha, \beta)}\; {}_2F_1\!\left(\alpha, 1-\beta;\ \alpha+1;\ 1 - \frac{\theta+1+\theta x}{\theta+1}\, e^{-\theta x}\right). \tag{7}$$

If the parameter $\beta > 0$ is a real noninteger, we also have the series representation

$$G(x) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)} \sum_{j=0}^{\infty} \frac{(-1)^j}{\Gamma(\beta - j)\,(\alpha + j)\, j!} \left[1 - \frac{\theta+1+\theta x}{\theta+1}\, e^{-\theta x}\right]^{\alpha+j}. \tag{8}$$

Lemma 2. When $\alpha = \beta = 1$, the BL density (6) reduces to the Lindley density (1) with parameter $\theta$.

Lemma 3. When $\beta = 1$, the BL density (6) reduces to the generalized Lindley distribution $GLD(\alpha, \theta)$ proposed by Nadarajah et al. [12].

Lemma 4. The limit of the beta-Lindley density as $x \to \infty$ is 0, and the limit as $x \to 0$ is 0.

Proof. Both limits follow directly from the beta-Lindley density (6).

The reliability function $R(t)$, which is the probability of an item not failing prior to some time $t$, is defined by $R(t) = 1 - F(t)$. The reliability function of the beta-Lindley distribution is given by

$$R(t; \theta, \alpha, \beta) = 1 - \frac{\big(1 - ((\theta+1+\theta t)/(\theta+1))\, e^{-\theta t}\big)^{\alpha}}{\alpha B(\alpha, \beta)}\; {}_2F_1\!\left(\alpha, 1-\beta;\ \alpha+1;\ 1 - \frac{\theta+1+\theta t}{\theta+1}\, e^{-\theta t}\right). \tag{9}$$
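Because (5) is the incomplete-beta transform of the Lindley CDF, the BL distribution is easy to evaluate numerically. The sketch below (SciPy assumed) computes the CDF via (5), the density both via the composition (4) and via the closed form (6), and checks Lemma 2:

```python
import numpy as np
from scipy.special import beta as beta_fn
from scipy.stats import beta as beta_dist

def lindley_pdf(x, theta):
    return theta**2 * (1 + x) * np.exp(-theta * x) / (theta + 1)

def lindley_cdf(x, theta):
    return 1 - (theta + 1 + theta * x) / (theta + 1) * np.exp(-theta * x)

def bl_cdf(x, theta, a, b):
    # equation (5): incomplete-beta transform of the Lindley CDF
    return beta_dist.cdf(lindley_cdf(x, theta), a, b)

def bl_pdf(x, theta, a, b):
    # equation (4): g(x) = f(x) * beta density evaluated at F(x)
    return lindley_pdf(x, theta) * beta_dist.pdf(lindley_cdf(x, theta), a, b)

def bl_pdf_closed(x, theta, a, b):
    # the closed form (6); should agree with bl_pdf above
    t = theta + 1 + theta * x
    return (theta**2 * t**(b - 1) * (1 + x) * np.exp(-theta * b * x)
            / (beta_fn(a, b) * (theta + 1)**b)
            * (1 - t / (theta + 1) * np.exp(-theta * x))**(a - 1))
```

Setting `a = b = 1` reproduces the Lindley density, as Lemma 2 states.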
The other characteristic of interest of a random variable is the hazard rate function, defined by $h(t) = f(t)/(1 - F(t))$, which is an important quantity characterizing life phenomena. It can be loosely interpreted as the conditional probability of failure, given that the item has survived to time $t$. The hazard rate function of the beta-Lindley random variable is given by

$$h(t; \theta, \alpha, \beta) = \frac{\theta^2\, (\theta+1+\theta t)^{\beta-1}\, (1+t)\, e^{-\theta\beta t}}{B(\alpha,\beta)\,(\theta+1)^{\beta}} \left[1 - \frac{\theta+1+\theta t}{\theta+1}\, e^{-\theta t}\right]^{\alpha-1} \left(1 - \frac{\big(1 - ((\theta+1+\theta t)/(\theta+1))\, e^{-\theta t}\big)^{\alpha}}{\alpha B(\alpha,\beta)}\; {}_2F_1\!\left(\alpha, 1-\beta;\ \alpha+1;\ 1 - \frac{\theta+1+\theta t}{\theta+1}\, e^{-\theta t}\right)\right)^{-1}. \tag{10}$$

Figure 1(b) illustrates some of the possible shapes of the hazard function of the beta-Lindley distribution for selected values of the parameters $\alpha$, $\beta$, and $\theta$.

[Figure 1 here: panel (a) shows BL densities for $(\alpha, \beta, \theta) = (1.5, 1.5, 2.1)$, $(3.5, 3.5, 3.5)$, and $(1.5, 1.1, 2.1)$; panel (b) shows BL hazard functions for $(\alpha, \beta, \theta) = (2.5, 2.5, 2.5)$ and $(2.5, 2.5, 1.5)$.]

Figure 1: The PDFs (a) and hazard functions (b) of various BL distributions.
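Numerically, (10) is easiest to evaluate as $h(t) = g(t)/(1 - G(t))$ using the beta survival function rather than the hypergeometric form. A minimal sketch (SciPy assumed):

```python
import numpy as np
from scipy.stats import beta as beta_dist

def bl_hazard(t, theta, a, b):
    # h(t) = g(t) / (1 - G(t)) for the beta-Lindley distribution
    u = 1 - (theta + 1 + theta * t) / (theta + 1) * np.exp(-theta * t)  # Lindley CDF
    f = theta**2 * (1 + t) * np.exp(-theta * t) / (theta + 1)           # Lindley PDF
    g = f * beta_dist.pdf(u, a, b)       # BL density, per equation (4)
    return g / beta_dist.sf(u, a, b)     # sf(u) = 1 - G(t), computed stably
```

With $\alpha = \beta = 1$ this reduces to the Lindley hazard $\theta^2(1+t)/(\theta+1+\theta t)$, consistent with Lemma 2.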
3. Moments and Generating Function

Theorem 5. The $k$th moment $E(X^k)$ of the beta-Lindley distributed random variable $X$, if $\alpha > 0$ and $\beta > 0$ are real nonintegers, is given by

$$E(X^k) = \frac{\Gamma^2(\alpha+\beta)}{\Gamma(\beta)\,\theta^k} \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} \frac{(-1)^{i+j}\, e^{(\beta+i)(\theta+1)}}{\Gamma(\alpha-i)\,\Gamma(j-1)\,\Gamma(\alpha+\beta-j)\,(\theta+1)^{\beta+i-j}\, i!\,(\beta+i)^{\alpha+\beta+k-j-1}} \times \left[\theta\, \Gamma\big(k-j+\alpha+\beta,\ (\theta+1)(\beta+i)\big) + \frac{1}{\beta+i}\, \Gamma\big(k-j+\alpha+\beta+1,\ (\theta+1)(\beta+i)\big)\right], \tag{11}$$

where $\Gamma(\cdot, \cdot)$ denotes the upper incomplete gamma function.

Proof. See the appendix.

4. Order Statistics

The $k$th order statistic of a sample is its $k$th smallest value. For a sample of size $n$, the $n$th order statistic (the largest order statistic) is the maximum; that is,

$$X_{(n)} = \max\{X_1, \ldots, X_n\}. \tag{12}$$

The sample range is the difference between the maximum and the minimum. It is clearly a function of the order statistics:

$$\operatorname{range}\{X_1, \ldots, X_n\} = X_{(n)} - X_{(1)}. \tag{13}$$

We know that if $X_{(1)} \le \cdots \le X_{(n)}$ denote the order statistics of a random sample $X_1, \ldots, X_n$ from a continuous population with CDF $F_X(x)$ and PDF $f_X(x)$, then the PDF of $X_{(j)}$ is given by

$$f_{X_{(j)}}(x) = \frac{n!}{(j-1)!\,(n-j)!}\, f_X(x)\, [F_X(x)]^{j-1}\, [1 - F_X(x)]^{n-j}, \tag{14}$$

for $j = 1, \ldots, n$.
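Formula (14) can be sanity-checked against a case with a known answer: for standard uniforms, the maximum ($j = n$) has density $n x^{n-1}$. A minimal sketch:

```python
import math

def order_stat_pdf(x, j, n, F, f):
    # equation (14): density of the j-th order statistic of an i.i.d. sample of size n
    c = math.factorial(n) / (math.factorial(j - 1) * math.factorial(n - j))
    return c * f(x) * F(x) ** (j - 1) * (1.0 - F(x)) ** (n - j)

# the maximum of n standard uniforms has density n * x**(n - 1)
n, x = 5, 0.3
assert abs(order_stat_pdf(x, n, n, F=lambda t: t, f=lambda t: 1.0)
           - n * x ** (n - 1)) < 1e-12
```

Plugging the BL CDF (5) and density (6) into `F` and `f` gives the order-statistic density of the beta-Lindley distribution directly.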
The PDF of the $j$th order statistic of the beta-Lindley distribution is therefore

$$f_{X_{(j)}}(x) = \frac{n!}{(j-1)!\,(n-j)!} \cdot \frac{\theta^2\, (\theta+1+\theta x)^{\beta-1}\, (1+x)\, e^{-\theta\beta x}}{B(\alpha, \beta)\,(\theta+1)^{\beta}} \left[1 - \frac{\theta+1+\theta x}{\theta+1}\, e^{-\theta x}\right]^{\alpha-1} \times \left[\frac{\big(1 - ((\theta+1+\theta x)/(\theta+1))\, e^{-\theta x}\big)^{\alpha}}{\alpha B(\alpha, \beta)}\; {}_2F_1\!\left(\alpha, 1-\beta;\ \alpha+1;\ 1 - \frac{\theta+1+\theta x}{\theta+1}\, e^{-\theta x}\right)\right]^{j-1} \times \left[1 - \frac{\big(1 - ((\theta+1+\theta x)/(\theta+1))\, e^{-\theta x}\big)^{\alpha}}{\alpha B(\alpha, \beta)}\; {}_2F_1\!\left(\alpha, 1-\beta;\ \alpha+1;\ 1 - \frac{\theta+1+\theta x}{\theta+1}\, e^{-\theta x}\right)\right]^{n-j}, \tag{15}$$

for $j = 1, \ldots, n$.
5. Estimation

5.1. Maximum Likelihood Estimates. The maximum likelihood estimates (MLEs) of the parameters of the beta-Lindley distribution are obtained as follows. The likelihood function of an observed sample $\underline{x} = \{x_1, x_2, \ldots, x_n\}$ of size $n$ drawn from the density (6) is defined as

$$L = \left[\frac{\theta^2}{B(\alpha,\beta)\,(\theta+1)^{\beta}}\right]^{n} e^{-\theta\beta \sum_{i=1}^{n} x_i} \prod_{i=1}^{n} (1 + x_i)\,(\theta + 1 + \theta x_i)^{\beta-1} \left(1 - \frac{\theta+1+\theta x_i}{\theta+1}\, e^{-\theta x_i}\right)^{\alpha-1}. \tag{16}$$

The corresponding log-likelihood function is given by

$$\ell = \ln L = n\big(2\log\theta - \log\Gamma(\alpha) - \log\Gamma(\beta) + \log\Gamma(\alpha+\beta) - \beta\log(\theta+1)\big) + \sum_{i=1}^{n}\log(1+x_i) + (\beta-1)\sum_{i=1}^{n}\log(\theta+1+\theta x_i) - \theta\beta\sum_{i=1}^{n} x_i + (\alpha-1)\sum_{i=1}^{n}\log\!\left(1 - \frac{\theta+1+\theta x_i}{\theta+1}\, e^{-\theta x_i}\right). \tag{17}$$

Now, setting

$$\frac{\partial \ell}{\partial \alpha} = 0, \qquad \frac{\partial \ell}{\partial \beta} = 0, \qquad \frac{\partial \ell}{\partial \theta} = 0, \tag{18}$$

we have

$$n\psi(\alpha+\beta) - n\psi(\alpha) + \sum_{i=1}^{n} \ln\!\left(1 - \frac{\theta+1+\theta x_i}{\theta+1}\, e^{-\theta x_i}\right) = 0,$$

$$n\psi(\alpha+\beta) - n\psi(\beta) - n\log(\theta+1) + \sum_{i=1}^{n}\log(\theta+1+\theta x_i) - \theta\sum_{i=1}^{n} x_i = 0,$$

$$\frac{2n}{\theta} - \frac{n\beta}{\theta+1} - \beta\sum_{i=1}^{n} x_i + (\beta-1)\sum_{i=1}^{n}\frac{1+x_i}{\theta+1+\theta x_i} + (\alpha-1)\sum_{i=1}^{n} \frac{x_i\, e^{-\theta x_i}\big[(\theta+1)(\theta+1+\theta x_i) - 1\big]}{(\theta+1)^2\left(1 - ((\theta+1+\theta x_i)/(\theta+1))\, e^{-\theta x_i}\right)} = 0, \tag{19}$$

where $\psi(\cdot)$ is the digamma function. The MLEs $(\hat{\alpha}, \hat{\beta}, \hat{\theta})$ of $(\alpha, \beta, \theta)$ are obtained by solving this nonlinear system of equations. It is usually more convenient to use a nonlinear optimization algorithm, such as a quasi-Newton algorithm, to numerically maximize the sample likelihood function given in (16). Applying the usual large-sample approximation, the MLE $\hat{\Theta} = (\hat{\alpha}, \hat{\beta}, \hat{\theta})$ can be treated as approximately trivariate normal with mean $\Theta$ and variance-covariance matrix equal to the inverse of the expected information matrix; that is,

$$\sqrt{n}\,\big(\hat{\Theta} - \Theta\big) \longrightarrow N_3\big(0,\ n I^{-1}(\Theta)\big), \tag{20}$$

where $I^{-1}(\Theta)$ is the limiting variance-covariance matrix of $\hat{\Theta}$. The elements of the $3\times 3$ matrix $I(\Theta)$ can be estimated by $I_{ij}(\hat{\Theta}) = -\partial^2\ell/\partial\theta_i\,\partial\theta_j\big|_{\Theta=\hat{\Theta}}$, $i, j \in \{1, 2, 3\}$. The elements of the Hessian matrix corresponding to the $\ell$ function in (17) are given in the appendix.

Approximate two-sided $100(1-\alpha)\%$ confidence intervals for $\alpha$, $\beta$, and $\theta$ are, respectively, given by

$$\hat{\alpha} \pm z_{\alpha/2}\sqrt{I^{-1}_{11}(\hat{\Theta})}, \qquad \hat{\beta} \pm z_{\alpha/2}\sqrt{I^{-1}_{22}(\hat{\Theta})}, \qquad \hat{\theta} \pm z_{\alpha/2}\sqrt{I^{-1}_{33}(\hat{\Theta})}, \tag{21}$$

where $z_{\alpha}$ is the upper $\alpha$th quantile of the standard normal distribution. Using R, we can easily compute the Hessian matrix and its inverse and hence the standard errors and asymptotic confidence intervals.

We can compute the maximized unrestricted and restricted log-likelihood functions to construct the likelihood ratio (LR) test statistic for testing some of the beta-Lindley submodels. For example, we can use the LR test statistic to check whether the beta-Lindley distribution for a given data set is statistically superior to the Lindley distribution.
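Although the paper uses R for the numerical work, the same maximization of the log-likelihood (17) can be sketched in Python. The data below are purely illustrative (not the paper's data sets), and the starting values are an assumption:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta as beta_dist

def bl_negloglik(params, x):
    """Negative log-likelihood of the beta-Lindley model, cf. equation (17)."""
    theta, a, b = params
    u = 1 - (theta + 1 + theta * x) / (theta + 1) * np.exp(-theta * x)  # Lindley CDF
    u = np.clip(u, 1e-12, 1 - 1e-12)       # guard against log(0) at extreme theta
    lindley_logpdf = 2 * np.log(theta) + np.log1p(x) - theta * x - np.log(theta + 1)
    return -np.sum(lindley_logpdf + beta_dist.logpdf(u, a, b))

# illustrative data only (not the bladder-cancer or guinea-pig data)
rng = np.random.default_rng(1)
data = rng.exponential(1.0, size=200)

start = np.array([1.0, 1.0, 1.0])          # (theta, alpha, beta)
res = minimize(bl_negloglik, start, args=(data,),
               method="L-BFGS-B", bounds=[(1e-6, None)] * 3)
theta_hat, alpha_hat, beta_hat = res.x
```

Standard errors and the intervals (21) can then be obtained from a numerical Hessian of `bl_negloglik` at the optimum, in the spirit of (20).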
In any case, hypothesis tests of the type $H_0: \Theta = \Theta_0$ versus $H_1: \Theta \ne \Theta_0$ can be performed using an LR test. In this case, the LR test statistic for testing $H_0$ versus $H_1$ is $\omega = 2\big(\ell(\hat{\Theta}; x) - \ell(\hat{\Theta}_0; x)\big)$, where $\hat{\Theta}$ and $\hat{\Theta}_0$ are the MLEs under $H_1$ and $H_0$, respectively. The statistic $\omega$ is asymptotically (as $n \to \infty$) distributed as $\chi^2_k$, where $k$ is the length of the parameter vector of interest. The LR test rejects $H_0$ if $\omega > \chi^2_{k;\gamma}$, where $\chi^2_{k;\gamma}$ denotes the upper $100\gamma\%$ quantile of the $\chi^2_k$ distribution.

5.2. Least Squares Estimators. In this section, we provide the regression-based estimators of the unknown parameters of the beta-Lindley distribution, originally suggested by Swain et al. [13] for estimating the parameters of beta distributions; the method can be used in some other cases as well. Suppose $Y_1, \ldots, Y_n$ is a random sample of size $n$ from a distribution function $G(\cdot)$, and suppose $Y_{(j)}$, $j = 1, 2, \ldots, n$, denotes the ordered sample. The proposed method uses the distribution of $G(Y_{(j)})$. For a sample of size $n$, we have

$$E\big(G(Y_{(j)})\big) = \frac{j}{n+1}, \qquad V\big(G(Y_{(j)})\big) = \frac{j\,(n-j+1)}{(n+1)^2 (n+2)},$$

$$\operatorname{Cov}\big(G(Y_{(j)}),\, G(Y_{(k)})\big) = \frac{j\,(n-k+1)}{(n+1)^2 (n+2)} \quad \text{for } j < k; \tag{22}$$

see Johnson et al. [14]. Using these expectations and variances, the least squares method can be applied: the estimators are obtained by minimizing

$$\sum_{j=1}^{n}\left(G(Y_{(j)}) - \frac{j}{n+1}\right)^2 \tag{23}$$

with respect to the unknown parameters. Therefore, in the case of the BL distribution, the least squares estimators of $\alpha$, $\beta$, and $\theta$, say $\hat{\alpha}_{LSE}$, $\hat{\beta}_{LSE}$, and $\hat{\theta}_{LSE}$, can be obtained by minimizing

$$\sum_{j=1}^{n}\left[\frac{\big(1 - ((\theta+1+\theta x_{(j)})/(\theta+1))\, e^{-\theta x_{(j)}}\big)^{\alpha}}{\alpha B(\alpha, \beta)}\; {}_2F_1\!\left(\alpha, 1-\beta;\ \alpha+1;\ 1 - \frac{\theta+1+\theta x_{(j)}}{\theta+1}\, e^{-\theta x_{(j)}}\right) - \frac{j}{n+1}\right]^2 \tag{24}$$

with respect to $\alpha$, $\beta$, and $\theta$.

5.3. Bayes Estimation. In this section, we develop the Bayes procedure for estimating the unknown model parameters based on an observed sample $\underline{x}$ from the beta-Lindley distribution. In addition to the likelihood function, the Bayesian needs a prior distribution for the parameters, which quantifies the uncertainty about them prior to having data. In many situations, existing knowledge may be difficult to summarise in the form of an informative prior. In such cases, it is better to consider noninformative priors for the Bayesian analysis (for more details on the use of noninformative priors, see [15]). We take the noninformative priors ([16]) for $\theta$, $\alpha$, and $\beta$ of the following forms:

$$\pi_1(\theta) \propto \theta^{-1}, \quad \theta > 0; \qquad \pi_2(\alpha) \propto c_1^{-1}, \quad 0 < \alpha < c_1; \qquad \pi_3(\beta) \propto c_2^{-1}, \quad 0 < \beta < c_2. \tag{25}$$

It is to be noticed that the choices of $c_1$ and $c_2$ are unimportant, and we can simply take

$$\pi_2(\alpha) \propto 1, \qquad \pi_3(\beta) \propto 1. \tag{26}$$

Thus, the joint posterior distribution of $\theta$, $\alpha$, and $\beta$ is given by

$$p(\alpha, \beta, \theta \mid \underline{x}) = K\, \frac{\theta^{2n-1}\, \exp\!\big(-\theta\beta\sum_{i=1}^{n} x_i\big)}{B^{n}(\alpha, \beta)\,(1+\theta)^{n\beta}} \prod_{i=1}^{n}\left[(1 + \theta + \theta x_i)^{\beta-1}\left(1 - \frac{1+\theta+\theta x_i}{1+\theta}\, e^{-\theta x_i}\right)^{\alpha-1}\right], \tag{27}$$

where $K$ is the normalizing constant. Under squared error loss, the Bayes estimates of $\theta$, $\alpha$, and $\beta$ are the means of their marginal posteriors, defined as

$$\hat{\theta}_B = \int_{\theta}\int_{\alpha}\int_{\beta} \theta\; p(\alpha, \beta, \theta \mid \underline{x})\; d\beta\, d\alpha\, d\theta, \tag{28}$$

$$\hat{\alpha}_B = \int_{\alpha}\int_{\theta}\int_{\beta} \alpha\; p(\alpha, \beta, \theta \mid \underline{x})\; d\beta\, d\theta\, d\alpha, \tag{29}$$

$$\hat{\beta}_B = \int_{\beta}\int_{\alpha}\int_{\theta} \beta\; p(\alpha, \beta, \theta \mid \underline{x})\; d\theta\, d\alpha\, d\beta, \tag{30}$$

respectively. It is not easy to calculate the Bayes estimates through (28), (29), and (30), so numerical approximation techniques are needed. We therefore propose the use of Markov chain Monte Carlo (MCMC) techniques, namely the Gibbs sampler and the Metropolis-Hastings (MH) algorithm; see [17–19]. Since the conditional posteriors of the parameters cannot be obtained in any standard form, we used a hybrid MCMC strategy for drawing samples from the joint posterior of the parameters.
To implement the Gibbs sampler, the full conditional posteriors of $\alpha$, $\beta$, and $\theta$ are given by

$$p_1(\alpha \mid \beta, \theta, \underline{x}) \propto \frac{\Gamma^{n}(\alpha+\beta)}{\Gamma^{n}(\alpha)} \prod_{i=1}^{n}\left(1 - \frac{1+\theta+\theta x_i}{1+\theta}\, e^{-\theta x_i}\right)^{\alpha-1},$$

$$p_2(\beta \mid \alpha, \theta, \underline{x}) \propto \frac{\Gamma^{n}(\alpha+\beta)}{\Gamma^{n}(\beta)}\, \frac{\exp\!\big(-\theta\beta\sum_{i=1}^{n} x_i\big)}{(1+\theta)^{n\beta}} \prod_{i=1}^{n}(1+\theta+\theta x_i)^{\beta-1}, \tag{31}$$

$$p_3(\theta \mid \alpha, \beta, \underline{x}) \propto \frac{\theta^{2n-1}\, \exp\!\big(-\theta\beta\sum_{i=1}^{n} x_i\big)}{(1+\theta)^{n\beta}} \prod_{i=1}^{n}\left[(1+\theta+\theta x_i)^{\beta-1}\left(1 - \frac{1+\theta+\theta x_i}{1+\theta}\, e^{-\theta x_i}\right)^{\alpha-1}\right]. \tag{32}$$

The simulation algorithm we followed is given by the following steps.

Step 1. Set starting points $\alpha^{(0)}$, $\beta^{(0)}$, and $\theta^{(0)}$; then, at the $i$th stage:

Step 2. Using the MH algorithm, generate $\alpha_i \sim p_1(\alpha \mid \beta^{(i-1)}, \theta^{(i-1)}, \underline{x})$.

Step 3. Using the MH algorithm, generate $\beta_i \sim p_2(\beta \mid \alpha_i, \theta^{(i-1)}, \underline{x})$.

Step 4. Using the MH algorithm, generate $\theta_i \sim p_3(\theta \mid \alpha_i, \beta_i, \underline{x})$.

Step 5. Repeat Steps 2–4 $N$ (= 20000) times to get samples of size $N$ from the corresponding posteriors of interest.

Step 6. Obtain the Bayes estimates of $\alpha$, $\beta$, and $\theta$ using the formulae

$$\hat{\alpha}_B = \frac{1}{N - N_0}\sum_{i=N_0+1}^{N}\alpha_i, \qquad \hat{\beta}_B = \frac{1}{N - N_0}\sum_{i=N_0+1}^{N}\beta_i, \qquad \hat{\theta}_B = \frac{1}{N - N_0}\sum_{i=N_0+1}^{N}\theta_i, \tag{33}$$

respectively, where $N_0$ ($\approx 5000$) is the burn-in period of the generated Markov chains.

Step 7. Obtain the $100(1-p)\%$ HPD credible intervals for $\alpha$, $\beta$, and $\theta$ by applying the methodology of [20]. The HPD credible intervals for $\alpha$, $\beta$, and $\theta$ are $\big(\alpha_{(j^*)}, \alpha_{(j^*+[(1-p)N])}\big)$, $\big(\beta_{(j^*)}, \beta_{(j^*+[(1-p)N])}\big)$, and $\big(\theta_{(j^*)}, \theta_{(j^*+[(1-p)N])}\big)$, respectively, where $j^*$ is chosen such that

$$\alpha_{(j^*+[(1-p)N])} - \alpha_{(j^*)} = \min_{1 \le j \le N-[(1-p)N]} \big(\alpha_{(j+[(1-p)N])} - \alpha_{(j)}\big),$$

$$\beta_{(j^*+[(1-p)N])} - \beta_{(j^*)} = \min_{1 \le j \le N-[(1-p)N]} \big(\beta_{(j+[(1-p)N])} - \beta_{(j)}\big),$$

$$\theta_{(j^*+[(1-p)N])} - \theta_{(j^*)} = \min_{1 \le j \le N-[(1-p)N]} \big(\theta_{(j+[(1-p)N])} - \theta_{(j)}\big). \tag{34}$$

Here, $[x]$ denotes the largest integer less than or equal to $x$.

Note that there have been several attempts to suggest the proposal density for the target posterior in the implementation of the MH algorithm. By reparameterizing the posterior on the entire real line, [16, 21] suggested using the normal approximation of the posterior as a proposal candidate in the MH algorithm. Alternatively, it is also realistic to use the truncated normal distribution without reparameterizing the original parameters. Therefore, we propose the use of the truncated normal distribution as the proposal kernel for the target posterior.
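One way to realize Steps 1–7 is a Metropolis-within-Gibbs sampler with truncated-normal random-walk proposals on $(0, \infty)$. The sketch below (Python; the data, starting values, and proposal scale are illustrative assumptions) updates one parameter at a time against the joint log-posterior (27), which is equivalent to using the full conditionals (31)-(32):

```python
import numpy as np
from scipy.stats import beta as beta_dist, truncnorm

def log_post(theta, a, b, x):
    """Log of the joint posterior (27), up to an additive constant."""
    if min(theta, a, b) <= 0:
        return -np.inf
    u = np.clip(1 - (theta + 1 + theta * x) / (theta + 1) * np.exp(-theta * x),
                1e-12, 1 - 1e-12)
    lindley_logpdf = 2 * np.log(theta) + np.log1p(x) - theta * x - np.log(theta + 1)
    # priors: pi(theta) ∝ 1/theta, pi(alpha) ∝ 1, pi(beta) ∝ 1, cf. (25)-(26)
    return np.sum(lindley_logpdf + beta_dist.logpdf(u, a, b)) - np.log(theta)

def mh_step(cur, idx, x, scale):
    """Update one coordinate with a truncated-normal proposal on (0, inf)."""
    prop = cur.copy()
    lo = -cur[idx] / scale                 # standardized lower bound for truncnorm
    prop[idx] = truncnorm.rvs(lo, np.inf, loc=cur[idx], scale=scale)
    fwd = truncnorm.logpdf(prop[idx], lo, np.inf, loc=cur[idx], scale=scale)
    back = truncnorm.logpdf(cur[idx], -prop[idx] / scale, np.inf,
                            loc=prop[idx], scale=scale)
    log_r = log_post(*prop, x) - log_post(*cur, x) + back - fwd  # Hastings ratio
    return prop if np.log(np.random.rand()) < log_r else cur

def run_chain(x, n_iter=2000, start=(1.0, 1.0, 1.0), scale=0.25, seed=0):
    np.random.seed(seed)
    cur, out = np.array(start, dtype=float), np.empty((n_iter, 3))
    for t in range(n_iter):
        for idx in range(3):               # theta, alpha, beta in turn
            cur = mh_step(cur, idx, x, scale)
        out[t] = cur
    return out
```

Per (33), the Bayes estimates are the column means of the post-burn-in draws, e.g. `chain[N0:].mean(axis=0)`.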
6. Application

6.1. Real Data Applications. In this section, we use two real data sets to show that the beta-Lindley distribution can be a better model than one based on the Lindley distribution. The description of the data is as follows.

Data Set 1. Data set 1 is an uncensored data set corresponding to remission times (in months) of a random sample of 128 bladder cancer patients, reported by Lee and Wang [22].

Data Set 2. Data set 2 represents the survival times (in days) of 72 guinea pigs infected with virulent tubercle bacilli, observed and reported by Bjerkedal [23].

The variance-covariance matrix $I^{-1}(\hat{\Theta})$ of the MLEs under the beta-Lindley distribution for data set 1 is computed as

$$\begin{pmatrix} 0.213 & -0.019 & 0.530 \\ -0.019 & 0.004 & -0.120 \\ 0.530 & -0.120 & 3.131 \end{pmatrix}. \tag{35}$$

Thus, the variances of the MLEs of $\alpha$, $\beta$, and $\theta$ are $\operatorname{var}(\hat{\alpha}) = 0.213$, $\operatorname{var}(\hat{\beta}) = 0.004$, and $\operatorname{var}(\hat{\theta}) = 3.131$. Therefore, 95% confidence intervals for $\alpha$, $\beta$, and $\theta$ are $[0.435, 2.245]$, $[0, 0.198]$, and $[0, 5.330]$, respectively.

In order to compare the fitted models, we consider criteria such as $-2\ell$, AIC, and CAIC for the data sets; the better distribution corresponds to smaller values of these criteria. The LR test statistic for the hypotheses $H_0: \alpha = \beta = 1$ versus $H_1: \alpha \ne 1 \vee \beta \ne 1$ for data set 1 is $\omega = 13.436 > 5.991 = \chi^2_{2;0.05}$, so we reject the null hypothesis.
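The information criteria reported below follow the standard definitions $\mathrm{AIC} = 2k - 2\ell$ and $\mathrm{CAIC} = \mathrm{AIC} + 2k(k+1)/(n-k-1)$. As a quick sketch, the beta-Lindley row of Table 1 ($k = 3$ parameters, $n = 128$ observations, $\ell = -412.802$) can be reproduced; the CAIC differs from the tabled 831.797 only in the last digit, presumably from rounding of $\ell$:

```python
def aic(loglik, k):
    # Akaike information criterion
    return 2 * k - 2 * loglik

def caic(loglik, k, n):
    # corrected AIC: AIC + 2k(k+1)/(n - k - 1)
    return aic(loglik, k) + 2 * k * (k + 1) / (n - k - 1)

loglik, k, n = -412.802, 3, 128   # beta-Lindley fit to data set 1
print(round(aic(loglik, k), 3))      # 831.604
print(round(caic(loglik, k, n), 3))  # 831.798
```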
Table 1: The ML estimates, standard errors, log-likelihood, LSE estimates, AIC, and CAIC for data set 1.

| Model | ML estim. | St. err. | LSE estim. | LL | −2LL | AIC | CAIC |
|---|---|---|---|---|---|---|---|
| Lindley | θ̂ = 0.196 | 0.012 | 0.229 | −419.529 | 839.04 | 841.06 | 841.091 |
| Beta-exponential | λ̂ = 0.116, α̂ = 1.149, β̂ = 0.997 | 0.674, 0.340, 0.194 | 0.195, 1.764, 0.677 | −413.189 | 826.378 | 832.378 | 832.571 |
| Beta-Lindley | α̂ = 1.340, β̂ = 0.065, θ̂ = 1.861 | 0.461, 0.068, 1.769 | 1.803, 0.087, 1.630 | −412.802 | 825.604 | 831.604 | 831.797 |

Table 2: The ML estimates, standard errors, log-likelihood, LSE estimates, AIC, and CAIC for data set 2.

| Model | ML estim. | St. err. | LSE estim. | LL | −2LL | AIC | CAIC |
|---|---|---|---|---|---|---|---|
| Lindley | θ̂ = 0.868 | 0.076 | 0.855 | −106.928 | 213.857 | 215.857 | 215.942 |
| Beta-exponential | λ̂ = 0.736, α̂ = 3.345, β̂ = 1.708 | 1.357, 1.056, 3.877 | 0.741, 2.966, 1.521 | −94.167 | 188.334 | 194.334 | 194.646 |
| Beta-Lindley | α̂ = 3.005, β̂ = 0.949, θ̂ = 1.462 | 1.006, 0.924, 0.981 | 2.588, 0.943, 1.370 | −93.971 | 187.942 | 193.942 | 194.294 |

The variance-covariance matrix $I^{-1}(\hat{\Theta})$ of the MLEs under the beta-Lindley distribution for data set 2 is computed as

$$\begin{pmatrix} 1.013 & -0.734 & 0.845 \\ -0.734 & 0.854 & -0.897 \\ 0.845 & -0.897 & 0.964 \end{pmatrix}. \tag{36}$$

Thus, the variances of the MLEs of $\alpha$, $\beta$, and $\theta$ are $\operatorname{var}(\hat{\alpha}) = 1.013$, $\operatorname{var}(\hat{\beta}) = 0.854$, and $\operatorname{var}(\hat{\theta}) = 0.964$. Therefore, 95% confidence intervals for $\alpha$, $\beta$, and $\theta$ are $[1.033, 4.976]$, $[0, 2.76]$, and $[0, 3.384]$, respectively. The LR test statistic for the hypotheses $H_0: \alpha = \beta = 1$ versus $H_1: \alpha \ne 1 \vee \beta \ne 1$ for data set 2 is $\omega = 25.915 > 5.991 = \chi^2_{2;0.05}$, so we again reject the null hypothesis.

Tables 1 and 2 show the parameter MLEs for each of the fitted distributions for data sets 1 and 2, together with the values of $-2\ell$, AIC, and CAIC. The values in Tables 1 and 2 indicate that the beta-Lindley distribution is a strong competitor to the other distributions used here for fitting data sets 1 and 2. A density plot (Figures 2(a) and 2(b)) compares the fitted densities of the models with the empirical histogram of the observed data; the fitted density of the beta-Lindley model is closer to the empirical histogram than the fit of the Lindley model.

The Bayes estimates and the corresponding HPD credible intervals for the parameters $\alpha$, $\beta$, and $\theta$ are summarised in Table 3.

Table 3: Bayes estimates and 95% HPD credible intervals for the model parameters based on the real and simulated data sets.

| Data | Parameter | Bayes estimate | HPD lower | HPD upper |
|---|---|---|---|---|
| Data 1 | α | 1.198824 | 0.531175 | 1.835345 |
| | β | 0.136014 | 0.002155 | 0.274293 |
| | θ | 1.283892 | 0.000737 | 2.544082 |
| Data 2 | α | 2.933191 | 1.604314 | 4.132534 |
| | β | 1.004611 | 0.000559 | 1.842176 |
| | θ | 1.458197 | 0.343411 | 2.517247 |
| Data 3 | α | 2.581740 | 0.941880 | 4.127841 |
| | β | 0.501017 | 0.004933 | 1.110172 |
| | θ | 2.886923 | 0.074607 | 5.401492 |

6.2. Simulated Data. In this subsection, we provide an algorithm to generate a random sample from the beta-Lindley distribution for given values of its parameters and sample size $n$. The simulation process consists of the following steps.
Step 1. Set $n$ and $\Theta = (\theta, \alpha, \beta)$.

Step 2. Set an initial value $x_0$ for the random start.

Step 3. Set $j = 1$.

Step 4. Generate $U \sim \mathrm{Uniform}(0, 1)$.

Step 5. Update $x_0$ using Newton's formula

$$x^{*} = x_0 - \frac{G_{\Theta}(x_0) - U}{g_{\Theta}(x_0)}. \tag{37}$$

Step 6. If $|x_0 - x^{*}| \le \epsilon$ (for a very small tolerance $\epsilon > 0$), then $x^{*}$ is the desired draw from $G(x)$.

Step 7. If $|x_0 - x^{*}| > \epsilon$, set $x_0 = x^{*}$ and go to Step 5.

Step 8. Repeat Steps 4–7 for $j = 1, 2, \ldots, n$ to obtain $x_1, x_2, \ldots, x_n$.
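Steps 1–8 amount to inverting the BL CDF by Newton's method. A sketch follows (Python; the bracketing fallback via `brentq` is an added numerical safeguard, not part of the paper's algorithm):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import beta as beta_dist

def lindley_cdf(x, theta):
    return 1 - (theta + 1 + theta * x) / (theta + 1) * np.exp(-theta * x)

def bl_cdf(x, theta, a, b):
    return beta_dist.cdf(lindley_cdf(x, theta), a, b)

def bl_pdf(x, theta, a, b):
    f = theta**2 * (1 + x) * np.exp(-theta * x) / (theta + 1)
    return f * beta_dist.pdf(lindley_cdf(x, theta), a, b)

def invert_cdf(u, theta, a, b, x0=1.0, eps=1e-9, max_newton=50):
    """Solve G(x) = u by the Newton iteration (37)."""
    x = x0
    for _ in range(max_newton):
        g = bl_pdf(x, theta, a, b)
        if g <= 0 or not np.isfinite(g):
            break
        x_new = x - (bl_cdf(x, theta, a, b) - u) / g
        if x_new <= 0:                    # keep the iterate inside the support
            break
        if abs(x_new - x) <= eps:
            return x_new
        x = x_new
    # bracketing fallback if Newton stalls
    hi = 1.0
    while bl_cdf(hi, theta, a, b) < u:
        hi *= 2.0
    return brentq(lambda t: bl_cdf(t, theta, a, b) - u, 1e-12, hi)

def bl_sample(n, theta, a, b, seed=None):
    rng = np.random.default_rng(seed)
    return np.array([invert_cdf(rng.uniform(), theta, a, b) for _ in range(n)])
```

Applying `bl_cdf` to the generated sample should give approximately uniform values, which is a convenient correctness check.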
[Figure 2 here: histograms of data set 1 (panel (a)) and data set 2 (panel (b)), overlaid with the fitted BL, BE, and Lindley densities.]

Figure 2: Estimated densities of the models for data sets 1 and 2.
Using the previous algorithm, we generated a sample of size 30 from the beta-Lindley distribution for the arbitrary values $\theta = 1.5$, $\alpha = 2$, and $\beta = 1$. The simulated sample (Data 3) is given by

0.7230, 0.9211, 1.3350, 2.6770, 0.6035, 2.5947, 3.0801, 1.5572, 1.4727, 0.3013,
0.6116, 0.5550, 1.6320, 0.9438, 1.9079, 1.1693, 1.7259, 4.5494, 0.9360, 1.9373,
2.9493, 0.6233, 1.5323, 0.4515, 0.7262, 0.9476, 0.1333, 0.9405, 2.3910, 0.8615.

The maximum likelihood estimates and Bayes estimates, with the corresponding confidence/credible intervals, were calculated from this simulated sample. The MLEs of $(\theta, \alpha, \beta)$ are $(2.72966, 2.56457, 0.43766)$, respectively. The asymptotic confidence intervals for $(\theta, \alpha, \beta)$ are $(-3.5042, 8.9635)$, $(-0.6445, 5.77373)$, and $(-0.865, 1.74044)$, respectively, with negative lower limits truncated to 0. For the Bayes estimates and the corresponding credible intervals based on the simulated data, see Table 3.
7. Conclusion

We propose a new model, the beta-Lindley distribution, which extends the Lindley distribution in the analysis of data with positive support. An obvious reason for generalizing a standard distribution is that the generalized form provides greater flexibility in modelling real data. We derive expansions for the moments and for the moment generating function. The estimation of parameters is approached by the method of maximum likelihood and by Bayesian methods, and the information matrix is derived. We consider the likelihood ratio statistic to compare the model with its baseline model. Two applications of the beta-Lindley distribution to real data show that the new distribution can be used quite effectively to provide better fits than the Lindley distribution.

Appendix

Proof of Theorem 5. One has

$$E(X^k) = \int_0^{\infty} x^k f(x)\, dx = \frac{\theta}{B(\alpha,\beta)\,(\theta+1)^{\beta}} \int_0^{\infty} \left(\frac{t}{\theta}\right)^{k} (\theta+1+t)^{\beta-1}\, \frac{\theta+t}{\theta}\, e^{-\beta t} \left[1 - \frac{\theta+1+t}{\theta+1}\, e^{-t}\right]^{\alpha-1} dt,$$

after substituting $t = \theta x$. Expanding the bracket binomially and splitting $\theta + t$ gives

$$E(X^k) = \frac{\Gamma(\alpha)}{B(\alpha,\beta)\,\theta^{k}} \sum_{i=0}^{\infty} \frac{(-1)^i}{\Gamma(\alpha-i)\, i!\, (\theta+1)^{\beta+i}} \left[\theta \int_0^{\infty} t^{k} (\theta+1+t)^{\alpha+\beta-1} e^{-(\beta+i)t}\, dt + \int_0^{\infty} t^{k+1} (\theta+1+t)^{\alpha+\beta-1} e^{-(\beta+i)t}\, dt\right]$$

$$= \frac{\Gamma(\alpha)}{B(\alpha,\beta)\,\theta^{k}} \sum_{i=0}^{\infty} \frac{(-1)^i\, e^{(\beta+i)(\theta+1)}}{\Gamma(\alpha-i)\,(\theta+1)^{\beta+i}\, i!} \sum_{j=0}^{\infty} \frac{\Gamma(\alpha+\beta)\,(-1)^j\,(\theta+1)^j}{\Gamma(\alpha+\beta-j)\,\Gamma(j-1)} \left[\theta \int_{\theta+1}^{\infty} t^{k-j+\alpha+\beta-1} e^{-(\beta+i)t}\, dt + \int_{\theta+1}^{\infty} t^{k-j+\alpha+\beta} e^{-(\beta+i)t}\, dt\right]. \tag{A.1}$$

Evaluating the integrals in terms of the upper incomplete gamma function, via $\int_{c}^{\infty} t^{s-1} e^{-\lambda t}\, dt = \lambda^{-s}\, \Gamma(s, \lambda c)$, yields

$$E(X^k) = \frac{\Gamma^2(\alpha+\beta)}{\Gamma(\beta)\,\theta^{k}} \sum_{i=0}^{\infty}\sum_{j=0}^{\infty} \frac{(-1)^{i+j}\, e^{(\beta+i)(\theta+1)}}{\Gamma(\alpha-i)\,\Gamma(j-1)\,\Gamma(\alpha+\beta-j)\,(\theta+1)^{\beta+i-j}\, i!\,(\beta+i)^{\alpha+\beta+k-j-1}} \left[\theta\,\Gamma\big(k-j+\alpha+\beta,\ (\theta+1)(\beta+i)\big) + \frac{1}{\beta+i}\,\Gamma\big(k-j+\alpha+\beta+1,\ (\theta+1)(\beta+i)\big)\right]. \tag{A.2}$$

The elements of the Hessian matrix. Writing $F_i(\theta) = 1 - ((\theta+1+\theta x_i)/(\theta+1))\, e^{-\theta x_i}$ and letting primes denote derivatives with respect to $\theta$, one has

$$I_{11} = \frac{\partial^2 \ell}{\partial \alpha^2} = n\psi'(\alpha+\beta) - n\psi'(\alpha), \qquad I_{12} = \frac{\partial^2 \ell}{\partial \alpha\,\partial \beta} = n\psi'(\alpha+\beta),$$

$$I_{13} = \frac{\partial^2 \ell}{\partial \alpha\,\partial \theta} = \sum_{i=1}^{n} \frac{x_i\, e^{-\theta x_i}\big[(\theta+1)(\theta+1+\theta x_i) - 1\big]}{(\theta+1)^2\, F_i(\theta)},$$

$$I_{22} = \frac{\partial^2 \ell}{\partial \beta^2} = n\psi'(\alpha+\beta) - n\psi'(\beta), \qquad I_{23} = \frac{\partial^2 \ell}{\partial \beta\,\partial \theta} = -\frac{n}{\theta+1} + \sum_{i=1}^{n}\frac{1+x_i}{\theta+1+\theta x_i} - \sum_{i=1}^{n} x_i,$$

$$I_{33} = \frac{\partial^2 \ell}{\partial \theta^2} = -\frac{2n}{\theta^2} + \frac{n\beta}{(\theta+1)^2} - (\beta-1)\sum_{i=1}^{n}\frac{(1+x_i)^2}{(\theta+1+\theta x_i)^2} + (\alpha-1)\sum_{i=1}^{n}\left[\frac{F_i''(\theta)}{F_i(\theta)} - \left(\frac{F_i'(\theta)}{F_i(\theta)}\right)^{2}\right], \tag{A.3}$$

where $\psi'(\cdot)$ is the trigamma function and the last sum is written in compact form.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] N. Eugene, C. Lee, and F. Famoye, "Beta-normal distribution and its applications," Communications in Statistics: Theory and Methods, vol. 31, no. 4, pp. 497–512, 2002.
[2] S. Nadarajah and A. K. Gupta, "The beta Fréchet distribution," Far East Journal of Theoretical Statistics, vol. 14, no. 1, pp. 15–24, 2004.
[3] S. Nadarajah and S. Kotz, "The beta Gumbel distribution," Mathematical Problems in Engineering, vol. 2004, no. 4, pp. 323–332, 2004.
[4] S. Nadarajah and S. Kotz, "The beta exponential distribution," Reliability Engineering & System Safety, vol. 91, pp. 689–697, 2005.
[5] M. C. Jones, "Families of distributions arising from distributions of order statistics," Test, vol. 13, no. 1, pp. 1–43, 2004.
[6] W. Barreto-Souza, A. H. S. Santos, and G. M. Cordeiro, "The beta generalized exponential distribution," Journal of Statistical Computation and Simulation, vol. 80, no. 2, pp. 159–172, 2010.
[7] R. R. Pescim, C. G. B. Demétrio, G. M. Cordeiro, E. M. M. Ortega, and M. R. Urbano, "The beta generalized half-normal distribution," Computational Statistics and Data Analysis, vol. 54, no. 4, pp. 945–957, 2010.
[8] G. M. Cordeiro, C. T. Cristino, E. M. Hashimoto, and E. M. M. Ortega, "The beta generalized Rayleigh distribution with applications to lifetime data," Statistical Papers, vol. 54, no. 1, pp. 133–161, 2013.
[9] D. V. Lindley, "Fiducial distributions and Bayes' theorem," Journal of the Royal Statistical Society B, vol. 20, pp. 102–107, 1958.
[10] M. E. Ghitany, B. Atieh, and S. Nadarajah, "Lindley distribution and its application," Mathematics and Computers in Simulation, vol. 78, no. 4, pp. 493–506, 2008.
[11] G. M. Cordeiro and S. Nadarajah, "Closed-form expressions for moments of a class of beta generalized distributions," Brazilian Journal of Probability and Statistics, vol. 25, no. 1, pp. 14–33, 2011.
[12] S. Nadarajah, H. S. Bakouch, and R. Tahmasbi, "A generalized Lindley distribution," Sankhya B: Applied and Interdisciplinary Statistics, vol. 73, no. 2, pp. 331–359, 2011.
[13] J. Swain, S. Venkatraman, and J. Wilson, "Least squares estimation of distribution function in Johnson's translation system," Journal of Statistical Computation and Simulation, vol. 29, pp. 271–297, 1988.
[14] N. L. Johnson, S. Kotz, and N. Balakrishnan, Continuous Univariate Distributions, vol. 2, John Wiley & Sons, 2nd edition, 1995.
[15] J. O. Berger, Statistical Decision Theory and Bayesian Analysis, Springer, 1985.
[16] S. K. Upadhyay and A. Gupta, "A Bayes analysis of modified Weibull distribution via Markov chain Monte Carlo simulation," Journal of Statistical Computation and Simulation, vol. 80, no. 3-4, pp. 241–254, 2010.
[17] A. E. Gelfand and A. F. M. Smith, "Sampling-based approaches to calculating marginal densities," Journal of the American Statistical Association, vol. 85, no. 410, pp. 398–409, 1990.
[18] W. K. Hastings, "Monte Carlo sampling methods using Markov chains and their applications," Biometrika, vol. 57, no. 1, pp. 97–109, 1970.
[19] S. P. Brooks, "Markov chain Monte Carlo method and its application," Journal of the Royal Statistical Society Series D: The Statistician, vol. 47, no. 1, pp. 69–100, 1998.
[20] M. H. Chen and Q. M. Shao, "Monte Carlo estimation of Bayesian credible and HPD intervals," Journal of Computational and Graphical Statistics, vol. 8, no. 1, pp. 69–92, 1999.
[21] S. K. Upadhyay, N. Vasishta, and A. F. M. Smith, "Bayes inference in life testing and reliability via Markov chain Monte Carlo simulation," Sankhya: The Indian Journal of Statistics A, vol. 63, no. 1, pp. 15–40, 2001.
[22] E. T. Lee and J. W. Wang, Statistical Methods for Survival Data Analysis, John Wiley & Sons, New York, NY, USA, 3rd edition, 2003.
[23] T. Bjerkedal, "Acquisition of resistance in guinea pigs infected with different doses of virulent tubercle bacilli," The American Journal of Epidemiology, vol. 72, no. 1, pp. 130–148, 1960.