On testing the coefficient of variation in an inverse Gaussian population

Yogendra P. Chaubey a,∗, Debaraj Sen a, Krishna K. Saha b

a Department of Mathematics and Statistics, Concordia University, Montréal, QC H3G 1M8, Canada
b Department of Mathematical Sciences, Central Connecticut State University, 1615 Stanley Street, New Britain, CT 06050, USA
∗ Corresponding author. E-mail address: [email protected] (Y.P. Chaubey).
http://dx.doi.org/10.1016/j.spl.2014.03.023 © 2014 Elsevier B.V. All rights reserved.

Article history: Received 29 November 2013; Received in revised form 24 March 2014; Accepted 25 March 2014; Available online xxxx.

Abstract: Here we prove that the LR test for one-sided hypotheses concerning the coefficient of variation in an inverse Gaussian family is the UMP invariant test under scale transformation. Some approximations to the CDF of the test statistic are investigated.

Keywords: Maximal invariant; Neyman–Pearson lemma; UMP invariant test
1. Introduction

The inverse Gaussian distribution has received considerable attention as a model for describing positively skewed data after the pioneering work of Tweedie (1957a,b) and the review paper by Folks and Chhikara (1978). The probability density function (pdf) of the inverse Gaussian random variable X is given by

$$f(x\mid\mu,\lambda)=\left(\frac{\lambda}{2\pi x^{3}}\right)^{1/2}\exp\left\{-\frac{\lambda(x-\mu)^{2}}{2\mu^{2}x}\right\},\quad x>0,\ \mu>0,\ \lambda>0,\qquad(1.1)$$

where µ is the mean of the distribution and λ is the dispersion parameter. The above density will be denoted by IG(µ, λ). The corresponding cumulative distribution function (cdf) is conveniently written as

$$F(x\mid\mu,\lambda)=\Phi\left(\sqrt{\frac{\lambda}{x}}\left(\frac{x}{\mu}-1\right)\right)+\exp\left(\frac{2\lambda}{\mu}\right)\Phi\left(-\sqrt{\frac{\lambda}{x}}\left(\frac{x}{\mu}+1\right)\right),\qquad(1.2)$$

where Φ(·) denotes the cdf of a standard normal variable. This expression will be useful for later computations. The following properties are also useful in further discussion:

$$\bar{X}\sim IG(\mu,n\lambda)\quad\text{and}\quad \lambda V\sim\chi^{2}_{n-1},\qquad(1.3)$$

where $\bar{X}=n^{-1}\sum_{i=1}^{n}X_i$ is the sample mean and $V=\sum_{i=1}^{n}\left(\frac{1}{X_i}-\frac{1}{\bar{X}}\right)$ provides an unbiased estimate of 1/λ, namely

$$\widehat{(1/\lambda)}=V/(n-1).\qquad(1.4)$$
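For the later numerical work it is convenient to have the pdf (1.1) and cdf (1.2) in executable form. Below is a minimal R sketch; the function names dinvgauss and pinvgauss are ours, not taken from any package:

```r
# Density (1.1) of IG(mu, lambda); returns 0 for x <= 0.
dinvgauss <- function(x, mu, lambda) {
  d <- numeric(length(x))
  pos <- x > 0
  xp <- x[pos]
  d[pos] <- sqrt(lambda / (2 * pi * xp^3)) *
    exp(-lambda * (xp - mu)^2 / (2 * mu^2 * xp))
  d
}

# Cdf (1.2) of IG(mu, lambda), transcribed exactly as in the text.
pinvgauss <- function(x, mu, lambda) {
  pnorm(sqrt(lambda / x) * (x / mu - 1)) +
    exp(2 * lambda / mu) * pnorm(-sqrt(lambda / x) * (x / mu + 1))
}
```

Note that the factor exp(2λ/µ) in (1.2) can overflow for large λ/µ, a point that resurfaces in Section 3.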
For a broad review and applications of the IG family and other related results, the reader may refer to the texts by Chhikara and Folks (1989) and Seshadri (1993, 1998). This article concerns testing about the coefficient of variation (CV), defined as CV = σ/µ, where σ² is the variance of the population and µ is the mean. For the IG(µ, λ) population, it is given by

$$\mathrm{CV}=\gamma=\sqrt{\mu/\lambda}.\qquad(1.5)$$

Testing problems concerning γ may, equivalently, be considered in terms of the parameter υ = γ² = µ/λ. We are particularly interested in testing H0: υ ≤ υ0 against H1: υ > υ0 for a given υ0. The knowledge of the CV may provide an improved estimator of the mean; see the articles by Searles (1964), Srivastava (1974) and Chaubey and Sen (2008). A test of hypothesis about the CV may therefore be useful, for example, in a preliminary test estimator (see Chaubey and Sen, 2007). Chhikara and Folks (1989) have used the Neyman–Pearson theory in obtaining tests for means of IG populations as well as for the dispersion parameter λ. Chhikara (1972) discussed tests for υ based on the maximum likelihood estimator of υ and its asymptotic distribution. Later, Hsieh (1990) found these approximations not to be of much use, especially for moderate sample sizes, and advocated exact computation of the probabilities associated with the corresponding likelihood ratio test.

In the present note we find that the likelihood ratio test provides the uniformly most powerful invariant test for testing H0: υ ≤ υ0 against H1: υ > υ0 for a given υ0, under the group of scale transformations. In Section 2, we present the main result along with the preliminaries of the theory of invariance in hypothesis testing. Section 3 considers the computational aspects of the exact distribution of the test statistic along with some new approximations. Finally, the last section presents a numerical illustration to demonstrate the operational execution of the test using the exact and approximate distributions of the test statistic.

2. Uniformly most powerful invariant test for H0: υ ≤ υ0 against H1: υ > υ0

Our interest lies in testing hypotheses about the parameter υ = µ/λ, based on a random sample X = (X1, X2, . . . , Xn) from IG(µ, λ). It is well known (see Folks and Chhikara, 1978) that the bivariate statistic

$$S(\mathbf{X})=(\bar{X},V)\qquad(2.1)$$

is completely sufficient for (µ, λ). The likelihood ratio test approach provides the test statistic T(X) = X̄V. We derive this using the theory of invariance for hypothesis testing (see Section 5.6 of Ferguson, 1967) and show that the resulting test is UMP invariant under the group of scale transformations. Note that the null and alternative hypotheses about υ may be written as H0: µ ≤ υ0λ against H1: µ > υ0λ. The main result is given in the following theorems.

Theorem 2.1. For the family of inverse Gaussian densities IG(µ, µ/υ), the uniformly most powerful invariant test of size α for H0: υ ≤ υ0 against H1: υ > υ0, under the group of scale transformations, is given by the decision rule

$$d(\mathbf{X})=\begin{cases}\text{Reject }H_0 & \text{if } T(\mathbf{X})>c\\ \text{Accept }H_0 & \text{if } T(\mathbf{X})\le c,\end{cases}\qquad(2.2)$$

where c is a constant such that P(T(X) > c | υ = υ0) = α.

Proof. In order to prove the above theorem, first we present the basic theory of invariance in hypothesis testing as outlined in Ferguson (1967). Consider the general problem of testing H0: θ ∈ Θ0 against H1: θ ∈ Θ1, where Θ denotes the parameter space of interest, Θ0 ∪ Θ1 = Θ and Θ0 ∩ Θ1 = ∅. We say that a group G of transformations on the sample space leaves the hypothesis-testing problem invariant if the families of distributions under Θ0 and Θ1 are invariant. Next we define a maximal invariant, which plays the key role in the theory of invariance in hypothesis testing.

Definition. Let X be a space and G a group of transformations on X. A function T(x) on X is said to be a maximal invariant with respect to G if
(a) (invariance) T(g(x)) = T(x) for all x ∈ X and g ∈ G;
(b) (maximality) T(x1) = T(x2) implies x1 = g(x2) for some g ∈ G.

The following theorems (see Theorems 1 and 2 in Section 5.6 of Ferguson, 1967) indicate the role of the maximal invariant in the hypothesis-testing problem.

Theorem 2.2. Let X be a space, let G be a group of transformations on X, and let T(x) be a maximal invariant with respect to G. A function d(x) is invariant with respect to G if and only if d is a function of T(x).

Theorem 2.3. Suppose a statistical decision problem is invariant under a group G and let υ(θ) be a maximal invariant under the induced group of transformations Ḡ on Θ. Then if T(x) is invariant under G, the distribution of T(x) depends only on υ(θ).
Thus the invariance approach restricts the class of invariant tests to those which are functions of a maximal invariant. In order to exhibit a UMP invariant test we consider the family of distributions of the sufficient statistic S(X) = (U, V), where U = X̄ and V = Σᵢ(1/Xᵢ − 1/X̄). The statistics U and V are independent, and from (1.3),

$$U \sim IG(\mu, n\mu/\upsilon), \qquad (\mu/\upsilon)V \sim \chi^{2}_{n-1}. \qquad (2.3)$$

The scale transformation of the observations induces the following transformation on the sufficient statistics:

$$g_c(U, V) = (cU, (1/c)V), \quad c > 0. \qquad (2.4)$$

It is easily seen that T(U, V) = UV is a maximal invariant with respect to the group G_c = {g_c}. The induced transformation on the parameter space is given by ḡ_c(µ, λ) = (cµ, cλ), and a maximal invariant with respect to this group is given by υ = µ/λ. Therefore, by Theorem 2.3, the probability distribution of T must depend upon (µ, λ) only through the parameter υ. This can be seen directly by writing T as

$$T = \frac{U}{\mu/\upsilon}\cdot\frac{\mu}{\upsilon}V = ZY, \qquad (2.5)$$

where Z = U/(µ/υ) ∼ IG(υ, n) and Y = (µ/υ)V ∼ χ²_{n−1} are independent. Thus the problem of finding a UMP invariant test of H0 against H1 becomes that of finding a UMP test of H0 against H1 based on T. In order to show that the test exhibited in the theorem is uniformly most powerful, we show that the distribution of T has a monotone likelihood ratio and use the following theorem (see Theorem 1 in Section 5.2 of Ferguson, 1967).

Theorem 2.4. If the distribution of T has a monotone likelihood ratio (MLR), i.e., if the density f(t|θ) of T is such that for θ1 < θ2 the likelihood ratio f(t|θ2)/f(t|θ1) is a non-decreasing function of t on the set of its existence, then any test of the form

$$d(\mathbf{X}) = \begin{cases} \text{Reject } H_0 & \text{if } t > c \\ \text{Accept } H_0 & \text{if } t \le c \end{cases}$$

is UMP of its size for testing H0: θ ≤ θ0 against H1: θ > θ0, for any θ0 ∈ Θ, provided that its size is not zero.

In order to show that f(t|υ) has the MLR property, we first show that for every y > 0 the conditional density f(t|υ, y) of T given Y = y has the MLR property. Toward this end we note that this density is that of IG(yυ, ny), and we have

$$\frac{f(t\mid\upsilon_2, y)}{f(t\mid\upsilon_1, y)} = \exp\left\{\frac{nt}{2y}\left(\frac{1}{\upsilon_1^{2}} - \frac{1}{\upsilon_2^{2}}\right) - n\left(\frac{1}{\upsilon_1} - \frac{1}{\upsilon_2}\right)\right\}, \qquad (2.6)$$

which is an increasing function of t for υ1 < υ2. This implies that for t2 > t1 and υ1 < υ2,

$$f(t_2\mid\upsilon_2, y) \ge f(t_1\mid\upsilon_1, y) \quad \forall y > 0, \qquad (2.7)$$

and therefore

$$\int_0^\infty f(t_2\mid\upsilon_2, y)f_Y(y)\,dy \ \ge\ \int_0^\infty f(t_1\mid\upsilon_1, y)f_Y(y)\,dy. \qquad (2.8)$$

This says that the unconditional density f(t|υ) satisfies

$$f(t_2\mid\upsilon_2) \ge f(t_1\mid\upsilon_1) \qquad (2.9)$$

for t2 > t1 and υ1 < υ2. This establishes the MLR property for f(t|υ) and completes the proof of the theorem.

Fig. 1 depicts the power function β(υ) for α = 0.05 and n = 15, 50 and 100, for H0: υ ≤ 1 against H1: υ > 1. It clearly shows that β(υ) ≤ 0.05 under H0 and β(υ) > 0.05 under H1; moreover, β(υ) is an increasing function of υ. In the following section, we discuss the computation of the distribution of T and some approximations.

[Fig. 1. Power function β(υ) for υ0 = 1 and α = 0.05.]
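As an aside, the power curves in Fig. 1 can be approximated by straightforward Monte Carlo using the representation T = ZY from the proof. The sketch below is ours, not the authors' code; it assumes the Michael–Schucany–Haas (1976) transformation method for generating IG variates:

```r
# Generate IG(mu, lambda) variates (Michael-Schucany-Haas, 1976).
rinvgauss <- function(m, mu, lambda) {
  w <- rnorm(m)^2
  x <- mu + mu^2 * w / (2 * lambda) -
    (mu / (2 * lambda)) * sqrt(4 * mu * lambda * w + mu^2 * w^2)
  u <- runif(m)
  ifelse(u <= mu / (mu + x), x, mu^2 / x)
}

# Draw from T = Z * Y, Z ~ IG(upsilon, n), Y ~ chi^2_{n-1}, as in (2.5).
rT <- function(m, upsilon, n) rinvgauss(m, upsilon, n) * rchisq(m, df = n - 1)

# Monte Carlo power at a given upsilon for the size-alpha test of Theorem 2.1.
power_mc <- function(upsilon, upsilon0 = 1, n = 15, alpha = 0.05, m = 1e5) {
  c_alpha <- quantile(rT(m, upsilon0, n), 1 - alpha)  # critical value under upsilon0
  mean(rT(m, upsilon, n) > c_alpha)
}

power_mc(2, n = 15)  # beta(2) for n = 15; compare with Fig. 1
```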
3. Distribution of the test statistic

3.1. Exact distribution

The probability density function of T can be evaluated using the representation T = ZY, with Z ∼ IG(υ, n) and Y ∼ χ²_{n−1} independent, that was mentioned in the previous section.
This representation can be used to establish the following form of the probability density function of T:

$$f(t\mid\upsilon)=\frac{e^{n/\upsilon}\sqrt{n/(\pi t^{3})}}{2^{(n/2)-1}\Gamma(\nu/2)}\left(\frac{t}{\upsilon}\right)^{n/2}\left(1+\frac{t}{n}\right)^{-n/4}K_{n/2}\!\left(\frac{n}{\upsilon}\sqrt{1+\frac{t}{n}}\right),\qquad(3.1)$$

where ν = n − 1 and K_q(z) is the modified Bessel function of the second kind, which has the representation (see Theorem 2.1 of Glasser et al., 2012)

$$K_q(z)=\frac{z^{q}}{2^{q+1}}\int_{0}^{\infty}u^{-q-1}e^{-u-z^{2}/(4u)}\,du.\qquad(3.2)$$

To arrive at the expression in (3.1), we write f(t|υ) as

$$f(t\mid\upsilon)=\int_{0}^{\infty}\frac{1}{y}\,f_{Z}\!\left(\frac{t}{y}\right)f_{Y}(y)\,dy,\qquad(3.3)$$

where f_X(·) represents the probability density function of X. Using the form of the IG(υ, n) and Chi-square densities, further simplification leads to

$$f(t\mid\upsilon)=\frac{e^{n/\upsilon}\sqrt{n/(2\pi t^{3})}}{2^{\nu/2}\Gamma(\nu/2)}\int_{0}^{\infty}y^{(n/2)-1}e^{-a(t)/y-b(t)y}\,dy,\qquad(3.4)$$

where

$$a(t)=\frac{nt}{2\upsilon^{2}}\quad\text{and}\quad b(t)=\frac{1}{2}\left(1+\frac{n}{t}\right).\qquad(3.5)$$

Now we use the formula (see Glasser et al., 2012)

$$\int_{0}^{\infty}w^{q-1}\exp\left(-\frac{a}{w}-bw\right)dw=2\left(\frac{a}{b}\right)^{q/2}K_q(2\sqrt{ab})\qquad(3.6)$$

and get

$$f(t\mid\upsilon)=\frac{e^{n/\upsilon}\sqrt{n/(\pi t^{3})}}{2^{(n/2)-1}\Gamma(\nu/2)}\left(\frac{a(t)}{b(t)}\right)^{n/4}K_{n/2}\!\left(2\sqrt{a(t)b(t)}\right).\qquad(3.7)$$

Further simplification of this yields the result in (3.1).
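Since R's built-in besselK evaluates K_q directly, (3.1) can be coded as written; a sketch (the name dT is ours; for very small υ the factor exp(n/υ) may overflow, in which case besselK(..., expon.scaled = TRUE) can be combined with the exponent on the log scale):

```r
# Density (3.1) of T = Z * Y; nu = n - 1.
dT <- function(t, upsilon, n) {
  nu <- n - 1
  exp(n / upsilon) * sqrt(n / (pi * t^3)) / (2^(n / 2 - 1) * gamma(nu / 2)) *
    (t / upsilon)^(n / 2) * (1 + t / n)^(-n / 4) *
    besselK((n / upsilon) * sqrt(1 + t / n), nu = n / 2)
}

# Sanity check: the density should integrate to 1.
integrate(dT, 0, Inf, upsilon = 1, n = 10)
```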
In application of the test procedure, we propose to use the p-value, which requires the computation of the distribution function of T. This can be facilitated using numerical integration software. In our computations for comparing the distributional approximations, we have used the following formulae, implemented in the R software (Ihaka and Gentleman, 1996):

$$F(t\mid\upsilon)=P(T\le t)=\int_{0}^{\infty}P\!\left(Y\le\frac{t}{z}\right)f_{Z}(z)\,dz\qquad(3.8)$$

or

$$F(t\mid\upsilon)=P(T\le t)=\int_{0}^{\infty}P\!\left(Z\le\frac{t}{y}\right)f_{Y}(y)\,dy.\qquad(3.9)$$
For the formula (3.8), the internal R function pgamma is used in computing P(Y ≤ t/z) when defining the integrand; for the formula (3.9), the following expression for the distribution function of IG(υ, n) (see (1.2)) is used in computing P(Z ≤ t/y):

$$P(Z\le z)=\Phi\left(\sqrt{\frac{n}{z}}\left(\frac{z}{\upsilon}-1\right)\right)+\exp\left(\frac{2n}{\upsilon}\right)\Phi\left(-\sqrt{\frac{n}{z}}\left(\frac{z}{\upsilon}+1\right)\right),\qquad(3.10)$$

which can be computed using the R function pnorm. However, this route is not useful for small values of υ and/or large values of n, because exp(2n/υ) can overflow in these cases. Hence the formula (3.8) is advocated for the exact computation of the probabilities related to T. In the next section we discuss approximations for computing F_T(t) in terms of inverse Gaussian and Gamma distributions that are easily available in modern statistical software.
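A direct transcription of (3.8) into R, reusing the dinvgauss helper sketched in Section 1 (we call pchisq in place of the equivalent pgamma call, since χ²_ν is Gamma(ν/2, scale = 2)):

```r
# Exact cdf (3.8) of T: integrate P(Y <= t/z) against the density of Z ~ IG(upsilon, n).
pT <- function(t, upsilon, n) {
  integrand <- function(z) pchisq(t / z, df = n - 1) * dinvgauss(z, upsilon, n)
  integrate(integrand, lower = 0, upper = Inf)$value
}
```

Unlike the pnorm route through (3.10), this form avoids the exp(2n/υ) overflow, which is why (3.8) is the one advocated above.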
3.2. Approximations
The approximations for the test statistic T are moment-based, guided by the limiting behavior of the components Z and Y. Using the moment properties of the inverse Gaussian and Chi-square distributions, we give below the first three cumulants of the statistic T:

$$E(T)=\upsilon\nu,\qquad(3.11)$$

$$V(T)=\frac{\nu\{\upsilon^{3}(2+\nu)+2n\upsilon^{2}\}}{n},\qquad(3.12)$$

$$k_3(T)=(\nu^{3}+6\nu^{2}+8\nu)\left(\upsilon^{3}+\frac{3\upsilon^{4}}{n}+\frac{3\upsilon^{5}}{n^{2}}\right)-3\upsilon\nu(\nu^{2}+2\nu)\left(\upsilon^{2}+\frac{\upsilon^{3}}{n}\right)+2\upsilon^{3}\nu^{3},\qquad(3.13)$$

where ν = n − 1.

3.2.1. First approximation

Noting that for large n, Y/ν converges to 1 with probability 1, we may approximate T by an IG random variable. Since E(T/(υν)) = 1, we approximate the distribution of T/(υν) by that of IG(1, λ1), where λ1 is obtained by equating the second moments. Since Var(T/(υν)) = 1/λ1,

$$\lambda_1=\frac{n\nu}{(2+\nu)\upsilon+2n}.$$

Note that for large n, λ1 ≈ n/υ; therefore, for large n,

$$T/\nu\ \approx\ IG(\upsilon,n).\qquad(3.14)$$

3.2.2. Second approximation

Using a similar approach with the distribution of Z, instead of that of Y, we can approximate T by a multiple of a Chi-square random variable. That is, we have approximately T/(υν) ∼ Gamma(α, β), where αβ = 1 and αβ² = Var(T/(υν)). This gives

$$\beta=\frac{(2+\nu)\upsilon+2n}{n\nu}=1/\lambda_1\quad\text{and}\quad\alpha=\lambda_1.\qquad(3.15)$$
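The cumulants (3.11)–(3.13) and the resulting λ1 translate directly into R; a minimal sketch (the name cumT is ours):

```r
# First three cumulants of T, per (3.11)-(3.13), and lambda1 of Section 3.2.1.
cumT <- function(upsilon, n) {
  nu <- n - 1
  list(ET = upsilon * nu,
       VT = nu * (upsilon^3 * (2 + nu) + 2 * n * upsilon^2) / n,
       k3 = (nu^3 + 6 * nu^2 + 8 * nu) *
              (upsilon^3 + 3 * upsilon^4 / n + 3 * upsilon^5 / n^2) -
            3 * upsilon * nu * (nu^2 + 2 * nu) * (upsilon^2 + upsilon^3 / n) +
            2 * upsilon^3 * nu^3,
       lambda1 = n * nu / ((2 + nu) * upsilon + 2 * n))
}
```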
3.2.3. Third approximation

This approximation combines the strengths of the above approximations through a mixture approach, where

$$\frac{T}{\upsilon\nu}\ \sim\ \omega\, IG(1,\lambda_1)\ \oplus\ (1-\omega)\, \mathrm{Gamma}\!\left(\lambda_1,\frac{1}{\lambda_1}\right),\qquad(3.16)$$

with ω being the mixing coefficient to be determined appropriately, and ⊕ denoting that the distribution (and consequently the pdf) is the mixture of those of the constituent random variables. Therefore

$$P\!\left(\frac{T}{\upsilon\nu}\le x\right)=\omega\,[\text{cdf of } IG(1,\lambda_1)]+(1-\omega)\,[\text{cdf of Gamma}(\lambda_1,1/\lambda_1)].\qquad(3.17)$$

To obtain the value of the mixing coefficient ω, we equate the third central moments:

$$E\!\left[\frac{T}{\upsilon\nu}-1\right]^{3}=\omega\,E[IG(1,\lambda_1)-1]^{3}+(1-\omega)\,E[\mathrm{Gamma}(\lambda_1,1/\lambda_1)-1]^{3},\qquad(3.18)$$

that is, in terms of the third cumulants,

$$k_3\!\left(\frac{T}{\upsilon\nu}\right)=\omega\,k_3[IG(1,\lambda_1)]+(1-\omega)\,k_3\!\left[\mathrm{Gamma}\!\left(\lambda_1,\frac{1}{\lambda_1}\right)\right].\qquad(3.19)$$

Noting that k3[IG(1, λ)] = 3/λ² and k3[Gamma(λ, 1/λ)] = 2/λ², we get

$$\omega=\frac{\lambda_1^{2}\,k_3(T)}{\upsilon^{3}\nu^{3}}-2.\qquad(3.20)$$

The simplicity of the mixture approximation lies in approximating the percentiles simply by mixtures of the percentiles of the components (see Chaubey, 1989), i.e.

$$t_\alpha=(\upsilon\nu)\left[\omega X_\alpha^{(1)}+(1-\omega)X_\alpha^{(2)}\right],\qquad(3.21)$$

where X_α^(1) denotes the α-percentile of IG(1, λ1) and X_α^(2) that of Gamma(λ1, 1/λ1).
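Putting (3.16)–(3.21) together, the mixture cdf and percentile approximation can be sketched as follows, reusing cumT and pinvgauss from above (the numerical inversion of pinvgauss for the IG percentile is our implementation choice, not the authors'):

```r
# Mixture approximation (3.16)-(3.21), on the standardized scale T/(upsilon*nu).
mix_approx <- function(upsilon, n) {
  nu <- n - 1
  cm <- cumT(upsilon, n)
  lam <- cm$lambda1
  omega <- lam^2 * cm$k3 / (upsilon^3 * nu^3) - 2                # (3.20)
  cdf <- function(x)                                             # (3.17)
    omega * pinvgauss(x, 1, lam) +
      (1 - omega) * pgamma(x, shape = lam, scale = 1 / lam)
  pctl <- function(p) {                                          # (3.21), divided by upsilon*nu
    x1 <- uniroot(function(x) pinvgauss(x, 1, lam) - p, c(1e-8, 100))$root
    x2 <- qgamma(p, shape = lam, scale = 1 / lam)
    omega * x1 + (1 - omega) * x2
  }
  list(omega = omega, cdf = cdf, pctl = pctl)
}

mix_approx(0.1, 20)$pctl(0.95)  # cf. the mixture entries in Table 1
```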
3.3. Comparison of the approximations

For a qualitative assessment of the three approximations, the exact distribution function and the corresponding approximations were computed for a selection of values of the sample size n and of the exact value of the CV, as given in terms of υ. Figs. 2 and 3 display these for sample sizes 20 and 50, respectively, with the corresponding errors shown in the accompanying box plots. From the graphs of the distribution functions it is difficult to discriminate between the approximations, but the error plots are quite useful in assessing their quality. We also compare the exact values of the percentiles for different values of υ0 and α = 0.01 and 0.05, as displayed in Table 1. The computations were performed using programs developed by the authors in R (see Ihaka and Gentleman, 1996) that may be obtained from http://alcor.concordia.ca/~chaubey/R-Programs.pdf.

From the graphs of the distribution functions in Figs. 2 and 3, all three approximations appear to provide similar qualitative performance. However, the accompanying error plots show that the IG approximation is not as good as the χ² approximation for small υ. For larger values of υ the IG approximation is quite good; the mixture approximation, however, may provide substantial improvement over both, yielding a very accurate approximation. The mixture approximation also provides a fairly accurate (up to two decimals) approximation to the percentiles, as seen from Table 1.

[Fig. 2. Exact and approximate distributions of T and their error analysis for n = 20.]
[Fig. 3. Exact and approximate distributions of T and their error analysis for n = 50.]
Table 1
A comparison of the mixture approximation for the percentiles of the test statistic T for different sample sizes.

                      n = 10      n = 20      n = 30      n = 50
υ = 0.1, α = 0.05
  Exact               1.907226    1.604352    1.481422    1.364071
  IG approx.          1.928325    1.620498    1.493704    1.372344
  Gamma approx.       1.906014    1.602906    1.480233    1.363272
  Mixture approx.     1.908313    1.604630    1.481531    1.364133
υ = 0.3, α = 0.05
  Exact               1.959070    1.638569    1.508182    1.383821
  IG approx.          1.978577    1.653179    1.519290    1.391281
  Gamma approx.       1.956529    1.634673    1.504930    1.381517
  Mixture approx.     1.962573    1.639464    1.508575    1.383957
υ = 0.1, α = 0.01
  Exact               2.470374    1.940409    1.736048    1.547014
  IG approx.          2.602469    2.009526    1.782771    1.575322
  Gamma approx.       2.453586    1.932056    1.730547    1.543780
  Mixture approx.     2.468925    1.939647    1.735577    1.546775
υ = 0.3, α = 0.01
  Exact               2.590431    2.008769    1.786265    1.581772
  IG approx.          2.705068    2.069618    1.827709    1.607070
  Gamma approx.       2.543698    1.985152    1.770659    1.572561
  Mixture approx.     2.587937    2.007021    1.785142    1.581184
4. A numerical example

The following data from Chhikara and Folks (1977) on active repair times (in hours) of an airborne communication transceiver are used to illustrate the testing procedure. The data, presented in Table 2, were found to fit the IG model.
Table 2
Active repair times (in hours) of an airborne communication transceiver.

0.2  0.3  0.5  0.5  0.5   0.5   0.6   0.6  0.7  0.7  0.7  0.8  0.8
1.0  1.0  1.0  1.0  1.1   1.3   1.5   1.5  1.5  1.5  2.0  2.0  2.2
2.5  2.7  3.0  3.0  3.3   3.3   4.0   4.0  4.5  4.7  5.0  5.4  5.4
7.0  7.5  8.8  9.0  10.3  22.0  24.5
Let us test H0: υ ≤ 2 against H1: υ > 2. Here n = 46, ΣXᵢ = 165.9 and Σ(1/Xᵢ) = 40.48467. Hence X̄ = 3.6065, V = 27.73, and the test statistic is T = X̄V = 100.0082. The modified statistic is T* = T/((n − 1)υ0) = 1.1112. The corresponding p-value is obtained using a program for the exact distribution function developed in R. This gives P(T* ≤ 1.1112) = 0.68913, i.e., the p-value is 0.31087. This value is quite large for a 1% level of significance, and therefore the hypothesis that the squared CV is less than or equal to 2 is accepted. This is not a surprising result for these data, as the unbiased estimate of υ is υ̂ = T/(n − 1) = 2.2224. It may be noted that the p-value obtained using the mixture approximation discussed here is almost the same as the exact value, namely 0.3109.
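For reference, the arithmetic of this example can be reproduced with the helpers sketched earlier (pT from Section 3.1); this is our sketch, not the authors' posted program:

```r
n <- 46; sum_x <- 165.9; sum_recip <- 40.48467; upsilon0 <- 2
xbar  <- sum_x / n                       # 3.6065
V     <- sum_recip - n / xbar            # 27.73
Tstat <- xbar * V                        # 100.0082
Tstar <- Tstat / ((n - 1) * upsilon0)    # 1.1112
pval  <- 1 - pT(Tstat, upsilon0, n)      # exact p-value, approximately 0.31087
```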
Acknowledgments

We acknowledge the partial support of this research through a Discovery Grant from NSERC, Canada. The authors are thankful to an anonymous referee for some useful comments that have improved the presentation of an earlier version of this manuscript. They would also like to thank the Co-Editor-in-Chief, Prof. Hira Koul, for his encouragement.
References

Chaubey, Y.P., 1989. Edgeworth expansions with mixtures. Metron XLVII, 53–64.
Chaubey, Y.P., Sen, D., 2007. Properties of a preliminary test estimator of an inverse Gaussian population mean. In: Pandey, B.N. (Ed.), Statistical Techniques in Life-Testing, Reliability, Sampling Theory and Quality Control. Narosa Pub. House, New Delhi, India.
Chaubey, Y.P., Sen, D., 2008. Estimator of mean in an inverse Gaussian population based on the coefficient of variation. J. Statist. Res. 42, 1–16.
Chhikara, R.S., 1972. Statistical inference related to the inverse Gaussian distribution. Ph.D. Dissertation, Oklahoma State University.
Chhikara, R.S., Folks, J.L., 1977. The inverse Gaussian distribution as a lifetime model. Technometrics 19, 461–468.
Chhikara, R.S., Folks, J.L., 1989. The Inverse Gaussian Distribution: Theory, Methodology and Applications. Marcel Dekker, New York.
Ferguson, T.S., 1967. Mathematical Statistics: A Decision Theoretic Approach. Academic Press, New York.
Folks, J.L., Chhikara, R.S., 1978. The inverse Gaussian distribution and its statistical application—a review. J. R. Stat. Soc. Ser. B 40, 263–289.
Glasser, L., Kohl, K.T., Koutschan, C., Moll, V.H., Straub, A., 2012. The integrals in Gradshteyn and Ryzhik. Part 22: Bessel K-functions. Scientia A 22, 129–151.
Hsieh, H.K., 1990. Inferences on the coefficient of variation of an inverse Gaussian distribution. Comm. Statist. Theory Methods 19, 1589–1605.
Ihaka, R., Gentleman, R., 1996. R: a language for data analysis and graphics. J. Comput. Graph. Statist. 5, 299–314.
Searles, D.T., 1964. The utilization of a known coefficient of variation in the estimation procedure. J. Amer. Statist. Assoc. 59, 1225–1226.
Seshadri, V., 1993. The Inverse Gaussian Distribution: A Case Study in Exponential Families. Clarendon Press, Oxford.
Seshadri, V., 1998. The Inverse Gaussian Distribution: Statistical Theory and Applications. Springer-Verlag, New York.
Srivastava, V.K., 1974. On the use of coefficient of variation in estimating normal mean. J. Indian Soc. Agricultural Statist. 26, 33–36.
Tweedie, M.C.K., 1957a. Statistical properties of inverse Gaussian distributions—I. Ann. Math. Statist. 28, 362–377.
Tweedie, M.C.K., 1957b. Statistical properties of inverse Gaussian distributions—II. Ann. Math. Statist. 28, 696–705.