Computing Fractional Bayes Factor Using the Generalized Savage-Dickey Density Ratio

Younshik Chung and Sangjeen Lee
Department of Statistics, Pusan National University, Pusan, 609-735 Korea

This research was supported (in part) by the Matching Fund Programs of Research Institute for Basic Sciences, Pusan National University, Korea, 1998.

ABSTRACT. A computing method for the fractional Bayes factor (FBF) for a point null hypothesis is explained. Using the generalized Savage-Dickey density ratio, we propose an alternative form of the FBF that is the product of a marginal density ratio and a ratio of expectations. When it is difficult to compute this alternative form analytically, each term of the proposed form can be estimated by MCMC methods. Finally, two examples are given.

KEYWORDS: Fractional Bayes factor; Generalized Savage-Dickey density ratio; Gibbs sampling; Hypothesis testing.

1. Introduction

The Bayesian approach to testing hypotheses was developed by Jeffreys (1961) as a major part of his program for scientific inference. The Bayes factor has been the main Bayesian tool for model selection and hypothesis testing, but it has a drawback: it is hard to use with improper priors.

Consider a statistical model with data $Y$ and parameter vector $\theta$. Suppose that we wish to test the null hypothesis $H_0$ versus the alternative $H_1$, under which $Y$ has probability density $f_0(Y|\theta_0)$ or $f_1(Y|\theta_1)$, respectively. Given prior probabilities $p(H_0)$ and $p(H_1) = 1 - p(H_0)$, the data $Y$ produce posterior probabilities $p(H_0|Y)$ and $p(H_1|Y)$. The Bayes factor, $B_{01}(Y)$, in favor of $H_0$ is

  $B_{01}(Y) = \frac{p(H_0|Y)/p(H_1|Y)}{p(H_0)/p(H_1)} = \frac{m_0(Y)}{m_1(Y)},$   (1.1)

where $m_i(Y) = \int \pi_i(\theta_i) f_i(Y|\theta_i)\, d\theta_i$ is the marginal density of $Y$ under model $H_i$ and $\pi_i(\theta_i)$ is the prior density of $\theta_i$ under $H_i$, for $i = 0, 1$.

Bayes factors typically depend much more strongly on the prior distributions than estimates do. For instance, as the sample size grows, the influence of the prior distribution disappears in estimation, but it does not in hypothesis testing or model selection. In particular, when improper priors are used, it is difficult to compare hypotheses or models by the Bayes factor. An improper prior for $\theta_i$ is usually written as $\pi_i(\theta_i) \propto g_i(\theta_i)$, where $g_i$ is a non-integrable function over the $\theta_i$-space; for example, a location noninformative prior is given by $\pi_i(\theta_i) \propto 1$. That is, it can be expressed as

  $\pi_i(\theta_i) = c_i g_i(\theta_i), \quad i = 0, 1.$

Since there is no normalizing constant, $c_i$ must be treated as an unspecified constant. However, the Bayes factor in favor of $H_0$ with respect to these priors,

  $B_{01}(Y) = \frac{c_0}{c_1} \cdot \frac{\int g_0(\theta_0) f_0(Y|\theta_0)\, d\theta_0}{\int g_1(\theta_1) f_1(Y|\theta_1)\, d\theta_1},$

depends on the arbitrary ratio $c_0/c_1$.

Recently, various approaches have been advocated to overcome this problem. Aitkin (1991) proposed the posterior Bayes factor, which uses the proper posterior distribution instead of the improper prior distribution; in this approach the data are used twice. As an alternative, the concept of a partial Bayes factor has been considered. Berger and Pericchi (1996) provided the intrinsic Bayes factor, based on the (imaginary) training sample idea of Smith and Spiegelhalter (1980) and Spiegelhalter and Smith (1982). The fractional Bayes factor (FBF) method was proposed by O'Hagan (1995). The FBF in favor of $H_0$,

  $B_r(Y) = \frac{q_0(r, Y)}{q_1(r, Y)},$   (1.2)

is simple and has practical merits, where, for $i = 0, 1$,

  $q_i(r, Y) = \frac{\int \pi_i(\theta_i) f_i(Y|\theta_i)\, d\theta_i}{\int \pi_i(\theta_i) f_i^r(Y|\theta_i)\, d\theta_i}.$

For the choice of the fraction $r$, refer to O'Hagan (1995); in particular, possible choices are $r = m/n$, $n^{-1}\log n$, or $n^{-1/2}$, where $n$ is the original sample size and $m$ is the minimal training sample size defined in Berger and Pericchi (1996).

In Section 2, we review Verdinelli and Wasserman's (1995) method for computing the Bayes factor using the generalized Savage-Dickey density ratio, and then suggest a computing method for the fractional Bayes factor (FBF) for testing a point null hypothesis. In Section 3, the suggested computing method is applied to the variance ratio $\eta$ of a random effects model and to the mean parameter of a truncated normal distribution involving censored data.
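To make the definition (1.2) concrete, the following small sketch (ours, not from the paper) evaluates the FBF by numerical quadrature for testing a normal mean with known variance, $H_0: \theta = 0$ versus $H_1: \theta \neq 0$, under the improper prior $\pi_1(\theta) \propto 1$. The data, the integration limits, and the fraction $r = n^{-1/2}$ are illustrative assumptions. Note that the unspecified constant $c_1$ cancels inside $q_1(r, Y)$, which is exactly the point of the fractional approach.

```python
# A minimal numerical sketch (not from the paper) of O'Hagan's FBF (1.2):
# testing H0: theta = 0 vs H1: theta != 0 for N(theta, 1) data with the
# improper prior pi_1(theta) proportional to 1.
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=10)          # toy data generated under H0
n = len(x)
r = 1.0 / np.sqrt(n)                        # one of the suggested fractions

def lik(theta, power=1.0):
    """Likelihood of the N(theta, 1) sample raised to the given power."""
    return np.exp(power * np.sum(-0.5 * (x - theta) ** 2 - 0.5 * np.log(2 * np.pi)))

# H0 has no free parameter, so q_0(r, Y) = f(Y|0) / f(Y|0)^r.
q0 = lik(0.0) / lik(0.0, power=r)
# H1: integrate the (powered) likelihood over theta; the constant c_1 cancels.
q1 = quad(lik, -10, 10)[0] / quad(lambda t: lik(t, power=r), -10, 10)[0]

print("FBF in favor of H0:", q0 / q1)
```

For models with more parameters these integrals are rarely tractable, which motivates the MCMC estimators developed in Section 2.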

2. Computing Fractional Bayes Factor

Consider a parameter $\theta = (\omega, \psi) \in \Theta = \Omega \times \Psi$ and suppose that we wish to test the null hypothesis $H_0: \omega = \omega_0$ versus the alternative $H_1: \omega \neq \omega_0$. Suppose that $\pi_0(\psi)$ is the prior density for $\psi$ under $H_0$ and $\pi_1(\omega, \psi)$ is the prior density for $(\omega, \psi)$ under $H_1$. Let $r$ be the fraction of the fractional Bayes factor explained in equation (1.2). The FBF, $B_r(Y)$, in favor of $H_0$ is expressed as

  $B_r(Y) = \frac{q_0(r, Y)}{q_1(r, Y)} = \frac{\int \pi_0(\psi) f(Y|\omega_0, \psi)\, d\psi \,/\, \int \pi_0(\psi) f^r(Y|\omega_0, \psi)\, d\psi}{\int\!\!\int \pi_1(\omega, \psi) f(Y|\omega, \psi)\, d\omega\, d\psi \,/\, \int\!\!\int \pi_1(\omega, \psi) f^r(Y|\omega, \psi)\, d\omega\, d\psi}.$   (2.1)

We can regard $f^r(Y|\omega_0, \psi)$ and $f^r(Y|\omega, \psi)$ as sampling densities by normalizing them. That is, if we let $1/c_0(r, \psi) = \int f^r(Y|\omega_0, \psi)\, dY$ and $1/c_1(r, \omega, \psi) = \int f^r(Y|\omega, \psi)\, dY$, then $c_0(r, \psi) f^r(Y|\omega_0, \psi)$ and $c_1(r, \omega, \psi) f^r(Y|\omega, \psi)$ are densities for $Y$ given $(\omega_0, \psi)$ and $(\omega, \psi)$, respectively. Thus equation (2.1) can be written as

  $B_r(Y) = \frac{\int \pi_0(\psi) f(Y|\omega_0, \psi)\, d\psi}{\int\!\!\int \pi_1(\omega, \psi) f(Y|\omega, \psi)\, d\omega\, d\psi} \times \frac{\int\!\!\int \pi_1(\omega, \psi) f^r(Y|\omega, \psi)\, d\omega\, d\psi}{\int \pi_0(\psi) f^r(Y|\omega_0, \psi)\, d\psi}$
  $\phantom{B_r(Y)} = \frac{m_0(Y)}{m_1(Y)} \times \frac{m_{1,r}(Y)}{m_{0,r}(Y)} = BF_{01}(Y) \times BF_{10}^r(Y),$   (2.2)

where $m_{0,r}(Y) = \int \pi_0(\psi) f^r(Y|\omega_0, \psi)\, d\psi$, $m_{1,r}(Y) = \int\!\!\int \pi_1(\omega, \psi) f^r(Y|\omega, \psi)\, d\omega\, d\psi$, $BF_{01}(Y) = m_0(Y)/m_1(Y)$, and $BF_{10}^r(Y) = m_{1,r}(Y)/m_{0,r}(Y)$.

Note that $m_{0,r}(Y)$ and $m_{1,r}(Y)$ are not marginal densities, since $f^r(Y|\omega_0, \psi)$ and $f^r(Y|\omega, \psi)$ carry no normalizing constants. But $m_{0,r}(Y)$ can be regarded as the marginal density of $Y$ with sampling density $c_0(r, \psi) f^r(Y|\omega_0, \psi)$ and prior $\pi_0(\psi)/c_0(r, \psi)$, and $m_{1,r}(Y)$ can be viewed in the same way. Thus the FBF in (2.2) is the product of two Bayes factors, and we can compute it by computing each Bayes factor and multiplying the results.

Dickey (1971) showed that if $\pi_1(\psi|\omega_0) = \pi_0(\psi)$, then the Bayes factor is $\pi_1(\omega_0|Y)/\pi_1(\omega_0)$, where $\pi_1(\omega|Y)$ and $\pi_1(\omega)$ are the marginal posterior density and the marginal prior density of $\omega$ under $H_1$, respectively. Dickey (1971) attributed this formula to Savage and called the expression Savage's density ratio. If Dickey's condition is not satisfied, we can apply Verdinelli and Wasserman's approach, which is called the generalized Savage-Dickey density ratio method.

Lemma 2.1. (Verdinelli and Wasserman, 1995) Assume that $0 < \pi_1(\omega_0|Y), \pi_1(\omega_0, \psi) < \infty$ for almost all $\psi$. Then

  $BF_{01}(Y) = \frac{\pi_1(\omega_0|Y)}{\pi_1(\omega_0)} \, E\!\left[\frac{\pi_0(\psi)}{\pi_1(\psi|\omega_0)}\right] = \pi_1(\omega_0|Y) \, E\!\left[\frac{\pi_0(\psi)}{\pi_1(\omega_0, \psi)}\right],$

assuming that the expectation, which is taken with respect to $\pi_1(\psi|\omega_0, Y)$, is finite.

Proof. See Verdinelli and Wasserman (1995).

1

1

;r

1

Z

1

 ;r (! jY) =  ;r (! ;  jY)d; and  ;r ( j! ; Y) = ;r (!(!; jYjY) ) : 1

0

1

0

1

1

0

0

;r

1

0

Note that  ;r (!;  jY) can be regarded as posterior density of ! and  given !;  c (r; !;  )f r (Yj!;  ) Y, because  (!;  )f r(Yj!;  ) can be expressed as c11r;!; which are the product of prior and likelihood where c (r; !;  ) is de ned as before. But, in practice the computation of c (r; !;  ) is not needed. By applying Lemma 2.1 to the FBF in (2.2), we have the following theorem. 1

(

1

(

) ) 1

1

1

Theorem 2.2. Assume that the following densities and expectations are finite. Then the fractional Bayes factor in (2.1) is

  $B_r(Y) = \frac{\pi_1(\omega_0|Y)}{\pi_{1,r}(\omega_0|Y)} \times \frac{E^{\pi_1(\psi|\omega_0, Y)}\!\left[\pi_0(\psi)/\pi_1(\psi|\omega_0)\right]}{E^{\pi_{1,r}(\psi|\omega_0, Y)}\!\left[\pi_0(\psi)/\pi_1(\psi|\omega_0)\right]}.$   (2.3)

Proof. By Lemma 2.1, we can write

  $BF_{01}(Y) = \frac{\pi_1(\omega_0|Y)}{\pi_1(\omega_0)} \, E^{\pi_1(\psi|\omega_0, Y)}\!\left[\frac{\pi_0(\psi)}{\pi_1(\psi|\omega_0)}\right]$

and

  $1/BF_{10}^r(Y) = \frac{m_{0,r}(Y)}{m_{1,r}(Y)} = \int \frac{\pi_0(\psi) f^r(Y|\omega_0, \psi)}{m_{1,r}(Y)}\, d\psi.$

Since $\pi_{1,r}(\psi|\omega_0, Y) = \pi_{1,r}(\omega_0, \psi|Y)/\pi_{1,r}(\omega_0|Y)$,

  $1/BF_{10}^r(Y) = \pi_{1,r}(\omega_0|Y) \int \frac{\pi_0(\psi) f^r(Y|\omega_0, \psi)}{m_{1,r}(Y)} \cdot \frac{\pi_{1,r}(\psi|\omega_0, Y)}{\pi_{1,r}(\omega_0, \psi|Y)}\, d\psi$
  $\phantom{1/BF_{10}^r(Y)} = \pi_{1,r}(\omega_0|Y) \int \frac{\pi_0(\psi)}{\pi_1(\omega_0, \psi)}\, \pi_{1,r}(\psi|\omega_0, Y)\, d\psi$
  $\phantom{1/BF_{10}^r(Y)} = \frac{\pi_{1,r}(\omega_0|Y)}{\pi_1(\omega_0)} \, E^{\pi_{1,r}(\psi|\omega_0, Y)}\!\left[\frac{\pi_0(\psi)}{\pi_1(\psi|\omega_0)}\right],$

using $\pi_{1,r}(\omega_0, \psi|Y) = \pi_1(\omega_0, \psi) f^r(Y|\omega_0, \psi)/m_{1,r}(Y)$ and $\pi_1(\omega_0, \psi) = \pi_1(\psi|\omega_0)\pi_1(\omega_0)$. Therefore, it follows from (2.2) that

  $B_r(Y) = BF_{01}(Y) \times BF_{10}^r(Y) = \frac{\pi_1(\omega_0|Y)}{\pi_{1,r}(\omega_0|Y)} \times \frac{E^{\pi_1(\psi|\omega_0, Y)}\!\left[\pi_0(\psi)/\pi_1(\psi|\omega_0)\right]}{E^{\pi_{1,r}(\psi|\omega_0, Y)}\!\left[\pi_0(\psi)/\pi_1(\psi|\omega_0)\right]}.$

Remark 2.3. As a special case, if $\pi_1(\psi|\omega_0) = \pi_0(\psi)$, then equation (2.3) simplifies to

  $B_r(Y) = \frac{\pi_1(\omega_0|Y)}{\pi_{1,r}(\omega_0|Y)}.$   (2.4)

If it is not easy to compute $B_r(Y)$ in (2.3) analytically, we can use the Gibbs sampler (Gelfand and Smith, 1990), which is a powerful method for overcoming the difficulty of Bayesian computation. Let $\{(\omega_r^{(1)}, \psi_r^{(1)}), \ldots, (\omega_r^{(G)}, \psi_r^{(G)})\}$ be a sample from the posterior $\pi_{1,r}(\omega, \psi|Y)$ obtained by an MCMC method.

First we consider the computation of $\pi_1(\omega_0|Y)$ and $\pi_{1,r}(\omega_0|Y)$. Since the numerical estimation of the two is similar, and $\pi_{1,r}(\omega|Y) = \pi_1(\omega|Y)$ if $r = 1$, only the case of $\pi_{1,r}(\omega_0|Y)$ is presented. If $\pi_{1,r}(\omega|\psi, Y)$ is available in closed form, then the marginal posterior density of $\omega$ evaluated at $\omega_0$ is estimated by the Rao-Blackwellized estimator of Gelfand and Smith (1990),

  $\hat{\pi}_{1,r}(\omega_0|Y) = \frac{1}{G} \sum_{i=1}^G \pi_{1,r}(\omega_0|Y, \psi_r^{(i)}).$

If $\pi_{1,r}(\omega|\psi, Y)$ is not available in closed form, we use the alternative method of Chen (1994),

  $\hat{\pi}_{1,r}(\omega_0|Y) = \frac{1}{G} \sum_{i=1}^G q(\omega_r^{(i)}|\psi_r^{(i)})\, \frac{g(\omega_0, \psi_r^{(i)}|Y)}{g(\omega_r^{(i)}, \psi_r^{(i)}|Y)},$

where $g(\omega, \psi|Y) = f^r(Y|\omega, \psi)\, \pi_1(\omega, \psi)$ and $q(\omega, \psi)$ is an arbitrary probability density function with conditional density $q(\omega|\psi)$. Chen (1994) proposed that in some cases a reasonable choice for $q$ is a normal density whose mean and covariance are the sample mean and sample covariance of $\{(\omega_r^{(1)}, \psi_r^{(1)}), (\omega_r^{(2)}, \psi_r^{(2)}), \ldots, (\omega_r^{(G)}, \psi_r^{(G)})\}$.

Next, in order to estimate $C_r = E^{\pi_{1,r}(\psi|\omega_0, Y)}[\pi_0(\psi)/\pi_1(\omega_0, \psi)]$, we draw a sample $\{\psi_r^{(1)}, \ldots, \psi_r^{(G)}\}$ from $\pi_{1,r}(\psi|\omega_0, Y)$ and estimate $C_r$ by

  $\hat{C}_r = \frac{1}{G} \sum_{i=1}^G \frac{\pi_0(\psi_r^{(i)})}{\pi_1(\omega_0, \psi_r^{(i)})}.$

Similarly, we can estimate $C = E^{\pi_1(\psi|\omega_0, Y)}[\pi_0(\psi)/\pi_1(\omega_0, \psi)]$. Since $\pi_1(\omega_0, \psi) = \pi_1(\psi|\omega_0)\pi_1(\omega_0)$, the constant $\pi_1(\omega_0)$ cancels in the ratio $C/C_r$, so these expectands may replace those of (2.3). Therefore, we obtain the estimate of $B_r(Y)$,

  $\hat{B}_r(Y) = \frac{\hat{\pi}_1(\omega_0|Y)}{\hat{\pi}_{1,r}(\omega_0|Y)} \times \frac{\hat{C}}{\hat{C}_r}.$
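As a runnable illustration of this estimator (our sketch, not the authors' code), consider the model $x_i \sim N(\omega, \sigma^2)$ with priors $\pi_1(\omega|\sigma) = N(\omega_0, \sigma^2)$ and $\pi_1(\sigma) = \pi_0(\sigma) \propto \sigma^{-1}$, the uncensored analogue of Section 3.2 below. Then $\pi_0(\sigma)/\pi_1(\omega_0, \sigma) = \sqrt{2\pi}\,\sigma$, the conditional $\pi_{1,r}(\omega|\sigma^2, Y)$ is normal, so the Rao-Blackwellized estimator applies, and $\pi_{1,r}(\sigma^2|\omega_0, Y)$ is a scaled inverse chi-square, so the draws for $\hat{C}_r$ are direct. The data, seed, and chain lengths are assumptions.

```python
# Sketch of B_r-hat = [pi1(omega0|Y)/pi1r(omega0|Y)] * C-hat/Cr-hat for
# x_i ~ N(omega, sigma^2), pi1(omega|sigma) = N(omega0, sigma^2),
# pi1(sigma) = pi0(sigma) proportional to 1/sigma.
# frac = 1 targets the posterior pi_1; frac = r targets pi_{1,r}.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.2, 1.0, size=17)                 # assumed toy data
n, xbar, s2 = len(x), x.mean(), x.var(ddof=1)
omega0, r, G, burn = 0.0, 1.0 / 17, 20000, 1000

def estimates(frac):
    """Rao-Blackwellized pi_{1,frac}(omega0|Y) and C-hat via Gibbs."""
    sig2 = s2                                      # initial value
    dens, sig = [], []
    # With omega fixed at omega0, pi_{1,frac}(sigma^2|omega0, Y) is a
    # scaled inverse chi-square, so the draws for C-hat are direct:
    S2_0 = frac * ((n - 1) * s2 + n * (xbar - omega0) ** 2)
    for g in range(G + burn):
        m = (frac * n * xbar + omega0) / (frac * n + 1)
        v = sig2 / (frac * n + 1)
        omega = rng.normal(m, np.sqrt(v))          # omega | sigma^2, Y
        S2 = frac * ((n - 1) * s2 + n * (xbar - omega) ** 2) + (omega - omega0) ** 2
        sig2 = S2 / rng.chisquare(frac * n + 1)    # sigma^2 | omega, Y
        if g >= burn:
            # conditional density of omega at omega0 (Rao-Blackwell term)
            dens.append(np.exp(-0.5 * (omega0 - m) ** 2 / v) / np.sqrt(2 * np.pi * v))
            sig.append(np.sqrt(S2_0 / rng.chisquare(frac * n + 1)))
    return np.mean(dens), np.sqrt(2 * np.pi) * np.mean(sig)

p1, C = estimates(1.0)        # pi1-hat(omega0|Y) and C-hat
p1r, Cr = estimates(r)        # pi1r-hat(omega0|Y) and Cr-hat
print("estimated FBF:", (p1 / p1r) * (C / Cr))
```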

3. Some Applications

In this section, we consider two examples that use the forms of the FBF in (2.3) and (2.4).

3.1. Random effects model

Consider a one-way balanced random effects model:

  $y_{ij} = \mu + e_i + \epsilon_{ij}, \quad i = 1, \ldots, I, \; j = 1, \ldots, J,$   (3.1)

where $\mu$ is the mean of $y_{ij}$, and $e_i$ and $\epsilon_{ij}$ are independent normal variables with zero means and variances $\sigma_e^2$ and $\sigma^2$, respectively. Let $\eta = J\sigma_e^2/\sigma^2$, the variance ratio, and for notational convenience write $\theta = (\mu, \eta, \sigma^2)$ and $Y = (y_{ij})_{I \times J}$. We are interested in the ratio of variances $\eta$ of model (3.1); this parameter of the random effects model has long been of interest in various fields. We want to test the hypothesis $H_0: \eta = \eta_0$ versus $H_1: \eta \neq \eta_0$ with the proposed method. For model (3.1), the likelihood function of $\theta = (\mu, \eta, \sigma^2)$ is given by

  $L(\mu, \eta, \sigma^2) \propto \sigma^{-IJ} (1+\eta)^{-I/2} \exp\!\left\{ -\frac{1}{2\sigma^2} \left( \frac{S_1^2 + IJ(\bar{y}_{\cdot\cdot} - \mu)^2}{1+\eta} + S_2^2 \right) \right\},$

where $\bar{y}_{i\cdot} = \sum_j y_{ij}/J$, $\bar{y}_{\cdot\cdot} = \sum_i \sum_j y_{ij}/IJ$, $S_1^2 = J \sum_i (\bar{y}_{i\cdot} - \bar{y}_{\cdot\cdot})^2$, and $S_2^2 = \sum_i \sum_j (y_{ij} - \bar{y}_{i\cdot})^2$.

Theorem 3.1. In the random effects model (3.1), assume Jeffreys' priors under both hypotheses. Then the FBF in favor of $H_0$ is

  $B_r(Y) = \frac{\Psi_{p_r, q_r}\!\left(\frac{W}{1+W}\right)}{\Psi_{p, q}\!\left(\frac{W}{1+W}\right)} \, W^{p - p_r} (1+\eta_0)^{q - q_r} (1+\eta_0+W)^{(p_r+q_r)-(p+q)},$   (3.2)

where $p = I/2$, $q = I(J-1)/2$, $p_r = rI/2$, $q_r = rI(J-1)/2$, $W = S_1^2/S_2^2$, and $\Psi_{i,j}(x) = \int_0^x t^{i-1}(1-t)^{j-1}\, dt$ is the incomplete beta function of $(i, j)$ evaluated at $x$.

Proof. Recall that Jeffreys' priors for $H_0$ and $H_1$ are $\pi_0(\mu, \sigma^2) \propto \sigma^{-2}$ and $\pi_1(\mu, \eta, \sigma^2) \propto \sigma^{-3}(1+\eta)^{-3/2}$, respectively. This situation is a special case of our computing method in which the FBF in (2.4) applies, so we need only the marginal posterior density of $\eta$. Since the joint posterior density under $H_1$ is

  $\pi_{1,r}(\mu, \eta, \sigma^2 | Y) \propto \sigma^{-(rIJ+3)} (1+\eta)^{-(rI+3)/2} \exp\!\left\{ -\frac{r}{2\sigma^2} \left( \frac{S_1^2 + IJ(\bar{y}_{\cdot\cdot} - \mu)^2}{1+\eta} + S_2^2 \right) \right\},$

the marginal posterior density of $\eta$ is

  $\pi_{1,r}(\eta|Y) = \int\!\!\int \pi_{1,r}(\mu, \eta, \sigma^2 | Y)\, d\mu\, d\sigma^2$
  $\phantom{\pi_{1,r}(\eta|Y)} \propto (1+\eta)^{-(rI+2)/2} \int \sigma^{-(rIJ+2)} \exp\!\left\{ -\frac{r}{2\sigma^2} \left( \frac{S_1^2}{1+\eta} + S_2^2 \right) \right\} d\sigma^2$
  $\phantom{\pi_{1,r}(\eta|Y)} \propto (1+\eta)^{-(rI+2)/2} \left( 1 + \frac{W}{1+\eta} \right)^{-rIJ/2}.$

To find its normalizing constant, we have to evaluate the integral

  $\int_0^\infty (1+\eta)^{-(rI+2)/2} \left( 1 + \frac{W}{1+\eta} \right)^{-rIJ/2} d\eta.$   (3.3)

To do this, let $Z = W/(W+1+\eta)$. Then the integral (3.3) becomes

  $W^{-rI/2} \int_0^{W/(W+1)} Z^{rI/2 - 1} (1-Z)^{rI(J-1)/2 - 1}\, dZ = W^{-p_r}\, \Psi_{p_r, q_r}\!\left(\frac{W}{W+1}\right).$

Hence,

  $\pi_{1,r}(\eta|Y) = \frac{W^{p_r}}{\Psi_{p_r, q_r}\!\left(\frac{W}{W+1}\right)} (1+\eta)^{q_r - 1} (1+\eta+W)^{-(p_r+q_r)},$   (3.4)

and $\pi_1(\eta|Y)$ is the special case of $\pi_{1,r}(\eta|Y)$ with $r = 1$. Since $B_r(Y) = \pi_1(\eta_0|Y)/\pi_{1,r}(\eta_0|Y)$ by (2.4), the proof is complete.

Now we apply this approach to real data, Box and Tiao's (1973) Dyestuff data, set out in Table 3.1. The object of the experiment was to learn to what extent batch-to-batch variation in a certain raw material was responsible for variation in the final product yield. Five samples were taken from each of six randomly chosen batches of raw material. From the data, $S_1^2 = 56{,}357.5$, $S_2^2 = 58{,}830.0$, and $W = 0.95797$. So $MS_1 = S_1^2/\nu_1 = 11{,}271.50$ and $MS_2 = S_2^2/\nu_2 = 2{,}451.25$, where $\nu_1 = 5$ and $\nu_2 = 24$ are the between- and within-batch degrees of freedom, respectively. Hence $\hat{\sigma}^2 = MS_2 = 2{,}451.25$ and $\hat{\sigma}_e^2 = (MS_1 - MS_2)/J = 1{,}764.05$, so $\hat{\eta} = J\hat{\sigma}_e^2/\hat{\sigma}^2 = 3.6$, which is the maximum likelihood estimator and is near the posterior mode. The fraction $r = n^{-1/2} = 1/\sqrt{30}$ is used, with $n = IJ = 30$. The value of the FBF for testing the hypothesis $H_0: \eta = 3.6$ is 2.2805 (a numerical sketch of this computation follows Table 3.1). Therefore, we can say that this non-Bayesian estimator of $\eta$ is judged to be a good estimator from the Bayesian viewpoint.

Table 3.1. Dyestuff Data

  Batch   Observations
  1       1545  1440  1440  1520  1580
  2       1540  1555  1490  1560  1495
  3       1595  1550  1605  1510  1560
  4       1445  1440  1595  1465  1545
  5       1595  1630  1515  1635  1625
  6       1520  1455  1450  1480  1445
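The closed form (3.2) is easy to evaluate numerically. The sketch below is ours; it assumes SciPy, whose betainc is the regularized incomplete beta, so $\Psi_{i,j}(x)$ is recovered by multiplying by the complete beta function. It should reproduce the reported value of about 2.28 for the Dyestuff data.

```python
# Evaluate the closed-form FBF of Theorem 3.1 for the Dyestuff data
# (a sketch under the reconstruction of (3.2) given above).
import numpy as np
from scipy.special import betainc, beta

I, J = 6, 5                      # 6 batches, 5 observations each
n = I * J
S1, S2 = 56357.5, 58830.0        # between- and within-batch sums of squares
W = S1 / S2                      # = 0.95797
eta0 = 3.6                       # null value: the ML estimate of eta
r = 1.0 / np.sqrt(n)             # fraction r = n^(-1/2)

def psi(i, j, x):
    """Incomplete beta function: integral_0^x t^(i-1) (1-t)^(j-1) dt."""
    return betainc(i, j, x) * beta(i, j)

p, q = I / 2, I * (J - 1) / 2
pr, qr = r * I / 2, r * I * (J - 1) / 2
x = W / (1 + W)

Br = (psi(pr, qr, x) / psi(p, q, x)
      * W ** (p - pr)
      * (1 + eta0) ** (q - qr)
      * (1 + eta0 + W) ** ((pr + qr) - (p + q)))
print("FBF for H0: eta = 3.6 ->", Br)    # approximately 2.28, as reported
```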

3.2. The truncated normal data involving censored data

Assume that $x_1, \ldots, x_n$ is a random sample from a normal distribution with mean $\theta$ and variance $\sigma^2$, and that each $x_j$ lies in a set $A_j$, for $j = 1, \ldots, n$. Let the full data index set $I = \{1, \ldots, n\}$ be divided into the uncensored data index set $I_u = \{i_1, \ldots, i_{n_u}\}$ and the censored data index set $I_c = \{j_1, \ldots, j_{n_c}\}$, so that $n = n_u + n_c$. That is, $x_{i_1} \in A_{i_1}, \ldots, x_{i_{n_u}} \in A_{i_{n_u}}$ are actually observed for $i_1, \ldots, i_{n_u} \in I_u$, while $x_{j_1} \in A_{j_1}, \ldots, x_{j_{n_c}} \in A_{j_{n_c}}$ are censored for $j_1, \ldots, j_{n_c} \in I_c$. We want to test the hypothesis $H_0: \theta = \theta_0$, where $\theta_0$ is a known value. In this situation, following Jeffreys (1961), we take $\pi_0(\sigma) \propto \sigma^{-1}$ under $H_0$, and $\pi_1(\theta, \sigma) = \pi_1(\theta|\sigma)\pi_1(\sigma)$ with $\pi_1(\theta|\sigma) = N(\theta_0, \sigma^2)$ and $\pi_1(\sigma) = \pi_0(\sigma)$ under $H_1$. The true values of the censored data $x_{j_1}, \ldots, x_{j_{n_c}}$ are treated as parameters, so the marginal posterior density $\pi_1(\theta|x_1, \ldots, x_n)$ can be estimated by the method of Gelfand and Smith (1990). To overcome the unknown-constant problem arising from the improper priors, we use the FBF of Section 2; in particular, the form of the FBF in (2.3) is needed because $\pi_1(\sigma|\theta_0) \neq \pi_0(\sigma)$.

First, for the case $f(x_1, \ldots, x_n|\theta, \sigma)$, we draw

  $\theta \,|\, x_1, \ldots, x_n, \sigma \sim N\!\left( \frac{n\bar{x} + \theta_0}{n+1}, \frac{\sigma^2}{n+1} \right),$

  $\sigma^2 \,|\, x_1, \ldots, x_n, \theta \sim S^2/\chi^2_{n+1},$

and

  $x_j \,|\, \theta, \sigma \sim N_{A_j}(\theta, \sigma^2), \quad j \in I_c,$

where $S^2 = (n-1)s^2 + n(\bar{x} - \theta)^2 + (\theta - \theta_0)^2$, $s^2 = \sum_j (x_j - \bar{x})^2/(n-1)$, $\bar{x} = \sum_j x_j/n$, $N_{B_j}(\cdot, \cdot)$ denotes the normal distribution truncated to the set $B_j$, and $I_c$ is defined as before. Next, for the case $f^r(x_1, \ldots, x_n|\theta, \sigma)$, we draw

  $\theta \,|\, x_1, \ldots, x_n, \sigma \sim N\!\left( \frac{rn\bar{x} + \theta_0}{rn+1}, \frac{\sigma^2}{rn+1} \right),$

  $\sigma^2 \,|\, x_1, \ldots, x_n, \theta \sim S_r^2/\chi^2_{rn+1},$

and

  $x_j \,|\, \theta, \sigma \sim N_{A_j}(\theta, \sigma^2/r), \quad j \in I_c,$

where $S_r^2 = r(n-1)s^2 + rn(\bar{x} - \theta)^2 + (\theta - \theta_0)^2$.

After these Gibbs sampling processes are performed $N$ times iteratively, we have two samples, $\{\theta^{(g)}, \sigma^{(g)}, x_{j_1}^{(g)}, \ldots, x_{j_{n_c}}^{(g)};\ g = 1, \ldots, N\}$ and $\{\theta_r^{(g)}, \sigma_r^{(g)}, x_{j_1}^{r(g)}, \ldots, x_{j_{n_c}}^{r(g)};\ g = 1, \ldots, N\}$. Let $\bar{x}^{(g)}$ and $\bar{x}_r^{(g)}$ be the means of the full data including the $g$-th generated values of the censored data, $\{x_j^{(g)}; j \in I_c\}$ and $\{x_j^{r(g)}; j \in I_c\}$, respectively. From these samples, we estimate

  $\hat{\pi}_1(\theta_0 | x_1 \in A_1, \ldots, x_n \in A_n) = \frac{1}{N} \sum_{g=1}^N N\!\left( \frac{n\bar{x}^{(g)} + \theta_0}{n+1}, \frac{(\sigma^{(g)})^2}{n+1} \right)\!(\theta_0)$

and

  $\hat{\pi}_{1,r}(\theta_0 | x_1 \in A_1, \ldots, x_n \in A_n) = \frac{1}{N} \sum_{g=1}^N N\!\left( \frac{rn\bar{x}_r^{(g)} + \theta_0}{rn+1}, \frac{(\sigma_r^{(g)})^2}{rn+1} \right)\!(\theta_0),$

where the notation $N(a, b)(c)$ means the normal density with mean $a$ and variance $b$ evaluated at $c$.

Now, to compute both expectation terms of (2.3), we need two samples $\{\tilde{\sigma}_1, \ldots, \tilde{\sigma}_N\}$ and $\{\tilde{\sigma}_{r,1}, \ldots, \tilde{\sigma}_{r,N}\}$ from $\pi_1(\sigma | \theta_0, x_1 \in A_1, \ldots, x_n \in A_n)$ and $\pi_{1,r}(\sigma | \theta_0, x_1 \in A_1, \ldots, x_n \in A_n)$, respectively; we can get them by proceeding as before with $\theta$ fixed at $\theta_0$. Since $\pi_0(\sigma)/\pi_1(\theta_0, \sigma) = 1/\pi_1(\theta_0|\sigma) = \sqrt{2\pi}\,\sigma$, we have $C = \sqrt{2\pi}\, E(\sigma | \theta_0, X_1 \in A_1, \ldots, X_n \in A_n)$ and $C_r = \sqrt{2\pi}\, E_r(\sigma | \theta_0, X_1 \in A_1, \ldots, X_n \in A_n)$, where $E(\cdot)$ and $E_r(\cdot)$ are expectations with respect to $\pi_1(\sigma|\theta_0, x_1, \ldots, x_n)$ and $\pi_{1,r}(\sigma|\theta_0, x_1, \ldots, x_n)$, respectively. Hence they are estimated by

  $\hat{C} = \sqrt{2\pi} \sum_{g=1}^N \tilde{\sigma}_g / N \quad \text{and} \quad \hat{C}_r = \sqrt{2\pi} \sum_{g=1}^N \tilde{\sigma}_{r,g} / N.$

Therefore, we estimate $B_r(x_1, \ldots, x_n)$ by

  $\hat{B}_r(x_1, \ldots, x_n) = \frac{\hat{\pi}_1(\theta_0 | x_1 \in A_1, \ldots, x_n \in A_n)}{\hat{\pi}_{1,r}(\theta_0 | x_1 \in A_1, \ldots, x_n \in A_n)} \times \frac{\hat{C}}{\hat{C}_r}.$   (3.5)

We work numerically with data from an experiment described by Sampford and Taylor (1959) involving 17 pairs of rats that were litter-mates. One of each pair was injected with vitamin B12; the other was a control. The differences between the log survival times in minutes are

  $< -.25,\; -.18,\; -.83,\; -.57,\; -.49,\; -.12,\; -.11,\; -.05,\; -.04,\; -.03,\; -.11,\; .14,\; .30,\; .33,\; .43,\; .45,\; \text{and} > .30,$

where the first and the last observations are censored. We are interested in testing whether $\theta = \theta_0 = 0$. Since Berger and Pericchi's (1996) minimal training sample size is 1, we choose $r = 1/17$. To check the convergence of each chain, we use the method proposed by Gelman and Rubin (1992). The value of the FBF in (3.5) in favor of $H_0: \theta = \theta_0 = 0$ is computed as 1.4945. Therefore, we can conclude that the injection of vitamin B12 is not effective, since the mean of the differences is judged to be zero. A sketch of the corresponding Gibbs steps follows.
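The distinctive ingredient here is the imputation of the censored observations from truncated normals inside each Gibbs sweep. The sketch below is our illustration, not the authors' code: the inverse-CDF truncated normal draw, the seed, the starting values, and the chain driver are assumptions, and the data are transcribed from the paragraph above. One sweep targets $\pi_{1,r}(\theta, \sigma^2, x_{I_c} \,|\, \text{data})$; setting frac = 1 targets $\pi_1$ instead, and averaging the quantities described above over sweeps yields (3.5).

```python
# One Gibbs sweep for the censored-data model of Section 3.2 (a sketch).
# frac = 1 targets pi_1; frac = r targets pi_{1,r}.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Rat data: differences of log survival times; first and last are censored,
# with censoring sets A_1 = (-inf, -.25) and A_17 = (.30, inf).
x_obs = np.array([-.18, -.83, -.57, -.49, -.12, -.11, -.05, -.04,
                  -.03, -.11, .14, .30, .33, .43, .45])
bounds = [(-np.inf, -.25), (.30, np.inf)]
theta0, n, r = 0.0, 17, 1.0 / 17

def trunc_normal(lo, hi, mean, sd):
    """Draw from N(mean, sd^2) truncated to (lo, hi) by CDF inversion."""
    a, b = norm.cdf((lo - mean) / sd), norm.cdf((hi - mean) / sd)
    return mean + sd * norm.ppf(a + (b - a) * rng.uniform())

def gibbs_sweep(theta, sig2, frac):
    """One sweep over (x_censored, theta, sigma^2)."""
    # Impute censored values: x_j | theta, sigma ~ N_{A_j}(theta, sigma^2/frac).
    x_cens = [trunc_normal(lo, hi, theta, np.sqrt(sig2 / frac)) for lo, hi in bounds]
    x = np.concatenate([x_obs, x_cens])
    xbar, s2 = x.mean(), x.var(ddof=1)
    # theta | x, sigma ~ N((frac*n*xbar + theta0)/(frac*n + 1), sigma^2/(frac*n + 1)).
    theta = rng.normal((frac * n * xbar + theta0) / (frac * n + 1),
                       np.sqrt(sig2 / (frac * n + 1)))
    # sigma^2 | x, theta ~ S_frac^2 / chi^2_{frac*n + 1}.
    S2 = frac * (n - 1) * s2 + frac * n * (xbar - theta) ** 2 + (theta - theta0) ** 2
    sig2 = S2 / rng.chisquare(frac * n + 1)
    return theta, sig2, xbar

theta, sig2 = 0.0, 0.1          # crude starting values
for _ in range(5000):
    theta, sig2, xbar = gibbs_sweep(theta, sig2, frac=r)
```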

REFERENCES

(1) Aitkin, M. (1991), Posterior Bayes Factors (with discussion), Journal of the Royal Statistical Society, Ser. B, 53, 111-142.
(2) Berger, J. O. and Pericchi, L. R. (1996), The Intrinsic Bayes Factor for Model Selection and Prediction, Journal of the American Statistical Association, 91, 109-122.
(3) Box, G. E. P. and Tiao, G. C. (1973), Bayesian Inference in Statistical Analysis, Addison-Wesley, Reading, MA.
(4) Chen, M.-H. (1994), Importance-Weighted Marginal Bayesian Posterior Density Estimation, Journal of the American Statistical Association, 89, 818-824.
(5) Dickey, J. (1971), The Weighted Likelihood Ratio, Linear Hypotheses on Normal Location Parameters, The Annals of Mathematical Statistics, 42, 204-223.
(6) Gelfand, A. E. and Smith, A. F. M. (1990), Sampling-Based Approaches to Calculating Marginal Densities, Journal of the American Statistical Association, 85, 398-409.
(7) Gelman, A. and Rubin, D. B. (1992), Inference from Iterative Simulation Using Multiple Sequences (with discussion), Statistical Science, 7, 457-511.
(8) Jeffreys, H. (1961), Theory of Probability, Oxford University Press, London.
(9) O'Hagan, A. (1995), Fractional Bayes Factors for Model Comparison (with discussion), Journal of the Royal Statistical Society, Ser. B, 57, 99-138.
(10) Sampford, M. R. and Taylor, J. (1959), Censored Observations in Randomized Block Experiments, Journal of the Royal Statistical Society, Ser. B, 21, 214-237.
(11) Smith, A. F. M. and Spiegelhalter, D. J. (1980), Bayes Factors and Choice Criteria for Linear Models, Journal of the Royal Statistical Society, Ser. B, 42, 213-220.
(12) Spiegelhalter, D. J. and Smith, A. F. M. (1982), Bayes Factors for Linear and Log-Linear Models with Vague Prior Information, Journal of the Royal Statistical Society, Ser. B, 44, 377-387.
(13) Verdinelli, I. and Wasserman, L. (1995), Computing Bayes Factors Using a Generalization of the Savage-Dickey Density Ratio, Journal of the American Statistical Association, 90, 614-618.
