Bayesian Tests of Mean-Variance Efficiency Matthew Pollard 10th November 2007
Abstract This paper presents two Bayesian tests for the mean-variance efficiency of a portfolio. The first uses a generalization of confidence intervals to $\mathbb{R}^N$; the second uses Bayes factors. The tests are applied to the mean-variance efficiency of the market capitalization-weighted US stock market.
1
Introduction
Mean-variance efficiency is an important concept in modern finance. Both Markowitz's (1956) portfolio selection model and the Sharpe-Lintner (1965) capital asset pricing model (CAPM) use mean-variance efficient portfolios. In portfolio selection, mean-variance efficient portfolios are held by rational investors; in the CAPM, all assets are priced in relation to the market portfolio, which is assumed to be mean-variance efficient. Testing portfolio efficiency is therefore important in verifying these models: do investors hold mean-variance efficient portfolios, and is the market mean-variance efficient?

There has been considerable research focus on deriving efficiency tests. Roll (1977) proved that portfolio mean-variance efficiency is equivalent to excess returns on all assets being linearly related to the excess portfolio returns with zero intercept. The testable consequence under this equivalent representation is thus whether the estimated intercept for each asset is zero. The influential paper of Gibbons, Ross and Shanken (1989) used ordinary least-squares regression theory to construct a generalized t-test of zero intercepts. Importantly, they assume that returns are drawn i.i.d. from the multivariate normal distribution. There has been considerable discussion regarding the power of their test when returns do not conform to this distributional assumption. On daily and weekly timeframes, asset returns are non-normal and not identically distributed (Cont, 2001), and display volatility clustering and excess kurtosis. The generalized method of moments (GMM; Hansen, 1982) and Bayesian inference are two alternative methods that allow weakened distributional assumptions. GMM does not make explicit distributional assumptions for returns and tests zero intercepts via a goodness-of-fit statistic for moment conditions.
Bayesian methods require a parametric specification of the return distribution, but yield exact in-sample inference and support several powerful tests based on the joint posterior distribution of the asset intercepts. We construct two Bayesian tests of mean-variance efficiency using the joint distribution of estimated intercepts. The first test uses the multivariate posterior distribution of the intercept parameters and tests the zero-intercept hypothesis by constructing Bayesian credibility sets. The second test uses Bayes factors to compare the posterior likelihood of the null linear regression with zero intercept to the alternative with a free intercept. The two tests are applied to the value-weighted CRSP index using 10 size-decile portfolios as proxies for stock returns. Results depend on the distributional assumption: efficiency is rejected under the multivariate-normal model, while the evidence against it is marginal under the multivariate-t model.

The paper is organized as follows. Section 2 defines mean-variance efficiency, discusses Roll's spanning result and presents the test for mean-variance efficiency. Section 3 discusses three statistical methods for testing the null: OLS, GMM and Bayesian inference. Section 4 presents the two Bayesian tests. Section 5 specifies the two models for returns and their prior distributions. Section 6 presents the results.
2
Mean-Variance Efficiency
Let $x_p$ denote the weights of a portfolio over $N$ assets. Let $\mu$ and $\Sigma$ respectively denote the vector of mean returns and the $N \times N$ variance-covariance matrix. Given a set level of expected return, $E = x_p'\mu$, $x_p$ is a mean-variance efficient portfolio if it solves

$$x_p = \arg\min_x \; x'\Sigma x \quad \text{subject to} \quad x'\mu = E. \tag{1}$$
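Since (1) has a single linear constraint, the Lagrangian yields a closed-form solution, $x_p = E\,\Sigma^{-1}\mu / (\mu'\Sigma^{-1}\mu)$. A minimal numerical sketch, with hypothetical means and covariances for three assets:

```python
import numpy as np

# Hypothetical inputs: 3 assets with assumed mean returns and covariances.
mu = np.array([0.08, 0.12, 0.10])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])
E = 0.10  # target expected return

# Closed-form solution of (1): minimise x' Sigma x subject to x' mu = E.
# From the Lagrangian, x* = E * Sigma^{-1} mu / (mu' Sigma^{-1} mu).
inv_mu = np.linalg.solve(Sigma, mu)
x_star = E * inv_mu / (mu @ inv_mu)

print(x_star @ mu)              # hits the target return E
print(x_star @ Sigma @ x_star)  # the minimised variance
```

Any other portfolio satisfying the constraint has weakly higher variance.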
This condition is equivalent to the following spanning result due to Roll (1977): $x_p$ is efficient iff for all assets $i$,

$$E(r_i - r_{zp}) = \beta_i\,(E r_p - E r_{zp}), \quad \text{where } \beta_i = \frac{\mathrm{Cov}(r_i, r_p)}{\mathrm{Var}(r_p)}, \tag{2}$$

and $r_{zp}$ is the return on the (unique) portfolio satisfying $\mathrm{Cov}(r_p, r_{zp}) = 0$. If a risk-free asset does not exist in the set of assets, (2) is the statement of the Black CAPM with an arbitrary benchmark portfolio. If a risk-free rate $r_f$ exists, $r_{zp} = r_f$ and (2) is the Sharpe-Lintner CAPM. Testing (1) directly is difficult since the distribution of the efficient frontier curve is hard to derive; see Kan and Smith (2006). Testing (2) is straightforward since it specifies a linear regression with no intercept. An immediate testable implication is whether the intercept is zero. If we run the vector regression

$$r_t - r_{zp,t} = \alpha + \beta(r_{p,t} - r_{zp,t}) + \varepsilon_t, \quad \text{where } r_t = (r_{1,t}, r_{2,t}, \ldots, r_{N,t})^T,$$

then testing the null hypothesis

$$H_0: \alpha_1 = \cdots = \alpha_N = 0 \tag{3}$$

is a test of mean-variance efficiency. The general alternative hypothesis is that one or more $\alpha_i \neq 0$.
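As a sketch of this testable implication, the vector regression can be estimated asset by asset on simulated data (all return series and parameter values below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 500, 3
rp = rng.normal(0.005, 0.04, T)           # simulated excess portfolio returns
beta = np.array([0.8, 1.0, 1.2])          # assumed betas
r = beta * rp[:, None] + rng.normal(0, 0.02, (T, N))  # true alphas are zero

# Regress each asset's excess return on the portfolio's excess return.
X = np.column_stack([np.ones(T), rp])     # intercept + market regressors
coef, *_ = np.linalg.lstsq(X, r, rcond=None)
alpha_hat = coef[0]                        # estimated intercepts, one per asset
print(alpha_hat)                           # should be close to zero under H0
```

Under $H_0$ the estimated intercepts cluster around zero; the tests below formalise the joint hypothesis.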
3
Tests of H0
There are several statistical methods for testing $H_0$: ordinary least-squares regression theory, the generalized method of moments (GMM) and Bayesian inference. A short overview of each is provided.
3.1
Ordinary Least Squares
Several standard regression tests exist for $H_0$ if returns satisfy the usual ordinary least-squares (OLS) assumptions: normality, serial independence and homoscedasticity. Under these assumptions, $t$- and $F$-tests for significance may be used. The $t$-test for an individual $\alpha_i$ is performed by calculating the statistic

$$t_i = \frac{\hat\alpha_i}{\sqrt{\dfrac{\hat\varepsilon'\hat\varepsilon}{T-2}\,\left[(X'X)^{-1}\right]_{11}}}$$

and comparing each $t_i$ to the $1-x$ quantile of the Student-t distribution with $T-2$ degrees of freedom. Each $t$-test is carried out separately, so together they do not constitute a joint test of the null. Bonferroni adjustments are one way to correct this. Consider the hypothesis

$$H_0^* : \bigcap_{i=1}^{N} \{\alpha_i = 0\}.$$

Under this construction, $H_0^*$ is rejected if one (or more) of the individual $t$-tests rejects $\alpha_i = 0$. Clearly, the probability of rejecting $H_0^*$ is higher than that of rejecting an individual null $\{\alpha_i = 0\}$. Let the probability of incorrectly rejecting the hypothesis $\{\alpha_i = 0\}$ equal $x$, that is, $p(\text{reject } \{\alpha_i = 0\} \mid H_0) = x$. If the test statistics are independent, then the probability of incorrectly rejecting $H_0^*$ is

$$p(\text{reject } H_0^* \mid H_0) = 1 - (1-x)^N > x.$$

Bonferroni adjustment is the process of scaling down the significance level $x$ to $x^*$ so that $p(\text{reject } H_0^* \mid H_0) = x$; solving for $x^*$ yields $x^* = 1 - (1-x)^{1/N}$. Testing $H_0^*$ at significance $x^*$ then has the same type-1 error as a single test at level $x$. $H_0^*$ tests whether each null is simultaneously true, and is consistent with $H_0$ under the OLS assumptions and independence. While this procedure seems reasonable, Bonferroni adjustments are associated with several inferential problems; see Perneger (1998).

A joint test of $H_0$ can be constructed using the OLS general linear hypothesis test. Let $R = [\underbrace{1, \ldots, 1}_{N}, 0, \ldots, 0]$, let $X$ be the design matrix of the stacked regression, $\hat\beta = [\hat\alpha_1, \ldots, \hat\alpha_N, \hat\beta_1, \ldots, \hat\beta_N]^T$ and $r = [0, \ldots, 0]$. The test statistic

$$f = \frac{(R\hat\beta - r)^T \left[R(X^TX)^{-1}R^T\right]^{-1} (R\hat\beta - r)\,/\,(2N-1)}{\hat\varepsilon^T\hat\varepsilon\,/\,(T-2N)}$$

has an $F$ distribution with $(2N-1,\, T-2N)$ degrees of freedom. $H_0$ is rejected if $f > F(x, 2N-1, T-2N)$. This test is similar in construction to the Gibbons-Ross-Shanken (1989) $T^2$ test. If the normality, independence and homoscedasticity assumptions are not met, OLS tests perform poorly (incorrect coverage probability and low power against alternatives). It is well documented that daily and monthly returns data are highly non-normal and heteroscedastic (for instance, Cont, 2001; Harris & Küçüközmen, 2001). Two statistical methods that are robust to these properties are the generalized method of moments and Bayesian inference. A short summary of both methods follows.
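The adjusted significance level $x^* = 1-(1-x)^{1/N}$ is a one-line computation (strictly, this is the Šidák form; the simpler Bonferroni rule $x/N$ is a close approximation):

```python
N = 10          # number of assets, i.e. individual intercept tests
x = 0.05        # desired familywise type-1 error

# Exact adjustment under independence: x* = 1 - (1 - x)^(1/N).
x_star = 1 - (1 - x) ** (1 / N)
print(x_star)   # per-test significance level, smaller than x

# Running N independent tests at level x* restores familywise error x.
familywise = 1 - (1 - x_star) ** N
print(familywise)
```

For $N = 10$ and $x = 0.05$, each individual $t$-test must be run at roughly the 0.5% level.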
3.2
General Method of Moments
GMM tests whether the moment restrictions specified by a model hold. Generally, a model and the set of distributions it specifies, $(X, \Theta)$, can be characterized by the entire (infinite) set of moment restrictions

$$E[g_1(X, \Theta) - x_1] = 0, \quad E[g_2(X, \Theta) - x_2] = 0, \quad \ldots$$

where $X$ is the vector of random variables and $\Theta$ is the set of unknown parameters. GMM truncates this list of moment restrictions and estimates the parameters $\Theta$ to minimize the weighted squared error in the moment restrictions,

$$\hat\Theta = \arg\min_\Theta \left[\frac{1}{T}\sum_{t=1}^T g_k(\Theta, x_t)\right]' W \left[\frac{1}{T}\sum_{t=1}^T g_k(\Theta, x_t)\right],$$

where $W$ is a suitably chosen weighting matrix. If the number of estimated parameters, $M$, is less than the number of moment restrictions, $N$, the system is "over-identified" and a test statistic for the sample-moment goodness of fit can be constructed. The test statistic is the weighted vector of sample moment errors,

$$J = g(\hat\Theta)' W g(\hat\Theta),$$

and under the null $H_0: E[g_k(\Theta, X)] = 0 \;\; \forall k$, $T \times J$ has an asymptotic chi-squared distribution with $N - M$ degrees of freedom. A p-value test is used to test the null model: if $TJ$ exceeds the $1-x$ quantile $\chi^2_{N-M,1-x}$ of the chi-squared distribution, the model is rejected.

A GMM test of $H_0$ uses the moment restriction $E[r_i - r_{zp} - \beta_i(r_p - r_{zp})] = 0$. The unknown parameters are $E r_{zp}$, $E r_p$ and $\beta_i$ (for all assets $i$); the first two introduce two additional restrictions for estimation. The full set of sample moment restrictions is

$$g = \frac{1}{T}\sum_{t=1}^T
\begin{bmatrix}
r_{i,t} - r_{zp} - \beta_i(r_{p,t} - r_{zp}) \;\; \forall i \\[4pt]
\left(r_{p,t}^2 - [E r_p]^2\right)\beta_i - \left(r_{p,t}\, r_{i,t} - E r_p\, E r_i\right) \;\; \forall i \\[4pt]
r_{p,t} - E r_p
\end{bmatrix}.$$

The second condition estimates $\beta_i$ and is derived by rearranging $\beta_i = \mathrm{Cov}(r_i, r_p)/\mathrm{Var}(r_p)$; the third estimates $E r_p$. For $N$ assets there are $2N + 1$ moment restrictions and $N + 2$ unknown parameters. The system is over-identified for $N \geq 2$ and the test statistic has distribution $TJ = T g' W g \sim \chi^2_{N-1}$.
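A toy illustration of an over-identified GMM fit, unrelated to the return data: one parameter, two moment conditions, identity weighting (with a suitably chosen $W$, $TJ$ would be asymptotically $\chi^2_1$; the identity $W$ here is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 2000)   # simulated data satisfying the null N(mu, 1)

def gbar(mu):
    # Sample moment conditions: E[x - mu] = 0 and E[(x - mu)^2 - 1] = 0.
    return np.array([np.mean(x - mu), np.mean((x - mu) ** 2 - 1.0)])

# One parameter, two moments: over-identified, so minimise g' W g (W = I)
# over a grid of candidate values for mu.
grid = np.linspace(-0.5, 0.5, 10001)
objs = np.array([gbar(m) @ gbar(m) for m in grid])
mu_hat = grid[objs.argmin()]

J = len(x) * objs.min()   # goodness-of-fit statistic T * g'Wg
print(mu_hat, J)
```

Under the null, a small $J$ indicates the truncated moment restrictions are jointly consistent with the data.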
3.3
Bayesian Inference and MCMC
Bayesian inference is a formal system for turning prior beliefs and new evidence into posterior beliefs. Beliefs are specified as probability distributions over plausible but unknown states, $S$. Suppose data $Y$ are observed. Bayes' theorem provides the formula for posterior beliefs,

$$p(S|Y) = \frac{p(Y|S)\,p(S)}{\int p(Y|S)\,p(S)\,dS},$$

or simply $p(S|Y) \propto p(Y|S)\,p(S)$. The posterior belief is a weighted average of the prior belief $p(S)$, with weights equal to the likelihood of the data, $p(Y|S)$. An advantage of Bayesian methods is that in-sample inference is exact if the model is properly specified and the posterior distribution is known. Bayesians do not rely on asymptotic results (the central limit theorem, the law of large numbers) for inference; these results are used in classical statistics to approximate the distribution of a sample statistic. In Bayesian inference, the exact posterior distribution of the state is obtained and inference follows immediately from this distribution. The downside is that deriving closed-form, analytical expressions for posterior distributions is extremely difficult; most models yield intractable posterior distributions. For this technical reason, Bayesian methods were rarely used until the development of fast computers and Markov chain Monte Carlo (MCMC).

MCMC is a method of drawing samples from the posterior distribution by numerically simulating the path of a Markov chain. The chain is specially constructed to have an equilibrium distribution equal to the posterior distribution. If the conditional distribution of each unknown parameter in the model is known, a Gibbs sampler is used to draw samples; if not, the slower Metropolis-Hastings sampler is used. Until recently, MCMC was not computationally feasible and researchers had to program their own sampling algorithms from scratch, which demands considerable effort. The purpose-built MCMC software BUGS/WinBUGS (Spiegelhalter et al., 1986) solves this problem: sampling is treated as a black box, and the researcher need only specify the model as a sequence of conditional distributions together with prior distributions. WinBUGS is used to perform the two Bayesian tests.
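For models with conjugate structure the posterior is available in closed form, which is the exactness referred to above. A minimal sketch for a normal mean with known variance and a diffuse normal prior (all numbers hypothetical):

```python
import numpy as np

# Data y_i ~ N(theta, s2) with s2 known; prior theta ~ N(m0, v0).
# The posterior is again normal, so inference is exact in-sample.
rng = np.random.default_rng(2)
theta_true, s2 = 0.5, 1.0
y = rng.normal(theta_true, np.sqrt(s2), 100)

m0, v0 = 0.0, 10.0                    # diffuse prior
vn = 1.0 / (1.0 / v0 + len(y) / s2)   # posterior variance
mn = vn * (m0 / v0 + y.sum() / s2)    # posterior mean

print(mn, vn)  # posterior concentrates near the sample mean
```

With a diffuse prior the posterior mean is essentially the sample mean; MCMC is only needed when such closed forms are unavailable.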
4
Two Bayesian Tests
The first test is based on constructing $N$-dimensional confidence intervals, or "credibility sets" in Bayesian terminology, over the joint posterior distribution of the intercepts. If the vector $0 = (0, \ldots, 0)$ lies within this set, $H_0$ is accepted and the portfolio is mean-variance efficient. The second test uses Bayes factors, a tool from Bayesian model selection. Bayes factors are the ratio of posterior model likelihoods for two models. Here, the comparison is between the model with free intercept ($\alpha \in \mathbb{R}^N$) and the model with constrained intercept ($\alpha = 0$). If the likelihood for the free-intercept model is considerably higher, we reject $H_0$.
4.1
Credibility Set Test
Let $r$ denote the matrix of observed returns for each asset, and $p(\alpha|r)$ denote the posterior distribution of $\alpha = (\alpha_1, \ldots, \alpha_N)$. The test of $H_0$ at $(1-x)$% confidence is: accept $H_0$ if $(0, \ldots, 0) \in \mathrm{MCS}_x(\alpha)$, where the minimal credibility set $\mathrm{MCS}_x(\alpha)$ is defined as the $N$-dimensional region satisfying:

(1) for $\alpha^*$ drawn from $p(\alpha_1, \ldots, \alpha_N | r)$, $\Pr[\alpha^* \in \mathrm{MCS}_x(\alpha)] = 1 - x$;
(2) the volume of the set is minimal.

This set exists and is unique provided the posterior distribution $p(\alpha|r)$ is smooth. The credibility set is a generalization of confidence intervals, which exist in $\mathbb{R}^1$, to the space $\mathbb{R}^N$. The condition that the volume is minimal is equivalent to the boundary of the set occupying the highest probability density region, which is a desirable property for any confidence interval. With $N = 2$ the set is an oval; with $N = 3$ it is an ovoid. The posterior joint distribution $p(\alpha|Y)$ is the distribution of intercepts implied by data $Y$ and prior information $p(\alpha)$. By Bayes' rule, the posterior joint distribution of $\alpha$ is

$$p(\alpha|Y) = \frac{1}{C}\, p(Y|\alpha_1, \ldots, \alpha_N)\, p(\alpha_1, \ldots, \alpha_N), \qquad C = \int_{\alpha_1} \!\!\cdots\! \int_{\alpha_N} p(Y|\alpha)\, p(\alpha)\, d\alpha_1 \cdots d\alpha_N,$$

where $C$ is an integration constant ensuring that $p(\alpha|Y)$ integrates to 1.
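Given MCMC draws of $\alpha$, the membership check can be sketched with Hyndman's highest-density-region rule: a point lies in the $(1-x)$ region when its estimated density exceeds the $x$-quantile of the density evaluated at the draws. A two-dimensional illustration, with synthetic draws standing in for sampler output:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical posterior draws for (alpha_1, alpha_2); with real MCMC
# output these would come directly from the sampler.
rng = np.random.default_rng(3)
draws = rng.multivariate_normal([0.001, -0.001],
                                [[1e-6, 4e-7], [4e-7, 1e-6]], 8000).T

kde = gaussian_kde(draws)        # smoothed kernel density, as in Hyndman (1996)
dens_at_draws = kde(draws)

# The (1-x) highest-density region contains every point whose density
# exceeds the x-quantile of the density evaluated at the draws.
x = 0.05
threshold = np.quantile(dens_at_draws, x)
in_hdr = kde(np.zeros((2, 1)))[0] >= threshold
print(in_hdr)                    # accept H0 if 0 lies inside the region
```

The same rule extends to $\mathbb{R}^{10}$, although density estimation becomes harder as the dimension grows.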
4.2
Bayes Factor Test
The Bayes factor is a Bayesian model selection statistic used to compare the evidence for two competing models, $M_0$ and $M_A$. It is defined as the ratio of posterior likelihoods for data $Y$:

$$BF(M_0, M_A) = \frac{p(Y|M_A)}{p(Y|M_0)}.$$

Here, $M_0$ is the constrained model with $\alpha = 0$ and $M_A$ is the unconstrained model with $\alpha \in \mathbb{R}^N$. Bayesian likelihoods are not the same as classical ("frequentist") likelihoods. A classical likelihood is calculated by substituting estimates $(\hat\alpha, \hat\beta)$ of the parameters into the conditional formula $p(Y|\alpha, \beta)$; that is, it measures the likelihood at a single point, and the estimates $(\hat\alpha, \hat\beta)$ are commonly picked to maximise this likelihood. A Bayesian likelihood is the weighted average of point likelihoods corresponding to different parameter values, with weights equal to the posterior distribution of the parameters. Formally,

$$\text{Frequentist likelihood} = p(Y|\hat\alpha, \hat\beta), \qquad \text{Bayesian likelihood} = \int_\beta \int_\alpha p(Y|\alpha, \beta)\, p(\alpha, \beta|Y)\, d\alpha\, d\beta.$$

The Bayesian likelihood automatically penalizes models with uncertain parameters by averaging $p(Y|\alpha, \beta)$ over the whole posterior space $p(\alpha, \beta|Y)$. If $BF \geq 1$, this is evidence for $M_A$ over $M_0$, or equivalently for rejecting $H_0$ in favour of $H_A$. When $Y$ accords with $H_0$, we should expect a Bayes factor slightly less than one: $M_0$ nests within $M_A$, so there should be little difference between the fitted regressions under $H_0$, and using classical likelihoods the ratio would be close to one. However, since $M_A$ contains an additional parameter for each asset, the Bayesian likelihood for $M_A$ is lowered by the greater parameter uncertainty. In deciding whether to accept $M_A$, a threshold Bayes factor value is needed. In general there is no one-to-one correspondence between Bayes factors and classical p-values of the form "accept $M_A$ with 95% confidence if $BF > \text{threshold}(0.95)$"; rules of thumb have instead been adopted. Kass and Raftery (1995) suggest $BF > 10$ is "substantial" evidence for $M_A$, and $BF > 20$ is "decisive" evidence. A critical threshold of 20 is adopted in this paper.
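The Bayesian likelihood can be estimated by averaging point likelihoods over posterior draws. A toy sketch for a scalar normal mean, where $M_0$ fixes $\theta = 0$ and $M_A$ gives $\theta$ a standard normal prior (the conjugate posterior is used to draw $\theta$; all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.normal(0.0, 1.0, 50)       # data generated under M0 (theta = 0)

def loglik(theta):
    # log p(y | theta) for y_i ~ N(theta, 1)
    return -0.5 * np.sum((y - theta) ** 2) - 0.5 * len(y) * np.log(2 * np.pi)

# M0: theta fixed at 0, so the likelihood is a single point likelihood.
lik0 = np.exp(loglik(0.0))

# MA: theta free with prior N(0, 1). Average the point likelihood over
# posterior draws; the posterior here is conjugate normal.
n = len(y)
post_var = 1.0 / (1.0 + n)
post_mean = post_var * y.sum()
theta_draws = rng.normal(post_mean, np.sqrt(post_var), 8000)
likA = np.mean(np.exp([loglik(t) for t in theta_draws]))

BF = likA / lik0
print(BF)   # for data generated under M0, the factor is typically modest
```

The averaging over the posterior is what penalizes the extra parameter in $M_A$ when the data carry no evidence for it.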
5
Models
The regression model is specified by

$$r_t - r_{zp,t} = \alpha + \beta(r_{p,t} - r_{zp,t}) + \varepsilon_t,$$

where $r_t$ is the vector of $N$ asset returns at time $t$, and $\alpha, \beta$ are vectors with elements $\alpha_i, \beta_i$. We consider two models for the distribution of asset returns, i.i.d. multivariate normal and i.i.d. multivariate Student-t:

$$(1): r_t \sim \text{i.i.d. } MVN(\mu, \Sigma), \qquad (2): r_t \sim \text{i.i.d. } MVT(\mu, \Sigma, k),$$

where $\mu$ is the vector of means, $\Sigma$ is the variance-covariance matrix and $k$ is the vector of degrees of freedom for each Student-t. The multivariate normal model is used by Gibbons, Ross and Shanken (1989). The Student-t distribution allows excess kurtosis in returns to be modelled. More complicated specifications for the error structure, such as multivariate GARCH(1,1), are possible to run in WinBUGS; for simplicity, we focus on these two i.i.d. models.

Prior distributions must be specified for each parameter in the model. We use highly diffuse priors that carry little prior information. Conjugate distributions are also used so that the fast Gibbs algorithm can be used for sampling. The priors are, for each asset $i$:

$$\alpha_i \sim N(0, 100), \quad \beta_i \sim N(1, 100), \quad \Sigma \sim \text{Wishart}(R, 10), \quad k_i \sim \chi^2(5).$$

The variance-covariance matrix has a prior Wishart distribution, with density function

$$f(\Sigma) \propto |R|^{k/2}\, |\Sigma|^{(k-p-1)/2} \exp\left(-\tfrac{1}{2}\,\mathrm{tr}(R\Sigma)\right),$$

where $k = 10$ and $R$ is a $10 \times 10$ matrix with $(i,j)$th element $100 \times I_{i=j}$; that is, the prior variance for each asset is 100 and the prior covariances are zero.

The conditional likelihoods of the data $r$ used to calculate Bayes factors are as follows. For the normal model, the conditional likelihood under $M_A$ is

$$p(r|\alpha, \beta, \Sigma) = \prod_{t=1}^T (2\pi)^{-10/2}\, |\Sigma|^{-1/2} \exp\left(-\tfrac{1}{2}(r_t - \alpha - \beta r_{m,t})'\, \Sigma^{-1}\, (r_t - \alpha - \beta r_{m,t})\right). \tag{4}$$

For the Student-t model, the conditional likelihood under $M_A$ is

$$p(r|\alpha, \beta, \Sigma, k) = \prod_{t=1}^T \frac{\Gamma[(k+10)/2]}{\Gamma(k/2)\, k^{10/2}\, \pi^{10/2}}\, |\Sigma|^{-1/2} \left[1 + \tfrac{1}{k}(r_t - \alpha - \beta r_{m,t})'\, \Sigma^{-1}\, (r_t - \alpha - \beta r_{m,t})\right]^{-(k+10)/2}. \tag{5}$$
Under M0 , the formulae are the same after setting α = 0.
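For numerical work, likelihood (4) is best evaluated in log form. A sketch with $N = 2$ assets and hypothetical parameter values (the data are simulated so that the residuals match the assumed $\Sigma$):

```python
import numpy as np

# Hypothetical parameters for an N = 2 asset version of (4).
rng = np.random.default_rng(5)
T, N = 100, 2
rm = rng.normal(0.005, 0.04, T)                  # market excess returns
alpha = np.zeros(N)
beta = np.array([0.9, 1.1])
Sigma = np.array([[4e-4, 1e-4], [1e-4, 4e-4]])   # residual covariance

resid = rng.multivariate_normal(np.zeros(N), Sigma, T)
r = alpha + np.outer(rm, beta) + resid           # simulated asset returns

# Log of (4): sum over t of the multivariate-normal log density of residuals.
Sinv = np.linalg.inv(Sigma)
e = r - alpha - np.outer(rm, beta)
loglik = (-0.5 * T * (N * np.log(2 * np.pi) + np.log(np.linalg.det(Sigma)))
          - 0.5 * np.einsum('ti,ij,tj->', e, Sinv, e))
print(loglik)
```

Setting `alpha` to the zero vector, as here, gives the $M_0$ likelihood; under $M_A$ the sampled intercepts would be substituted instead.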
6
Data
The test portfolio for mean-variance efficiency is the US market weighted by market capitalization. This is proxied by the monthly CRSP¹ value-weighted aggregated index, which is a weighted sum of the NYSE, AMEX & NASDAQ indices. Ten CRSP stock portfolios, sorted by capitalization decile, are used as proxies for asset returns. The data window is January 1926 to December 2006. Gibbons, Ross and Shanken (1989) used the same dataset but with the equally-weighted CRSP index. The zero-correlation portfolio return $r_{zp}$ is proxied by the 1-month T-bill rates supplied by the CRSP risk-free rate index. The T-bill rate is treated as risk-free, that is, it is assumed that $\mathrm{Cov}(r_p, r_{zp}) = 0$.
¹ Center for Research in Security Prices. Data available through Wharton Research Data Services, wrds.wharton.edu.
7
Results
7.1
Posterior Distribution Test
The posterior distribution $p(\alpha_1, \ldots, \alpha_{10}|Y)$ is sampled 10,000 times using the Gibbs algorithm for the multivariate-normal model, and a Metropolis-Hastings algorithm for the multivariate-t model. The first 2,000 observations are discarded to ensure stationarity of the chain, leaving 8,000 observations $\{\alpha_1^{(k)}, \ldots, \alpha_{10}^{(k)}\}_{k=1}^{K=8000}$. The minimal credibility set is fitted by constructing a smoothed kernel density estimate of the draws, joining the densities into an array, and sliding a hyperplane perpendicular to the density until the volume above the intersection equals the required probability. The method is outlined in Hyndman (1996), who has written an R library² of credibility-set tools.

Multivariate normal. The joint distribution $p(\alpha_1, \ldots, \alpha_{10}|r)$ is shown in Figure 1 through the cross-sectional distributions $p(\alpha_k, \alpha_{10}|r)$ for ascending $k = 1$ to 9. There is strong negative correlation between $\alpha_1$ and $\alpha_{10}$, which represent the lowest and highest capitalization deciles respectively. Cross-sections of the $\mathbb{R}^{10}$ minimum credibility set are drawn at 1% (outer contour) and 5% (inner contour). The point $0 = (0, \ldots, 0)$ is not an element of this set at 1% or 5%, as seen in the top-left panel plotting $p(\alpha_1, \alpha_{10}|r)$: the point $(0, 0)$ lies outside the set. Consequently, mean-variance efficiency is rejected at 1% and 5%.

Multivariate Student-t. Figure 2 shows the cross-sectional plots and minimum credibility sets at 1% and 5%. A similar correlation structure is present. The credibility set has slightly larger volume and slightly different positioning. The point $0$ is contained in the set at 1% and marginally at 5%: the point $(0, 0)$ is only just contained in the 5% set in $p(\alpha_4, \alpha_{10}|r)$. Mean-variance efficiency is accepted at both levels.
7.2
Bayes Factor Test
A random sample of 8,000 estimated parameters is generated for the unrestricted model $M_A$ ($\alpha \in \mathbb{R}^N$) and the restricted model $M_0$ ($\alpha = 0$). The draws $(\alpha_A^{(k)}, \beta_A^{(k)}, \Sigma_A^{(k)})$ and $(\beta_0^{(k)}, \Sigma_0^{(k)})$ of parameter values for the two respective models are substituted into the conditional likelihood formulae given in (4) and (5). The Bayes factor is estimated by

$$\widehat{BF}(M_0, M_A) = \frac{\sum_{k=1}^K p(r|\alpha_A^{(k)}, \beta_A^{(k)}, \Sigma_A^{(k)})}{\sum_{k=1}^K p(r|\beta_0^{(k)}, \Sigma_0^{(k)})}.$$

Multivariate normal. The estimated Bayes factor is

$$\widehat{BF}(M_0, M_A) = 51.25,$$

which indicates decisive evidence for the unconstrained model $\alpha \in \mathbb{R}^N$ over the constrained model $\alpha = 0$: the Bayes factor exceeds the adopted "decisive" threshold of 20. Mean-variance efficiency is rejected.
² hdrcde, available through cran.r-project.org
[Figure 1: Multivariate normal. Cross-sections of the 1% and 5% minimum credibility set for $(\alpha_1, \ldots, \alpha_{10})$. The set lies in $\mathbb{R}^{10}$ and each cross-section shows the joint posterior $p(\alpha_k, \alpha_{10}|r)$, $k = 1$ to 9. The set does not contain $0 = (0, \ldots, 0)$ at 1% or 5%, which is visible in the top-left panel; consequently efficiency is rejected at 1% and 5%.]
[Figure 2: Multivariate Student-t. Cross-sections of the 1% and 5% minimum credibility set for $(\alpha_1, \ldots, \alpha_{10})$. The set contains $0 = (0, \ldots, 0)$ at both 1% and 5%; consequently efficiency is accepted.]
Multivariate Student-t. The estimated Bayes factor is

$$\widehat{BF}(M_0, M_A) = 21.10,$$

which indicates significant evidence for the unconstrained model $\alpha \in \mathbb{R}^N$ over the constrained model $\alpha = 0$. Using the threshold of 20, mean-variance efficiency is marginally rejected.
8
Discussion
Results from the tests illustrate the importance of distributional assumptions in parametric test methods. Under the i.i.d. multivariate-normal model, mean-variance efficiency is rejected "decisively" by the Bayes factor test, and rejected at the 1% and 5% levels in the credibility set test. Under the i.i.d. multivariate-t model, however, the Bayes factor is considerably lower and only marginally decisive, and mean-variance efficiency is accepted in the credibility set test. In conclusion, this paper presents two new Bayesian tests of mean-variance efficiency: the first uses the joint posterior distribution of the intercept parameters to construct credibility sets; the second uses Bayes factors to compare the constrained (zero-intercept) and unconstrained regression models.
References

[1] Cont, R., "Empirical Properties of Asset Returns: Stylized Facts and Statistical Issues," Quantitative Finance, 1, 2, 223-236, 2001.

[2] Gibbons, M., Ross, S., and Shanken, J., "A Test of the Efficiency of a Given Portfolio," Econometrica, 57, 5, 1121-1152, 1989.

[3] Hansen, L.P., "Large Sample Properties of Generalized Method of Moments Estimators," Econometrica, 50, 4, 1029-1054, 1982.

[4] Harris, R. & Küçüközmen, C., "The Empirical Distribution of UK and US Stock Returns," Journal of Business Finance & Accounting, 28, 5-6, 715-740, 2001.

[5] Hyndman, R.J., "Computing and Graphing Highest Density Regions," American Statistician, 50, 120-126, 1996.

[6] Kan, R., & Smith, D., "The Distribution of the Sample Minimum-Variance Frontier," working paper, Simon Fraser University, 2006.

[7] Kass, R.E., & Raftery, A.E., "Bayes Factors," Journal of the American Statistical Association, 90, 430, 773-795, 1995.

[8] Perneger, T., "What's Wrong with Bonferroni Adjustments," British Medical Journal, 1998.

[9] Roll, R., "A Critique of the Asset Pricing Theory's Tests, Part I: On Past and Potential Testability of the Theory," Journal of Financial Economics, 4, 2, 1977.
A
WinBUGS Model
For brevity, the two models that follow estimate the system with 2 assets; each additional asset $k$ can be added by repeating the equations for mean_k and the priors for alpha_k and beta_k. An extra row and column with 0.01 on the diagonal must be added to the Wishart prior matrix R.
A.1
Normal Model
model{ for (t in 1:N) { mu[t,1]