Infinite-State Markov-switching for Dynamic Volatility Models: Web Appendix

Arnaud Dufays
Centre de Recherche en Economie et Statistique

March 19, 2014

1 Comparison of the two MS-GARCH approximations

Section three of the paper (see Estimation by Bayesian inference and model comparison) details two algorithms to infer the parameters of an MS-GARCH model. The two algorithms differ in the approximation of the MS-GARCH model on which they rely. A more accurate approximation leads to a higher acceptance rate as well as a lower autocorrelation between posterior draws. To compare the two algorithms, we carry out a Monte Carlo study based on simulated series of 1,000 observations and analyze the mixing properties. For each simulation we compute the autocorrelation time¹ as well as the time required to sample one effective posterior draw.² We consider four different MS-GARCH models exhibiting a break in ω with one or multiple switches (CP and MS in Table 1). The data generating processes differ in their persistence (α + β), which is assumed to be equal across regimes. The Monte Carlo study consists of eight hundred different simulated series (one hundred per DGP per algorithm). The main interest of this study lies in the ability to sample the state vector. These MCMC simulations therefore fix the GARCH parameters at their MLE and only draw the state vector given these parameters. Table 1 displays the average difference of the maximum autocorrelation times and the average difference of the effective times for the two algorithms (Kl-MH for the model of Klaassen (2002) minus HMP-MH for the model of Haas, Mittnik, and Paolella (2004)).

¹ The autocorrelation time is computed by batch means (see Geyer (1992)) and is defined as 1 + 2 Σ_{i=1}^{∞} ρ_i, where ρ_i is the autocorrelation coefficient of order i between the posterior draws of a state variable.
² Following the formula: autocorrelation time × (elapsed time for N posterior draws) / N.
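The quantities in footnotes 1 and 2 are easy to make concrete. The sketch below is a minimal Python illustration under stated assumptions: it handles a single chain of posterior draws, uses a simple truncated-sum estimator of 1 + 2 Σ ρ_i rather than the batch-means estimator of Geyer (1992) used here, and the function names are ours.

import numpy as np

def autocorrelation_time(draws, max_lag=200):
    # Estimate tau = 1 + 2 * sum_i rho_i for one MCMC chain.
    # Truncated-sum stand-in for the batch-means estimator used in the appendix.
    x = np.asarray(draws, dtype=float)
    x = x - x.mean()
    n = len(x)
    var = np.dot(x, x) / n
    tau = 1.0
    for lag in range(1, min(max_lag, n - 1)):
        rho = np.dot(x[:-lag], x[lag:]) / ((n - lag) * var)
        if rho <= 0:          # stop summing once autocorrelations die out
            break
        tau += 2.0 * rho
    return tau

def time_per_effective_draw(draws, elapsed_seconds):
    # Footnote 2: autocorrelation time x (elapsed time for N posterior draws) / N.
    return autocorrelation_time(draws) * elapsed_seconds / len(draws)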

                            Break in ω: ω1 = 0.1, ω2 = 0.7
 Persistence          0.7             0.8             0.9             0.95

 90% CI of the difference between elapsed times for one effective draw
 CP                   [-0.23;0.02]    [-0.54;0.02]    [-0.75;0.01]    [-1.10;0.01]
 MS                   [-0.85;0.15]    [-1.05;0.01]    [-1.19;0.05]    [-1.35;0.29]

 90% CI of the difference between autocorrelation times
 CP                   [-20.91;1.06]   [-49.03;1.65]   [-68.34;0.75]   [-90.24;0.11]
 MS                   [-79.05;6.55]   [-96.63;-1.10]  [-111.96;-1.08] [-109.71;10.15]

Table 1: The differences are computed as Kl-MH minus HMP-MH, where Kl-MH denotes 'Klaassen's Metropolis-Hastings' and HMP-MH stands for 'Haas, Mittnik and Paolella Metropolis-Hastings'. A negative value provides evidence in favor of the Kl-MH algorithm.

Although almost all confidence intervals include zero, the distributions are skewed to the left whatever the persistence and the number of breaks. The left limit also moves further from zero when the persistence and/or the number of switches grows, while no systematic pattern can be discerned from the table for the right limit. These two observations lead us to believe that the Klaassen model provides a better approximation of the MS-GARCH model than the specification of Haas, Mittnik, and Paolella (2004). Further evidence is provided by the simulation exercise of the paper (see subsection 4.3, Comparison with other algorithms). The result is not surprising since the former model keeps track of the preceding variances when a switch in the state occurs (see the sketch below). Despite these observations, the HMP model could still be preferred in the IHMM context. Indeed, the beam sampler combined with the Klaassen model somewhat complicates the sampling of the state vector. These computational difficulties do not arise with the HMP model.
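To illustrate why the Klaassen-type approximation "keeps track of the preceding variances", the sketch below contrasts two conditional-variance recursions: the parallel regime-specific recursions of Haas, Mittnik and Paolella (2004) and a simplified path-dependent recursion that carries the lagged variance across a regime switch, in the spirit of Klaassen (2002). This is an illustration only, not the exact specification estimated in the paper; the function names and the unconditional-variance initialisation are our assumptions.

import numpy as np

def hmp_variances(y, omega, alpha, beta):
    # Haas-Mittnik-Paolella style: one GARCH(1,1) recursion per regime,
    # so each regime only ever sees its own lagged variance.
    omega, alpha, beta = map(np.asarray, (omega, alpha, beta))
    T, K = len(y), len(omega)
    h = np.empty((T, K))
    h[0, :] = omega / (1.0 - alpha - beta)        # assumed start: unconditional variances
    for t in range(1, T):
        h[t, :] = omega + alpha * y[t - 1] ** 2 + beta * h[t - 1, :]
    return h

def path_dependent_variances(y, states, omega, alpha, beta):
    # Simplified Klaassen-style behaviour: the lagged variance computed along the
    # sampled state path is carried over when the regime switches at date t.
    T = len(y)
    h = np.empty(T)
    k = states[0]
    h[0] = omega[k] / (1.0 - alpha[k] - beta[k])
    for t in range(1, T):
        k = states[t]
        h[t] = omega[k] + alpha[k] * y[t - 1] ** 2 + beta[k] * h[t - 1]
    return h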

2 Estimation of the spline-GARCH parameters by SMC sampler

The SMC sampler discretely approximates an artificial sequence of distributions {π_n}_{n=1}^p by sequential importance sampling. Let x = {α, β, ω_0, ..., ω_{k+1}} denote the set containing all the spline-GARCH parameters. The artificial sequence of distributions is obtained by introducing an increasing function φ : Z → [0, 1] with φ(1) = 0 and φ(p) = 1, such that

   π_n(x|Y_T) ∝ f(Y_T|x)^{φ(n)} f(x),


where f(x) denotes the prior density evaluated at x. Note that when n = p, the distribution π_p coincides with the posterior distribution of interest. Conversely, when n = 1, π_1 is equal to the prior distribution (if the latter is proper). Del Moral, Doucet, and Jasra (2006) provide an SMC algorithm for sequentially approximating each distribution in the artificial sequence. As the function φ is user-defined, one can choose a function that increases smoothly such that, at a specific iteration of the SMC, the approximation of the previous targeted distribution remains close to the current one.

The algorithm operates as follows. First, sample N draws {x_1^i}_{i=1}^N from the prior distribution and attach to them the uniform weights {W_1^i = 1/N}_{i=1}^N. Then, starting from n = 2 until n = p, apply the steps (a)-(c):

1. Correction step: ∀ i ∈ [1, N], re-weight each particle with respect to the n-th posterior distribution,

   w̃_n^i = f(Y_T | x_{n-1}^i)^{φ(n) − φ(n−1)},

   and normalize the particle weights:

   W_n^i = W_{n-1}^i w̃_n^i / Σ_{j=1}^N W_{n-1}^j w̃_n^j.

2. Re-sampling step: Compute the Effective Sample Size (ESS) as

   ESS = [ Σ_{i=1}^N (W_n^i)^2 ]^{-1}.

   If ESS
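A minimal Python sketch of the correction and re-sampling steps above, under stated assumptions: loglik(x) is a hypothetical placeholder returning log f(Y_T | x) for the spline-GARCH parameters, sample_prior() draws x from the prior, and both the linear tempering schedule φ and the re-sampling threshold are illustrative choices rather than prescriptions of the paper.

import numpy as np

def smc_sampler(loglik, sample_prior, p=50, N=1000, ess_threshold=0.5, seed=None):
    rng = np.random.default_rng(seed)
    phi = np.linspace(0.0, 1.0, p)                # smooth schedule with phi(1) = 0, phi(p) = 1 (assumed linear)
    particles = [sample_prior(rng) for _ in range(N)]
    logw = np.full(N, -np.log(N))                 # uniform initial weights W_1^i = 1/N

    for n in range(1, p):
        # Correction step: log of w~_n^i = f(Y_T | x_{n-1}^i)^(phi(n) - phi(n-1))
        logw += (phi[n] - phi[n - 1]) * np.array([loglik(x) for x in particles])
        logw -= np.logaddexp.reduce(logw)         # normalize the particle weights
        W = np.exp(logw)

        # Re-sampling step: ESS = [sum_i (W_n^i)^2]^(-1)
        ess = 1.0 / np.sum(W ** 2)
        if ess < ess_threshold * N:               # threshold is an assumption, not taken from the appendix
            idx = rng.choice(N, size=N, p=W)
            particles = [particles[i] for i in idx]
            logw = np.full(N, -np.log(N))
        # A move step targeting pi_n would typically follow in an SMC sampler (omitted here).

    return particles, np.exp(logw)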
