Recursive Predictability Tests for Real-Time Data

Atsushi Inoue (NC State)    Barbara Rossi (Duke University)

June 2004

Abstract. We propose a sequential test for predictive ability. The test is designed for regressions in which the researcher is interested in recursively assessing whether some economic variables have predictive or explanatory content for another variable. It is common in the forecasting literature to assess predictive ability by using "one-shot" tests at each estimation period. We show that this practice: (i) leads to size distortions; (ii) selects overfitted models and provides spurious evidence of in-sample predictive ability; and (iii) may lower the accuracy of the model selected by the test. The usefulness of the proposed test is shown in well-known empirical applications: the real-time predictive content of money for output, and the selection between linear and non-linear models.

Keywords: Sequential tests, predictive ability, model selection.
JEL Classification: C52, C53

We would like to thank Todd Clark, Lutz Kilian, Michael McCracken, Alessandro Tarozzi and two anonymous referees for many useful and detailed comments. We are also grateful to seminar participants at the Financial Econometrics Lunch at Duke University and Louisiana State University, in particular T. Bollerslev, R. Gallant, E. Hillebrand and G. Tauchen for comments and helpful suggestions.

Corresponding author: Barbara Rossi, Department of Economics, Duke University, Durham, NC 27705, USA. Phone: 919 660 1801. E-mail: [email protected].


1. Introduction

Assessing whether there is predictability among macroeconomic variables has always been a central issue for applied researchers. For example, much effort has been devoted to analyzing whether money has predictive content for output. This question has been addressed by using both simple linear Granger Causality (GC) tests (e.g. Stock and Watson (1989)) as well as tests that allow for non-linear predictive relationships (e.g. Amato and Swanson (2001) and Stock and Watson (1999), among others).

When parameters may be time-varying, and the objective of the researcher is to assess the presence of a relationship between two economic variables, it is tempting to use predictability tests recursively. While this procedure has the correct size at each point in time, it does not have the correct size over the whole sequence of test statistics: the overall size of the tests approaches one as the procedure is repeated more and more times. Similar problems are likely to occur when the researcher recursively tests whether inflation is under control, as many inflation-targeting Central Banks do in practice.

We propose a new recursive test for predictive ability that controls the overall size of the procedure and, hence, protects the researcher from overfitting. Our test applies to predictive regressions in which, at each point in time, the researcher tests whether a set of economic variables has predictive content for some variable of interest on the basis of an in-sample test using only observations available until that time, and the parameters are recursively re-estimated as time goes by. The outcome of the test may be used as evidence of in-sample predictive ability as well as for out-of-sample forecasting purposes.

Commonly used tests, whose critical values do not take into account the recursive nature of the procedure (referred to here as "one-shot" tests), have size equal to the nominal (desired) level at each point in time. Their recursive application, however, leads to severe size distortions. We instead derive the distribution of the test statistic under the null hypothesis by considering the recursive nature of the whole testing procedure. This allows us to derive the correct critical values, which can then be used to recursively test for predictive ability. The test statistics proposed in this paper are calculated as usual, but their critical values are different and depend upon the sample size. These critical values can easily be calculated by using a table provided in the paper, so that applied researchers can directly apply the proposed procedure.

The test is similar in spirit to the fluctuation test discussed in Chu et al. (1996), but our test focuses on predictive ability. We also allow for a more general GMM framework and possibly nonlinear restrictions. The GMM framework can also be useful for selecting between linear and non-linear models, which is one of the empirical applications that we consider.

Our test is different from existing out-of-sample recursive tests for predictive ability (e.g. Clark and McCracken (2001, 2003d) for one-step-ahead predictions, and Clark and McCracken (2003c) for h-step-ahead predictions, under the maintained assumption of dynamic correct specification) or out-of-sample tests of Granger Causality (see Chao, Corradi and Swanson (2001), and Corradi and Swanson (2002) for an


out-of-sample test for Granger Causality which is consistent against generic alternatives, and which allows for dynamic misspecification under the null). In these tests, the available sample is given, i.e. it is considered fixed. The sample is recursively split into two subsets: one used to estimate the parameters, and one used to validate the forecasts of the model. Although this procedure involves recursive estimation of the parameters, the test is, in essence, one-shot, because the sample size is given. Furthermore, our procedure can be applied to situations in which the data available at different times vary as a result of redefinitions, a common situation for macroeconomic data (see Croushore and Stark (2001)).

Our discussion may shed some light on the fragile link between in-sample model selection and out-of-sample forecasting in real time. Stock and Watson (1989) apply in-sample Granger Causality tests and find some evidence that money has predictive content for output, whereas more recent contributions find no evidence of out-of-sample predictive ability. What kind of guide, then, do in-sample tests offer to out-of-sample predictive ability? In-sample and out-of-sample tests often provide contradictory results. These contradictory findings are often attributed to overfitting or to the low power of forecasting tests (Kilian and Inoue (2002)), or to the presence of parameter instability (Clark and McCracken (2003a,b,d)). This paper investigates another possible explanation, namely that repeated tests for model selection might select overfitted models, thus deteriorating forecasting ability. On the other hand, the approach in this paper is valid only for comparing two nested models and,


in this sense, it cannot be viewed as a sequential alternative to the Diebold and Mariano (1995) and West (1996) out-of-sample tests.

The paper is organized as follows. Section 2 discusses the background and motivation, and Section 3 presents the main result of the paper: the recursive tests. Section 4 provides some small Monte Carlo evidence on the size and power of the proposed tests, and shows that they have good size and power properties. Section 5 applies the recursive tests to two important empirical applications: the relationship between money and output, and the choice between linear and non-linear models for a few representative macroeconomic variables. The last section concludes.

2. Background and motivation

As a simple motivating example (this is just an example; the framework of this paper is much more general, as explained later), consider a researcher who has available a historical dataset of size T. He is interested in testing a null hypothesis on a parameter at each point in time t > T, that is, t = T+1, T+2, .... For example, the researcher is interested in assessing whether a scalar variable "x" has predictive content at any point in time for another variable "y". That is, the researcher is interested in recursively testing hypotheses on β_t in the regression:

y_{t+1} = β_t x_t + u_{t+1},

where u_{t+1} satisfies the usual linear regression assumptions. The null hypothesis is β_t = β_0 at every t ≥ T+1, and the alternative is β_t ≠ β_0 for some t ≥ T+1. Let β̂_t denote the recursive estimate of β at every point in time t = T+1, T+2, ..., and let τ_t denote the associated t-test statistic.

To test the null hypothesis, one might simply perform a t-test using conventional (normal) large-sample critical values at each t, and reject the null hypothesis if the t-test rejects at any point in time. Unfortunately, with conventional critical values, by the Law of the Iterated Logarithm the probability that this test eventually rejects the null hypothesis is asymptotically one. The same argument remains true for any constant critical values, no matter how large.

To remedy this problem, this paper derives critical values that allow the researcher to "follow" the test statistic through the whole sequence t = T+1, T+2, ... in such a way that the probability of rejecting the null hypothesis is under control at each t. This requires a boundary function such that the path of the test statistic crosses the boundary with the desired probability under the null hypothesis. This is achieved by controlling the behavior of the test statistic as a function of π ≡ t/T and exploiting results on boundary crossing probabilities (e.g. Chu et al., 1996) of the form

lim_{T→∞} P{ |τ_t| ≥ ψ(t/T) for some t > T } = lim_{T→∞} P{ |τ_t| ≥ ψ(π) for some π > 1 },

where ψ(·) is the boundary function. There are various possible choices for the boundary function. For example, a fluctuation test of size α would use ψ(π) = √(k²_{1,α} + ln(π)), where k_{1,α} is a constant such that 2[1 − Φ(k_{1,α}) + k_{1,α} φ(k_{1,α})] = α, α is the desired size, and Φ and φ are the c.d.f. and p.d.f. of the standard normal distribution. In this paper, we also propose critical values that result in more powerful tests when there is more than one restriction.
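The size problem and its boundary-crossing remedy can be illustrated with a small simulation. The sketch below is not the authors' code: it takes the simple regression y_{t+1} = β x_t + u_{t+1} from above with β = 0 (so the null is true), computes the recursive t-statistics τ_t, and compares the overall rejection rate of the naive rule |τ_t| > 1.96 with that of the boundary rule |τ_t| ≥ ψ(t/T). The boundary ψ(π) = √(k²_{1,α} + ln π) and the equation defining k_{1,α} are taken from the text; the sample sizes, the monitoring horizon, and the use of SciPy's root-finder are illustrative choices.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def boundary_constant(alpha):
    """Solve 2*[1 - Phi(k) + k*phi(k)] = alpha for k_{1,alpha} (one restriction)."""
    return brentq(lambda k: 2.0 * (1.0 - norm.cdf(k) + k * norm.pdf(k)) - alpha,
                  0.5, 10.0)

def psi(pi, k):
    """Boundary function psi(pi) = sqrt(k^2 + ln(pi)) for pi = t/T >= 1."""
    return np.sqrt(k ** 2 + np.log(pi))

def recursive_tstats(x, y, start):
    """t-statistics for beta in y_{t+1} = beta * x_t + u_{t+1} (no intercept),
    estimated recursively on samples of size start, start+1, ..., len(x)-1."""
    xy = x[:-1] * y[1:]                        # pairs (x_t, y_{t+1})
    Sxy = np.cumsum(xy)[start - 1:]
    Sxx = np.cumsum(x[:-1] ** 2)[start - 1:]
    Syy = np.cumsum(y[1:] ** 2)[start - 1:]
    n_obs = np.arange(start, len(xy) + 1)
    beta = Sxy / Sxx                           # recursive OLS estimates
    sigma2 = (Syy - beta * Sxy) / (n_obs - 1)  # recursive residual variance
    return beta / np.sqrt(sigma2 / Sxx)

alpha, T, pi_max, n_rep = 0.05, 100, 5, 1000
n = T * pi_max                                 # monitor over pi = t/T in (1, 5]
k = boundary_constant(alpha)
bound = psi(np.arange(T + 1, n) / T, k)        # boundary at t = T+1, ..., n-1
z = norm.ppf(1 - alpha / 2)                    # conventional one-shot critical value

rng = np.random.default_rng(1)
oneshot = recursive = 0
for _ in range(n_rep):
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)                 # beta = 0: the null is true
    tau = np.abs(recursive_tstats(x, y, T + 1))
    oneshot += np.any(tau > z)                 # reject if ANY one-shot test rejects
    recursive += np.any(tau >= bound)          # boundary-crossing rule

print(f"k_(1,alpha) = {k:.3f}")
print(f"overall size of repeated one-shot tests: {oneshot / n_rep:.3f}")
print(f"overall size of boundary-crossing test:  {recursive / n_rep:.3f}")
```

In experiments of this kind the repeated one-shot rule rejects the true null far more often than the nominal 5% level, while the boundary rule keeps the overall rejection rate under control (the finite monitoring horizon makes it somewhat conservative relative to α).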


3. Assumptions and Theorems

Assume that {z_{T,t}} is a triangular array of random variables. Consider estimation of the parameter θ based on the moment conditions

E[g(z_{T,t}, θ_0)] = 0.

The researcher is interested in testing the null hypothesis H_0 : a(θ_0) = 0 versus the alternative a(θ_0) ≠ 0, where g : Z × Θ →
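For concreteness, one familiar special case of this moment-condition setup is OLS, which is exactly identified GMM with g(z_t, θ) = x_t(y_t − x_t'θ); a linear restriction such as a(θ) = θ_p = 0 (no predictive content of the last regressor) can then be tested with a Wald statistic. The sketch below is a hypothetical minimal illustration under homoskedasticity, not the estimator or test developed in the paper:

```python
import numpy as np

def wald_test_last_coef(X, y):
    """Wald statistic for H0: a(theta) = theta_p = 0 in y = X theta + u,
    using the classical (homoskedastic) variance estimate."""
    n_obs, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    theta = XtX_inv @ X.T @ y                # OLS = exactly identified GMM
    resid = y - X @ theta
    sigma2 = resid @ resid / (n_obs - p)
    V = sigma2 * XtX_inv                     # estimated variance of theta-hat
    return theta[-1] ** 2 / V[-1, -1]        # ~ chi-squared(1) under H0

rng = np.random.default_rng(0)
n = 500
x1 = np.ones(n)                              # intercept
x2 = rng.standard_normal(n)                  # candidate predictor
y = 0.5 * x1 + rng.standard_normal(n)        # theta_2 = 0: H0 is true here
W = wald_test_last_coef(np.column_stack([x1, x2]), y)
print(f"Wald statistic: {W:.3f}")            # compare with the chi2(1) critical value
```

With real data one would typically replace the classical variance estimate with a heteroskedasticity- or HAC-robust one; the point here is only the mapping from the pair (g, a) to a testable restriction.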
