Optimal Monetary Policy under Uncertainty in DSGE Models:
A Markov Jump-Linear-Quadratic Approach

Lars E.O. Svensson (Sveriges Riksbank)
Noah Williams (University of Wisconsin - Madison)

November 2009
Introduction

"I have long been interested in the analysis of monetary policy under uncertainty. The problems arise from what we do not know; we must deal with the uncertainty from the base of what we do know. [...] The Fed faces many uncertainties, and must adjust its one policy instrument to navigate as best it can this sea of uncertainty. Our fundamental principle is that we must use that one policy instrument to achieve long-run price stability. [...] My bottom line is that market participants should concentrate on the fundamentals. If the bond traders can get it right, they'll do most of the stabilization work for us, and we at the Fed can sit back and enjoy life."

William Poole (1998), "A Policymaker Confronts Uncertainty"
Overview

- Develop methods for policy analysis under uncertainty; the methods have broad potential applications.
- Consider optimal policy when policymakers don't observe the true economic structure and must learn from observations.
- Classic problem of learning and control: actions have an informational component, so there is a motive to alter actions to mitigate future uncertainty ("experimentation").
- Unlike most previous literature, we consider forward-looking models, with particular focus on DSGE models.
- Issues:
  - How does uncertainty affect policy?
  - How does learning affect losses?
  - How does the experimentation motive affect policy and losses?
Some Related Literature

- This paper: application of our work in Svensson-Williams (2007- . . .)
- Aoki (1967), Chow (1973): multiplicative uncertainty in LQ models (only the backward-looking/control case)
- Control theory: Costa-Fragoso-Marques (2005), others
- Recursive saddlepoint method: Marcet-Marimon (1998)
- Blake-Zampolli (2005), Zampolli (2005): similar observable-modes case, less general
- Wieland (2000, 2006), Beck and Wieland (2002): optimal experimentation with backward-looking models
- Cogley, Colacito, Sargent (2007): adaptive policy as approximation to Bayesian policy, expectational variables
- Tesfaselassie, Schaling, Eijffinger (2006), Ellison (2006): similar, less general
The Model
Standard linear rational expectations framework:

  Xt+1 = A11 Xt + A12 xt + B1 it + C1 εt+1
  Et H xt+1 = A21 Xt + A22 xt + B2 it + C2 εt

- Xt predetermined variables, xt forward-looking variables, it CB instruments (controls)
- εt i.i.d. shocks, N(0, I)
The Model
Markov jump-linear-quadratic (MJLQ) framework:

  Xt+1 = A11,t+1 Xt + A12,t+1 xt + B1,t+1 it + C1,t+1 εt+1
  Et Ht+1 xt+1 = A21,t Xt + A22,t xt + B2,t it + C2,t εt

- Xt predetermined variables, xt forward-looking variables, it CB instruments (controls)
- εt i.i.d. shocks, N(0, I)
- The matrices can take Nj different values in period t, corresponding to Nj modes jt = 1, 2, ..., Nj
- The modes jt follow a Markov chain with transition matrix P = [Pjk]
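To fix ideas, here is a minimal simulation sketch (not from the paper): it draws the Markov modes and updates the predetermined block only, with hypothetical two-mode matrices; handling the forward-looking variables xt requires the solution methods discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-mode example: X_{t+1} = A[j_{t+1}] X_t + B[j_{t+1}] i_t + C[j_{t+1}] eps_{t+1}
A = [np.array([[0.9]]), np.array([[0.5]])]    # mode-dependent dynamics
B = [np.array([[0.2]]), np.array([[1.0]])]    # mode-dependent policy impact
C = [np.array([[0.5]]), np.array([[0.5]])]    # shock loadings
P = np.array([[0.98, 0.02],                   # Markov transition matrix [P_jk]
              [0.02, 0.98]])

def simulate(T, policy, X0=np.zeros(1), j0=0):
    """Simulate the modes and the predetermined state under a given rule i_t = policy(X_t)."""
    X, j = X0.copy(), j0
    Xs, js = [X.copy()], [j]
    for t in range(T):
        i_t = policy(X)
        j = rng.choice(len(P), p=P[j])               # draw next mode j_{t+1}
        eps = rng.standard_normal(C[j].shape[1])     # i.i.d. N(0, I) shock
        X = A[j] @ X + B[j] @ i_t + C[j] @ eps       # mode-dependent state update
        Xs.append(X.copy()); js.append(j)
    return np.array(Xs), np.array(js)

Xs, js = simulate(200, policy=lambda X: -0.5 * X)    # simple stabilizing rule, for illustration only
```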
Beliefs and Loss

- The central bank (and the aggregate private sector) observe Xt and it, but do not (in general) observe jt or εt.
- pt|t: perceived probabilities of the modes in period t
- Prediction equation: pt+1|t = P' pt|t
- CB intertemporal loss function:

  Et Σ_{τ=0}^{∞} δ^τ Lt+τ    (1)

- Period loss:

  Lt ≡ (1/2) (Xt', xt', it') Wt (Xt', xt', it')'    (2)
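A minimal numerical sketch (hypothetical numbers, not from the paper) of the prediction equation and the quadratic period loss:

```python
import numpy as np

P = np.array([[0.98, 0.02],
              [0.02, 0.98]])
p_tt = np.array([0.7, 0.3])          # p_{t|t}: current mode beliefs
p_t1t = P.T @ p_tt                   # prediction equation: p_{t+1|t} = P' p_{t|t}

# Period loss L_t = 0.5 * s' W s with stacked vector s = (X_t, x_t, i_t)
W = np.diag([1.0, 1.0, 0.1])         # hypothetical weight matrix W_t
s = np.array([0.5, -0.2, 1.0])       # hypothetical realized values of (X_t, x_t, i_t)
L_t = 0.5 * s @ W @ s
```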
General and Tractable Way to Model Uncertainty

A large variety of uncertainty configurations can be handled; the framework approximates most (all?) relevant kinds of model uncertainty:
- Regime-switching models
- i.i.d. and serially correlated random model coefficients (generalized Brainard-type uncertainty)
- Different structural models
  - Different variables, different numbers of leads and lags
  - Backward- or forward-looking models
- A particular variable: private-sector expectations
- Ambiguity aversion, robust control (P ∈ P)
- Different forms of CB judgment (for instance, perceived uncertainty)
- And many more . . .
Approximate MJLQ Models

MJLQ models provide convenient approximations of nonlinear DSGE models. Underlying function of interest: f(X, θ), where X is continuous and θ ∈ {θ1, . . . , θnj}.

Taylor approximation around (X̄, θ̄):
  f(X, θj) ≈ f(X̄, θ̄) + fX(X̄, θ̄)(X − X̄) + fθ(X̄, θ̄)(θj − θ̄).
Valid as X → X̄ and θ → θ̄: small shocks to X and θ.

MJLQ approximation around (X̄j, θj):
  f(X, θj) ≈ f(X̄j, θj) + fX(X̄j, θj)(X − X̄j).
Valid as X → X̄j: small shocks to X, slow variation in θ (P → I).
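A small numerical sketch (not from the paper) comparing the two approximations for a hypothetical scalar function f(X, θ) = θ X², with hypothetical approximation points:

```python
import numpy as np

# Hypothetical nonlinear function of interest and its derivatives
f    = lambda X, th: th * X**2
f_X  = lambda X, th: 2 * th * X          # derivative w.r.t. X
f_th = lambda X, th: X**2                # derivative w.r.t. theta

thetas = np.array([0.8, 1.2])            # two modes theta_1, theta_2
X_bar, th_bar = 1.0, thetas.mean()       # common approximation point (X_bar, theta_bar)
X_bars = np.array([1.1, 0.9])            # hypothetical mode-specific points X_bar_j

X, j = 1.05, 1                           # evaluate at some X while in mode 2 (index 1)

taylor = (f(X_bar, th_bar) + f_X(X_bar, th_bar) * (X - X_bar)
          + f_th(X_bar, th_bar) * (thetas[j] - th_bar))
mjlq   = f(X_bars[j], thetas[j]) + f_X(X_bars[j], thetas[j]) * (X - X_bars[j])

print(f"exact {f(X, thetas[j]):.4f}  taylor {taylor:.4f}  mjlq {mjlq:.4f}")
```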
Four Different Cases

In each case we assume commitment under a timeless perspective. We follow Marcet-Marimon (1999) and convert the problem to a saddlepoint/min-max problem: the extended state vector includes lagged Lagrange multipliers, and the controls include the current multipliers.

1. Observable modes (OBS): current mode known, uncertainty about future modes
2. Optimal policy with no learning (NL): naive updating equation pt+1|t+1 = P' pt|t
3. Adaptive optimal policy (AOP): policy as in NL, but Bayesian updating of pt+1|t+1 each period; no experimentation
4. Bayesian optimal policy (BOP): optimal policy taking Bayesian updating into account; optimal experimentation
1. Observable modes (OBS)

- Policymakers (and the public) observe jt and know that jt+1 is drawn according to P. Analogue of regime-switching models in econometrics.
- The law of motion for X̃t is linear and preferences are quadratic in X̃t, conditional on the modes.
- The solution is linear in X̃t for given jt:  it = Fjt X̃t.
- The value function is quadratic in X̃t for given jt:  V(X̃t, jt) ≡ (1/2) X̃t' VX̃X̃,jt X̃t + wjt.
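The paper handles the forward-looking OBS case via the recursive saddlepoint method. Purely for intuition, here is a sketch (not the paper's algorithm) of the mode-coupled Riccati value iteration for a backward-looking MJLQ special case with current-mode-indexed matrices, X_{t+1} = A_j X_t + B_j i_t + C_j ε_{t+1} and period loss (1/2)(X' Q_j X + i' R_j i); the additive shock terms only add mode-dependent constants and do not affect the feedback rules.

```python
import numpy as np

def obs_riccati(A, B, Q, R, P, delta, iters=2000, tol=1e-12):
    """Coupled Riccati iteration for a backward-looking MJLQ model with observable modes.

    Returns mode-dependent value matrices V_j and feedback rules i_t = F_j X_t."""
    n_modes, n = len(A), A[0].shape[0]
    V = [np.zeros((n, n)) for _ in range(n_modes)]
    for _ in range(iters):
        # M_j = E[V_{j_{t+1}} | j_t = j] = sum_k P_{jk} V_k
        M = [sum(P[j, k] * V[k] for k in range(n_modes)) for j in range(n_modes)]
        V_new, F = [], []
        for j in range(n_modes):
            S = R[j] + delta * B[j].T @ M[j] @ B[j]
            F_j = -np.linalg.solve(S, delta * B[j].T @ M[j] @ A[j])   # optimal feedback in mode j
            Acl = A[j] + B[j] @ F_j                                   # closed-loop dynamics
            V_j = Q[j] + F_j.T @ R[j] @ F_j + delta * Acl.T @ M[j] @ Acl
            V_new.append(V_j); F.append(F_j)
        if max(np.max(np.abs(Vn - Vo)) for Vn, Vo in zip(V_new, V)) < tol:
            return V_new, F
        V = V_new
    return V, F

# Hypothetical scalar example with two modes
A = [np.array([[0.9]]), np.array([[1.1]])]
B = [np.array([[0.5]]), np.array([[0.5]])]
Q = [np.eye(1), np.eye(1)]
R = [0.1 * np.eye(1), 0.1 * np.eye(1)]
P = np.array([[0.98, 0.02], [0.02, 0.98]])
V, F = obs_riccati(A, B, Q, R, P, delta=0.98)
```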
2. Optimal policy with no learning (NL)

- Interpretation: policymakers forget past Xt−1, . . . in period t when choosing it. This allows for persistence of modes, but it means beliefs don't satisfy the law of iterated expectations, which requires a slightly more complicated Bellman equation.
- The law of motion is linear in X̃t and dual preferences are quadratic in X̃t; pt|t is exogenous.
- The solution is linear in X̃t for given pt|t:  it = Fi(pt|t) X̃t.
- The value function is quadratic in X̃t for given pt|t:  V(X̃t, pt|t) ≡ (1/2) X̃t' VX̃X̃(pt|t) X̃t + w(pt|t).
3. Adaptive optimal policy (AOP)

- Similar to adaptive learning, anticipated utility, passive learning.
- Policy is as under NL (disregarding Bayesian updating):  it = i(X̃t, pt|t),  xt = z(X̃t, pt|t).
- Transition equation for pt+1|t+1 from Bayes' rule:  pt+1|t+1 = Q(X̃t, pt|t, xt, it, jt, εt, jt+1, εt+1).
- The updating is nonlinear and interacts with X̃t, so the true AOP value function is not quadratic in X̃t.
- Evaluation of the loss is more complex numerically, but the recursive implementation is simple.
Bayesian Updating Makes Beliefs Random

- Ex post, pt+1|t+1 = Q(X̃t, pt|t, xt, it, jt, εt, jt+1, εt+1) is a random variable that depends on jt+1 and εt+1.
- Note that Et pt+1|t+1 = pt+1|t = P' pt|t: Bayesian updating gives a mean-preserving spread of pt+1|t+1.
- If V(X̃t, pt|t) is concave in pt|t, the loss is lower under AOP and it is beneficial to learn.
- Note that we assume symmetric beliefs. Learning by the public changes the nature of the policy problem and may make stabilization more difficult.
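A minimal sketch (hypothetical observation model, not the paper's filter) of such an update Q when the observable is Gaussian with a mode-dependent mean; averaging the posterior over many draws illustrates the mean-preserving-spread property Et pt+1|t+1 = P' pt|t.

```python
import numpy as np

rng = np.random.default_rng(1)

P = np.array([[0.98, 0.02],
              [0.02, 0.98]])
mu    = np.array([0.0, 1.0])   # hypothetical mode-dependent means of the observable
sigma = np.array([1.0, 1.0])   # hypothetical mode-dependent standard deviations

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def bayes_update(p_tt, obs):
    """One-step belief update: predict with P, then reweight by each mode's likelihood of the observation."""
    prior = P.T @ p_tt                        # p_{t+1|t} = P' p_{t|t}
    post = prior * normal_pdf(obs, mu, sigma)
    return post / post.sum()                  # p_{t+1|t+1}

# Monte Carlo check of the mean-preserving-spread property
p_tt = np.array([0.6, 0.4])
draws = np.array([bayes_update(p_tt, rng.normal(mu[j], sigma[j]))
                  for j in rng.choice(2, size=20000, p=P.T @ p_tt)])
print(draws.mean(axis=0), P.T @ p_tt)         # the two vectors should be close
```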
4. Bayesian optimal policy (BOP)

- Optimal "experimentation" is incorporated: the policymaker may alter actions to mitigate future uncertainty. More complex numerically.
- Dual Bellman equation as in AOP, but now with the belief-updating equation incorporated in the optimization.
- Because of the nonlinearity of Bayesian updating, the solution is no longer linear in X̃t for given pt|t.
- The dual value function Ṽ(X̃t, pt|t) and the primal value function V(X̃t, pt|t) are no longer quadratic in X̃t for given pt|t.
- BOP is always weakly better than AOP in backward-looking models. This is not necessarily true in forward-looking models: experimentation by the public changes the policymaker's constraints.

  Backward:  V = min_{i∈I} Et [L + δV]
  Forward:   V = max_{γ∈Γ} min_{i∈I} Et [L̃ + δV]
Numerical Methods & Summary of Results

- A suite of programs for the OBS and NL cases is available on my website: very fast, efficient, and adaptable.
- For AOP and BOP, we use Miranda-Fackler collocation methods (CompEcon toolbox).
- Under NL, V(X̃t, pt|t) is not always concave in pt|t.
- AOP is significantly different from NL, but its loss is not necessarily lower. Learning is typically beneficial in backward-looking models, but not always in forward-looking ones: it may be easier to control expectations when agents don't learn. The bond traders may get it (more) right, but that doesn't always improve welfare.
- BOP gives modestly lower loss than AOP.
- Ethical and other issues with BOP relative to AOP: perhaps not much of a practical problem?
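Purely as a generic illustration of the collocation idea (numpy only, not the CompEcon toolbox), the sketch below fits a Chebyshev interpolant at collocation nodes to a hypothetical smooth function of the mode-1 belief p1; it does not iterate on a Bellman equation.

```python
import numpy as np

# Hypothetical smooth function of the mode-1 belief, standing in for a value function V(p1)
target = lambda p1: 40 + 20 * p1 * (1 - p1) + 5 * np.sin(3 * p1)

deg = 8
# Chebyshev collocation nodes mapped from [-1, 1] to the belief interval [0, 1]
nodes = 0.5 * (1 + np.cos((2 * np.arange(deg + 1) + 1) * np.pi / (2 * (deg + 1))))
approx = np.polynomial.Chebyshev.fit(nodes, target(nodes), deg, domain=[0, 1])

grid = np.linspace(0, 1, 201)
max_err = np.max(np.abs(approx(grid) - target(grid)))
print(f"max interpolation error on [0,1]: {max_err:.2e}")
```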
New Keynesian Phillips Curve Examples

  πt = (1 − ωjt) πt−1 + ωjt Et πt+1 + γjt yt + cjt εt

- Assume policymakers directly control the output gap yt.
- Period loss function: Lt = πt² + 0.1 yt²,  δ = 0.98
- Example 1: How forward-looking is inflation? Assume ω1 = 0.2, ω2 = 0.8, so E(ωj) = 0.5. Fix the other parameters: γ = 0.1, c = 0.5.
- Example 2: What is the slope of the Phillips curve? Assume γ1 = 0.05, γ2 = 0.25, so E(γj) = 0.15. Fix the other parameters: ω = 0.5, c = 0.5.
- In both cases, highly persistent modes:

  P = [0.98 0.02; 0.02 0.98]
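As a small illustrative check (not from the paper), the following sketch computes what this P implies: a stationary distribution of (0.5, 0.5), hence E(ωj) = 0.5 and E(γj) = 0.15, and an expected mode duration of 1/(1 − 0.98) = 50 periods.

```python
import numpy as np

P = np.array([[0.98, 0.02],
              [0.02, 0.98]])

# Stationary distribution: eigenvector of P' associated with eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi = pi / pi.sum()

expected_duration = 1.0 / (1.0 - np.diag(P))   # mean duration of each mode, in periods
omega = np.array([0.2, 0.8])                   # Example 1 mode values of omega
print(pi, expected_duration, pi @ omega)       # -> [0.5 0.5], [50. 50.], 0.5
```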
Example 1: Effect of Uncertainty (Constant coefficients vs. OBS)

[Figure: policy functions yt(πt) and losses, comparing OBS mode 1, OBS mode 2, their expectation E(OBS), and the constant-coefficient case.]
Example 1: Value functions

[Figure: losses under NL, AOP, and BOP as functions of the mode-1 probability p1t, for πt = 0, 3.33, and −5.]
Example 1: Loss Differences

[Figure: loss differences BOP−NL and BOP−AOP as functions of p1t.]
Example 1: Optimal Policies

[Figure: AOP and BOP policy functions yt(πt) for p1t = 0.5, 0.11, and 0.89.]
Example 1: Policy Differences: BOP-AOP

[Figure: policy differences BOP−AOP over (πt, p1t), on the order of 10⁻⁹.]
Example 2: Effect of Uncertainty (Constant coefficients vs. OBS)

[Figure: policy functions yt(πt) and losses, comparing OBS mode 1, OBS mode 2, E(OBS), and the constant-coefficient case.]
Example 2: Value functions

[Figure: losses under NL, AOP, and BOP as functions of p1t, for πt = 0, 3, and −2.]
Example 2: Loss Differences

[Figure: loss differences BOP−NL and BOP−AOP as functions of p1t.]
Example 2: Optimal Policies

[Figure: AOP and BOP policy functions yt(πt) for p1t = 0.5, 0.08, and 0.92.]
Example 2: Policy Differences: BOP-AOP

[Figure: policy differences BOP−AOP over (πt, p1t).]
Example 3: Estimated New Keynesian Model

  πt = ωfj Et πt+1 + (1 − ωfj) πt−1 + γj yt + cπj επt
  yt = βfj Et yt+1 + (1 − βfj) [βyj yt−1 + (1 − βyj) yt−2] − βrj (it − Et πt+1) + cyj εyt

The estimated hybrid model is constrained to have one mode backward-looking and one partially forward-looking.

Parameter   Mean     Mode 1   Mode 2
ωf          0.0938   0.3272   0
γ           0.0474   0.0580   0.0432
βf          0.1375   0.4801   0
βr          0.0304   0.0114   0.0380
βy          1.3331   1.5308   1.2538
cπ          0.8966   1.0621   0.8301
cy          0.5572   0.5080   0.5769
Example 3: More Detail

- Estimated transition probabilities: P = [0.9579 0.0421; 0.0169 0.9831]
- Loss function: Lt = πt² + yt² + 0.2 (it − it−1)²,  δ = 1
- Only NL and AOP are feasible to consider. We evaluate them via 1000 simulations of 1000 periods each.
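Loss evaluation by simulation can be sketched as follows (a minimal illustration, not the paper's code); it assumes simulated paths of inflation, the output gap, and the interest rate are already available from some solved policy, and uses placeholder random paths only to make the snippet runnable.

```python
import numpy as np

def mean_period_loss(pi, y, i):
    """Average period loss E[L_t] with L_t = pi_t^2 + y_t^2 + 0.2 (i_t - i_{t-1})^2 (delta = 1)."""
    di = np.diff(i, prepend=i[0])                 # interest-rate change, zero in the first period
    L = pi**2 + y**2 + 0.2 * di**2
    return L.mean()

# Illustration with placeholder paths (in practice: paths from 1000 simulations of 1000 periods each)
rng = np.random.default_rng(2)
losses = [mean_period_loss(rng.standard_normal(1000),
                           rng.standard_normal(1000),
                           rng.standard_normal(1000)) for _ in range(1000)]
print(np.mean(losses))
```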
Example 3: Simulated Impulse Responses

[Figure: responses of π, y, and i to π and y shocks over 50 periods, comparing the constant-coefficient case with the AOP and NL medians.]
Example 3: Simulated Distributions

[Figure: simulated distributions of Eπt², Eyt², Eit², and ELt under AOP and NL.]
Example 3: Representative Simulation

[Figure: a representative 1000-period simulation of inflation, the output gap, the interest rate, and the probability of mode 1, under AOP and NL.]
Conclusion

- The MJLQ framework is a flexible, powerful, yet tractable way of handling model uncertainty and non-certainty-equivalence. It accommodates a large variety of uncertainty configurations and can also incorporate a large variety of CB judgment.
- Extension to forward-looking variables via the recursive saddlepoint method.
- Straightforward to incorporate unobservable modes without learning.
- Adaptive optimal policy is as easy to implement, but harder to evaluate. Bayesian optimal policy is more complex, particularly in forward-looking cases.
- Learning has sizeable effects and may or may not be beneficial. Experimentation seems to have relatively little effect.