By thinking a little harder about the mathematical structure of the Ramsey problem and using some linear algebra, we can simplify the problem that we hand over to a ``machine learning`` algorithm.

We start by recalling that the Ramsey problem chooses $\vec \mu$ to maximize the government's value function {eq}`eq:Ramseyvalue` subject to equation {eq}`eq:inflation101`.

This is actually an optimization problem with a quadratic objective function and linear constraints.

First-order conditions for this problem are a set of simultaneous linear equations in $\vec \mu$.

If we trust that the second-order conditions for a maximum are also satisfied (they are in our problem), we can compute the Ramsey plan by solving these equations for $\vec \mu$.

We'll apply this approach here and compare answers with what we obtained above with the gradient descent approach.
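
Before doing so, here is a small sketch of the mechanics of that recipe: write a quadratic objective as $-\frac{1}{2}\vec\mu^\top Q \vec\mu + \vec b^\top \vec\mu$, so that its first-order conditions are the linear equations $Q \vec\mu = \vec b$, which a linear solver handles in one step. The matrix $Q$ and vector $\vec b$ below are random placeholders rather than the objects implied by {eq}`eq:Ramseyvalue` and {eq}`eq:inflation101`.

```python
import numpy as np

# Sketch: maximize -0.5 μ'Qμ + b'μ by solving the first-order conditions Q μ = b.
# Q and b are random placeholders, not the objects implied by the lecture's model.
T = 5
rng = np.random.default_rng(0)
A = rng.normal(size=(T, T))
Q = A @ A.T + np.eye(T)          # symmetric positive definite, so the objective is concave
b = rng.normal(size=T)

μ_star = np.linalg.solve(Q, b)   # a system of simultaneous linear equations in μ

# at the maximizer, the gradient b - Q μ of the objective is (numerically) zero
print(np.allclose(b - Q @ μ_star, 0))
```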
To help us learn about the structure of the Ramsey plan, we shall compute some least squares linear regressions of particular components of $\vec \theta$ and $\vec \mu$ on others.
Our hope is that these regressions will reveal structure hidden within the $\vec \mu^R, \vec \theta^R$ sequences associated with a Ramsey plan.
It is worth pausing here to think about roles being played by **human** intelligence and **artificial** intelligence.
Artificial intelligence, i.e., some Python code and a computer, is running the regressions for us.
But we are free to regress anything on anything else.
Human intelligence tells us what regressions to run.
Additional inputs of human intelligence will be required fully to appreciate what those regressions reveal about the structure of a Ramsey plan.
```{note}
When we eventually get around to trying to understand the regressions below, it will be worthwhile to study the reasoning that led Chang {cite}`chang1998credible` to choose $\theta_t$ as his key state variable.
```
We notice that $\theta_t$ is less than $\mu_t$ for low $t$'s but that it eventually converges to the same limit $\bar \mu$ that $\mu_t$ does.
This pattern reflects how formula {eq}`eq_grad_old3` makes $\theta_t$ a weighted average of future $\mu_t$'s.
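
To fix ideas, here is a minimal sketch of such a weighted average: it builds a $\theta$ sequence from a $\mu$ sequence using geometric weights on current and future $\mu$'s. The weight parameter $\lambda$, the illustrative $\mu$ path, and the renormalization of the truncated weights are placeholders, not the exact formula {eq}`eq_grad_old3`.

```python
import numpy as np

def weighted_theta(μ_seq, λ=0.85):
    """θ_t as a geometric weighted average of current and future μ's.

    The weight λ and the renormalization of truncated weights are placeholders,
    not the lecture's exact formula."""
    T = len(μ_seq)
    θ = np.empty(T)
    for t in range(T):
        w = (1 - λ) * λ ** np.arange(T - t)
        θ[t] = w @ μ_seq[t:] / w.sum()   # renormalize so the truncated weights sum to one
    return θ

μ_demo = np.linspace(0.10, 0.05, 40)     # a declining μ path, purely illustrative
θ_demo = weighted_theta(μ_demo)
print(θ_demo[:3], θ_demo[-3:])           # early θ's lie below early μ's; the two paths share a limit
```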
We begin by regressing $\mu_t$ on a constant and $\theta_t$.
This might seem strange because, after all, equation {eq}`eq_grad_old3` asserts that inflation at time $t$ is determined by $\{\mu_s\}_{s=t}^\infty$.
Nevertheless, we'll run this regression anyway.
```{code-cell} ipython3
# First regression: μ_t on a constant and θ_t
# one simple way: a degree-1 least-squares fit (np.polyfit returns the slope first)
slope, intercept = np.polyfit(θs, μs, deg=1)
print(intercept, slope)
```

Our regression tells us that along the Ramsey outcome, the linear function

$$
\mu_t = .0645 + 1.5995 \theta_t
$$
fits perfectly.
```{note}
Of course, this means that a regression of $\theta_t$ on $\mu_t$ and a constant would also fit perfectly.
```
Let's plot the regression line $\mu_t = .0645 + 1.5995 \theta_t$ and the points $(\theta_t, \mu_t)$ that lie on it for $t=0, \ldots, T$.
```{code-cell} ipython3
plt.scatter(θs, μs, label=r'$\mu_t$')
plt.plot(θs, 0.0645 + 1.5995 * np.array(θs), label=r'$\mu_t = .0645 + 1.5995 \theta_t$')  # regression line
plt.legend()
plt.show()
```
The time $0$ pair $(\theta_0, \mu_0)$ appears as the point on the upper right.
Points $(\theta_t, \mu_t)$ for succeeding times appear further and further to the lower left and eventually converge to $(\bar \mu, \bar \mu)$.
Next, we'll run a linear regression of $\theta_{t+1}$ against $\theta_t$.
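
Here is a minimal sketch of one way to run that regression, assuming, as in the cells above, that the $\theta$ sequence is stored in the array `θs` and that `numpy` is available as `np`; the coefficient names are ours.

```python
# Regress θ_{t+1} on a constant and θ_t with ordinary least squares.
slope, intercept = np.polyfit(θs[:-1], θs[1:], deg=1)
print(f'θ_t+1 ≈ {intercept:.4f} + {slope:.4f} θ_t')
```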
Points for succeeding times appear further and further to the lower left and eventually converge to $(\bar \mu, \bar \mu)$.
### Continuation Values
Next, we'll compute a sequence $\{v_t\}_{t=0}^T$ of what we'll call "continuation values" along a Ramsey plan.
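
One way to compute such continuation values is by the backward recursion $v_t = s(\theta_t, \mu_t) + \beta v_{t+1}$, where $s$ is the one-period payoff. The sketch below treats the payoff function `s`, the discount factor `β`, and the assumption that $(\theta_t, \mu_t)$ stays constant from $T$ onward as placeholders for the lecture's actual objects.

```python
import numpy as np

def continuation_values(θ_seq, μ_seq, s, β):
    """Backward recursion v_t = s(θ_t, μ_t) + β v_{t+1}.

    s and β are placeholders for the lecture's one-period payoff and discount
    factor; (θ_T, μ_T) is assumed to be repeated forever after time T."""
    T = len(μ_seq) - 1
    v = np.zeros(T + 1)
    v[T] = s(θ_seq[T], μ_seq[T]) / (1 - β)        # value of repeating (θ_T, μ_T) forever
    for t in range(T - 1, -1, -1):
        v[t] = s(θ_seq[t], μ_seq[t]) + β * v[t + 1]
    return v

# e.g. vs = continuation_values(θs, μs, s, β) once s and β are defined
```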
We can also verify approximate equality by inspecting a graph of $v_t$ against $t$ for $t=0, \ldots, T$ along with the value attained by a restricted Ramsey planner $V^{CR}$ and the optimized value of the ordinary Ramsey planner $V^R$.
Figure {numref}`continuation_values` shows several striking patterns:

* The sequence of continuation values $\{v_t\}_{t=0}^T$ is monotonically decreasing
* Evidently, $v_0 > V^{CR} > v_T$ so that
  * the value $v_0$ of the ordinary Ramsey plan exceeds the value $V^{CR}$ of the special Ramsey plan in which the planner is constrained to set $\mu_t = \mu^{CR}$ for all $t$.
  * the continuation value $v_T$ of the ordinary Ramsey plan for $t \geq T$ is constant and is less than the value $V^{CR}$ of the special Ramsey plan in which the planner is constrained to set $\mu_t = \mu^{CR}$ for all $t$.

```{note}
The continuation value $v_T$ is what some researchers call the "value of a Ramsey plan under a timeless perspective." A more descriptive phrase is "the value of the worst continuation Ramsey plan."
```
Next we ask Python to regress $v_t$ against a constant, $\theta_t$, and $\theta_t^2$.
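
Here is a minimal sketch of that regression, assuming the continuation values are stored in an array `vs` (for instance, as produced by the sketch above) and the $\theta$'s in `θs`.

```python
# Quadratic least-squares fit v_t ≈ g0 + g1 θ_t + g2 θ_t^2.
# vs is a placeholder name for the array of continuation values v_0, ..., v_T.
g2, g1, g0 = np.polyfit(θs, vs, deg=2)   # np.polyfit returns the highest power first
print(g0, g1, g2)
```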
The highest continuation value $v_0$ at $t=0$ appears at the peak of the graph.
Subsequent values of $v_t$ for $t \geq 1$ appear to the left and converge monotonically from above to $v_T$ at time $T$.
## What has Machine Learning Taught Us?
Our regressions tell us that along the Ramsey outcome $\vec \mu^R, \vec \theta^R$, the linear function