To help us learn about the structure of the Ramsey plan, we shall compute some least squares linear regressions of particular components of $\vec \theta$ and $\vec \mu$ on others.
Our hope is that these regressions will reveal structure hidden within the $\vec \mu^R, \vec \theta^R$ sequences associated with a Ramsey plan.
It is worth pausing here to think about the roles played by **human** intelligence and **artificial** intelligence.
Artificial intelligence (AI, a.k.a. ML) is running the regressions.
But you can regress anything on anything else.
Human intelligence tells us which regressions to run.
Furthermore, once we have those regressions in hand, considerably more human intelligence is required to appreciate fully what they reveal about the structure of the Ramsey plan.
```{note}
At this point, it is worthwhile to read how Chang {cite}`chang1998credible` chose
$\theta_t$ as his key state variable.
```
Let's first plot $\mu_t$ and $\theta_t$ for $t = 0, \ldots, T$ against $t$ on the same graph, with $t$ on the $x$ axis.

These are the data on which we'll be running our regressions.
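Here is a minimal sketch of such a plot; it assumes that `optimized_μ` and the `compute_θ` function from earlier in the lecture are available, and it recomputes `μs` and `θs` locally so the cell is self-contained.

```{code-cell} ipython3
import numpy as np
import matplotlib.pyplot as plt

# Ramsey paths for μ_t and θ_t (sketch; assumes optimized_μ and compute_θ exist)
μs = np.array(optimized_μ)
θs = np.array(compute_θ(optimized_μ))

plt.plot(np.arange(len(μs)), μs, label=r'$\mu_t$')
plt.plot(np.arange(len(θs)), θs, label=r'$\theta_t$')
plt.xlabel('$t$')
plt.legend()
plt.show()
```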
We begin by regressing $\mu_t$ on $\theta_t$.
This might seem strange because, first of all, equation {eq}`eq_grad_old3` asserts that inflation at time $t$ is determined by $\{\mu_s\}_{s=t}^\infty$.
Nevertheless, we'll run this regression anyway and provide a justification later.
```{code-cell} ipython3
# Compute θ using optimized_μ
θs = np.array(compute_θ(optimized_μ))
μs = np.array(optimized_μ)  # Ramsey path for μ_t computed earlier by gradient descent

# First regression: μ_t on a constant and θ_t
model1 = sm.OLS(μs, sm.add_constant(θs))
results1 = model1.fit()
print("Regression of μ_t on a constant and θ_t:")
print(results1.summary(slim=True))
```
Our regression tells us that along the Ramsey outcome $\vec \mu, \vec \theta$ the linear function
$$
\mu_t = .0645 + 1.5995 \theta_t
$$
fits perfectly.
Let's plot this function and the points $(\theta_t, \mu_t)$ that lie on it for $t=0, \ldots, T$.
```{code-cell} ipython3
plt.scatter(θs, μs)
plt.plot(θs, results1.fittedvalues, color='C1', label=r'$\mu_t = .0645 + 1.5995 \theta_t$')
plt.xlabel(r'$\theta_t$')
plt.ylabel(r'$\mu_t$')
plt.legend()
plt.show()
```
The time $0$ pair $\theta_0, \mu_0$ appears as the point on the upper right.
Points for succeeding times appear further and further to the lower left and eventually converge to
$\bar \mu, \bar \mu$.
Next, we'll run a linear regression of $\theta_{t+1}$ against $\theta_t$.
We'll include a constant.
```{code-cell} ipython3
# Second regression: θ_{t+1} on a constant and θ_t
θ_t = np.array(θs[:-1]) # θ_t
θ_t1 = np.array(θs[1:])  # θ_{t+1}
model2 = sm.OLS(θ_t1, sm.add_constant(θ_t))
results2 = model2.fit()
print("\nRegression of θ_{t+1} on a constant and θ_t:")
print(results2.summary(slim=True))
```
We find that the regression line fits perfectly and thus discover the affine relationship
$$
\theta_{t+1} = - .0645 + .4005 \theta_t
$$
that prevails along the Ramsey outcome for inflation.
Let's plot $\theta_t$ for $t =0, 1, \ldots, T$ along the line.
```{code-cell} ipython3
plt.scatter(θ_t, θ_t1)
plt.plot(θ_t, results2.fittedvalues, color='C1', label=r'$\theta_{t+1} = -.0645 + .4005 \theta_t$')
plt.xlabel(r'$\theta_t$')
plt.ylabel(r'$\theta_{t+1}$')
plt.legend()
plt.tight_layout()
plt.show()
```
Points for succeeding times appear further and further to the lower left and eventually converge to
$\bar \mu, \bar \mu$.
### Continuation values
We first define the following generalization of formula
{eq}`eq:valueformula101` for the value of the Ramsey planner.
Formula {eq}`eq:valueformula101` gives the Ramsey planner's value at time $0$, the time at which the Ramsey planner
chooses the sequence $\vec \mu$ once and for all.
We define the Ramsey planner's **continuation value** $v_s$ at time $s \in [0, \ldots, T-1]$ as the discounted value of the planner's objective from time $s$ onward, evaluated along the Ramsey path.
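As a sketch, if the one-period payoff takes the quadratic form $h_0 + h_1 \theta_t + h_2 \theta_t^2 - \frac{c}{2} \mu_t^2$ (an assumption here about the objective behind formula {eq}`eq:valueformula101`) and $\mu_t = \bar \mu$ for $t \geq T$, the continuation value would be

$$
v_s = \sum_{t=s}^{T-1} \beta^{t-s} \left( h_0 + h_1 \theta_t + h_2 \theta_t^2 - \frac{c}{2} \mu_t^2 \right)
+ \frac{\beta^{T-s}}{1-\beta} \left( h_0 + h_1 \bar \mu + h_2 \bar \mu^2 - \frac{c}{2} \bar \mu^2 \right)
$$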
Now let's run a regression of $v_t$ on a constant, $\theta_t$, and $\theta_t^2$ and see how it fits.
```{code-cell} ipython3
# Compute v_t
# v_ts: continuation values v_t along the Ramsey path (assumed computed here from the definition above)

# Regress v_t on a constant, θ_t and θ_t^2
X = np.column_stack([θs, θs**2])
X_vt = sm.add_constant(X)
model3 = sm.OLS(v_ts, X_vt).fit()
```
Let's print out the regression results, i.e., the estimated coefficients of the quadratic function of $\theta_t$ that we just fit.
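A minimal sketch of such a cell, assuming `model3` holds the fitted results from the cell above:

```{code-cell} ipython3
# Print the estimated coefficients of the regression of v_t on a constant, θ_t and θ_t^2
print("Regression of v_t on a constant, θ_t and θ_t^2:")
print(model3.summary(slim=True))
```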
We discover that the fit is perfect and that continuation values and inflation rates satisfy
the following relationship along a Ramsey outcome path:
$$
v_t = XXX + XXXX \theta_t + XXXX \theta_t^2
$$
Let's plot continuation values as a function of $\theta_t$ for $t =0, 1, \ldots, T$.
```{code-cell} ipython3
plt.figure()
plt.scatter(θs, v_ts)
plt.plot(θs, model3.fittedvalues, color='C1', label='fitted quadratic')
plt.xlabel(r'$\theta_t$')
plt.ylabel('$v_t$')
plt.legend()
plt.show()
```
In this graph, the $(\theta_t, v_t)$ pairs start at the upper right at $t=0$ and move downward along the smooth curve until they converge to $(\bar \mu, v_T)$ at $t=T$.
### What has machine learning taught us?
Assembling our regression findings, we have discovered, by guessing some useful regressions to run on our single Ramsey outcome path $\vec \mu^R, \vec \theta^R$, that along that path the following relationships prevail:
$$
\begin{aligned}
\theta_0 & = \theta_0^R \\
\mu_t & = b_0 + b_1 \theta_t \\
v_t & = g_0 + g_1 \theta_t + g_2 \theta_t^2
\end{aligned}
$$ (eq_old9101)

where the initial value $\theta_0^R$ was computed along with other components of $\vec \mu^R, \vec \theta^R$ when we computed the Ramsey plan, and where $b_0, b_1, g_0, g_1, g_2$ are parameters whose values we estimated with our regressions.
We have discovered this representation by running some carefully chosen regressions and staring at the results, noticing that the $R^2$'s of unity tell us that the fits are perfect.
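As a quick numerical check, here is a sketch that verifies the fitted relationships reproduce the Ramsey path; it assumes `results1`, `model3`, `μs`, `θs`, and `v_ts` from the earlier cells and unpacks their estimated coefficients.

```{code-cell} ipython3
b0, b1 = results1.params       # estimates of b_0, b_1
g0, g1, g2 = model3.params     # estimates of g_0, g_1, g_2

# Both checks should print True along the Ramsey outcome path
print(np.allclose(μs, b0 + b1 * θs))
print(np.allclose(v_ts, g0 + g1 * θs + g2 * θs**2))
```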
We have learned something about the structure of the Ramsey problem, but it is challenging to say more using the ideas that we have deployed in this lecture.
There are many other linear regressions among components of $\vec \mu^R, \vec \theta^R$ that would also have given us perfect fits.
For example, we could have regressed $\theta_t$ on $\mu_t$ and gotten the same $R^2$ value.
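As a quick illustration, here is a sketch of that reverse regression; it assumes the arrays `θs` and `μs` from earlier in the lecture, and the name `model_rev` is purely illustrative.

```{code-cell} ipython3
# Reverse regression: θ_t on a constant and μ_t (sketch)
model_rev = sm.OLS(np.array(θs), sm.add_constant(np.array(μs))).fit()
print(model_rev.summary(slim=True))  # along the Ramsey path this fit is also perfect
```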
Wouldn't that direction of fit have made more sense?
To answer that question, we'll have to deploy more economic theory.
We do that in this quantecon lecture {doc}`calvo`.
There, we'll discover that system {eq}`eq_old9101` is actually a very good way to represent
a Ramsey plan because it reveals many things about its structure.