@@ -1029,8 +1029,7 @@ $\theta_t$ as his key state variable.

We'll begin by simply plotting the Ramsey plan's $\mu_t$ and $\theta_t$ for $t = 0, \ldots, T$ against $t$ in a graph with $t$ on the horizontal axis.

- These are the data that we'll be running some linear least squares regressions on.
-
+ These are the data that we'll be running some linear least squares regressions on.

```{code-cell} ipython3
# Compute θ using optimized_μ
@@ -1135,6 +1134,98 @@ plt.show()
Points for succeeding times appear further and further to the lower left and eventually converge to
$(\bar \mu, \bar \mu)$.

+ Now we proceed to the third regression.
+
+ First we compute the sequence $\{v_t\}_{t=0}^T$ backward, starting from the terminal value
+
+ $$
+ v_T = \frac{1}{1-\beta} s(\bar \mu, \bar \mu),
+ $$
+
+ where $\bar \mu$ is the limiting value to which both $\mu_t$ and $\theta_t$ converge.
+
+ Then, starting from $t = T-1$, we iterate backwards on the recursion
+
+ $$
+ v_t = s(\theta_t, \mu_t) + \beta v_{t+1}
+ $$
+
+ for $t = T-1, \ldots, 0$ to compute the sequence $\{v_t\}_{t=0}^T$.
+
+ ```{code-cell} ipython3
+ # Define the functions s and U from section 41.3
+ def s(θ, μ, u0, u1, u2, α, c):
+     U = lambda x: u0 + u1 * x - (u2 / 2) * x**2
+     return U(-α * θ) - (c / 2) * μ**2
+
+ # Compute the sequence v_t by backward iteration
+ def compute_vt(θ, μ, β, c, u0=1, u1=0.5, u2=3, α=1):
+     T = len(μ)
+     v_t = np.zeros(T)
+     μ_bar = μ[-1]
+
+     # Fix the parameters of s
+     s_p = lambda θ, μ: s(θ, μ,
+                          u0=u0, u1=u1, u2=u2, α=α, c=c)
+
+     # Terminal value v_T
+     v_t[T-1] = (1 / (1 - β)) * s_p(μ_bar, μ_bar)
+
+     # Backward iteration
+     for t in reversed(range(T-1)):
+         v_t[t] = s_p(θ[t], μ[t]) + β * v_t[t+1]
+
+     return v_t
+
+ v_t = compute_vt(θs, μs, β=0.85, c=2)
+ ```
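+
+ As a quick sanity check (a sketch, not part of the original lecture), we can rebuild $v_0$ by summing discounted flow payoffs directly and adding the discounted terminal value, then compare the result with the backward recursion's output; the check assumes the same parameter values used above, $\beta = 0.85$, $c = 2$, and the default $u_0, u_1, u_2, \alpha$.
+
+ ```{code-cell} ipython3
+ # Direct discounted sum of payoffs plus discounted terminal value
+ β_chk, c_chk = 0.85, 2
+ T_len = len(μs)
+ flows = np.array([s(θs[t], μs[t], u0=1, u1=0.5, u2=3, α=1, c=c_chk)
+                   for t in range(T_len - 1)])
+ v0_direct = (flows @ β_chk**np.arange(T_len - 1)
+              + β_chk**(T_len - 1) * v_t[-1])
+
+ # Should agree with v_0 from the backward recursion
+ print(np.allclose(v0_direct, v_t[0]))
+ ```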
+
+ Now we can run the regression
+
+ $$
+ v_t = g_0 + g_1 \theta_t + g_2 \theta_t^2.
+ $$
+
+ ```{code-cell} ipython3
+ # Third regression: v_t on a constant, θ_t and θ^2_t
+ X3_θ = np.column_stack((np.ones(T), θs, θs**2))
+ model3 = sm.OLS(v_t, X3_θ)
+ results3 = model3.fit()
+
+ # Print regression summary
+ print("\nRegression of v_t on a constant, θ_t and θ^2_t:")
+ print(results3.summary(slim=True))
+ ```
+
+ We find that $\theta_t$ and $\theta_t^2$ are highly correlated in our sample:
+
+ ```{code-cell} ipython3
+ np.corrcoef(θs, θs**2)
+ ```
+
+ As a result, the regressor matrix is nearly collinear and its condition number is large, as we can verify directly.
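+
+ Here is a minimal check, using the regressor matrix `X3_θ` constructed above.
+
+ ```{code-cell} ipython3
+ # Condition number of the regressor matrix [1, θ_t, θ_t^2]
+ print(np.linalg.cond(X3_θ))
+ ```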
+
+ +++
+
+ Now we plot $v_t$ against $\theta_t$ together with the fitted values from the regression.
+
+ ```{code-cell} ipython3
+ θ_grid = np.linspace(min(θs), max(θs), 100)
+ X3_grid = np.column_stack((np.ones(len(θ_grid)), θ_grid, θ_grid**2))
+
+ plt.scatter(θs, v_t)
+ plt.plot(θ_grid, results3.predict(X3_grid), color='C1',
+          label=r'$\hat{v}_t$', linestyle='--')
+ plt.xlabel(r'$\theta_{t}$')
+ plt.ylabel(r'$v_t$')
+ plt.legend()
+
+ plt.tight_layout()
+ plt.show()
+ ```
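+
+ The estimated coefficients $\hat g_0, \hat g_1, \hat g_2$ describe $v_t$ as a quadratic function of $\theta_t$ alone. As a rough check (a sketch, not part of the original lecture), we can read them off from `results3` and compare the fitted quadratic at $\theta_0$ with $v_0$ from the backward recursion.
+
+ ```{code-cell} ipython3
+ # Estimated coefficients g_0, g_1, g_2 from the third regression
+ g0_hat, g1_hat, g2_hat = results3.params
+
+ # Fitted quadratic at θ_0 versus v_0 from the recursion
+ print(g0_hat + g1_hat * θs[0] + g2_hat * θs[0]**2, v_t[0])
+ ```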

### What has machine learning taught us?

@@ -1204,7 +1295,6 @@ and the parameters $d_0, d_1$ in the updating rule for $\theta_{t+1}$ in represe

First, we'll again use ``ChangLQ`` to compute these objects (along with a number of others).

-
```{code-cell} ipython3
clq = ChangLQ(β=0.85, c=2, T=T)
```