Commit 85dbae6

Adding a new regression v_t on theta and theta^2
1 parent f41653d commit 85dbae6

File tree

1 file changed: +93 -3 lines changed

lectures/calvo_machine_learn.md

Lines changed: 93 additions & 3 deletions
@@ -1029,8 +1029,7 @@ $\theta_t$ as his key state variable.

We'll begin by simply plotting the Ramsey plan's $\mu_t$ and $\theta_t$ for $t = 0, \ldots, T$ against $t$ in a graph with $t$ on the abscissa axis.

These are the data that we'll be running some linear least squares regressions on.

```{code-cell} ipython3
# Compute θ using optimized_μ
@@ -1135,6 +1134,98 @@ plt.show()

Points for succeeding times appear further and further to the lower left and eventually converge to
$\bar \mu, \bar \mu$.

Now we proceed to the third regression.

First we compute a sequence $\{v_t\}_{t=0}^T$ backward from the terminal value

$$
v_T = \frac{1}{1-\beta} s(\bar \mu, \bar \mu).
$$

Then, starting from $t=T-1$, we iterate backwards on the recursion

$$
v_t = s(\theta_t, \mu_t) + \beta v_{t+1}
$$

for $t = T-1, T-2, \ldots, 0$ to compute the sequence $\{v_t\}_{t=0}^T.$

```{code-cell} ipython3
# Define the functions s and U from section 41.3
def s(θ, μ, u0, u1, u2, α, c):
    U = lambda x: u0 + u1 * x - (u2 / 2) * x**2
    return U(-α*θ) - (c / 2) * μ**2

# Calculate the v_t sequence backward
def compute_vt(θ, μ, β, c, u0=1, u1=0.5, u2=3, α=1):
    T = len(μ)
    v_t = np.zeros(T)
    μ_bar = μ[-1]

    # Reduce parameters
    s_p = lambda θ, μ: s(θ, μ,
                         u0=u0, u1=u1, u2=u2, α=α, c=c)

    # Define v_T
    v_t[T-1] = (1 / (1 - β)) * s_p(μ_bar, μ_bar)

    # Backward iteration
    for t in reversed(range(T-1)):
        v_t[t] = s_p(θ[t], μ[t]) + β * v_t[t+1]

    return v_t

v_t = compute_vt(θs, μs, β=0.85, c=2)
```
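
As a quick sanity check (a minimal sketch, assuming the `μs` array, the `s` function, and the parameter values $\beta = 0.85$, $c = 2$ used in the cell above), we can confirm that the terminal element of `v_t` equals the discounted steady-state payoff $\frac{1}{1-\beta} s(\bar \mu, \bar \mu)$.

```{code-cell} ipython3
# Sanity check: the terminal value should equal the discounted steady-state payoff
# (assumes μs, s, and the parameter values from the cells above)
s_bar = s(μs[-1], μs[-1], u0=1, u1=0.5, u2=3, α=1, c=2)
np.isclose(v_t[-1], s_bar / (1 - 0.85))
```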

Now we can run the regression

$$
v_t = g_0 + g_1 \theta_t + g_2 \theta_t^2 .
$$

```{code-cell} ipython3
# Third regression: v_t on a constant, θ_t and θ^2_t
X3_θ = np.column_stack((np.ones(T), θs, θs**2))
model3 = sm.OLS(v_t, X3_θ)
results3 = model3.fit()

# Print regression summary
print("\nRegression of v_t on a constant, θ_t and θ^2_t:")
print(results3.summary(slim=True))
```
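
For reference, the fitted coefficients $\hat g_0, \hat g_1, \hat g_2$ can also be read off the results object directly (a small sketch using `results3` from the cell above).

```{code-cell} ipython3
# Fitted coefficients (ĝ_0, ĝ_1, ĝ_2) from the quadratic regression
results3.params
```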

**NOTE TO TOM**

We find that $\theta_t$ and $\theta_t^2$ are highly "linearly" correlated:

```{code-cell} ipython3
np.corrcoef(θs, θs**2)
```

So the condition number of the design matrix is large.
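
We can check this directly with `np.linalg.cond` applied to the design matrix `X3_θ` constructed above (a minimal sketch).

```{code-cell} ipython3
# Condition number of the design matrix [1, θ_t, θ_t^2]
np.linalg.cond(X3_θ)
```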

**END OF NOTE TO TOM**

+++

Now we plot $v_t$ against $\theta_t$.

```{code-cell} ipython3
# Evaluate the fitted quadratic on a grid of θ values
θ_grid = np.linspace(min(θs), max(θs), 100)
X3_grid = np.column_stack((np.ones(len(θ_grid)), θ_grid, θ_grid**2))

# Plot the (θ_t, v_t) points and the fitted regression curve
plt.scatter(θs, v_t)
plt.plot(θ_grid, results3.predict(X3_grid), color='C1',
         label=r'$\hat v_t$', linestyle='--')
plt.xlabel(r'$\theta_{t}$')
plt.ylabel(r'$v_t$')
plt.legend()

plt.tight_layout()
plt.show()
```
11391230
### What has machine learning taught us?

@@ -1204,7 +1295,6 @@ and the parameters $d_0, d_1$ in the updating rule for $\theta_{t+1}$ in represe

First, we'll again use ``ChangLQ`` to compute these objects (along with a number of others).

```{code-cell} ipython3
clq = ChangLQ(β=0.85, c=2, T=T)
```
