
Commit 4930e88 (parent 85e36c0)

update graph for v_t and regression data

1 file changed: lectures/calvo_machine_learn.md (35 additions, 13 deletions)
Original file line numberDiff line numberDiff line change
@@ -697,7 +697,8 @@ np.linalg.norm(clq.μ_CR - optimized_μ_CR)
 ```
 
 ```{code-cell} ipython3
-compute_V(optimized_μ_CR, β=0.85, c=2)
+V_CR = compute_V(optimized_μ_CR, β=0.85, c=2)
+V_CR
 ```
 
 ```{code-cell} ipython3
@@ -933,7 +934,8 @@ print(f'deviation = {np.linalg.norm(optimized_μ - clq.μ_series)}')
 ```
 
 ```{code-cell} ipython3
-compute_V(optimized_μ, β=0.85, c=2)
+V_R = compute_V(optimized_μ, β=0.85, c=2)
+V_R
 ```
 
 We find that by exploiting more knowledge about the structure of the problem, we can significantly speed up our computation.
@@ -1039,8 +1041,8 @@ These are the data that we'll be running some linear least squares regressions o
 # Plot the two sequences
 Ts = np.arange(T)
 
-plt.plot(Ts, μs, label=r'$\mu_t$')
-plt.plot(Ts, θs, label=r'$\theta_t$')
+plt.scatter(Ts, μs, label=r'$\mu_t$', alpha=0.7)
+plt.scatter(Ts, θs, label=r'$\theta_t$', alpha=0.7)
 plt.xlabel(r'$t$')
 plt.legend()
 plt.show()
@@ -1079,7 +1081,7 @@ fits perfectly.
 Let's plot this function and the points $(\theta_t, \mu_t)$ that lie on it for $t=0, \ldots, T$.
 
 ```{code-cell} ipython3
-plt.scatter(θs, μs)
+plt.scatter(θs, μs, label=r'$\mu_t$')
 plt.plot(θs, results1.predict(X1_θ), 'C1', label='$\hat \mu_t$', linestyle='--')
 plt.xlabel(r'$\theta_t$')
 plt.ylabel(r'$\mu_t$')
@@ -1121,8 +1123,8 @@ that prevails along the Ramsey outcome for inflation.
 Let's plot $\theta_t$ for $t =0, 1, \ldots, T$ along the line.
 
 ```{code-cell} ipython3
-plt.scatter(θ_t, θ_t1)
-plt.plot(θ_t, results2.predict(X2_θ), color='C1', label='$\hat θ_t$', linestyle='--')
+plt.scatter(θ_t, θ_t1, label=r'$\theta_{t+1}$')
+plt.plot(θ_t, results2.predict(X2_θ), color='C1', label='$\hat θ_{t+1}$', linestyle='--')
 plt.xlabel(r'$\theta_t$')
 plt.ylabel(r'$\theta_{t+1}$')
 plt.legend()
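The fit relabeled in the hunk above is a one-step regression of $\theta_{t+1}$ on a constant and $\theta_t$. A minimal self-contained sketch of that regression, using `np.linalg.lstsq` and a made-up AR(1) path in place of the lecture's `θ_t`, `θ_t1`, and statsmodels `results2` (all numbers below are hypothetical stand-ins):

```python
import numpy as np

# Sketch of the regression θ_{t+1} = γ0 + γ1 θ_t, fit by least squares.
# The coefficients 0.03 and 0.7 are made-up; the lecture fits the θ
# sequence generated by the Ramsey plan, via statsmodels OLS.
θ = np.empty(50)
θ[0] = 0.2
for t in range(49):
    θ[t + 1] = 0.03 + 0.7 * θ[t]           # deterministic AR(1) path

θ_t, θ_t1 = θ[:-1], θ[1:]                  # (θ_t, θ_{t+1}) pairs
X = np.column_stack([np.ones_like(θ_t), θ_t])
(γ0, γ1), *_ = np.linalg.lstsq(X, θ_t1, rcond=None)
# Points on a deterministic line are fit exactly: γ0 ≈ 0.03, γ1 ≈ 0.7
```

Because the synthetic points lie exactly on a line, the recovered intercept and slope match the generating coefficients to machine precision, mirroring the lecture's finding that its regression "fits perfectly."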
@@ -1148,8 +1150,7 @@ $$
 v_t = s(\theta_t, \mu_t) + \beta v_{t+1}
 $$
 
-for $t= T-1, T-2, \ldots, 0.$
-
+for $t= T-1, T-2, \ldots, 0.$
 
 ```{code-cell} ipython3
 # Define function for s and U in section 41.3
@@ -1158,8 +1159,10 @@ def s(θ, μ, u0, u1, u2, α, c):
     return U(-α*θ) - (c / 2) * μ**2
 
 # Calculate v_t sequence backward
-def compute_vt(θ, μ, β, c, u0=1, u1=0.5, u2=3, α=1):
+def compute_vt(μ, β, c, u0=1, u1=0.5, u2=3, α=1):
     T = len(μs)
+    θ = compute_θ(μ, α)
+
     v_t = np.zeros(T)
     μ_bar = μs[-1]
 
@@ -1176,11 +1179,9 @@ def compute_vt(θ, μ, β, c, u0=1, u1=0.5, u2=3, α=1):
 
     return v_t
 
-v_t = compute_vt(θs, μs, β=0.85, c=2)
-print("continuation value sequence = ", v_t)
+v_t = compute_vt(μs, β=0.85, c=2)
 ```
 
-
 The initial continuation value $v_0$ should equal the optimized value of the Ramsey planner's criterion $V$ defined
 in equation {eq}`eq:RamseyV`.
 
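The backward recursion that `compute_vt` implements, $v_t = s(\theta_t, \mu_t) + \beta v_{t+1}$, can be sketched in a self-contained way. Everything below is a stand-in: `compute_θ_sketch` is a hypothetical version of the lecture's `compute_θ` (geometric averaging of future `μ`'s with a constant tail), and the `μ_demo` path is made up.

```python
import numpy as np

def s(θ, μ, u0=1, u1=0.5, u2=3, α=1, c=2):
    # One-period payoff s(θ, μ) = U(-αθ) - (c/2)μ², with quadratic
    # U(x) = u0 + u1 x - (u2/2) x², matching the lecture's definitions
    x = -α * θ
    U = u0 + u1 * x - (u2 / 2) * x**2
    return U - (c / 2) * μ**2

def compute_θ_sketch(μ, α=1, λ=0.85):
    # Hypothetical stand-in for the lecture's compute_θ: θ_t as a
    # geometric average of current and future μ's, tail fixed at μ_bar
    T = len(μ)
    θ = np.empty(T)
    for t in range(T):
        weights = (1 - λ) * λ**np.arange(T - t)
        θ[t] = weights @ μ[t:] + λ**(T - t) * μ[-1]
    return θ

def compute_vt_sketch(μ, β=0.85, c=2):
    # Backward recursion v_t = s(θ_t, μ_t) + β v_{t+1}, with the
    # post-T tail valued as a constant (θ_bar, μ_bar) annuity
    θ = compute_θ_sketch(μ)
    T = len(μ)
    v = np.zeros(T)
    v[-1] = s(θ[-1], μ[-1], c=c) / (1 - β)
    for t in range(T - 2, -1, -1):
        v[t] = s(θ[t], μ[t], c=c) + β * v[t + 1]
    return v

μ_demo = np.linspace(0.1, 0.05, 40)   # made-up monetary growth path
v_demo = compute_vt_sketch(μ_demo)
```

This is a sketch under those assumptions, not the lecture's implementation: the real `compute_vt` relies on the lecture's own `compute_θ`, `U`, and the `μs` series produced by the Ramsey plan.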
@@ -1195,6 +1196,26 @@ Also, please add a graph of $v_t$ against $t$ for $t=0, \ldots, T$.
 
 **End of note to Humphrey**
 
+Indeed, we find that the deviation is very small:
+
+```{code-cell} ipython3
+print(f'deviation = {np.linalg.norm(v_t[0] - V_R)}')
+```
+
+We can also verify this by inspecting a graph of $v_t$ against $t$ for $t=0, \ldots, T$, along with the value $V^{CR}$ attained by a restricted Ramsey planner and the optimized value $V$ of the ordinary Ramsey planner's criterion:
+
+```{code-cell} ipython3
+plt.scatter(Ts, v_t, label='$v_t$')
+plt.axhline(V_R, color='C2', linestyle='--', label='$V$')
+plt.axhline(V_CR, color='C1', linestyle='--', label='$V^{CR}$')
+plt.xlabel(r'$t$')
+plt.ylabel(r'$v_t$')
+plt.legend()
+
+plt.tight_layout()
+plt.show()
+```
+
 Next we ask Python to regress $v_t$ against a constant, $\theta_t$, and $\theta_t^2$.
 
 $$
@@ -1240,6 +1261,7 @@ plt.legend()
 plt.tight_layout()
 plt.show()
 ```
+
 The highest continuation value $v_0$ at $t=0$ appears at the peak of the graph.
 
 Subsequent values of $v_t$ for $t \geq 1$ appear to the left and converge monotonically from above to $v_T$ at time $T$.
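The lecture text around the final hunk regresses $v_t$ on a constant, $\theta_t$, and $\theta_t^2$, and the closing remark locates $v_0$ at the peak of the fitted curve. A minimal sketch of that quadratic fit on made-up data (all values hypothetical; the lecture uses its own $(\theta_t, v_t)$ series and statsmodels):

```python
import numpy as np

# Quadratic regression v = g0 + g1 θ + g2 θ², fit by least squares on
# points that lie exactly on a made-up parabola, so the fit is exact.
θs = np.linspace(-0.5, 0.5, 21)
v = 2.0 + 1.5 * θs - 4.0 * θs**2           # hypothetical quadratic values
X = np.column_stack([np.ones_like(θs), θs, θs**2])
(g0, g1, g2), *_ = np.linalg.lstsq(X, v, rcond=None)

# The fitted parabola peaks at θ* = -g1 / (2 g2)
θ_star = -g1 / (2 * g2)                    # = 0.1875 for these stand-ins
```

With $g_2 < 0$ the fitted curve is concave, so the vertex $\theta^* = -g_1 / (2 g_2)$ is a maximum, which is the geometric fact behind "the highest continuation value appears at the peak of the graph."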
