@@ -697,7 +697,8 @@ np.linalg.norm(clq.μ_CR - optimized_μ_CR)
```

```{code-cell} ipython3
- compute_V(optimized_μ_CR, β=0.85, c=2)
+ V_CR = compute_V(optimized_μ_CR, β=0.85, c=2)
+ V_CR
```

```{code-cell} ipython3
@@ -933,7 +934,8 @@ print(f'deviation = {np.linalg.norm(optimized_μ - clq.μ_series)}')
```

```{code-cell} ipython3
- compute_V(optimized_μ, β=0.85, c=2)
+ V_R = compute_V(optimized_μ, β=0.85, c=2)
+ V_R
```

We find that by exploiting more knowledge about the structure of the problem, we can significantly speed up our computation.
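The quantity stored in `V_R` above is the Ramsey planner's discounted criterion. As a minimal sketch of how such a criterion can be evaluated, the helper below sums discounted per-period payoffs; the tail treatment and the payoff sequence are illustrative assumptions for this sketch, not the lecture's actual `compute_V`:

```python
import numpy as np

def compute_V_sketch(s_seq, β=0.85):
    """Discounted criterion Σ_t β^t s_t for a finite payoff sequence.

    Assumption (hypothetical): the final payoff is treated as permanent,
    so the tail contributes β^(T-1) * s_{T-1} / (1 - β).
    """
    T = len(s_seq)
    discounts = β ** np.arange(T)
    head = discounts[:-1] @ s_seq[:-1]          # payoffs before the terminal date
    tail = discounts[-1] * s_seq[-1] / (1 - β)  # assumed permanent terminal payoff
    return head + tail

# Sanity check: a constant payoff of 1 is worth 1 / (1 - β)
V_const = compute_V_sketch(np.ones(10), β=0.85)
```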
@@ -1039,8 +1041,8 @@ These are the data that we'll be running some linear least squares regressions on
# Plot the two sequences
Ts = np.arange(T)

- plt.plot(Ts, μs, label=r'$\mu_t$')
- plt.plot(Ts, θs, label=r'$\theta_t$')
+ plt.scatter(Ts, μs, label=r'$\mu_t$', alpha=0.7)
+ plt.scatter(Ts, θs, label=r'$\theta_t$', alpha=0.7)
plt.xlabel(r'$t$')
plt.legend()
plt.show()
@@ -1079,7 +1081,7 @@ fits perfectly.
Let's plot this function and the points $(\theta_t, \mu_t)$ that lie on it for $t=0, \ldots, T$.

```{code-cell} ipython3
- plt.scatter(θs, μs)
+ plt.scatter(θs, μs, label=r'$\mu_t$')
plt.plot(θs, results1.predict(X1_θ), 'C1', label='$\hat \mu_t$', linestyle='--')
plt.xlabel(r'$\theta_t$')
plt.ylabel(r'$\mu_t$')
@@ -1121,8 +1123,8 @@ that prevails along the Ramsey outcome for inflation.
Let's plot $\theta_t$ for $t=0, 1, \ldots, T$ along the line.

```{code-cell} ipython3
- plt.scatter(θ_t, θ_t1)
- plt.plot(θ_t, results2.predict(X2_θ), color='C1', label='$\hat θ_t$', linestyle='--')
+ plt.scatter(θ_t, θ_t1, label=r'$\theta_{t+1}$')
+ plt.plot(θ_t, results2.predict(X2_θ), color='C1', label='$\hat θ_{t+1}$', linestyle='--')
plt.xlabel(r'$\theta_t$')
plt.ylabel(r'$\theta_{t+1}$')
plt.legend()
@@ -1148,8 +1150,7 @@
v_t = s(\theta_t, \mu_t) + \beta v_{t+1}
$$

- for $t= T-1, T-2, \ldots, 0.$
-
+ for $t= T-1, T-2, \ldots, 0.$

```{code-cell} ipython3
# Define function for s and U in section 41.3
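The backward recursion $v_t = s(\theta_t, \mu_t) + \beta v_{t+1}$ above can be sketched in isolation. Everything here is illustrative: the quadratic form of `U`, the steady-state terminal condition, and the input sequences are assumptions for the sketch, not taken from the lecture's code:

```python
import numpy as np

def U(x, u0=1, u1=0.5, u2=3):
    # Assumed quadratic utility, consistent with the u0, u1, u2 parameters
    return u0 + u1 * x - (u2 / 2) * x**2

def s(θ, μ, α=1, c=2):
    # Per-period payoff, as in the surrounding code
    return U(-α * θ) - (c / 2) * μ**2

def backward_v(θs, μs, β=0.85):
    # v_t = s(θ_t, μ_t) + β v_{t+1} for t = T-1, ..., 0,
    # seeded with an assumed steady-state terminal value
    T = len(μs)
    v = np.zeros(T)
    v[-1] = s(θs[-1], μs[-1]) / (1 - β)
    for t in range(T - 2, -1, -1):
        v[t] = s(θs[t], μs[t]) + β * v[t + 1]
    return v

θs = np.linspace(0.5, 0.1, 10)  # illustrative sequences, not the lecture's data
μs = np.linspace(0.5, 0.1, 10)
v = backward_v(θs, μs)
```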
@@ -1158,8 +1159,10 @@ def s(θ, μ, u0, u1, u2, α, c):
    return U(-α*θ) - (c / 2) * μ**2

# Calculate v_t sequence backward
- def compute_vt(θ, μ, β, c, u0=1, u1=0.5, u2=3, α=1):
+ def compute_vt(μ, β, c, u0=1, u1=0.5, u2=3, α=1):
    T = len(μs)
+     θ = compute_θ(μ, α)
+

    v_t = np.zeros(T)
    μ_bar = μs[-1]
@@ -1176,11 +1179,9 @@ def compute_vt(θ, μ, β, c, u0=1, u1=0.5, u2=3, α=1):

    return v_t

- v_t = compute_vt(θs, μs, β=0.85, c=2)
- print("continuation value sequence = ", v_t)
+ v_t = compute_vt(μs, β=0.85, c=2)
```

-
The initial continuation value $v_0$ should equal the optimized value of the Ramsey planner's criterion $V$ defined
in equation {eq}`eq:RamseyV`.

@@ -1195,6 +1196,26 @@ Also, please add a graph of $v_t$ against $t$ for $t=0, \ldots, T$.

**End of note to Humphrey**

+ Indeed, we find that the deviation is very small:
+
+ ```{code-cell} ipython3
+ print(f'deviation = {np.linalg.norm(v_t[0] - V_R)}')
+ ```
+
+ We can also verify this by inspecting a graph of $v_t$ against $t$ for $t=0, \ldots, T$, along with the value $V^{CR}$ attained by a restricted Ramsey planner and the optimized value $V$ of the ordinary Ramsey planner's criterion.
+
+ ```{code-cell} ipython3
+ plt.scatter(Ts, v_t, label='$v_t$')
+ plt.axhline(V_R, color='C2', linestyle='--', label='$V$')
+ plt.axhline(V_CR, color='C1', linestyle='--', label='$V^{CR}$')
+ plt.xlabel(r'$t$')
+ plt.ylabel(r'$v_t$')
+ plt.legend()
+
+ plt.tight_layout()
+ plt.show()
+ ```
+

Next we ask Python to regress $v_t$ against a constant, $\theta_t$, and $\theta_t^2$.

$$
@@ -1240,6 +1261,7 @@ plt.legend()
plt.tight_layout()
plt.show()
```
+
The highest continuation value $v_0$ at $t=0$ appears at the peak of the graph.

Subsequent values of $v_t$ for $t \geq 1$ appear to the left and converge monotonically from above to $v_T$ at time $T$.