Commit 224caef

Tom's July 18 edits of calvoML lecture

1 parent 9092422 commit 224caef

3 files changed: +122 −1035 lines

lectures/_toc.yml

Lines changed: 0 additions & 1 deletion

@@ -67,7 +67,6 @@ parts:
   - file: dyn_stack
   - file: calvo
   - file: calvo_gradient
-  - file: calvo_gradient_old
   - file: opt_tax_recur
   - file: amss
   - file: amss2

lectures/calvo_gradient.md

Lines changed: 122 additions & 21 deletions
@@ -362,7 +362,7 @@ $$
 \tilde V = \sum_{t=0}^\infty \beta^t (
 h_0 + h_1 \tilde\theta_t + h_2 \tilde\theta_t^2 -
 \frac{c}{2} \mu_t^2 )
-$$
+$$ (eq:valueformula101)

 or more precisely
@@ -1008,28 +1008,43 @@ print(f'deviation = {np.linalg.norm(closed_grad - (- grad_J(jnp.ones(T))))}')

 ## Informative regressions

-In the interest of looking for some parameters that might help us learn about the structure of
-the Ramsey plan, we shall compute some least squares linear regressions of particular components of $\vec \theta$ and $\vec \mu$ on others.
+To help us learn about the structure of the Ramsey plan, we shall compute some least squares linear regressions of particular components of $\vec \theta$ and $\vec \mu$ on others.

-These regressions will reveal structure that is hidden within the $\vec \mu^R, \vec \theta^R$ sequences associated with the Ramsey plan.
+Our hope is that these regressions will reveal structure hidden within the $\vec \mu^R, \vec \theta^R$ sequences associated with a Ramsey plan.

-It is worth pausing here and noting the roles played by human intelligence and artificial intelligence (ML) here.
+It is worth pausing here to think about the roles played by **human** intelligence and **artificial** intelligence here.

-AI (a.k.a. ML) is running the regressions for us.
+Artificial intelligence (AI a.k.a. ML) is running the regressions.

 But you can regress anything on anything else.

-Human intelligence is telling us which regressions to run.
+Human intelligence tells us which regressions to run.

-And when we have those regressions in hand, considerably more human intelligence is required fully to
-appreciate what they reveal about the structure of the Ramsey plan.
+Furthermore, once we have those regressions in hand, considerably more human intelligence is required to appreciate fully what they reveal about the structure of the Ramsey plan.

 ```{note}
-At this point, an advanced reader might want to read Chang {cite}`chang1998credible` and think about why he Chang takes
-$\theta_t$ as a key state variable.
+At this point, it is worthwhile to read how Chang {cite}`chang1998credible` chose
+$\theta_t$ as his key state variable.
 ```

+**REQUEST FOR HUMPHREY, JULY 18**
+
+Please simply plot $\mu_t$ and $\theta_t$ for $t = 0, \ldots, T$ against $t$ in the same graph with
+$t$ on the $x$ axis. These are the data that we'll be running the regressions on. (A possible sketch follows below.)
+
+**END OF REQUEST FOR HUMPHREY, JULY 18**
+
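A minimal sketch of the requested plot, assuming the arrays `μs` and `θs` (computed from `optimized_μ`, as in the nearby cells) and the lecture's usual `numpy`/`matplotlib` imports:

```{code-cell} ipython3
# Sketch: plot μ_t and θ_t against t on the same axes.
# Assumes μs = np.array(optimized_μ) and θs = np.array(compute_θ(optimized_μ)).
plt.plot(np.arange(len(μs)), μs, label=r'$\mu_t$')
plt.plot(np.arange(len(θs)), θs, label=r'$\theta_t$')
plt.xlabel(r'$t$')
plt.legend()
plt.show()
```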
+We begin by regressing $\mu_t$ on $\theta_t$.
+
+This might seem strange because, first of all, equation {eq}`eq_grad_old3` asserts that inflation at time $t$ is determined by $\{\mu_s\}_{s=t}^\infty$.
+
+Nevertheless, we'll run this regression anyway and provide a justification later.
+
 ```{code-cell} ipython3
 # Compute θ using optimized_μ
 θs = np.array(compute_θ(optimized_μ))
@@ -1044,6 +1059,18 @@ results1 = model1.fit()
 print("Regression of μ_t on a constant and θ_t:")
 print(results1.summary(slim=True))
 ```
+Our regression tells us that along the Ramsey outcome $\vec \mu, \vec \theta$, the linear function
+
+$$
+\mu_t = .0645 + 1.5995 \theta_t
+$$
+
+fits perfectly.
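One quick way to confirm that claimed perfect fit (a sketch, assuming `results1` from the cell above):

```{code-cell} ipython3
# The R^2 of the regression of μ_t on a constant and θ_t.
# Assumes results1 from the previous cell.
print(f'R^2 = {results1.rsquared:.6f}')
```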
+
+Let's plot this function and the points $(\theta_t, \mu_t)$ that lie on it for $t = 0, \ldots, T$.
+

 ```{code-cell} ipython3
 plt.scatter(θs, μs)
@@ -1054,6 +1081,15 @@ plt.legend()
 plt.show()
 ```

+The time $0$ pair $(\theta_0, \mu_0)$ appears as the point on the upper right.
+
+Points for succeeding times appear further and further to the lower left and eventually converge to
+$(\bar \mu, \bar \mu)$.
+
+Next, we'll run a linear regression of $\theta_{t+1}$ against $\theta_t$.
+
+We'll include a constant.
+
 ```{code-cell} ipython3
 # Second regression: θ_{t+1} on a constant and θ_t
 θ_t = np.array(θs[:-1])  # θ_t
@@ -1066,6 +1102,15 @@ results2 = model2.fit()
 print("\nRegression of θ_{t+1} on a constant and θ_t:")
 print(results2.summary(slim=True))
 ```
+We find that the regression line fits perfectly and thus discover the affine relationship
+
+$$
+\theta_{t+1} = - .0645 + .4005 \theta_t
+$$
+
+that prevails along the Ramsey outcome for inflation.
+
+Let's plot $\theta_t$ for $t = 0, 1, \ldots, T$ along the line.

 ```{code-cell} ipython3
 plt.scatter(θ_t, θ_t1)
@@ -1078,12 +1123,32 @@ plt.tight_layout()
 plt.show()
 ```

-Now to learn about the structure of the optimal value $V$ as a function of $\vec \mu, \vec \theta$,
-we'll run some more regressions.
+Points for succeeding times appear further and further to the lower left and eventually converge to
+$(\bar \mu, \bar \mu)$.
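A quick numerical check of that convergence claim (a sketch, assuming `results2` from the regression above): the estimated line has fixed point $d_0/(1 - d_1)$.

```{code-cell} ipython3
# Fixed point of θ_{t+1} = d0 + d1 θ_t is θ̄ = d0 / (1 - d1).
# Assumes results2 from the regression cell above; compare with μ̄.
d0, d1 = results2.params
print(f'implied limit point: {d0 / (1 - d1):.4f}')
```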
+
+### Continuation values
+
+We first define the following generalization of formula
+{eq}`eq:valueformula101` for the value of the Ramsey planner.
+
+Formula {eq}`eq:valueformula101` gives the Ramsey planner's value at time $0$, the time at which the Ramsey planner
+chooses the sequence $\vec \mu$ once and for all.
+
+We define the Ramsey planner's **continuation value** at time $t \in \{0, \ldots, T-1\}$ as
+
+$$
+v_t = \sum_{s=t}^{T-1} \beta^{s-t} (h_0 + h_1 \tilde\theta_s + h_2 \tilde\theta_s^2 -
+\frac{c}{2} \mu_s^2 ) + \frac{\beta^{T-t}}{1-\beta} (h_0 + h_1 \bar \mu + h_2 \bar \mu^2 - \frac{c}{2} \bar \mu^2 )
+$$
+
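A minimal sketch of this formula in code, taking $\tilde\theta_s = \theta_s$ along the computed path and assuming arrays `θs`, `μs` of length $T$ plus scalars `h0, h1, h2, c, β, μ_bar`; the helper name `continuation_values` is ours, and the lecture's modified `compute_V_t` below is the version actually used:

```{code-cell} ipython3
import numpy as np

def continuation_values(θs, μs, h0, h1, h2, c, β, μ_bar):
    T = len(μs)
    # Period payoffs h_0 + h_1 θ_s + h_2 θ_s^2 - (c/2) μ_s^2 for s = 0, ..., T-1
    r = h0 + h1 * θs + h2 * θs**2 - 0.5 * c * μs**2
    # Tail value from staying at μ̄ forever, discounted back to time T
    tail = (h0 + h1 * μ_bar + h2 * μ_bar**2 - 0.5 * c * μ_bar**2) / (1 - β)
    v = np.empty(T)
    for t in range(T):
        v[t] = β**np.arange(T - t) @ r[t:] + β**(T - t) * tail
    return v
```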
+To learn about the structure of the continuation values and how they relate to $\theta_t$,
+we'll run regressions.

 +++

-First, we modified the function `compute_V_t` to return a sequence of $\vec v_t$.
+First, we modify the function `compute_V_t` to return a sequence of continuation values $\vec v_t$.

 ```{code-cell} ipython3
 def compute_V_t(μ, β, c, α=1, u0=1, u1=0.5, u2=3):
@@ -1104,6 +1169,7 @@ def compute_V_t(μ, β, c, α=1, u0=1, u1=0.5, u2=3):

     return V_t
 ```
+Now let's run a regression of $v_t$ on a constant, $\theta_t$, and $\theta_t^2$ and see how it fits.

 ```{code-cell} ipython3
 # Compute v_t
@@ -1130,6 +1196,23 @@ X_vt = sm.add_constant(X)
 model3 = sm.OLS(v_ts, X_vt).fit()
 ```

+**REQUEST FOR HUMPHREY**
+
+Please write out a cell to print out the regression results -- i.e., the quadratic affine function coefficients, as you did earlier. (A possible sketch follows below.)
+
+**END OF REQUEST FOR HUMPHREY**
+
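A possible cell answering this request, assuming `model3` from the cell above and using the same slim-summary style as the earlier regressions:

```{code-cell} ipython3
# Print the fitted quadratic: v_t regressed on a constant, θ_t, and θ_t^2.
# Assumes model3 from the previous cell.
print("Regression of v_t on a constant, θ_t and θ_t^2:")
print(model3.summary(slim=True))
```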
+We discover that the fit is perfect and that continuation values and inflation rates satisfy
+the following relationship along a Ramsey outcome path:
+
+$$
+v_t = XXX + XXXX \theta_t + XXXX \theta_t^2
+$$
+
+Let's plot continuation values as a function of $\theta_t$ for $t = 0, 1, \ldots, T$.

 ```{code-cell} ipython3
 plt.figure()
 plt.scatter(θs, v_ts)
@@ -1139,9 +1222,13 @@ plt.ylabel('$v_t$')
 plt.legend()
 plt.show()
 ```
+In this graph, $\theta_t, v_t$ pairs start at the upper right at $t=0$ and move downward along the smooth curve until they converge to $\bar \mu, v_T$ at $t=T$.
+
+### What has machine learning taught us?

-Using a different and more structured computational strategy, this quantecon lecture {doc}`calvo` represented
-a Ramsey plan recursively via the following system of linear equations:
+Assembling our regression findings, we have discovered, by guessing some useful regressions to
+run for our single Ramsey outcome path $\vec \mu^R, \vec \theta^R$, that along that path
+the following relationships prevail:

@@ -1151,14 +1238,28 @@ a Ramsey plan recursively via the following system of linear equations:
 \begin{aligned}
 \theta_0 & = \theta_0^R \\
 \mu_t & = b_0 + b_1 \theta_t \\
+\theta_{t+1} & = d_0 + d_1 \theta_t \\
 v_t & = g_0 +g_1\theta_t + g_2 \theta_t^2 \\
-\theta_{t+1} & = d_0 + d_1 \theta_t , \quad d_0 >0, d_1 \in (0,1) \\
 \end{aligned}
 ```

-where $b_0, b_1, g_0, g_1, g_2$ were positive parameters that the lecture computed with Python code.
+where the initial value $\theta_0^R$ was computed along with other components of $\vec \mu^R, \vec \theta^R$ when we computed the Ramsey plan, and where $b_0, b_1, d_0, d_1, g_0, g_1, g_2$ are parameters whose values we estimated with our regressions.
+
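As a sanity check on these relationships, here is a sketch that regenerates the path from the recursive representation and compares it with the Ramsey outcome; it assumes `results1`, `results2`, `θs`, and `μs` from the cells above.

```{code-cell} ipython3
# Regenerate (θ_t, μ_t) from the recursive representation and compare
# with the Ramsey outcome path. Assumes results1, results2, θs, μs above.
b0, b1 = results1.params
d0, d1 = results2.params
θ_sim = np.empty_like(θs)
θ_sim[0] = θs[0]                       # θ_0^R from the Ramsey plan
for t in range(len(θs) - 1):
    θ_sim[t + 1] = d0 + d1 * θ_sim[t]  # θ_{t+1} = d_0 + d_1 θ_t
μ_sim = b0 + b1 * θ_sim                # μ_t = b_0 + b_1 θ_t
print(f'max |θ error| = {np.max(np.abs(θ_sim - θs)):.2e}')
print(f'max |μ error| = {np.max(np.abs(μ_sim - μs)):.2e}')
```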
+We have discovered this representation by running some carefully chosen regressions and staring at the results, noticing that the $R^2$ values of unity tell us that the fits are perfect.
+
+We have learned something about the structure of the Ramsey problem, but it is challenging to say more using the ideas that we have deployed in this lecture.
+
+There are many other linear regressions among components of $\vec \mu^R, \vec \theta^R$ that would also have given us perfect fits.
+
+For example, we could have regressed $\theta_t$ on $\mu_t$ and gotten the same $R^2$ value.
+
+Wouldn't that direction of fit have made more sense?
+
+To answer that question, we'll have to deploy more economic theory.
+
+We do that in the quantecon lecture {doc}`calvo`.

-By running regressions on the outcomes $\vec \mu^R, \vec \theta^R$ that we have computed with the brute force gradient descent method in this lecture, we have recovered the same representation.
+There, we'll discover that system {eq}`eq_old9101` is actually a very good way to represent
+a Ramsey plan because it reveals many things about its structure.

-However, in this lecture we have discovered the representation partly by brute force -- i.e.,
-just by running some well chosen regressions and staring at the results, noticing that the $R^2$ of unity tell us that the fits are perfect.
