
Commit e92331d: update notes

1 parent aee6065

File tree: 1 file changed, +72 -25 lines changed


lectures/calvo_gradient.md

Lines changed: 72 additions & 25 deletions
@@ -609,7 +609,7 @@ compute_V(clq.μ_series, β=0.85, c=2)
 ### Some regressions
 
 In the interest of looking for some parameters that might help us learn about the structure of
-the Ramsey plan, we shall run some least squares linear regressions of various components of $\vec \theta$ and $\vec \mu$ on others.
+the Ramsey plan, we shall run some least squares linear regressions of various components of $\vec \theta$ and $\vec \mu$ on others.
 
 ```{code-cell} ipython3
 # Compute θ using optimized_μ
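
To make such a regression concrete, here is a minimal least-squares sketch. It assumes `optimized_μ` (referenced in the comment above) and the matrix `B` defined later in the lecture are in scope; the regression specification itself is just an illustration:

```python
import numpy as np

# Assumed inputs: optimized_μ (solver output) and B (θ = B μ, defined later)
θ = np.asarray(B @ optimized_μ)   # inflation path implied by the money-growth path
μ = np.asarray(optimized_μ)

# OLS regression of θ_t on a constant and μ_t
X = np.column_stack([np.ones_like(μ), μ])
coef, *_ = np.linalg.lstsq(X, θ, rcond=None)
print(f"θ_t ≈ {coef[0]:.4f} + {coef[1]:.4f} μ_t")
```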
@@ -662,8 +662,6 @@ plt.show()
 Now to learn about the structure of the optimal value $V$ as a function of $\vec \mu, \vec \theta$,
 we'll run some more regressions.
 
-
-
 +++
 
 First, we modified the function `compute_V_t` to return a sequence of $\vec v_t$.
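
The modified `compute_V_t` itself is not shown in this hunk. A plausible sketch, assuming the period payoff $h_0 + h_1 \theta_t + h_2 \theta_t^2 - \frac{c}{2}\mu_t^2$ used elsewhere in the lecture and the convention that $\theta_t$ and $\mu_t$ are constant after $T$:

```python
import numpy as np

def compute_V_t(θ, μ, β, c, h0, h1, h2):
    # Period payoff r_t = h0 + h1 θ_t + h2 θ_t² - (c/2) μ_t²
    r = h0 + h1 * θ + h2 * θ**2 - 0.5 * c * μ**2
    v = np.zeros_like(r)
    # Tail: θ and μ are constant from T on, so v_T = r_T / (1 - β)
    v[-1] = r[-1] / (1 - β)
    # Continuation values satisfy v_t = r_t + β v_{t+1}
    for t in range(len(r) - 2, -1, -1):
        v[t] = r[t] + β * v[t + 1]
    return v
```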
@@ -723,7 +721,6 @@ plt.legend()
 plt.show()
 ```
 
-
 Using a different and more structured computational strategy, the quantecon lecture {doc}`calvo` represented
 a Ramsey plan recursively via the following system of linear equations:
 
@@ -757,11 +754,7 @@ First, recall that a Ramsey planner chooses $\vec \mu$ to maximize the governmen
 We now define a distinct problem in which the planner chooses $\vec \mu$ to maximize the government's value function {eq}`eq:Ramseyvalue` subject to equation {eq}`eq:inflation101` and
 the additional restriction that $\mu_t = \bar \mu$ for all $t$.
 
-The solution of this problem is a single $\mu$ that the quantecon lecture {doc}`calvo` calls $\mu^{CR}$.
-
-+++
-
-
+The solution of this problem is a single $\mu$ that the quantecon lecture {doc}`calvo` calls $\mu^{CR}$.
 
 ```{code-cell} ipython3
 # Initial guess for single μ
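
A sketch of this constrained problem: hold $\mu_t = \bar \mu$ fixed across $t$ and maximize over the single scalar. It assumes the lecture's `compute_V` accepts a constant sequence; the horizon `T` is an illustrative choice:

```python
import numpy as np
from scipy.optimize import minimize_scalar

T = 40                                        # illustrative horizon

def neg_V_constant(μ_bar):
    μ_vec = np.full(T + 1, μ_bar)             # impose μ_t = μ̄ for all t
    return -compute_V(μ_vec, β=0.85, c=2)     # compute_V as used earlier

res = minimize_scalar(neg_V_constant)
print(f"μ^CR ≈ {res.x:.6f}")
```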
@@ -864,7 +857,7 @@ $$
 B = (1-\lambda) A^{-1}
 $$
 
-Let's check this equation by using it and then comparing outcomes with our earlier results.
+Let's check this equation by using it and then comparing outcomes with our earlier results.
 
 ```{code-cell} ipython3
 λ = clq.α / (1 + clq.α)
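
Since $A = I - \lambda S$, where $S$ is the first superdiagonal shift and $S^{T+1} = 0$, we have $A^{-1} = \sum_{k=0}^{T} \lambda^k S^k$ exactly, so $B_{ij} = (1-\lambda)\lambda^{\,j-i}$ for $j \ge i$ and zero otherwise. A small numerical confirmation, with illustrative values of $\lambda$ and $T$:

```python
import numpy as np

λ, T = 0.5, 6                                  # illustrative values
A = np.eye(T + 1) - λ * np.eye(T + 1, k=1)
B = (1 - λ) * np.linalg.inv(A)

# Entry-by-entry match with the geometric closed form
expected = np.array([[(1 - λ) * λ**(j - i) if j >= i else 0.0
                      for j in range(T + 1)] for i in range(T + 1)])
print(np.allclose(B, expected))                # True
```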
@@ -951,6 +944,66 @@ in the code.**
 
 **Response note to Humphrey** Shouldn't it instead be $ \vec \beta \cdot \beta \cdot (\vec \mu @ B)^T(\vec \mu @ B)$?
 
+**Response note to Tom**: Thanks so much for pointing this out, you are right! That is what is in my code; sorry for the typo. I think in every case the discount factors can be collected into the single vector $\vec{\beta}$. Perhaps we can just define:
+
+$$
+\vec{\beta} = \begin{bmatrix} 1 \\ \beta \\ \vdots \\ \beta^{T-1} \\ \frac{\beta^{T}}{1-\beta} \end{bmatrix}
+$$
+
+Then we have:
+
+$$
+\sum_{t=0}^\infty \beta^t \theta_t = \vec{\beta}^\top (B \vec{\mu})
+$$
+
+and
+
+$$
+\sum_{t=0}^\infty \beta^t \theta_t^2 = (B \vec{\mu})^\top \operatorname{diag}(\vec{\beta}) \, (B \vec{\mu})
+$$
+
+and
+
+$$
+\sum_{t=0}^\infty \beta^t \mu_t^2 = \vec{\mu}^\top \operatorname{diag}(\vec{\beta}) \, \vec{\mu}
+$$
+
+It follows that
+
+$$
+\begin{aligned}
+J = V - \frac{h_0}{1-\beta} &= \sum_{t=0}^\infty \beta^t \left( h_1 \theta_t + h_2 \theta_t^2 - \frac{c}{2} \mu_t^2 \right) \\
+&= h_1 \vec{\beta}^\top B \vec{\mu} + h_2 \vec{\mu}^\top B^\top \operatorname{diag}(\vec{\beta}) B \vec{\mu} - \frac{c}{2} \vec{\mu}^\top \operatorname{diag}(\vec{\beta}) \vec{\mu}
+\end{aligned}
+$$
+
+So
+
+$$
+\frac{\partial}{\partial \vec{\mu}} \left( h_1 \vec{\beta}^\top B \vec{\mu} \right) = h_1 B^\top \vec{\beta}
+$$
+
+$$
+\frac{\partial}{\partial \vec{\mu}} \left( h_2 \vec{\mu}^\top M \vec{\mu} \right) = 2 h_2 M \vec{\mu}, \quad \text{where } M = B^\top \operatorname{diag}(\vec{\beta}) B
+$$
+
+$$
+\frac{\partial}{\partial \vec{\mu}} \left( -\frac{c}{2} \vec{\mu}^\top \operatorname{diag}(\vec{\beta}) \vec{\mu} \right) = -c \operatorname{diag}(\vec{\beta}) \vec{\mu}
+$$
+
+It follows that
+
+$$
+\frac{\partial J}{\partial \vec{\mu}} = h_1 B^\top \vec{\beta} + 2 h_2 B^\top \operatorname{diag}(\vec{\beta}) B \vec{\mu} - c \operatorname{diag}(\vec{\beta}) \vec{\mu}
+$$
+
+But I think it is safe to ask `JAX` to compute the gradient of $J$ with respect to $\vec \mu$, so we can avoid the manual computation above.
+
+Please kindly let me know your thoughts.
+
+**End of Humphrey's note**
+
 and
 
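
The closed form above is straightforward to spot-check against `jax.grad`. Below is a self-contained sketch with illustrative parameter values (not the lecture's calibration); it treats $J$ directly as a function of $\vec \mu$ and omits the tail adjustment that `compute_J` applies to the last entry of $\vec \mu$:

```python
import jax.numpy as jnp
from jax import grad

# Illustrative values; h1, h2, c, λ, β stand in for the lecture's calibrated ones
T, λ, β, h1, h2, c = 5, 0.5, 0.85, 0.5, -1.0, 2.0

A = jnp.eye(T + 1) - λ * jnp.eye(T + 1, k=1)
B = (1 - λ) * jnp.linalg.inv(A)
β_vec = jnp.hstack([β**jnp.arange(T), β**T / (1 - β)])

def J(μ):
    θ = B @ μ
    return (h1 * β_vec @ θ
            + h2 * (β_vec * θ) @ θ
            - 0.5 * c * (β_vec * μ) @ μ)

μ = jnp.linspace(0.1, 0.5, T + 1)
closed_form = (h1 * B.T @ β_vec
               + 2 * h2 * B.T @ (β_vec * (B @ μ))
               - c * β_vec * μ)
print(jnp.max(jnp.abs(grad(J)(μ) - closed_form)))  # ≈ 0 up to float32 precision
```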
@@ -1005,9 +1058,8 @@ $$
 \vec \theta^{R} = B \vec \mu^R
 $$
 
-
-
 ```{code-cell} ipython3
+@jit
 def compute_J(μ, β, c, α=1, u0=1, u1=0.5, u2=3):
     T = len(μ) - 1
@@ -1018,16 +1070,15 @@ def compute_J(μ, β, c, α=1, u0=1, u1=0.5, u2=3):
 
     μ_vec = μ.at[-1].set(μ[-1]/(1-λ))
 
-    A = np.eye(T+1, T+1) - λ*np.eye(T+1, T+1, k=1)
-    B = (1-λ) * np.linalg.inv(A)
-
-    e_vec = np.hstack([np.repeat(1.0, T),
-                       1/(1-β)])
-    β_vec = np.hstack([np.array([β**(t) for t in range(T)]),
-                       (β**T / (1 - β))])
+    A = jnp.eye(T+1) - λ*jnp.eye(T+1, k=1)
+    B = (1-λ) * jnp.linalg.inv(A)
 
-    βθ_sum = np.sum((β_vec * h1) * (B @ μ_vec))
-    βθ_square_sum = β_vec * h2 * (B @ μ_vec).T @ (B @ μ_vec)
+    β_vec = jnp.hstack([β**jnp.arange(T),
+                        (β**T/(1-β))])
+
+    θ = B @ μ_vec
+    βθ_sum = jnp.sum((β_vec * h1) * θ)
+    βθ_square_sum = β_vec * h2 * θ.T @ θ
     βμ_square_sum = 0.5 * c * β_vec * μ.T @ μ
 
     return βθ_sum + βθ_square_sum - βμ_square_sum
@@ -1042,10 +1093,6 @@ grad_J = jit(grad(
     lambda μ: -compute_J(μ, β=0.85, c=2)))
 ```
 
-```{code-cell} ipython3
-
-```
-
 ```{code-cell} ipython3
 %%time
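
As a quick sanity check on `grad_J`, one can compare one coordinate of the automatic gradient with a central finite difference of `-compute_J`; a sketch, where the horizon `T` and the test point are assumptions for illustration:

```python
import jax.numpy as jnp

T = 40                                   # assumed horizon for illustration
μ_test = jnp.full(T + 1, 0.1)            # arbitrary test point

g = grad_J(μ_test)                       # gradient of -compute_J from above
eps = 1e-4
e0 = jnp.zeros(T + 1).at[0].set(eps)     # perturb coordinate 0
fd = (-compute_J(μ_test + e0, β=0.85, c=2)
      + compute_J(μ_test - e0, β=0.85, c=2)) / (2 * eps)
print(g[0], fd)                          # should agree approximately
```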
