@@ -926,37 +926,28 @@ where $F = \frac{c}{2} \cdot \vec{\beta} \cdot \mathbf{I}$ is a $(T+1) \times (T
It follows that

$$
- J = V - h_0 = \sum_ {t=0}^\infty \beta^t (h_1 \theta_t + h_2 \theta_t^2 - \frac{c}{2} \mu_t^2) = g^T \vec{\mu} + \vec{\mu}^T M \vec{\mu} - \vec{\mu}^T F \vec{\mu}
- $$
-
- So
-
- $$
- \frac{\partial}{\partial \vec{\mu}} g^T \vec{\mu} = g
+ \begin{aligned}
+ J = V - h_0 &= \sum_{t=0}^\infty \beta^t (h_1 \theta_t + h_2 \theta_t^2 - \frac{c}{2} \mu_t^2) \\
+ &= g^T \vec{\mu} + \vec{\mu}^T M \vec{\mu} - \vec{\mu}^T F \vec{\mu} \\
+ &= g^T \vec{\mu} + \vec{\mu}^T (M - F) \vec{\mu} \\
+ &= g^T \vec{\mu} + \vec{\mu}^T G \vec{\mu}
+ \end{aligned}
$$

- $$
- \frac{\partial}{\partial \vec{\mu}} \vec{\mu}^T M \vec{\mu} = 2 M \vec{\mu}
- $$
+ where $G = M - F$.

- $$
- \frac{\partial}{\partial \vec{\mu}} \vec{\mu}^T F \vec{\mu} = 2 F \vec{\mu}
- $$
+ To compute the optimal government plan, we want to maximize $J$ with respect to $\vec \mu$.

- Then we have
+ We use linear algebra formulas for differentiating linear and quadratic forms to compute the gradient of $J$ with respect to $\vec \mu$:

$$
- \frac{\partial J }{\partial \vec{\mu}} = g + 2 (M + F) \vec{\mu}
+ \frac{\partial}{\partial \vec{\mu}} J = g + 2 G \vec{\mu}.
$$
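
Here the differentiation rules being invoked, spelled out term by term in the removed lines above, are

$$
\frac{\partial}{\partial \vec{\mu}} g^T \vec{\mu} = g, \qquad
\frac{\partial}{\partial \vec{\mu}} \vec{\mu}^T G \vec{\mu} = (G + G^T) \vec{\mu} = 2 G \vec{\mu},
$$

where the last equality uses the symmetry of $G = M - F$ (both $M$ and $F$ are symmetric).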

- To compute the optimal government plan we want to maximize $J$ with respect to $\vec \mu$.
-
- We use linear algebra formulas for differentiating linear and quadratic forms to compute the gradient of $J$ with respect to $\vec \mu$ and equate it to zero.
-
- Let $G = 2 (M + F)$ The maximizing $\mu$ is
+ Setting $\frac{\partial}{\partial \vec{\mu}} J = 0$, the maximizing $\vec \mu$ is

$$
- \vec \mu^R = -G^{-1} g
+ \vec \mu^R = -\frac{1}{2} G^{-1} g
$$
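
As a quick sanity check of this formula, here is a minimal sketch with small hand-picked stand-ins for $G$ and $g$ (hypothetical numbers, not the model's $M$, $F$, and $h_1$): for a symmetric negative definite $G$, the gradient $g + 2 G \vec \mu$ should vanish at $\vec \mu^R = -\frac{1}{2} G^{-1} g$.

```{code-cell} ipython3
import jax.numpy as jnp

# hypothetical stand-ins: G symmetric and negative definite, so J is strictly concave
G = jnp.array([[-2.0, 0.5],
               [0.5, -3.0]])
g = jnp.array([1.0, -1.0])

# closed-form maximizer μ^R = -(1/2) G^{-1} g
μ_R = -0.5 * jnp.linalg.solve(G, g)

# first-order condition: the gradient g + 2 G μ vanishes at μ^R
print(g + 2 * G @ μ_R)
```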

The associated optimal inflation sequence is
@@ -1021,9 +1012,9 @@ print(f'deviation = {np.linalg.norm(optimized_μ - clq.μ_series)}')
compute_V(optimized_μ, β=0.85, c=2)
```

- We find, with a simple understanding of the structure of the problem, we can speed up our computation significantly .
+ We find that, with a simple understanding of the structure of the problem, we can significantly speed up our computation.

- We can also derive closed-form solution for $\vec \mu$
+ We can also derive a closed-form solution for $\vec \mu$

```{code-cell} ipython3
def compute_μ(β, c, T, α=1, u0=1, u1=0.5, u2=3):
@@ -1039,7 +1030,8 @@ def compute_μ(β, c, T, α=1, u0=1, u1=0.5, u2=3):
    g = h1 * B.T @ β_vec
    M = B.T @ (h2 * jnp.diag(β_vec)) @ B
    F = c/2 * jnp.diag(β_vec)
-    return jnp.linalg.solve(2*(M - F), -g)
+    G = M - F
+    return jnp.linalg.solve(2*G, -g)

μ_closed = compute_μ(β=0.85, c=2, T=T-1)
print(f'closed-form μ = \n{μ_closed}')
@@ -1057,7 +1049,7 @@ compute_V(μ_closed, β=0.85, c=2)
print(f'deviation = {np.linalg.norm(B @ μ_closed - θs)}')
```

- We can check the gradient of the analytical solution and the `JAX` computed version
+ We can check the gradient of the analytical solution against the `JAX` computed version

```{code-cell} ipython3
def compute_grad(μ, β, c, α=1, u0=1, u1=0.5, u2=3):
@@ -1075,7 +1067,8 @@ def compute_grad(μ, β, c, α=1, u0=1, u1=0.5, u2=3):
    g = h1 * B.T @ β_vec
    M = (h2 * B.T @ jnp.diag(β_vec) @ B)
    F = c/2 * jnp.diag(β_vec)
-    return g + (2*(M - F) @ μ)
+    G = M - F
+    return g + (2*G @ μ)

closed_grad = compute_grad(jnp.ones(T), β=0.85, c=2)
```
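
As a minimal self-contained sketch of that comparison, reusing the hypothetical $G$ and $g$ from the earlier check rather than the lecture's `compute_V`, the analytical gradient $g + 2 G \vec \mu$ should agree with the one `jax.grad` returns at any test point:

```{code-cell} ipython3
import jax
import jax.numpy as jnp

# same hypothetical stand-ins as in the earlier sanity check
G = jnp.array([[-2.0, 0.5],
               [0.5, -3.0]])
g = jnp.array([1.0, -1.0])

def J(μ):
    # scalar objective J(μ) = gᵀμ + μᵀGμ
    return g @ μ + μ @ G @ μ

μ_test = jnp.ones(2)
print(jax.grad(J)(μ_test))   # gradient from automatic differentiation
print(g + 2 * G @ μ_test)    # analytical gradient; the two should agree
```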