Commit f7b16fb

Merge pull request #49 from Tim-Salzmann/v2: Update to v2
2 parents b073ea1 + 83f067a

File tree: 24 files changed, +1227 −207 lines

.github/workflows/ci_v2.yaml

Lines changed: 92 additions & 0 deletions (new file)

```yaml
name: L4CasADi v2

on:
  push:
    branches: [ v2 ]

jobs:
  lint:
    name: Lint
    runs-on: ubuntu-latest
    timeout-minutes: 5
    steps:
      - uses: actions/checkout@v3
        with:
          ref: 'v2'
      - name: Run mypy
        run: |
          pip install mypy
          mypy . --ignore-missing-imports --exclude examples
      - name: Run flake8
        run: |
          pip install flake8
          # stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics

  tests:
    runs-on: ${{ matrix.runs-on }}
    needs: [ lint ]
    timeout-minutes: 60
    strategy:
      fail-fast: false
      matrix:
        runs-on: [ubuntu-latest, ubuntu-20.04, macos-latest]

    name: Tests on ${{ matrix.runs-on }}
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          ref: 'v2'
          fetch-depth: 0

      - name: Install Python
        uses: actions/setup-python@v4
        with:
          python-version: '>=3.9 <3.12'

      - name: Install L4CasADi
        run: |
          python -m pip install --upgrade pip
          pip install torch --index-url https://download.pytorch.org/whl/cpu  # Ensure CPU torch version
          pip install -r requirements_build.txt
          pip install . -v --no-build-isolation

      - name: Test with pytest
        working-directory: ./tests
        run: |
          pip install pytest
          pytest .

  test-on-aarch:
    runs-on: ubuntu-latest
    needs: [ lint ]
    timeout-minutes: 60

    name: Tests on aarch64
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          ref: 'v2'
          fetch-depth: 0
      - uses: uraimo/run-on-arch-action@v2
        name: Install and Test
        with:
          arch: aarch64
          distro: ubuntu20.04
          install: |
            apt-get update
            apt-get install -y --no-install-recommends python3.9 python3-pip python-is-python3
            pip install -U pip
            apt-get install -y build-essential

          run: |
            python -m pip install --upgrade pip
            pip install torch --index-url https://download.pytorch.org/whl/cpu  # Ensure CPU torch version
            pip install -r requirements_build.txt
            pip install . -v --no-build-isolation
            # pip install pytest
            # pytest .
```

README.md

Lines changed: 26 additions & 8 deletions

```diff
@@ -26,6 +26,32 @@ arXiv: [Learning for CasADi: Data-driven Models in Numerical Optimization](https
 
 Talk: [Youtube](https://youtu.be/UYdkRnGr8eM?si=KEPcFEL9b7Vk2juI&t=3348)
 
+## L4CasADi v2 Breaking Changes
+After feedback from first use cases, L4CasADi v2 is designed with efficiency and simplicity in mind.
+
+This leads to the following breaking changes:
+
+- L4CasADi v2 can leverage PyTorch's batching capabilities for increased efficiency. When passing `batched=True`,
+  L4CasADi interprets the **first** input dimension as the batch dimension. First- and second-order derivatives
+  across elements of this dimension are therefore assumed to be **sparse-zero**. To make use of this, instead of
+  making multiple calls to an L4CasADi function in your CasADi program, batch all inputs together and make a single
+  L4CasADi call. An example of this can be seen by comparing the
+  [non-batched NeRF example](examples/nerf_trajectory_optimization/nerf_trajectory_optimization.py) with the
+  [batched NeRF example](examples/nerf_trajectory_optimization/nerf_trajectory_optimization_batched.py), which is
+  faster by a factor of 5-10x.
+- L4CasADi v2 no longer changes the shape of an input, as this was a source of confusion. The tensor forwarded to
+  the PyTorch model will have the **exact dimensions** of the input variable passed by CasADi. You are responsible
+  for making sure that the PyTorch model handles a **two-dimensional** input matrix! Accordingly, the parameter
+  `model_expects_batch_dim` is removed.
+- By default, L4CasADi v2 will not provide the Hessian, but the Jacobian of the adjoint. This is sufficient for
+  most optimization problems. However, you can explicitly request the generation of the Hessian by passing
+  `generate_jac_jac=True`.
+
+[//]: # (L4CasADi v2 can use the new **torch compile** functionality starting from PyTorch 2.4 by passing `scripting=False`. This will lead to a longer compile time on the first L4CasADi function call but to an overall faster execution. However, this functionality is currently experimental and not fully stable across all models. In the long term, there is a good chance this will become the default over scripting once the functionality is stabilized by the Torch developers.)
+
 ## Table of Content
 - [Projects using L4CasADi](#projects-using-l4casadi)
 - [Installation](#installation)
@@ -205,14 +231,6 @@ https://github.com/Tim-Salzmann/l4casadi/blob/421de6ef408267eed0fd2519248b2152b6
 
 ## FYIs
 
-### Batch Dimension
-
-If your PyTorch model expects a batch dimension as first dimension (which most models do) you should pass
-`model_expects_batch_dim=True` to the `L4CasADi` constructor. The `MX` input to the L4CasADi component is then expected
-to be a vector of shape `[X, 1]`. L4CasADi will add a batch dimension of `1` automatically such that the input to the
-underlying PyTorch model is of shape `[1, X]`.
-
----
 
 ### Warm Up
```

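The `batched=True` semantics described in the README change above can be illustrated without L4CasADi itself: when a model maps each row of a batched input independently, the Jacobian across batch elements is block-diagonal, i.e. all cross-batch entries are exactly zero (the "sparse-zero" structure v2 exploits). A minimal NumPy sketch under that assumption; the row-wise `model` below is hypothetical and merely stands in for a PyTorch network:

```python
import numpy as np

def model(x):
    # Hypothetical row-wise model: each batch row is mapped independently,
    # mimicking a PyTorch network applied along the batch dimension.
    return np.tanh(x) @ np.array([[1.0], [2.0]])  # (B, 2) -> (B, 1)

def jacobian(f, x, eps=1e-6):
    # Forward finite-difference Jacobian of the flattened output
    # with respect to the flattened input.
    y0 = f(x).ravel()
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        xp = x.ravel().copy()
        xp[i] += eps
        J[:, i] = (f(xp.reshape(x.shape)).ravel() - y0) / eps
    return J

x = np.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])  # batch of B=3 inputs
J = jacobian(model, x)  # shape (3, 6): 3 outputs, 6 flattened inputs

# Output row b depends only on input row b: every cross-batch block is zero.
for b_out in range(3):
    for b_in in range(3):
        block = J[b_out, 2 * b_in:2 * b_in + 2]
        assert b_out == b_in or np.allclose(block, 0.0)
```

This block-diagonal structure is what makes a single batched L4CasADi call cheaper than many separate calls: only the B diagonal Jacobian blocks need to be computed and stored.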
examples/acados.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -128,7 +128,7 @@ def ocp(self):
 ocp.cost.W = np.array([[1.]])
 
 # Trivial PyTorch index 0
-l4c_y_expr = l4c.L4CasADi(lambda x: x[0], name='y_expr', model_expects_batch_dim=False)
+l4c_y_expr = l4c.L4CasADi(lambda x: x[0], name='y_expr')
 
 ocp.model.cost_y_expr = l4c_y_expr(x)
 ocp.model.cost_y_expr_e = x[0]
```

examples/cpp_usage/generate.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -10,7 +10,7 @@ def forward(self, x):
 
 
 def generate():
-    l4casadi_model = l4c.L4CasADi(TorchModel(), model_expects_batch_dim=False, name='sin_l4c')
+    l4casadi_model = l4c.L4CasADi(TorchModel(), name='sin_l4c')
 
     sym_in = cs.MX.sym('x', 1, 1)
```

examples/fish_turbulent_flow/utils.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -266,7 +266,7 @@ def import_l4casadi_model(device):
     x = cs.MX.sym("x", 3)
     xn = (x - meanX) / stdX
 
-    y = l4c.L4CasADi(model, name="turbulent_model", model_expects_batch_dim=True)(xn)
+    y = l4c.L4CasADi(model, name="turbulent_model", generate_adj1=False, generate_jac_jac=True)(xn.T).T
     y = y * stdY + meanY
     fU = cs.Function("fU", [x], [y[0]])
     fV = cs.Function("fV", [x], [y[1]])
```

examples/matlab/export.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -10,7 +10,7 @@ def forward(self, x):
 
 
 def generate():
-    l4casadi_model = l4c.L4CasADi(TorchModel(), model_expects_batch_dim=False, name='sin_l4c')
+    l4casadi_model = l4c.L4CasADi(TorchModel(), name='sin_l4c')
     sym_in = cs.MX.sym('x', 1, 1)
     l4casadi_model.build(sym_in)
     return
```

examples/naive/readme.py

Lines changed: 4 additions & 4 deletions

```diff
@@ -3,15 +3,15 @@
 
 
 naive_mlp = l4c.naive.MultiLayerPerceptron(2, 128, 1, 2, 'Tanh')
-l4c_model = l4c.L4CasADi(naive_mlp, model_expects_batch_dim=True)
+l4c_model = l4c.L4CasADi(naive_mlp)
 
-x_sym = cs.MX.sym('x', 2, 1)
+x_sym = cs.MX.sym('x', 3, 2)
 y_sym = l4c_model(x_sym)
 f = cs.Function('y', [x_sym], [y_sym])
 df = cs.Function('dy', [x_sym], [cs.jacobian(y_sym, x_sym)])
-ddf = cs.Function('ddy', [x_sym], [cs.hessian(y_sym, x_sym)[0]])
+ddf = cs.Function('ddy', [x_sym], [cs.jacobian(cs.jacobian(y_sym, x_sym), x_sym)])
 
-x = cs.DM([[0.], [2.]])
+x = cs.DM([[0., 2.], [0., 2.], [0., 2.]])
 print(l4c_model(x))
 print(f(x))
 print(df(x))
```

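The switch from `cs.hessian` to a nested `cs.jacobian` in the example above reflects that CasADi's `hessian` is meant for scalar expressions, while the batched `y_sym` is now matrix-valued; for vector outputs, the Jacobian of the Jacobian carries the same second-order information (one Hessian per output entry). A hedged finite-difference sketch in NumPy — the function `f` is hypothetical, not the MLP from the example:

```python
import numpy as np

def f(x):
    # Hypothetical vector-valued function R^2 -> R^2: a single scalar Hessian
    # is not defined here, but the Jacobian of the Jacobian still captures
    # all second-order derivatives.
    return np.array([x[0] * x[1], x[0] ** 2])

def jac(g, x, eps=1e-5):
    # Central finite-difference Jacobian of g at x (flattened output).
    J = np.zeros((np.asarray(g(x)).size, x.size))
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        J[:, i] = (np.asarray(g(xp)).ravel() - np.asarray(g(xm)).ravel()) / (2 * eps)
    return J

x = np.array([1.0, 2.0])
# "Jacobian of the Jacobian": rows 2k and 2k+1 hold the Hessian of output k.
JJ = jac(lambda z: jac(f, z).ravel(), x)  # shape (4, 2)

H0 = JJ[0:2, :]  # Hessian of f_0 = x0*x1 -> [[0, 1], [1, 0]]
H1 = JJ[2:4, :]  # Hessian of f_1 = x0**2 -> [[2, 0], [0, 0]]
assert np.allclose(H0, [[0.0, 1.0], [1.0, 0.0]], atol=1e-4)
assert np.allclose(H1, [[2.0, 0.0], [0.0, 0.0]], atol=1e-4)
```

This also mirrors the README note that v2 only generates second-order information (`generate_jac_jac=True`) when explicitly requested, since computing these per-output Hessians is the expensive part.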
examples/nerf_trajectory_optimization/density_nerf.py

Lines changed: 2 additions & 2 deletions

```diff
@@ -46,7 +46,7 @@ def __init__(self):
             [nn.Linear(self.input_ch, W)]
             + [
                 nn.Linear(W, W)
-                if i not in self.skips
+                if i != 4
                 else nn.Linear(W + self.input_ch, W)
                 for i in range(D - 1)
             ]
@@ -60,7 +60,7 @@ def forward(self, x):
         for i, l in enumerate(self.pts_linears):
             h = self.pts_linears[i](h)
             h = F.relu(h)
-            if i in self.skips:
+            if i == 4:
                 h = torch.cat([input_pts, h], -1)
 
         alpha = self.alpha_linear(h)
```

examples/nerf_trajectory_optimization/nerf_trajectory_optimization.py

Lines changed: 3 additions & 2 deletions

```diff
@@ -10,6 +10,7 @@
 
 CASE = 1
 
+os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'
 
 def polynomial(n, n_eval):
     """Generates a symbolic function for a polynomial of degree n-1"""
@@ -86,7 +87,7 @@ def trajectory_generator_solver(n, n_eval, L, warmup, threshold):
     f += cs.sum2(sk**2)
 
     # While having a maximum density (1.) of the NeRF as constraint.
-    lk = L(pk.T)
+    lk = L(pk)
     g = cs.horzcat(g, lk)
     lbg = cs.horzcat(lbg, cs.DM([-10e32]).T)
     ubg = cs.horzcat(ubg, cs.DM([threshold]).T)
@@ -175,7 +176,7 @@ def main():
         strict=False,
     )
     # -------------------------- Create L4CasADi Module -------------------------- #
-    l4c_nerf = l4c.L4CasADi(model)
+    l4c_nerf = l4c.L4CasADi(model, scripting=False)
 
     # ---------------------------------------------------------------------------- #
     # NLP warmup #
```
