About the implementation of CompileGraphModel #532
Replies: 2 comments
-
Hi Hyuntae, thanks for the interesting question; I'm not sure off the top of my head. I'm curious: are you implementing something specific in or with …
-
Hi @hyuntae-cho, the goal of the func(**true_inputs, **trainable_params, **buffers) -> **true_outputs signature for the …
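The func(**true_inputs, **trainable_params, **buffers) -> **true_outputs pattern mentioned in this reply resembles PyTorch's stateless functional-call API. A minimal sketch of that pattern (this is not the NequIP implementation, just an illustration using torch.func.functional_call on a toy nn.Linear):

```python
import torch
import torch.nn as nn
from torch.func import functional_call

model = nn.Linear(3, 2)
x = torch.randn(5, 3)

# Gather trainable params and buffers into an explicit dict, mirroring the
# func(**true_inputs, **trainable_params, **buffers) signature.
params_and_buffers = {
    **dict(model.named_parameters()),
    **dict(model.named_buffers()),
}

# Stateless call: the module's own tensors are temporarily swapped for the
# ones passed in, so the forward pass is a pure function of its inputs.
out = functional_call(model, params_and_buffers, (x,))
print(out.shape)  # torch.Size([5, 2])
```

Expressing the forward pass this way makes every tensor the computation depends on an explicit input, which is the kind of flattened interface compilers like torch.compile can trace cleanly.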
-
Hi,
I've been exploring how torch.compile is applied to NequIP and had a question regarding the ListInputOutputStateDictWrapper in nequip/nn/compile.py.
It looks like the forward function copies the parameters and buffers from the original model. However, when I insert the following line:
right after this block:
nequip/nequip/nn/compile.py, lines 174 to 180 at commit 60eae37
the printed result is True, indicating that the compiled model shares the same parameter tensors (by memory address) as the original model.
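The exact line inserted in the post was lost in extraction; a hypothetical reconstruction of such an aliasing check, using toy nn.Linear modules in place of the original and wrapped NequIP models, might look like this:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the original model and the wrapper; the real
# classes live in nequip/nn/compile.py.
original = nn.Linear(4, 4)
wrapped = nn.Linear(4, 4)

# Re-registering the original parameter object (rather than cloning it)
# makes both modules alias the same underlying storage:
wrapped.weight = original.weight

# Two tensors share memory iff they point at the same data address.
print(wrapped.weight.data_ptr() == original.weight.data_ptr())          # True: shared storage
print(wrapped.weight.clone().data_ptr() == original.weight.data_ptr())  # False: clone allocates new storage
```

If the check in the wrapper prints True, the "copied" parameters are references to the same tensors, which would match the observation above.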
I even tried removing the parameter copying code:
nequip/nequip/nn/compile.py, lines 87 to 91 at commit 60eae37
and still obtained the same results.
It seems this doesn't impact computational flow or cost, so I'm curious: was there a specific reason for implementing the wrapper with explicit copying of parameters and buffers? Is it for compatibility with an older PyTorch version, or to cover edge cases?
Thanks in advance for your insight!
Best,
Hyuntae Cho