NequIP -> Allegro fine-tune training? #95
Unanswered
turbosonics asked this question in Q&A
Hi,
Last year, when I was trying out NequIP and Allegro, I read somewhere that training from scratch with NequIP and then continuing with fine-tune training in Allegro, starting from the NequIP model with the same training set and the same hyperparameters, can be helpful for Allegro model development (using initialize_from_state and initial_model_state, roughly as sketched below).
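To make the question concrete, here is a minimal sketch of the kind of config I have in mind. This is only my guess at the mechanism: the model_builders list follows my recollection of nequip's builder system, the path is a placeholder, and I am not sure initialize_from_state even works across the two architectures.

```yaml
# Hypothetical Allegro config fragment (a sketch, not verified):
# build the Allegro model, then attempt to load weights from a saved
# NequIP training state via the initialize_from_state builder.
model_builders:
  - allegro.model.Allegro
  - PerSpeciesRescale
  - ForceOutput
  - RescaleEnergyEtc
  - initialize_from_state   # load saved weights after the model is built

# placeholder path to the previously trained NequIP model state
initial_model_state: results/nequip-run/best_model.pth
```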
But I cannot find that information now, so maybe I read it somewhere else or am remembering it wrong.
So I would like to confirm: would NequIP -> Allegro fine-tuning (training NequIP from scratch, then fine-tuning with Allegro) be better than training Allegro from scratch? Or does it not matter much?
Thanks