How to change model size for latent diffusion #8390
karllandheer asked this question in Q&A (unanswered)
I am trying to modify the tutorial shown here:
https://github.com/Project-MONAI/GenerativeModels/blob/main/tutorials/generative/2d_diffusion_autoencoder/2d_diffusion_autoencoder_tutorial.ipynb
to accommodate my data, which is larger: my images have 12 channels at a size of 256x256. Even with a batch size of 1, this gives an OOM on a GPU with 16 GB of RAM. Does anyone know how to address this? One option is random cropping transforms, but that makes inference more confusing (see the crop sketch below). Alternatively, I could reduce the model size of Diffusion_AE, but I'm not 100% sure how to do that. Any ideas?
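For concreteness, here is roughly what I have in mind for each option. First, shrinking the model: a minimal sketch assuming the `DiffusionModelUNet` constructor from this repo, with the conditioning arguments left to match whatever the tutorial uses. The channel counts and attention settings below are illustrative guesses at a smaller network, not the tutorial's values:

```python
from generative.networks.nets import DiffusionModelUNet

# Illustrative smaller UNet: narrower levels and attention only at the
# deepest level, which is where most activation memory goes at 256x256.
# These numbers are guesses to sketch the idea, not the tutorial's defaults.
model = DiffusionModelUNet(
    spatial_dims=2,
    in_channels=12,   # my data: 12-channel images
    out_channels=12,
    num_channels=(32, 64, 64),
    attention_levels=(False, False, True),
    num_res_blocks=1,
    num_head_channels=64,
)
```

And for the cropping route, a sketch of a random-crop training transform using MONAI's dictionary transforms; the "image" key and the 128x128 patch size are placeholders for my pipeline:

```python
from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged, RandSpatialCropd

# Train on random 128x128 patches instead of full 256x256 images.
train_transforms = Compose([
    LoadImaged(keys=["image"]),
    EnsureChannelFirstd(keys=["image"]),
    RandSpatialCropd(keys=["image"], roi_size=(128, 128), random_size=False),
])
```

The part I can't see clearly is inference with the cropped variant, since the image-level embedding would then come from patches rather than the whole image.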