Share buffer between CUDA and wgpu #7988
Unanswered
anderspitman asked this question in Q&A
Replies: 0
I have a CUDA codebase that I want to explore porting to WebGPU, mostly for portability and to avoid vendor lock-in. It would be very difficult to port the entire codebase at once, especially because we currently rely on NVIDIA's nvCOMP library for gzip decompression.
It would be nice if I could leave the early stages of our pipeline in CUDA, export a raw pointer to the results in GPU memory, and then run WebGPU shaders on that data for the remaining stages. Is anything like this possible with wgpu?
I believe this is an example of essentially what I'm going for, but with CUDA+Vulkan:
https://www.gpultra.com/blog/vulkan-cuda-memory-interoperability/
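For reference, the CUDA side of the interop pattern in that article looks roughly like the sketch below. This is a hedged sketch, not a tested solution: it assumes the Vulkan (or wgpu-hal Vulkan) side has already allocated device memory with `VK_KHR_external_memory_fd` and exported it as an opaque POSIX fd. `exported_fd`, `buffer_size`, and the function name are placeholders I made up; the CUDA runtime calls themselves (`cudaImportExternalMemory`, `cudaExternalMemoryGetMappedBuffer`) are the real external-memory API.

```cpp
// Sketch: import a Vulkan-exported allocation into CUDA as a device pointer.
// Assumes Linux (opaque fd handles) and that `exported_fd` came from
// vkGetMemoryFdKHR on memory allocated with VK_KHR_external_memory_fd.
#include <cuda_runtime.h>
#include <cstddef>

void* import_vulkan_buffer_into_cuda(int exported_fd, size_t buffer_size) {
    // Describe the externally allocated memory to the CUDA runtime.
    cudaExternalMemoryHandleDesc handle_desc = {};
    handle_desc.type = cudaExternalMemoryHandleTypeOpaqueFd;
    handle_desc.handle.fd = exported_fd;  // CUDA takes ownership of this fd
    handle_desc.size = buffer_size;

    cudaExternalMemory_t ext_mem = nullptr;
    cudaImportExternalMemory(&ext_mem, &handle_desc);

    // Map the imported allocation to a pointer usable by CUDA kernels.
    cudaExternalMemoryBufferDesc buffer_desc = {};
    buffer_desc.offset = 0;
    buffer_desc.size = buffer_size;

    void* dev_ptr = nullptr;
    cudaExternalMemoryGetMappedBuffer(&dev_ptr, ext_mem, &buffer_desc);
    return dev_ptr;  // both APIs now see the same physical memory
}
```

The open question for wgpu would then be the other half: whether wgpu can either import such memory or expose its own Vulkan allocation for export (e.g. via its `wgpu-hal` layer), plus the synchronization between the two APIs, which external-semaphore extensions normally handle.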