WebGPU Compute shader execution? #8314
Just to add: I have successfully run the OIDN denoiser in the browser. Right now I'm mostly focused on the WebGL backend, since the most common path tracers still use it. However, I will be working with that team to port their path tracer to WebGPU, and getting this working there would be interesting as well.
While looking into the WebGPU backend and the execution example, I am left with a few questions.
I am currently working on porting the Open Image Denoise models to run on TensorFlow.js. Someone has already done this, but they aren't ready to share yet. It has also been done with CUDA compute and with some Rust/HLSL compute shaders.
The pipeline currently requires several round trips between the GPU and CPU.
With native libraries you can execute OIDN directly in compute shaders (though it is a total pain to set up), and the other implementations (CUDA/HLSL) likewise run the DNN entirely in compute shaders without ever returning intermediate data to the CPU.
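For illustration, here is a rough sketch of what GPU-resident chaining looks like with the browser's WebGPU API. Everything here is hypothetical scaffolding, not OIDN itself: `renderPipeline`, `denoisePipeline`, the workgroup size of 8×8, and the RGBA float32 layout are all assumptions. The point is only that the intermediate buffer never leaves the GPU between the two passes.

```typescript
// Sketch: chaining two compute passes on the GPU with no CPU round trip.
// Assumes `device` is a GPUDevice and the two pipelines were compiled
// elsewhere from WGSL; all names and sizes here are placeholders.
function runChained(device: GPUDevice,
                    renderPipeline: GPUComputePipeline,
                    denoisePipeline: GPUComputePipeline,
                    width: number, height: number): void {
  const byteSize = width * height * 4 * 4; // RGBA, float32

  // Intermediate buffer stays on the GPU: STORAGE usage only, no MAP_READ.
  const hdrBuffer = device.createBuffer({
    size: byteSize,
    usage: GPUBufferUsage.STORAGE,
  });
  const outBuffer = device.createBuffer({
    size: byteSize,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
  });

  const encoder = device.createCommandEncoder();

  // Pass 1: the path tracer writes noisy HDR pixels into hdrBuffer.
  let pass = encoder.beginComputePass();
  pass.setPipeline(renderPipeline);
  pass.setBindGroup(0, device.createBindGroup({
    layout: renderPipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer: hdrBuffer } }],
  }));
  pass.dispatchWorkgroups(Math.ceil(width / 8), Math.ceil(height / 8));
  pass.end();

  // Pass 2: the denoiser reads hdrBuffer and writes outBuffer.
  // No CPU readback happens between the two passes.
  pass = encoder.beginComputePass();
  pass.setPipeline(denoisePipeline);
  pass.setBindGroup(0, device.createBindGroup({
    layout: denoisePipeline.getBindGroupLayout(0),
    entries: [
      { binding: 0, resource: { buffer: hdrBuffer } },
      { binding: 1, resource: { buffer: outBuffer } },
    ],
  }));
  pass.dispatchWorkgroups(Math.ceil(width / 8), Math.ceil(height / 8));
  pass.end();

  device.queue.submit([encoder.finish()]);
}
```

This only runs in a WebGPU-capable browser, since it needs a live `GPUDevice`.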
I am curious whether there are any existing methods to reduce the round trips between the CPU and GPU.
Even simply launching the TensorFlow.js inference directly from the compute-shader output would be massive, as the only thing returned to the CPU would be the final buffer.
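As far as I can tell, TensorFlow.js does expose a partial answer here: on the WebGL and WebGPU backends you can create a tensor from GPU-resident data and read a result back as a GPU handle via `tensor.dataToGPU()`, so only the final buffer (if anything) crosses to the CPU. A minimal sketch for the WebGPU backend, where `noisyPixels` (a GPUBuffer already written by the path tracer) and `model` (the ported OIDN graph) are assumed to exist:

```typescript
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-webgpu';

// Sketch only: `noisyPixels` and `model` are assumptions, and the
// [1, height, width, 4] input shape is a guess at the model's layout.
async function denoiseOnGpu(noisyPixels: GPUBuffer,
                            model: tf.GraphModel,
                            height: number, width: number) {
  await tf.setBackend('webgpu');
  await tf.ready();

  // Wrap the existing GPUBuffer as a tensor without copying through the CPU.
  const input = tf.tensor(
    { buffer: noisyPixels }, [1, height, width, 4], 'float32');

  const output = model.predict(input) as tf.Tensor;

  // Hand the result back as a GPUBuffer instead of downloading it,
  // so the display/blit pass can bind it directly.
  const gpuData = output.dataToGPU();
  // ... bind gpuData.buffer in the render pass ...

  gpuData.tensorRef.dispose();
  input.dispose();
}
```

On the WebGL backend the same idea applies, but `dataToGPU()` hands back a WebGL texture rather than a GPUBuffer, and tensors can be created from a `{texture, height, width, channels}` object instead. This doesn't launch tfjs *from* a shader, but it does cut the CPU out of the data path.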
I don't yet have the skills or understanding to make this work, but it's something I figured I would ask.