Multiple viewports are not supported by WebGPU #4806
cc @toji as the author of https://github.com/immersive-web/WebXR-WebGPU-Binding/blob/main/explainer.md#rendering. For this to be useful for VR, we might need to introduce an …
WebXR"s current pattern of rendering to multiple viewports of a single texture was something of a compromise to get around WebGL shortcomings. It"s not necessarily a pattern we want to carry forward. For the WebXR/WebGPU bindings, we envisioned each XR view being associated with a separate texture or (more likely) a different array layer in a array texture. That"s how some of the native multiview rendering algorithms work, though I"ll admit to not being familiar with any facilities the Vision Pro may have for that. As such I think only allowing for multiple viewports is unlikely to significantly improve WebXR performance. That said, I DO think that multiple viewports is an interesting feature outside of that specific use case! So please don"t take this comment as an indication that I don"t think the feature is worth investigating. |
Indeed, this is how the Vision Pro works: a single MTLTexture with 2 array layers. So I am happy this multi-texture approach is specified in the example :)
There"s a good public example here: https://github.com/metal-by-example/metal-spatial-rendering which shows how multiple viewports + vertex amplification reduce the pass count from 2 -> 1. Either with 2 passes or 1 we could still use a single command buffer, but fewer passes is more optimal on TBDR architectures like visionPro or most mobile devices really. |
Thanks for the Metal example! I'm interested to read up on it some more! From first glance, at least, it seems like multiple viewports is one piece of the overall feature set that would need to be implemented in order to support this type of multiview rendering in WebGPU, the other being adding layer selection to the vertex shader output. Vertex amplification seems like a very helpful addition as well, though I think you could probably fake it with instancing shenanigans. From the sounds of the Metal docs, actual vertex amplification reduces the vertex fetching, though, so that's a perf win.
Whoops, I only noticed this issue after I filed a proposal in #4812. So I guess that's my proposal for this bug :)
That"s a nice investigation! We could probably do clamping of the indices in the shader if there isn"t a consistent behavior otherwise, and we"ll indeed have to find how to handle the viewport clamping in this case. However I"m trying to understand what is the use of this extension on its own. It seems that it is mostly for VR rendering in a single-pass instead of multiple passes, but at the same time we would need D3D12 ViewInstance / Metal VertexAmplification / Vulkan MultiView to solve that problem since the geometry looks different in each view, so another investigation is needed there? |
For stereo rendering, you can use instancing, and then output the RT index / viewport index from your vertex shader based on the instance. But I've used this feature for things like shadow map cascades before.
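As a rough sketch of that instanced-stereo idea, here is what the WGSL could look like. Note that the `@builtin(layer)` vertex output below is hypothetical: core WGSL has no layer/viewport-index vertex output today, which is exactly the missing piece discussed above.

```ts
// Hypothetical single-pass stereo via instancing. The instance count
// is doubled at draw time; even instances render the left eye, odd
// ones the right. @builtin(layer) is NOT core WGSL; it is invented
// here to stand in for the layer-selection capability being discussed.
const stereoShader = /* wgsl */ `
@group(0) @binding(0) var<uniform> viewProj : array<mat4x4f, 2>;

struct VsOut {
  @builtin(position) pos : vec4f,
  @builtin(layer) layer : u32, // hypothetical builtin
}

@vertex
fn vs(@location(0) position : vec3f,
      @builtin(instance_index) inst : u32) -> VsOut {
  let eye = inst & 1u;             // 0 = left eye, 1 = right eye
  // let realInstance = inst >> 1u; // recover the app's instance id
  var out : VsOut;
  out.pos = viewProj[eye] * vec4f(position, 1.0);
  out.layer = eye;
  return out;
}
`;
// Draw with the instance count doubled:
// pass.draw(vertexCount, 2 * instanceCount);
```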
Correct, vertex amplification is not needed; it is only an optimization. The existing …
OK, so what you gain is dividing the cost of encoding and submitting commands by roughly 2. Even with a giant render bundle there would not be this gain, so it seems to be useful on its own. Thanks for the details!
This is not the only cost; on some platforms there are memory costs from the multipass approach. The encoding and submitting are the web framework + graphics driver costs. I suppose a benchmark would be good to illustrate the extent of the cost. I can look into that at some point.
#135 is already closed, but the comments on that issue discuss multiple viewports. As already mentioned in other issues, this feature was not considered because WebGPU does not support geometry shaders. However, as also pointed out there, multiple viewports are useful for VR.
This would be beneficial in supporting @toji's example over in https://github.com/immersive-web/WebXR-WebGPU-Binding/blob/main/explainer.md#rendering - instead of 2 render passes, we would really only need 1 if we could set multiple viewports.
It could be as simple as changing:
https://www.w3.org/TR/webgpu/#dom-gpurenderpassencoder-setviewport
to a version that additionally takes the viewport index, which is backwards compatible and doesn't involve passing an array of viewports.
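One possible shape for that, sketched as a TypeScript declaration; the trailing `viewportIndex` parameter, its name, and its default are assumptions for illustration, not the issue's actual text:

```ts
// Hedged sketch of a backwards-compatible extension to setViewport().
// Existing 6-argument callers keep today's single-viewport behavior;
// the invented `viewportIndex` selects which viewport slot to set.
interface GPURenderPassEncoderMultiViewport {
  setViewport(
    x: number,
    y: number,
    width: number,
    height: number,
    minDepth: number,
    maxDepth: number,
    viewportIndex?: number, // assumed to default to 0
  ): void;
}

// Hypothetical usage: one pass, one viewport per eye.
// pass.setViewport(0, 0, w, h, 0, 1, 0); // left eye  -> viewport 0
// pass.setViewport(w, 0, w, h, 0, 1, 1); // right eye -> viewport 1
```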