
Multiple viewports are not supported by WebGPU #4806

Open
mwyrzykowski opened this issue Aug 8, 2024 · 10 comments
Labels
api WebGPU API

@mwyrzykowski

#135 is already closed, but the comments on that issue discuss multiple viewports. As mentioned in other issues, this feature was not considered because WebGPU does not support geometry shaders. However, as also pointed out there, multiple viewports are useful for VR.

This would be beneficial in supporting @toji's example over in https://github.com/immersive-web/WebXR-WebGPU-Binding/blob/main/explainer.md#rendering - instead of 2 render passes, we really only need 1 if we could set multiple viewports.

It could be as simple as changing:
https://www.w3.org/TR/webgpu/#dom-gpurenderpassencoder-setviewport

undefined setViewport(float x, float y,
        float width, float height,
        float minDepth, float maxDepth);

to:

undefined setViewport(float x, float y,
        float width, float height,
        float minDepth, float maxDepth, optional GPUIndex32 index = 0);

which is backwards compatible and doesn't involve passing an array of viewports.

@mwyrzykowski mwyrzykowski added the api WebGPU API label Aug 8, 2024
@mwyrzykowski
Author

cc @toji as the author of https://github.com/immersive-web/WebXR-WebGPU-Binding/blob/main/explainer.md#rendering

For this to be useful for VR, we might need to introduce an amplification_id concept or similar to WGSL. I'm not aware of the HLSL / Vulkan mappings off hand, or whether there is wide support for this.

@toji
Member

toji commented Aug 8, 2024

WebXR's current pattern of rendering to multiple viewports of a single texture was something of a compromise to get around WebGL shortcomings. It's not necessarily a pattern we want to carry forward.

For the WebXR/WebGPU bindings, we envisioned each XR view being associated with a separate texture or (more likely) a different array layer in an array texture. That's how some of the native multiview rendering algorithms work, though I'll admit to not being familiar with any facilities the Vision Pro may have for that. As such, I think only allowing for multiple viewports is unlikely to significantly improve WebXR performance.

That said, I DO think that multiple viewports is an interesting feature outside of that specific use case! So please don't take this comment as an indication that I don't think the feature is worth investigating.

@mwyrzykowski
Author

> For the WebXR/WebGPU bindings, we envisioned each XR view being associated with a separate texture or (more likely) a different array layer in an array texture. That's how some of the native multiview rendering algorithms work, though I'll admit to not being familiar with any facilities the Vision Pro may have for that.

Indeed, this is how the Vision Pro works: a single MTLTexture with 2 array layers. So I am happy this multi-texture approach is specified in the example :)

> As such, I think only allowing for multiple viewports is unlikely to significantly improve WebXR performance.

There's a good public example here: https://github.com/metal-by-example/metal-spatial-rendering

which shows how multiple viewports + vertex amplification reduce the pass count from 2 to 1. With either 2 passes or 1 we could still use a single command buffer, but fewer passes is more optimal on TBDR architectures like the Vision Pro, or really most mobile devices.

@toji
Member

toji commented Aug 8, 2024

Thanks for the Metal example! I'm interested to read up on it some more!

At first glance, at least, it seems like multiple viewports are one piece of the overall feature set that would need to be implemented in order to support this type of multiview rendering in WebGPU, the other being adding layer selection to the vertex shader output. Vertex amplification seems like a very helpful addition as well, though I think you could probably fake it with instancing shenanigans. From the sound of the Metal docs, actual vertex amplification reduces the vertex fetching, though, so that's a perf win.
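The "instancing shenanigans" could be sketched in WGSL roughly as follows. This is a sketch only: WGSL currently has no vertex-stage layer output, so the @builtin(layer) below is hypothetical, standing in for D3D12's SV_RenderTargetArrayIndex or Metal's [[render_target_array_index]]:

```wgsl
// Sketch: render both eyes to layers of an array texture in one pass,
// doubling instanceCount instead of using hardware vertex amplification.
struct VertexOut {
    @builtin(position) position : vec4<f32>,
    // Hypothetical builtin: selects the render target array layer.
    @builtin(layer) layer : u32,
};

// One view-projection matrix per eye / array layer.
@group(0) @binding(0) var<uniform> view_proj : array<mat4x4<f32>, 2>;

@vertex
fn vs_main(@builtin(instance_index) instance : u32,
           @location(0) pos : vec3<f32>) -> VertexOut {
    var out : VertexOut;
    // Even instances draw the left eye (layer 0), odd the right (layer 1);
    // real per-instance data would be indexed with instance / 2u.
    let eye = instance % 2u;
    out.layer = eye;
    out.position = view_proj[eye] * vec4<f32>(pos, 1.0);
    return out;
}
```

Unlike true vertex amplification, this fetches the vertex data once per eye rather than once in total.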

@magcius

magcius commented Aug 9, 2024

Whoops, I only noticed this issue after I filed a proposal in #4812. So I guess that's my proposal for this bug :)

@Kangz
Contributor

Kangz commented Aug 20, 2024

That's a nice investigation! We could probably clamp the indices in the shader if there isn't a consistent behavior otherwise, and we'll indeed have to figure out how to handle the viewport clamping in this case.

However, I'm trying to understand the use of this extension on its own. It seems that it is mostly for VR rendering in a single pass instead of multiple passes, but at the same time we would need D3D12 ViewInstance / Metal VertexAmplification / Vulkan MultiView to solve that problem, since the geometry looks different in each view, so another investigation is needed there?

@magcius

magcius commented Aug 20, 2024

For stereo rendering, you can use instancing, and then output the RT index / viewport index from your vertex shader based on the instance. But I've used this feature for things like shadow map cascades before.
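The shadow cascade case could be sketched in WGSL like this, assuming a hypothetical @builtin(viewport_index) output (the analogue of HLSL's SV_ViewportArrayIndex / Vulkan's ViewportIndex, neither of which WGSL exposes today):

```wgsl
// Sketch: write all shadow cascades in one pass by multiplying
// instanceCount by the cascade count and routing each copy of the
// geometry to its own viewport.
struct VertexOut {
    @builtin(position) position : vec4<f32>,
    // Hypothetical builtin: selects which of the viewports set on the
    // render pass this primitive is clipped and scaled to.
    @builtin(viewport_index) viewport : u32,
};

const NUM_CASCADES : u32 = 4u;

// One light-space matrix per shadow cascade.
@group(0) @binding(0) var<uniform> cascade_mats : array<mat4x4<f32>, 4>;

@vertex
fn vs_main(@builtin(instance_index) instance : u32,
           @location(0) pos : vec3<f32>) -> VertexOut {
    var out : VertexOut;
    let cascade = instance % NUM_CASCADES;
    out.viewport = cascade;
    out.position = cascade_mats[cascade] * vec4<f32>(pos, 1.0);
    return out;
}
```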

@mwyrzykowski
Author

Correct, vertex amplification is not needed; it is only an optimization. The existing instance_index from WGSL can achieve the same.

@Kangz
Contributor

Kangz commented Aug 21, 2024

> For stereo rendering, you can use instancing, and then output the RT index / viewport index from your vertex shader based on the instance. But I've used this feature for things like shadow map cascades before.

Ok, so what you gain is dividing the cost of encoding and submitting commands by ~2. Even with a giant render bundle there would not be this gain, so it seems to be useful on its own. Thanks for the details!

@mwyrzykowski
Author

> Ok, so what you gain is dividing the cost of encoding and submitting commands by ~2. Even with a giant render bundle there would not be this gain, so it seems to be useful on its own. Thanks for the details!

This is not the only cost; on some platforms there are memory costs from the multipass approach. The encoding and submitting are the web framework + graphics driver costs.

I suppose a benchmark would be good to illustrate the extent of the cost. I can look into that at some point.

@kainino0x kainino0x added this to the Milestone 2 milestone Sep 17, 2024