
D3D11 Vulkan interop #16426

Open · wants to merge 5 commits into master
Conversation

amerkoleci
Contributor

What does the pull request do?

Implements D3D11 interop (with D3D12 possible in the future) inside Avalonia when the Vulkan backend is used.

What is the current behavior?

Currently the Vulkan backend is not supported when using D3D11.
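For context, a minimal sketch (not part of this PR) of how a Windows app opts into the Vulkan backend that this interop targets; Avalonia 11.x option names are assumed, and Win32RenderingMode.Vulkan is experimental:

using Avalonia;

public static class Program
{
    // `App` stands in for the application's Application subclass.
    public static AppBuilder BuildAvaloniaApp() =>
        AppBuilder.Configure<App>()
            .UsePlatformDetect()
            .With(new Win32PlatformOptions
            {
                // Try Vulkan first, fall back to ANGLE (D3D11 via EGL).
                RenderingMode = new[]
                {
                    Win32RenderingMode.Vulkan,
                    Win32RenderingMode.AngleEgl,
                }
            });
}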

Checklist

Breaking changes

Obsoletions / Deprecations

Fixed issues

…lKeyedMutex and import from VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D11_TEXTURE_BIT
…upport, using IsVulkanBacked, Angle OpenGL is still supported.
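For reference, a hedged sketch (Silk.NET.Vulkan, unsafe context; not the PR's actual code) of the import path the commits above describe. `vk`, `device`, `image`, `memoryRequirements`, `memoryTypeIndex` and `sharedHandle` (an NT handle from IDXGIResource1::CreateSharedHandle) are assumed to exist; older Silk.NET releases spell the flag ExternalMemoryHandleTypeD3D11TextureBit.

// Chain a dedicated allocation (typically required when importing
// D3D11 textures) and the Win32 handle import into the allocation.
var dedicated = new MemoryDedicatedAllocateInfo
{
    SType = StructureType.MemoryDedicatedAllocateInfo,
    Image = image,
};
var import = new ImportMemoryWin32HandleInfoKHR
{
    SType = StructureType.ImportMemoryWin32HandleInfoKhr,
    PNext = &dedicated,
    HandleType = ExternalMemoryHandleTypeFlags.D3D11TextureBit,
    Handle = sharedHandle,
};
var allocInfo = new MemoryAllocateInfo
{
    SType = StructureType.MemoryAllocateInfo,
    PNext = &import,
    AllocationSize = memoryRequirements.Size,
    MemoryTypeIndex = memoryTypeIndex,
};
vk.AllocateMemory(device, allocInfo, null, out var deviceMemory);
vk.BindImageMemory(device, image, deviceMemory, 0);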
@amerkoleci changed the title from D3d11 Vulkan interop to D3D11 Vulkan interop on Jul 23, 2024
@amerkoleci
Contributor Author

amerkoleci commented Jul 23, 2024

fixes #16411

[screenshot]

@cla-avalonia
Collaborator

cla-avalonia commented Jul 23, 2024

  • All contributors have signed the CLA.

@rabbitism

This comment has been minimized.

@amerkoleci
Contributor Author

> fixes #16426
>
> should be 16411?

Yes, my bad :)

@amerkoleci
Contributor Author

@cla-avalonia agree

/// <param name="acquire">The acquire callback to call before accessing the image</param>
/// <param name="release">The release callback to call after accessing the image</param>
/// <returns>A task that completes when the update operation is finished and user code is free to destroy or dispose the image</returns>
public Task UpdateWithExternalKeyedMutexAsync(ICompositionImportedGpuImage image, Action acquire, Action release)
Member

I believe that it won't work properly.

The mutex controls resources on the GPU side and should be locked/released as part of the GPU command stream, not on the command-issuer side. If you simply lock/unlock in the renderer, it won't affect the already-enqueued Vulkan commands; those would still be submitted to the queue without locks.

At most you'll stall the CPU.

Please use proper Vulkan synchronization primitives that are capable of using DXGI keyed mutexes.

@kekekeks
Member

See

if (keyedMutex != null)
    mutex = new Win32KeyedMutexAcquireReleaseInfoKHR
    {
        SType = StructureType.Win32KeyedMutexAcquireReleaseInfoKhr,
        AcquireCount = keyedMutex.AcquireKey.HasValue ? 1u : 0u,
        ReleaseCount = keyedMutex.ReleaseKey.HasValue ? 1u : 0u,
        PAcquireKeys = &acquireKey,
        PReleaseKeys = &releaseKey,
        PAcquireSyncs = &devMem,
        PReleaseSyncs = &devMem,
        PAcquireTimeouts = &timeout
    };

fixed (Semaphore* pWaitSemaphores = waitSemaphores, pSignalSemaphores = signalSemaphores)
{
    fixed (PipelineStageFlags* pWaitDstStageMask = waitDstStageMask)
    {
        var commandBuffer = InternalHandle;
        var submitInfo = new SubmitInfo
        {
            PNext = keyedMutex != null ? &mutex : null,
            SType = StructureType.SubmitInfo,
            WaitSemaphoreCount = waitSemaphores != null ? (uint)waitSemaphores.Length : 0,
            PWaitSemaphores = pWaitSemaphores,
            PWaitDstStageMask = pWaitDstStageMask,
            CommandBufferCount = 1,
            PCommandBuffers = &commandBuffer,
            SignalSemaphoreCount = signalSemaphores != null ? (uint)signalSemaphores.Length : 0,
            PSignalSemaphores = pSignalSemaphores,
        };
        _api.ResetFences(_device, 1, fence.Value);
        _api.QueueSubmit(_queue, 1, submitInfo, fence.Value);
    }
}
_commandBufferPool.DisposeCommandBuffer(this);

buffer.Submit(null, null, null, null, new VulkanCommandBufferPool.VulkanCommandBuffer.KeyedMutexSubmitInfo
{
    AcquireKey = 0,
    DeviceMemory = _image.DeviceMemory
});

https://github.com/AvaloniaUI/Avalonia/blob/master/samples/GpuInterop/VulkanDemo/VulkanSwapchain.cs#L114-L121
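For completeness, the matching D3D11 producer side of that handshake uses the standard IDXGIKeyedMutex calls; a hedged sketch, where the C# binding and the `keyedMutex` variable (queried from a texture created with D3D11_RESOURCE_MISC_SHARED_KEYED_MUTEX) are assumptions:

// Wait (here: indefinitely) until the other device releases key 0.
keyedMutex.AcquireSync(0, uint.MaxValue);

// ... issue and flush the D3D11 work that writes the shared texture ...

// Hand ownership back; the Vulkan submit above then acquires key 0.
keyedMutex.ReleaseSync(0);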

@amerkoleci
Contributor Author

Thanks! updated

@avaloniaui-bot

You can test this PR using the following package version: 11.2.999-cibuild0050408-alpha (feed url: https://nuget-feed-all.avaloniaui.net/v3/index.json)
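For reference, one way to consume such a CI build (standard NuGet mechanics; the umbrella package id Avalonia and the source key name are assumptions): add the feed to a NuGet.config next to the solution,

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- CI feed from the bot comment above -->
    <add key="avalonia-all" value="https://nuget-feed-all.avaloniaui.net/v3/index.json" />
  </packageSources>
</configuration>

and then reference the build, e.g. dotnet add package Avalonia --version 11.2.999-cibuild0050408-alpha.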

@maxkatz6 maxkatz6 requested a review from kekekeks July 24, 2024 22:01
@avaloniaui-bot

You can test this PR using the following package version: 11.2.999-cibuild0050444-alpha (feed url: https://nuget-feed-all.avaloniaui.net/v3/index.json)

@amerkoleci
Contributor Author

Hi,
Are there any plans to merge this?

Thanks
