
Creating DirectML Device crashes at DMLCreateDevice1 #3360

Closed
RobinKa opened this issue Mar 29, 2020 · 3 comments
Labels
ep:DML issues related to the DirectML execution provider

Comments


RobinKa commented Mar 29, 2020

Describe the bug
When trying to use the DirectML provider a crash happens when calling DMLCreateDevice1 here. Replacing it with DMLCreateDevice works although I don't know if there are any unintended consequences:

THROW_IF_FAILED(DMLCreateDevice(d3d12_device.Get(),
                                flags,
                                IID_PPV_ARGS(&dml_device)));

Urgency
DirectML does not work for me without this fix. Not having DML is a big issue for me personally, as I need my program to run on GPUs from all vendors.

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 Pro 1903
  • ONNX Runtime installed from (source or binary): source with use_dml, build_wheel
  • ONNX Runtime version: 1.2.0 (current master)
  • Python version: 3.7.3
  • Visual Studio version (if applicable): 2019
  • GCC/Compiler version (if compiling from source): -
  • CUDA/cuDNN version: -
  • GPU model and memory: GTX 1070 (8 GB)

To Reproduce
Build onnxruntime with build.bat --config RelWithDebInfo --parallel --use_dml --cmake_generator "Visual Studio 16 2019". Try adding the DML provider (e.g. start with this sample and add OrtSessionOptionsAppendExecutionProvider_DML(session_options, 0);). The program will crash at the DMLCreateDevice1 call mentioned above (although I noticed that with a nonsensical device ID like 100 it goes through, but still seems to use the CPU).
Alternatively, with my recent PR, the following Python code crashes without a message at set_providers():

import onnxruntime as rt
session_opts = rt.SessionOptions()
session_opts.enable_mem_pattern = False
session = rt.InferenceSession("model.onnx", session_opts)
session.set_providers(["DmlExecutionProvider"])
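Since set_providers() fails hard when the build lacks a working DML provider, a defensive variant of the repro could check the registered providers first. onnxruntime's get_available_providers() is a real API; the helper below is a hypothetical sketch, not part of the original repro:

```python
def enable_dml(session, available_providers):
    # Hypothetical helper: only request the DML provider when the build
    # actually registered it; otherwise leave the session as-is (CPU).
    if "DmlExecutionProvider" in available_providers:
        session.set_providers(["DmlExecutionProvider"])
        return True
    return False
```

In the repro above one would call it as enable_dml(session, rt.get_available_providers()).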

Expected behavior
It should not crash and use the DirectML execution provider.

@hariharans29 added the ep:DML (issues related to the DirectML execution provider) label Mar 31, 2020

fdwr commented Apr 1, 2020

@RobinKa: Does the crash on your machine also occur when calling DMLCreateDevice1 with DML_FEATURE_LEVEL_1_0? (DMLCreateDevice is basically DMLCreateDevice1, except it defaults to feature level 1.0.) When running under the debugger, do you see the output L"DirectML: The requested minimum feature level is not supported by this device.\n" and get a DXGI_ERROR_UNSUPPORTED? Thanks.
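The relationship fdwr describes can be sketched in pure Python. This is a stand-in model, not the real DirectML API: the function names mirror the C exports, but the constant value, the dictionary-based "device", and the RuntimeError are illustrative assumptions.

```python
# Stand-in for the DML_FEATURE_LEVEL_1_0 constant from DirectML.h.
DML_FEATURE_LEVEL_1_0 = 0x1000

def dml_create_device1(d3d12_device, flags, min_feature_level):
    # Stand-in for DMLCreateDevice1: fail when the loaded DirectML.dll
    # cannot satisfy the requested minimum feature level.
    if min_feature_level > d3d12_device["max_feature_level"]:
        raise RuntimeError("DXGI_ERROR_UNSUPPORTED")
    return {"device": d3d12_device, "feature_level": min_feature_level}

def dml_create_device(d3d12_device, flags):
    # Stand-in for DMLCreateDevice: the same call, with the minimum
    # feature level defaulted to 1.0.
    return dml_create_device1(d3d12_device, flags, DML_FEATURE_LEVEL_1_0)
```

Under this model, an older DirectML.dll that only supports feature level 1.0 would still satisfy DMLCreateDevice, while a DMLCreateDevice1 call requesting a higher level would fail.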


RobinKa commented Apr 1, 2020

Passing DML_FEATURE_LEVEL_1_0 produces the same error (screenshot omitted). There's also no DXGI_ERROR_UNSUPPORTED error in the output.

But I just noticed that when I use the DirectML.dll provided with onnxruntime (in onnxruntime\build\Windows\packages\DirectML.0.0.1\bin\x64) it works; previously I was using the DLL that comes with Windows (the one from %windir%\System32).


fdwr commented Apr 1, 2020

@RobinKa: Thanks. Yes, you must use the preview DirectML.dll that ships with this version of the ONNX Runtime package. The older OS copy is not compatible with this package, because ORT relies on some newer APIs in the preview DML that don't exist in the OS version.
