Export PyTorch to ONNX #155
This isn"t a bug, it"s just functionality not implemented since it"s non-trivial. See #89 and #32 ... I"ll leave this one open so another issue isn"t created. I have no plans to tackled this in the near future, it would not be a learning experience for me. Others have asked, nobody has offered any help or code. If someone gets it working with included demo export & inference script I"d accept a PR. |
Hi,
I have tried using opset 11 as well, but it didn't work. Do you think it is the Swish operator that is causing this error in my execution, where it says the `__is_` operator is not currently supported by ONNX? Or is it something else? The error log doesn't display which method or logic failed to convert to ONNX. Is there a way I can find that out, so that I can bypass it by rewriting the logic in a way acceptable to ONNX? @Ekta246 @rwightman Btw, I have also tried changing config.act_type to both 'silu' and 'relu' instead of 'swish'. Neither of these helped.
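For reference, the `__is_` operator in that error usually comes from a Python `is` comparison (e.g. an `img_info is None` check) hit during tracing, not from the activation itself. If the activation does turn out to be the blocker, one common workaround is to swap in a hand-written Swish built only from primitives ONNX supports. A minimal sketch, assuming a model whose activations are standard `nn.SiLU` modules; the helper name is illustrative, not part of effdet's API:

```python
import torch
import torch.nn as nn

class ExportableSwish(nn.Module):
    """Swish/SiLU expressed with ops ONNX has supported since early opsets."""
    def forward(self, x):
        return x * torch.sigmoid(x)

def replace_silu(module: nn.Module) -> None:
    # Recursively swap every nn.SiLU for the exportable equivalent, in place.
    for name, child in module.named_children():
        if isinstance(child, nn.SiLU):
            setattr(module, name, ExportableSwish())
        else:
            replace_silu(child)
```

After calling `replace_silu(net)`, the export can be retried; if the same `__is_` error persists, the problem is elsewhere (as a later comment in this thread suggests, in the bench post-processing).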
Just a heads up!
Hi @saikrishna-pallerla Any success since your last post? I've run into the same problem. Another user said he's seen success with non-tf backbones, but so far that hasn't helped me.
I got the exact issue as raised for this ONNX conversion, like below. But I could partially make it work, so I'm leaving my learnings here in case they become useful for someone else.

```
RuntimeError: Exporting the operator __is_ to ONNX opset version 13 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.
```

What I was trying to export was:

```python
import torch
from effdet import get_efficientdet_config, EfficientDet, DetBenchPredict

config = get_efficientdet_config("efficientdet_d1")
net = EfficientDet(config, pretrained_backbone=True)
net = DetBenchPredict(net)
net.eval()
torch.onnx.export(net.cuda(),            # model being run
                  (torch.randn(1, 3, 512, 512).cuda(), {"img_info": None}),  # model input (or a tuple for multiple inputs)
                  "effdet_all.onnx",      # where to save the model (can be a file or file-like object)
                  input_names=["input"],    # the model's input names
                  output_names=["output"],  # the model's output names
                  opset_version=13, verbose=True)
```

I used torch==1.9.1.

Next, I narrowed down the conversion target to figure out where the error is coming from. I extracted the core model and exported only that:

```python
model = net.model
model.eval()
torch.onnx.export(model.cuda(),          # model being run
                  torch.randn(1, 3, 512, 512).cuda(),  # model input (or a tuple for multiple inputs)
                  "effdet_modelpart.onnx",  # where to save the model (can be a file or file-like object)
                  input_names=["input"],    # the model's input names
                  output_names=["output"],  # the model's output names
                  opset_version=13, verbose=True)
```

I saw that other people were reporting that the type of activation layer affects ONNX exportability, but I didn't find a way to specify exportable=True for the non-backbone part of the model. The relevant hack in timm looks like this:

```python
if is_exportable() and name in ('silu', 'swish'):
    # FIXME PyTorch SiLU doesn't ONNX export, this is a temp hack
    return swish
```

From this result, it's clear that the error is coming from the post-processing stage in the forward method of DetBenchPredict. So the current potential workaround would be either to export only net.model and implement the post-processing outside the ONNX graph, or to rewrite the post-processing so that it exports cleanly. I'm still new to both ONNX and this codebase, so I haven't given enough thought to which is easier.
Has anyone been able to successfully convert tf_efficientnetv2_s from timm to ONNX?
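Not a verified recipe, but timm's `create_model` accepts an `exportable=True` flag intended to select ONNX-friendlier layer implementations, which may be worth trying first. A sketch; the input size and opset here are illustrative, and behavior can vary across timm versions:

```python
import timm
import torch

# Assumption: exportable=True makes timm pick activations/padding variants
# that trace cleanly to ONNX; this is not guaranteed for every model.
model = timm.create_model("tf_efficientnetv2_s", pretrained=True, exportable=True)
model.eval()
torch.onnx.export(model, torch.randn(1, 3, 384, 384), "efficientnetv2_s.onnx",
                  input_names=["input"], output_names=["output"],
                  opset_version=13)
```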
I have (hopefully) working ONNX exports detailed here: #302
I am trying to get a model trained on a custom dataset with PyTorch exported to ONNX, then converted to TensorRT to run on a Jetson Nano. However, I am unable to convert the model to ONNX. Below is the code I am using:
But it throws the error below:
Can someone help me understand the issue and help fix it? @rwightman It would be great if you could provide guidance here.