onnxruntime: DirectML runtime error in GPT-2-LM-HEAD model with variable-sized input
Describe the bug
When performing inference with GPT-2-LM-Head (https://github.com/onnx/models/tree/master/text/machine_comprehension/gpt-2), the DirectML execution provider fails whenever the input sequence length is not 8, with the message:
```
2020-12-08 14:07:36.3271185 [I:onnxruntime:, sequential_executor.cc:156 onnxruntime::SequentialExecutor::Execute] Begin execution
2020-12-08 14:07:36.4106464 [E:onnxruntime:, sequential_executor.cc:318 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running MatMul node. Name:'MatMul_2533' Status Message:
D:\5\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(1736)\onnxruntime.dll!00007FFFB08C7273: (caller: 00007FFFB08C6FE8) Exception(1) tid(65d0) 80070057 The parameter is incorrect.
```
Both the CPU and CUDA providers succeed with this model across varying input sizes. I am using the C++ API.
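For reference, a minimal sketch of the session setup used here, assuming onnxruntime 1.5.2 with the DirectML package; the model filename is illustrative, and the DML-specific options follow the DirectML execution provider documentation:

```cpp
// Sketch of the session setup that triggers the error (assumes
// onnxruntime 1.5.2 with the DirectML package; the model filename
// below is illustrative, not from the original report).
#include <onnxruntime_cxx_api.h>
#include <dml_provider_factory.h>

Ort::Session MakeDmlSession(Ort::Env& env) {
  Ort::SessionOptions options;
  // The DirectML EP requires memory patterns to be disabled and
  // sequential execution.
  options.DisableMemPattern();
  options.SetExecutionMode(ExecutionMode::ORT_SEQUENTIAL);
  Ort::ThrowOnError(
      OrtSessionOptionsAppendExecutionProvider_DML(options, /*device_id=*/0));
  return Ort::Session(env, L"gpt2-lm-head-10.onnx", options);
}
```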
Urgency
Low; we can fall back to the CPU and CUDA providers for the time being.
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- ONNX Runtime installed from (source or binary): binary 1.5.2
- ONNX Runtime version: Microsoft.ML.OnnxRuntime.DirectML 1.5.2
- Python version: N/A
- Visual Studio version (if applicable): VS 2019
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version: 10.2
- GPU model and memory: RTX2070 Super
To Reproduce
- Load and execute a session with GPT-2-LM-Head
- Confirm inference works for the input `input1: [3792, 428, 262, 1103, 1517, 393, 318, 428]` (length 8).
- Confirm the error appears for `input1: [3792, 428, 262, 1103, 1517, 393, 318, 428, 655]` or `input1: [3792, 428, 262, 1103, 1517, 393, 318]` (lengths 9 and 7 respectively); a C++ sketch of this call follows below.
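A sketch of the failing inference call. The `{1, seq_len}` shape and the tensor names `input1`/`output1` are assumptions based on the model zoo export, and `RunGpt2` is a hypothetical helper, not code from the original report:

```cpp
// Sketch of the failing call. The token ids come from the repro steps;
// the {1, seq_len} shape and the tensor names "input1"/"output1" are
// assumptions based on the model zoo export.
#include <onnxruntime_cxx_api.h>
#include <array>
#include <vector>

std::vector<Ort::Value> RunGpt2(Ort::Session& session,
                                std::vector<int64_t> input_ids) {
  Ort::MemoryInfo memory_info =
      Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);

  const std::array<int64_t, 2> shape{
      1, static_cast<int64_t>(input_ids.size())};
  Ort::Value input = Ort::Value::CreateTensor<int64_t>(
      memory_info, input_ids.data(), input_ids.size(),
      shape.data(), shape.size());

  const char* input_names[] = {"input1"};
  const char* output_names[] = {"output1"};
  // Length 8 succeeds on DirectML; lengths 7 and 9 throw the
  // 80070057 "parameter is incorrect" exception shown above.
  return session.Run(Ort::RunOptions{nullptr}, input_names, &input, 1,
                     output_names, 1);
}
```

With the length-8 token vector above the call completes; the length-9 and length-7 vectors raise the MatMul exception.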
Expected behavior
Inference completes without error and populates the output tensor.
Additional context
GPT-2-LM-HEAD uses ONNX version 1.6, so it is expected to work with DirectML.
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 17 (3 by maintainers)
> Well, what do you know, that fixed it! Thank you @Coice, that was of great help!