onnx: Optimizer crashes on onnx model converted from Tensorflow graph
Hi, I am trying to invoke the ONNX optimizer following the instructions in this guide:
https://github.com/onnx/onnx/blob/master/docs/PythonAPIOverview.md#optimizing-an-onnx-model
The ONNX model is converted from a TensorFlow `.pb` file:
```python
from onnx import optimizer
from onnx_tf.frontend import tensorflow_graph_to_onnx_model

onnx_model = tensorflow_graph_to_onnx_model(graph_def, outputs, opset=7,
                                            ignore_unimplemented=True)

all_passes = optimizer.get_available_passes()
print("Available optimization passes:")
for p in all_passes:
    print(p)
print()

# Apply the optimization passes to the original model
passes = ['fuse_consecutive_transposes']
optimized_onnx_model = optimizer.optimize(onnx_model, passes)
```
I am getting a crash in the optimizer:
```
File "/home/xxx/PycharmProjects/virtualenv/local/lib/python3.5/site-packages/onnx/optimizer.py", line 52, in optimize
    optimized_model_str = C.optimize(model_str, passes)
IndexError: _Map_base::at
```
onnx version: 1.3.0
onnx-tf version: 1.1.2
Thanks!
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Reactions: 5
- Comments: 15 (2 by maintainers)
Hi all, I finally resolved this issue: you have to pass `keep_initializers_as_inputs=True` when exporting to ONNX, ever since this commit:
https://github.com/pytorch/pytorch/commit/7583519b870e33ee3182f330c1bb8663559697b6
I have the same problem. The error is `invalid unordered_map<K, T> key`.
Even if the underlying model is not correct, the error message should be better than `IndexError: _Map_base::at`!

I also encountered the same problem in the PyTorch → ONNX → Caffe2 pipeline. Eventually I found the cause: the model was exported on the CPU, while Caffe2 uses CUDA by default when importing, which produced the error. Once the device used in the conversion was made consistent, the problem was solved.
Please note that the ONNX optimizer has been moved to a separate repo, https://github.com/onnx/optimizer, since ONNX 1.9. If you still have this issue, please raise it there. Thank you!