edgetpu: USB accelerator cannot run `edgetpu`-compiled model
My USB accelerator can no longer run the `edgetpu`-compiled model, while it can still run the non-compiled tflite model.
Everything is tested with the classification example from https://github.com/google-coral/tflite.
This is the error:
```
INFO: Initialized TensorFlow Lite runtime.
Traceback (most recent call last):
  File "classify_image.py", line 118, in <module>
    main()
  File "classify_image.py", line 96, in main
    interpreter.allocate_tensors()
  File "/home/ds017/.pyenv/versions/coral35/lib/python3.5/site-packages/tflite_runtime/interpreter.py", line 244, in allocate_tensors
    return self._interpreter.AllocateTensors()
  File "/home/ds017/.pyenv/versions/coral35/lib/python3.5/site-packages/tflite_runtime/interpreter_wrapper.py", line 114, in AllocateTensors
    return _interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Internal: Unsupported data type in custom op handler: 0Node number 1 (EdgeTpuDelegateForCustomOp) failed to prepare.
```
Some time ago everything was running fine. What might be wrong?
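For context, the failure happens when the Edge TPU delegate is attached to the interpreter. A minimal sketch of that setup, assuming the standard tflite_runtime delegate API and a placeholder model path (not code taken from this thread):

```python
# Sketch of the interpreter setup that fails at allocate_tensors() when the
# installed tflite_runtime and Edge TPU runtime versions do not match.
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path='model_edgetpu.tflite',  # placeholder edgetpu-compiled model
    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()  # raises the RuntimeError shown above
```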
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 59
@feranick @gasgallo @alexanderfrey The fix is to just upgrade the tflite_runtime package!!
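To verify what is actually installed, one option is to list the tflite-related packages in the environment; a small sketch using standard pkg_resources, not a command from the thread:

```python
# Print the installed tflite_runtime version; after the fix this should
# report 2.1.0 (or newer).
import pkg_resources

for dist in pkg_resources.working_set:
    if 'tflite' in dist.project_name.lower():
        print(dist.project_name, dist.version)
```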
Sorry for the delayed responses, everyone; we've been getting tons of issues. If you are planning to build your own tflite_runtime package or even a tensorflow pip package, please use this commit!
@feranick I'm aware, we are working to get all of these fixed! Thanks
@feranick @gasgallo @alexanderfrey Update: We'll fix the google-coral/tflite repo to be aligned with the new changes soon. Stay tuned for updates on the new release on our /news page!
@Namburger is there anything special about the way this tflite_runtime is built? I'm trying to build tflite_runtime from `master` because I need a recent fix (https://github.com/tensorflow/tensorflow/issues/33691), but the resulting pip package gives me an error when running inference.
With the tflite_runtime you mentioned, the one from here, it works, but the resizing still has the bug, so the output is incorrect. I'm guessing the version you posted is based on the `v2.1.0` branch? How can I make a pip wheel based on `master`?
My fault, problem solved! I was not importing the interpreter from tflite_runtime (a hedged reconstruction of both snippets follows below). Thanks. All seems to be working and this issue can be closed.
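The two code snippets in that comment did not survive in this archive; what follows is a hedged reconstruction of the likely before/after, assuming the common pitfall of using the interpreter bundled with full TensorFlow instead of the standalone runtime (the exact original code is unknown):

```python
# Likely 'before' (an assumption, not recovered from the thread):
#   import tensorflow as tf
#   interpreter = tf.lite.Interpreter(model_path='model_edgetpu.tflite')

# 'After', per the thread's advice: use the standalone tflite_runtime
# interpreter, which is versioned together with the Edge TPU runtime.
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path='model_edgetpu.tflite',  # placeholder path
    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
```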
@feranick @Namburger Problem is solved once you install the updated tflite_runtime package 2.1.0 as noted in the news: https://coral.ai/news/updates-01-2020/ That was an obvious one…
Thanks guys for the great work, can’t wait to see the new hardware 👍 Alexander
I confirm @Namburger's solution works. This should close the issue. Thanks very much!
I confirm that version 13.0 of the runtime (libedgetpu1) fails on all models that previously worked on runtime version 12.1, regardless of the TF version the original tflite models were generated with and of the edgetpu-compiler used for the conversion.
As a note: the older runtimes are no longer available, so inference is currently broken for anyone who updates to the new runtime. Avoiding the update is hard, as the new packages are currently pushed through the apt upgrade process.