edgetpu: USB accelerator cannot run `edgetpu`-compiled model

My USB accelerator can no longer run an edgetpu-compiled model, while it can still run the uncompiled tflite model.

Everything is tested with the classification example from https://github.com/google-coral/tflite.

This is the error:

INFO: Initialized TensorFlow Lite runtime.
Traceback (most recent call last):
  File "classify_image.py", line 118, in <module>
    main()
  File "classify_image.py", line 96, in main
    interpreter.allocate_tensors()
  File "/home/ds017/.pyenv/versions/coral35/lib/python3.5/site-packages/tflite_runtime/interpreter.py", line 244, in allocate_tensors
    return self._interpreter.AllocateTensors()
  File "/home/ds017/.pyenv/versions/coral35/lib/python3.5/site-packages/tflite_runtime/interpreter_wrapper.py", line 114, in AllocateTensors
    return _interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Internal: Unsupported data type in custom op handler: 0Node number 1 (EdgeTpuDelegateForCustomOp) failed to prepare.

Until recently, everything was running fine. What might be wrong?
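
For context, classify_image.py creates the interpreter roughly like this (a sketch of the tflite_runtime delegate API, not the repo’s exact code):

from tflite_runtime.interpreter import Interpreter, load_delegate

# Loading an edgetpu-compiled model requires the Edge TPU delegate.
interpreter = Interpreter(
    'models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite',
    experimental_delegates=[load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()  # the call that raises the RuntimeError above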

Most upvoted comments

@feranick @gasgallo @alexanderfrey The fix is to just upgrade the tflite_runtime package!!

$ pip3 install https://dl.google.com/coral/python/tflite_runtime-2.1.0-cp37-cp37m-linux_aarch64.whl
Collecting tflite-runtime==2.1.0 from https://dl.google.com/coral/python/tflite_runtime-2.1.0-cp37-cp37m-linux_aarch64.whl
  Downloading https://dl.google.com/coral/python/tflite_runtime-2.1.0-cp37-cp37m-linux_aarch64.whl (1.9MB)
    100% |████████████████████████████████| 1.9MB 202kB/s 
Requirement already satisfied: numpy>=1.12.1 in /usr/lib/python3/dist-packages (from tflite-runtime==2.1.0) (1.16.2)
Installing collected packages: tflite-runtime
  Found existing installation: tflite-runtime 1.15.0
    Uninstalling tflite-runtime-1.15.0:
      Successfully uninstalled tflite-runtime-1.15.0
Successfully installed tflite-runtime-2.1.0
$ python3 classify_image.py --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels models/inat_bird_labels.txt --input images/parrot.jpg
----INFERENCE TIME----
Note: The first inference on Edge TPU is slow because it includes loading the model into Edge TPU memory.
13.5ms
3.1ms
3.1ms
3.0ms
3.1ms
-------RESULTS--------
Ara macao (Scarlet Macaw): 0.76562
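
To verify the upgrade actually took effect at runtime, here is a quick check (a minimal sketch; pkg_resources is one option among several):

# Confirm the installed tflite_runtime version (should print 2.1.0 after the upgrade).
import pkg_resources
print(pkg_resources.get_distribution('tflite-runtime').version)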

Sorry guys for the delayed responses, we’ve been getting tons of issues. But if you are planning to build your own tflite_runtime package or even a tensorflow pip package, please use this commit!

@feranick I’m aware, we are working to get all these fixes in! Thanks

@feranick @gasgallo @alexanderfrey Update: We’ll fix the google-coral/tflite repo to align with the new changes soon. Stay tuned for updates on the new release on our /news page!

@Namburger is there anything special about the way this tflite_runtime is built? I’m trying to build tflite_runtime from master because I need a recent fix (https://github.com/tensorflow/tensorflow/issues/33691). I’m using the following:

tensorflow/lite/tools/pip_package/build_pip_package.sh

But the resulting pip package gives me the following error when running inference:

RuntimeError: Internal: Unsupported data type in custom op handler: 39898280Node number 7 (EdgeTpuDelegateForCustomOp) failed to prepare.

With the tflite_runtime you mentioned, the one from here, it works, but the resizing bug is still present, so the output is incorrect. I’m guessing the version you posted is based on the v2.1.0 branch? How can I build a pip wheel based on master?

My fault, problem solved!

I was using:

tf.lite.Interpreter(model_path, experimental_delegates=[tf.lite.experimental.load_delegate("libedgetpu.so.1")]) 

instead of using tflite_runtime:

from tflite_runtime.interpreter import load_delegate
from tflite_runtime.interpreter import Interpreter

Interpreter(model_path, experimental_delegates=[load_delegate("libedgetpu.so.1")])
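
For completeness, a minimal end-to-end run with this API looks roughly like the following (a sketch: the model path is a placeholder, and real preprocessing is model-specific):

import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    'model_edgetpu.tflite',  # placeholder path
    experimental_delegates=[load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed dummy data with the model's expected shape and dtype.
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]['index'])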

Thanks. All seems to be working and this issue can be closed.

@feranick @Namburger The problem is solved once you install the updated tflite_runtime package 2.1.0, as noted in the news (https://coral.ai/news/updates-01-2020/). That was an obvious one…

Thanks guys for the great work, can’t wait to see the new hardware 👍 Alexander

I confirm @Namburger’s solution works. This should close the issue. Thanks very much!

I confirm that version 13.0 of the runtime (libedgetpu1) fails on all models that previously worked with runtime version 12.1. This is regardless of the TF version with which the initial tflite models were generated and of the edgetpu-compiler version used for the conversion.

As a note: the older runtimes are no longer available, so inference is currently broken unless one avoids updating to the new runtime. That is hard to do, as the new runtime is currently pushed through the apt upgrade process (holding the package with apt-mark hold is one way to opt out).
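
If you need to confirm which runtime is installed from Python, the (legacy) edgetpu library exposes a helper; a sketch, assuming that library is present:

# Print the installed Edge TPU runtime (libedgetpu) version string.
from edgetpu.basic.edgetpu_utils import GetRuntimeVersion
print(GetRuntimeVersion())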