tflite: ValueError: Didn't find custom op for name 'edgetpu-custom-op' with version 1

I am trying to run my first test of the USB Accelerator using the classify_image.py script, and I’m getting an error when initializing the Interpreter:

$ python3 classify_image.py \                                                                      
--model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \                                
--labels models/inat_bird_labels.txt \                                                              
--image images/parrot.jpg                                                                           
Initializing TF Lite interpreter...                                                                 
Traceback (most recent call last):                                                                  
  File "classify_image.py", line 118, in <module>                                                   
    main()                                                                                          
  File "classify_image.py", line 95, in main                                                        
    experimental_delegates=[load_delegate('libedgetpu.so.1.0')])                                    
  File "/usr/local/lib/python3.5/dist-packages/tflite_runtime/interpreter.py", line 206, in __init__
    model_path))                                                                                    
ValueError: Didn't find custom op for name 'edgetpu-custom-op' with version 1                       
Registration failed.                                                                                

I believe I have installed everything following this tutorial: https://coral.withgoogle.com/docs/accelerator/get-started/ including the installation of the runtime from https://www.tensorflow.org/lite/guide/python. I chose the tflite_runtime-1.14.0-cp35-cp35m-linux_x86_64.whl for my system.

Here are some system details: Ubuntu 16.04.6 LTS 64-bit python 3.5.2
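For context, the call that fails in classify_image.py corresponds roughly to the sketch below. The helper name is mine; `libedgetpu.so.1.0` is the delegate shared library installed by the Coral runtime, and the import is deferred so the sketch loads even where tflite_runtime is absent:

```python
def make_edgetpu_interpreter(model_path, delegate_lib="libedgetpu.so.1.0"):
    """Build a TF Lite Interpreter with the Edge TPU delegate attached.

    Hypothetical helper mirroring what classify_image.py does around the
    line shown in the traceback. When the installed tflite_runtime cannot
    resolve the Edge TPU op, constructing the Interpreter raises
    ValueError: Didn't find custom op for name 'edgetpu-custom-op'.
    """
    # Deferred import: only needed when the helper is actually called.
    from tflite_runtime.interpreter import Interpreter, load_delegate
    return Interpreter(
        model_path=model_path,
        experimental_delegates=[load_delegate(delegate_lib)])
```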

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 1
  • Comments: 31 (1 by maintainers)

Most upvoted comments

Other edgetpu_api TPU code is running fine on this system, so it’s not that. But I’ve certainly made this mistake multiple times in the past!

I managed to get the broken tflite_runtime removed, and now your cut-and-pasted script runs:

tflite/python/examples/classification$ python3 x.py \
    --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
    --labels models/inat_bird_labels.txt \
    --image images/parrot.jpg
TF VERSION: 1.15.0
Initializing TF Lite interpreter...
----INFERENCE TIME----
Note: The first inference on Edge TPU is slow because it includes loading the model into Edge TPU memory.
12.9ms
3.6ms
3.4ms
3.5ms
3.4ms
-------RESULTS--------
923 Ara macao (Scarlet Macaw): 0.78516

But I can’t figure out how to change the downloaded tflite/python/examples/classification program to use tensorflow instead of tflite_runtime (the version that takes --input instead of --image).

Thanks for your help.

@Namburger I installed the tf-nightly package using pip3 in a virtual environment, and your script runs fine.

TF VERSION:  2.1.0-dev20191024
Initializing TF Lite interpreter...
----INFERENCE TIME----
Note: The first inference on Edge TPU is slow because it includes loading the model into Edge TPU memory.
12.9ms
4.6ms
4.3ms
4.4ms
4.8ms
-------RESULTS--------
923 Ara macao (Scarlet Macaw): 0.76562

Thank you very much for your help.

@Namburger Thanks, I merged the changes from your GitHub repo into the downloaded example from the web-page instructions, and it runs fine now.

I never would have figured out replacing tflite.load_delegate() with tf.compat.v2.lite.experimental.load_delegate() on my own.
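That substitution can be sketched as a guarded import. The tf.compat.v2 path keeps the same attribute names working from TF 1.15 onward; the HAVE_TF flag is my addition so the snippet degrades gracefully where TensorFlow isn’t installed:

```python
# Resolve the Interpreter/load_delegate pair from full TensorFlow
# instead of the broken tflite_runtime.
try:
    import tensorflow as tf
    Interpreter = tf.compat.v2.lite.Interpreter
    load_delegate = tf.compat.v2.lite.experimental.load_delegate
    HAVE_TF = True
except ImportError:
    Interpreter = load_delegate = None
    HAVE_TF = False

if HAVE_TF:
    print("TF VERSION:", tf.__version__)
```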

How will I know if/when the tflite_runtime gets fixed for 2.11.2? Or will it be when 2.11.3 comes out?

I have an “if it ain’t broke, don’t fix it!” mentality, so I’d stayed with 1.92.2 until I tried Posenet, which required 2.11.1, and then the problems started. I got Posenet working fine and then discovered that all my other TPU code was now broken.

Everything is back to working fine now.

@mrazekv No problem, thanks for helping me diagnose the issue. The only difference here is that we’re using tf’s:

tf.compat.v2.lite.Interpreter
tf.compat.v2.lite.experimental.load_delegate

instead of

from tflite_runtime.interpreter import Interpreter
from tflite_runtime.interpreter import load_delegate

I believe tflite_runtime is the issue, but we’re unable to reproduce this on our end o_0
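One common way to support both API flavors at once is a try/except fallback: prefer the lightweight tflite_runtime and fall back to full TensorFlow when it is missing or broken. A minimal sketch (the BACKEND name is my own):

```python
# Prefer the lightweight tflite_runtime; fall back to full TensorFlow.
try:
    from tflite_runtime.interpreter import Interpreter, load_delegate
    BACKEND = "tflite_runtime"
except ImportError:
    try:
        import tensorflow as tf
        Interpreter = tf.compat.v2.lite.Interpreter
        load_delegate = tf.compat.v2.lite.experimental.load_delegate
        BACKEND = "tensorflow"
    except ImportError:
        Interpreter = load_delegate = None
        BACKEND = None

print("Using backend:", BACKEND)
```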

@mrharicot @travisariggs Could you guys try the edgetpu api instead of the tflite_runtime api? https://github.com/google-coral/edgetpu/blob/master/examples/classify_image.py At the moment I’d like to pinpoint the issue, to see whether it’s a tflite problem or a libedgetpu problem.

I have the exact same issue with the same system: Ubuntu 16.04.6 LTS 64-bit python 3.5.2