rpi-object-tracking: Coral USB is not working

  • Raspberry Pi Deep PanTilt version: Raspberry Pi 4 (4 GB RAM, 64 GB SD card)
  • Python version: 3.7
  • Operating System: Debian Buster

Description

Ran "rpi-deep-pantilt detect --edge-tpu --loglevel=INFO" after installing the Coral USB Accelerator per the project's instructions & Google's guide (note: the Edge TPU runtime is now version 2.1).

Error: “RuntimeError: Internal: Unsupported data type in custom op handler: 0Node number 2 (EdgeTpuDelegateForCustomOp) failed to prepare.”

It works without "--edge-tpu".

What I Did: multiple re-installs & reboots; same result.

Paste the command(s) you ran and the output: "rpi-deep-pantilt detect --edge-tpu" and "rpi-deep-pantilt detect --edge-tpu --loglevel=INFO" both generate the same error.

Whole output of error from CMD
(.venv) pi@raspberrypi:~/rpi-deep-pantilt $ rpi-deep-pantilt detect --edge-tpu
INFO: Initialized TensorFlow Lite runtime.
Traceback (most recent call last):
  File "/home/pi/rpi-deep-pantilt/.venv/bin/rpi-deep-pantilt", line 8, in <module>
    sys.exit(main())
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/cli.py", line 107, in main
    cli()
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/cli.py", line 52, in detect
    model = SSDMobileNet_V3_Coco_EdgeTPU_Quant()
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/detect/ssd_mobilenet_v3_coco.py", line 56, in __init__
    self.tflite_interpreter.allocate_tensors()
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter.py", line 244, in allocate_tensors
    return self._interpreter.AllocateTensors()
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 106, in AllocateTensors
    return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Internal: Unsupported data type in custom op handler: 0Node number 2 (EdgeTpuDelegateForCustomOp) failed to prepare.


About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 20 (7 by maintainers)

Most upvoted comments

Complementing IanShow15's contribution, the following worked on my RPi 4:

  1. In your venv, install this wheel:

     pip3 install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_armv7l.whl

  2. Follow the same procedure as IanShow15, but at the last step replace lines 50-55 with the following instead:

     self.tflite_interpreter = tflite.Interpreter(
         model_path=self.model_path,
         experimental_delegates=[
             tflite.load_delegate(self.EDGETPU_SHARED_LIB)
         ])

@Martin2kid The problem here is that this repo is using tf.lite.experimental.load_delegate, which is a TensorFlow API: https://github.com/leigh-johnson/rpi-deep-pantilt/blob/master/rpi_deep_pantilt/detect/facessd_mobilenet_v2.py#L53

I suggest using the tflite_runtime API instead, which is the reason it was upgraded recently in the first place. You can see an example of how to do this here: https://github.com/google-coral/tflite/blob/master/python/examples/detection/detect_image.py#L57
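The pattern in the linked detect_image.py can be sketched as below. This is a minimal illustration, not part of rpi-deep-pantilt's API: the make_interpreter helper and its injectable parameters are assumptions made so the construction logic can be exercised without a Coral device attached.

    # Sketch of the tflite_runtime construction pattern from the linked
    # detect_image.py example. Helper name and injection parameters are
    # illustrative, not rpi-deep-pantilt's actual API.

    EDGETPU_SHARED_LIB = "libedgetpu.so.1"  # Linux name of the Edge TPU runtime

    def make_interpreter(model_path, interpreter_cls=None, load_delegate_fn=None):
        """Build a TFLite interpreter backed by the Edge TPU delegate.

        interpreter_cls and load_delegate_fn default to tflite_runtime's
        Interpreter and load_delegate, but can be injected for testing on
        machines without a Coral device.
        """
        if interpreter_cls is None or load_delegate_fn is None:
            import tflite_runtime.interpreter as tflite  # pip install tflite_runtime
            interpreter_cls = interpreter_cls or tflite.Interpreter
            load_delegate_fn = load_delegate_fn or tflite.load_delegate
        return interpreter_cls(
            model_path=model_path,
            experimental_delegates=[load_delegate_fn(EDGETPU_SHARED_LIB)],
        )

The key difference from the failing code is that both Interpreter and load_delegate come from tflite_runtime, which is built against the same commit as libedgetpu.so.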

Nam Vu, great! I appreciate your comment and link!

@Namburger

No worries / no apology necessary! Thank you for jumping in and helping everyone in this issue. ❤️

Let me know if there’s anything I can do to assist you in the meantime!

The Google GDE program operates behind Google NDAs (we don’t work for or otherwise represent Google though). GDEs often organize/participate in early access programs if you’re looking for extra feedback, testing, support before open sourcing. I have a couple USB accelerators, the Dev Board, and I’m more than happy to do bonkers things like try the SOM on an RPI running an aarch64 distribution like Mendel, Fedora, Arch, etc.

Shoot me an email hi@leighjohnson.me if you want to connect and chat more about working with the GDE program.

For anyone who needs a temporary fix, you can do the following:

  1. In your venv, install this wheel:

     pip3 install https://dl.google.com/coral/python/tflite_runtime-2.1.0-cp37-cp37m-linux_aarch64.whl

  2. Go into your Python environment's lib/site-packages, find rpi_deep_pantilt, and edit both facessd_mobilenet_v2.py and ssd_mobilenet_v3_coco.py as follows:

  • add this import at the top:

     import tflite_runtime.interpreter as tflite

  • replace lines 50-55:

     self.tflite_interpreter = tf.lite.Interpreter(
         model_path=self.model_path,
         experimental_delegates=[
             tf.lite.experimental.load_delegate(self.EDGETPU_SHARED_LIB)
         ]
     )

  • with the following:

     self.tflite_interpreter = tflite.Interpreter(
         model_path=model_file,
         experimental_delegates=[
             tflite.load_delegate(EDGETPU_SHARED_LIB)
         ])
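Note that the two wheels quoted in this thread target different platforms: the armv7l wheel is for 32-bit Raspbian, the aarch64 wheel for 64-bit OSes. A small sketch that picks the matching URL for the current interpreter and CPU (only the cp37 wheels quoted above are covered; the helper name is illustrative):

    import platform
    import sys

    # Wheel URLs quoted in this thread, keyed by (python tag, machine).
    TFLITE_WHEELS = {
        ("cp37", "armv7l"): "https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_armv7l.whl",
        ("cp37", "aarch64"): "https://dl.google.com/coral/python/tflite_runtime-2.1.0-cp37-cp37m-linux_aarch64.whl",
    }

    def tflite_wheel_url(py_tag=None, machine=None):
        """Return the tflite_runtime wheel URL for this interpreter/CPU."""
        py_tag = py_tag or "cp{}{}".format(*sys.version_info[:2])
        machine = machine or platform.machine()
        url = TFLITE_WHEELS.get((py_tag, machine))
        if url is None:
            raise RuntimeError(
                "no known tflite_runtime wheel for {}/{}".format(py_tag, machine))
        return url

Installing the wrong-architecture wheel fails at import time, so checking platform.machine() first saves a reinstall cycle.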

Thank you @namburger! 🙏 Appreciate the example code. I’ll fix this in my next release.

@leigh-johnson I mentioned you to our FAE team and our Developer Advocates; they'll contact you if we need your help!

@leigh-johnson

I’m also looking into why edgetpulib needs to package its own Interpreter and Delegate base classes, this is non-standard.

This is indeed non-standard, and we apologize. The issue is that libedgetpu started depending on tensorflow as a dependency, so you'd need the exact tensorflow commit that we used to build libedgetpu in order to be compatible. That's why we packaged the tflite_runtime package, which is built from the same commit as libedgetpu.so. The deeper reason is that our library isn't open source, which means users cannot build their own .so to fit their tensorflow package. We are working diligently on this issue. Thanks for this repo!

Hey y’all, this should be fixed in versions >= 1.2.0 - let me know if you experience any issues!

I’m also looking into why edgetpulib needs to package its own Interpreter and Delegate base classes, this is non-standard.

My understanding is that these interfaces are provided by TensorFlow’s tensorflow.lite lib, and that tf.lite.experimental.load_delegate can be used to load a library that implements a custom delegate class. The custom delegate is responsible for registering a kernel node, which parses a TensorFlow graph and “claims” operations it knows how to execute.

The Unsupported data type in custom op handler error raised by using libedgetpu’s shared object with tensorflow.lite smells like a SWIG typemap issue, but I’d need more context from Coral folks to get to the bottom of that issue.
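The delegate mechanics described above can be modeled with a toy sketch (purely illustrative, not the real TFLite delegate API): the delegate walks the graph, claims the custom ops it supports, and reports a prepare failure when a node carries a data type it cannot handle, which is the shape of the error at the top of this issue.

    # Toy model (illustrative only, not the real TFLite C API) of a delegate
    # that "claims" custom-op nodes and fails to prepare on an unsupported
    # data type.

    SUPPORTED_DTYPES = {"uint8"}  # Edge TPU-compiled models are uint8-quantized

    def prepare_delegate(nodes):
        """Partition custom-op nodes into claimed indices and prepare errors.

        Each node is a dict like {"custom_op": bool, "dtype": str}.
        """
        claimed, errors = [], []
        for i, node in enumerate(nodes):
            if not node["custom_op"]:
                continue  # left to the default CPU kernels
            if node["dtype"] in SUPPORTED_DTYPES:
                claimed.append(i)
            else:
                errors.append("Node number {} (custom op) failed to prepare: "
                              "unsupported data type {}".format(i, node["dtype"]))
        return claimed, errors

In the real failure, the delegate saw type code 0 (an unknown/invalid dtype) on node 2, consistent with a type-mapping mismatch between libedgetpu and the tensorflow build it was loaded into.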
