edgetpu: Can not allocate tensor on Windows.
I wrote an object detector that uses a webcam and is accelerated with a Coral EdgeTPU, based on my code:
https://github.com/mattn/webcam-detect-tflite
Here is the version modified for the EdgeTPU:
https://gist.github.com/819be2f3c70379a659984aa199d756e0
Compilation succeeded, but when I run the app, AllocateTensors fails:
Loading model: mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite
ERROR: Internal: :159 batches * single_input_size != input->bytes (307200 != 8136)
ERROR: Node number 2 (EdgeTpuDelegateForCustomOp) failed to prepare.
What is wrong? I can confirm that my Python code, which does a similar thing, works correctly. I tried both the master branch of tensorflow and commit d855adfc5a0195788bf5f92c3c7352e638aa1109.
EDIT: FYI, this C++ code works fine on Linux.
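For reference, below is a minimal sketch of the kind of setup the gist performs. This is not the actual gist code; the model path is taken from the log above, and the use of the edgetpu.h custom-op API is an assumption based on the EdgeTpuDelegateForCustomOp node named in the error.

```cpp
#include <iostream>
#include <memory>

#include "edgetpu.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Model path taken from the log above.
  const char* model_path =
      "mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite";

  // Load the compiled *_edgetpu.tflite model.
  auto model = tflite::FlatBufferModel::BuildFromFile(model_path);
  if (!model) {
    std::cerr << "Failed to load model" << std::endl;
    return 1;
  }

  // Register the Edge TPU custom op so the edgetpu-custom-op node resolves.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  resolver.AddCustom(edgetpu::kCustomOp, edgetpu::RegisterCustomOp());

  std::unique_ptr<tflite::Interpreter> interpreter;
  if (tflite::InterpreterBuilder(*model, resolver)(&interpreter) != kTfLiteOk) {
    std::cerr << "Failed to build interpreter" << std::endl;
    return 1;
  }

  // Open the USB accelerator and attach its context to the interpreter.
  std::shared_ptr<edgetpu::EdgeTpuContext> tpu_context =
      edgetpu::EdgeTpuManager::GetSingleton()->OpenDevice();
  interpreter->SetExternalContext(kTfLiteEdgeTpuContext, tpu_context.get());
  interpreter->SetNumThreads(1);

  // This is the call that fails on Windows with the error shown above.
  if (interpreter->AllocateTensors() != kTfLiteOk) {
    std::cerr << "AllocateTensors failed" << std::endl;
    return 1;
  }

  std::cout << "Tensors allocated; input bytes = "
            << interpreter->tensor(interpreter->inputs()[0])->bytes << std::endl;
  return 0;
}
```

For what it's worth, 307200 in the error matches a 320x320x3 uint8 input, so the size the runtime computes for the Edge TPU node appears inconsistent with the model, which fits the version-mismatch explanation in the comments below.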
@mattn Unfortunately no, but I closed this because it can't be fixed while libedgetpu is closed source. The next release is going to be compatible with a newer tensorflow commit. FYI, we are in talks about open-sourcing the libraries; at that point all of these issues will go away!
Awesome, thanks @Namburger! DOODS is now using the open source library!
Will check it out. Thanks!
@snowzach
Hmm, I've actually never heard of this, do you have a reference?
No timeline on an official release, unfortunately 😕 However, libedgetpu just became open sourced! So in theory you can 1) build libedgetpu and 2) build tensorflow-lite.a on the same commit, and things should work in harmony. The tip of the libedgetpu repo is currently on f394a768, which is much newer than the previous release. Here is a quick guide for doing this on an x86 machine with a USB accelerator. I haven't tested the cross compilation yet, but you should be able to do make CPU=aarch64 to produce libedgetpu for the devboard (the build was successful for me, I just haven't tested it).
I found doods 2 hours ago. 😃