tensorflow: Dev Board Interpreter Runtime Error
I am working on the Coral Dev Board and trying to deploy a segmentation model on it. When I run my DeepLab segmentation model, it gives me the following error:
Traceback (most recent call last):
File "infer.py", line 17, in <module>
interpreter.allocate_tensors()
File "/home/mendel/.local/lib/python3.5/site-packages/tflite_runtime/interpreter.py", line 244, in allocate_tensors
return self._interpreter.AllocateTensors()
File "/home/mendel/.local/lib/python3.5/site-packages/tflite_runtime/interpreter_wrapper.py", line 114, in AllocateTensors
return _interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Internal: :71 tf_lite_type != kTfLiteUInt8 (9 != 3)Node number 79 (EdgeTpuDelegateForCustomOp) failed to prepare.
The model and script work fine if I don't compile the model for the Edge TPU with edgetpu_compiler.
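For context on the numbers in the error: in the TfLiteType enum, 3 is kTfLiteUInt8 and 9 is kTfLiteInt8, so the delegate appears to be rejecting an int8-quantized tensor where this Edge TPU runtime expects uint8. A quick way to check is to inspect the quantized model before Edge TPU compilation; a minimal sketch, where "deep_lab_quant.tflite" is a placeholder for the pre-compiler model file:
from tflite_runtime.interpreter import Interpreter

# Load the quantized (but not Edge-TPU-compiled) model without any delegate.
interpreter = Interpreter(model_path="deep_lab_quant.tflite")
interpreter.allocate_tensors()
for detail in interpreter.get_input_details():
    # numpy.uint8 here matches what the delegate in the traceback expects;
    # numpy.int8 would explain the kTfLiteUInt8 check failing.
    print(detail['name'], detail['dtype'])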
Code to reproduce the issue
import numpy as np
from tflite_runtime.interpreter import Interpreter
from tflite_runtime.interpreter import load_delegate

# Random float32 test image shaped like the model input (1, 480, 480, 3).
test_data = np.random.rand(480, 480, 3)
img = np.array([test_data], dtype=np.float32)

# Load the Edge-TPU-compiled model with the Edge TPU delegate.
interpreter = Interpreter(
    model_path="deep_lab_quant_edgetpu.tflite",
    experimental_delegates=[load_delegate('libedgetpu.so.1.0')])
interpreter.allocate_tensors()  # fails here with the RuntimeError above

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

interpreter.set_tensor(input_details[0]['index'], img)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
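A side note on the script above (not the cause of the allocate_tensors failure, but the next hurdle once the model loads): a fully quantized model takes (u)int8 input, so the float32 image would also need to be quantized with the input tensor's scale and zero point before set_tensor(). A minimal sketch, reusing the names from the script:
scale, zero_point = input_details[0]['quantization']
info = np.iinfo(input_details[0]['dtype'])  # uint8 or int8 value range
quantized = np.clip(np.round(test_data / scale + zero_point), info.min, info.max)
img_q = np.expand_dims(quantized, 0).astype(input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], img_q)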
I get the same error with my custom image-processing model. I don't know the root cause, but I've found that reducing the size of the model (the number of channels in each layer) and/or the input tensor (the height and width dimensions) makes the error go away.
However, the output of the edgetpu compiler suggests that even my “full-size” model easily fits on the TPU:
Here are my system specs:
@Namburger Thanks for the suggestion. Our code worked after downgrading to TF 1.15 and retraining the Keras model.
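That is consistent with the type mismatch in the traceback: the TF 1.x converter performs full-integer quantization with uint8 input/output, which the Edge TPU toolchain of that era expected, while newer converters default to int8. A minimal sketch of the TF 1.15 conversion settings, assuming a Keras model saved as "model.h5" and a 480x480x3 input (both placeholders):
import numpy as np
import tensorflow as tf  # assumed to be TF 1.15

def representative_dataset_gen():
    # A handful of samples shaped like the model input; use real preprocessed
    # images in practice so the quantization ranges are meaningful.
    for _ in range(100):
        yield [np.random.rand(1, 480, 480, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model_file("model.h5")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8   # uint8 I/O for the Edge TPU compiler
converter.inference_output_type = tf.uint8
with open("deep_lab_quant.tflite", "wb") as f:
    f.write(converter.convert())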
@mattroos Thanks for your suggestion. It did not resolve my error, though. Below is the output log from compiling the .tflite model with edgetpu_compiler; I'm using DeepLab for segmentation. Let me know if you can spot anything wrong in it. Also, can you suggest any other segmentation model that runs on the Coral Dev Board? Thanks.