tensorflow: TFLite GPU Delegate error Initializing
Dear Tensorflow developers,
After trying out the gpu-delegate demo on Android, I reimplemented it in my app to try it out with my own model.
My model is a stripped SSD, just like the official tflite file.
At the interpreter initialization stage, the GPU delegate does not seem to work.
On logcat I saw: Failed to apply delegate: GpuDelegate Prepare: Node is already a consumer of the value. Node number 77 (GpuDelegate) failed to prepare.
But it worked fine on the official mobilenet-ssd model.
What is this error message supposed to mean? Is there anything I need to be careful about when converting to TFLite? Anything would be helpful. Thanks!
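For context, a minimal sketch of where this error surfaces: the delegate's Prepare step runs while the `Interpreter` is being constructed with the delegate attached, so the failure appears at initialization, before any inference call. This assumes the standard `tensorflow-lite` and `tensorflow-lite-gpu` Android dependencies; the factory class and helper names here are illustrative, not from the original report.

```java
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;
import java.nio.MappedByteBuffer;

public class GpuInterpreterFactory {
    // modelBuffer is a memory-mapped .tflite file (loaded elsewhere).
    public static Interpreter create(MappedByteBuffer modelBuffer) {
        GpuDelegate delegate = new GpuDelegate();
        Interpreter.Options options = new Interpreter.Options().addDelegate(delegate);
        // If a graph node cannot be prepared for the GPU backend (node 77 in
        // this report), construction is where "GpuDelegate Prepare: ...
        // failed to prepare" is logged and the delegate fails to apply.
        return new Interpreter(modelBuffer, options);
    }
}
```

This only runs on an Android device with the TFLite native libraries, so it is a structural sketch rather than something testable on a desktop JVM.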
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Comments: 33 (10 by maintainers)
@lcycoding If the network weights etc. are a concern, you can also send me the network architecture before training, if that makes the decision easier 😃
To all developers who want to integrate the GPU delegate into their apps:
Thanks to @impjdi for excellent answers!
@lcycoding @impjdi Why is a repeated input source not allowed? Is there a good reason not to support it?
@yxchng I forgot the details, but it’s possible that it’s not allowed, due to OpenGL restrictions.
@lcycoding
Sorry for the late reply. Somehow didn’t get the email from github directly. 😦
Sounds good to me. Please let us know if you run into the issue again. It could be a toco issue or GPU backend issue, so anything that will let us reproduce the issue would be great.
I have to admit I'm not super familiar with TFLite's Java interfaces, but it looks like `interpreter.runForMultipleInputsOutputs` does a lot of things. For starters, can you comment out the input/output `tensorByteBuffer` handling for now, i.e. leave the input tensors uninitialized, and just call `interpreter.run` instead of `interpreter.runForMultipleInputsOutputs`? If that works, you know the GPU backend is working fine, and the problem is somewhere in that method, waiting for additional action to happen.
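The debugging step above could be sketched roughly as follows, assuming a single-input, single-output float32 model; the class and variable names are illustrative, not from the original thread.

```java
import org.tensorflow.lite.Interpreter;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class RunSmokeTest {
    // Drive the interpreter through the simple single-tensor run() path,
    // bypassing runForMultipleInputsOutputs entirely.
    public static void smokeTest(Interpreter interpreter) {
        int[] inShape = interpreter.getInputTensor(0).shape();
        int[] outShape = interpreter.getOutputTensor(0).shape();
        // Direct, native-order buffers; contents deliberately left zeroed,
        // since we only care whether inference executes at all.
        ByteBuffer input = ByteBuffer.allocateDirect(4 * numElements(inShape))
                .order(ByteOrder.nativeOrder());
        ByteBuffer output = ByteBuffer.allocateDirect(4 * numElements(outShape))
                .order(ByteOrder.nativeOrder());
        // If this call succeeds, the GPU backend itself is fine and the
        // issue lies in the multi-input/output call path or its buffers.
        interpreter.run(input, output);
    }

    private static int numElements(int[] shape) {
        int n = 1;
        for (int d : shape) n *= d;
        return n;
    }
}
```

As with the previous sketch, this needs a device with the TFLite runtime present, so it is meant as a shape for the experiment rather than a drop-in test.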