tensorflow: TFLite GPU Delegate error Initializing

Dear Tensorflow developers,

After trying out the GPU delegate demo on Android, I reimplemented it in my app to try it out on my own model.

My model is a stripped SSD, converted the same way as the official TFLite file. At the interpreter initialization stage, the GPU delegate does not seem to work. In logcat I saw: Failed to apply delegate: GpuDelegate Prepare: Node is already a consumer of the value. Node number 77 (GpuDelegate) failed to prepare.

But it worked fine on the official mobilenet-ssd model.

What is this error message supposed to mean? Is there anything I need to be careful about when converting to TFLite? Any pointers would be helpful. Thanks!

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 33 (10 by maintainers)

Most upvoted comments

@lcycoding If the network weights etc. are a concern, you can also send me the network architecture pre-training, if that makes the decision easier 😃

  1. These are internal details that may change, but roughly, the following happens:
  • When you call NewGpuDelegate(), a very simple bookkeeping object is created.
  • Then when you call interpreter->ModifyGraphWithDelegate, the GPU backend inspects your TFLite graph def and tells the TFLite framework which ops it can handle and which it can’t. Based on this, TFLite CPU and GPU communicate which parts should be handled by the CPU and which parts by the GPU.
  • Then, the GPU backend reshapes the subgraph that it is supposed to handle to a more GPU-friendly form with some optimizations. After this restructuring, we decide which shader programs need to be prepared and compiled. When all of these are compiled, the GPU backend is ready.
  • Of course, I understand that you are using the Java interface (based on the log you pasted), but these C++ functions are called through JNI.
  2. It is okay for you to run other shader code. What you need to be aware of is keeping the GL context consistent. I will help you if you run into other GL-related conflicts, but let’s resolve the network issue first 😃
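The partitioning step described above (the GPU backend reports which ops it supports, and the framework splits the graph into CPU- and GPU-handled parts) can be sketched roughly as follows. This is only an illustration of the idea, not TFLite's actual algorithm; the op names and the `GPU_SUPPORTED` set are made up for the example.

```python
# Illustrative sketch of delegate partitioning: split a linear op
# sequence into maximal runs assigned to the GPU backend (supported ops)
# or left on the CPU (unsupported ops). Hypothetical op names.

GPU_SUPPORTED = {"CONV_2D", "DEPTHWISE_CONV_2D", "ADD", "RELU"}

def partition(ops, supported=GPU_SUPPORTED):
    """Return a list of (backend, ops) runs for a linear op sequence."""
    runs = []
    for op in ops:
        backend = "GPU" if op in supported else "CPU"
        if runs and runs[-1][0] == backend:
            runs[-1][1].append(op)   # extend the current run
        else:
            runs.append((backend, [op]))  # start a new run
    return runs

ops = ["CONV_2D", "RELU", "CUSTOM_NMS", "ADD"]
print(partition(ops))
# → [('GPU', ['CONV_2D', 'RELU']), ('CPU', ['CUSTOM_NMS']), ('GPU', ['ADD'])]
```

In the real delegate, each GPU run then goes through the restructuring and shader-compilation step before the backend reports itself ready.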

To all developers who want to migrate the GPU delegate into your app:

  1. Make sure no single operation has a repeated input source (i.e., the same tensor used as an input to one op more than once).
  2. Make sure your init thread and inference thread are the same 😛

Thanks to @impjdi for excellent answers!
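Rule 1 above can be checked before loading the model on device. Here is a minimal sketch of such a pre-flight check; the op/tensor representation is made up for illustration, and in practice you would read the real op input lists from the flatbuffer (e.g. with a TFLite model parser).

```python
# Hypothetical pre-flight check: flag any op whose input tensor list
# contains the same tensor index twice, since the GPU delegate rejects
# such nodes ("Node is already a consumer of the value").

def find_repeated_inputs(ops):
    """ops: list of (op_name, input_tensor_indices).
    Returns (op_name, tensor_index) pairs where an op reads a tensor twice."""
    bad = []
    for name, inputs in ops:
        seen = set()
        for t in inputs:
            if t in seen:
                bad.append((name, t))
            seen.add(t)
    return bad

model_ops = [
    ("conv_1", [0, 1, 2]),
    ("mul_7",  [5, 5]),   # same tensor used twice -> rejected by the delegate
]
print(find_repeated_inputs(model_ops))  # → [('mul_7', 5)]
```

If the check reports anything, a common workaround is to duplicate the offending tensor (or restructure the graph) before conversion so each op input is distinct.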

@lcycoding @impjdi Why is repeated input source not allowed? Is there any good reason to not support it?

@yxchng I forgot the details, but it’s possible that it’s not allowed due to OpenGL restrictions.

@lcycoding

Sorry for the late reply. Somehow didn’t get the email from github directly. 😦

We decided to solve our pipeline’s issue first since our model might still be changing.

Sounds good to me. Please let us know if you run into the issue again. It could be a toco issue or GPU backend issue, so anything that will let us reproduce the issue would be great.

But right after the program runs into the interpreter.runForMultipleInputsOutputs part, my thread gets stuck…

I have to admit I’m not super familiar with TFLite’s Java interfaces, but it looks like interpreter.runForMultipleInputsOutputs does a lot of things. For starters, can you comment out the input/output tensor ByteBuffer handling for now, i.e. leave the input tensors uninitialized, and just call interpreter.run instead of interpreter.runForMultipleInputsOutputs? If that works, you know the GPU backend is working fine, and something inside that method is waiting for additional action to happen.