tensorflow: Running own TensorFlow model on Android gives native inference error: “Session was not created with a graph before Run()!”

I was able to run the Inception-v3 model on Android just fine, and I now want to run my own trained TensorFlow model on Android. I’m following the approach from TensorFlow’s image recognition tutorial and the Android TensorFlow demo, adapting as necessary. My changes include: (a) integrating Android OpenCV as part of the bazel build, (b) using my own model and label file, and (c) adjusting parameters (img_size, input_mean, input_std, etc.) accordingly.
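
For context, the demo's classifier normalizes each input pixel as (value - mean) / std before feeding it to the graph, so these parameters must match the preprocessing used when the model was trained. A minimal sketch of that convention in Python, with hypothetical values:

# Hypothetical values: these must match your own training-time
# preprocessing, not the Inception-v3 defaults from the demo.
IMG_SIZE = 224      # side length the input image is resized to
INPUT_MEAN = 128.0  # subtracted from each pixel value
INPUT_STD = 128.0   # divisor applied after the mean subtraction

def normalize_pixel(value):
    # Mirrors the demo's per-pixel normalization.
    return (value - INPUT_MEAN) / INPUT_STD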

From Android logcat, running my model with the TensorFlow Android demo app gives:

E/native: tensorflow_inference_jni.cc:202 Error during inference: Invalid argument: Session was not created with a graph before Run()!
...
E/native: tensorflow_inference_jni.cc:159 Output [output/Softmax:0] not found, aborting!

What related GitHub issues or StackOverflow threads have you found by searching the web for your problem?

Own (duplicate) SO thread: http://stackoverflow.com/questions/40555749/running-own-tensorflow-model-on-android-gives-native-inference-error-session-w

Environment info

OS X Yosemite (10.10.5), LGE Nexus 5 (Android 6.0.1), Android SDK 23, Android OpenCV SDK 23, Bazel 0.4.0.

Steps taken

  1. Saved my own model’s checkpoint (.ckpt) and graph definition (.pb) files separately using tf.train.Saver() and then tf.train.write_graph() (a minimal sketch of steps 1-2 follows this list)
  2. Froze the graph using freeze_graph.py (built with bazel), which gives a 227.5 MB file
  3. Optimized the graph using optimize_for_inference.py (additionally tried strip_unused.py)
  4. Copied frozen, optimized, or stripped graph to android/assets
  5. Doubled the total byte limit using coded_stream.SetTotalBytesLimit() in jni_utils.cc to handle my large model size
  6. Built the TensorFlow Android app using bazel
  7. Installed on the Android device using adb and bazel
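
For reference, here is a minimal Python sketch of steps 1-2 (paths and node names are hypothetical, and the freeze_graph.freeze_graph call is the in-process equivalent of the bazel-built freeze_graph tool):

import tensorflow as tf
from tensorflow.python.tools import freeze_graph

# Build and train the model first; for this sketch assume:
#   sess             -- a tf.Session holding the trained variables
#   "output/Softmax" -- the name of the final prediction node

saver = tf.train.Saver()
saver.save(sess, "/tmp/my_model.ckpt")                       # step 1: checkpoint
tf.train.write_graph(sess.graph_def, "/tmp", "my_model.pb")  # step 1: graph def (text format)

# Step 2: fold the checkpoint weights into the graph as constants.
freeze_graph.freeze_graph(
    input_graph="/tmp/my_model.pb",
    input_saver="",
    input_binary=False,  # write_graph above produced a text-format GraphDef
    input_checkpoint="/tmp/my_model.ckpt",
    output_node_names="output/Softmax",
    restore_op_name="save/restore_all",
    filename_tensor_name="save/Const:0",
    output_graph="/tmp/frozen_my_model.pb",
    clear_devices=True,
    initializer_nodes="")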

As a sanity check, I have tested my model in C++ built with bazel, following the label_image tutorial, and my model correctly outputs a prediction. I have also tried changing the order in which I save my graph def and checkpoint files before freezing, but there was no change.
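
Since the second logcat error complains about a missing output node, it is also worth loading the exact graph file copied into android/assets and listing its node names, to confirm that optimize_for_inference.py or strip_unused.py did not rename or drop the output node. A sketch, assuming a binary frozen graph at a hypothetical path:

import tensorflow as tf

graph_def = tf.GraphDef()
with open("/tmp/frozen_my_model.pb", "rb") as f:  # hypothetical path
    graph_def.ParseFromString(f.read())

# The names passed to the Android inference interface (e.g. "output/Softmax")
# must appear here; node names carry no ":0" tensor suffix.
for node in graph_def.node:
    print(node.name, node.op)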

Any help would be great. cc @drpngx @andrewharp

About this issue

  • State: closed
  • Created 8 years ago
  • Comments: 25 (12 by maintainers)

Most upvoted comments

I had the same issue. It cost me several hours to figure out, but I finally solved it. Maybe you made the same mistake.

I had changed the model file to:

private static final String MODEL_FILE = "my_frozen_graph.pb";

The assetManager.open call said it could open and read the file, and TensorFlow reported success (0), with no exception and no debug message, when calling inferenceInterface.initializeTensorflow(assetManager, modelFilename).

So I wrongly assumed that loading had worked. The error was not in the input and output naming or in the .pb file itself (frozen with the freeze_graph Python script); it was simply TensorFlow not finding the .pb file while giving no error message.

THE SOLUTION was to change the model file to a path that the assetManager.open call does not find (on my phone) but TensorFlow does; the native loader apparently treats only paths carrying the file:///android_asset/ prefix as assets, while a bare filename is interpreted as a path on the device filesystem:

private static final String MODEL_FILE = "file:///android_assets/my_frozen_graph.pb";

A suggestion for TensorFlow would be to improve the API so that it correctly reports whether loading worked or whether, for example, the file could not be found.