tensorflow: freeze_graph not initializing tables
I am not sure if this is an actual bug or if it's expected but undocumented behavior.
I have a model that uses multiple lookup tables created via string_to_index. I freeze the model like so:
```
bazel-bin/tensorflow/python/tools/freeze_graph \
  --input_graph=/tmp/tf/graph.pbtxt \
  --input_checkpoint=/tmp/tf/model.ckpt-0 \
  --output_graph=/tmp/ticker_classifier.pb \
  --output_node_names=sigmoid \
  --initializer_nodes=init_all_tables
```
However, when the model is reloaded and I attempt to run it, I get a "Table not initialized" error. I get exactly the same resulting file whether or not I specify `--initializer_nodes`. I expected the frozen model to contain the lookup tables in a ready-to-use state for inference, but I don't know whether that is an unreasonable expectation.
What related GitHub issues or StackOverflow threads have you found by searching the web for your problem?
I have not found any related issues. I previously posted about this on StackOverflow: http://stackoverflow.com/questions/42916383/how-to-properly-freeze-a-tensorflow-graph-containing-a-lookuptable
Environment info
Operating System: MacOS and Linux (CentOS 7)
Installed version of CUDA and cuDNN: None
If installed from source, provide
- The commit hash (`git rev-parse HEAD`): 07bb8ea2379bd459832b23951fb20ec47f3fdbd4
- Build label: 0.4.5
- Build target: bazel-out/local-fastbuild/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
- Build time: Thu Mar 16 12:19:38 2017 (1489666778)
- Build timestamp: 1489666778
- Build timestamp as int: 1489666778
If possible, provide a minimal reproducible example (We usually don’t have time to read hundreds of lines of your code)
I have been unable to make a small example but I can spend more time on it if needed.
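In the meantime, here is a rough sketch of the shape of the graph (names and vocab are illustrative, not a confirmed repro):

```python
import tensorflow as tf

# A string feature mapped through a lookup table, TF 1.x style.
vocab = tf.constant(["AAPL", "GOOG", "MSFT"])
ticker = tf.placeholder(tf.string, shape=[None], name="ticker")
ids = tf.contrib.lookup.string_to_index(ticker, mapping=vocab)
logits = tf.layers.dense(tf.one_hot(ids, depth=3), 1)
out = tf.sigmoid(logits, name="sigmoid")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # This is the op that freeze_graph would need to keep in the frozen graph.
    sess.run(tf.tables_initializer())
```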
What other attempted solutions have you tried?
The workaround is to add init_all_tables to `--output_node_names` and then run the init_all_tables op before feeding the session examples for inference. This does have the side effect of requiring the source files for the tables to be distributed to every inference node, at the same path that was originally used for training.
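A sketch of what that looks like at inference time (the frozen-graph path and tensor names follow the command above; the rest is illustrative):

```python
import tensorflow as tf

# Load the frozen graph and run init_all_tables once before inference.
graph_def = tf.GraphDef()
with tf.gfile.GFile("/tmp/ticker_classifier.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Session(graph=tf.Graph()) as sess:
    tf.import_graph_def(graph_def, name="")
    # The table initializer survives freezing only because it was listed
    # in --output_node_names; run it before feeding any examples.
    sess.run(sess.graph.get_operation_by_name("init_all_tables"))
    sigmoid = sess.graph.get_tensor_by_name("sigmoid:0")
    # predictions = sess.run(sigmoid, feed_dict={...})
```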
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Reactions: 5
- Comments: 36 (19 by maintainers)
Commits related to this issue
- Try initializing graph in saved model (does not work; inspired by https://github.com/tensorflow/tensorflow/issues/8665), committed to Mbompr/deepr by Mbompr 4 years ago
Adding `init_all_tables` to the list of names to export fixes this issue. The call to `extract_sub_graph` inside `convert_variables_to_constants` prunes out this op and its descendants (keys, values) if you don't include `init_all_tables` in `output_node_names`. I don't like the idea of running an initializer op during inference, and having a try/except seems hacky to me. Is there another way to do this?

@jkiske's workaround worked for me!
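For illustration, the same fix done directly in Python rather than through the freeze_graph CLI (a sketch; the checkpoint paths are illustrative):

```python
import tensorflow as tf
from tensorflow.python.framework import graph_util

with tf.Session(graph=tf.Graph()) as sess:
    saver = tf.train.import_meta_graph("/tmp/tf/model.ckpt-0.meta")
    saver.restore(sess, "/tmp/tf/model.ckpt-0")
    # Listing init_all_tables here keeps the initializer (and its key/value
    # inputs) from being pruned by extract_sub_graph.
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["sigmoid", "init_all_tables"])

with tf.gfile.GFile("/tmp/ticker_classifier.pb", "wb") as f:
    f.write(frozen.SerializeToString())
```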
We also built on top of it to make it work with `tf.tables_initializer()`, but it requires two other changes:
- The table initializer op has to be added back into the collection returned by `tf.get_collection(tf.GraphKeys.TABLE_INITIALIZERS)`, because `tf.tables_initializer()` references the `tf.GraphKeys.TABLE_INITIALIZERS` collection.
- The Graph does not contain a collection_list, but the MetaGraph does.

So here's a solution that works for us:
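Roughly along these lines (a sketch of the approach, with illustrative paths and op names):

```python
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("/tmp/ticker_classifier.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")
    # A plain GraphDef carries no collections, so put the initializer back
    # where tf.tables_initializer() expects to find it.
    init_op = graph.get_operation_by_name("init_all_tables")
    tf.add_to_collection(tf.GraphKeys.TABLE_INITIALIZERS, init_op)

with tf.Session(graph=graph) as sess:
    sess.run(tf.tables_initializer())  # now finds the restored op
    # ... run inference ...
```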
Nagging Assignees @petewarden, @suharshs: It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.