tensorflow: [TF 2.1] Error when converting LSTM model to a frozen graph using convert_variables_to_constants_v2()
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Mac
- TensorFlow installed from (source or binary): pip install tensorflow
- TensorFlow version (use command below): 2.1
- Python version: 2.7
I tried to freeze an LSTM model built with tf.keras using the `convert_variables_to_constants_v2` function, with the following code:
```python
import tensorflow as tf

def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embedding_dim,
                                  batch_input_shape=[batch_size, embedding_dim]),
        tf.keras.layers.LSTM(rnn_units,
                             return_sequences=True,
                             stateful=False,
                             recurrent_activation='sigmoid',
                             recurrent_initializer='glorot_uniform'),
        tf.keras.layers.Dense(vocab_size)
    ])
    return model

embedding_dim = 100
units = 256
vocab_size = 300
batch_size = 32

model = build_model(vocab_size, embedding_dim, units, batch_size)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

from tensorflow.python.keras.saving import saving_utils as _saving_utils
from tensorflow.python.framework import convert_to_constants as _convert_to_constants

tf.keras.backend.set_learning_phase(False)
func = _saving_utils.trace_model_call(model)
concrete_func = func.get_concrete_function()
frozen_func = _convert_to_constants.convert_variables_to_constants_v2(concrete_func)
```
which produces:

```
Cannot find the Placeholder op that is an input to the ReadVariableOp.
```
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 4
- Comments: 19 (6 by maintainers)
Any progress on this issue?
Hmm. I looked at the test code of `convert_variables_to_constants_v2`. It seems that when the model has LSTM or GRU layers, we need to pass one extra argument, `lower_control_flow=False`, to make `convert_variables_to_constants_v2` work.

The code below is an example of freezing the model; hope it helps.
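As a hedged sketch of that suggestion (assuming TF 2.x; the model-building code is repeated here so the snippet is self-contained, and the output path `./frozen` is just an example):

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
    # Same architecture as the original report.
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embedding_dim,
                                  batch_input_shape=[batch_size, embedding_dim]),
        tf.keras.layers.LSTM(rnn_units,
                             return_sequences=True,
                             stateful=False,
                             recurrent_activation='sigmoid',
                             recurrent_initializer='glorot_uniform'),
        tf.keras.layers.Dense(vocab_size),
    ])

model = build_model(vocab_size=300, embedding_dim=100,
                    rnn_units=256, batch_size=32)

# Wrap the model call in a tf.function and trace it with an explicit
# input signature matching the Embedding layer's batch_input_shape.
run_model = tf.function(lambda x: model(x))
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec([32, 100], tf.int32))

# lower_control_flow=False keeps the LSTM's control-flow ops (While/If)
# intact instead of lowering them, which avoids the
# "Cannot find the Placeholder op that is an input to the ReadVariableOp"
# error during freezing.
frozen_func = convert_variables_to_constants_v2(
    concrete_func, lower_control_flow=False)

# Serialize the frozen graph to disk.
tf.io.write_graph(frozen_func.graph.as_graph_def(),
                  './frozen', 'frozen_graph.pb', as_text=False)
```

`frozen_func.inputs` and `frozen_func.outputs` can then be inspected to find the tensor names needed when loading the `.pb` file elsewhere.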