text-summarization-tensorflow: Freeze a model to serve within API
Hi.
I successfully tested a Portuguese corpus that I prepared and trained on (changing the line in utils.py to `for word in word_tokenize(sentence, language='portuguese'):`).
I’d like to have a frozen model in a single .pb file in order to serve it within an API. I tried several approaches, like this one: https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc
But without success.
Would you consider providing a method to export a saved model? Or point me in the right direction?
Thanks!
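For reference, the recipe in that post boils down to restoring the latest checkpoint and converting the graph's variables into constants. Here is a minimal TF 1.x sketch; the `./saved_model` checkpoint directory and the `decoder/decoder/transpose_1` output node are assumptions (the node name comes from the comments below), so adjust both to your training run:

```python
import tensorflow as tf

# Restore the latest checkpoint (directory name is an assumption; point it
# at wherever train.py writes its checkpoints).
checkpoint = tf.train.latest_checkpoint('./saved_model')
saver = tf.train.import_meta_graph(checkpoint + '.meta')

with tf.Session() as sess:
    saver.restore(sess, checkpoint)
    # Collapse every variable reachable from the output node into a constant.
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,
        tf.get_default_graph().as_graph_def(),
        output_node_names=['decoder/decoder/transpose_1'])
    # Write the self-contained graph to a single .pb file.
    with tf.gfile.GFile('frozen_model.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())
```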
About this issue
- Original URL
- State: open
- Created 6 years ago
- Reactions: 1
- Comments: 15 (8 by maintainers)
@PauloQuerido I too am trying to get a frozen graph to work. I got the .pb file from the link you posted, using its freeze_graph function with `output_node_names=decoder/decoder/transpose_1`. I am now stuck on using the frozen graph: importing it yields “You must feed a value to tensor Placeholder_2 and Placeholder_3”, which are tensors used in training (I think). It’s weird, because in test.py running model.prediction with only three fed tensors works, but once frozen the model does not accept only those three. If you manage to progress further than this, please let me know.
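One way to see which placeholders actually survived the freeze is to walk the imported GraphDef. A small sketch, assuming the file is named frozen_model.pb as in the snippet above:

```python
import tensorflow as tf

with tf.gfile.GFile('frozen_model.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# List every placeholder the frozen graph still expects to be fed.
for node in graph_def.node:
    if node.op == 'Placeholder':
        print(node.name, node.attr['dtype'])
```

If Placeholder_2 and Placeholder_3 show up here, the chosen output node still depends on training-only inputs; feeding them dummy values at inference time may work around it, but re-exporting from a graph built in inference mode would be the cleaner fix.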
@gogasca From my understanding, you only specify the last layer(s) of the graph as output nodes, ‘freezing’ everything between the input and output nodes. I only specified decoder/decoder/transpose_1 as the output node, and I hoped I could get it to work like this, without success:
```python
output = graph.get_tensor_by_name('prefix/decoder/decoder/transpose_1:0')
input1 = graph.get_tensor_by_name('prefix/batch_size:0')
input2 = graph.get_tensor_by_name('prefix/Placeholder:0')
input3 = graph.get_tensor_by_name('prefix/Placeholder_1:0')

prediction = self.sess.run(output, feed_dict={
    input1: len(batch), input2: batch, input3: batch_x_len})
```
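For completeness, a sketch of the loading step that produces the prefix/ scope used above, following the pattern from the linked post (the frozen_model.pb file name is an assumption):

```python
import tensorflow as tf

def load_frozen_graph(path='frozen_model.pb'):
    with tf.gfile.GFile(path, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        # name='prefix' is what puts every imported tensor under prefix/.
        tf.import_graph_def(graph_def, name='prefix')
    return graph

graph = load_frozen_graph()
sess = tf.Session(graph=graph)
```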