tensorflow: Keras models converted to Estimators do not write summaries.

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Debian Jessie
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: n/a
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): ('v1.9.0-0-g25c197e023', '1.9.0')
  • Python version: 2.7.9
  • Bazel version (if compiling from source): n/a
  • GCC/Compiler version (if compiling from source): n/a
  • CUDA/cuDNN version: n/a
  • GPU model and memory: n/a
  • Exact command to reproduce: See provided gist

Describe the problem

I’ve been setting up Estimators by using Keras layers to define tensors and then feeding them into my own model_fn. This was a pain and prevented me from easily doing stuff like adding regularization, but was necessary until recently due to various bugs within model_to_estimator. Luckily those bugs have been fixed, and I can now use the much more canonical and proper model_to_estimator. Unfortunately there are still some holes…

Anyway, when I did this I would set various summaries to be collected while defining the model, simply by calling tf.summary.scalar etc. as documented. However, when I take the same tensors, put them through a Keras model, and then through model_to_estimator, the summaries I've defined do not get written.
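For reference, this is roughly how the summaries were set up in the hand-rolled model_fn case (a minimal sketch, not my actual code; the class_norm tag matches the logs below, while the architecture, feature key, and loss are illustrative, and only the training mode is handled for brevity):

```python
import tensorflow as tf

def model_fn(features, labels, mode):
    # Tensors defined with Keras layers, fed into a hand-written model_fn.
    net = tf.keras.layers.Dense(16, activation='relu')(features['x'])
    probs = tf.keras.layers.Dense(1, activation='sigmoid')(net)

    # Summaries registered this way are picked up by the Estimator's
    # default summary saving and end up in the event files.
    tf.summary.scalar('class_norm', tf.norm(probs))

    loss = tf.losses.log_loss(labels, probs)
    train_op = tf.train.AdamOptimizer().minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

plain_estimator = tf.estimator.Estimator(model_fn, model_dir='/tmp/plain_model')
```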

I’ve tried a few things here, but largely haven’t met with any success. One of the workarounds I’ve seen suggested is to shove the result of tf.summary.merge_all into a metric, but this does not work since Keras wants metrics to be numeric and summaries are strings. It also does not work if I pass it in as target_tensors, as commented out in the gist.
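For completeness, that suggested workaround amounts to something like the following (a sketch only, with an illustrative model; it fails because the merged summary is a serialized string tensor rather than anything Keras can treat as a numeric metric):

```python
import tensorflow as tf

# Illustrative stand-in for the real model from the gist.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(4,)),
])
tf.summary.scalar('class_norm', tf.norm(model.output))

# merge_all returns a string tensor holding the serialized Summary proto.
merged_summaries = tf.summary.merge_all()

# Suggested workaround: surface the merged summary as a Keras "metric".
# Keras expects metrics to evaluate to numeric values, so the string
# tensor is rejected; passing it via target_tensors does not help either.
model.compile(
    optimizer='adam',
    loss='binary_crossentropy',
    metrics=[lambda y_true, y_pred: merged_summaries])
```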

Note: I recognize that this is filed against TF 1.9, but I don’t see anything in the 1.10 changelog to indicate that this was noted or fixed. I am in a weird situation where it is best for me to use 1.9, unfortunately.

Source code / logs

The gist here reproduces this issue by using a common function to build the model's tensors and then creating estimators in both scenarios.
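The Keras side of that comparison looks roughly like the following (a condensed sketch rather than the gist itself; the layer stack and shapes are illustrative, but the tf.summary.scalar('class_norm', ...) call is the same one as in the plain model_fn sketch above):

```python
import tensorflow as tf

# Same tensors as before, but wrapped in a Keras Model instead of a model_fn.
inputs = tf.keras.layers.Input(shape=(4,), name='x')
net = tf.keras.layers.Dense(16, activation='relu')(inputs)
probs = tf.keras.layers.Dense(1, activation='sigmoid')(net)
tf.summary.scalar('class_norm', tf.norm(probs))  # the summary that goes missing

model = tf.keras.Model(inputs, probs)
model.compile(optimizer='adam', loss='binary_crossentropy')

# After conversion, training writes the loss summary but not class_norm.
keras_estimator = tf.keras.estimator.model_to_estimator(
    keras_model=model, model_dir='/tmp/keras_model')
```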

Below is the first event with a summary from each case. It shows that in the 'plain' case the summary tag class_norm shows up, whereas in the Keras case it does not. This is the crux of the issue I'm encountering.

Plain case: {'value': [{'simple_value': 1.0, 'tag': u'enqueue_input/queue/enqueue_input/random_shuffle_queuefraction_over_250_of_750_full'}, {'simple_value': 4.119956016540527, 'tag': u'class_norm'}, {'simple_value': 0.6801050901412964, 'tag': u'loss'}]}

Keras case: {'value': [{'simple_value': 1.0, 'tag': u'enqueue_input/queue/enqueue_input/random_shuffle_queuefraction_over_250_of_750_full'}, {'simple_value': 0.8805878758430481, 'tag': u'loss_1'}]}

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 5
  • Comments: 22 (4 by maintainers)

Most upvoted comments

The main advantage of model_to_estimator is that we can skip implementing the model_fn. Writing a full model_fn is not convenient for anyone who uses a Keras model with model_to_estimator to obtain an estimator. If adding just a summary, or warm-starting an estimator, requires manually defining an explicit model_fn, then the scope of model_to_estimator becomes too narrow.