tensorflow: The tutorial "Logging and Monitoring Basics with tf.contrib.learn" has an error.
I used the code snippet from the section “Customizing the Evaluation Metrics with MetricSpec” of the tutorial “Logging and Monitoring Basics with tf.contrib.learn”. The code snippet is:
validation_metrics = {
    "accuracy":
        tf.contrib.learn.metric_spec.MetricSpec(
            metric_fn=tf.contrib.metrics.streaming_accuracy,
            prediction_key=tf.contrib.learn.prediction_key.PredictionKey.CLASSES),
    "precision":
        tf.contrib.learn.metric_spec.MetricSpec(
            metric_fn=tf.contrib.metrics.streaming_precision,
            prediction_key=tf.contrib.learn.prediction_key.PredictionKey.CLASSES),
    "recall":
        tf.contrib.learn.metric_spec.MetricSpec(
            metric_fn=tf.contrib.metrics.streaming_recall,
            prediction_key=tf.contrib.learn.prediction_key.PredictionKey.CLASSES)
}
My TensorFlow version is r1.0. When I run my program, it prints the following error:
$ python iris.py
Traceback (most recent call last):
  File "iris.py", line 72, in <module>
    tf.app.run()
  File "/Library/Python/2.7/site-packages/tensorflow/python/platform/app.py", line 44, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "iris.py", line 24, in main
    "accuracy": tf.contrib.learn.metric_spec.MetricSpec(
AttributeError: 'module' object has no attribute 'metric_spec'
I found that the class tf.contrib.learn.metric_spec.MetricSpec has been renamed to tf.contrib.learn.MetricSpec. The class tf.contrib.learn.prediction_key.PredictionKey has also been renamed to tf.contrib.learn.PredictionKey.
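With those renamed classes, the snippet above should read:

validation_metrics = {
    "accuracy":
        tf.contrib.learn.MetricSpec(
            metric_fn=tf.contrib.metrics.streaming_accuracy,
            prediction_key=tf.contrib.learn.PredictionKey.CLASSES),
    "precision":
        tf.contrib.learn.MetricSpec(
            metric_fn=tf.contrib.metrics.streaming_precision,
            prediction_key=tf.contrib.learn.PredictionKey.CLASSES),
    "recall":
        tf.contrib.learn.MetricSpec(
            metric_fn=tf.contrib.metrics.streaming_recall,
            prediction_key=tf.contrib.learn.PredictionKey.CLASSES)
}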
About this issue
- State: closed
- Created 7 years ago
- Reactions: 6
- Comments: 29 (11 by maintainers)
How can I use early_stopping in this environment?
@AxenGitHub I managed to run validation during training by using Experiment; see the docs here: https://www.tensorflow.org/api_docs/python/tf/contrib/learn/Experiment
I am not sure how effective it is yet, but it did the job. Could you please share your solution for implementing a validation monitor with the hooks? I asked a question on Stack Overflow: https://stackoverflow.com/questions/45417502/validation-during-training-of-estimator?noredirect=1#comment77798445_45417502
Is there any update regarding ValidationMonitor as a hook? The documentation does not seem to have been updated.
No, I did update this tutorial back in December, but haven’t yet switched to using SessionRunHook, as I was waiting on an equivalent canned hook for ValidationMonitor. That’s not yet available, correct?

In the meantime, for an example of applying a SessionRunHook to an Estimator, you can refer to the tf.layers tutorial (https://www.tensorflow.org/tutorials/layers), which covers how to configure a LoggingTensorHook.
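For instance, a hook along these lines can be passed to Estimator.train(). This is a minimal sketch in the spirit of that tutorial; my_model_fn, the "softmax_tensor" tensor name, and the step counts are placeholders rather than values from this thread:

import tensorflow as tf

# Log the values of the graph tensor named "softmax_tensor" every 50 training steps.
logging_hook = tf.train.LoggingTensorHook(
    tensors={"probabilities": "softmax_tensor"},
    every_n_iter=50)

# classifier = tf.estimator.Estimator(model_fn=my_model_fn, model_dir="/tmp/model")
# classifier.train(input_fn=train_input_fn, steps=20000, hooks=[logging_hook])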
@Moymix you can implement early stopping by using the continuous_eval_predicate_fn, available in tf.contrib.learn.Experiment.continuous_eval_on_train_data. For instance, let's take a batch size of 10 and an early-stop count of 15. Modifying the example from the TF Layers tutorial for a bigger dataset, the code would look roughly like the sketch below.
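Here is a minimal sketch of that idea, assuming the r1.3-era contrib API; the EarlyStopPredicate class, the "loss" key, and the commented-out Experiment wiring are illustrative placeholders, not the exact code from the original comment:

import tensorflow as tf


class EarlyStopPredicate(object):
    """Tells continuous_eval to stop once the eval loss has not improved
    for `early_stop_count` consecutive evaluations."""

    def __init__(self, early_stop_count=15):
        self._early_stop_count = early_stop_count
        self._best_loss = None
        self._bad_evals = 0

    def __call__(self, eval_results):
        if eval_results is None:  # no evaluation has run yet
            return True
        loss = eval_results["loss"]
        if self._best_loss is None or loss < self._best_loss:
            self._best_loss = loss
            self._bad_evals = 0
        else:
            self._bad_evals += 1
        return self._bad_evals < self._early_stop_count  # False => stop evaluating


# estimator = tf.estimator.Estimator(model_fn=my_model_fn)  # model_fn as in the layers tutorial
# experiment = tf.contrib.learn.Experiment(
#     estimator=estimator,
#     train_input_fn=train_input_fn,  # e.g. numpy_input_fn(..., batch_size=10)
#     eval_input_fn=eval_input_fn)
# experiment.continuous_eval_on_train_data(
#     continuous_eval_predicate_fn=EarlyStopPredicate(early_stop_count=15))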
However, keep in mind that continuous_eval_predicate_fn is an experimental function, so it could change at any moment.

I am in the same boat as “agniszczotka”. I have successfully used a SummarySaverHook to write some stats to file and display them on TensorBoard, but I am wondering how I can evaluate the accuracy improvement during training. Should I run estimator.evaluate with different “step” parameters to evaluate the accuracy at different moments/checkpoints? Specifically, I am trying to replicate this: https://www.tensorflow.org/versions/r1.3/get_started/monitors#evaluating_every_n_steps
I’ve created a ValidationHook based on the existing LoggingTensorHook. You can attach it as a hook whenever you run Estimator.train(). Take a look at this example: https://stackoverflow.com/questions/46326848/early-stopping-with-experiment-tensorflow
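For reference, here is a minimal sketch of what such a hook could look like, built on the standard SessionRunHook API; the every_n_steps value and the use of SecondOrStepTimer are assumptions, not the exact code from the Stack Overflow answer:

import tensorflow as tf


class ValidationHook(tf.train.SessionRunHook):
    """Runs estimator.evaluate() every `every_n_steps` training steps."""

    def __init__(self, estimator, input_fn, every_n_steps=1000):
        self._estimator = estimator
        self._input_fn = input_fn
        self._timer = tf.train.SecondOrStepTimer(every_steps=every_n_steps)

    def begin(self):
        self._global_step_tensor = tf.train.get_global_step()

    def before_run(self, run_context):
        # Ask the training session to also return the current global step.
        return tf.train.SessionRunArgs(self._global_step_tensor)

    def after_run(self, run_context, run_values):
        global_step = run_values.results
        if self._timer.should_trigger_for_step(global_step):
            self._timer.update_last_triggered_step(global_step)
            # Evaluates from the most recent checkpoint written by training.
            metrics = self._estimator.evaluate(input_fn=self._input_fn)
            tf.logging.info("Validation at step %d: %s", global_step, metrics)


# classifier.train(input_fn=train_input_fn, steps=20000,
#                  hooks=[ValidationHook(classifier, eval_input_fn, every_n_steps=1000)])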
@agniszczotka @alyaxey Using Experiment works and enables me to run validation along with training. However, I’ve found that the batch size is probably encoded as a constant instead of a symbolic tensor for the input node, even though it is coded as a reshape node with a variable batch size (i.e., tf.reshape(features[“x”], [-1, …])). As a result, in the Android code, I have to allocate an array of the same size as the batch size to store the output (i.e., fetch()).
@agniszczotka Thanks for your help. When I implement your suggestion, I get the following error:
File ".../anaconda2/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/experiment.py", line 253, in train if (config.environment != run_config.Environment.LOCAL and
AttributeError: 'RunConfig' object has no attribute 'environment'
Any idea on how to get around it?

Yes. All Monitors are deprecated. Not all of them have a direct equivalent, but there should be hooks for the main use cases. Except ValidationMonitor, as of today.
I’m also following this tutorial and having problems with it. I’m using the latest 1.0.1 release.
Is there any working example for these monitors: CaptureVariable, PrintTensor, ValidationMonitor?

@lienhua34 yes, it's correct. The interface has been sealed recently. Welcome to submit a pull request!

@martinwicke Does the team have any plan to rewrite the Monitor tutorial using Hooks?