tensorflow: tf.train.Saver.restore failed error

Because training a model is time consuming, I save a checkpoint during training, but an error occurred when restoring it. The `saver.restore` docstring says the following:

Signature: saver.restore(sess, save_path)
Docstring:
Restores previously saved variables.

This method runs the ops added by the constructor for restoring variables.
It requires a session in which the graph was launched.  The variables to
restore do not have to have been initialized, as restoring is itself a way
to initialize variables.

The `save_path` argument is typically a value previously returned from a
`save()` call, or a call to `latest_checkpoint()`.

Args:
  sess: A Session to use to restore the parameters.
  save_path: Path where parameters were previously saved.

So I used it as follows:

with tf.Graph().as_default():
    saver = tf.train.Saver()
    sess = tf.Session()
    Saver.restore(sess, "./MNIST_data/-1")

But I got the following error:

ValueError                                Traceback (most recent call last)
<ipython-input-10-4c62153b8108> in <module>()
     31 
     32 with tf.Graph().as_default():
---> 33     saver = tf.train.Saver()
     34     sess = tf.Session()
     35     Saver.restore(sess, "./MNIST_data/-1")

/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.pyc in __init__(self, var_list, reshape, sharded, max_to_keep, keep_checkpoint_every_n_hours, name, restore_sequentially, saver_def, builder)
    678         var_list = variables.all_variables()
    679       if not var_list:
--> 680         raise ValueError("No variables to save")
    681       saver_def = builder.build(
    682           var_list,

ValueError: No variables to save
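The constructor of `tf.train.Saver` collects the variables of the current default graph, and a brand-new `tf.Graph()` contains none, hence "No variables to save". The model's variables have to be (re)built before the `Saver` is constructed. Below is a minimal sketch of the save-then-restore round trip; it is written against the `tf.compat.v1` shim so it also runs on current TensorFlow, whereas the issue above uses the pre-1.0 API:

```python
import os
import tempfile

import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

ckpt_prefix = os.path.join(tempfile.mkdtemp(), "model")

# --- "training" script: the graph has a variable, so Saver() is happy ---
with tf.Graph().as_default():
    v = tf.Variable(42, name="v")
    saver = tf.train.Saver()          # collects this graph's variables
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        save_path = saver.save(sess, ckpt_prefix)

# --- "restoring" script: rebuild the SAME variables BEFORE creating the Saver ---
with tf.Graph().as_default():
    v = tf.Variable(0, name="v")      # same name and shape as when saved
    saver = tf.train.Saver()          # now there is something to restore into
    with tf.Session() as sess:
        saver.restore(sess, save_path)  # no init needed; restore initializes v
        restored = sess.run(v)          # -> 42
```

Note that, as the docstring says, `saver.restore` itself initializes the variables, so no `initialize_all_variables` / `global_variables_initializer` run is needed before restoring.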

About this issue

  • State: closed
  • Created 9 years ago
  • Reactions: 11
  • Comments: 15

Most upvoted comments

Adding this should solve it:

`from_detection_checkpoint: true`
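(For context: `from_detection_checkpoint` is a field of the TensorFlow Object Detection API's pipeline config, not of `tf.train.Saver`, so this only applies when restoring through that API. A sketch of where the flag lives, with a hypothetical checkpoint path:)

```
train_config {
  fine_tune_checkpoint: "path/to/model.ckpt"
  from_detection_checkpoint: true
}
```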

I am able to restore models within the same Python script, but I am unable to restore them in a different script using the method stated above.
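Restoring in a different script usually fails because that script never rebuilds the variables, so the `Saver` has nothing to restore into. One way around re-writing the model code is `tf.train.import_meta_graph`, which recreates the graph from the `.meta` file the `Saver` writes next to the checkpoint. A minimal sketch (using the `tf.compat.v1` shim and a temporary path; the two `with` blocks stand in for the two separate scripts):

```python
import os
import tempfile

import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

ckpt_prefix = os.path.join(tempfile.mkdtemp(), "model")

# "Script A" (training): save; Saver.save also writes a MetaGraph (.meta) file.
with tf.Graph().as_default():
    v = tf.Variable(7, name="v")
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        saver.save(sess, ckpt_prefix)

# "Script B" (evaluation): no model code at hand -- recreate the graph from
# the .meta file, which also returns a Saver wired to that graph's variables.
with tf.Graph().as_default() as g:
    saver = tf.train.import_meta_graph(ckpt_prefix + ".meta")
    with tf.Session() as sess:
        saver.restore(sess, ckpt_prefix)
        restored = sess.run(g.get_tensor_by_name("v:0"))  # -> 7
```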

@vrv I had the same problem and headed here to see whether it's a bug or not.

I have an eval.py for calculating my CNN accuracy and used exactly the same method to load the ckpt file. I got this error, and using tf.Graph() doesn't help; creating a dummy variable solved it instead.

P.S. Just to let you know, this code is still not working and gets stuck at sess.run(acc):

with tf.Graph().as_default():

    dummy = tf.Variable(0)  # dummy variable !!!
    init_op = tf.initialize_all_variables()

    with tf.Session() as sess:

      sess.run(init_op)

      saver = tf.train.Saver()
      # Start the queue runners.
      coord = tf.train.Coordinator()
      threads = tf.train.start_queue_runners(sess=sess, coord=coord)

      summary_op = tf.merge_all_summaries()

      summary_writer = tf.train.SummaryWriter(FLAGS.eval_dir,
                                              graph_def=sess.graph_def)


      ckpt = tf.train.get_checkpoint_state(checkpoint_dir=FLAGS.checkpoint_dir)
      print ckpt.model_checkpoint_path
      if ckpt and ckpt.model_checkpoint_path:
        saver.restore(sess, ckpt.model_checkpoint_path)
        print('Restored!')

      images, labels = my_input.inputs_val()

      # Build a Graph that computes the logits predictions from the
      # inference model.
      logits = my_cifar.inference(images)

      acc = my_cifar.evaluation(logits, labels)

      tf.scalar_summary('Acc', acc)

      try:
        while not coord.should_stop():
          print('Calculating Acc:')
          acc_r = sess.run(acc)
          print(acc_r)

          # Write results to TensorBoard
          summary_str = sess.run(summary_op)
          summary_writer.add_summary(summary_str)

      except tf.errors.OutOfRangeError:
        print ('Done!')

      finally:
        # When done, ask the threads to stop.
        coord.request_stop()

      coord.join(threads)
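The hang is consistent with the ordering in the code above: `tf.train.start_queue_runners` only starts the queue runners that exist in the graph at the time of the call, but the input pipeline (`my_input.inputs_val()`) and the ops built on it are created afterwards, so their queue is never fed and `sess.run(acc)` blocks forever. A minimal, self-contained sketch of the safe ordering with a toy input queue in place of the CIFAR pipeline (again via the `tf.compat.v1` shim; queue runners are deprecated in current TensorFlow in favor of `tf.data`):

```python
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

with tf.Graph().as_default():
    # Build ALL graph ops first -- including the input pipeline.
    queue = tf.train.range_input_producer(5, num_epochs=1, shuffle=False)
    value = queue.dequeue()

    # num_epochs uses a local variable for the epoch counter.
    init_op = tf.group(tf.global_variables_initializer(),
                       tf.local_variables_initializer())

    with tf.Session() as sess:
        sess.run(init_op)
        # Start queue runners only AFTER the graph is complete; runners for
        # queues created later would never be started and sess.run would hang.
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        results = []
        try:
            while not coord.should_stop():
                results.append(sess.run(value))
        except tf.errors.OutOfRangeError:
            pass  # the 1-epoch queue is exhausted
        finally:
            coord.request_stop()
        coord.join(threads)
```

Applied to the eval script above, that means: build `inputs_val()`, `inference`, and `evaluation` (and create the `Saver`) first, restore the checkpoint, and only then call `start_queue_runners`.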