tensorflow: Control dependency on identity containing assign not working

I’m running TensorFlow 0.10.

The following code

import tensorflow as tf

x = tf.Variable(0, dtype=tf.int32)

# Intent: snapshot x's current value, then increment x only after
# the snapshot has been taken.
old_val = tf.identity(x)
with tf.control_dependencies([old_val]):
    new_val = tf.assign(x, x + 1)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())

    for i in xrange(3):
        print sess.run([old_val, new_val, x])

outputs

[1, 1, 1]
[2, 2, 2]
[3, 3, 3]

From reading the docs on control_dependencies and identity, as well as Stack Overflow, I expected the output

[0, 1, ?]
[1, 2, ?]
[2, 3, ?]

where ? indicates that the variable's value is unspecified, since the fetch of x is not ordered with respect to the assign.

Is this a bug? If not, what is the correct way to refer to the value of a variable before and after an assignment in a single graph?

About this issue

  • State: closed
  • Created 8 years ago
  • Reactions: 1
  • Comments: 23 (18 by maintainers)

Most upvoted comments

You are right that if the identity op is on a different device than the variable, its output is a copy. We had many discussions on this and even had some proposals that would give the option of stronger memory semantics. I am hopeful that we will get this mess fixed soon.
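To make the device point concrete, here is a minimal sketch that forces a copy by placing the identity on a different device. It assumes the variable lives on a GPU and that a CPU device is available; the device strings are illustrative, not a guaranteed contract:

import tensorflow as tf

# Illustrative sketch: assumes the variable lives on a GPU and a CPU
# device is available; device strings are placeholders, not a contract.
with tf.device('/gpu:0'):
    x = tf.Variable(0.0, dtype=tf.float32)

# Placing the identity on a different device forces a cross-device
# transfer, so old_val is a real copy of x's buffer, not an alias.
with tf.device('/cpu:0'):
    old_val = tf.identity(x)

# The control dependency then guarantees the copy happens before the assign.
with tf.control_dependencies([old_val]):
    new_val = tf.assign(x, x + 1)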

I’m guessing identity has an optimization that skips the copy when there is no device transfer. In this case, I do need a copy of x.

Really, what I want is a function that returns both the old value and the new value of a variable. As you noted with your tf.square example, applying a non-identity op to x seems to cause a copy, so I can likely hack around this bug with old_val = x + 0.

Edit: I confirmed that replacing old_val = tf.identity(x) with old_val = x + 0 causes old_val to fetch as new_val - 1 (the correct behavior) rather than new_val.
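For reference, here is the original snippet with that one-line change; everything else is unchanged:

import tensorflow as tf

x = tf.Variable(0, dtype=tf.int32)

# x + 0 forces a real read of the variable, so old_val captures
# x's value before the assign instead of aliasing its buffer.
old_val = x + 0
with tf.control_dependencies([old_val]):
    new_val = tf.assign(x, x + 1)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())

    for i in xrange(3):
        # Now prints [0, 1, ?], [1, 2, ?], [2, 3, ?]; the fetched x is
        # still unspecified since its read is unordered w.r.t. the assign.
        print sess.run([old_val, new_val, x])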