tensorflow: BUG: tf.random.normal() has a fixed value in eager mode (TF2.0)

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • TensorFlow version: v2.0.0-rc2-26-g64c3d38 2.0.0
  • Python version: 3.6.9

Describe the problem

In TF 2.0 eager mode, tf.random.normal() gives the same value over and over again. This happens whether you wrap it in a Keras Model or just evaluate the tensor returned by tf.random.normal() repeatedly:

import numpy as np
import tensorflow as tf

data = np.ones(shape=(32, 10))  # renamed from `id`, which shadows a Python builtin
i = tf.keras.layers.Input(shape=(10,), batch_size=32, dtype=tf.float64)
y = tf.random.normal(shape=(32, 10), name="noise", dtype=tf.float64)
o = tf.add(i, y)
model = tf.keras.Model(inputs=i, outputs=o)

# same value every time?
print(model.predict(data))
print(model.predict(data))
print(model.predict(data))

This occurs without Keras as well:

x = tf.constant(value=np.ones(shape=(32,10)), dtype=tf.float64)
y = tf.random.normal(shape=(32,10), name="noise", dtype=tf.float64)
z = tf.add(x, y)
print(z)
print(z)
print(z)

If you disable eager mode with tf.compat.v1.disable_eager_execution(), the Keras Model will generate new values each time it’s called (as it should).
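
For completeness, a minimal sketch of that comparison (my illustration, mirroring the model above; not code from the thread). Note that tf.compat.v1.disable_eager_execution() must run before any ops are created:

import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # switch to graph mode before building any ops

data = np.ones(shape=(32, 10))
i = tf.keras.layers.Input(shape=(10,), batch_size=32, dtype=tf.float64)
y = tf.random.normal(shape=(32, 10), name="noise", dtype=tf.float64)
model = tf.keras.Model(inputs=i, outputs=tf.add(i, y))

# In graph mode the random op is re-executed on every session run,
# so these two calls should print different noise.
print(model.predict(data))
print(model.predict(data))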

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 1
  • Comments: 30 (16 by maintainers)

Most upvoted comments

Well, to me the behaviour is exactly what I expect of eager mode. When you have

import tensorflow as tf
y = tf.random.normal(shape=(2,2), name="noise", dtype=tf.float64)
print(y)
print(y)

it should print the same value twice: the value was generated once, during the tf.random.normal call. So in your second example, I would expect three identical values.

Similarly, in your Keras code, y is generated once and never recomputed (why should it be?). It is not a function that generates random values; it was generated once and then stored in the model. If you want to generate fresh random values, you must make y a function of the input. For example, you could write

y = tf.keras.layers.Lambda(lambda _: tf.random.normal(shape=(32,10), name="noise", dtype=tf.float64), dtype=tf.float64)(i)

I assume there is a better way to do it in Keras, though.

@foxik’s two comments are exactly right, and I also agree with @markemus that this is a big change. Unfortunately tf.random.normal is very broken in TF2 (its semantics were designed for graph mode, and its statefulness makes it very hard to behave the same in TF1 and TF2). Please see https://www.tensorflow.org/guide/random_numbers for the recommended ways to generate random numbers in TF2. We haven’t migrated Keras to the new RNGs, so we can’t deprecate the old RNGs yet.
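
A minimal sketch of what that guide recommends (my example, not from the thread): a stateful tf.random.Generator whose internal state advances on every draw. At the time of this issue the class was still experimental (tf.random.experimental.Generator); later TF 2.x releases expose it as tf.random.Generator:

import tensorflow as tf

# Stateful RNG from the TF2 random-numbers guide; each draw advances
# the generator state, so repeated calls return fresh values.
g = tf.random.Generator.from_seed(1234)

print(g.normal(shape=(2, 2)))  # first draw
print(g.normal(shape=(2, 2)))  # second draw: different values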

I agree with @foxik: the current behaviour is strictly logical. As to the proper way to add noise in TF2, as far as I know it is to use a dedicated layer, e.g. the GaussianNoise one; a minimal sketch follows.
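
A sketch of that approach (my illustration; the stddev value is arbitrary, and note that GaussianNoise, as a regularization layer, only adds noise in training mode and is a no-op at inference):

import numpy as np
import tensorflow as tf

i = tf.keras.layers.Input(shape=(10,), batch_size=32, dtype=tf.float64)
o = tf.keras.layers.GaussianNoise(stddev=1.0, dtype=tf.float64)(i)
model = tf.keras.Model(inputs=i, outputs=o)

data = np.ones(shape=(32, 10))
print(model(data, training=True))  # fresh noise on this call...
print(model(data, training=True))  # ...and different noise on this one
print(model.predict(data))         # inference mode: noise is not applied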

Similarly, the examples linked are a bit different from your initial example: if your layer’s call method generates a value from a random normal and adds it to the input, the result will indeed vary on each call. This is essentially what you get if you rewrite the first example as the call method of a custom layer, as in the sketch below.
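
A minimal version of such a custom layer (my illustration, not code from the thread):

import numpy as np
import tensorflow as tf

class AddNoise(tf.keras.layers.Layer):
    # Because tf.random.normal runs inside call(), a new sample is
    # drawn every time the layer executes, in eager or graph mode.
    def call(self, inputs):
        noise = tf.random.normal(shape=tf.shape(inputs), dtype=inputs.dtype)
        return inputs + noise

layer = AddNoise()
x = tf.constant(np.ones(shape=(32, 10)), dtype=tf.float64)
print(layer(x))  # different noise...
print(layer(x))  # ...on each call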

This is not related to the seed; the issue is that the same code behaves differently in graph versus eager mode. Run in graph mode, the same code returns new values each time the tensor is evaluated, but in eager mode it stores those values and returns them repeatedly.

This means that, for example, everyone using VAEs built in TF1.0 has code that is silently broken in TF2.0 (now that eager mode is the default): instead of random values on each training step, they’re getting a fixed matrix that is instantiated once and evaluated over and over.

Additionally, as far as I can tell there is no way to get a tensor with new random values on each evaluation in eager mode.

This same bug actually does exist in eager mode in TF1.0.

EDIT: To be clear, to see the different behavior, run the code in the original post twice: once as it is, and once with tf.compat.v1.disable_eager_execution().
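
For illustration (my sketch, not a comment from the thread): in eager mode, the way to get fresh random values on each evaluation is to re-run the sampling rather than hold on to a single tensor, e.g. by wrapping it in a function:

import tensorflow as tf

@tf.function
def noisy(x):
    # The random op lives inside the function, so it is re-executed
    # (and re-sampled) on every invocation.
    return x + tf.random.normal(shape=tf.shape(x), dtype=x.dtype)

x = tf.ones((32, 10), dtype=tf.float64)
print(noisy(x))  # fresh noise
print(noisy(x))  # different noise again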