tensorflow: Cannot create a stateful RNN with recurrent dropout
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): MacOSX 10.13.6
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): tf.version.VERSION=2.0.0-dev20190413, tf.version.GIT_VERSION=v1.12.0-12481-gc7ce6f4cd9
- Python version: 3.6.8
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A
Describe the current behavior
I get an exception when trying to use `recurrent_dropout` in a stateful RNN:
.../tensorflow/python/ops/resource_variable_ops.py in __imul__(self, unused_other)
1449
1450 def __imul__(self, unused_other):
-> 1451 raise RuntimeError("Variable *= value not supported. Use "
1452 "`var.assign(var * value)` to modify the variable or "
1453 "`var = var * value` to get a new Tensor object.")
RuntimeError: Variable *= value not supported. Use `var.assign(var * value)` to modify the variable or `var = var * value` to get a new Tensor object.
The full stacktrace is below.
Describe the expected behavior
No exception.
Code to reproduce the issue
from tensorflow import keras
model = keras.models.Sequential([
    keras.layers.GRU(128, return_sequences=True, stateful=True,
                     batch_input_shape=[32, None, 5],
                     recurrent_dropout=0.2)
])
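(For comparison, and as a commenter below observes, the crash reportedly goes away if `recurrent_dropout` is switched out; an illustrative variant of the repro:)

from tensorflow import keras
model_ok = keras.models.Sequential([
    keras.layers.GRU(128, return_sequences=True, stateful=True,
                     batch_input_shape=[32, None, 5])  # no recurrent_dropout
])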
Other info / logs
Complete stacktrace:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-1-3e98e7412ec2> in <module>
4 keras.layers.GRU(128, return_sequences=True, stateful=True,
5 batch_input_shape=[32, None, 5],
----> 6 recurrent_dropout=0.2)
7 ])
.../tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
456 self._self_setattr_tracking = False # pylint: disable=protected-access
457 try:
--> 458 result = method(self, *args, **kwargs)
459 finally:
460 self._self_setattr_tracking = previous_value # pylint: disable=protected-access
.../tensorflow/python/keras/engine/sequential.py in __init__(self, layers, name)
106 if layers:
107 for layer in layers:
--> 108 self.add(layer)
109
110 @property
.../tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
456 self._self_setattr_tracking = False # pylint: disable=protected-access
457 try:
--> 458 result = method(self, *args, **kwargs)
459 finally:
460 self._self_setattr_tracking = previous_value # pylint: disable=protected-access
.../tensorflow/python/keras/engine/sequential.py in add(self, layer)
167 # and create the node connecting the current layer
168 # to the input layer we just created.
--> 169 layer(x)
170 set_inputs = True
171
.../tensorflow/python/keras/layers/recurrent.py in __call__(self, inputs, initial_state, constants, **kwargs)
620
621 if initial_state is None and constants is None:
--> 622 return super(RNN, self).__call__(inputs, **kwargs)
623
624 # If any of `initial_state` or `constants` are specified and are Keras
.../tensorflow/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
631 base_layer_utils.AutoAddUpdates(self,
632 inputs)) as auto_updater:
--> 633 outputs = call_fn(inputs, *args, **kwargs)
634 auto_updater.set_outputs(outputs)
635
.../tensorflow/python/keras/layers/recurrent_v2.py in call(self, inputs, mask, training, initial_state)
328 input_length=timesteps,
329 time_major=self.time_major,
--> 330 zero_output_for_mask=self.zero_output_for_mask)
331 # This is a dummy tensor for testing purpose.
332 runtime = _runtime('unknown')
.../tensorflow/python/keras/backend.py in rnn(step_function, inputs, initial_states, go_backwards, mask, constants, unroll, input_length, time_major, zero_output_for_mask)
3558 # the value is discarded.
3559 output_time_zero, _ = step_function(input_time_zero,
-> 3560 initial_states + constants)
3561 output_ta = tuple(
3562 tensor_array_ops.TensorArray(
.../tensorflow/python/keras/layers/recurrent_v2.py in step(cell_inputs, cell_states)
316
317 def step(cell_inputs, cell_states):
--> 318 return self.cell.call(cell_inputs, cell_states, **kwargs)
319
320 last_output, outputs, states = K.rnn(
.../tensorflow/python/keras/layers/recurrent.py in call(self, inputs, states, training)
1706
1707 if 0. < self.recurrent_dropout < 1.:
-> 1708 h_tm1 *= rec_dp_mask[0]
1709
1710 if self.reset_after:
.../tensorflow/python/ops/resource_variable_ops.py in __imul__(self, unused_other)
1449
1450 def __imul__(self, unused_other):
-> 1451 raise RuntimeError("Variable *= value not supported. Use "
1452 "`var.assign(var * value)` to modify the variable or "
1453 "`var = var * value` to get a new Tensor object.")
RuntimeError: Variable *= value not supported. Use `var.assign(var * value)` to modify the variable or `var = var * value` to get a new Tensor object.
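The failing statement is `h_tm1 *= rec_dp_mask[0]` in the GRU cell's call. For context, a minimal sketch of the underlying TF 2.x restriction and the two workarounds the error message itself suggests (variable names here are illustrative):

import tensorflow as tf

h_tm1 = tf.Variable(tf.ones([2]))  # stands in for the stateful RNN state variable
mask = tf.constant([0.5, 0.5])     # stands in for rec_dp_mask[0]

# h_tm1 *= mask                    # RuntimeError: Variable *= value not supported
h_new = h_tm1 * mask               # OK: yields a new Tensor; the variable is unchanged
h_tm1.assign(h_tm1 * mask)         # OK: modifies the variable in place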
About this issue
- State: closed
- Created 5 years ago
- Reactions: 1
- Comments: 16 (8 by maintainers)
Commits related to this issue
- Fix stateful RNN with recurrent_dropout in 2.0. variable *= tensor will throw an error for a resource variable in 2.0, and was generating a warning for RefVariable in 1.x. Update all "*=" to be more explicit... — committed to tensorflow/tensorflow by qlzh727 5 years ago
- Fix stateful RNN with recurrent_dropout in 2.0. variable *= tensor will throw an error for a resource variable in 2.0, and was generating a warning for RefVariable in 1.x. Update all "*=" to be more explicit... — committed to sleighsoft/tensorflow by qlzh727 5 years ago
- Restrict Keras to <v2.3.0 Both Keras v2.3.0 and v2.3.1 on Traverse (and at least the latter on TigerGPU) die with: WARNING:tensorflow:From /home/kfelker/.conda/envs/frnn/lib/python3.6/site-packages/... — committed to PPPLDeepLearning/plasma-python by felker 4 years ago
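(Per the commit messages above, the fix replaces the in-place update with an explicit multiply. A sketch of the pattern, using the failing line from the traceback:)

# Before: raises for a resource variable in 2.0, warned for RefVariable in 1.x
h_tm1 *= rec_dp_mask[0]
# After: explicit multiply rebinds h_tm1 to a new Tensor
h_tm1 = h_tm1 * rec_dp_mask[0]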
@ageron does it still work for you? I tried using recurrent_dropout with a GRU (as you are) and it seems to break for me. The problem seems to be with recurrent_dropout, because if you switch it out everything seems to work. This problem also exists with LSTMs, and not just GRUs.
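(For reference, a hypothetical LSTM variant of the repro above which, per the comment, fails the same way:)

from tensorflow import keras
model = keras.models.Sequential([
    keras.layers.LSTM(128, return_sequences=True, stateful=True,
                      batch_input_shape=[32, None, 5],
                      recurrent_dropout=0.2)
])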
Should now be fixed by https://github.com/tensorflow/tensorflow/commit/6a6e8c2586dfd2aeeebe0d94d60dcca4604ab481.
@ageron I don't see any error with `!pip install tf-nightly`. Gist is here. But I notice the error is back with `pip install tf-nightly-gpu-2.0-preview==2.0.0-dev20190518`. Thanks!
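(A quick illustrative check before re-running the repro, to confirm which build is actually installed:)

import tensorflow as tf
from tensorflow import keras

print(tf.version.VERSION)  # confirm the installed nightly build

# Re-run the original repro; on a build containing the fix this should
# construct the model without raising RuntimeError.
model = keras.models.Sequential([
    keras.layers.GRU(128, return_sequences=True, stateful=True,
                     batch_input_shape=[32, None, 5],
                     recurrent_dropout=0.2)
])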