addons: Missing argument in apply_gradients() in AdamW optimizer
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Google Colab
- TensorFlow version and how it was installed (source or binary): tf-nightly 2.2.0-dev20200309 (pip install)
- TensorFlow-Addons version and how it was installed (source or binary): 0.8.3 (pip install)
- Python version: 3.6
- Is GPU used? (yes/no): yes
Describe the bug
When fitting a model compiled with the AdamW optimizer, a TypeError is raised:
apply_gradients() got an unexpected keyword argument 'all_reduce_sum_gradients'
This can be fixed by adding the argument to apply_gradients() in tensorflow_addons/optimizers/weight_decay_optimizers.py.
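Until such a change ships, a rough user-side workaround in the same spirit is to subclass the optimizer so the new keyword is tolerated. This is only a sketch: the class name below is hypothetical, and silently dropping all_reduce_sum_gradients may change gradient aggregation behavior under tf.distribute (it is harmless for single-replica training).

    import tensorflow_addons as tfa

    class AdamWCompat(tfa.optimizers.AdamW):
        """Hypothetical workaround: tolerate keyword arguments that the
        tfa 0.8.x signature of apply_gradients() does not accept."""

        def apply_gradients(self, grads_and_vars, name=None, decay_var_list=None, **kwargs):
            # TF 2.2's training loop passes all_reduce_sum_gradients=...;
            # absorb it here instead of letting it raise a TypeError.
            return super().apply_gradients(
                grads_and_vars, name=name, decay_var_list=decay_var_list)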
Code to reproduce the issue
https://colab.research.google.com/drive/1A6X8yYii5M8BDqwAFvoFglrTIqWvwQLm
Other info / logs
TypeError: in user code:

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:503 train_function *
        outputs = self.distribute_strategy.experimental_run_v2(
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:920 experimental_run_v2 **
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2254 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2615 _call_for_each_replica
        return fn(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:473 train_step **
        _minimize(tape, self.optimizer, loss, self.trainable_variables)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1737 _minimize
        all_reduce_sum_gradients=False)

    TypeError: apply_gradients() got an unexpected keyword argument 'all_reduce_sum_gradients'
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 4
- Comments: 22 (17 by maintainers)
Commits related to this issue
- Add **kwargs to apply_gradients in weight_decay_optimizers. This is a quick fix to restore compatibility with TF 2.2 which adds a new keyword argument. See #1267 for the discussion. — committed to PhilJd/addons by PhilJd 4 years ago
- Add **kwargs to apply_gradients in weight_decay_optimizers. (#1566) * Add **kwargs to apply_gradients in weight_decay_optimizers. This is a quick fix to restore compatibility with TF 2.2 which add... — committed to tensorflow/addons by PhilJd 4 years ago
- Add **kwargs to apply_gradients in weight_decay_optimizers. (#1566) * Add **kwargs to apply_gradients in weight_decay_optimizers. This is a quick fix to restore compatibility with TF 2.2 which add... — committed to jrruijli/addons by PhilJd 4 years ago
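The quick fix in the commits above boils down to not pinning the override's signature: accept **kwargs and forward them to the base class. Reduced to plain Python (the class names here are illustrative, not the actual addons code), the pattern looks like this:

    class BaseOptimizer:
        # Stand-in for tf.keras's OptimizerV2, which gained a new keyword in TF 2.2.
        def apply_gradients(self, grads_and_vars, name=None, all_reduce_sum_gradients=True):
            print("all_reduce_sum_gradients =", all_reduce_sum_gradients)

    class WeightDecayWrapper(BaseOptimizer):
        # After the fix: unknown keywords flow through instead of raising TypeError.
        def apply_gradients(self, grads_and_vars, name=None, **kwargs):
            return super().apply_gradients(grads_and_vars, name=name, **kwargs)

    WeightDecayWrapper().apply_gradients([], all_reduce_sum_gradients=False)  # works
    # A pre-fix override with the signature (self, grads_and_vars, name=None) would
    # instead raise: TypeError: apply_gradients() got an unexpected keyword argument ...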
Your code works with tensorflow==2.1.0 and tfa-nightly.
We currently target the stable version of tensorflow (2.1.0) to, as you can see, keep our devs sane, so we don’t expect everything to work with tf-nightly.
But what you are reporting is concerning. Either:
In all cases let’s keep this issue open.
I solved it. Just upgrade to the new version, tfa 0.10.0:
pip install tensorflow_addons==0.10.0

Sure, feel free to ping me if I happen to miss the announcement 😃
I’m getting a similar error. This is from attempting a custom model, with the AdamW optimizer as well.
For AdamW, the relevant code is https://github.com/tensorflow/addons/blob/master/tensorflow_addons/optimizers/weight_decay_optimizers.py#L130-L181
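For reference, a minimal custom-model sketch that exercises the same code path (the model and data below are placeholders, not the commenter's actual code). On tfa 0.8.3 with TF 2.2 the fit() call raises the TypeError above; on tfa >= 0.10.0 or tfa-nightly it runs:

    import numpy as np
    import tensorflow as tf
    import tensorflow_addons as tfa

    class TinyModel(tf.keras.Model):
        def __init__(self):
            super().__init__()
            self.dense = tf.keras.layers.Dense(1)

        def call(self, inputs):
            return self.dense(inputs)

    model = TinyModel()
    model.compile(optimizer=tfa.optimizers.AdamW(weight_decay=1e-4, learning_rate=1e-3),
                  loss="mse")

    x = np.random.rand(32, 4).astype("float32")
    y = np.random.rand(32, 1).astype("float32")
    # model.fit() routes through Keras' _minimize(), which on TF 2.2 calls
    # optimizer.apply_gradients(..., all_reduce_sum_gradients=False).
    model.fit(x, y, epochs=1, verbose=0)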