tensorflow: Slowdown when training LSTMs on TensorFlow 2.4+
We are noticing a significant slowdown when training LSTM and GRU models with the TensorFlow Privacy optimizers on TF 2.4. In testing, the slowdown only occurs when using the TF Privacy optimizers, but it appears to be caused by a change in TensorFlow's recurrent_v2 module.
From some testing, it looks like the slowdown was introduced between the two tf-nightly builds listed below.
Describe the expected behavior
Training at roughly 15 sec/epoch, as with tf-nightly==2.4.0.dev20201019.
Describe the current behavior
Training goes from 15 sec/epoch to 2+ minutes per epoch with tf-nightly==2.4.0.dev20201020 and the latest release candidate (tensorflow==2.4.0rc1).
System information
GCP instance with a Tesla V100, 16 GB RAM, 8 vCPUs, Ubuntu, Python 3.8, CUDA 11; TensorFlow 2.4.0rc0 and tf-nightly installed via pip.
Standalone code to reproduce the issue
https://gist.github.com/zredlined/72305ab04670197869e470b232d22ed4
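For context, the setup looks roughly like the sketch below. This is only a hedged illustration of the kind of model/optimizer combination that shows the slowdown; the model size, data, and DP hyperparameters here are placeholders, and the exact values are in the gist above.

```python
import numpy as np
import tensorflow as tf
from tensorflow_privacy.privacy.optimizers.dp_optimizer_keras import DPKerasAdamOptimizer

# Toy data; shapes and sizes are illustrative only.
vocab_size, seq_len, num_classes, batch_size = 1000, 64, 10, 32
x = np.random.randint(0, vocab_size, size=(4096, seq_len))
y = np.random.randint(0, num_classes, size=(4096,))

# Small LSTM model; the recurrent layer is what exercises recurrent_v2.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 128),
    tf.keras.layers.LSTM(256),
    tf.keras.layers.Dense(num_classes),
])

# TF Privacy DP optimizer; hyperparameters here are placeholders.
optimizer = DPKerasAdamOptimizer(
    l2_norm_clip=1.0,
    noise_multiplier=0.5,
    num_microbatches=batch_size,
    learning_rate=1e-3,
)

# reduction=NONE so the DP optimizer can clip/noise per-microbatch gradients.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

model.compile(optimizer=optimizer, loss=loss)
model.fit(x, y, batch_size=batch_size, epochs=1)
```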
In tensorflow/python/keras/layers/recurrent_v2.py, I think the TensorFlow commit below is the culprit: changing _use_new_code() back to return True restores the previous training speed. The only reference I can find for the change points to what looks like an internal Google issue. Any help would be hugely appreciated; on most datasets we have tested, the slowdown is 10-20x. Thanks!
tensorflow/tensorflow@73b7097.
def _use_new_code():
    return False  # NOTE: changed to False in @73b7097. Changing back to True speeds training up.
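As a stopgap while testing, one way to reproduce the "change it back to True" experiment without patching the TensorFlow source is to override the helper at runtime. This is only a sketch of the experiment described above: it pokes at a private TensorFlow internal, is not a supported API, and may break between releases.

```python
# Hypothetical workaround: force the private _use_new_code() helper back to
# True before any LSTM/GRU layers are built. recurrent_v2 is an internal
# module (tensorflow/python/keras/layers/recurrent_v2.py) and this override
# is unsupported; it only mirrors the manual edit described above.
from tensorflow.python.keras.layers import recurrent_v2

recurrent_v2._use_new_code = lambda: True
```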
Other notes/logs
Originally posted at https://github.com/tensorflow/privacy/issues/141; opening an issue here as it appears to be an issue within TensorFlow itself.
About this issue
- State: closed
- Created 4 years ago
- Reactions: 1
- Comments: 17 (7 by maintainers)
@amahendrakar Is there any way we can make the codepath for _use_new_code() a user-facing option, defaulted to "off" as it is now, but that we can set to True to speed up eager mode training?
@zredlined Sure, making it a private flag which users can enable accordingly seems fine to me. Feel free to submit the PR! Thank you!
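One possible shape for such a flag is sketched below. The names and mechanism here are purely hypothetical; the actual flag would be defined in recurrent_v2.py by the PR discussed above.

```python
# Hypothetical module-level flag (names made up for illustration).
# Defaults to False to match current behavior; users hitting the eager-mode
# slowdown could flip it to True to restore the faster codepath.
_USE_NEW_CODE = False


def _use_new_code():
    return _USE_NEW_CODE
```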