addons: TF Addons Seq2Seq with Attention OOMs when mixed precision is enabled

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): 18.04
  • TensorFlow version and how it was installed (source or binary): 2.3.0
  • TensorFlow-Addons version and how it was installed (source or binary): 0.10/0.11
  • Python version: 3.7
  • Is GPU used? (yes/no): Yes

Describe the bug

The model OOMs when mixed precision is enabled; without mixed precision, the model trains fine.

Code to reproduce the issue

Here is my notebook to reproduce this bug. You can set IS_MIXED_PRECISION = True or False to check for yourself. The model I implemented here is Tacotron-2. Note that the way I enable mixed precision works for every other model in my framework except Tacotron-2.

seq2seq.zip
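For reference, mixed precision in Keras is enabled through a global dtype policy. The sketch below uses the stable API from later TF releases; on TF 2.3 (the version in this report) the equivalent calls lived under `tf.keras.mixed_precision.experimental`. This is a generic illustration, not the code from the attached notebook.

```python
import tensorflow as tf

# Enable mixed precision globally: layers compute in float16 but keep
# float32 variables. On TF 2.3 the equivalent call was
# tf.keras.mixed_precision.experimental.set_policy("mixed_float16").
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# In a custom training loop, a loss-scale wrapper guards against
# float16 gradient underflow (Model.compile/fit handles this
# automatically under a mixed_float16 policy).
opt = tf.keras.mixed_precision.LossScaleOptimizer(
    tf.keras.optimizers.Adam(1e-3)
)
```

Since Tacotron-2 here is trained with a custom loop, the `LossScaleOptimizer` wrapping is the part that would otherwise be handled by `Model.fit`.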


Other info / logs


Hi, I’m the creator of TensorFlowTTS (https://github.com/TensorSpeech/TensorFlowTTS). I’m trying to apply mixed precision to Tacotron-2, which is implemented with tensorflow_addons.seq2seq. The model trains too slowly, so I want to use mixed precision to speed up training, but I failed. I hope you can help me fix this bug; that would improve my users’ experience (and the TF user experience 😄). Thanks a lot.

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 26 (13 by maintainers)

Most upvoted comments

I don’t see anything to fix in Addons, so you may want to raise this with the TensorFlow team.