TTS: [Bug] RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)` when training tacotron2

🐛 Description

I got the following UserWarning when trying to train Tacotron 2 with LJSpeech:

/media/DATA-2/TTS/coqui/TTS/TTS/tts/models/tacotron2.py:341: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  alignment_lengths = mel_lengths // self.decoder.r
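
For reference, this is roughly what the replacement suggested by the warning message looks like (just a sketch with placeholder values; I don't know whether the warning is related to the crash):

    import torch

    # The warning suggests replacing the deprecated tensor `//` with torch.div
    # and an explicit rounding mode. The values below are placeholders, not
    # taken from my run.
    mel_lengths = torch.tensor([860, 720, 655])  # hypothetical mel lengths
    r = 2                                        # hypothetical decoder reduction factor

    alignment_lengths = torch.div(mel_lengths, r, rounding_mode="floor")
    print(alignment_lengths)  # tensor([430, 360, 327])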

After epoch 8/1000, at step 1886/3243, I got this error:

RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`

Is that warning the cause of the error? How can I handle it?

Here is the error in its entirety:

   --> STEP: 1886/3243 -- GLOBAL_STEP: 27830
     | > decoder_loss: 5.84848  (6.74442)
     | > postnet_loss: 6.73053  (7.62051)
     | > stopnet_loss: 0.29443  (0.42176)
     | > decoder_coarse_loss: 6.32641  (7.27556)
     | > decoder_ddc_loss: 0.00271  (0.00433)
     | > ga_loss: 0.00282  (0.00469)
     | > decoder_diff_spec_loss: 1.54124  (1.60230)
     | > postnet_diff_spec_loss: 3.17193  (3.19602)
     | > decoder_ssim_loss: 0.91339  (0.91384)
     | > postnet_ssim_loss: 0.92837  (0.92731)
     | > loss: 6.67432  (7.51629)
     | > align_error: 0.98659  (0.97846)
     | > grad_norm: 3.66964  (5.65185)
     | > current_lr: 0.00000 
     | > step_time: 0.72660  (0.52215)
     | > loader_time: 0.00150  (0.00552)

 ! Run is kept in /media/DATA-2/TTS/coqui/TTS/run-April-19-2022_01+40PM-0cf3265a
Traceback (most recent call last):
  File "/media/DATA-2/TTS/coqui/tts_coqui/lib/python3.8/site-packages/trainer/trainer.py", line 1461, in fit
    self._fit()
  File "/media/DATA-2/TTS/coqui/tts_coqui/lib/python3.8/site-packages/trainer/trainer.py", line 1445, in _fit
    self.train_epoch()
  File "/media/DATA-2/TTS/coqui/tts_coqui/lib/python3.8/site-packages/trainer/trainer.py", line 1224, in train_epoch
    _, _ = self.train_step(batch, batch_num_steps, cur_step, loader_start_time)
  File "/media/DATA-2/TTS/coqui/tts_coqui/lib/python3.8/site-packages/trainer/trainer.py", line 1057, in train_step
    outputs, loss_dict_new, step_time = self._optimize(
  File "/media/DATA-2/TTS/coqui/tts_coqui/lib/python3.8/site-packages/trainer/trainer.py", line 946, in _optimize
    outputs, loss_dict = self._model_train_step(batch, model, criterion)
  File "/media/DATA-2/TTS/coqui/tts_coqui/lib/python3.8/site-packages/trainer/trainer.py", line 902, in _model_train_step
    return model.train_step(*input_args)
  File "/media/DATA-2/TTS/coqui/TTS/TTS/tts/models/tacotron2.py", line 344, in train_step
    outputs = self.forward(text_input, text_lengths, mel_input, mel_lengths, aux_input)
  File "/media/DATA-2/TTS/coqui/TTS/TTS/tts/models/tacotron2.py", line 206, in forward
    decoder_outputs, alignments, stop_tokens = self.decoder(encoder_outputs, mel_specs, input_mask)
  File "/media/DATA-2/TTS/coqui/tts_coqui/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/media/DATA-2/TTS/coqui/TTS/TTS/tts/layers/tacotron/tacotron2.py", line 321, in forward
    decoder_output, attention_weights, stop_token = self.decode(memory)
  File "/media/DATA-2/TTS/coqui/TTS/TTS/tts/layers/tacotron/tacotron2.py", line 284, in decode
    decoder_output = self.linear_projection(decoder_hidden_context)
  File "/media/DATA-2/TTS/coqui/tts_coqui/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/media/DATA-2/TTS/coqui/TTS/TTS/tts/layers/tacotron/common_layers.py", line 25, in forward
    return self.linear_layer(x)
  File "/media/DATA-2/TTS/coqui/tts_coqui/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/media/DATA-2/TTS/coqui/tts_coqui/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 103, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`

Environment

    {
      "CUDA": {
        "GPU": ["NVIDIA GeForce GTX 1660 Ti"],
        "available": true,
        "version": "10.2"
      },
      "Packages": {
        "PyTorch_debug": false,
        "PyTorch_version": "1.11.0+cu102",
        "TTS": "0.6.1",
        "numpy": "1.19.5"
      },
      "System": {
        "OS": "Linux",
        "architecture": ["64bit", "ELF"],
        "processor": "x86_64",
        "python": "3.8.0",
        "version": "#118~18.04.1-Ubuntu SMP Thu Mar 3 13:53:15 UTC 2022"
      }
    }

Additional context

I’ve already reduced the batch size; I’m currently using a batch size of 4. Could the error also be caused by reducing the batch size? If I don’t reduce it, I get an OOM error.
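
If it helps with diagnosis, this is the kind of quick check I can run after a training step to see how close the GPU is to its memory limit (a sketch; device index 0 is assumed to be the GTX 1660 Ti):

    import torch

    # Rough GPU memory headroom check (device index 0 is an assumption).
    props = torch.cuda.get_device_properties(0)
    print(f"total:     {props.total_memory / 1e9:.2f} GB")
    print(f"allocated: {torch.cuda.memory_allocated(0) / 1e9:.2f} GB")
    print(f"reserved:  {torch.cuda.memory_reserved(0) / 1e9:.2f} GB")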

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Reactions: 1
  • Comments: 18 (2 by maintainers)

Most upvoted comments

I experienced this issue on Linux and I solved it by running

$ unset LD_LIBRARY_PATH

I’m hoping to understand this issue more deeply. For those who had success with this command, why does it work for you? I’m reading this link, but I’m not following how it is connected to the CUDA error.
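
One thing I tried while digging is comparing the CUDA build PyTorch reports against what LD_LIBRARY_PATH points at (a sketch; my unconfirmed guess is that a mismatched cuBLAS picked up via LD_LIBRARY_PATH could trigger this):

    import os
    import torch

    # Compare the CUDA version PyTorch was built against with whatever
    # LD_LIBRARY_PATH is injecting. My (unconfirmed) guess: if it points at a
    # different CUDA toolkit's cuBLAS, unsetting it lets PyTorch load the
    # libraries it expects.
    print("torch:", torch.__version__)
    print("built for CUDA:", torch.version.cuda)
    print("LD_LIBRARY_PATH:", os.environ.get("LD_LIBRARY_PATH"))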

unset LD_LIBRARY_PATH

Saved my day! Thanks!

I got this error too. It’s funny that re-executing the code solved the problem for me.

This issue was resolved by adding more memory.