speechbrain: Loss is not finite and patience is exhausted
Describe the bug
2023-06-14 13:04:41,744 - speechbrain.core - WARNING - Parameter is not finite: Parameter containing:
tensor([nan, nan, nan, ..., nan, nan, nan], device='cuda:0',
requires_grad=True)
2023-06-14 13:04:41,744 - speechbrain.core - WARNING - Parameter is not finite: Parameter containing:
tensor([nan, nan, nan, ..., nan, nan, nan], device='cuda:0',
requires_grad=True)
2023-06-14 13:04:41,745 - speechbrain.core - WARNING - Parameter is not finite: Parameter containing:
tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0',
requires_grad=True)
2023-06-14 13:04:41,746 - speechbrain.core - WARNING - Parameter is not finite: Parameter containing:
tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0',
requires_grad=True)
2023-06-14 13:04:41,747 - speechbrain.core - WARNING - Parameter is not finite: Parameter containing:
tensor([nan, nan, nan, ..., nan, nan, nan], device='cuda:0',
requires_grad=True)
2023-06-14 13:04:41,747 - speechbrain.core - WARNING - Parameter is not finite: Parameter containing:
tensor([nan, nan, nan, ..., nan, nan, nan], device='cuda:0',
requires_grad=True)
2023-06-14 13:04:41,748 - speechbrain.core - ERROR - Exception:
Traceback (most recent call last):
File "train_with_wav2vec2.py", line 374, in <module>
valid_loader_kwargs=hparams["dataloader_opts"],
File "/home/zhengbeida/anaconda3/envs/slurp/lib/python3.7/site-packages/speechbrain/core.py", line 1225, in fit
self._fit_train(train_set=train_set, epoch=epoch, enable=enable)
File "/home/zhengbeida/anaconda3/envs/slurp/lib/python3.7/site-packages/speechbrain/core.py", line 1078, in _fit_train
loss = self.fit_batch(batch)
File "train_with_wav2vec2.py", line 145, in fit_batch
if self.check_gradients(loss):
File "/home/zhengbeida/anaconda3/envs/slurp/lib/python3.7/site-packages/speechbrain/core.py", line 1007, in check_gradients
"Loss is not finite and patience is exhausted. "
ValueError: Loss is not finite and patience is exhausted. To debug, wrap `fit()` with autograd's `detect_anomaly()`, e.g.
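For reference, the `detect_anomaly()` wrapper that the error message refers to can be used like this. This is a minimal sketch: a toy backward pass that produces a NaN gradient stands in for the real `fit()` call, and anomaly mode pinpoints the operation responsible.

```python
import torch

# Toy stand-in for the failing training step: 0 * log(0) produces a NaN,
# and anomaly mode raises a RuntimeError naming the op during backward().
x = torch.zeros(1, requires_grad=True)
try:
    with torch.autograd.detect_anomaly():
        loss = (x * torch.log(x)).sum()  # forward produces nan
        loss.backward()                  # anomaly mode raises here
except RuntimeError as err:
    print("anomaly detected:", err)
```

In the real recipe, the `with torch.autograd.detect_anomaly():` block would wrap the `brain.fit(...)` call from the traceback instead of the toy loss. Note that anomaly mode slows training noticeably, so it is meant for debugging only.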
Expected behaviour
After 7 epochs, "Loss is not finite and patience is exhausted." appears, and sometimes "IndexError: Out of range: piece id is out of range." also appears. How should these two problems be solved?
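On the second error: "Out of range: piece id is out of range" is typically raised by SentencePiece when a token id at or above the tokenizer's vocabulary size is passed to decoding. The check can be sketched without the library (hypothetical `vocab_size` and `ids` for illustration; a real check would compare against `sp.get_piece_size()`):

```python
# Hypothetical values: vocab_size stands in for sp.get_piece_size()
# of a real SentencePiece model; 99 is a deliberately invalid id.
vocab_size = 58
ids = [3, 17, 99, 5]

# Ids outside [0, vocab_size) would trigger the IndexError on decode.
bad = [i for i in ids if not (0 <= i < vocab_size)]
safe_ids = [i for i in ids if 0 <= i < vocab_size]
if bad:
    print("out-of-range piece ids:", bad)
```

If out-of-range ids show up, the usual causes are a mismatch between the tokenizer used for training and the one loaded at decode time, or model outputs (e.g. blank/padding indices) being fed to the tokenizer without filtering.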
To Reproduce
I didn't change the code; I only changed the paths for saving files and loading the dataset.
Versions
SpeechBrain system description
==============================
Python version:
3.7.16 (default, Jan 17 2023, 22:20:44)
[GCC 11.2.0]
==============================
Installed Python packages:
appdirs==1.4.4
attrs==23.1.0
black==19.10b0
certifi @ file:///croot/certifi_1671487769961/work/certifi
cfgv==3.3.1
charset-normalizer==3.1.0
click==8.0.4
distlib==0.3.6
entrypoints==0.3
filelock==3.12.0
flake8==3.7.9
fsspec==2023.1.0
huggingface-hub==0.15.1
HyperPyYAML==1.2.1
identify==2.5.24
idna==3.4
importlib-metadata==6.6.0
joblib==1.2.0
jsonlines==3.1.0
mccabe==0.6.1
more-itertools==9.1.0
nodeenv==1.7.0
numpy==1.21.6
packaging==23.1
pandas==1.3.5
pathspec==0.11.1
Pillow==9.5.0
platformdirs==3.5.1
pluggy==0.13.1
pre-commit==2.21.0
py==1.11.0
pycodestyle==2.5.0
pyflakes==2.1.1
pytest==5.4.1
python-dateutil==2.8.2
pytz==2023.3
PyYAML==6.0
regex==2023.5.5
requests==2.31.0
ruamel.yaml==0.17.28
ruamel.yaml.clib==0.2.7
scipy==1.7.3
sentencepiece==0.1.99
six==1.16.0
speechbrain==0.5.14
tokenizers==0.13.3
toml==0.10.2
torch==1.9.0
torchaudio==0.9.0
torchvision==0.10.0
tqdm==4.65.0
transformers==4.29.2
typed-ast==1.5.4
typing_extensions==4.5.0
urllib3==2.0.2
virtualenv==20.23.0
wcwidth==0.2.6
yamllint==1.23.0
zipp==3.15.0
==============================
Could not get git revision
==============================
CUDA version:
10.2
Relevant log output
Same warnings and traceback as in the bug description above.
Additional context
Training then resumes from the 7th epoch, and the subsequent epochs can continue.
About this issue
- Original URL
- State: closed
- Created a year ago
- Comments: 16
Hi @mhn226, everything is fine now. Thank you for your guidance.