trl: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
I am getting the error traceback below when I run `python -m torch.distributed.launch --nproc_per_node=1 reward_summarization.py --bf16` on a machine with two A10 GPUs (24 GB), with torch==2.0.0 installed.
I'd appreciate any comments or ideas on how to fix this.
Traceback (most recent call last):
File "/home/opc/trl/examples/summarization/scripts/reward_summarization.py", line 202, in <module>
trainer.train(script_args.resume_from_checkpoint)
File "/home/opc/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 1633, in train
return inner_training_loop(
File "/home/opc/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 1902, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/opc/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 2663, in training_step
loss.backward()
File "/home/opc/miniconda3/lib/python3.10/site-packages/torch/_tensor.py", line 487, in backward
torch.autograd.backward(
File "/home/opc/miniconda3/lib/python3.10/site-packages/torch/autograd/__init__.py", line 197, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [CUDABoolType [1, 1, 377, 377]] is at version 3; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
wandb: Waiting for W&B process to finish... (failed 1).
wandb: You can sync this run to the cloud by running:
wandb: wandb sync /home/opc/trl/examples/summarization/scripts/wandb/offline-run-20230404_175237-0r3498mc
wandb: Find logs at: ./wandb/offline-run-20230404_175237-0r3498mc/logs
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 1902146) of binary: /home/opc/miniconda3/bin/python
Traceback (most recent call last):
File "/home/opc/miniconda3/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/opc/miniconda3/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/opc/miniconda3/lib/python3.10/site-packages/torch/distributed/launch.py", line 195, in <module>
main()
File "/home/opc/miniconda3/lib/python3.10/site-packages/torch/distributed/launch.py", line 191, in main
launch(args)
File "/home/opc/miniconda3/lib/python3.10/site-packages/torch/distributed/launch.py", line 176, in launch
run(args)
File "/home/opc/miniconda3/lib/python3.10/site-packages/torch/distributed/run.py", line 753, in run
elastic_launch(
File "/home/opc/miniconda3/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/opc/miniconda3/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
reward_summarization.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-04-04_17:52:47
host : instance-20230329-1307.subnet03291319.vcn03291319.oraclevcn.com
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 1902146)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
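For what it's worth, the hint in the traceback suggests enabling autograd anomaly detection to locate the failing in-place operation; a minimal way to do that (debugging only, since it slows training down considerably) would be to add the following near the top of the training script:

```python
import torch

# Make backward() report which forward-pass operation produced the tensor
# that was later modified in place. Debugging aid only; remove for real runs.
torch.autograd.set_detect_anomaly(True)
```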
About this issue
- State: closed
- Created a year ago
- Comments: 19 (6 by maintainers)
I don’t have a clear understanding of the cause of this issue per se, but the problem stems from the fact that we run two forward passes (for rewards_j and rewards_k respectively) to compute the loss function, and somehow the GPT models don’t like that. Here’s a minimal workaround that doesn’t involve making changes to transformers.models:
- Replace RewardDataCollatorWithPadding with the following: we merge the two batches into one.
- Replace compute_loss with the following: we split the model predictions back into rewards_j and rewards_k after a single forward pass and compute the loss function.
This should work for GPT-2 and GPT-NeoX!
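A rough sketch of what those two replacements could look like, assuming the feature keys input_ids_j / attention_mask_j / input_ids_k / attention_mask_k used by the TRL summarization example (an illustration of the idea, not the exact code from the original comment):

```python
from dataclasses import dataclass
from typing import Any, Dict, List

from transformers import PreTrainedTokenizerBase


@dataclass
class RewardDataCollatorWithPadding:
    tokenizer: PreTrainedTokenizerBase
    padding: bool = True

    def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, Any]:
        # Merge the "chosen" (j) and "rejected" (k) examples into a single
        # batch: first all j examples, then all k examples.
        merged = [
            {"input_ids": f["input_ids_j"], "attention_mask": f["attention_mask_j"]}
            for f in features
        ] + [
            {"input_ids": f["input_ids_k"], "attention_mask": f["attention_mask_k"]}
            for f in features
        ]
        batch = self.tokenizer.pad(merged, padding=self.padding, return_tensors="pt")
        return {
            "input_ids": batch["input_ids"],
            "attention_mask": batch["attention_mask"],
            "return_loss": True,
        }
```

```python
import torch.nn as nn
from transformers import Trainer


class RewardTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        # Single forward pass over the merged batch; the first half of the
        # outputs corresponds to the j examples, the second half to the k examples.
        rewards = model(
            input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
        )[0]
        half = rewards.shape[0] // 2
        rewards_j, rewards_k = rewards[:half], rewards[half:]
        # Pairwise ranking loss: the preferred summary should get the higher reward.
        loss = -nn.functional.logsigmoid(rewards_j - rewards_k).mean()
        if return_outputs:
            return loss, {"rewards_j": rewards_j, "rewards_k": rewards_k}
        return loss
```

The idea is that with a single forward pass per optimizer step, there is no second pass that could modify tensors the first pass's autograd graph still needs.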
Planning to do a deep dive in the coming weeks into issues related to distributed training; assigning this to myself.