trl: Error with Multi-GPU peft Reward Training
There is an issue when you combine all four:
- peft quantization
- gradient checkpointing
- multi-gpu ddp
- two gradients on the same parameters (as you have in the loss function for Reward Trainer)
This is reproducible if you correctly enable gradient checkpointing in examples/multi-adapter-rl as shown in PR #479 and then run in a multi-GPU setup:

```
accelerate launch --multi_gpu reward_modeling.py --gradient_checkpointing True
```

You will receive the error:
```
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop. 2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 127 has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration. You can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print parameter names for further debugging.
```
With TORCH_DISTRIBUTED_DEBUG=DETAIL, we find the affected parameter is a LoRA parameter. It is not related to https://github.com/pytorch/pytorch/issues/60844 because find_unused_parameters is set to False.
This is likely a problem between peft and accelerate/DDP, but I'm filing the issue here because it affects RewardTrainer, and quantization + multi-GPU + gradient checkpointing is a common combination.
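For context on the last item in the list above: the pairwise loss in RewardTrainer runs two forward passes through the same LoRA-wrapped model in every step, so the same adapter parameters appear in two backward graphs. A rough schematic (not the exact trl implementation):

```python
import torch.nn.functional as F

def pairwise_reward_loss(model, chosen_batch, rejected_batch):
    # Two forward passes through the same (shared, LoRA-wrapped) parameters.
    rewards_chosen = model(**chosen_batch).logits
    rewards_rejected = model(**rejected_batch).logits
    # Pairwise (Bradley-Terry style) loss; its backward pass touches every
    # shared parameter twice, which is what trips DDP's "ready" bookkeeping
    # when reentrant gradient checkpointing re-runs the checkpointed blocks.
    return -F.logsigmoid(rewards_chosen - rewards_rejected).mean()
```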
About this issue
- Original URL
- State: closed
- Created a year ago
- Reactions: 4
- Comments: 15 (7 by maintainers)
Yes indeed! If you use the latest releases of transformers, trl and peft, simply pass `gradient_checkpointing_kwargs={"use_reentrant": False}` and it should be resolved.
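For reference, a minimal sketch of where that kwarg goes, assuming transformers >= 4.35 and a recent trl in which `RewardConfig` inherits from `TrainingArguments` (the `output_dir` value is just a placeholder):

```python
from trl import RewardConfig

config = RewardConfig(
    output_dir="reward_model",  # placeholder
    gradient_checkpointing=True,
    # Non-reentrant checkpointing avoids the duplicate autograd hooks that
    # make DDP mark the same LoRA parameter "ready" twice.
    gradient_checkpointing_kwargs={"use_reentrant": False},
)
```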
I tested this and it is resolved by #912, thank you @younesbelkada! Sorry I didn't do this myself earlier. I tried the `use_reentrant` trick you mentioned but it didn't work; I must have missed something in my tests. Thanks for doing it!

I am using `huggyllama/llama-7b`.

EDIT: I misunderstood "unused parameters". As long as the frozen parameters are used in the backward computation, they are fine. So if the model has two peft adapters, there could be unused parameters, but not if there is just one.
After looking into this, I am mostly convinced that DDP does not work with gradient checkpointing when there are unused parameters in the forward computation or two forward passes. This means you should not use gradient checkpointing with `peft` and DDP, ~~regardless of the gradient passes but~~ as it will explicitly fail with two forward passes on the same parameters. This is a note from the PyTorch docs.

Two `peft` adapters can create unused parameters, and two forward passes (as in the reward trainer) will cause a layer to be checkpointed twice. I think a next step is to check whether DDP training with `peft` is worse with gradient checkpointing even without two forward passes. If so, `peft` should probably add a warning or error when a user tries to combine these things until DDP supports the use case. Or maybe switch to `DataParallel`, although it feels like it will be deprecated.

I'm getting the same error with `accelerate launch --multi_gpu reward_modeling.py --load_in_8bit --gradient_checkpointing True` with llama2-7b; however, if I disable either gradient_checkpointing or load_in_8bit, I get OOM.
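The non-reentrant flag from the resolution above can also be set directly on the model, which may help the 8-bit setup in the last comment. A hedged sketch, assuming transformers >= 4.35 (the checkpoint name and device placement are placeholders, not taken from that comment):

```python
from transformers import AutoModelForSequenceClassification

# Sketch only: under `accelerate launch` the reward_modeling.py script maps
# each process to its local GPU rather than device 0.
model = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    num_labels=1,
    load_in_8bit=True,   # requires bitsandbytes
    device_map={"": 0},
)

# Assumes transformers >= 4.35, where gradient_checkpointing_enable accepts
# gradient_checkpointing_kwargs; equivalent to setting it on the config.
model.gradient_checkpointing_enable(
    gradient_checkpointing_kwargs={"use_reentrant": False}
)
```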