DiffuSeq: does the multi-GPU training actually duplicate data on each GPU?
Hello.
I find that the DataLoader constructed in diffuseq/text_datasets.py does not use PyTorch's DistributedSampler:
https://github.com/Shark-NLP/DiffuSeq/blob/bea43e1fd0a954486bc36ad62f2a71dcb2bd300a/diffuseq/text_datasets.py#L47
which means the data is actually duplicated on each GPU, e.g., in func:forward_backward in train_util.py:
https://github.com/Shark-NLP/DiffuSeq/blob/bea43e1fd0a954486bc36ad62f2a71dcb2bd300a/train_util.py#L235
i.e., each GPU is processing the same data, which makes distributed training pointless.
Is my conjecture correct?
Just FYI, the training script in Diffusion-LM's repo, train_run.py, uses the transformers training script run_clm.py, in which DistributedSampler is used by the Trainer.
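For comparison, this is roughly how the loader could be wired with `DistributedSampler` (a minimal sketch under my own assumptions, not the repo's actual code; `make_distributed_loader` is an illustrative helper):

```python
import torch
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def make_distributed_loader(dataset, batch_size, seed=42):
    # DistributedSampler shards the index space across ranks, so each GPU
    # iterates over a disjoint 1/world_size slice of the data per epoch.
    sampler = DistributedSampler(dataset, shuffle=True, seed=seed)
    # shuffle= must not be passed to DataLoader when an explicit sampler is given.
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)

# Usage (inside an initialized torch.distributed process group):
# loader = make_distributed_loader(TensorDataset(torch.arange(1024)), batch_size=32)
# for epoch in range(num_epochs):
#     loader.sampler.set_epoch(epoch)  # reshuffle each epoch, consistently across ranks
#     for batch in loader:
#         ...
```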
I understood that you implemented a different data loader with the `shuffle=True` keyword in the `load_data_text` function in `diffuseq/text_datasets.py`. However, this works improperly, for the reasons below.

In `torch.utils.data.DataLoader`, when `shuffle=True` is used, the DataLoader object creates a `torch.utils.data.RandomSampler` instead of a `torch.utils.data.SequentialSampler`. In this case, without the `generator` keyword, `RandomSampler` creates a new `torch.Generator()` instance whose seed is drawn from `torch._C._default_generator` (see the torch source code), which in turn depends on the `transformers.set_seed(args.seed)` call in `train.py`. As a consequence, we get the same data in every process, even though `shuffle=True` is used.

My PR intended to fix this issue: by making only the dataloader generator's seed differ, everything else in each process keeps running with the same seed.
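To make this concrete, here is a standalone toy script (not DiffuSeq code; `first_batch_after_seed` is just an illustrative helper) that mimics two ranks calling `set_seed` with the same value:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def first_batch_after_seed(seed=42):
    torch.manual_seed(seed)                  # what transformers.set_seed(args.seed) boils down to for torch
    dataset = TensorDataset(torch.arange(16))
    # shuffle=True without generator=: the RandomSampler draws its seed from the
    # default generator, which was just reset to `seed` in every "process".
    loader = DataLoader(dataset, batch_size=4, shuffle=True)
    return next(iter(loader))[0]

print(first_batch_after_seed())  # "rank 0"
print(first_batch_after_seed())  # "rank 1" -> identical batch, i.e. duplicated data
```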
@Dawn-LX By default, torch random operations such as `torch.randn` use the default random generator `torch._C._default_generator` (that is the CPU one; a generator for GPU also exists), unless we pass a `generator` keyword. My intent (for the PR) was to make the dataloader use another generator instead of the default one. Meanwhile, in the function `load_model_emb`, when the local rank equals 0 the function calls `torch.nn.init.normal_(model.weight)`, and this is a random operation that uses the default generator. (Model initialization is also a random process, but it is the same in all processes and does not create a gap.) I tested the outputs and checked that the current code returns the same data. Since process 0 uses the default generator while initializing the random embedding (`load_model_emb`), the data of process 0 was different, but all the other processes' data were identical in the output of train.py.

@Dawn-LX It doesn't mean that process 0 uses another seed. In the original code all processes use the same seed; however, process 0 performs ONLY 1 MORE RANDOM OPERATION before the `RandomSampler` is initialized, and this ONLY 1 MORE RANDOM OPERATION produces a different random output (for the dataloader's seed) even with the same seed (a toy version of this is sketched just below).

Thank you very much! I get it!
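For reference, the toy version of the "only one more random operation" argument (a self-contained sketch, not DiffuSeq's code; the `first_batch` helper is illustrative):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def first_batch(rank, seed=42):
    torch.manual_seed(seed)                              # same seed on every rank
    if rank == 0:
        # stands in for torch.nn.init.normal_(model.weight) inside load_model_emb
        torch.nn.init.normal_(torch.empty(8, 8))
    dataset = TensorDataset(torch.arange(16))
    loader = DataLoader(dataset, batch_size=4, shuffle=True)  # sampler seed drawn from default generator
    return next(iter(loader))[0]

print(first_batch(rank=0))  # different: the extra random op advanced the default generator
print(first_batch(rank=1))  # ranks 1..N-1 all produce this same batch
print(first_batch(rank=2))
```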
Yes. My method is fine for the infinite-loop case; however, considering more general cases, DistributedSampler would be a more compact solution. Thank you for reviewing!
If you want to use the same seed per node, you can consider alternative code like the one below:

Step 1. Change this function to my code below.
https://github.com/Shark-NLP/DiffuSeq/blob/bea43e1fd0a954486bc36ad62f2a71dcb2bd300a/diffuseq/text_datasets.py#L11

Step 2. Add a `seed` argument in the training script.
https://github.com/Shark-NLP/DiffuSeq/blob/bea43e1fd0a954486bc36ad62f2a71dcb2bd300a/train.py#L44
(lines 44~63)
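Roughly, the per-rank-generator idea looks like this (my own sketch of the approach, not the actual code from the PR; the function name and arguments are placeholders):

```python
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader

def load_data_text_sketch(dataset, batch_size, seed):
    # Offset the dataloader seed by the process rank so every rank shuffles
    # differently; all other random state still follows the shared seed.
    rank = dist.get_rank() if dist.is_initialized() else 0
    generator = torch.Generator()
    generator.manual_seed(seed + rank)
    return DataLoader(dataset, batch_size=batch_size, shuffle=True, generator=generator)
```

Note that this only de-duplicates the shuffling order; every rank still iterates over the full dataset, which is why `DistributedSampler` is the more standard fix.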
Thank you for your reply! But I am still confused.
`shuffle=True` has nothing to do with "each GPU gets a different batch of data".

To verify my point, we can turn `infinite_loader` off and see how many batch iterations it actually runs. Say a single-GPU training script has a dataloader of 800 iterations. Then for 4-GPU training, the dataloader (with DistributedSampler) will run 200 iterations (for the same batch size). Note that with DistributedSampler & DistributedDataParallel, the batch size of the dataloader is directly the batch size on each GPU. But with the existing multi-GPU training script, the data is duplicated on each GPU, and it will still run 800 iterations for 4-GPU training.
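A quick standalone check of those iteration counts (a sketch with made-up sizes, not the repo's dataloader):

```python
import torch
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

dataset = TensorDataset(torch.arange(102400))
batch_size = 128

# Existing script: every rank builds the same full-size loader.
plain = DataLoader(dataset, batch_size=batch_size, shuffle=True)
print(len(plain))    # 800 iterations on *each* GPU

# With DistributedSampler (4 ranks shown explicitly), each rank only gets its shard.
sharded = DataLoader(
    dataset,
    batch_size=batch_size,
    sampler=DistributedSampler(dataset, num_replicas=4, rank=0, shuffle=True),
)
print(len(sharded))  # 200 iterations per GPU
```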
Hi,
Good question!
We follow the training script in Diffusion-LM's repo script/run_train.py. The train_run.py you mentioned uses run_clm.py to train the classifier instead of the LM itself.
It's true that we "split data per GPU" when we do sampling. That's because we only want to iterate over each test case once and in order. However, when training, we set `shuffle=True`, which means each GPU gets a different batch of data. It functions in the same way as using `DistributedSampler`.