ignite: Unable to create DiskSaver when the program is launched with torch.distributed.launch

🐛 Bug description

As mentioned in this issue in MONAI, I tried to run this tutorial code with torch.distributed.launch. However, the program froze while instantiating the CheckpointSaver. The reason is that Ignite's DiskSaver cannot be created when the program is launched with torch.distributed.launch (I am using SLURM). The freeze appears to be caused by the get_rank() call in the one_rank_only decorator, which is used in the definition of DiskSaver: https://github.com/pytorch/ignite/blob/d16d15efbbbfc476702e91f3ab2bc757b839be26/ignite/distributed/utils.py#L595
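If that is the case, the mechanism would be the classic one: a collective call that only rank 0 reaches. Below is a minimal sketch of that pattern with plain torch.distributed (the helper is a hypothetical stand-in, not Ignite's actual code, and it deadlocks by construction when launched with torch.distributed.launch):

import torch.distributed as dist


def rank_zero_helper_with_collective():
    # Hypothetical stand-in for a one_rank_only-style helper: it issues a
    # collective call that every rank is expected to join.
    dist.barrier()
    print('rank-0-only work done')


def main():
    # torch.distributed.launch sets MASTER_ADDR, MASTER_PORT, RANK and
    # WORLD_SIZE, so env:// initialization works out of the box.
    dist.init_process_group(backend='gloo', init_method='env://')

    if dist.get_rank() == 0:
        # Only rank 0 calls the helper, so the barrier inside it is never
        # matched by the other ranks and rank 0 blocks forever.
        rank_zero_helper_with_collective()

    dist.destroy_process_group()


if __name__ == '__main__':
    main()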

I also ran a simple experiment to verify this. I launched the following script with srun python -m torch.distributed.launch --nproc_per_node=4 --nnodes=1 script.py and found that it froze while creating the DiskSaver.

import torch.distributed as dist
from argparse import ArgumentParser

from ignite.handlers import DiskSaver


def create_disk_saver(args):
    dist.init_process_group(backend='nccl', init_method='env://')

    if dist.get_rank() == 0:
        print('building DiskSaver')
        disk_saver = DiskSaver(dirname='./runs/')  # <- the program freezes here
        print('DiskSaver built')

    dist.destroy_process_group()


def main():
    parser = ArgumentParser()
    # torch.distributed.launch passes --local_rank to each process.
    parser.add_argument('--local_rank', type=int)
    args = parser.parse_args()
    create_disk_saver(args)


if __name__ == '__main__':
    main()

I would much appreciate it if you could fix this. I prefer launching the program with torch.distributed.launch over the ignite.distributed.Parallel context manager, as it causes fewer issues with the SLURM environment.

Environment

  • PyTorch Version (e.g., 1.4): 1.8
  • Ignite Version (e.g., 0.3.0): 0.4.4
  • OS (e.g., Linux): Linux
  • How you installed Ignite (conda, pip, source): pip
  • Python version: 3.8
  • Any other relevant information:

About this issue

  • Original URL
  • State: open
  • Created 3 years ago
  • Reactions: 2
  • Comments: 27 (21 by maintainers)

Most upvoted comments

@sandylaker I tried a few runs on my company's cluster.

1 - using srun and torch.distributed.launch without ignite.distributed.Parallel

script, README

srun -N1 -n1 python -m torch.distributed.launch --nproc_per_node 2 helloworld.py

> [http://127.0.0.1:29500] hello from [ener021:nccl] process 0/2
> [http://127.0.0.1:29500] hello from [ener021:nccl] process 1/2

NOTE: I removed some mandatory options like -J, -p, --mem, etc. that are specific to my cluster's configuration.

srun -N1 -n1 python -m torch.distributed.launch --nproc_per_node 8 helloworld.py --backend="gloo"

> [http://127.0.0.1:29500] hello from [ener021:gloo] process 0/8
> [http://127.0.0.1:29500] hello from [ener021:gloo] process 1/8
> [http://127.0.0.1:29500] hello from [ener021:gloo] process 2/8
> [http://127.0.0.1:29500] hello from [ener021:gloo] process 3/8
> [http://127.0.0.1:29500] hello from [ener021:gloo] process 4/8
> [http://127.0.0.1:29500] hello from [ener021:gloo] process 5/8
> [http://127.0.0.1:29500] hello from [ener021:gloo] process 6/8
> [http://127.0.0.1:29500] hello from [ener021:gloo] process 7/8

2 - using srun without torch.distributed.launch and ignite.distributed.Parallel

script, README

srun -N1 -n2 python helloworld.py

> [http://ener021:22163] hello from [ener021:nccl] process 0/2
> [http://ener021:22163] hello from [ener021:nccl] process 1/2
srun -N1 -n8 python helloworld.py --backend="gloo"

> [http://ener021:22165] hello from [ener021:gloo] process 0/8
> [http://ener021:22165] hello from [ener021:gloo] process 1/8
> [http://ener021:22165] hello from [ener021:gloo] process 2/8
> [http://ener021:22165] hello from [ener021:gloo] process 3/8
> [http://ener021:22165] hello from [ener021:gloo] process 4/8
> [http://ener021:22165] hello from [ener021:gloo] process 5/8
> [http://ener021:22165] hello from [ener021:gloo] process 6/8
> [http://ener021:22165] hello from [ener021:gloo] process 7/8

3 - using srun and torch.distributed.launch with ignite.distributed.Parallel

script, README

One script, both usages.
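The helloworld.py is roughly of the following shape (a sketch only; see the linked script for the exact code, the printing format and the default backend may differ):

import argparse
import socket

import ignite.distributed as idist


def main_fn(local_rank):
    # Each process reports its host, backend, rank and world size.
    print(f"[{socket.gethostname()}] hello from "
          f"[{socket.gethostname()}:{idist.backend()}] "
          f"process {idist.get_rank()}/{idist.get_world_size()}")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--backend", type=str, default="nccl")
    args = parser.parse_args()

    # idist.Parallel detects from the environment whether the script was
    # started by torch.distributed.launch, by SLURM (srun/sbatch) or as a
    # plain single process, and initializes/finalizes the group accordingly.
    with idist.Parallel(backend=args.backend) as parallel:
        parallel.run(main_fn)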

On a compute node, use torch.distributed.launch

python -m torch.distributed.launch --nproc_per_node 2 --use_env helloworld.py

> 2021-06-08 08:57:28,548 ignite.distributed.launcher.Parallel INFO: Initialized processing group with backend: 'nccl'
> 2021-06-08 08:57:28,548 ignite.distributed.launcher.Parallel INFO: - Run '<function main_fn at 0x2aac7e5bf4c0>' in 2 processes
> [http://127.0.0.1:29500] hello from [ener021:nccl] process 0/2
> [http://127.0.0.1:29500] hello from [ener021:nccl] process 1/2
> 2021-06-08 08:57:28,548 ignite.distributed.launcher.Parallel INFO: End of run
> 2021-06-08 08:57:28,548 ignite.distributed.launcher.Parallel INFO: Finalized processing group with backend: 'nccl'
python -m torch.distributed.launch --nproc_per_node 8 --use_env helloworld.py --backend="gloo"

> 2021-06-08 08:58:22,682 ignite.distributed.launcher.Parallel INFO: Initialized processing group with backend: 'gloo'
> 2021-06-08 08:58:22,683 ignite.distributed.launcher.Parallel INFO: - Run '<function main_fn at 0x2b0ae40ec4c0>' in 8 processes
> [http://127.0.0.1:29500] hello from [ener021:gloo] process 0/8
> [http://127.0.0.1:29500] hello from [ener021:gloo] process 1/8
> [http://127.0.0.1:29500] hello from [ener021:gloo] process 2/8
> [http://127.0.0.1:29500] hello from [ener021:gloo] process 3/8
> [http://127.0.0.1:29500] hello from [ener021:gloo] process 4/8
> [http://127.0.0.1:29500] hello from [ener021:gloo] process 5/8
> [http://127.0.0.1:29500] hello from [ener021:gloo] process 6/8
> [http://127.0.0.1:29500] hello from [ener021:gloo] process 7/8
> 2021-06-08 08:58:22,685 ignite.distributed.launcher.Parallel INFO: End of run
> 2021-06-08 08:58:22,685 ignite.distributed.launcher.Parallel INFO: Finalized processing group with backend: 'gloo'

On the front-end node, use srun (or sbatch)

srun -N1 -n2 python helloworld.py

> 2021-06-08 09:00:56,121 ignite.distributed.launcher.Parallel INFO: Initialized processing group with backend: 'nccl'
> 2021-06-08 09:00:56,121 ignite.distributed.launcher.Parallel INFO: - Run '<function main_fn at 0x2b10e3ce34c0>' in 2 processes
> [http://ener021:22182] hello from [ener021:nccl] process 0/2
> [http://ener021:22182] hello from [ener021:nccl] process 1/2
> 2021-06-08 09:00:56,132 ignite.distributed.launcher.Parallel INFO: End of run
> 2021-06-08 09:00:56,132 ignite.distributed.launcher.Parallel INFO: Finalized processing group with backend: 'nccl'
srun -N1 -n8 python helloworld.py --backend="gloo"

> 2021-06-08 09:02:26,940 ignite.distributed.launcher.Parallel INFO: Initialized processing group with backend: 'gloo'
> 2021-06-08 09:02:26,941 ignite.distributed.launcher.Parallel INFO: - Run '<function main_fn at 0x2b3c2f58f4c0>' in 8 processes
> [http://ener021:22185] hello from [ener021:gloo] process 0/8
> [http://ener021:22185] hello from [ener021:gloo] process 1/8
> [http://ener021:22185] hello from [ener021:gloo] process 2/8
> [http://ener021:22185] hello from [ener021:gloo] process 3/8
> [http://ener021:22185] hello from [ener021:gloo] process 4/8
> [http://ener021:22185] hello from [ener021:gloo] process 5/8
> [http://ener021:22185] hello from [ener021:gloo] process 6/8
> [http://ener021:22185] hello from [ener021:gloo] process 7/8
> 2021-06-08 09:02:26,946 ignite.distributed.launcher.Parallel INFO: End of run
> 2021-06-08 09:02:26,947 ignite.distributed.launcher.Parallel INFO: Finalized processing group with backend: 'gloo'

HTH

Just looking at your code, it can't work if you create the DiskSaver in an if section restricted to one process. It seems that DiskSaver's __init__ needs to be a collective call.
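Something like this should work instead (a rough sketch of the repro, assuming DiskSaver itself restricts the actual disk writes to rank 0 via one_rank_only):

import torch.distributed as dist

from ignite.handlers import DiskSaver


def create_disk_saver():
    dist.init_process_group(backend='nccl', init_method='env://')

    # Every rank takes part in constructing the DiskSaver (collective call);
    # the handler is expected to perform the actual saving on rank 0 only.
    disk_saver = DiskSaver(dirname='./runs/')

    if dist.get_rank() == 0:
        print('DiskSaver built')

    dist.destroy_process_group()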

@vfdev-5 Yes, that's what I noticed when looking at the code a few days ago. However, you explained it better 😊

Parallel / sequential sections remain a tricky (and classical) issue in parallel computing. Having to manage both behaviours (collective calls, similar to reductions, versus per-process guards) makes the code more complicated. One idea would be to define all handlers collectively; that avoids the if clauses and is simpler.

That said, I don't know whether the bug label should be added to this issue.

Last thing: I didn't understand how idist.sync() would help; it doesn't remove the collective code section, does it?

You can do as you prefer, but with ignite.distributed.Parallel you would be able to use torch.distributed.launch, torch.multiprocessing.spawn, SLURM, XLA and Horovod as well, with a single code base.

Please have a look here https://github.com/sdesrozis/why-ignite/tree/main/basics

We are currently finishing a blog post explaining how Ignite can help with parallel computing.

HTH

@sandylaker could you please test this code with the nightly version (pip install --pre pytorch-ignite)? I think it should raise this runtime error: https://github.com/pytorch/ignite/blob/d16d15efbbbfc476702e91f3ab2bc757b839be26/ignite/distributed/comp_models/native.py#L218-L222

In general, I think calling srun python -m torch.distributed.launch --nproc_per_node=4 --nnodes=1 script.py is incorrect: srun creates a new job step with 1 process per node, while torch.distributed.launch then spawns 4 processes per node. What do you think?
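For example (SLURM options stripped as in the runs above), either let SLURM create the 4 processes and have ignite.distributed.Parallel (or your own code) read the SLURM environment:

srun -N1 -n4 python script.py

or allocate a single task and let the launcher spawn the 4 workers on that node:

srun -N1 -n1 python -m torch.distributed.launch --nproc_per_node=4 --nnodes=1 script.py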
