mmsegmentation: Distributed training hangs due to missing keys in `mmseg.segmentors.base.BaseSegmentor._parse_losses`
When training on multiple GPUs, my customized model gets stuck. When training on a single GPU, it works fine. Ctrl+C gives me the following error stack:
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py", line 173, in <module>
main()
File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py", line 169, in main
run(args)
File "/usr/local/lib/python3.6/dist-packages/torch/distributed/run.py", line 624, in run
)(*cmd_args)
File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launcher/api.py", line 116, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/usr/local/lib/python3.6/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 348, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launcher/api.py", line 238, in launch_agent
result = agent.run()
File "/usr/local/lib/python3.6/dist-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper
result = f(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/distributed/elastic/agent/server/api.py", line 700, in run
result = self._invoke_run(role)
File "/usr/local/lib/python3.6/dist-packages/torch/distributed/elastic/agent/server/api.py", line 828, in _invoke_run
time.sleep(monitor_interval)
KeyboardInterrupt
I cannot find much useful information online. Any advice on how to debug this further?
Environment:
------------------------------------------------------------
sys.platform: linux
Python: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: GeForce GTX 1080 Ti
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 10.2, V10.2.89
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.9.0+cu102
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.1.2 (Git Hash 98be7e8afa711dc9b66c8ff3504129cb82013cdb)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 10.2
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70
- CuDNN 7.6.5
- Magma 2.5.2
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=10.2, CUDNN_VERSION=7.6.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.9.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,
TorchVision: 0.10.0+cu102
OpenCV: 4.5.3
MMCV: 1.3.14
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 10.2
MMSegmentation: 0.18.0+ef68770
------------------------------------------------------------
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Comments: 19 (19 by maintainers)
Actually I can do this; maybe when I have time, I'll create a PR. Closing this.
@MengzhangLI Can I pass a `None` for `log_vars` to skip this item?

@MengzhangLI In the `_parse_losses` function of `mmseg.segmentors.base.BaseSegmentor`, it attempts to synchronize the loss values among all GPUs. What happens in this loop (line 194) is that one GPU (call it GPU A) does not have `"roi_acc"` as a `loss_name` (suppose it is the last key in `log_vars`). GPU A then thinks it has done all its work and jumps out of the loop. The other GPUs, which do have the final `"roi_acc"` key, call `torch.distributed.all_reduce` on it and wait indefinitely for a reply from GPU A.
A quick fix is to delete this `roi_acc`, or to set it to zero for unavailable data. A better fix would be for `_parse_losses` to divide by a runtime counter instead of `dist.get_world_size()`, with those counters not treated as metric variables. GPUs without a `"roi_acc"` key would simply set it to zero (or use a `defaultdict`).