pytorch-lightning: tpu_cores=8 not working

🐛 Bug

After #2016 was fixed by PR #2033, the code runs perfectly on a single TPU core or on one specific TPU core, but it no longer works with 8 TPU cores: after training completes, it raises RuntimeError: Cannot replicate if number of devices (1) is different from 8.

To Reproduce

Colab notebook
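
The linked notebook is not reproduced here; below is a minimal sketch of the failing setup, assuming a toy model and random data (BoringModel, the dataset sizes, and max_epochs are placeholders, not taken from the report):

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class BoringModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        return {"loss": loss}

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

# random toy data; shapes and sizes are arbitrary
dataset = TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
train_loader = DataLoader(dataset, batch_size=8)

model = BoringModel()
# tpu_cores=8 is the failing configuration
trainer = pl.Trainer(tpu_cores=8, max_epochs=1)
trainer.fit(model, train_loader)
# after training completes:
# RuntimeError: Cannot replicate if number of devices (1) is different from 8
```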

Expected behavior

Training should run on 8 TPU cores without error, just as it does on a single core.
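
For contrast, a sketch of the configurations involved (the specific core index below is illustrative, not from the report):

```python
import pytorch_lightning as pl

trainer = pl.Trainer(tpu_cores=1)    # works: train on a single TPU core
trainer = pl.Trainer(tpu_cores=[5])  # works: train on one specific core (index illustrative)
trainer = pl.Trainer(tpu_cores=8)    # fails after training with the RuntimeError above
```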

Environment

  • pytorch/xla: nightly
  • pytorch-lightning: master
  • PyTorch version: 1.5
  • OS: Linux
  • How PyTorch was installed: pip
  • Python version: 3.7

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 15 (13 by maintainers)

Most upvoted comments

May we add a test for it so we can fix it later?
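
A rough sketch of what such a regression test might look like, reusing BoringModel and train_loader from the reproduction sketch above and assuming a pytest-based suite (the test name and the torch_xla availability check are assumptions, not the project's actual test helpers):

```python
import pytest
import pytorch_lightning as pl

try:
    import torch_xla  # noqa: F401
    TPU_AVAILABLE = True
except ImportError:
    TPU_AVAILABLE = False

@pytest.mark.skipif(not TPU_AVAILABLE, reason="test requires torch_xla / a TPU machine")
def test_model_tpu_cores_8(tmpdir):
    # regression test: training on all 8 cores should finish without
    # "Cannot replicate if number of devices (1) is different from 8"
    model = BoringModel()  # toy model sketched under "To Reproduce"
    trainer = pl.Trainer(default_root_dir=str(tmpdir), tpu_cores=8, max_epochs=1)
    trainer.fit(model, train_loader)
```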