stablediffusion: ModuleNotFoundError: No module named 'torchtext.legacy'

I walked through the README and got this error. I didn’t use conda to install PyTorch, though; I might try that instead.

!python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt models/ldm/768-v-ema.ckpt --config configs/stable-diffusion/v2-inference-v.yaml --H 768 --W 768 
Traceback (most recent call last):
  File "scripts/txt2img.py", line 11, in <module>
    from pytorch_lightning import seed_everything
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/__init__.py", line 20, in <module>
    from pytorch_lightning import metrics  # noqa: E402
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/__init__.py", line 15, in <module>
    from pytorch_lightning.metrics.classification import (  # noqa: F401
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/classification/__init__.py", line 14, in <module>
    from pytorch_lightning.metrics.classification.accuracy import Accuracy  # noqa: F401
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/classification/accuracy.py", line 18, in <module>
    from pytorch_lightning.metrics.utils import deprecated_metrics, void
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/utils.py", line 29, in <module>
    from pytorch_lightning.utilities import rank_zero_deprecation
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/__init__.py", line 18, in <module>
    from pytorch_lightning.utilities.apply_func import move_data_to_device  # noqa: F401
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/apply_func.py", line 31, in <module>
    from torchtext.legacy.data import Batch
ModuleNotFoundError: No module named 'torchtext.legacy'

https://colab.research.google.com/drive/10jKS9pAB2bdN3SHekZzoKzm4jo2F4W1Q?usp=sharing

About this issue

  • State: closed
  • Created 2 years ago
  • Comments: 15

Most upvoted comments

If you run:

%pip install -q torchtext==0.9

then you get a different error:

/usr/local/lib/python3.7/dist-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory
  warn(f"Failed to load image Python extension: {e}")
Traceback (most recent call last):
  File "scripts/txt2img.py", line 10, in <module>
    from torchvision.utils import make_grid
  File "/usr/local/lib/python3.7/dist-packages/torchvision/__init__.py", line 7, in <module>
    from torchvision import models
  File "/usr/local/lib/python3.7/dist-packages/torchvision/models/__init__.py", line 18, in <module>
    from . import quantization
  File "/usr/local/lib/python3.7/dist-packages/torchvision/models/quantization/__init__.py", line 3, in <module>
    from .mobilenet import *
  File "/usr/local/lib/python3.7/dist-packages/torchvision/models/quantization/mobilenet.py", line 1, in <module>
    from .mobilenetv2 import *  # noqa: F401, F403
  File "/usr/local/lib/python3.7/dist-packages/torchvision/models/quantization/mobilenetv2.py", line 6, in <module>
    from torch.ao.quantization import QuantStub, DeQuantStub
ModuleNotFoundError: No module named 'torch.ao'

If you run:

%pip install -q torchtext==0.10

then you get a different error:

ModuleNotFoundError: No module named 'torch.ao.quantization'

If you run:

%pip install -q torchtext==0.11

then you get a different error:

ImportError: cannot import name 'QuantStub' from 'torch.ao.quantization' (/usr/local/lib/python3.7/dist-packages/torch/ao/quantization/__init__.py)

If you run:

%pip install -q torchtext==0.12

then you are back to the original error:

/usr/local/lib/python3.7/dist-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory
  warn(f"Failed to load image Python extension: {e}")
Traceback (most recent call last):
  File "scripts/txt2img.py", line 11, in <module>
    from pytorch_lightning import seed_everything
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/__init__.py", line 20, in <module>
    from pytorch_lightning import metrics  # noqa: E402
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/__init__.py", line 15, in <module>
    from pytorch_lightning.metrics.classification import (  # noqa: F401
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/classification/__init__.py", line 14, in <module>
    from pytorch_lightning.metrics.classification.accuracy import Accuracy  # noqa: F401
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/classification/accuracy.py", line 18, in <module>
    from pytorch_lightning.metrics.utils import deprecated_metrics, void
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/utils.py", line 29, in <module>
    from pytorch_lightning.utilities import rank_zero_deprecation
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/__init__.py", line 18, in <module>
    from pytorch_lightning.utilities.apply_func import move_data_to_device  # noqa: F401
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/apply_func.py", line 31, in <module>
    from torchtext.legacy.data import Batch
ModuleNotFoundError: No module named 'torchtext.legacy'
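
In other words, every pinned torchtext either lacks `torchtext.legacy` (removed in newer releases) or needs a newer torch than the runtime has. A quick way to probe which namespace is actually present, without rerunning the whole script, is a small stdlib check (a sketch; the module names in the example are stdlib stand-ins, swap in `torchtext.legacy` etc. in the Colab cell):

```python
from importlib import util

def has_module(name: str) -> bool:
    """Return True if `name` resolves to an importable module,
    without fully importing (and executing) it."""
    try:
        return util.find_spec(name) is not None
    except ModuleNotFoundError:
        # raised when a parent package (e.g. `torchtext`) is missing
        return False

# Stdlib stand-ins; in Colab try has_module("torchtext.legacy") instead.
print(has_module("json"))             # True
print(has_module("json.not_there"))   # False
```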

The issue isn’t torchtext, it’s your version of pytorch_lightning. Here’s the most recent version that should work:

!pip install pytorch-lightning==1.8.3.post0
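
If you want to sanity-check the pin from inside the notebook before rerunning the script, a small stdlib helper can compare the installed version against a minimum (a sketch; the threshold shown is just the version suggested above, and `version_tuple` deliberately ignores suffixes like `.post0`):

```python
import re
from importlib import metadata

def version_tuple(v: str):
    """Extract the leading numeric components of a version string,
    e.g. '1.8.3.post0' -> (1, 8, 3)."""
    return tuple(int(n) for n in re.findall(r"\d+", v)[:3])

def check_min(package: str, minimum: str) -> bool:
    """True if `package` is installed at `minimum` or newer."""
    try:
        installed = metadata.version(package)
    except metadata.PackageNotFoundError:
        return False
    return version_tuple(installed) >= version_tuple(minimum)

# In the Colab runtime: check_min("pytorch-lightning", "1.8.3")
```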

Thanks! This resolved it, together with the additional Colab recommendations above:

!pip install pytorch-lightning==1.8.3.post0

How do you test and release software? What’s your DevOps process? It looks like there was no testing before the announcement. I know people in the data science world often skip software engineering best practices. Let me know if you’d like help with testing in the future, so the community gets correct instructions and ML really becomes more open.

I don’t think it is possible to run this script on the free version of Google Colab. Thanks for the help though.

For reference, this is what my Google Colab code looked like:

%cd /content
!git clone https://github.com/Stability-AI/stablediffusion.git
%cd /content/stablediffusion

!curl https://raw.githubusercontent.com/backnotprop/stablediffusion/backnotprop-patch-pytorch_lighting/requirements.txt -O
%pip install -q -r requirements.txt

%pip install -q invisible-watermark

!nvidia-smi #T4
%pip install https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled/T4/xformers-0.0.13.dev0-py3-none-any.whl

!wget https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/main/512-base-ema.ckpt

!python scripts/txt2img.py \
 --config configs/stable-diffusion/v2-inference.yaml \
 --ckpt 512-base-ema.ckpt \
 --n_samples 1 --H 256 --W 256 \
 --prompt "a professional photograph of an astronaut riding a horse" 

However, the good news is that the model is being integrated into HuggingFace’s Diffusers.
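
Once that lands, running the model through Diffusers should sidestep these dependency pins entirely. A rough sketch of what that usage looks like (hedged: the pipeline API is from the Diffusers docs, the model id matches the Hub repo the checkpoint above comes from, and it still needs a GPU runtime; imports are kept inside the function so nothing heavy runs until you call it):

```python
def generate(prompt: str, out_path: str = "out.png") -> str:
    """Sketch: text-to-image via HuggingFace Diffusers. Assumes a CUDA
    runtime and that `diffusers` and `transformers` are installed."""
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-base",  # repo id on the Hub
        torch_dtype=torch.float16,
    ).to("cuda")
    image = pipe(prompt).images[0]
    image.save(out_path)
    return out_path

# e.g. generate("a professional photograph of an astronaut riding a horse")
```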

@woctezuma try the High-RAM runtime type if you have access to it:

In Colab : Runtime-> Change runtime type -> Runtime shape -> High-RAM

Also, for xformers there are precompiled versions you can grab if you don’t want to wait for a build: https://github.com/TheLastBen/fast-stable-diffusion/tree/main/precompiled