stablediffusion: ModuleNotFoundError: No module named 'torchtext.legacy'
Walked through the README and got this. I didn't use conda to install PyTorch, though; I might try that instead.
!python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt models/ldm/768-v-ema.ckpt --config configs/stable-diffusion/v2-inference-v.yaml --H 768 --W 768
Traceback (most recent call last):
File "scripts/txt2img.py", line 11, in <module>
from pytorch_lightning import seed_everything
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/__init__.py", line 20, in <module>
from pytorch_lightning import metrics # noqa: E402
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/__init__.py", line 15, in <module>
from pytorch_lightning.metrics.classification import ( # noqa: F401
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/classification/__init__.py", line 14, in <module>
from pytorch_lightning.metrics.classification.accuracy import Accuracy # noqa: F401
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/classification/accuracy.py", line 18, in <module>
from pytorch_lightning.metrics.utils import deprecated_metrics, void
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/utils.py", line 29, in <module>
from pytorch_lightning.utilities import rank_zero_deprecation
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/__init__.py", line 18, in <module>
from pytorch_lightning.utilities.apply_func import move_data_to_device # noqa: F401
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/apply_func.py", line 31, in <module>
from torchtext.legacy.data import Batch
ModuleNotFoundError: No module named 'torchtext.legacy'
https://colab.research.google.com/drive/10jKS9pAB2bdN3SHekZzoKzm4jo2F4W1Q?usp=sharing
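For context, torchtext dropped the torchtext.legacy namespace in its 0.12 release, so any pytorch_lightning build that still imports torchtext.legacy (as this one does in apply_func.py) breaks on newer Colab images. A quick way to confirm which versions are in play, sketched as a Colab cell (importing pytorch_lightning itself would just reproduce the error, so use pip show for that one):

# Print the versions involved; torchtext >= 0.12 no longer ships torchtext.legacy,
# which the installed pytorch_lightning release still tries to import.
import torch, torchtext
print("torch:", torch.__version__)
print("torchtext:", torchtext.__version__)
!pip show pytorch-lightning | grep -i version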
If you run the suggested commands, each variation just produces a different error.
The issue isn't torchtext, it's your version of pytorch_lightning. Here's the most recent version that should work:
!pip install pytorch-lightning==1.8.3.post0

Thanks! This resolved it, together with the additional Colab recommendations above.
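For completeness, here is a rough sketch of how that pin fits into the README steps in a Colab notebook. The repo URL and the requirements step are assumptions on my part (the README may point you at the conda environment.yaml instead), and the checkpoint/config paths are just the ones from the command at the top of this issue:

# Assumed repo location; adjust if you cloned from somewhere else
!git clone https://github.com/Stability-AI/stablediffusion.git
%cd stablediffusion
# Install the repo dependencies (or follow the conda instructions in the README)
!pip install -r requirements.txt
# Pin pytorch-lightning to a release that no longer imports torchtext.legacy
!pip install pytorch-lightning==1.8.3.post0
# Download 768-v-ema.ckpt into models/ldm/ per the README, then run:
!python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt models/ldm/768-v-ema.ckpt --config configs/stable-diffusion/v2-inference-v.yaml --H 768 --W 768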
Guys, how do you test and release software? What's your DevOps process? It looks like there wasn't any testing before the announcement was made. I know people in the DS world avoid SE best practices. Let me know if you need help in the future with testing and with giving the community correct instructions, to really make ML more open.
I don’t think it is possible to run this script on the free version of Google Colab. Thanks for the help though.
For reference, this is what my Google Colab code looked like:
However, the good news is that the model is being integrated into HuggingFace's Diffusers:
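Once that lands, usage should look roughly like the sketch below. This is based on the general Diffusers text-to-image API, and the stabilityai/stable-diffusion-2 model id is my assumption, not something confirmed in this thread:

# Hedged sketch of running the SD 2.x 768 model via Diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",  # assumed model id for the 768-v checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe(
    "a professional photograph of an astronaut riding a horse",
    height=768,
    width=768,
).images[0]
image.save("astronaut.png")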
@woctezuma try the High-RAM runtime type if you have it:
In Colab: Runtime -> Change runtime type -> Runtime shape -> High-RAM
Also, for xformers there are precompiled versions you can grab if you don't want to wait for a build: https://github.com/TheLastBen/fast-stable-diffusion/tree/main/precompiled