InvokeAI: [bug]: cannot complete installation

Is there an existing issue for this?

  • I have searched the existing issues

OS

Windows

GPU

cuda

VRAM

12 GB

What version did you experience this issue on?

2.3.5

What happened?

My system is Windows 10 with Python 3.10.9.

(InvokeAI) D:\stable-diffusion1\invokeai>invokeai --web
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ C:\Program Files\Python310\lib\runpy.py:196 in _run_module_as_main                               │
│                                                                                                  │
│   193 │   main_globals = sys.modules["__main__"].__dict__                                        │
│   194 │   if alter_argv:                                                                         │
│   195 │   │   sys.argv[0] = mod_spec.origin                                                      │
│ ❱ 196 │   return _run_code(code, main_globals, None,                                             │
│   197 │   │   │   │   │    "__main__", mod_spec)                                                 │
│   198                                                                                            │
│   199 def run_module(mod_name, init_globals=None,                                                │
│                                                                                                  │
│ C:\Program Files\Python310\lib\runpy.py:86 in _run_code                                          │
│                                                                                                  │
│    83 │   │   │   │   │      __loader__ = loader,                                                │
│    84 │   │   │   │   │      __package__ = pkg_name,                                             │
│    85 │   │   │   │   │      __spec__ = mod_spec)                                                │
│ ❱  86 │   exec(code, run_globals)                                                                │
│    87 │   return run_globals                                                                     │
│    88                                                                                            │
│    89 def _run_module_code(code, init_globals=None,                                              │
│                                                                                                  │
│ in <module>:4                                                                                    │
│                                                                                                  │
│   1 # -*- coding: utf-8 -*-                                                                      │
│   2 import re                                                                                    │
│   3 import sys                                                                                   │
│ ❱ 4 from ldm.invoke.CLI import main                                                              │
│   5 if __name__ == '__main__':                                                                   │
│   6 │   sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])                         │
│   7 │   sys.exit(main())                                                                         │
│                                                                                                  │
│ D:\stable-diffusion1\invokeai\.venv\lib\site-packages\ldm\invoke\CLI.py:21 in <module>           │
│                                                                                                  │
│     18                                                                                           │
│     19 import ldm.invoke                                                                         │
│     20                                                                                           │
│ ❱   21 from ..generate import Generate                                                           │
│     22 from .args import (Args, dream_cmd_from_png, metadata_dumps,                              │
│     23 │   │   │   │   │   │   │    metadata_from_png)                                           │
│     24 from .generator.diffusers_pipeline import PipelineIntermediateState                       │
│                                                                                                  │
│ D:\stable-diffusion1\invokeai\.venv\lib\site-packages\ldm\generate.py:32 in <module>             │
│                                                                                                  │
│     29 from omegaconf import OmegaConf                                                           │
│     30 from pathlib import Path                                                                  │
│     31 from PIL import Image, ImageOps                                                           │
│ ❱   32 from pytorch_lightning import logging, seed_everything                                    │
│     33                                                                                           │
│     34 import ldm.invoke.conditioning                                                            │
│     35 from ldm.invoke.args import metadata_from_png                                             │
│                                                                                                  │
│ D:\stable-diffusion1\invokeai\.venv\lib\site-packages\pytorch_lightning\__init__.py:34 in        │
│ <module>                                                                                         │
│                                                                                                  │
│   31 │   _logger.addHandler(logging.StreamHandler())                                             │
│   32 │   _logger.propagate = False                                                               │
│   33                                                                                             │
│ ❱ 34 from pytorch_lightning.callbacks import Callback  # noqa: E402                              │
│   35 from pytorch_lightning.core import LightningDataModule, LightningModule  # noqa: E402       │
│   36 from pytorch_lightning.trainer import Trainer  # noqa: E402                                 │
│   37 from pytorch_lightning.utilities.seed import seed_everything  # noqa: E402                  │
│                                                                                                  │
│ D:\stable-diffusion1\invokeai\.venv\lib\site-packages\pytorch_lightning\callbacks\__init__.py:25 │
│ in <module>                                                                                      │
│                                                                                                  │
│   22 from pytorch_lightning.callbacks.model_checkpoint import ModelCheckpoint                    │
│   23 from pytorch_lightning.callbacks.model_summary import ModelSummary                          │
│   24 from pytorch_lightning.callbacks.prediction_writer import BasePredictionWriter              │
│ ❱ 25 from pytorch_lightning.callbacks.progress import ProgressBarBase, RichProgressBar, TQDMP    │
│   26 from pytorch_lightning.callbacks.pruning import ModelPruning                                │
│   27 from pytorch_lightning.callbacks.quantization import QuantizationAwareTraining              │
│   28 from pytorch_lightning.callbacks.rich_model_summary import RichModelSummary                 │
│                                                                                                  │
│ D:\stable-diffusion1\invokeai\.venv\lib\site-packages\pytorch_lightning\callbacks\progress\__ini │
│ t__.py:22 in <module>                                                                            │
│                                                                                                  │
│   19                                                                                             │
│   20 """                                                                                         │
│   21 from pytorch_lightning.callbacks.progress.base import ProgressBarBase  # noqa: F401         │
│ ❱ 22 from pytorch_lightning.callbacks.progress.rich_progress import RichProgressBar  # noqa:     │
│   23 from pytorch_lightning.callbacks.progress.tqdm_progress import TQDMProgressBar  # noqa:     │
│   24                                                                                             │
│                                                                                                  │
│ D:\stable-diffusion1\invokeai\.venv\lib\site-packages\pytorch_lightning\callbacks\progress\rich_ │
│ progress.py:20 in <module>                                                                       │
│                                                                                                  │
│    17 from datetime import timedelta                                                             │
│    18 from typing import Any, Dict, Optional, Union                                              │
│    19                                                                                            │
│ ❱  20 from torchmetrics.utilities.imports import _compare_version                                │
│    21                                                                                            │
│    22 import pytorch_lightning as pl                                                             │
│    23 from pytorch_lightning.callbacks.progress.base import ProgressBarBase                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ImportError: cannot import name '_compare_version' from 'torchmetrics.utilities.imports'
(D:\stable-diffusion1\invokeai\.venv\lib\site-packages\torchmetrics\utilities\imports.py)
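
For reference, the installed torchmetrics version and the failing import can be checked directly while the venv is active (same prompt as above); the second command should reproduce the same ImportError without going through InvokeAI:

(InvokeAI) D:\stable-diffusion1\invokeai>python -m pip show torchmetrics
(InvokeAI) D:\stable-diffusion1\invokeai>python -c "from torchmetrics.utilities.imports import _compare_version"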

Screenshots

No response

Additional context

No response

Contact Details

No response

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Reactions: 1
  • Comments: 18

Most upvoted comments

I also ran into the same problem, but on macOS, and I solved it like this. First uninstall torchmetrics:

/Users/chengtianlong/invokeai/.venv/bin/python3.10 -m pip uninstall torchmetrics==1.0.0

Then install an older version of torchmetrics:

/Users/chengtianlong/invokeai/.venv/bin/python3.10 -m pip install torchmetrics==0.7.0

Then re-run the command that originally installed InvokeAI, choosing to overwrite the installation in the same directory. Note that the paths in the two commands above may be different on your system, so adjust them as needed.
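
For the Windows setup from the original report, the same two steps would look roughly like this with the venv activated (pinned version as in the comment above; adjust the path and version to your own install):

(InvokeAI) D:\stable-diffusion1\invokeai>python -m pip uninstall torchmetrics
(InvokeAI) D:\stable-diffusion1\invokeai>python -m pip install torchmetrics==0.7.0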

You'll see, it's not that complicated. Just go here: https://github.com/invoke-ai/InvokeAI/issues/3700. It explains what you need to do. If you want a more detailed explanation, here it is: open InvokeAI-Installer\lib\installer.py in an IDE (Visual Studio Code for me, or Notepad if you don't have one) and find this block:

pip[
    "install",
    "--require-virtualenv",
    "torch~=2.0.0",
    "torchvision>=0.14.1",
    "--force-reinstall",
    "--find-links" if find_links is not None else None,
    find_links,
    "--extra-index-url" if extra_index_url is not None else None,
    extra_index_url,
]

Change it to:

pip[
    "install",
    "--require-virtualenv",
    "torch~=2.0.0",
    "torchmetrics==0.11.4",
    "torchvision>=0.14.1",
    "--force-reinstall",
    "--find-links" if find_links is not None else None,
    find_links,
    "--extra-index-url" if extra_index_url is not None else None,
    extra_index_url,
]

(you just have to add the "torchmetrics==0.11.4", line). Then relaunch the installation and it will be fine.
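
If the venv already exists and you don't want to re-run the whole installer, installing the same pin directly into it should also clear this error (untested on my side; it only replaces torchmetrics rather than redoing the full install):

(InvokeAI) D:\stable-diffusion1\invokeai>python -m pip install torchmetrics==0.11.4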

Thanks @ctianlong! On Windows 11, torchmetrics was installed in invokeai/.venv/Lib/site-packages.