Stable-Diffusion-WebUI-TensorRT: Error installing in Automatic1111
Here is the error in the console:
Error running install.py for extension D:\repos\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT.
*** Command: "d:\repos\stable-diffusion-webui\venv\Scripts\python.exe" "D:\repos\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\install.py"
*** Error code: 1
*** stdout: Looking in indexes: https://pypi.org/simple, https://pypi.nvidia.com
*** Collecting tensorrt==9.0.1.post11.dev4
*** Downloading https://pypi.nvidia.com/tensorrt/tensorrt-9.0.1.post11.dev4.tar.gz (18 kB)
*** Preparing metadata (setup.py): started
*** Preparing metadata (setup.py): finished with status 'done'
*** Building wheels for collected packages: tensorrt
*** Building wheel for tensorrt (setup.py): started
*** Building wheel for tensorrt (setup.py): still running...
*** Building wheel for tensorrt (setup.py): finished with status 'done'
*** Created wheel for tensorrt: filename=tensorrt-9.0.1.post11.dev4-py2.py3-none-any.whl size=17618 sha256=e059e2b3b7dd7ecf4c805ab6f2b4589ddb43b0959bfa66178fa0d01559ba1ef8
*** Stored in directory: c:\users\X\appdata\local\pip\cache\wheels\d1\6d\71\f679d0d23a60523f9a05445e269bfd0bcd1c5272097fa931df
*** Successfully built tensorrt
*** Installing collected packages: tensorrt
*** Successfully installed tensorrt-9.0.1.post11.dev4
*** Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
*** Collecting polygraphy
*** Downloading polygraphy-0.49.0-py2.py3-none-any.whl (327 kB)
*** -------------------------------------- 327.9/327.9 kB 4.1 MB/s eta 0:00:00
*** Installing collected packages: polygraphy
*** Successfully installed polygraphy-0.49.0
*** Collecting protobuf==3.20.2
*** Downloading protobuf-3.20.2-cp310-cp310-win_amd64.whl (904 kB)
*** -------------------------------------- 904.0/904.0 kB 4.4 MB/s eta 0:00:00
*** Installing collected packages: protobuf
*** Attempting uninstall: protobuf
*** Found existing installation: protobuf 3.20.0
*** Uninstalling protobuf-3.20.0:
*** Successfully uninstalled protobuf-3.20.0
*** TensorRT is not installed! Installing...
*** Installing nvidia-cudnn-cu11
*** Installing tensorrt
*** removing nvidia-cudnn-cu11
*** Polygraphy is not installed! Installing...
*** Installing polygraphy
*** GS is not installed! Installing...
*** Installing protobuf
***
*** stderr: A matching Triton is not available, some optimizations will not be enabled.
*** Error caught was: No module named 'triton'
*** d:\repos\stable-diffusion-webui\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
*** rank_zero_deprecation(
***
*** [notice] A new release of pip available: 22.2.1 -> 23.2.1
*** [notice] To update, run: d:\repos\stable-diffusion-webui\venv\Scripts\python.exe -m pip install --upgrade pip
***
*** [notice] A new release of pip available: 22.2.1 -> 23.2.1
*** [notice] To update, run: d:\repos\stable-diffusion-webui\venv\Scripts\python.exe -m pip install --upgrade pip
*** ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied: 'D:\\repos\\stable-diffusion-webui\\venv\\Lib\\site-packages\\google\\~rotobuf\\internal\\_api_implementation.cp310-win_amd64.pyd'
*** Check the permissions.
***
***
*** [notice] A new release of pip available: 22.2.1 -> 23.2.1
*** [notice] To update, run: d:\repos\stable-diffusion-webui\venv\Scripts\python.exe -m pip install --upgrade pip
*** Traceback (most recent call last):
*** File "D:\repos\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\install.py", line 30, in <module>*** install()
*** File "D:\repos\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\install.py", line 19, in install
*** launch.run_pip("install protobuf==3.20.2", "protobuf", live=True)
*** File "d:\repos\stable-diffusion-webui\modules\launch_utils.py", line 138, in run_pip
*** return run(f'"{python}" -m pip {command} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}", live=live)
*** File "d:\repos\stable-diffusion-webui\modules\launch_utils.py", line 115, in run
*** raise RuntimeError("\n".join(error_bits))
*** RuntimeError: Couldn't install protobuf.
*** Command: "d:\repos\stable-diffusion-webui\venv\Scripts\python.exe" -m pip install protobuf==3.20.2 --prefer-binary
*** Error code: 1
And then when I restarted the webui, I got these popups:
What does that mean?
Watch the video below to learn how to compile SDXL TensorRT and use it - it includes both the manual and the auto way.
RTX Acceleration Quick Tutorial With Auto Installer V2 SDXL - Tensor RT
This is the correct answer for me, thank you.
same issue here
would you please stop polluting threads with clickbait images?
Here is a quick tutorial on how to install; the big tutorial is still being edited.
RTX Acceleration Quick Tutorial With Auto Installer
I just finished watching this as you posted it… great video and great job covering all the basics and not-so-basics 😃
First run: venv\scripts\activate.bat
This runs the activation batch file in the venv\scripts folder, which activates the venv virtual Python environment that automatic1111 runs in.
You’ll be able to tell the virtual environment is active if the beginning of your command prompt line shows (venv).
Then you’ll need to run python -m pip uninstall -y nvidia-cudnn-cu11
This removes cuDNN, which isn’t needed to run TensorRT but is currently needed to install it; having it left installed results in the error you are seeing. The normal install process should remove it automatically so this doesn’t occur, but that is currently being addressed.
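Put together, the steps look like this (a sketch, using the install path from the log above):
cd /d D:\repos\stable-diffusion-webui
rem activate the venv; the prompt should now start with (venv)
venv\Scripts\activate.bat
rem remove cuDNN so TensorRT's own DLLs get used
python -m pip uninstall -y nvidia-cudnn-cu11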
Install, and then switch to the dev version of auto1111.
SDXL working.
For my part, the
python -m pip uninstall -y nvidia-cudnn-cu11
didn’t seem to work, as the extension was “not installed”. So instead I went to
venv\Lib\site-packages
and I removed the cudnn dist-info folder & the cudnn folder inside the nvidia folder. It seems to be working fine. At least, I can start without errors, and I can start generating the engine. It seems to be generating without issues now. Edit: I can confirm that after doing this fix, everything works for me.
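For reference, that manual removal amounts to something like this (a sketch; the dist-info folder name depends on your cuDNN version, so check what is actually there first):
cd /d D:\repos\stable-diffusion-webui\venv\Lib\site-packages
rem confirm the exact folder names before deleting anything
dir nvidia
dir *cudnn*
rem delete the cudnn package folder and its dist-info (the version shown is hypothetical)
rmdir /s /q nvidia\cudnn
rmdir /s /q nvidia_cudnn_cu11-8.9.4.25.dist-info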
the problem is medvram. Solved!
Ok, figured out the pip freeze.
Make sure you’re in a cmd.exe terminal, not PowerShell.
The --medvram-sdxl argument was the issue behind the following error:
ERROR:root:Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
Removed it and the Engine Export is now working.
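If your launch flags live in webui-user.bat (the usual place for automatic1111), removing it looks roughly like this; --api is just a stand-in for whatever other flags you keep:
rem webui-user.bat, before:
rem set COMMANDLINE_ARGS=--medvram-sdxl --api
rem after, with --medvram-sdxl dropped:
set COMMANDLINE_ARGS=--api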
You removed the generated trt model from the Unet-trt folder? If so, you need to also remove the reference to that model in the model.json file. Or if you only have one model generated, just delete the model file and the .json file both.
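For the single-model case, the cleanup is something like this (the engine file name is hypothetical - use whatever is actually in your models\Unet-trt folder):
cd /d D:\repos\stable-diffusion-webui\models\Unet-trt
rem delete the generated engine and its metadata together
del v1-5-pruned-emaonly.trt
del model.json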
I tried stopping and restarting the webui server. I made sure I am on the SD 1.5 model. If I try to export the default engine I get the same full error that I did above, ending in
'AsyncRequest' object has no attribute '_json_response_data'
I don’t think it was the model I was on, as it is giving the same error with SD 1.5. What is that about "-vidia-cudnn-cu11" being an invalid distribution?
From my understanding, when pip replaces a package, it stashes the old copy by renaming its folder, swapping the first letter of the name for a ~. When the installation completes, this stashed folder is removed. When the install crashes, the stashed folder isn’t removed and is left behind as a fake package that pip ignores (and warns about as an “invalid distribution”). It’s fine to leave it, but you can also navigate to
venv\Lib\site-packages
to remove it. (I had a “-rotobuf” warning myself :p)
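To check for and clean up those stashed leftovers yourself (a sketch - list first, and only delete folders whose names start with ~ once you have confirmed the real package is intact):
cd /d D:\repos\stable-diffusion-webui\venv\Lib\site-packages
rem list any stashed folders pip left behind, e.g. ~rotobuf under google\
dir /ad ~*
dir /ad google\~*
rem remove a confirmed leftover, matching the path from the error above
rmdir /s /q google\~rotobuf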
I can see that the packages our extension needs are installed, but cudnn is still installed. The extension won’t uninstall it because it only checks whether tensorrt is installed, and if it is, it won’t uninstall cudnn… might need a code change here. You can try to run the following in the venv of automatic1111:
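(The command itself is cut off above; judging from the rest of the thread, it is presumably the same uninstall quoted earlier:)
python -m pip uninstall -y nvidia-cudnn-cu11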
I have the auto installer working, and a video:
https://www.patreon.com/posts/86438018
https://youtu.be/eKnMVXVjVoU
I’ve tried everything beyond what the average person would, and I have a lot of technical knowledge, and this s**t doesn’t work. It just doesn’t work, period.
Yes, it will not work with ControlNet
You saved me after half a day of banging my head! I’m usually reading through these threads months/years after they’ve been archived, not while people are figuring shit out… so just know I really appreciate this!
That’s actually good to know, but apparently I got it installed now, in the portable version. I haven’t tried this in the other version I have installed. I ran activate.bat in venv\scripts\, then ran the pip upgrade command to version 23.3, then:
pip install nvidia-cudnn-cu11==8.9.4.25 --no-cache-dir
pip install --pre --extra-index-url https://pypi.nvidia.com/ tensorrt==9.0.1.post11.dev4 --no-cache-dir
pip uninstall -y nvidia-cudnn-cu11
Then I ran webui.bat and installed the extension from URL, used the restart button in the UI, and it started without errors this time.
I deleted the venv folder before I ran all this and the extension, waited for it all to download again, and ran the above commands.
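In order, that sequence is roughly the following (a sketch, assuming the default webui layout; run from the webui root):
rem rebuild the venv from scratch
rmdir /s /q venv
rem run webui.bat once so it re-downloads everything, then close it
webui.bat
rem activate the venv and upgrade pip (23.3 in the comment above)
venv\Scripts\activate.bat
python -m pip install --upgrade pip
rem cuDNN is needed to install TensorRT but must be removed afterwards
pip install nvidia-cudnn-cu11==8.9.4.25 --no-cache-dir
pip install --pre --extra-index-url https://pypi.nvidia.com/ tensorrt==9.0.1.post11.dev4 --no-cache-dir
pip uninstall -y nvidia-cudnn-cu11
rem finally run webui.bat again, install the extension from URL, and use the Restart button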
In webui root directory:
If I change the VAE before generating an image, it seems to work. BTW it’s not faster 😦
Export finished, and properly added to SD Unet dropdown. I can now use the TensorRT model, and it is indeed much faster on my 3060.
So it seems my errors were mostly caused by --medvram (which I still have not re-enabled), possibly conflicting with another extension (Photoshop plugin) using the --api. I had to manually uninstall cudnn via
python -m pip uninstall -y nvidia-cudnn-cu11
After all that, it seems to be working. I will try to re-enable the --medvram and --api command-line args and see if it still works.
Ok, I’m going to delete everything in the models/Unet-trt and models/Unet-onnx folders, reboot the webui server, and try exporting the default engine again. Hopefully this time the model.json file is created correctly.
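A sketch of that cleanup, from the webui root:
rem clear out the generated engines and ONNX exports (keeps the folders themselves)
del /q models\Unet-trt\*
del /q models\Unet-onnx\*
rem then restart the webui server and export the default engine again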
I also found that if I remove the models I converted, the extension disappears from the UI. Restoring those deleted files makes the extension appear again… wtf lol
Ok, so I removed medvram, and that does seem to be what was causing the problem with the tensors not all being on the same device. But now I get a bunch of other errors when trying to export the default engine:
I thought it might be the model I had selected, so I switched to the SD 1.5 model, and then got this error:
Yes, I was actually following the instructions just now, to see if “seems to be working” could be converted into “is working”. I can confirm it works flawlessly! I was able to export an engine and confirm that it increases my generation speed by ~40%!
In summary, for me the issue was that during the installation, somehow it didn’t uninstall cudnn, and cudnn was taking priority over the dlls in tensorrt. After removing cudnn manually, my problem was solved.
I shouldn’t have to install anything manually… Why are there separate cudnn libraries? They seem to be conflicting… Or did TensorRT fail to remove the package after installing the tensorrt wheel?