poetry: Can't install detectron2
- I am on the latest Poetry version.
- I have searched the issues of this repo and believe that this is not a duplicate.
- If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).
- OS version and name: Arch Linux
- Poetry version: 1.0.5
- Contents of your pyproject.toml file:
```toml
[tool.poetry]
name = "a"
version = "0.1.0"
description = ""
authors = ["a"]

[tool.poetry.dependencies]
python = "^3.6"
detectron2 = { git = "https://github.com/facebookresearch/detectron2.git" }

[tool.poetry.dev-dependencies]

[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"
```
Issue
Poetry can’t figure out the version of detectron2. I’ve tried to debug this myself but the error messages don’t tell me anything useful:
```
$ /tmp/poetry/bin/poetry install -vvv
Creating virtualenv a-1_qs6r2x-py3.8 in /home/x/.cache/pypoetry/virtualenvs
Using virtualenv: /home/x/.cache/pypoetry/virtualenvs/a-1_qs6r2x-py3.8
Updating dependencies
Resolving dependencies...
1: fact: a is 0.1.0
1: derived: a
1: fact: a depends on detectron2 (*)
1: selecting a (0.1.0)
1: derived: detectron2 (*)
1: Version solving took 3.842 seconds.
1: Tried 1 solutions.

[RuntimeError]
Unable to retrieve the package version for /tmp/pypoetry-git-detectron2683igqav

Traceback (most recent call last):
  File "/tmp/poetry/lib/poetry/_vendor/py3.8/clikit/console_application.py", line 131, in run
    status_code = command.handle(parsed_args, io)
  File "/tmp/poetry/lib/poetry/_vendor/py3.8/clikit/api/command/command.py", line 120, in handle
    status_code = self._do_handle(args, io)
  File "/tmp/poetry/lib/poetry/_vendor/py3.8/clikit/api/command/command.py", line 171, in _do_handle
    return getattr(handler, handler_method)(args, io, self)
  File "/tmp/poetry/lib/poetry/_vendor/py3.8/cleo/commands/command.py", line 92, in wrap_handle
    return self.handle()
  File "/tmp/poetry/lib/poetry/console/commands/install.py", line 63, in handle
    return_code = installer.run()
  File "/tmp/poetry/lib/poetry/installation/installer.py", line 74, in run
    self._do_install(local_repo)
  File "/tmp/poetry/lib/poetry/installation/installer.py", line 161, in _do_install
    ops = solver.solve(use_latest=self._whitelist)
  File "/tmp/poetry/lib/poetry/puzzle/solver.py", line 36, in solve
    packages, depths = self._solve(use_latest=use_latest)
  File "/tmp/poetry/lib/poetry/puzzle/solver.py", line 180, in _solve
    result = resolve_version(
  File "/tmp/poetry/lib/poetry/mixology/__init__.py", line 7, in resolve_version
    return solver.solve()
  File "/tmp/poetry/lib/poetry/mixology/version_solver.py", line 80, in solve
    next = self._choose_package_version()
  File "/tmp/poetry/lib/poetry/mixology/version_solver.py", line 355, in _choose_package_version
    packages = self._provider.search_for(dependency)
  File "/tmp/poetry/lib/poetry/puzzle/provider.py", line 130, in search_for
    packages = self.search_for_vcs(dependency)
  File "/tmp/poetry/lib/poetry/puzzle/provider.py", line 167, in search_for_vcs
    package = self.get_package_from_vcs(
  File "/tmp/poetry/lib/poetry/puzzle/provider.py", line 204, in get_package_from_vcs
    package = cls.get_package_from_directory(tmp_dir, name=name)
  File "/tmp/poetry/lib/poetry/puzzle/provider.py", line 343, in get_package_from_directory
    raise RuntimeError(
```
Is poetry unable to get dynamically generated versions?
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 11
- Comments: 30 (7 by maintainers)
Commits related to this issue
- Minimum viable pyproject.toml for PEP-518 compliance The new pyproject.toml specifies the minimum requirements necessary to build the project's wheel. This makes this project up to the PEP-518 standa... — committed to devcsrj/detectron2 by devcsrj 3 years ago
- Adds a pyproject.toml consistent with PEP-518 The missing pyproject.toml results in an inability to install detectron2 via poetry. fixes https://github.com/python-poetry/poetry/issues/2113 — committed to RamTank/detectron2 by jim80net a year ago
@malcolmgreaves for completeness, this is what you need to keep the legacy build.
@malcolmgreaves I just tried your workaround to integrate detectron2 with an existing project but still get errors… 😕

My pyproject.toml looks like this:

However, when I run `poetry install`, I get:

Any idea why that might be the case?
Wonderful! Thank you a ton 💯 💯 🤗
just using the executable python to run the setup script, like:

```shell
cd ../detectron2 && /root/.cache/pypoetry/virtualenvs/image-inpainting-WGCBfPov-py3.10/bin/python setup.py install
```
I managed to get it to run by specifying the following config in a detectron2 fork, using `poetry==1.2.0b2` (following this issue; it is important to mention that this file will not work on the latest non-beta release), and additionally adding `https://download.pytorch.org/whl` as an extra index in the global pip config file.

The last step is necessary, as PEP-518 specifies using isolated builds for building wheels, so the index specified in poetry won't be used. However, globally configured indexes will also be used by the newly isolated venv.
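Concretely, the extra index would go into pip's global configuration, which the isolated build environment picks up. A sketch (the path and whether this matches the poster's exact setup are assumptions):

```ini
# Sketch of a global pip config (~/.config/pip/pip.conf on Linux).
# The isolated PEP-518 build environment runs pip, which reads this
# file, so the extra index is visible during the wheel build even
# though poetry's own source configuration is not.
[global]
extra-index-url = https://download.pytorch.org/whl
```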
Some side notes:

- The build requires `gcc` and `g++` versions below `10.*`, so you might have to manually downgrade them.
- `venv` didn't manage to copy the isolated venv to a temp folder, failing the build before any detectron2-related code was executed. I suppose this is not related to this issue; I don't understand why it happens or whether it is specific to my setup.
- With `python = "3.8.*"` it will download wheels for a variety of python versions. You might want to ensure that the pip cache is working before you start installing it. I had it disabled for some reason, which made it frustrating when my internet crashed and I had to start downloading some wheels again.

This seems to no longer work - or at least not in the general case.
The issue that everyone seems to be missing - please correct me if I'm wrong about it - is that simply specifying `torch` or even `torch==1.10.2` in the `build-system` section of `detectron2` does not solve the true problem: the torch version you install - whatever that version might be - has to be compiled for the same CUDA version you have on your machine.

Since the `torch` available on PyPI is compiled by default with CUDA 10.2 (or at least newer versions are), this FORCES CUDA 10.2 onto your system - something the authors of `detectron2` have no chance of fixing. @ppwwyyxx

The problem runs even deeper. Even if `detectron2` is forked and the proper version of torch is required in the minimal `pyproject.toml`, there seems to be no way (^1) of forcing the build process to download and use a custom wheel - such as those available at https://download.pytorch.org/whl/cu113 - which would be required to build `detectron2` against a specific CUDA version…

^1: please confirm this, I'm new to setuptools and custom package building in general.
@lucinvitae as far as I can tell, the issue here is that the package build specification does not convey that it needs `torch` to successfully build the package. Let me know if this is not the case.

From poetry's perspective, we first need to build the bdist of the package if a compatible one is not already available on the configured indices. This leads to poetry attempting to build the package's wheel, which happens inside an ephemeral isolated build environment according to PEP 517. When creating this environment, the `build-system` requirements are installed. In this case, the build fails since the build scripts need `torch` available (https://github.com/python-poetry/poetry/issues/2113#issuecomment-711075790). Essentially, something similar to the following is what happens under the hood.

However, as @ppwwyyxx identified, there are other limitations that restrict the options you have at the moment - i.e. you cannot automatically discover the pre-built wheels provided by `detectron2`, nor can they distribute CUDA wheels on PyPI.

For instance, poetry's resolver today does not handle local tags well, nor does using link repositories work. This means that you cannot use the indices described in https://github.com/facebookresearch/detectron2/blob/main/INSTALL.md#install-pre-built-detectron2. However, you can use direct links to the wheels, but this assumes the same wheel can be used everywhere you install your package (this should be fine for cpu; alternatively, you can have various extras). This is being fixed (partially done).
That said, I do not think the python packaging ecosystem has a decent solution for gpu packages anyway - e.g. there is no standard way to determine if a cuda-enabled wheel can be used in the active environment.

Having the dependency metadata specify something like `{ ..., extra-build-requires = ["torch"] }` might be an option, but I am not sure if we should start doing this. On the one hand, this enables poetry users to more cleanly consume packages where similar issues exist (it won't solve the gpu issue); on the other hand, it is a workaround for the lack of standard metadata.

Another option is to allow for something like `{ ..., build-allow-system-site = true }`. In this case, system packages are allowed for package builds.

What poetry should do in the near future is support `--find-links` type sources in projects.

I still think that the `torch` dependency needs to be declared in the project's metadata (not in downstream packages) if it is required at build time, so that PEP-517 builds can work correctly. As for the gpu-specific versions, I am not sure where this needs to go. Ideally, there will be a packaging PEP that will help tools like poetry implement a standardised approach.
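The "local tags" referred to above are PEP 440 local version identifiers, which is how PyTorch encodes its CUDA variants. A minimal sketch (a hypothetical helper, not poetry code) of what a resolver actually sees:

```python
def split_local(version: str):
    """Split a PEP 440 version string into (public, local) parts.

    PyTorch uses the local part to encode the CUDA variant, e.g.
    "1.10.2+cu113" has public version "1.10.2" and local tag "cu113".
    A resolver can only compare these strings; nothing in the tag
    tells it whether the machine actually has that CUDA version.
    """
    public, _, local = version.partition("+")
    return public, local or None

print(split_local("1.10.2+cu113"))  # ('1.10.2', 'cu113')
print(split_local("1.10.2"))        # ('1.10.2', None)
```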
@joostmeulenbeld I have a workaround here – you can `poetry install` this as a `{ git = ... }` dependency: https://github.com/malcolmgreaves/detectron2

I am going to close this issue, as the original issue has been resolved and the current issue has to do with the lack of PEP 517 metadata in the package source.
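Assuming the linked fork carries the fix, the workaround is declared just like the original dependency, only pointing at the fork:

```toml
# Sketch: swap the upstream repo for the fork that adds build metadata.
[tool.poetry.dependencies]
detectron2 = { git = "https://github.com/malcolmgreaves/detectron2.git" }
```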
🎉 I have answered my own question – just adding `torch` to `requires` in the `build-system` section is all that's necessary to get `poetry` to understand how to build the `detectron2` wheel! 🎉

No worries. Wish it was a more helpful response. You can just send them a PR with a minimal `pyproject.toml`. In the interim, you can use a branch on your fork with the change.
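Putting the thread's resolution together, the minimal `pyproject.toml` added to detectron2 itself would look something like this (a sketch; the actual build may need further requirements, e.g. a torch version pin matching your CUDA):

```toml
# Sketch of a PEP-518 build-system table for detectron2's own repository:
# torch is listed so it gets installed into the isolated build environment
# before setup.py runs.
[build-system]
requires = ["setuptools", "wheel", "torch"]
build-backend = "setuptools.build_meta"
```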