vision: torchvision conda packaging fails to resolve when opencv constraint is present

๐Ÿ› Describe the bug

This seems serious enough that we should figure it out and fix it by the next release, or ASAP.

Here's how the bug manifests. It's a very common scenario: someone installs torchvision and opencv in the same conda environment.

In that scenario, running the following command:

conda create -n tvtest python=3.7 "torchvision>=0.10" "cudatoolkit>=11" opencv -c pytorch -c conda-forge

This results in torchvision resolving to the CPU-only torchvision package from conda-forge, pytorch resolving from the pytorch channel, and, worst of all, old versions of pytorch and pytorch-cpu being installed.
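For anyone reproducing this, a quick way to see what the solver picks (just a sketch; the grep pattern below is only illustrative) is to do a dry-run solve and, once the environment exists, list the resolved builds:

conda create -n tvtest python=3.7 "torchvision>=0.10" "cudatoolkit>=11" opencv -c pytorch -c conda-forge --dry-run
conda list -n tvtest | grep -E "pytorch|torchvision|cudatoolkit"

The build strings in the output (typically cuda* vs cpu*) make the mis-resolution visible at a glance.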

Tested with the latest Anaconda: conda --version reports conda 4.10.3.

Removing opencv from the above command results in torchvision and pytorch correctly resolving to their CUDA binaries, and pytorch-cpu is no longer installed.
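A possible stopgap until this is fixed (untested here, and it assumes the pytorch channel's CUDA builds carry "cuda" in their build string, which they typically do) is to pin pytorch's build string explicitly so the solver cannot swap in the CPU package:

conda create -n tvtest python=3.7 "torchvision>=0.10" "pytorch=*=*cuda*" "cudatoolkit>=11" opencv -c pytorch -c conda-forge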

cc: @malfet

Versions

Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31

Python version: 3.9.5 (default, Jun  4 2021, 12:28:51)  [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: 11.1.105
GPU models and configuration:
GPU 0: GeForce GTX TITAN X
GPU 1: GeForce GTX TITAN X

Nvidia driver version: 460.73.01
cuDNN version: Probably one of the following:
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] Could not collect
[conda] # packages in environment at /home/soumith/miniconda3:

Most upvoted comments

We just added CI testing for these in https://github.com/pytorch/pytorch-integration-testing/pull/21 yesterday.

@datumbox just FYI, I confirmed that this is not a dependency issue but a packaging issue, i.e. our packaging seems to be at fault.
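For whoever picks this up: one way to compare the constraints baked into the published packages themselves (hedged; the output format varies with the conda version) is to query each channel with conda search --info, e.g.:

conda search "torchvision>=0.10" -c pytorch --info
conda search "torchvision>=0.10" -c conda-forge --info

That prints each build's dependency pins, which should make it easier to spot where the pytorch/cudatoolkit constraints diverge between the two channels.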