mamba: Mamba does not respect PyTorch's cpuonly

To install PyTorch without CUDA, the official docs (see https://pytorch.org/) recommend adding a special cpuonly package to the install spec: that is, something like conda create -n test -c pytorch python=3 pytorch=1.5 cpuonly.
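As a quick sanity check (not part of the original report, just a common way to confirm which build ended up in the environment), you can inspect the resolved build string and the CUDA version PyTorch was compiled against; torch.version.cuda is None for CPU-only builds:

$ conda list pytorch    # build string should contain "cpu", not "cuda"
$ python -c "import torch; print(torch.version.cuda)"    # expected: None for a CPU-only build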

But with Mamba, this creates an environment that contains both cpuonly and cudatoolkit (and installs a PyTorch build that depends on CUDA):

$ mamba --version
mamba 0.3.6
conda 4.8.3
$ mamba create -n test -c pytorch python=3 pytorch=1.5 cpuonly                                                                             
                  __    __    __    __
                 /  \  /  \  /  \  /  \
                /    \/    \/    \/    \
███████████████/  /██/  /██/  /██/  /████████████████████████
              /  / \   / \   / \   / \  \____
             /  /   \_/   \_/   \_/   \    o \__,
            / _/                       \_____/  `
            |/
        ███╗   ███╗ █████╗ ███╗   ███╗██████╗  █████╗
        ████╗ ████║██╔══██╗████╗ ████║██╔══██╗██╔══██╗
        ██╔████╔██║███████║██╔████╔██║██████╔╝███████║
        ██║╚██╔╝██║██╔══██║██║╚██╔╝██║██╔══██╗██╔══██║
        ██║ ╚═╝ ██║██║  ██║██║ ╚═╝ ██║██████╔╝██║  ██║
        ╚═╝     ╚═╝╚═╝  ╚═╝╚═╝     ╚═╝╚═════╝ ╚═╝  ╚═╝

        mamba (0.3.6) supported by @QuantStack

        GitHub:  https://github.com/QuantStack/mamba
        Twitter: https://twitter.com/QuantStack

█████████████████████████████████████████████████████████████

conda-forge/linux-64     Using cache
conda-forge/noarch       Using cache
pkgs/main/linux-64       [====================] (00m:00s) No change
pkgs/main/noarch         [====================] (00m:00s) No change
pkgs/r/linux-64          [====================] (00m:00s) No change
pkgs/r/noarch            [====================] (00m:00s) No change
pytorch/linux-64         [====================] (00m:00s) No change
pytorch/noarch           [====================] (00m:00s) No change

Looking for: ['python=3', 'pytorch=1.5', 'cpuonly']

Transaction

  Prefix: /opt/anaconda/envs/test

  Updating specs:

   - python==3
   - pytorch==1.5
   - cpuonly


  Package              Version  Build                           Channel                    Size
─────────────────────────────────────────────────────────────────────────────────────────────────
  Install:
─────────────────────────────────────────────────────────────────────────────────────────────────

  _libgcc_mutex            0.1  conda_forge                     conda-forge/linux-64     Cached
  _openmp_mutex            4.5  0_gnu                           conda-forge/linux-64     435 KB
  blas                    2.15  mkl                             conda-forge/linux-64      10 KB
  ca-certificates   2020.4.5.2  hecda079_0                      conda-forge/linux-64     Cached
  certifi           2020.4.5.2  py38h32f6830_0                  conda-forge/linux-64     152 KB
  cpuonly                  1.0  0                               pytorch/noarch             2 KB
  cudatoolkit          10.2.89  hfd86e86_1                      pkgs/main/linux-64       365 MB
  intel-openmp          2020.1  217                             pkgs/main/linux-64       780 KB
  ld_impl_linux-64        2.34  h53a641e_5                      conda-forge/linux-64     Cached
  libblas                3.8.0  15_mkl                          conda-forge/linux-64     Cached
  libcblas               3.8.0  15_mkl                          conda-forge/linux-64      10 KB
  libffi                 3.2.1  he1b5a44_1007                   conda-forge/linux-64     Cached
  libgcc-ng              9.2.0  h24d8f2e_2                      conda-forge/linux-64     Cached
  libgfortran-ng         7.5.0  hdf63c60_6                      conda-forge/linux-64     Cached
  libgomp                9.2.0  h24d8f2e_2                      conda-forge/linux-64     Cached
  liblapack              3.8.0  15_mkl                          conda-forge/linux-64      10 KB
  liblapacke             3.8.0  15_mkl                          conda-forge/linux-64      10 KB
  libstdcxx-ng           9.2.0  hdf63c60_2                      conda-forge/linux-64     Cached
  mkl                   2020.1  217                             pkgs/main/linux-64       129 MB
  ncurses                  6.1  hf484d3e_1002                   conda-forge/linux-64     Cached
  ninja                 1.10.0  hc9558a2_0                      conda-forge/linux-64     Cached
  numpy                 1.18.5  py38h8854b6b_0                  conda-forge/linux-64       5 MB
  openssl               1.1.1g  h516909a_0                      conda-forge/linux-64     Cached
  pip                   20.1.1  py_1                            conda-forge/noarch       Cached
  python                 3.8.3  cpython_he5300dc_0              conda-forge/linux-64      71 MB
  python_abi               3.8  1_cp38                          conda-forge/linux-64       4 KB
  pytorch                1.5.0  py3.8_cuda10.2.89_cudnn7.6.5_0  pytorch/linux-64         425 MB
  readline                 8.0  hf8c457e_0                      conda-forge/linux-64     Cached
  setuptools            47.3.1  py38h32f6830_0                  conda-forge/linux-64     637 KB
  sqlite                3.30.1  hcee41ef_0                      conda-forge/linux-64     Cached
  tk                    8.6.10  hed695b0_0                      conda-forge/linux-64     Cached
  wheel                 0.34.2  py38_0                          conda-forge/linux-64      43 KB
  xz                     5.2.5  h516909a_0                      conda-forge/linux-64     Cached
  zlib                  1.2.11  h516909a_1006                   conda-forge/linux-64     Cached

I’m on Fedora 31 with the latest conda and mamba versions.
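For comparison, the same spec can be dry-run through conda's own solver; assuming conda honors cpuonly as documented, the resulting transaction should not contain cudatoolkit and the pytorch build string should contain cpu rather than cuda:

$ conda create -n test -c pytorch python=3 pytorch=1.5 cpuonly --dry-run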

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 2
  • Comments: 22 (13 by maintainers)

Most upvoted comments

Maybe we can convince the pytorch maintainers to switch?

I think that would be the most reasonable approach, since this is deprecated in conda.

Ah, sorry, the proper syntax is

CONDA_SUBDIR=linux-64 mamba create -n test "pytorch=*=*cuda*" -c pytorch -c nvidia --dry-run

So use "pytorch=*=*cuda*" to properly match on the build string.
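The same build-string trick also works in the other direction as a workaround for the original report; something along these lines (a sketch, assuming the CPU builds on the pytorch channel carry cpu in their build strings) should resolve a CPU-only PyTorch without pulling in cudatoolkit:

CONDA_SUBDIR=linux-64 mamba create -n test "pytorch=*=*cpu*" -c pytorch --dry-run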

Sorry to be a bother, but I have the opposite problem: I can’t for the life of me get PyTorch’s CUDA build installed with mamba 😕 Any ideas?

Which channel are you using, pytorch, pytorch-nightly or conda-forge? Is it the case that mamba install -c pytorch -c nvidia pytorch doesn’t install the CUDA build of PyTorch for you? Could you try running the command in a fresh environment or restricting the build with mamba install -c pytorch -c nvidia "pytorch=*=*cuda*"?

I worked around it by explicitly pinning PyTorch and friends to their CPU build strings, e.g. from my Dockerfile:

ARG PYTORCH_VERSION=1.8.0
ARG TORCHVISION_VERSION=0.9.0
...
...
...
RUN mamba install --quiet --yes -c conda-forge \
   "pytorch==${PYTORCH_VERSION}=cpu_py38hd248515_1" \
   "torchvision==${TORCHVISION_VERSION}=py38h89b28b9_0_cpu"