conda-smithy: BUG: conda-smithy v3.31 mismatches cuda_compiler & image, breaks zip_keys

Re-rendering https://github.com/conda-forge/arrow-cpp-feedstock/pull/1325 produces the following diff, which mismatches cuda_compiler with the docker image, and the builds then predictably fail.

--- a/.azure-pipelines/azure-pipelines-linux.yml
+++ b/.azure-pipelines/azure-pipelines-linux.yml
@@ -11,7 +11,7 @@ jobs:
       linux_64_cuda_compiler_version11.2:                          # CUDA
         CONFIG: linux_64_cuda_compiler_version11.2
         UPLOAD_PACKAGES: 'True'
-        DOCKER_IMAGE: quay.io/condaforge/linux-anvil-cuda:11.2     # CUDA
+        DOCKER_IMAGE: quay.io/condaforge/linux-anvil-cos7-x86_64   # !! no CUDA !!
       linux_64_cuda_compiler_versionNone:
         CONFIG: linux_64_cuda_compiler_versionNone
         UPLOAD_PACKAGES: 'True'
--- a/.ci_support/linux_64_cuda_compiler_version11.2.yaml
+++ b/.ci_support/linux_64_cuda_compiler_version11.2.yaml           # CUDA
@@ -7,15 +7,15 @@ bzip2:
 cuda_compiler:
-- nvcc                                                             # CUDA
+- None                                                             # !! no CUDA !!
 cuda_compiler_version:
 - '11.2'                                                           # CUDA
 cuda_compiler_version_min:
@@ -23,23 +23,23 @@ cuda_compiler_version_min:
 docker_image:
-- quay.io/condaforge/linux-anvil-cuda:11.2
+- quay.io/condaforge/linux-anvil-cos7-x86_64
 gflags:
--- a/.ci_support/linux_64_cuda_compiler_versionNone.yaml
+++ b/.ci_support/linux_64_cuda_compiler_versionNone.yaml           # no CUDA
@@ -9,13 +9,13 @@ c_compiler:
 cuda_compiler:
-- None                                                             # no CUDA
+- nvcc                                                             # !! CUDA !!
 cuda_compiler_version:
 - None                                                             # no CUDA
 cuda_compiler_version_min:

In fact, in 4 out of 4 re-renders of the arrow feedstock, at least two jobs get randomly mismatched, seemingly because the zip_keys are being scrambled. I'm not sure what's happening here, or whether #1815 introduced a bug or merely uncovered a pre-existing problem.
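
For context, here is a minimal Python sketch of what the zip_keys constraint is supposed to guarantee for these keys. The value lists are taken from the diff above; the authoritative grouping lives in the conda-forge pinning's zip_keys, so treat the exact key set as an assumption:

```python
from itertools import product

# Keys that are zipped together in the variant config (illustrative subset;
# the real grouping is the zip_keys entry in the conda-forge pinnings).
zipped = {
    "cuda_compiler":         ["None", "nvcc"],
    "cuda_compiler_version": ["None", "11.2"],
    "docker_image":          ["quay.io/condaforge/linux-anvil-cos7-x86_64",
                              "quay.io/condaforge/linux-anvil-cuda:11.2"],
}

# With zip_keys, the i-th value of every key belongs to the i-th variant,
# so "nvcc" can only ever pair with the cuda:11.2 image:
correct = [dict(zip(zipped, values)) for values in zip(*zipped.values())]

# If the zip constraint is lost, the keys combine independently (or in a
# shuffled order), which is exactly the nvcc + non-CUDA-image mismatch above:
broken = [dict(zip(zipped, values)) for values in product(*zipped.values())]

for variant in correct:
    print(variant)
```

Each .ci_support file in the diff corresponds to one such variant, so a correct re-render should only ever emit the two paired combinations, never the crossed ones.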

@mbargull @beckermr CC @conda-forge/core

Most upvoted comments

The rendering times for arrow-cpp-feedstock with its multiple outputs are… something. Rendering conda-forge/arrow-cpp-feedstock@fd8da32 took more than 10 minutes for me locally.

Yeah… Local rerenders take up to 25min for me now - not fun. 😑

I profiled this coarsely and quickly worked on some changes at the end of last month, but only got around to putting in https://github.com/conda/conda-build/issues/5224 today. (The conda-build=24.3 release process is starting now, so that change would only be available in >=24.5.) With those changes it is still very far from how quick it should be, but it should hopefully avoid >10 minute re-rendering times: re-rendering arrow-cpp-feedstock went down to about 3 minutes and ctng-compilers-feedstock to about 4 minutes for me. That is still very bad, IMO, but I had no idea how atrociously slow these things currently are. (Please do file bug reports for such things!)

FWIW, I tried your changes from gh-1851 on https://github.com/conda-forge/python-feedstock/pull/656/commits/73873d8de6b643f04ab08356d2da703dbcea552e (which failed with 3.30.4 but works with 3.31.0, i.e. the case that motivated gh-1815), and it still works fine for that case, too 👍.

The compilers feedstock runs for ~30 minutes on GHA, so a factor-of-4 speedup would be a big improvement no matter what. Thank you!

For anyone trying to reproduce this: PYTHONHASHSEED=4 triggers the error.
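
To see why fixing the hash seed makes this reproducible, here is a small, self-contained illustration (not the actual conda-smithy/conda-build code path): the iteration order of a set of strings depends on PYTHONHASHSEED, so any code that iterates over an unordered set of variant keys can produce a different ordering from run to run.

```python
import os
import subprocess
import sys

# Print the iteration order of a small set of key names under different seeds.
probe = "print(list({'cuda_compiler', 'cuda_compiler_version', 'docker_image'}))"

for seed in ("0", "4", "42"):
    env = dict(os.environ, PYTHONHASHSEED=seed)
    out = subprocess.run([sys.executable, "-c", probe],
                         env=env, capture_output=True, text=True)
    print(f"PYTHONHASHSEED={seed}: {out.stdout.strip()}")
```

Whether the orders actually differ for these particular strings and seeds is incidental; the point is that the output is seed-dependent, which is why pinning PYTHONHASHSEED can deterministically reproduce (or hide) the mismatch.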

Awesome that you reduced the rendering time; thanks for tackling this so quickly!
