numba: cuda stubs raising mypy SyntaxError

  • I have tried using the latest released version of Numba (the most recent is visible in the change log: https://github.com/numba/numba/blob/main/CHANGE_LOG).
  • I have included a self-contained code sample to reproduce the problem, i.e. it’s possible to run as `python bug.py`.

We’re seeing:

    import cudf
/conda/envs/rapids/lib/python3.7/site-packages/cudf/__init__.py:4: in <module>
    validate_setup()
/conda/envs/rapids/lib/python3.7/site-packages/cudf/utils/gpu_utils.py:18: in validate_setup
    from rmm._cuda.gpu import (
/conda/envs/rapids/lib/python3.7/site-packages/rmm/__init__.py:16: in <module>
    from rmm import mr
/conda/envs/rapids/lib/python3.7/site-packages/rmm/mr.py:2: in <module>
    from rmm._lib.memory_resource import (
/conda/envs/rapids/lib/python3.7/site-packages/rmm/_lib/__init__.py:3: in <module>
    from .device_buffer import DeviceBuffer
rmm/_lib/device_buffer.pyx:1: in init rmm._lib.device_buffer
    ???
rmm/_cuda/stream.pyx:26: in init rmm._cuda.stream
    ???
/conda/envs/rapids/lib/python3.7/site-packages/numba/cuda/__init__.py:7: in <module>
    from .device_init import *
/conda/envs/rapids/lib/python3.7/site-packages/numba/cuda/device_init.py:2: in <module>
    from .stubs import (threadIdx, blockIdx, blockDim, gridDim, laneid,
E     File "/conda/envs/rapids/lib/python3.7/site-packages/numba/cuda/stubs.py", line 449
E       """  # noqa: W605
E                      ^
E   SyntaxError: invalid escape sequence \|

This seemingly corresponds to the `\|` at https://github.com/numba/numba/blob/468647dddde27ee8af124c97dfcd20c35c4a2bc6/numba/cuda/stubs.py#L468
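The failure mode can be reproduced outside numba: a non-raw docstring containing `\|` compiles with only a warning under normal settings, but becomes a hard error once invalid-escape warnings are escalated, which is effectively how mypy's parser treats it. A minimal sketch, using hypothetical stand-in stub sources (the real file is numba/cuda/stubs.py):

```python
import warnings

# Hypothetical stand-ins for the stub docstring.
BAD_STUB = 'doc = """matches a \\| b"""'    # compiled source contains the escape \|
GOOD_STUB = 'doc = r"""matches a \\| b"""'  # raw string: \| is two literal characters

def compiles_strictly(src: str) -> bool:
    """Compile `src` with invalid-escape warnings escalated to errors."""
    with warnings.catch_warnings():
        warnings.simplefilter("error")
        try:
            compile(src, "<stub>", "exec")
            return True
        # Depending on the Python version, the escalated warning surfaces as
        # SyntaxError, DeprecationWarning (<=3.11), or SyntaxWarning (3.12+).
        except (SyntaxError, DeprecationWarning, SyntaxWarning):
            return False

print(compiles_strictly(BAD_STUB))   # the \| escape is rejected
print(compiles_strictly(GOOD_STUB))  # raw docstring compiles cleanly
```

This also suggests why prefixing the docstring with `r` (as done for the 0.57 fix mentioned below in the thread) makes the warning, and hence the strict-mode SyntaxError, go away.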

Our mypy config is:

    [mypy-numba.*]
    ignore_missing_imports = True
    ignore_errors = True
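Note that `ignore_errors` only suppresses errors after a module has been parsed, so it cannot help when the parse itself fails. One possible workaround (an assumption on my part, not a confirmed fix from this thread) is to stop mypy from reading numba's sources at all:

```ini
; hypothetical mypy config fragment: skip numba entirely instead of
; parsing it and suppressing the resulting errors
[mypy-numba.*]
ignore_missing_imports = True
follow_imports = skip
```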

Versions:

+ python --version
Python 3.7.10
+ mypy --version
mypy 0.950 (compiled: yes)

About this issue

  • State: closed
  • Created 2 years ago
  • Comments: 18 (8 by maintainers)

Most upvoted comments

OK. It should be fixed in 0.57. I’m not clear what action we can take here; do you have a workaround?