Nuitka: Transformers raises "no sympy" after compiling
Before submitting an Issue, please review the Issue Guidelines.
- Please check whether the bug was already reported or fixed. Already mentioned here: https://github.com/Nuitka/Nuitka/issues/2228
I tried to build with torch and transformers; the compiled program seems to be missing sympy.
- Nuitka version, full Python version, flavor, OS, etc. as output by this command.
```
1.6
Commercial: None
Python: 3.10.2 (tags/v3.10.2:a58ebcc, Jan 17 2022, 14:12:15) [MSC v.1929 64 bit (AMD64)]
Flavor: CPython Official
Executable: C:\Users\Tensor\Desktop\aiengine.venv\Scripts\python.exe
OS: Windows
Arch: x86_64
WindowsRelease: 10
Version C compiler: C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.32.31326\bin\Hostx64\x64\cl.exe (cl 14.3).
```
- How did you install Nuitka and Python
Using a virtual environment, then installing with pip3.
- The specific PyPI names and versions
```
anyio==3.7.0
backoff==2.2.1
certifi==2022.12.7
click==8.1.3
clickhouse-connect==0.5.25
colorama==0.4.6
coloredlogs==15.0.1
duckdb==0.8.0
exceptiongroup==1.1.1
fastapi==0.95.2
filelock==3.9.0
flatbuffers==23.5.26
fsspec==2023.5.0
h11==0.14.0
hnswlib==0.7.0
httptools==0.5.0
huggingface-hub==0.14.1
humanfriendly==10.0
idna==3.4
Jinja2==3.1.2
lz4==4.3.2
MarkupSafe==2.1.2
monotonic==1.6
mpmath==1.2.1
networkx==3.0
Nuitka==1.6
numpy==1.24.1
onnxruntime==1.15.0
ordered-set==4.1.0
overrides==7.3.1
packaging==23.1
pandas==2.0.2
Pillow==9.3.0
posthog==3.0.1
protobuf==4.23.2
pydantic==1.10.8
pyreadline3==3.4.1
python-dateutil==2.8.2
python-dotenv==1.0.0
pytz==2023.3
PyYAML==6.0
regex==2023.5.5
requests==2.28.1
safetensors==0.3.1
six==1.16.0
sniffio==1.3.0
starlette==0.27.0
sympy==1.11.1
tokenizers==0.13.3
torch==2.0.0+cu117
torchaudio==2.0.1+cu117
torchvision==0.15.1+cu117
tqdm==4.65.0
transformers==4.29.2
typing_extensions==4.6.2
tzdata==2023.3
urllib3==1.26.13
uvicorn==0.22.0
watchfiles==0.19.0
websockets==11.0.3
zstandard==0.21.0
```
Here's the output after compiling with Hugging Face transformers and torch.
```
No sympy found
Traceback (most recent call last):
  File "C:\Users\Tensor\Desktop\aiengine\GPTQ-F~1\MAIN~1.DIS\main.py", line 4, in <module>
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "C:\Users\Tensor\Desktop\aiengine\GPTQ-F~1\MAIN~1.DIS\torch\__init__.py", line 1465, in <module torch>
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "C:\Users\Tensor\Desktop\aiengine\GPTQ-F~1\MAIN~1.DIS\torch\_meta_registrations.py", line 7, in <module torch._meta_registrations>
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "C:\Users\Tensor\Desktop\aiengine\GPTQ-F~1\MAIN~1.DIS\torch\_decomp\__init__.py", line 169, in <module torch._decomp>
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "C:\Users\Tensor\Desktop\aiengine\GPTQ-F~1\MAIN~1.DIS\torch\_decomp\decompositions.py", line 10, in <module torch._decomp.decompositions>
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "C:\Users\Tensor\Desktop\aiengine\GPTQ-F~1\MAIN~1.DIS\torch\_prims\__init__.py", line 33, in <module torch._prims>
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "C:\Users\Tensor\Desktop\aiengine\GPTQ-F~1\MAIN~1.DIS\torch\_subclasses\__init__.py", line 3, in <module torch._subclasses>
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "C:\Users\Tensor\Desktop\aiengine\GPTQ-F~1\MAIN~1.DIS\torch\_subclasses\fake_tensor.py", line 13, in <module torch._subclasses.fake_tensor>
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "C:\Users\Tensor\Desktop\aiengine\GPTQ-F~1\MAIN~1.DIS\torch\_guards.py", line 78, in <module torch._guards>
  File "C:\Users\Tensor\Desktop\aiengine\GPTQ-F~1\MAIN~1.DIS\torch\_guards.py", line 79, in ShapeGuard
NameError: name 'sympy' is not defined
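For context, a guess at why the error surfaces this way (based on how torch 2.0's `torch/_guards.py` guards its sympy import, not on anything confirmed in this report): the import failure is swallowed, so the missing module only shows up later, when a class-body annotation evaluates the name.

```python
# Minimal sketch of the failure mode, assuming torch guards the sympy import:
from typing import NamedTuple

try:
    import sympy
except ImportError:
    print("No sympy found")  # the warning printed just before the traceback

class ShapeGuard(NamedTuple):
    # Annotations in a class body are evaluated at class-creation time, so if
    # the guarded import failed above, this line raises
    # NameError: name 'sympy' is not defined
    expr: sympy.Expr
```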
Command to execute
```
python -m nuitka main.py --standalone --follow-imports --show-progress --show-scons --show-modules --include-package=setuptools --noinclude-pytest-mode=nofollow --noinclude-unittest-mode=nofollow --noinclude-dask-mode=nofollow --noinclude-numba-mode=nofollow --lto=no
```
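If Nuitka's anti-bloat rules are indeed what removed sympy, explicitly including the package should work around it; this is an untested assumption on my part:

```
python -m nuitka main.py --standalone --follow-imports --include-package=sympy --show-progress --show-scons --show-modules --include-package=setuptools --noinclude-pytest-mode=nofollow --noinclude-unittest-mode=nofollow --noinclude-dask-mode=nofollow --noinclude-numba-mode=nofollow --lto=no
```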
Code used
```python
from transformers import AutoTokenizer
import time
import torch
import torch.nn as nn

DEV = torch.device('cuda:0')
model_name = "EleutherAI/gpt-j-6b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
input_ids = tokenizer.encode("Hello world this is a test of the context and how it should work.", return_tensors="pt").to(DEV)
print(input_ids)
```
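Alternatively (same untested assumption), an explicit top-level import makes sympy a direct dependency of main.py, which `--follow-imports` should then bundle regardless of what the anti-bloat rules do to torch's own imports:

```python
import sympy  # noqa: F401 - imported only so Nuitka bundles the package

from transformers import AutoTokenizer
import torch
# ... rest of main.py unchanged
```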
About this issue
- State: closed
- Created a year ago
- Comments: 20 (9 by maintainers)
Since doctest will not work without unittest, it's excluded too. I recall making anti-bloat against sympy recently, specifically `sympy.testing`, but I guess this one might be worth it too. I usually have to leave it enabled due to `unittest.mock` having legit uses in many places. One day we might have something that will be able to handle that and not allow the others. And we will need an `ImportError`, or rather `ModuleNotFoundError`, that indicates why a used module was not included. @ArEnSc
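To illustrate the idea in that last sentence (purely a hypothetical sketch, not Nuitka's actual mechanism): a compile-time-excluded module could be backed by an import finder that raises a self-explanatory `ModuleNotFoundError`, instead of letting a bare `NameError` surface much later:

```python
# Hypothetical sketch, not actual Nuitka behavior: register a meta-path
# finder that explains why an excluded module cannot be imported.
import sys
from importlib.abc import MetaPathFinder

EXCLUDED = {
    "sympy": "removed by an anti-bloat rule at compile time",
}

class ExcludedModuleReporter(MetaPathFinder):
    def find_spec(self, fullname, path=None, target=None):
        top = fullname.split(".")[0]
        if top in EXCLUDED:
            raise ModuleNotFoundError(
                f"{fullname!r} was not included in the build: {EXCLUDED[top]}",
                name=fullname,
            )
        return None  # defer to the normal finders for everything else

sys.meta_path.insert(0, ExcludedModuleReporter())
```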
I can confirm that:
Reports should strive to make sure I do not need to ask questions anymore.