llama-cpp-python: Problems when I try to use this inside the default Python 3.10 Docker container

When I try to install and use this package via a requirements file in the default Python 3.10 container, I get the following error when importing the module: Failed to load shared library '/usr/local/lib/python3.10/site-packages/llama_cpp/libllama.so': /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version 'GLIBCXX_3.4.29' not found (required by /usr/local/lib/python3.10/site-packages/llama_cpp/libllama.so)
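This error means the container's libstdc++ is older than the one libllama.so was compiled against. A quick way to check which GLIBCXX symbol versions the container's libstdc++ actually provides is a sketch like the following (the library path is taken from the error message above and may differ on other base images):

```python
import re
from pathlib import Path

def glibcxx_versions(library_bytes: bytes) -> list[str]:
    """Extract the GLIBCXX_x.y[.z] version tags embedded in a libstdc++ binary."""
    tags = set(re.findall(rb"GLIBCXX_\d+\.\d+(?:\.\d+)?", library_bytes))
    return sorted(tag.decode() for tag in tags)

# Debian/Ubuntu path from the error message; adjust for other distributions.
lib = Path("/usr/lib/x86_64-linux-gnu/libstdc++.so.6")
if lib.exists():
    print(glibcxx_versions(lib.read_bytes()))
```

If 'GLIBCXX_3.4.29' is missing from the output, the fix is a newer base image (or rebuilding the wheel against the container's toolchain) rather than any change to the Python code.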

Am I doing something wrong, or am I just missing some dependencies?

About this issue

  • State: closed
  • Created a year ago
  • Comments: 28 (6 by maintainers)

Most upvoted comments

@gjmulder Alright, that's kind of my bad: the lib folder is created in a GitHub Action and contains the llama.so binary I added above. I have now also added a lib folder with a llama.so generated this way into the repo for testing purposes.

There is no need to build or recursively check out the repo. To reproduce the issue I'm experiencing, just clone the fork and run a docker build and run against the Dockerfile. The build should complete without any errors, and when trying to run the container the 'GLIBCXX_3.4.29' not found error should occur.

The build process of the llama.so binary is described in this workflow file.

@gjmulder Alright, I had another look: setting CXXFLAGS or CFLAGS is not possible, as they are reset in the Makefile. But if I use cmake I can simply pass a -D LLAMA_AVX512=OFF flag to disable the AVX512 instructions. I will probably use this to build llama.cpp manually in a workflow and then copy the resulting libllama.so into my Docker containers. This way I can easily create separate AVX512 and AVX2 containers.
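A small helper can map the host's CPU flags to the cmake toggles to pass. Only -D LLAMA_AVX512=OFF is confirmed above; the other option names (LLAMA_AVX, LLAMA_AVX2, LLAMA_FMA, LLAMA_F16C) are assumed to follow the same llama.cpp naming convention, so treat this as a sketch:

```python
def cmake_flags(cpu_flags: set[str]) -> list[str]:
    """Turn a set of /proc/cpuinfo feature flags into -D LLAMA_*=ON/OFF
    cmake arguments. Option names are assumed from llama.cpp's CMakeLists."""
    wanted = {
        "LLAMA_AVX": "avx",
        "LLAMA_AVX2": "avx2",
        "LLAMA_AVX512": "avx512f",
        "LLAMA_FMA": "fma",
        "LLAMA_F16C": "f16c",
    }
    return [f"-DLLAMA_{''}{opt.removeprefix('LLAMA_')}={'ON' if flag in cpu_flags else 'OFF'}"
            for opt, flag in wanted.items()]

# Building the AVX2-only image would then use something like:
# cmake . $(python pick_flags.py)  # pick_flags.py is a hypothetical wrapper
print(cmake_flags({"avx", "avx2", "fma", "f16c"}))
```

Printing the flags for a host with avx/avx2/fma/f16c but no avx512f yields -DLLAMA_AVX512=OFF alongside ON for the rest, matching the two-container (AVX512 and AVX2) split described above.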

@gjmulder Well, the images are built on a GitHub Actions runner, which probably uses a virtualized Intel CPU. I then downloaded them and tried to run them on my AMD systems, which leads to the errors. The thing that confuses me is that all the systems I used support every CPU feature listed in the cmake file, namely avx, avx2, fma, and f16c. I don't quite get why an image built on one of these systems shouldn't work when moved to another system supporting the exact same instruction sets.
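One way to compare the build machine and the target machine directly is to dump the relevant feature flags from /proc/cpuinfo on both and diff the output; if the runner's CPU advertises something extra (e.g. avx512f) that the compiler used via a native-arch build, the binary can crash on a host that lacks it even though avx/avx2/fma/f16c match. A minimal sketch, assuming a Linux /proc/cpuinfo layout:

```python
from pathlib import Path

def host_cpu_flags(cpuinfo_text: str) -> set[str]:
    """Parse the first 'flags' line of /proc/cpuinfo into a set of features."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

# Features discussed in this thread, plus avx512f as the suspected difference.
interesting = {"avx", "avx2", "avx512f", "fma", "f16c"}

cpuinfo = Path("/proc/cpuinfo")
if cpuinfo.exists():
    print(sorted(interesting & host_cpu_flags(cpuinfo.read_text())))
```

Running this inside the container on the GitHub runner and on the AMD box would show whether the instruction sets really are "the exact same", or whether the runner exposes extras that leak into the build.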