LocalAI: compilation fails for "examples/grpc-server"

LocalAI version:

45370c212bbc379f65f2c77560958acc24877fba

Environment, CPU architecture, OS, and Version:

Linux fedora 6.5.6-300.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Oct 6 19:57:21 UTC 2023 x86_64 GNU/Linux

Describe the bug

After the CUDA and Docker failures described in https://github.com/go-skynet/LocalAI/issues/1178,

I tried to compile and run LocalAI directly on the host: make BUILD_TYPE=cublas build

To Reproduce

make BUILD_TYPE=cublas build

Expected behavior

Successful build, binaries running with CUDA support

Logs

make -C go-llama BUILD_TYPE=cublas libbinding.a
make[1]: Entering directory '/home/sgw/LocalAI/go-llama'
I llama.cpp build info: 
I UNAME_S:  Linux
I UNAME_P:  unknown
I UNAME_M:  x86_64
I CFLAGS:   -I./llama.cpp -I. -O3 -DNDEBUG -std=c11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wno-unused-function -pthread -march=native -mtune=native
I CXXFLAGS: -I./llama.cpp -I. -I./llama.cpp/common -I./common -O3 -DNDEBUG -std=c++11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -pthread
I CGO_LDFLAGS:  
I LDFLAGS:  
I BUILD_TYPE:  cublas
I CMAKE_ARGS:  -DLLAMA_AVX512=OFF -DLLAMA_CUBLAS=ON
I EXTRA_TARGETS:  llama.cpp/ggml-cuda.o
I CC:       cc (GCC) 13.2.1 20230918 (Red Hat 13.2.1-3)
I CXX:      g++ (GCC) 13.2.1 20230918 (Red Hat 13.2.1-3)

make[1]: 'libbinding.a' is up to date.
make[1]: Leaving directory '/home/sgw/LocalAI/go-llama'
LLAMA_VERSION=24ba3d829e31a6eda3fa1723f692608c2fa3adda make -C backend/cpp/llama grpc-server
make[1]: Entering directory '/home/sgw/LocalAI/backend/cpp/llama'
cd llama.cpp && mkdir -p build && cd build && cmake .. -DLLAMA_AVX512=OFF -DLLAMA_CUBLAS=ON && cmake --build . --config Release
-- cuBLAS found
-- Using CUDA architectures: 52;61;70
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
CMake Error at examples/grpc-server/CMakeLists.txt:7 (find_package):
  Could not find a package configuration file provided by "absl" with any of
  the following names:

    abslConfig.cmake
    absl-config.cmake

  Add the installation prefix of "absl" to CMAKE_PREFIX_PATH or set
  "absl_DIR" to a directory containing one of the above files.  If "absl"
  provides a separate development package or SDK, be sure it has been
  installed.


-- Configuring incomplete, errors occurred!
make[1]: *** [Makefile:43: grpc-server] Error 1
make[1]: Leaving directory '/home/sgw/LocalAI/backend/cpp/llama'
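The CMake error means find_package(absl) cannot locate the C++ Abseil (abseil-cpp) package configuration files, not that gcc is at fault. A possible workaround, sketched here with illustrative paths (the install prefix and the assumption that no system Abseil package is available are mine), is to build Abseil locally and make its prefix visible to CMake via the CMAKE_PREFIX_PATH environment variable, which CMake consults when searching for package configs:

```shell
# Sketch only: build the C++ Abseil library locally so that CMake can
# find abslConfig.cmake. Prefix and options are illustrative.
git clone --depth 1 https://github.com/abseil/abseil-cpp.git
cmake -S abseil-cpp -B abseil-cpp/build \
      -DCMAKE_INSTALL_PREFIX="$HOME/.local/absl" \
      -DCMAKE_CXX_STANDARD=17 \
      -DABSL_PROPAGATE_CXX_STD=ON
cmake --build abseil-cpp/build --target install

# Re-run the LocalAI build with the local prefix on CMake's search path:
CMAKE_PREFIX_PATH="$HOME/.local/absl" make BUILD_TYPE=cublas build
```

This only helps if the failing cmake invocation inherits the environment; whether other gRPC dependencies are also missing at that point is a separate question.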

Additional context

I also tried CMAKE_ARGS="-DLLAMA_AVX512=OFF" make BUILD_TYPE=cublas build because my CPU doesn’t support AVX512.

Maybe interesting:

$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/13/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-redhat-linux
Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,m2,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-gcc-major-version-only --enable-libstdcxx-backtrace --with-libstdcxx-zoneinfo=/usr/share/zoneinfo --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --with-isl=/builddir/build/BUILD/gcc-13.2.1-20230918/obj-x86_64-redhat-linux/isl-install --enable-offload-targets=nvptx-none --without-cuda-driver --enable-offload-defaulted --enable-gnu-indirect-function --enable-cet --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux --with-build-config=bootstrap-lto --enable-link-serialization=1
Thread model: posix
Supported LTO compression algorithms: zlib zstd
gcc version 13.2.1 20230918 (Red Hat 13.2.1-3) (GCC)

But the error message suggests that something is missing rather than that it's the wrong gcc.

EDIT: I searched for absl, found and installed a package (sudo dnf install python3-absl-py.noarch) … it doesn't help.
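That package won't help: python3-absl-py is Abseil for Python, while find_package(absl) in CMake looks for the C++ abseil-cpp development files. On Fedora the corresponding devel package should be abseil-cpp-devel (package name assumed; verify with dnf search before relying on it):

```shell
# python3-absl-py is the Python Abseil library and has no CMake config.
# The C++ development package (name assumed, check with `dnf search abseil`)
# is what provides abslConfig.cmake:
sudo dnf install abseil-cpp-devel

# Afterwards the package config should be discoverable, typically under
# /usr/lib64/cmake/absl/ on Fedora x86_64.
```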

About this issue

  • State: open
  • Created 8 months ago
  • Comments: 33 (6 by maintainers)

Most upvoted comments

The issue was fixed. With the code in the master branch, it is possible to build gRPC locally by running the command: BUILD_GRPC_FOR_BACKEND_LLAMA=ON make build

I think it is possible to close the issue.
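For completeness, the fixed invocation might look like this on the reporter's setup. My understanding (an assumption, not confirmed in this thread) is that BUILD_GRPC_FOR_BACKEND_LLAMA=ON makes the build compile gRPC, which bundles its own Abseil, for the llama backend instead of requiring system-wide absl/grpc packages; the BUILD_TYPE=cublas part is carried over from the reporter's original command:

```shell
# Update to a master branch that contains the fix, then build with the
# flag that compiles gRPC (and its vendored Abseil) locally:
git pull origin master
BUILD_GRPC_FOR_BACKEND_LLAMA=ON make BUILD_TYPE=cublas build
```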

@nbollman It’s not just you - I’m also having the same problem attempting to compile the image with documented defaults.

No luck with the apt install libabsl-dev command; I think the GPU installation instructions are borked for the moment.

I also got this bug following the same https://localai.io/howtos/easy-setup-docker-gpu/ … unfortunately I didn't find this 'apt install libabsl-dev' tip and am in the process of a new pull/build; I'll give it a try if the issue persists. Could the package be missing from the Quay build?