tensorflow: Linking an Android library with TFLite GPU using CMake causes undefined symbol errors

Issue type

Build/Install

Have you reproduced the bug with TensorFlow Nightly?

No

Source

source

TensorFlow version

2.13

Custom code

Yes

OS platform and distribution

Linux 6.3.1, EndeavourOS

Mobile device

No response

Python version

No response

Bazel version

No response

GCC/compiler version

clang version 14.0.7

CUDA/cuDNN version

No response

GPU model and memory

No response

Current behavior?

Linking an Android library against libtensorflow-lite.a built with CMake and the GPU delegate enabled (TFLITE_ENABLE_GPU=ON) results in undefined symbol errors at link time.

Standalone code to reproduce the issue

Please find a minimal test case here.
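
While the linked test case is not inlined here, the failing setup boils down to a consumer CMake project that adds the TFLite source tree and links an Android library against it with the GPU delegate enabled. The sketch below is an assumption about that shape, not the actual test case; the project name, target name, source file, and checkout path are illustrative:

# Hypothetical minimal consumer CMakeLists.txt (illustrative names and paths).
# Configured with the NDK toolchain file and -DTFLITE_ENABLE_GPU=ON.
cmake_minimum_required(VERSION 3.16)
project(tflite_gpu_link_repro CXX)

# Assumes a TensorFlow checkout next to this file.
add_subdirectory(tensorflow/tensorflow/lite tensorflow-lite EXCLUDE_FROM_ALL)

add_library(gpu SHARED gpu.cc)
target_link_libraries(gpu tensorflow-lite)   # linking fails with the errors below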

Relevant log output

ld: error: undefined symbol: tflite::delegates::BackendAsyncKernelInterface::BackendAsyncKernelInterface()
>>> referenced by delegate.cc:705 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:705)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::CreateAsyncRegistration()::$_3::__invoke(TfLiteContext*, char const*, unsigned long)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> did you mean: tflite::delegates::BackendAsyncKernelInterface::~BackendAsyncKernelInterface()
>>> defined in: tensorflow/tensorflow/lite/libtensorflow-lite.a(delegate.cc.o)

ld: error: undefined symbol: kTfLiteSyncTypeNoSyncObj
>>> referenced by string.h:61 (/opt/android-ndk/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/bits/fortify/string.h:61)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::CreateAsyncRegistration()::$_3::__invoke(TfLiteContext*, char const*, unsigned long)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by string.h:61 (/opt/android-ndk/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/bits/fortify/string.h:61)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::CreateAsyncRegistration()::$_3::__invoke(TfLiteContext*, char const*, unsigned long)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: TfLiteAttributeMapIsBufferAttributeMap
>>> referenced by delegate.cc:1058 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1058)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:908 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:908)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:909 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:909)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced 1 more times

ld: error: undefined symbol: tflite::delegates::utils::ReadBufferAttrs(TfLiteAttributeMap const*)
>>> referenced by delegate.cc:1061 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1061)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:925 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:925)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: TfLiteBackendBufferGetPtr
>>> referenced by delegate.cc:1087 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1087)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: AHardwareBuffer_acquire
>>> referenced by delegate.cc:787 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:787)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: AHardwareBuffer_describe
>>> referenced by delegate.cc:803 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:803)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:803 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:803)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::$_10::operator()(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::LockedAHWBs*, std::__ndk1::vector<long, std::__ndk1::allocator<long> > const&, absl::lts_20230125::Status (tflite::gpu::InferenceRunner::*)(int, std::__ndk1::variant<std::__ndk1::monostate, tflite::gpu::OpenGlBuffer, tflite::gpu::OpenGlTexture, tflite::gpu::CpuMemory, tflite::gpu::OpenClBuffer, tflite::gpu::OpenClTexture, tflite::gpu::VulkanBuffer, tflite::gpu::VulkanTexture>)) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: AHardwareBuffer_release
>>> referenced by delegate.cc:795 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:795)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:795 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:795)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:795 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:795)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Acquire(AHardwareBuffer*)::'lambda'(AHardwareBuffer*)::__invoke(AHardwareBuffer*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: tflite::delegates::utils::WriteBufferAttrs(tflite::delegates::utils::BufferAttributes const&, TfLiteAttributeMap*)
>>> referenced by delegate.cc:927 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:927)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:927 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:927)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:927 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:927)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced 2 more times

ld: error: undefined symbol: TfLiteAttributeMapIsSyncAttributeMap
>>> referenced by delegate.cc:933 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:933)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:934 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:934)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:941 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:941)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced 1 more times

ld: error: undefined symbol: tflite::delegates::utils::ReadSyncAttrs(TfLiteAttributeMap const*)
>>> referenced by delegate.cc:950 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:950)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:983 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:983)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::SetAttributes(TfLiteOpaqueContext*, TfLiteOpaqueNode*, int, TfLiteAttributeMap const*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: tflite::delegates::utils::WriteSyncAttrs(tflite::delegates::utils::SyncAttributes const&, TfLiteAttributeMap*)
>>> referenced by delegate.cc:952 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:952)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:954 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:954)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: TfLiteSynchronizationGetPtr
>>> referenced by delegate.cc:1256 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1256)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Eval(TfLiteOpaqueContext*, TfLiteOpaqueNode*, TfLiteExecutionTask*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: tflite::delegates::utils::WaitForAllFds(absl::lts_20230125::Span<int const>)
>>> referenced by delegate.cc:1268 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1268)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Eval(TfLiteOpaqueContext*, TfLiteOpaqueNode*, TfLiteExecutionTask*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: tflite::delegates::utils::ConvertToTfLiteStatus(absl::lts_20230125::Status)
>>> referenced by delegate.cc:1308 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1308)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Eval(TfLiteOpaqueContext*, TfLiteOpaqueNode*, TfLiteExecutionTask*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:1289 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1289)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::$_10::operator()(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::LockedAHWBs*, std::__ndk1::vector<long, std::__ndk1::allocator<long> > const&, absl::lts_20230125::Status (tflite::gpu::InferenceRunner::*)(int, std::__ndk1::variant<std::__ndk1::monostate, tflite::gpu::OpenGlBuffer, tflite::gpu::OpenGlTexture, tflite::gpu::CpuMemory, tflite::gpu::OpenClBuffer, tflite::gpu::OpenClTexture, tflite::gpu::VulkanBuffer, tflite::gpu::VulkanTexture>)) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: AHardwareBuffer_unlock
>>> referenced by delegate.cc:1212 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1212)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Eval(TfLiteOpaqueContext*, TfLiteOpaqueNode*, TfLiteExecutionTask*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:1212 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1212)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Eval(TfLiteOpaqueContext*, TfLiteOpaqueNode*, TfLiteExecutionTask*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:1212 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1212)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::LockedAHWBs::~LockedAHWBs()) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: TfLiteSynchronizationSetPtr
>>> referenced by delegate.cc:1328 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1328)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Eval(TfLiteOpaqueContext*, TfLiteOpaqueNode*, TfLiteExecutionTask*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

ld: error: undefined symbol: AHardwareBuffer_lock
>>> referenced by delegate.cc:1185 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1185)
>>>               delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::$_10::operator()(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::LockedAHWBs*, std::__ndk1::vector<long, std::__ndk1::allocator<long> > const&, absl::lts_20230125::Status (tflite::gpu::InferenceRunner::*)(int, std::__ndk1::variant<std::__ndk1::monostate, tflite::gpu::OpenGlBuffer, tflite::gpu::OpenGlTexture, tflite::gpu::CpuMemory, tflite::gpu::OpenClBuffer, tflite::gpu::OpenClTexture, tflite::gpu::VulkanBuffer, tflite::gpu::VulkanTexture>)) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a

About this issue

  • Original URL
  • State: open
  • Created a year ago
  • Comments: 15 (2 by maintainers)

Most upvoted comments

Hey @GoldFeniks,

I ran into the same problem when building TFLite 2.13.0 with CMake. I managed to fix it by editing tensorflow/lite/CMakeLists.txt.

Before the if(TFLITE_ENABLE_GPU) block I added the following lines:

populate_tflite_source_vars("core/async/interop" TFLITE_CORE_ASYNC_INTEROP_SRCS)
populate_tflite_source_vars("core/async/interop/c" TFLITE_CORE_ASYNC_INTEROP_C_SRCS)
populate_tflite_source_vars("delegates/utils" TFLITE_DELEGATES_UTILS_SRCS)
populate_tflite_source_vars("async" TFLITE_ASYNC_SRCS)

Then, inside the existing set(_ALL_TFLITE_SRCS ...) list I added the following entries:

${TFLITE_CORE_ASYNC_INTEROP_SRCS}
${TFLITE_CORE_ASYNC_INTEROP_C_SRCS}
${TFLITE_DELEGATES_UTILS_SRCS}
${TFLITE_ASYNC_SRCS}

It turns out the AHardwareBuffer_* functions require linking against libandroid, so adding

find_library(android-lib android REQUIRED)

and changing target_link_libraries to

target_link_libraries(gpu tensorflow-lite ${android-lib})

fixes the problem.
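
To see how those pieces fit together, here is a condensed view of where the edits land (a sketch, not a verbatim diff: the source-group changes go in tensorflow/lite/CMakeLists.txt, the link change goes in whichever CMakeLists.txt defines the gpu target, and "# ..." stands for existing content left untouched):

# tensorflow/lite/CMakeLists.txt -- before the if(TFLITE_ENABLE_GPU) block:
populate_tflite_source_vars("core/async/interop" TFLITE_CORE_ASYNC_INTEROP_SRCS)
populate_tflite_source_vars("core/async/interop/c" TFLITE_CORE_ASYNC_INTEROP_C_SRCS)
populate_tflite_source_vars("delegates/utils" TFLITE_DELEGATES_UTILS_SRCS)
populate_tflite_source_vars("async" TFLITE_ASYNC_SRCS)

# tensorflow/lite/CMakeLists.txt -- appended inside the existing source list:
set(_ALL_TFLITE_SRCS
  # ... existing entries ...
  ${TFLITE_CORE_ASYNC_INTEROP_SRCS}
  ${TFLITE_CORE_ASYNC_INTEROP_C_SRCS}
  ${TFLITE_DELEGATES_UTILS_SRCS}
  ${TFLITE_ASYNC_SRCS}
)

# Consumer CMakeLists.txt -- libandroid supplies the AHardwareBuffer_* symbols:
find_library(android-lib android REQUIRED)
target_link_libraries(gpu tensorflow-lite ${android-lib})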

Hey @AntonMalyshev, yes, it will drop compatibility with Android versions below 8.0. I believe the AHardwareBuffer_* symbols were introduced in API level 26. I tried building for API levels 21 through 25 and all of them failed; it only worked with API level 26.
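
If you want the build to fail loudly instead of hitting these link errors on older API levels, a small guard in the consumer CMakeLists.txt can express the constraint. This is a sketch under the assumption that ANDROID_PLATFORM is passed as a plain number such as "26", as in the configure command below; adjust the comparison if you pass the "android-26" form:

# Hypothetical guard: AHardwareBuffer_* only exists in libandroid from API level 26 on,
# so refuse to configure a GPU-enabled build below that.
if(ANDROID AND TFLITE_ENABLE_GPU AND ANDROID_PLATFORM LESS 26)
  message(FATAL_ERROR "TFLITE_ENABLE_GPU needs -DANDROID_PLATFORM=26 or higher for AHardwareBuffer support")
endif()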

@GoldFeniks, here’s my cmake config command:

cmake \
    -DCMAKE_BUILD_TYPE="release" \
    -DCMAKE_TOOLCHAIN_FILE="$ANDROID_NDK_HOME/build/cmake/android.toolchain.cmake" \
    -DANDROID_PLATFORM="26" \
    -DANDROID_ABI="arm64-v8a" \
    -DTFLITE_ENABLE_GPU=ON \
    -DXNNPACK_ENABLE_ARM_BF16=OFF \
     ../../tensorflow/lite

I am using NDK 21.4.7075529 instead of 25. Also, I had to disable XNNPACK_ENABLE_ARM_BF16 as advised here.