tensorflow: Custom Reader Op results in undefined symbol: _ZTIN10tensorflow8OpKernelE on library load
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
- TensorFlow installed from (source or binary): Docker container, also tried source
- TensorFlow version: 1.12.0
- Python version: 2.7
- Installed using virtualenv? pip? conda?: pip
- Bazel version (if compiling from source): 0.18
- GCC/Compiler version (if compiling from source): 5.4
- CUDA/cuDNN version: 9/7
- GPU model and memory: 1080ti
I have a custom Reader op that inherits from the ReaderBase class (tensorflow/core/framework/reader_base.h). The op compiled and worked as expected under TF 1.3 and earlier.
I have now upgraded to TF 1.12.0. The shared library for the custom op builds successfully, but loading it with import tensorflow as tf; tf.load_op_library() fails due to missing symbols:
tensorflow.python.framework.errors_impl.NotFoundError: ./libjson_record_reader_op.so: undefined symbol: _ZTIN10tensorflow8OpKernelE
Actually, at first it reported an undefined symbol ending in ReaderBaseE; now it reports the OpKernelE one shown above.
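For reference, the mangled name in the error can be decoded with c++filt (from binutils) to see which C++ entity the loader could not resolve:

```shell
# Demangle the missing symbol reported by load_op_library:
echo _ZTIN10tensorflow8OpKernelE | c++filt
# -> typeinfo for tensorflow::OpKernel

# To list every tensorflow symbol the .so expects the host process to
# provide (path assumed to match the error message above):
# nm -u ./libjson_record_reader_op.so | c++filt | grep tensorflow
```

So the dynamic loader is failing to find RTTI for tensorflow::OpKernel, which the ReaderBase hierarchy depends on.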
I tried building the op in Docker containers with the following tags: 1.12.0, 1.12.0-devel, 1.12.0-gpu-devel.
I have also built TF from source with Bazel 0.18 and installed it via pip (also inside the tensorflow/tensorflow-devel-gpu container), with the same result as above.
I am building the custom op with g++, as per the current instructions in the docs. I tried g++ 5.5, 5.4, 4.8, and 4.9, all with the same result.
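For context, the build follows the pattern from the TF 1.12 "Adding a New Op" guide; the source file name below is an assumption chosen to match the .so in the error message:

```shell
# Pull compile/link flags out of the installed TensorFlow package:
TF_CFLAGS=( $(python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_compile_flags()))') )
TF_LFLAGS=( $(python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_link_flags()))') )

# Build the custom op as a shared library:
g++ -std=c++11 -shared json_record_reader_op.cc \
    -o libjson_record_reader_op.so -fPIC \
    "${TF_CFLAGS[@]}" "${TF_LFLAGS[@]}" -O2
```

On TF 1.12 the link flags should include -ltensorflow_framework, which is the library that actually provides the tensorflow::OpKernel symbols.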
About this issue
- State: closed
- Created 5 years ago
- Reactions: 1
- Comments: 19 (6 by maintainers)
No, sorry, I was not able to make it work and have since moved to PyTorch.
@agnz could you add -D_GLIBCXX_USE_CXX11_ABI=0 to your g++ flag?
@yifeif - Sorry for a very delayed response.
This flag is already automatically added from TF_CFLAGS…
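To confirm this, the flags an installed TF build expects can be printed directly; on TF 1.12 binary wheels the compile-flag list includes -D_GLIBCXX_USE_CXX11_ABI=0:

```python
import tensorflow as tf

# Compile flags the docs say to pass to g++ (includes the ABI define):
print(tf.sysconfig.get_compile_flags())

# Link flags, e.g. the -L/-l entries for libtensorflow_framework:
print(tf.sysconfig.get_link_flags())
```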
Please see below a full set of steps to reproduce the problem. Steps are taken from this guide: https://github.com/tensorflow/custom-op
Then inside the container
Everything works as expected. Now I modify zero_out_kernel.cc to include an additional SimpleReader custom op that inherits from ReaderBase (see source below). It does absolutely nothing and is used purely for demonstration.
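The reporter's actual source is not reproduced here; purely for illustration, a do-nothing reader along those lines might look roughly like this under the TF 1.x ReaderBase/ReaderOpKernel API (the op name and registration details are assumptions, not the original file):

```cpp
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/reader_base.h"
#include "tensorflow/core/framework/reader_op_kernel.h"

namespace tensorflow {

// A reader that produces no records: it reports end-of-work immediately.
class SimpleReader : public ReaderBase {
 public:
  SimpleReader() : ReaderBase("SimpleReader") {}

  Status ReadLocked(string* key, string* value, bool* produced,
                    bool* at_end) override {
    *at_end = true;  // nothing to read
    return Status::OK();
  }
};

// Kernel that hands out SimpleReader instances.
class SimpleReaderOp : public ReaderOpKernel {
 public:
  explicit SimpleReaderOp(OpKernelConstruction* context)
      : ReaderOpKernel(context) {
    SetReaderFactory([]() { return new SimpleReader(); });
  }
};

REGISTER_OP("SimpleReader")
    .Output("reader_handle: Ref(string)")
    .SetIsStateful();

REGISTER_KERNEL_BUILDER(Name("SimpleReader").Device(DEVICE_CPU),
                        SimpleReaderOp);

}  // namespace tensorflow
```

Even a stub like this pulls in RTTI for the ReaderBase/OpKernel hierarchy, which is what surfaces the undefined-symbol error at load time.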
Trying to compile and run the zero_out op again - this part is identical to the first set of steps, except that an additional op has been compiled.