tensorflow: XNNPACK Delegate error
Please make sure that this is a build/installation issue. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub.
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): 20
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: Redmi 7
- TensorFlow installed from (source or binary): source
- TensorFlow version: 2.3.0
- Python version: 3.8
- Installed using virtualenv? pip? conda?: conda
- Bazel version (if compiling from source): 3.1.0
- GCC/Compiler version (if compiling from source): Android NDK 20.0.5594570, targeting android-28
- CUDA/cuDNN version: 10.1
- GPU model and memory:
Describe the problem
How do I get XNNPACK as an optional delegate for my application, purely in C++? I tried building with `--define tflite_with_xnnpack=true`, but then it won’t let me modify the graph with other delegates, failing with `ERROR: Graph is immutable`. I tried building XNNPACK separately and linking it as required by the `evaluation::utils` header, to no avail. Help?
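For concreteness, this is the kind of usage I’m after — a sketch only (illustrative function name, based on the public `xnnpack_delegate.h` header), applying XNNPACK explicitly instead of baking it in with `--define tflite_with_xnnpack=true`:

```cpp
// Sketch: apply XNNPACK as an explicit, optional delegate.
// Assumes a build WITHOUT --define tflite_with_xnnpack=true, so the
// graph is still mutable when ModifyGraphWithDelegate is called.
#include <memory>

#include "tensorflow/lite/delegates/xnnpack/xnnpack_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// BuildInterpreterWithXnnpack is an illustrative name, not a TFLite API.
std::unique_ptr<tflite::Interpreter> BuildInterpreterWithXnnpack(
    const tflite::FlatBufferModel& model) {
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(model, resolver)(&interpreter);

  TfLiteXNNPackDelegateOptions options = TfLiteXNNPackDelegateOptionsDefault();
  options.num_threads = 2;  // illustrative value
  TfLiteDelegate* delegate = TfLiteXNNPackDelegateCreate(&options);
  if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) {
    // Delegation failed; the interpreter falls back to the default kernels.
  }
  // In real code the delegate must outlive the interpreter and be released
  // with TfLiteXNNPackDelegateDelete after the interpreter is destroyed.
  return interpreter;
}
```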
Provide the exact sequence of commands / steps that you executed before running into the problem
- I built TensorFlow Lite using

```shell
bazel build -c opt --config android_arm64 --define tflite_with_xnnpack=true tensorflow/lite:libtensorflowlite.so
```

- I compiled the libraries for Hexagon and GPU and linked them as required by the `evaluation::utils` header.
- When running the application, it automatically creates the XNNPACK delegate for me. But when I try to modify the graph with the GPU delegate, I get:
```
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
INFO: Created TensorFlow Lite delegate for GPU.
INFO: Initialized OpenCL-based API.
INFO: Created 1 GPU delegate kernels.
ERROR: ModifyGraphWithDelegate is disallowed when graph is immutable.
```
How do I allow changing delegates with XNNPACK support, or how do I add optional XNNPACK support? I checked the README.md, following which I got an `.lo` file that I’m not sure what to do with. Help!
Any other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 16 (6 by maintainers)
Correct me if I’m wrong, but those lines don’t hint that they’re getting linked into tensorflowlite.so.
I’m unable to get `libxnnpack_f32.a`, so I compiled it manually from the XNNPACK repository and used the `-fpic` flag as you suggested, but it still didn’t help. I used these commands:

```shell
bazel build --copt="-fpic" -c opt --config android_arm64 xnnpack_f32
bazel build --copt="-fpic" -c opt --config android_arm64 tensorflow/lite/delegates/xnnpack:xnnpack_delegate
```
In my CMake I’ve used
I get the following error: https://pastebin.com/3qCBCtkR
Aside from that, I’ve noticed `tflite_with_xnnpack` and `tflite_with_xnnpack_optional` (in the `lite` folder, which can be built using bazel). What are those? Will they solve my problem?
==============================================================================================
UPDATE: (you can ignore the above)
I’ve run

```shell
bazel build -c opt --config=android_arm64 //tensorflow/lite:libtensorflowlite.so
bazel build --copt="-fpic" -c opt --config android_arm64 tensorflow/lite/delegates/xnnpack:xnnpack_delegate
```

and I link my library against the artifacts under `bazel-bin/external/xnnpack`. I feel we are getting closer and closer to the solution. Please stick around, thanks!
I see. We implemented delegate application this way to allow applying multiple delegates to the TFLite interpreter. In general, though, when using these tools we only specify one delegate to be applied via command-line flags, like `--use_gpu=true`, without specifying another one like `--use_nnapi=true` at the same time.
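Roughly, applying multiple delegates to one interpreter looks like this — a sketch only (untested, assuming the public GPU and XNNPACK delegate headers); delegates are applied in order, and each one claims the parts of the graph it supports:

```cpp
// Sketch: applying two delegates in sequence to one interpreter.
// Requires a build WITHOUT --define tflite_with_xnnpack=true, so the
// graph is still mutable here.
#include "tensorflow/lite/delegates/gpu/delegate.h"
#include "tensorflow/lite/delegates/xnnpack/xnnpack_delegate.h"
#include "tensorflow/lite/interpreter.h"

void ApplyDelegates(tflite::Interpreter* interpreter) {
  // GPU first: it claims the subgraphs it supports.
  TfLiteGpuDelegateOptionsV2 gpu_opts = TfLiteGpuDelegateOptionsV2Default();
  TfLiteDelegate* gpu = TfLiteGpuDelegateV2Create(&gpu_opts);
  if (interpreter->ModifyGraphWithDelegate(gpu) != kTfLiteOk) {
    // GPU delegation failed; those ops stay on the default CPU kernels.
  }

  // XNNPACK next: it can pick up remaining CPU-resident ops.
  TfLiteXNNPackDelegateOptions xnn_opts = TfLiteXNNPackDelegateOptionsDefault();
  TfLiteDelegate* xnnpack = TfLiteXNNPackDelegateCreate(&xnn_opts);
  if (interpreter->ModifyGraphWithDelegate(xnnpack) != kTfLiteOk) {
    // XNNPACK delegation failed; remaining ops keep the default kernels.
  }

  // In real code, keep both delegates alive until the interpreter is
  // destroyed, then release them with TfLiteGpuDelegateV2Delete and
  // TfLiteXNNPackDelegateDelete.
}
```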