tensorflow: when running a tflite model using the GPU delegate, the result on AMD is wrong
Issue Type
Bug
Source
source
Tensorflow Version
2.10 or 2.11
Custom Code
Yes
OS Platform and Distribution
win64
Mobile device
AMD
Python version
3.7
Bazel version
no
GCC/Compiler version
no
CUDA/cuDNN version
no
GPU model and memory
111
Current Behaviour?
Device: AMD Ryzen 5 5600U with Radeon Graphics (notebook).
I run a .tflite model on a notebook PC using the GPU delegate (OpenCL backend), and the inference result is wrong.
I tried other tflite models and another AMD notebook, and the result is also wrong with the GPU delegate.
Please help me look into this, thank you very much.
Standalone code to reproduce the issue
My configuration is the following:
TfLiteGpuDelegateOptionsV2 gpu_options = TfLiteGpuDelegateOptionsV2Default();
gpu_options.inference_priority1 = TFLITE_GPU_INFERENCE_PRIORITY_MIN_MEMORY_USAGE;
gpu_options.inference_priority2 = TFLITE_GPU_INFERENCE_PRIORITY_MIN_LATENCY;
gpu_options.inference_priority3 = TFLITE_GPU_INFERENCE_PRIORITY_MAX_PRECISION;
gpu_options.experimental_flags |= TFLITE_GPU_EXPERIMENTAL_FLAGS_ENABLE_QUANT;
But if I use the following configuration,
gpu_options.inference_priority1 = TFLITE_GPU_INFERENCE_PRIORITY_MAX_PRECISION;
gpu_options.inference_priority2 = TFLITE_GPU_INFERENCE_PRIORITY_MIN_MEMORY_USAGE;
gpu_options.inference_priority3 = TFLITE_GPU_INFERENCE_PRIORITY_MIN_LATENCY;
the result is correct. Is this a bug?
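For reference, here is a minimal sketch of how such options are typically attached to an interpreter with the TFLite C++ API; the model path, input handling, and error handling are placeholders, not taken from this report. With MAX_PRECISION as the lowest priority, the delegate may run in reduced (FP16) precision, which could explain the difference observed on this GPU.

#include <cstdio>
#include <memory>

#include "tensorflow/lite/delegates/gpu/delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the model (path is a placeholder).
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  if (!model) return 1;

  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter) return 1;

  // Delegate options as in the first configuration above: precision is the lowest priority.
  TfLiteGpuDelegateOptionsV2 gpu_options = TfLiteGpuDelegateOptionsV2Default();
  gpu_options.inference_priority1 = TFLITE_GPU_INFERENCE_PRIORITY_MIN_MEMORY_USAGE;
  gpu_options.inference_priority2 = TFLITE_GPU_INFERENCE_PRIORITY_MIN_LATENCY;
  gpu_options.inference_priority3 = TFLITE_GPU_INFERENCE_PRIORITY_MAX_PRECISION;
  gpu_options.experimental_flags |= TFLITE_GPU_EXPERIMENTAL_FLAGS_ENABLE_QUANT;

  TfLiteDelegate* gpu_delegate = TfLiteGpuDelegateV2Create(&gpu_options);
  if (interpreter->ModifyGraphWithDelegates(gpu_delegate) != kTfLiteOk) {
    std::printf("Failed to apply the GPU delegate\n");
    TfLiteGpuDelegateV2Delete(gpu_delegate);
    return 1;
  }
  interpreter->AllocateTensors();

  // ... fill input tensors here, then run and compare against the CPU output.
  interpreter->Invoke();

  interpreter.reset();                      // Release the interpreter first.
  TfLiteGpuDelegateV2Delete(gpu_delegate);  // Then destroy the delegate.
  return 0;
}

Swapping the priorities so that MAX_PRECISION comes first, as in the second configuration, should force full FP32 execution, which appears consistent with the output becoming correct.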
Relevant log output
No response
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Comments: 16 (6 by maintainers)
cmake tensorflow/lite ^
  -G "Visual Studio 16 2019" -A x64 ^
  -DCMAKE_BUILD_TYPE=Release ^
  -DTFLITE_C_BUILD_SHARED_LIBS=OFF ^
  -DTFLITE_ENABLE_NNAPI=OFF ^
  -DTFLITE_ENABLE_GPU=ON
cmake --build . --target demo --config Release
The demo is very simple; I cannot provide it here.