tensorflow: Image Encoding/Decoding and B64 Encoding/Decoding Not Working

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
  • TensorFlow installed from (source or binary): Binary
  • TensorFlow version (or github SHA if from source): tf-nightly-gpu

Provide the text output from tflite_convert

2020-06-10 20:13:06.127888: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
2020-06-10 20:13:08.090537: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library nvcuda.dll
2020-06-10 20:13:08.113383: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1683] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1060 computeCapability: 6.1
coreClock: 1.733GHz coreCount: 10 deviceMemorySize: 6.00GiB deviceMemoryBandwidth: 178.99GiB/s
2020-06-10 20:13:08.113524: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
2020-06-10 20:13:08.116647: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cublas64_10.dll
2020-06-10 20:13:08.119646: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cufft64_10.dll
2020-06-10 20:13:08.122447: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library curand64_10.dll
2020-06-10 20:13:08.126673: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusolver64_10.dll
2020-06-10 20:13:08.128467: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusparse64_10.dll
2020-06-10 20:13:08.135299: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudnn64_7.dll
2020-06-10 20:13:08.135697: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1825] Adding visible gpu devices: 0
2020-06-10 20:13:08.136396: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-06-10 20:13:08.146065: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1eb4c32fa00 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-06-10 20:13:08.146205: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-06-10 20:13:08.146679: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1683] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1060 computeCapability: 6.1
coreClock: 1.733GHz coreCount: 10 deviceMemorySize: 6.00GiB deviceMemoryBandwidth: 178.99GiB/s
2020-06-10 20:13:08.146785: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
2020-06-10 20:13:08.146881: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cublas64_10.dll
2020-06-10 20:13:08.146962: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cufft64_10.dll
2020-06-10 20:13:08.147058: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library curand64_10.dll
2020-06-10 20:13:08.147123: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusolver64_10.dll
2020-06-10 20:13:08.147222: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusparse64_10.dll
2020-06-10 20:13:08.147325: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudnn64_7.dll
2020-06-10 20:13:08.147829: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1825] Adding visible gpu devices: 0
2020-06-10 20:13:08.640614: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1224] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-10 20:13:08.640783: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1230]      0
2020-06-10 20:13:08.640915: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1243] 0:   N
2020-06-10 20:13:08.641430: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1369] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4826 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-06-10 20:13:08.644906: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1eb6d09aff0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-06-10 20:13:08.645004: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce GTX 1060, Compute Capability 6.1
WARNING:tensorflow:From C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\keras\backend.py:467: set_learning_phase (from tensorflow.python.keras.backend) is deprecated and will be removed after 2020-10-11.
Instructions for updating:
Simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model.
WARNING:tensorflow:From C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\training\tracking\tracking.py:105: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
WARNING:tensorflow:From C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\training\tracking\tracking.py:105: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
2020-06-10 20:13:09.359436: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2020-06-10 20:13:09.359812: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-06-10 20:13:09.361503: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1683] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1060 computeCapability: 6.1
coreClock: 1.733GHz coreCount: 10 deviceMemorySize: 6.00GiB deviceMemoryBandwidth: 178.99GiB/s
2020-06-10 20:13:09.361962: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
2020-06-10 20:13:09.362238: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cublas64_10.dll
2020-06-10 20:13:09.362494: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cufft64_10.dll
2020-06-10 20:13:09.362799: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library curand64_10.dll
2020-06-10 20:13:09.363040: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusolver64_10.dll
2020-06-10 20:13:09.363260: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusparse64_10.dll
2020-06-10 20:13:09.363492: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudnn64_7.dll
2020-06-10 20:13:09.363870: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1825] Adding visible gpu devices: 0
2020-06-10 20:13:09.364116: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1224] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-10 20:13:09.364323: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1230]      0
2020-06-10 20:13:09.364533: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1243] 0:   N
2020-06-10 20:13:09.364959: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1369] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4826 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-06-10 20:13:09.390848: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:810] Optimization results for grappler item: graph_to_optimize
2020-06-10 20:13:09.391281: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:812]   function_optimizer: Graph size after: 24 nodes (21), 27 edges (24), time = 1.991ms.
2020-06-10 20:13:09.391508: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:812]   function_optimizer: Graph size after: 24 nodes (0), 27 edges (0), time = 0.906ms.
2020-06-10 20:13:09.391733: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:810] Optimization results for grappler item: decode_image_cond_jpeg_false_45
2020-06-10 20:13:09.391947: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:812]   function_optimizer: Graph size after: 11 nodes (0), 12 edges (0), time = 0.673ms.
2020-06-10 20:13:09.392162: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:812]   function_optimizer: Graph size after: 11 nodes (0), 12 edges (0), time = 0.593ms.
2020-06-10 20:13:09.392375: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:810] Optimization results for grappler item: decode_image_cond_jpeg_cond_png_cond_gif_false_75
2020-06-10 20:13:09.392602: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:812]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-06-10 20:13:09.392844: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:812]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-06-10 20:13:09.393065: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:810] Optimization results for grappler item: decode_image_cond_jpeg_cond_png_cond_gif_true_74
2020-06-10 20:13:09.393291: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:812]   function_optimizer: function_optimizer did nothing. time = 0.001ms.
2020-06-10 20:13:09.393513: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:812]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-06-10 20:13:09.393738: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:810] Optimization results for grappler item: decode_image_cond_jpeg_cond_png_false_64
2020-06-10 20:13:09.393958: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:812]   function_optimizer: Graph size after: 8 nodes (0), 8 edges (0), time = 0.471ms.
2020-06-10 20:13:09.394159: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:812]   function_optimizer: Graph size after: 8 nodes (0), 8 edges (0), time = 0.402ms.
2020-06-10 20:13:09.394365: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:810] Optimization results for grappler item: decode_image_cond_jpeg_cond_png_true_63
2020-06-10 20:13:09.394570: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:812]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-06-10 20:13:09.394760: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:812]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-06-10 20:13:09.394966: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:810] Optimization results for grappler item: decode_image_cond_jpeg_true_44
2020-06-10 20:13:09.395137: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:812]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-06-10 20:13:09.395308: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:812]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-06-10 20:13:09.497900: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:313] Ignored output_format.
2020-06-10 20:13:09.498161: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:316] Ignored drop_control_dependency.
2020-06-10 20:13:09.503781: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1683] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1060 computeCapability: 6.1
coreClock: 1.733GHz coreCount: 10 deviceMemorySize: 6.00GiB deviceMemoryBandwidth: 178.99GiB/s
2020-06-10 20:13:09.504393: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
2020-06-10 20:13:09.504750: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cublas64_10.dll
2020-06-10 20:13:09.504986: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cufft64_10.dll
2020-06-10 20:13:09.505255: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library curand64_10.dll
2020-06-10 20:13:09.505510: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusolver64_10.dll
2020-06-10 20:13:09.505846: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusparse64_10.dll
2020-06-10 20:13:09.506091: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudnn64_7.dll
2020-06-10 20:13:09.506647: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1825] Adding visible gpu devices: 0
2020-06-10 20:13:09.506960: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1224] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-10 20:13:09.507198: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1230]      0
2020-06-10 20:13:09.507422: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1243] 0:   N
2020-06-10 20:13:09.507869: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1369] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4826 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060, pci bus id: 0000:01:00.0, compute capability: 6.1)
loc(fused[callsite("decode_image/Substr@__inference_call_120"("C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\ops\image_ops_impl.py":2639:0) at callsite("C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\util\dispatch.py":201:0 at callsite("dev.py":30:0 at callsite("C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\framework\func_graph.py":955:0 at callsite("C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py":3722:0 at callsite("C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\def_function.py":600:0 at callsite("C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\framework\func_graph.py":979:0 at callsite("C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py":3052:0 at callsite("C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py":3200:0 at "C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py":2842:0))))))))), "image_byte_wrapper/StatefulPartitionedCall/decode_image/Substr"]): error: 'tf.Substr' op is neither a custom op nor a flex op
loc(fused[callsite("decode_image/is_jpeg/Substr@__inference_call_120"("C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\ops\image_ops_impl.py":2707:0) at callsite("C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\util\dispatch.py":201:0 at callsite("dev.py":30:0 at callsite("C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\framework\func_graph.py":955:0 at callsite("C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py":3722:0 at callsite("C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\def_function.py":600:0 at callsite("C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\framework\func_graph.py":979:0 at callsite("C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py":3052:0 at callsite("C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py":3200:0 at "C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py":2842:0))))))))), "image_byte_wrapper/StatefulPartitionedCall/decode_image/is_jpeg/Substr"]): error: 'tf.Substr' op is neither a custom op nor a flex op
loc("decode_image/cond_jpeg/is_png/Substr@decode_image_cond_jpeg_false_45"): error: 'tf.Substr' op is neither a custom op nor a flex op
loc("decode_image/cond_jpeg/cond_png/cond_gif/Substr@decode_image_cond_jpeg_cond_png_cond_gif_false_75"): error: 'tf.Substr' op is neither a custom op nor a flex op
loc("decode_image/cond_jpeg/cond_png/cond_gif/DecodeGif@decode_image_cond_jpeg_cond_png_cond_gif_true_74"): error: 'tf.DecodeGif' op is neither a custom op nor a flex op
loc("decode_image/cond_jpeg/cond_png/DecodePng@decode_image_cond_jpeg_cond_png_true_63"): error: 'tf.DecodePng' op is neither a custom op nor a flex op
loc("decode_image/cond_jpeg/DecodeJpeg@decode_image_cond_jpeg_true_44"): error: 'tf.DecodeJpeg' op is neither a custom op nor a flex op
error: failed while converting: 'main': Ops that need custom implementation (enabled via setting the -emit-custom-ops flag):
        tf.DecodeGif {device = ""}
        tf.DecodeJpeg {acceptable_fraction = 1.000000e+00 : f32, channels = 3 : i64, dct_method = "", device = "", fancy_upscaling = true, ratio = 1 : i64, try_recover_truncated = false}
        tf.DecodePng {channels = 3 : i64, device = ""}
        tf.Substr {T = i32, device = "", unit = "BYTE"}
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\lite\python\convert.py", line 182, in toco_convert_protos
    model_str = wrap_toco.wrapped_toco_convert(model_flags_str,
  File "C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\lite\python\wrap_toco.py", line 32, in wrapped_toco_convert
    return _pywrap_toco_api.TocoConvert(
Exception: C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\ops\image_ops_impl.py:2639:1: error: 'tf.Substr' op is neither a custom op nor a flex op
    substr = string_ops.substr(contents, 0, 3)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\util\dispatch.py:201:1: note: called from
      return target(*args, **kwargs)
^
dev.py:30:1: note: called from
        image = tf.io.decode_image(inputs[0][0], channels=3)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\framework\func_graph.py:955:1: note: called from
            return autograph.converted_call(
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py:3722:1: note: called from
    return wrapped_fn(*args, **kwargs)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\def_function.py:600:1: note: called from
        return weak_wrapped_fn().__wrapped__(*args, **kwds)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\framework\func_graph.py:979:1: note: called from
      func_outputs = python_func(*func_args, **func_kwargs)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py:3052:1: note: called from
        func_graph_module.func_graph_from_py_func(
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py:3200:1: note: called from
      graph_function = self._create_graph_function(args, kwargs)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py:2842:1: note: called from
      graph_function, _, _ = self._maybe_define_function(args, kwargs)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\ops\image_ops_impl.py:2639:1: note: see current operation: %2 = "tf.Substr"(%1, %cst_1, %cst_0) {T = i32, device = "", unit = "BYTE"} : (tensor<!tf.string>, tensor<i32>, tensor<i32>) -> tensor<!tf.string>
    substr = string_ops.substr(contents, 0, 3)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\ops\image_ops_impl.py:2707:1: error: 'tf.Substr' op is neither a custom op nor a flex op
        is_jpeg(contents), _jpeg, check_png, name='cond_jpeg')
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\util\dispatch.py:201:1: note: called from
      return target(*args, **kwargs)
^
dev.py:30:1: note: called from
        image = tf.io.decode_image(inputs[0][0], channels=3)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\framework\func_graph.py:955:1: note: called from
            return autograph.converted_call(
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py:3722:1: note: called from
    return wrapped_fn(*args, **kwargs)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\def_function.py:600:1: note: called from
        return weak_wrapped_fn().__wrapped__(*args, **kwds)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\framework\func_graph.py:979:1: note: called from
      func_outputs = python_func(*func_args, **func_kwargs)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py:3052:1: note: called from
        func_graph_module.func_graph_from_py_func(
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py:3200:1: note: called from
      graph_function = self._create_graph_function(args, kwargs)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py:2842:1: note: called from
      graph_function, _, _ = self._maybe_define_function(args, kwargs)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\ops\image_ops_impl.py:2707:1: note: see current operation: %3 = "tf.Substr"(%1, %cst_1, %cst_0) {T = i32, device = "", unit = "BYTE"} : (tensor<!tf.string>, tensor<i32>, tensor<i32>) -> tensor<!tf.string>
        is_jpeg(contents), _jpeg, check_png, name='cond_jpeg')
^
<unknown>:0: error: loc("decode_image/cond_jpeg/is_png/Substr@decode_image_cond_jpeg_false_45"): 'tf.Substr' op is neither a custom op nor a flex op
<unknown>:0: note: loc("decode_image/cond_jpeg/is_png/Substr@decode_image_cond_jpeg_false_45"): see current operation: %0 = "tf.Substr"(%arg0, %cst_1, %cst_0) {T = i32, device = "", unit = "BYTE"} : (tensor<!tf.string>, tensor<i32>, tensor<i32>) -> tensor<!tf.string>
<unknown>:0: error: loc("decode_image/cond_jpeg/cond_png/cond_gif/Substr@decode_image_cond_jpeg_cond_png_cond_gif_false_75"): 'tf.Substr' op is neither a custom op nor a flex op
<unknown>:0: note: loc("decode_image/cond_jpeg/cond_png/cond_gif/Substr@decode_image_cond_jpeg_cond_png_cond_gif_false_75"): see current operation: %0 = "tf.Substr"(%arg0, %cst_0, %cst) {T = i32, device = "", unit = "BYTE"} : (tensor<!tf.string>, tensor<i32>, tensor<i32>) -> tensor<!tf.string>
<unknown>:0: error: loc("decode_image/cond_jpeg/cond_png/cond_gif/DecodeGif@decode_image_cond_jpeg_cond_png_cond_gif_true_74"): 'tf.DecodeGif' op is neither a custom op nor a flex op
<unknown>:0: note: loc("decode_image/cond_jpeg/cond_png/cond_gif/DecodeGif@decode_image_cond_jpeg_cond_png_cond_gif_true_74"): see current operation: %0 = "tf.DecodeGif"(%arg0) {device = ""} : (tensor<!tf.string>) -> tensor<?x?x?x3xui8>
<unknown>:0: error: loc("decode_image/cond_jpeg/cond_png/DecodePng@decode_image_cond_jpeg_cond_png_true_63"): 'tf.DecodePng' op is neither a custom op nor a flex op
<unknown>:0: note: loc("decode_image/cond_jpeg/cond_png/DecodePng@decode_image_cond_jpeg_cond_png_true_63"): see current operation: %0 = "tf.DecodePng"(%arg0) {channels = 3 : i64, device = ""} : (tensor<!tf.string>) -> tensor<?x?x3xui8>
<unknown>:0: error: loc("decode_image/cond_jpeg/DecodeJpeg@decode_image_cond_jpeg_true_44"): 'tf.DecodeJpeg' op is neither a custom op nor a flex op
<unknown>:0: note: loc("decode_image/cond_jpeg/DecodeJpeg@decode_image_cond_jpeg_true_44"): see current operation: %0 = "tf.DecodeJpeg"(%arg0) {acceptable_fraction = 1.000000e+00 : f32, channels = 3 : i64, dct_method = "", device = "", fancy_upscaling = true, ratio = 1 : i64, try_recover_truncated = false} : (tensor<!tf.string>) -> tensor<?x?x3xui8>
<unknown>:0: error: failed while converting: 'main': Ops that need custom implementation (enabled via setting the -emit-custom-ops flag):
        tf.DecodeGif {device = ""}
        tf.DecodeJpeg {acceptable_fraction = 1.000000e+00 : f32, channels = 3 : i64, dct_method = "", device = "", fancy_upscaling = true, ratio = 1 : i64, try_recover_truncated = false}
        tf.DecodePng {channels = 3 : i64, device = ""}
        tf.Substr {T = i32, device = "", unit = "BYTE"}
<unknown>:0: note: see current operation: "func"() ( {
^bb0(%arg0: tensor<?x1x!tf.string>):  // no predecessors
  %cst = "std.constant"() {value = dense<"\FF\D8\FF"> : tensor<!tf.string>} : () -> tensor<!tf.string>
  %cst_0 = "std.constant"() {value = dense<3> : tensor<i32>} : () -> tensor<i32>
  %cst_1 = "std.constant"() {value = dense<0> : tensor<i32>} : () -> tensor<i32>
  %cst_2 = "std.constant"() {value = dense<0> : tensor<1xi32>} : () -> tensor<1xi32>
  %cst_3 = "std.constant"() {value = dense<1> : tensor<1xi32>} : () -> tensor<1xi32>
  %0 = "tf.StridedSlice"(%arg0, %cst_2, %cst_3, %cst_3) {begin_mask = 0 : i64, device = "", ellipsis_mask = 0 : i64, end_mask = 0 : i64, new_axis_mask = 0 : i64, shrink_axis_mask = 1 : i64} : (tensor<?x1x!tf.string>, tensor<1xi32>, tensor<1xi32>, tensor<1xi32>) -> tensor<1x!tf.string>
  %1 = "tf.StridedSlice"(%0, %cst_2, %cst_3, %cst_3) {begin_mask = 0 : i64, device = "", ellipsis_mask = 0 : i64, end_mask = 0 : i64, new_axis_mask = 0 : i64, shrink_axis_mask = 1 : i64} : (tensor<1x!tf.string>, tensor<1xi32>, tensor<1xi32>, tensor<1xi32>) -> tensor<!tf.string>
  %2 = "tf.Substr"(%1, %cst_1, %cst_0) {T = i32, device = "", unit = "BYTE"} : (tensor<!tf.string>, tensor<i32>, tensor<i32>) -> tensor<!tf.string>
  %3 = "tf.Substr"(%1, %cst_1, %cst_0) {T = i32, device = "", unit = "BYTE"} : (tensor<!tf.string>, tensor<i32>, tensor<i32>) -> tensor<!tf.string>
  %4 = "tfl.equal"(%3, %cst) : (tensor<!tf.string>, tensor<!tf.string>) -> tensor<i1>
  %5 = "tf.If"(%4, %1, %2) {_lower_using_switch_merge = false, _read_only_resource_inputs = [], device = "", else_branch = @decode_image_cond_jpeg_false_450, is_stateless = false, output_shapes = [#tf.shape<*>], then_branch = @decode_image_cond_jpeg_true_440} : (tensor<i1>, tensor<!tf.string>, tensor<!tf.string>) -> tensor<*xui8>
  "std.return"(%5) : (tensor<*xui8>) -> ()
}) {sym_name = "main", tf.entry_function = {control_outputs = "", inputs = "args_0", outputs = "Identity"}, type = (tensor<?x1x!tf.string>) -> tensor<*xui8>} : () -> ()


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "dev.py", line 57, in <module>
    tflite = convert(model)
  File "dev.py", line 42, in convert
    tflite_model = converter.convert()
  File "C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\lite\python\lite.py", line 777, in convert
    return super(TFLiteKerasModelConverterV2,
  File "C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\lite\python\lite.py", line 591, in convert
    result = _toco_convert_impl(
  File "C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\lite\python\convert.py", line 555, in toco_convert_impl
    data = toco_convert_protos(
  File "C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\lite\python\convert.py", line 188, in toco_convert_protos
    raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\ops\image_ops_impl.py:2639:1: error: 'tf.Substr' op is neither a custom op nor a flex op
    substr = string_ops.substr(contents, 0, 3)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\util\dispatch.py:201:1: note: called from
      return target(*args, **kwargs)
^
dev.py:30:1: note: called from
        image = tf.io.decode_image(inputs[0][0], channels=3)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\framework\func_graph.py:955:1: note: called from
            return autograph.converted_call(
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py:3722:1: note: called from
    return wrapped_fn(*args, **kwargs)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\def_function.py:600:1: note: called from
        return weak_wrapped_fn().__wrapped__(*args, **kwds)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\framework\func_graph.py:979:1: note: called from
      func_outputs = python_func(*func_args, **func_kwargs)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py:3052:1: note: called from
        func_graph_module.func_graph_from_py_func(
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py:3200:1: note: called from
      graph_function = self._create_graph_function(args, kwargs)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py:2842:1: note: called from
      graph_function, _, _ = self._maybe_define_function(args, kwargs)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\ops\image_ops_impl.py:2639:1: note: see current operation: %2 = "tf.Substr"(%1, %cst_1, %cst_0) {T = i32, device = "", unit = "BYTE"} : (tensor<!tf.string>, tensor<i32>, tensor<i32>) -> tensor<!tf.string>
    substr = string_ops.substr(contents, 0, 3)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\ops\image_ops_impl.py:2707:1: error: 'tf.Substr' op is neither a custom op nor a flex op
        is_jpeg(contents), _jpeg, check_png, name='cond_jpeg')
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\util\dispatch.py:201:1: note: called from
      return target(*args, **kwargs)
^
dev.py:30:1: note: called from
        image = tf.io.decode_image(inputs[0][0], channels=3)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\framework\func_graph.py:955:1: note: called from
            return autograph.converted_call(
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py:3722:1: note: called from
    return wrapped_fn(*args, **kwargs)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\def_function.py:600:1: note: called from
        return weak_wrapped_fn().__wrapped__(*args, **kwds)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\framework\func_graph.py:979:1: note: called from
      func_outputs = python_func(*func_args, **func_kwargs)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py:3052:1: note: called from
        func_graph_module.func_graph_from_py_func(
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py:3200:1: note: called from
      graph_function = self._create_graph_function(args, kwargs)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\eager\function.py:2842:1: note: called from
      graph_function, _, _ = self._maybe_define_function(args, kwargs)
^
C:\ProgramData\Anaconda3\envs\deblurring-gpu\lib\site-packages\tensorflow\python\ops\image_ops_impl.py:2707:1: note: see current operation: %3 = "tf.Substr"(%1, %cst_1, %cst_0) {T = i32, device = "", unit = "BYTE"} : (tensor<!tf.string>, tensor<i32>, tensor<i32>) -> tensor<!tf.string>
        is_jpeg(contents), _jpeg, check_png, name='cond_jpeg')
^
<unknown>:0: error: loc("decode_image/cond_jpeg/is_png/Substr@decode_image_cond_jpeg_false_45"): 'tf.Substr' op is neither a custom op nor a flex op
<unknown>:0: note: loc("decode_image/cond_jpeg/is_png/Substr@decode_image_cond_jpeg_false_45"): see current operation: %0 = "tf.Substr"(%arg0, %cst_1, %cst_0) {T = i32, device = "", unit = "BYTE"} : (tensor<!tf.string>, tensor<i32>, tensor<i32>) -> tensor<!tf.string>
<unknown>:0: error: loc("decode_image/cond_jpeg/cond_png/cond_gif/Substr@decode_image_cond_jpeg_cond_png_cond_gif_false_75"): 'tf.Substr' op is neither a custom op nor a flex op
<unknown>:0: note: loc("decode_image/cond_jpeg/cond_png/cond_gif/Substr@decode_image_cond_jpeg_cond_png_cond_gif_false_75"): see current operation: %0 = "tf.Substr"(%arg0, %cst_0, %cst) {T = i32, device = "", unit = "BYTE"} : (tensor<!tf.string>, tensor<i32>, tensor<i32>) -> tensor<!tf.string>
<unknown>:0: error: loc("decode_image/cond_jpeg/cond_png/cond_gif/DecodeGif@decode_image_cond_jpeg_cond_png_cond_gif_true_74"): 'tf.DecodeGif' op is neither a custom op nor a flex op
<unknown>:0: note: loc("decode_image/cond_jpeg/cond_png/cond_gif/DecodeGif@decode_image_cond_jpeg_cond_png_cond_gif_true_74"): see current operation: %0 = "tf.DecodeGif"(%arg0) {device = ""} : (tensor<!tf.string>) -> tensor<?x?x?x3xui8>
<unknown>:0: error: loc("decode_image/cond_jpeg/cond_png/DecodePng@decode_image_cond_jpeg_cond_png_true_63"): 'tf.DecodePng' op is neither a custom op nor a flex op
<unknown>:0: note: loc("decode_image/cond_jpeg/cond_png/DecodePng@decode_image_cond_jpeg_cond_png_true_63"): see current operation: %0 = "tf.DecodePng"(%arg0) {channels = 3 : i64, device = ""} : (tensor<!tf.string>) -> tensor<?x?x3xui8>
<unknown>:0: error: loc("decode_image/cond_jpeg/DecodeJpeg@decode_image_cond_jpeg_true_44"): 'tf.DecodeJpeg' op is neither a custom op nor a flex op
<unknown>:0: note: loc("decode_image/cond_jpeg/DecodeJpeg@decode_image_cond_jpeg_true_44"): see current operation: %0 = "tf.DecodeJpeg"(%arg0) {acceptable_fraction = 1.000000e+00 : f32, channels = 3 : i64, dct_method = "", device = "", fancy_upscaling = true, ratio = 1 : i64, try_recover_truncated = false} : (tensor<!tf.string>) -> tensor<?x?x3xui8>
<unknown>:0: error: failed while converting: 'main': Ops that need custom implementation (enabled via setting the -emit-custom-ops flag):
        tf.DecodeGif {device = ""}
        tf.DecodeJpeg {acceptable_fraction = 1.000000e+00 : f32, channels = 3 : i64, dct_method = "", device = "", fancy_upscaling = true, ratio = 1 : i64, try_recover_truncated = false}
        tf.DecodePng {channels = 3 : i64, device = ""}
        tf.Substr {T = i32, device = "", unit = "BYTE"}
<unknown>:0: note: see current operation: "func"() ( {
^bb0(%arg0: tensor<?x1x!tf.string>):  // no predecessors
  %cst = "std.constant"() {value = dense<"\FF\D8\FF"> : tensor<!tf.string>} : () -> tensor<!tf.string>
  %cst_0 = "std.constant"() {value = dense<3> : tensor<i32>} : () -> tensor<i32>
  %cst_1 = "std.constant"() {value = dense<0> : tensor<i32>} : () -> tensor<i32>
  %cst_2 = "std.constant"() {value = dense<0> : tensor<1xi32>} : () -> tensor<1xi32>
  %cst_3 = "std.constant"() {value = dense<1> : tensor<1xi32>} : () -> tensor<1xi32>
  %0 = "tf.StridedSlice"(%arg0, %cst_2, %cst_3, %cst_3) {begin_mask = 0 : i64, device = "", ellipsis_mask = 0 : i64, end_mask = 0 : i64, new_axis_mask = 0 : i64, shrink_axis_mask = 1 : i64} : (tensor<?x1x!tf.string>, tensor<1xi32>, tensor<1xi32>, tensor<1xi32>) -> tensor<1x!tf.string>
  %1 = "tf.StridedSlice"(%0, %cst_2, %cst_3, %cst_3) {begin_mask = 0 : i64, device = "", ellipsis_mask = 0 : i64, end_mask = 0 : i64, new_axis_mask = 0 : i64, shrink_axis_mask = 1 : i64} : (tensor<1x!tf.string>, tensor<1xi32>, tensor<1xi32>, tensor<1xi32>) -> tensor<!tf.string>
  %2 = "tf.Substr"(%1, %cst_1, %cst_0) {T = i32, device = "", unit = "BYTE"} : (tensor<!tf.string>, tensor<i32>, tensor<i32>) -> tensor<!tf.string>
  %3 = "tf.Substr"(%1, %cst_1, %cst_0) {T = i32, device = "", unit = "BYTE"} : (tensor<!tf.string>, tensor<i32>, tensor<i32>) -> tensor<!tf.string>
  %4 = "tfl.equal"(%3, %cst) : (tensor<!tf.string>, tensor<!tf.string>) -> tensor<i1>
  %5 = "tf.If"(%4, %1, %2) {_lower_using_switch_merge = false, _read_only_resource_inputs = [], device = "", else_branch = @decode_image_cond_jpeg_false_450, is_stateless = false, output_shapes = [#tf.shape<*>], then_branch = @decode_image_cond_jpeg_true_440} : (tensor<i1>, tensor<!tf.string>, tensor<!tf.string>) -> tensor<*xui8>
  "std.return"(%5) : (tensor<*xui8>) -> ()
}) {sym_name = "main", tf.entry_function = {control_outputs = "", inputs = "args_0", outputs = "Identity"}, type = (tensor<?x1x!tf.string>) -> tensor<*xui8>} : () -> ()

Standalone code to reproduce the issue

import tensorflow as tf

class ImageByteWrapper(tf.keras.Model):
    @tf.function(input_signature=[tf.TensorSpec(shape=[None, 1], dtype=tf.string)])
    def call(self, inputs):
        # Decode the raw image bytes (3-D tensor for JPEG/PNG/BMP, 4-D for GIF)
        return tf.io.decode_image(inputs[0][0], channels=3)

def convert(model):
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,
        tf.lite.OpsSet.SELECT_TF_OPS,
    ]
    tflite_model = converter.convert()

    return tflite_model

model = ImageByteWrapper()

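# Build a batch of JPEG-encoded image bytes shaped [2, 1] to exercise the wrapper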
test_input = tf.random.uniform(shape=[64, 64, 3], minval=0, maxval=255, dtype=tf.int32)
test_input = tf.cast(test_input, dtype=tf.uint8)
test_input = tf.io.encode_jpeg(test_input)
test_input = tf.stack([test_input, test_input])
test_input = tf.reshape(test_input, [-1, 1])

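# Sanity-check the wrapper eagerly (pinned to CPU; the image decode ops run on CPU)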
with tf.device('/cpu:0'):
    test_output = model(test_input)

tflite = convert(model)

Most of the image encoding/decoding ops, as well as the Base64 encode/decode ops, fail to convert. This matters because in production, when deploying behind an API, it is much easier to transfer images as base64 strings or raw bytes than as actual tensors.

I supplied a minimal example using tf.io.decode_image() to keep the reproduction small.
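For context, this is the kind of wrapper I want to ship in production: the API receives base64-encoded image bytes and the model decodes them itself. The sketch below is illustrative only (Base64ImageWrapper is not part of the repro); it relies on the same base64 and JPEG decode ops this issue is about.

import tensorflow as tf

class Base64ImageWrapper(tf.keras.Model):
    # Illustrative only: decode web-safe base64 payloads inside the model.
    @tf.function(input_signature=[tf.TensorSpec(shape=[None, 1], dtype=tf.string)])
    def call(self, inputs):
        # DecodeBase64 expects web-safe base64 ('-' and '_' instead of '+' and '/'),
        # i.e. what tf.io.encode_base64 produces on the client side
        raw_bytes = tf.io.decode_base64(inputs[0][0])
        # DecodeJpeg: one of the ops the converter reports as unsupported
        return tf.io.decode_jpeg(raw_bytes, channels=3)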

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 18 (7 by maintainers)

Most upvoted comments

EncodeBase64 and DecodeBase64 are already supported at the head of master.

For encode_jpeg and decode_jpeg it is a bit more complicated; there are still some errors under discussion.

There is a build error in my PR; I need to solve it first.

Those ops are not in the flex delegate whitelist yet (lite/delegates/flex/whitelisted_flex_ops.cc). I'll add them to the whitelist.

Hi @ElPapi42

Could you try the conversion with Flex? https://www.tensorflow.org/lite/guide/ops_select#converting_the_model

The conversion failed due to missing op support. I think you can use the TF ops directly via the flex delegate.
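For reference, the Select TF ops (Flex) conversion from that guide, which the repro above already uses; a sketch, assuming the full tensorflow pip package at inference time and that the decode ops are present in the flex allowlist of the installed build (model is the ImageByteWrapper instance from the repro):

import tensorflow as tf

# Select TF ops (Flex) conversion, as described in the ops_select guide above.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # use TFLite builtin kernels where available
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF kernels via the flex delegate
]
tflite_model = converter.convert()

# Running the result requires a runtime with the flex delegate linked in;
# the full tensorflow pip package provides this for tf.lite.Interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()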