tensorflow: optimize_for_inference_lib.optimize_for_inference produces an invalid graph

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes.
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
  • TensorFlow installed from (source or binary): Source.
  • TensorFlow version (use command below): 1.8.0
  • Python version: 2.7.12
  • Bazel version (if compiling from source): 0.13.0
  • GCC/Compiler version (if compiling from source): gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9)
  • CUDA/cuDNN version: 9.0.176/7.0.5.15
  • GPU model and memory: 1080 Ti
  • Exact command to reproduce:
cat > test.py <<EOF && python test.py
import tensorflow as tf

from collections import namedtuple
from tensorflow.python.tools import optimize_for_inference_lib


def main():
    with tf.Graph().as_default(), tf.Session() as session:
        input = tf.placeholder(shape=[10], dtype=tf.float32)
        output = top_k(input)

        graph_def = session.graph.as_graph_def()

    input_nodes = [input]
    output_nodes = [output.values, output.indices]

    graph_def = tf.graph_util.convert_variables_to_constants(
        session, graph_def, [_get_node_name(t) for t in output_nodes]
    )

    with tf.Graph().as_default():
        tf.import_graph_def(graph_def)  # OK

    graph_def = optimize_for_inference_lib.optimize_for_inference(
        input_graph_def=graph_def,
        input_node_names=[_get_node_name(t) for t in input_nodes],
        output_node_names=[_get_node_name(t) for t in output_nodes],
        placeholder_type_enum=[node.dtype.as_datatype_enum for node in input_nodes]
    )

    with tf.Graph().as_default():
        tf.import_graph_def(graph_def)  # ERROR


TopKResult = namedtuple('TopKResult', ['values', 'indices'])


def top_k(input, k=1, sorted=True, name=None):
    """
    A version of tf.nn.top_k tolerant to k == 0 and k > tf.shape(input)[-1].
    """
    k = tf.minimum(k, tf.shape(input)[-1])

    return tf.cond(
        tf.equal(k, 0),
        lambda: TopKResult(
            values=tf.zeros(
                shape=tf.concat([tf.shape(input)[:-1], [0]], axis=0),
                dtype=input.dtype
            ),
            indices=tf.zeros(
                shape=tf.concat([tf.shape(input)[:-1], [0]], axis=0),
                dtype=tf.int32
            )
        ),
        lambda: TopKResult(**tf.nn.top_k(input, k, sorted, name)._asdict())
    )


def _get_node_name(tensor):
    assert tensor.name.endswith(':0')
    return tensor.name[:-len(':0')]


if __name__ == '__main__':
    main()
EOF

Describe the problem

optimize_for_inference_lib.optimize_for_inference produces an invalid graph when run on the graph built by the script above. The returned GraphDef cannot be re-imported:

Traceback (most recent call last):
  File "test.py", line 66, in <module>
    main()
  File "test.py", line 32, in main
    tf.import_graph_def(graph_def)  # ERROR
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/deprecation.py", line 432, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/importer.py", line 493, in import_graph_def
    raise ValueError(str(e))
ValueError: NodeDef expected inputs '' do not match 1 inputs specified; Op<name=Const; signature= -> output:dtype; attr=value:tensor; attr=dtype:type>; NodeDef: import/cond/zeros_1/Const = Const[dtype=DT_INT32, value=Tensor<type: int32 shape: [] values: 0>](import/cond/Switch:1)
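
The rejected NodeDef points at the shape of the bug: a Const op takes no data inputs (its signature in the message is "signature= -> output:dtype"), so the only legal entries in a Const node's input list are control inputs prefixed with ^. Yet import/cond/Switch:1 is a plain data input; control inputs never carry a :output_index suffix. The offending nodes can be listed with a short inspection snippet (a sketch; it assumes graph_def is the GraphDef returned by optimize_for_inference):

for node in graph_def.node:
    if node.op == 'Const':
        # A Const op declares no inputs; anything other than a '^name'
        # control input makes import_graph_def reject the node.
        bad_inputs = [i for i in node.input if not i.startswith('^')]
        if bad_inputs:
            print(node.name, bad_inputs)  # e.g. cond/zeros_1/Const ['cond/Switch:1']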

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Reactions: 12
  • Comments: 19 (3 by maintainers)

Most upvoted comments

Facing a similar issue when using an LSTM in my model. Any leads so far?

ValueError: NodeDef expected inputs '' do not match 1 inputs specified; Op<name=Const; signature= -> output:dtype; attr=value:tensor; attr=dtype:type>; NodeDef: {{node lstm_1/while/add/y}} = Const[_output_shapes=[[]], dtype=DT_INT32, value=Tensor<type: int32 shape: [] values: 1>]

Using TensorFlow 1.14.0.

I met exactly the same problem, and it seems that the problem is caused by tf.cond.
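
A stripped-down way to test that hypothesis would be to run optimize_for_inference over a graph that contains nothing but a tf.cond. This is only a sketch (not verified against every TF version); if tf.cond is indeed the trigger, the final import should raise the same ValueError:

import tensorflow as tf
from tensorflow.python.tools import optimize_for_inference_lib

with tf.Graph().as_default() as graph:
    x = tf.placeholder(shape=[], dtype=tf.float32, name='x')
    # Constants created inside the branch lambdas pick up inputs from the
    # cond context; these are the candidates for getting mangled.
    y = tf.cond(x > 0.0, lambda: tf.constant(1.0), lambda: tf.constant(0.0))
    graph_def = graph.as_graph_def()

graph_def = optimize_for_inference_lib.optimize_for_inference(
    input_graph_def=graph_def,
    input_node_names=[x.op.name],
    output_node_names=[y.op.name],
    placeholder_type_enum=x.dtype.as_datatype_enum,
)

with tf.Graph().as_default():
    tf.import_graph_def(graph_def)  # expected to fail if tf.cond is the culprit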

I met the same problem when I use a .pb produced by quantize_graph.

Hi kasyoukin, I also met the same issue when using a .pb produced by quantize_graph. Did you find a solution?

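Until this is fixed upstream, one workaround that has circulated for this class of "NodeDef expected inputs '' do not match" error is to repair the frozen GraphDef before importing it: rewrite any stray data input on a Const node as a control input on the producing op. This is a sketch only; it assumes the substituted control dependency preserves the intended execution order, which should be checked for each graph:

def repair_const_inputs(graph_def):
    # Const nodes must not carry data inputs; turn each 'name:port' entry
    # into a '^name' control input so import_graph_def accepts the node.
    for node in graph_def.node:
        if node.op != 'Const':
            continue
        fixed = [i if i.startswith('^') else '^' + i.split(':')[0]
                 for i in node.input]
        del node.input[:]
        node.input.extend(fixed)
    return graph_def

After this rewrite, import_graph_def should no longer reject the Const nodes, but the repaired graph still needs to be validated numerically before it is used for inference.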