tensorflow: WARNING:tensorflow:AutoGraph could not transform and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.

System information

  • Have I written custom code: Yes
  • OS Platform and Distribution: Windows 10
  • TensorFlow installed from (source or binary): Anaconda
  • TensorFlow version (use command below): 2.1.0
  • Python version: 3.7.4

Describe the current behavior I'm using the Anaconda TensorFlow distribution with Spyder. When running my custom layer, it shows the warning below:

WARNING:tensorflow:AutoGraph could not transform <bound method GroupSoftmax.call of <__main__.GroupSoftmax object at 0x000002A957B843C8>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: 
WARNING: AutoGraph could not transform <bound method GroupSoftmax.call of <__main__.GroupSoftmax object at 0x000002A957B843C8>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.

Describe the method I tried I have already tried the solution others provided (`pip install gast==0.2.2`), and I also re-installed all of the software (Anaconda, TensorFlow, Spyder). However, these methods don't solve my problem.
Is there any other solution?

Standalone code to reproduce the issue

# Imports needed to make the snippet self-contained (tf_utils path as of TF 2.1)
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.python.keras.utils import tf_utils

class GroupSoftmax(layers.Layer):
    def __init__(self, axis=-1, **kwargs):
        super(GroupSoftmax, self).__init__(**kwargs)
        self.supports_masking = True
        self.axis = axis

    def call(self, inputs):
        return tf.divide(inputs, tf.reduce_sum(inputs, axis=self.axis))

    def get_config(self):
        config = {'axis': self.axis}
        base_config = super(GroupSoftmax, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))
    
    @tf_utils.shape_type_conversion
    def compute_output_shape(self, input_shape):
        return input_shape

'''
-----------------network of g-----------------
'''
gModel = tf.keras.Sequential([
    # Fully connected layer with `Nodes` neurons; `input_shape` is the shape of the
    # input this layer accepts, `activation` the activation function it uses
    layers.Dense(Nodes, activation='sigmoid', input_shape=(60,), use_bias=False),  # packed data should be (3000, 10, 6)
    # Second layer
    layers.Dense(Nodes, activation='sigmoid', use_bias=False),
    # Third layer
    layers.Dense(Nodes, activation='sigmoid', use_bias=False),
    # Fourth layer
    layers.Dense(Nodes, activation='sigmoid', use_bias=False),
    # Fifth layer
    layers.Dense(Nodes, activation='sigmoid', use_bias=False),
    # Sixth layer: change the number of nodes
    layers.Dense(66, activation='sigmoid', use_bias=False),
    # Seventh layer: change the shape
    layers.Reshape((11, 6)),
    # Output layer: grouped softmax
    # layers.Dense(6, activation=layers.Softmax(axis=0), input_shape=(11, 6), use_bias=False),  # [11, 6]
    # layers.Softmax(axis=0)
    GroupSoftmax(axis=0)
])

gModel.summary()   
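For reference, the custom layer above only divides by the sum along one axis. A minimal NumPy sketch of the same normalization (my own illustration, not from the report; note that `keepdims=True` makes the division broadcast safely for any axis, not just the first or last):

```python
import numpy as np

# Stand-in for what GroupSoftmax.call computes: scale the tensor so that
# entries along `axis` sum to 1.
def group_normalize(x, axis=0):
    # keepdims=True keeps the reduced axis as size 1 so broadcasting
    # works regardless of which axis is reduced.
    return x / x.sum(axis=axis, keepdims=True)

x = np.arange(1.0, 7.0).reshape(2, 3)
y = group_normalize(x, axis=0)
# Each column of y now sums to 1.
```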

Other info / logs Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Reactions: 1
  • Comments: 40 (2 by maintainers)

Most upvoted comments

Closing as stale. Please reopen if you’d like to work on this further.

Sorry, for 2.2 please use tf.autograph.experimental.do_not_convert.

It has been 14 days with no activity and the awaiting response label was assigned. Is this still an issue?

Yes, this problem still exists.

Hi everyone!

I've had the same warnings and found out that they are caused by Jupyter's magic commands; it was the %%time command in my case. These commands also cause bugs where TF is unable to retrieve the source code for some functions.

I haven't researched this very thoroughly, but as far as I understand, the magic commands wrap the cell's code in some object. So when TensorFlow tries to get the source code with the inspect module, it fails, or at least has difficulties, because the magic command encapsulates the whole cell, making its contents inaccessible to inspect.

Thus, avoid using Jupyter's magic commands alongside TensorFlow; they can cause this warning. A good rule of thumb is also to avoid any decorating structures over @tf.function-s.
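The failure mode described above can be reproduced with the standard library alone. This illustrative sketch (my own, not from the thread) shows that a function compiled from a string, like a cell re-wrapped by a magic command, has no retrievable source, which is exactly the lookup AutoGraph performs:

```python
import inspect

# Simulate code whose source only ever existed in memory (similar to what
# happens when a notebook magic re-wraps a cell before executing it).
namespace = {}
exec("def hidden(x):\n    return x + 1", namespace)

try:
    inspect.getsource(namespace["hidden"])
    source_available = True
except OSError:
    # inspect cannot find a file behind '<string>', so a tool like AutoGraph
    # would fall back to running the function as-is and emit the warning.
    source_available = False
```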

@nsssss

I have tried this in Colab with TF 2.1.0 and I am not seeing any issue. Please find the gist here. I have made a few assumptions in the code while executing. If you feel there is an issue, please update the attached Colab and help me reproduce the issue; it helps me localize the issue faster. Thanks!

Thanks. There's no problem with Colab, but there's always a warning when using Spyder with Anaconda TensorFlow. It seems to have no influence on the results, but I'm not sure whether it affects speed.

You can safely ignore the warning log, as it is intended for debug logging of AutoGraph issues. Thanks!
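For anyone who does want the debug detail the warning refers to, the verbosity can also be raised from Python. A sketch (`tf.autograph.set_verbosity` is the programmatic counterpart of the `AUTOGRAPH_VERBOSITY` environment variable; exact behaviour may vary across TF versions):

```python
import tensorflow as tf

# Equivalent to `export AUTOGRAPH_VERBOSITY=10` before launching Python;
# alsologtostdout=True mirrors the debug output to stdout as well.
tf.autograph.set_verbosity(10, alsologtostdout=True)
```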

Hi, we have solved the problem by setting the right versions of the packages. Hope this info helps!

All the best, Katica

On Sun, Aug 15, 2021, 11:46 jda5 @.***> wrote:

I have this same issue. My error logs:

WARNING:tensorflow:AutoGraph could not transform <function Model.make_predict_function.<locals>.predict_function at 0x13ffac310> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: unsupported operand type(s) for -: 'NoneType' and 'int'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
2021-08-15 10:38:47.649790: I tensorflow/compiler/tf2mlcompute/kernels/mlc_subgraph_op.cc:326] Compute: Failed in processing TensorFlow graph sequential/MLCSubgraphOp_2_0 with frame_id = 0 and iter_id = 0 with error: Internal: ExecuteMLCInferenceGraph: Failed to execute MLC inference graph. (error will be reported 5 times unless TF_MLC_LOGGING=1).
2021-08-15 10:38:47.652117: F tensorflow/core/framework/op_kernel.cc:983] Check failed: outputs_[index].tensor == nullptr (0x13fd05cb0 vs. nullptr)
zsh: abort      python test.py

Code to reproduce:

import numpy as np
from tensorflow.keras.datasets import fashion_mnist
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.losses import SparseCategoricalCrossentropy

(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()

X_train = X_train / 255
X_test = X_test / 255

model = Sequential([
    Flatten(input_shape=X_train.shape[1:]),
    Dense(30, activation='relu'),
    Dense(30, activation='relu'),
    Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss=SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(X_train, y_train, epochs=30)

y_pred = model.predict(X_test)  # <-- Error occurs here
print(np.argmax(y_pred[0]))

I am on a MacBook Air (M1, 2020) running macOS Big Sur (11.4).


I recommend StackOverflow. If you are certain that xception should be as fast as efficientnet, and can reproduce it with a simple example, consider filing a separate GitHub issue as well.

More info: https://www.tensorflow.org/community

@KaticaR thank you for the logs, they seem to indicate an incompatibility with the toolchain, though it’s unclear which piece. At any rate, the faulting piece was refactored recently. If you have the chance, please retry with tf-nightly.