tensorflow: WARNING:tensorflow:AutoGraph could not transform and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
System information
- Have I written custom code: Yes
- OS Platform and Distribution: Windows 10
- TensorFlow installed from (source or binary): Anaconda
- TensorFlow version (use command below): 2.1.0
- Python version: 3.7.4
Describe the current behavior
I'm using TensorFlow from Anaconda, with Spyder. When I run my custom layer, it shows the warning below:
WARNING:tensorflow:AutoGraph could not transform <bound method GroupSoftmax.call of <__main__.GroupSoftmax object at 0x000002A957B843C8>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Describe the method I tried
I have already tried the solution that others suggested: pip install gast==0.2.2
I also re-installed all of the software (Anaconda, TensorFlow, Spyder).
However, these methods don't solve my problem.
Is there any other solution?
Standalone code to reproduce the issue
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.python.keras.utils import tf_utils

class GroupSoftmax(layers.Layer):
    def __init__(self, axis=-1, **kwargs):
        super(GroupSoftmax, self).__init__(**kwargs)
        self.supports_masking = True
        self.axis = axis

    def call(self, inputs):
        return tf.divide(inputs, tf.reduce_sum(inputs, axis=self.axis))

    def get_config(self):
        config = {'axis': self.axis}
        base_config = super(GroupSoftmax, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

    @tf_utils.shape_type_conversion
    def compute_output_shape(self, input_shape):
        return input_shape
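As a side note, tf.reduce_sum without keepdims=True drops the reduced axis, so the division in call only broadcasts cleanly when the normalized axis is the last one. A minimal NumPy sketch of the normalization the layer appears to intend (my reconstruction under that assumption, not the author's code):

```python
import numpy as np

# Toy batch: 2 samples, each an (11, 6) grid, matching the Reshape((11, 6)) output.
x = np.random.rand(2, 11, 6)

# Normalize along axis 1 so every column of each sample sums to 1.
# keepdims=True keeps the reduced axis as size 1, so broadcasting aligns
# each element with the sum of its own column.
totals = x.sum(axis=1, keepdims=True)   # shape (2, 1, 6)
normalized = x / totals

# Every column now sums to 1.
assert np.allclose(normalized.sum(axis=1), 1.0)
```

Without keepdims, totals would have shape (2, 6) and the division x / totals would fail to broadcast against (2, 11, 6).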
'''
-----------------network of g-----------------
'''
gModel = tf.keras.Sequential([
    # Add a fully connected layer with Nodes neurons; "input_shape" is the
    # dimensionality of the input this layer accepts, and "activation" is the
    # activation function the layer uses
    layers.Dense(Nodes, activation='sigmoid', input_shape=(60,), use_bias=False),  # packed data should be (3000, 10, 6)
    # Add the second layer
    layers.Dense(Nodes, activation='sigmoid', use_bias=False),
    # Add the third layer
    layers.Dense(Nodes, activation='sigmoid', use_bias=False),
    # Add the fourth layer
    layers.Dense(Nodes, activation='sigmoid', use_bias=False),
    # Add the fifth layer
    layers.Dense(Nodes, activation='sigmoid', use_bias=False),
    # Add the sixth layer, changing the number of nodes
    layers.Dense(66, activation='sigmoid', use_bias=False),
    # Add the seventh layer, changing the shape
    layers.Reshape((11, 6)),
    # Add the output layer: group softmax
    # layers.Dense(6, activation=layers.Softmax(axis=0), input_shape=(11, 6), use_bias=False),  # [11, 6]
    # layers.Softmax(axis=0)
    GroupSoftmax(axis=0)
])
gModel.summary()
gModel.summary()
Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 1
- Comments: 40 (2 by maintainers)
Closing as stale. Please reopen if you’d like to work on this further.
Sorry, for 2.2 please use tf.autograph.experimental.do_not_convert.
Yes, this problem still exists.
Hi everyone!
I've had the same warnings and suddenly found out that they are caused by Jupyter's magic commands. It was the %%time command in my case. These commands also cause bugs where TF is unable to retrieve the source code for some functions. I didn't do very thorough research, but as far as I understand, it happens because magic commands wrap the cell's code in some object. So when TensorFlow tries to get the source code with the inspect module, it fails, or at least has difficulties, because the magic command encapsulates the whole cell's code, making the contents of the cell inaccessible to inspect.
Thus, avoid using Jupyter's magic commands together with TensorFlow; they can cause this warning. The rule of thumb here is to avoid any decorating structures around @tf.function-s.
Thanks. There's no problem with Colab, but there's always a warning when using Spyder with Anaconda TensorFlow. It seems there's no influence on the results, but I'm not sure if it will influence the speed.
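The inspect failure mode described in the Jupyter comment can be sketched with the standard library alone: when a function's source is not backed by a real file (here simulated by compiling it from a string, a hypothetical stand-in for a magic-wrapped cell), inspect.getsource raises OSError, which is the situation that makes AutoGraph fall back to running the function as-is.

```python
import inspect

# Compile a function from a string, so it has no file-backed source,
# roughly analogous to a cell wrapped by a magic command.
ns = {}
exec(compile("def hidden_fn():\n    return 2", "<string>", "exec"), ns)

try:
    inspect.getsource(ns["hidden_fn"])
    source_found = True
except OSError:
    source_found = False

# inspect cannot recover the source code of the string-compiled function.
assert source_found is False
```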
You can safely ignore the warning log, as it's intended for debug logging of AutoGraph issues. Thanks!
Hi, we have solved the problem by setting the right versions of the packages. Hope this info helps!
All the best, Katica
On Sun, Aug 15, 2021, 11:46 jda5 wrote:
I recommend StackOverflow. If you are certain that xception should be as fast as efficientnet, and can reproduce it with a simple example, consider filing a separate GitHub issue as well.
More info: https://www.tensorflow.org/community
@KaticaR thank you for the logs; they seem to indicate an incompatibility with the toolchain, though it's unclear which piece. At any rate, the faulting piece was refactored recently. If you have the chance, please retry with tf-nightly.