tensorflow: Keras Backend ones_like with Lambda is not serializable

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Databricks Runtime 7.3
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): 2.3
  • Python version: 3
  • CUDA/cuDNN version: 10.1
  • GPU model and memory: AWS p3.xlarge

Describe the current behavior Wrapping tf.keras.backend.ones_like in a tf.keras.layers.Lambda produces a model that fails to serialize.

The following code creates the model that fails to serialize:

from tensorflow import keras
from tensorflow.keras import backend as K

x = keras.Input(shape=(1,), name="x")
ones_like_layer = keras.layers.Lambda(K.ones_like, name="ones_like")
ones_like_layer(x)
logits = keras.layers.Dense(1, activation="sigmoid")

model = keras.Sequential([x, ones_like_layer, logits], name="ones_like_model")
keras.models.model_from_json(model.to_json())  # serialization round-trip fails here

Errors:

TypeError                                 Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend.py in wrapper(*args, **kwargs)
    200     try:
--> 201       return target(*args, **kwargs)
    202     except (TypeError, ValueError):
TypeError: 'str' object is not callable
During handling of the above exception, another exception occurred:
TypeError                                 Traceback (most recent call last)
11 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend.py in wrapper(*args, **kwargs)
    203       # Note: convert_to_eager_tensor currently raises a ValueError, not a
    204       # TypeError, when given unexpected types.  So we need to catch both.
--> 205       result = dispatch(wrapper, args, kwargs)
    206       if result is not OpDispatcher.NOT_SUPPORTED:
    207         return result
TypeError: 'module' object is not callable

This happens on both TF 2.3 and TF Nightly. See https://colab.research.google.com/drive/1ih41e5b6jw_3iSm9pKSOcW_Kf5y8ktU5?usp=sharing.

Describe the expected behavior The model should be serializable.

Standalone code to reproduce the issue See above.

Other info / logs Workaround: instead of using the Lambda, call ones_like directly on the tensor (see the sketch below). This works but makes the model less interpretable, and it requires using the Functional API.
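A minimal sketch of that workaround with the Functional API, assuming TF 2.x's automatic wrapping of ops applied to symbolic Keras tensors (the tensor and layer names here are illustrative):

from tensorflow import keras
from tensorflow.keras import backend as K

x = keras.Input(shape=(1,), name="x")
ones = K.ones_like(x)  # called directly on the tensor; no explicit Lambda layer
logits = keras.layers.Dense(1, activation="sigmoid")(ones)

model = keras.Model(inputs=x, outputs=logits, name="ones_like_model")
keras.models.model_from_json(model.to_json())  # round-trips without the TypeError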

See also: https://github.com/tensorflow/tensorflow/issues/41244#issuecomment-698918718

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 17 (11 by maintainers)

Most upvoted comments

Hi @jeisinge! I just made a few changes to the Lambda layer according to this thread, as below:

ones_like_layer = keras.layers.Lambda(lambda x: K.ones_like(x), name="ones_like")

and I was able to resolve this issue in 2.7. Attaching a Gist for reference. Thanks!

I think that you need to use something like:

import tensorflow as tf
from tensorflow import keras
x = keras.layers.Input(shape=(1,), name="input_layer")
ones_like_layer = keras.layers.Lambda(lambda x: tf.ones_like(x), name="ones_like")
ones_like_layer(x)
logits = keras.layers.Dense(1, activation="sigmoid")
model = keras.Sequential([x, ones_like_layer, logits], name="ones_like_model")
keras.models.model_from_json(model.to_json())

Also from the documentation:

Variables: While it is possible to use Variables with Lambda layers, this practice is discouraged as it can easily lead to bugs. For instance, consider the following layer:

scale = tf.Variable(1.)
scale_layer = tf.keras.layers.Lambda(lambda x: x * scale)

Because scale_layer does not directly track the scale variable, it will not appear in scale_layer.trainable_weights and will therefore not be trained if scale_layer is used in a Model. A better pattern is to write a subclassed Layer:

class ScaleLayer(tf.keras.layers.Layer):
  def __init__(self):
    super(ScaleLayer, self).__init__()
    self.scale = tf.Variable(1.)

  def call(self, inputs):
    return inputs * self.scale

In general, Lambda layers can be convenient for simple stateless computation, but anything more complex should use a subclass Layer instead.
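A quick way to see the difference the docs describe, as a sketch (assuming TF 2.x eager mode):

import tensorflow as tf

scale = tf.Variable(1.)
scale_layer = tf.keras.layers.Lambda(lambda x: x * scale)
scale_layer(tf.ones((1, 1)))
print(scale_layer.trainable_weights)  # [] -- the closed-over Variable is not tracked

class ScaleLayer(tf.keras.layers.Layer):
  def __init__(self):
    super(ScaleLayer, self).__init__()
    self.scale = tf.Variable(1.)

  def call(self, inputs):
    return inputs * self.scale

layer = ScaleLayer()
layer(tf.ones((1, 1)))
print(layer.trainable_weights)  # [<tf.Variable ...>] -- the Variable is tracked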