tensorflow: TF SavedModel AssertionError (Called a function referencing variables which have been deleted)

I am facing an issue: when I return the object loaded by tf.saved_model.load from inside a function and then try to use its signature, it does not work.

I have a file sample.py:

#### sample.py

import tensorflow as tf

def load_model(model_dir):
    # Load the SavedModel and return only its serving signature
    loaded = tf.saved_model.load(model_dir)
    model = loaded.signatures['serving_default']
    print("Model Loaded")
    return model

When I execute main.py:

from sample import load_model

model_dir = 'some path of a saved model'
model1 = load_model(model_dir)

If I print model1.variables, I get the following error:

AssertionError: Called a function referencing variables which have been deleted. This likely means that function-local variables were created and not referenced elsewhere in the program. This is generally a mistake; consider storing variables in an object attribute on first call.

But if I load the model with the same code directly in main.py, without going through the function, it works fine:

#### main.py
loaded = tf.saved_model.load(model_dir)
model = loaded.signatures['serving_default']

If I print model.variables, it works as expected.
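The failure mode can be sketched in plain Python (an analogy only, not TF's actual internals; the `Loaded`, `Variable`, and `get_signature` names here are made up for illustration): the signature captures the variables only weakly, while the loaded object owns the strong references, so once the loaded object goes out of scope and is garbage collected the variables vanish.

```python
import weakref

class Variable:
    pass

class Loaded:
    """Stands in for the object returned by tf.saved_model.load:
    it owns the strong references to the variables."""
    def __init__(self):
        self.variables = [Variable()]

def get_signature(loaded):
    # The "signature" captures the variables only weakly, roughly
    # like the ConcreteFunction in loaded.signatures[...] does.
    refs = [weakref.ref(v) for v in loaded.variables]
    def signature():
        variables = [r() for r in refs]
        assert all(v is not None for v in variables), \
            "Called a function referencing variables which have been deleted."
        return variables
    return signature

def load_model():
    loaded = Loaded()
    return get_signature(loaded)   # `loaded` dies when this frame exits

sig = load_model()
try:
    sig()
except AssertionError as e:
    print("reproduced:", e)
```

Calling `sig()` fails because nothing kept `loaded` alive after `load_model` returned, which mirrors returning only the signature from the function.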

About this issue

  • State: open
  • Created 4 years ago
  • Reactions: 5
  • Comments: 26 (9 by maintainers)

Most upvoted comments

It's so sad that no one has given much attention to this; it's such a serious bug.

We recommend applying the workaround proposed in the first post (keeping a reference to the object returned by tf.saved_model.load):

loaded = tf.saved_model.load(model_dir)
model = loaded.signatures['serving_default']

We are trying to make it so that functions (e.g. loaded.signatures['serving_default']) can store strong references to variables that they capture, but currently it is not possible. So right now if the loaded object gets garbage collected, all the variables are deleted as well, leading to the AssertionError errors reported.
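The error message itself suggests "storing variables in an object attribute on first call". One way to apply that advice (a sketch only; `ServingModel` and its attribute names are illustrative, not a TF API) is a small wrapper that owns both the loaded object and the signature, so the signature can never outlive its variables:

```python
class ServingModel:
    """Holds a strong reference to the loaded SavedModel so the
    signature's variables are never garbage collected."""
    def __init__(self, loaded):
        self._loaded = loaded  # strong reference keeps the variables alive
        self.signature = loaded.signatures['serving_default']

    def __call__(self, **inputs):
        return self.signature(**inputs)
```

Usage would look like `model = ServingModel(tf.saved_model.load(model_dir))`; the wrapper instance can safely be returned from a function because it carries the loaded object with it.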

Facing this as well - it looks like the original trackable object is released by the Python garbage collector once it goes out of scope, and the signature returned by the function does not maintain a back-reference to the original loaded object.

A quick workaround to avoid this, at the possible expense of creating a circular reference and/or leaking memory:

def load_model_safely(path_to_saved_model):
    saved_model = tf.saved_model.load(path_to_saved_model)
    model = saved_model.signatures["serving_default"]
    # Keep the loaded object alive by attaching it to the signature
    model._backref_to_saved_model = saved_model
    return model
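Why the `_backref_to_saved_model` trick works can be checked in plain Python (a sketch; the names are illustrative): attaching the owner to the returned function object gives it a strong reference, so the owner survives exactly as long as the function does.

```python
import gc
import weakref

class Owner:
    pass

def make_fn_with_backref():
    owner = Owner()
    def fn():
        return "ok"
    fn._backref = owner      # strong reference: owner lives as long as fn
    return fn, weakref.ref(owner)

fn, owner_ref = make_fn_with_backref()
gc.collect()
print(owner_ref() is not None)  # True: the backref kept the owner alive
```

Once `fn` itself is dropped, the owner is collected too, which is why this trades correctness for the possible circular reference mentioned above.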

I was able to replicate the issue on Colab using the current stable version of tensorflow and tf-nightly-2.13.0.dev20230209. Please check the gists - tf-nightly and TF v2.11. Thank you!

I would suggest a bypass solution, since it seems this will take a while to fix.

Do not return the signature from load_model; return the loaded model itself. Then get the signature at the point where inference is actually executed, e.g. in main.py in @s4sarath's case.

def load_model(path):
    # Return the whole loaded object so its variables stay alive
    saved_model = tf.saved_model.load(path)
    return saved_model
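A self-contained sketch of this bypass (the `Doubler` module and the temporary export directory are illustrative; it assumes the standard tf.saved_model save/load API): build a tiny SavedModel, load it through a function that returns the whole object, and only grab the signature at the call site.

```python
import tempfile
import tensorflow as tf

class Doubler(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.float32)])
    def __call__(self, x):
        return {'y': x * 2.0}

def load_model(path):
    # Return the whole loaded object, not just a signature
    return tf.saved_model.load(path)

export_dir = tempfile.mkdtemp()
module = Doubler()
tf.saved_model.save(module, export_dir, signatures=module.__call__)

model = load_model(export_dir)                 # keep this reference around
infer = model.signatures['serving_default']    # take the signature at the call site
out = infer(x=tf.constant(3.0))
print(float(out['y']))  # 6.0
```

Because `model` stays in scope for as long as `infer` is used, the variables behind the signature are never garbage collected.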

Do not worry. Try pip install tf-transformers. Faster and complete serialisation support. GitHub is on the way.


TensorFlow is getting more disappointing day by day. It's been so many months.

Has anyone managed to fix this?

I am able to replicate the issue; please find the gist here.