keras: " is not an element of this graph." when loading model.
Update:
I am using the model which I am loading below just to generate features for another model. Could it be that the problem is because I am trying to load multiple models in one session?
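If that is the cause, one commonly suggested pattern — sketched below with placeholder paths and a made-up input shape — is to give each model its own graph and session, and to re-enter the matching pair at prediction time:

import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.models import load_model

# Load each model into its own graph and session.
graph_a = tf.Graph()
with graph_a.as_default():
    session_a = tf.Session(graph=graph_a)
    K.set_session(session_a)
    feature_model = load_model('feature_model.h5')  # placeholder path

graph_b = tf.Graph()
with graph_b.as_default():
    session_b = tf.Session(graph=graph_b)
    K.set_session(session_b)
    main_model = load_model('main_model.h5')  # placeholder path

# At prediction time, re-activate the graph/session the model lives in.
with graph_a.as_default():
    K.set_session(session_a)
    features = feature_model.predict(np.zeros((1, 100)))  # placeholder input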
When I load the model in word2vec_25_tar-category.h5.zip using load_model in one of my modules, I get:
...
control_dependencies
    c = self.as_graph_element(c)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2318, in as_graph_element
    return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2397, in _as_graph_element_locked
    raise ValueError("Tensor %s is not an element of this graph." % obj)
ValueError: Tensor Tensor("Softmax:0", shape=(?, 13), dtype=float32) is not an element of this graph.

Traceback (most recent call last):
  File "/media/Data/workspaces/git/master-thesis/python/thesis/absa/slot1/constrained/ffn.py", line 382, in <module>
    verbose=True)
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 1532, in fit_generator
    str(generator_output))
ValueError: output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: None
However, I do not get this in all modules.
This is how I load it:
load_model(model_path,
           custom_objects={
               'dense_sum': dense_sum,
               'merge_average': merge_average
           })
These are the custom objects:
def dense_sum(x):
    from keras import backend as K
    return K.sum(x)

def merge_average(x):
    from keras import backend as K
    return x[0] / K.sum(x[1])
Why can I use the model in one module but not in another? I am at a loss because I don't see what the issue is; I am definitely loading the same file.
What's wrong here?
I had a problem similar to that of @piraka9011, which was solved by calling
model._make_predict_function() right after loading the trained model. If this does not work, check out #2397: change the backend to Theano, then change it back once the issue is resolved.
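A minimal sketch of that workaround, reusing model_path and the custom objects from the question:

from keras.models import load_model

model = load_model(model_path,
                   custom_objects={'dense_sum': dense_sum,
                                   'merge_average': merge_average})
# Build Keras' internal predict function eagerly, while the graph
# the model was loaded into is still the default graph.
model._make_predict_function()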
I'm having a similar issue (I think), but it occurs when I call the
predict() method on a model. I get a raise ValueError ... is not an element of this graph. Any suggestions?
I had the same environment: Flask + TensorFlow + Keras (dev env: Windows 10 Pro). The error log is

ValueError: Tensor Tensor("avg_pool/AvgPool:0", shape=(?, 1, 1, 2048), dtype=float32) is not an element of this graph.

This worked for me:

global graph
graph = tf.get_default_graph()

While predicting, use the same graph:

with graph.as_default():
    feature = resnet_vgg.predict(npImg)

Thanks to @anujgupta82.
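Pulled together, that workaround looks roughly like this (a sketch only; the model path and helper name are illustrative):

import tensorflow as tf
from keras.models import load_model

model = load_model('resnet_vgg.h5')  # illustrative path
# Capture the graph the model was loaded into...
graph = tf.get_default_graph()

def extract_features(np_img):
    # ...and re-enter it for every prediction, e.g. inside a Flask view
    # that may run on a different thread than the one that loaded the model.
    with graph.as_default():
        return model.predict(np_img)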
In my case, I have a Flask endpoint that passes a key to a process_algorithm method. The process_algorithm method loads a pre-trained Keras model each time and then makes a prediction. The first time process_algorithm is called it works fine, but the next time there is this error:

Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder:0", shape=(196, 128), dtype=float32) is not an element of this graph

at load_model(). Any way to solve this?

Calling model._make_predict_function() worked for me.

@wqp89324 I had the same issue. Calling keras.backend.clear_session() each time before loading a model solved the problem.
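A sketch of the clear_session() variant (the path is a placeholder):

from keras import backend as K
from keras.models import load_model

def load_fresh(path):
    # Discard the graph left over from the previous load, so the new
    # model's tensors and the default graph stay in sync.
    K.clear_session()
    return load_model(path)

model = load_fresh('my_model.h5')  # placeholder path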
@yashmehta14 I was having a similar problem. I fixed it by adding a function which can be called later.
I faced the same issue when passing an instance of a loaded model to another thread which was doing the predicting. I changed my setup so the model is loaded in the same thread as the one doing the prediction. I am not sure this solves anyone else's problem, but I hope it helps someone.
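A sketch of that arrangement (the path and input shape are made up):

import threading
import numpy as np
from keras.models import load_model

def predict_worker(model_path, batch, results):
    # Load and predict in the same thread, so both steps see
    # the same default graph.
    model = load_model(model_path)
    results.append(model.predict(batch))

results = []
batch = np.zeros((1, 10))  # made-up input shape
t = threading.Thread(target=predict_worker,
                     args=('model.h5', batch, results))  # made-up path
t.start()
t.join()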
I solved it! I made a function that runs without the app.route() decorator, loaded the model inside that function, and then called the function whenever I needed a prediction.
Just got this error all of a sudden (it was working fine earlier today)… model._make_predict_function() worked for me as well to solve it, but weird!

I am also doing the same thing shown by you, but I am still getting the same error… The first time I get the output, but when I click the link a second time it gives the error… Can you help me please?
@wqp89324 try calling keras.backend.clear_session() on each request
Solved it by:

Instead of:

app.run(debug=False)

Use:

if __name__ == '__main__':
    app.run(debug=False, threaded=False)
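(Presumably this helps because with threaded=False Flask serves every request from the single thread that loaded the model, so load and predict always share the same default graph.)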
I had the same issue while using Flask + Keras:

global graph
graph = tf.get_default_graph()

with graph.as_default():
    res = model.predict(x)  # x: the input batch

This worked for me. Thanks, @anujgupta82
@joaospinto Thank you so much! You saved my life and dozens of hours… ❤️
model._make_predict_function() works well.

@Spidy20 I don't think that is a good solution, because it would require the model to be reloaded every time the predict API is called.
For the record, I ran into an issue with model._make_predict_function() and was able to solve it following this Stack Overflow post.

If it works within the REPL but not Flask, try running the Flask app with gunicorn and set the mode to sync. Mode gevent will get the error.

@gustavz I have the same problem, could you solve it?
Thanks to @Mrjaggu, @anujgupta82, @GerardWalsh and others! I spent at least a day trying to find how to correct the error. Here's the final result:

def get_model():
    global graph
    graph = tf.get_default_graph()
    with graph.as_default():
        global model
        model = load_model('model_WheelPrediction.h5')

Then in the app.route:

@app.route("/predict", methods=["POST"])
def predict():
    with graph.as_default():
        message = request.get_json(force=True)
        encoded = message['image']
        decoded = base64.b64decode(encoded)
        image = Image.open(io.BytesIO(decoded))
        processed_image = preprocess_image(image, target_size=(224, 224))
        prediction = model.predict(processed_image).tolist()
Loading the model and predicting works the first time but fails the second time; you can do it like this:

global model_similar_v2
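A sketch of what that load-once pattern might look like (the file name is made up; model_similar_v2 is the commenter's variable):

from keras.models import load_model

model_similar_v2 = None

def get_model():
    # Load the model once and reuse it for every later prediction,
    # instead of reloading (and rebuilding the graph) per request.
    global model_similar_v2
    if model_similar_v2 is None:
        model_similar_v2 = load_model('model_similar_v2.h5')
        model_similar_v2._make_predict_function()
    return model_similar_v2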
Changing the backend to Theano worked for me.