keras: Loading saved model fails with ValueError You are trying to load a weight file containing 1 layers into a model with 0 layers
This toy example
import sys
import keras
from keras import Sequential
from keras.activations import linear
from keras.engine import InputLayer
from keras.layers import Dense
from keras.losses import mean_squared_error
from keras.metrics import mean_absolute_error
from keras.models import load_model
from keras.optimizers import sgd
print("Python version: " + sys.version)
print("Keras version: " + keras.__version__)
model = Sequential()
model.add(InputLayer(batch_input_shape=(1, 5)))
model.add(Dense(10, activation=linear))
model.compile(loss=mean_squared_error, optimizer=sgd(), metrics=[mean_absolute_error])
model.save('test.h5')
del model
load_model('test.h5')
gives the following output/error
Using TensorFlow backend.
Python version: 3.6.5 (default, Apr 25 2018, 14:23:58)
[GCC 4.2.1 Compatible Apple LLVM 9.1.0 (clang-902.0.39.1)]
Keras version: 2.2.0
2018-06-13 12:02:50.570395: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Traceback (most recent call last):
File "/Users/samb/IdeaProjects/connect-four-challenge-client-python3/test.py", line 22, in <module>
load_model('test.h5')
File "/Users/samb/IdeaProjects/connect-four-challenge-client-python3/venv/lib/python3.6/site-packages/keras/engine/saving.py", line 264, in load_model
load_weights_from_hdf5_group(f['model_weights'], model.layers)
File "/Users/samb/IdeaProjects/connect-four-challenge-client-python3/venv/lib/python3.6/site-packages/keras/engine/saving.py", line 901, in load_weights_from_hdf5_group
str(len(filtered_layers)) + ' layers.')
ValueError: You are trying to load a weight file containing 1 layers into a model with 0 layers.
Looking at https://github.com/keras-team/keras/blob/2.2.0/keras/engine/saving.py#L883 when debugging, I see that in
filtered_layers = []
for layer in layers:
    weights = layer.weights
    if weights:
        filtered_layers.append(layer)
the value of weights is always the empty list [],
whereas in the subsequent block
layer_names = filtered_layer_names
if len(layer_names) != len(filtered_layers):
    raise ValueError('You are trying to load a weight file '
                     'containing ' + str(len(layer_names)) +
                     ' layers into a model with ' +
                     str(len(filtered_layers)) + ' layers.')
the value of layer_names (respectively, filtered_layer_names) is the singleton list ['dense_1'], leading to the error message shown above.
I’m not quite certain what the cause of the problem is. Is something going wrong in saving the model? Or is something wrong when loading the model (before loading the weights)? Or is something wrong in this logic for loading the weights?
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Reactions: 35
- Comments: 69
Commits related to this issue
- Circumvent model (de)serialization bug in Keras Newer versions of Keras suffer from an inability to serialize a Sequential model with an InputLayer properly (see https://github.com/keras-team/keras/i... — committed to krikru/gui-mnist by krikru 5 years ago
- Include InputLayer for model serialization For the serialization of the model architecture the InputLayer is actually important if it is the only place where the user has defined the input shape. So ... — committed to joelbu/keras by joelbu 5 years ago
Closing as this is resolved
I see a workaround, but not a fix. It would be better to re-open the issue.
I consider this a high priority bug. Why has the issue been closed?
The following code snippet isolates the error. It seems the problem happens when InputLayer is used. model1 saves and loads fine but model2 (the same single-layer model) fails. The only difference: model2 uses InputLayer.
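The snippet itself is not reproduced in this thread; a minimal sketch consistent with the description might look like the following (layer sizes, shapes, and file names are placeholders, not from the original comment):

# Hypothetical reconstruction of the isolating snippet described above.
from keras import Sequential
from keras.engine import InputLayer
from keras.layers import Conv2D
from keras.models import load_model

# model1: input shape given on the conv layer itself -- saves and reloads fine.
model1 = Sequential()
model1.add(Conv2D(4, (3, 3), input_shape=(64, 64, 3)))
model1.compile(loss='mse', optimizer='sgd')
model1.save('model1.h5')
load_model('model1.h5')  # works

# model2: the same single conv layer, but the shape comes from an explicit InputLayer.
model2 = Sequential()
model2.add(InputLayer(input_shape=(64, 64, 3)))
model2.add(Conv2D(4, (3, 3)))
model2.compile(loss='mse', optimizer='sgd')
model2.save('model2.h5')
load_model('model2.h5')  # raises the ValueError on Keras 2.2.x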
I’ve got the same problem after I updated Keras from 2.0.8 to 2.2. I can still load my old models, but not the newly created ones. I can also reproduce this error using your example code after changing line 22 to
keras.models.load_model('test.h5')
I hope someone can help.
edit: Saving to two files, as JSON and weights, didn’t help either.
An alternative. The reason for this error: some Python interpreters demand a "skeleton" of the neural network before loading another neural network into it. So, in my loading file, I created exactly the same neural network and compiled it. But in the model.fit() call I passed epochs=0, and then used the load_weights() function with the required .h5 file as its argument. Thus our model is compiled but not trained, as we load the weights directly from the already trained model.
# Solving the layers mismatch issue
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten())
# Hidden layers
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu, input_dim=784))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu, input_dim=784))
# Output layer
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))
# Model architecture created. Now, using the model:
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# "Training" the model with epochs=0, then loading the saved weights:
model.fit(x_train, y_train, epochs=0)
model.load_weights('NumberRecognitionModelWeights.h5')
I managed to narrow down the problem; it seems to boil down to whether the input_shape parameter is used or not. The following code does not give the ValueError. The only difference from the code above is that the dense layer now has the additional parameter input_shape set.
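The commenter's code is not included in this thread; a minimal sketch consistent with the description (the toy example from the top of the issue, with input_shape added to the Dense layer; loss and metric strings are shorthand) might be:

# Hypothetical sketch of the working variant: input_shape set on the Dense layer.
from keras import Sequential
from keras.layers import Dense
from keras.models import load_model

model = Sequential()
model.add(Dense(10, activation='linear', input_shape=(5,)))  # input_shape set here
model.compile(loss='mse', optimizer='sgd', metrics=['mae'])
model.save('test.h5')
del model
load_model('test.h5')  # no ValueError in this variant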
I really dislike when a 30-or-more-comments issue gets closed with a mere "closed as resolved". It does not provide any context, or any info on how, by whom, and why the issue is considered solved.
A summary of the workarounds (they cannot be called solutions) I found in the thread:
- InputLayer: avoid it, or repeat the input_shape on the first real layer as well
In case you spent your night training a model and you're pissed that you can't retrieve that model, a ~~solution~~ workaround I found helpful is:
I solved this so easily. Just call the .build() method with the input_shape parameter.
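A minimal sketch of that suggestion, using tf.keras and a placeholder shape of (None, 5) (both are assumptions, not from the comment):

# Hypothetical sketch: build the model explicitly before saving, so that the
# input shape (and hence the weights) exist and get serialized.
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation='linear')])
model.build(input_shape=(None, 5))   # placeholder shape
model.compile(loss='mse', optimizer='sgd')
model.save('test.h5')
reloaded = tf.keras.models.load_model('test.h5')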
It still happens to me in version 2.3.0.
I suggest this thread be renamed “Models with InputLayer are not serialized to HDF5 correctly”. Below is a demonstration of the issue and a hack to fix existing saved models.
keras.__version__= 2.2.4
The test1 file has the additional structure: "batch_input_shape": [null, 64, 64], "dtype": "float32".
You can fix this using:
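The original fix snippet is not reproduced in this thread; a sketch of the kind of HDF5 patch being described might be the following. The file name, target shape, and exact JSON path are assumptions that depend on your model and Keras version (see the later comment about ['config']['layers'][0]['config'] versus ['config'][0]['config']).

# Hypothetical sketch: patch the model_config stored in an existing .h5 file so
# that the first layer carries a batch_input_shape again.
import json
import h5py

path = 'test.h5'  # placeholder file name
with h5py.File(path, 'r+') as f:
    config = json.loads(f.attrs['model_config'])
    # Keras 2.2.4+ layout; older files use config['config'][0]['config'] instead.
    first_layer_cfg = config['config']['layers'][0]['config']
    first_layer_cfg['batch_input_shape'] = [None, 64, 64]  # shape from the example above
    f.attrs['model_config'] = json.dumps(config).encode('utf-8')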
I had the same problem when I tried to fine-tune a vgg16 model. It happened when I upgraded from keras 2.1.6 to 2.2.0. The solution proposed above didn’t work for me 😦 and the only way I found was to downgrade keras to the previous version (2.1.6).
The problem appears to be that Sequential.get_config() references self.layers rather than self._layers, so no input shape gets saved (no matter whether the InputLayer was added implicitly or explicitly). The delayed-build pattern is then used when loading, which does not create any weights, so it looks like the model has no layers with weights.
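An illustrative sketch of that observation, assuming the Keras 2.2.x Sequential internals described above (_layers is a private attribute and may differ in other versions):

# The public `layers` property hides the InputLayer, so a config built from it
# loses the input shape; the private `_layers` list still contains it.
from keras import Sequential
from keras.engine import InputLayer
from keras.layers import Dense

model = Sequential()
model.add(InputLayer(batch_input_shape=(1, 5)))
model.add(Dense(10))

print(len(model.layers))    # 1 -- the InputLayer is filtered out
print(len(model._layers))   # 2 -- the InputLayer is still present
print(model.get_config())   # no batch_input_shape recorded for the Dense layer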
I opened #11683 in response to this being closed, and as far as I know this is still an issue. #11683 better describes the situation as "Models with InputLayer are not serialized to HDF5 correctly" because, as the h5dump output shows, equivalent models are not serialized the same.
Btw, this is happening in a recent version with models saved in the same session:
In [17]: tf.keras.__version__
Out[17]: '2.2.4-tf'
In [18]: tf.__version__
Out[18]: '1.14.1-dev20190311'
… continuing from above, I probed further and noticed that the saved model_config differs here, and it is unable to load the weights.
From visual inspection, the main difference is that the model_config for model2 (which uses InputLayer) does not have a batch_input_shape element for the conv layer.
I’m not exactly sure how to fix it. Just leaving breadcrumbs for someone who is more familiar with the Keras codebase.
Problem still present in 2.3.1, in exactly the same form.
I’m wondering why this issue is closed as the problem has not been resolved.
@wt-huang
This was a long time ago, but for anyone hitting this: I have a feeling you need to call the model once (build it) before saving. That is likely what this slightly opaque error is pointing at.
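A minimal sketch of that suggestion (tf.keras, dummy input shape, and file name are assumptions):

# Hypothetical sketch: run one forward pass so the model is built -- and
# therefore has weights -- before it is saved.
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
model.compile(loss='mse', optimizer='sgd')
model(tf.zeros((1, 5)))              # builds the model (creates the weights)
model.save('test.h5')
reloaded = tf.keras.models.load_model('test.h5')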
Save files with the .hdf5 extension instead of .h5.
model.add(tf.keras.layers.Flatten())  # takes our 28x28 and makes it 1x784
# CHANGE THE PREVIOUS LINE TO:
model.add(tf.keras.layers.Flatten(input_shape=(28, 28)))
# ERROR RESOLVED: ValueError: You are trying to load a weight file containing 3 layers into a model with 0 layers.
Thanks a lot @cyounkins.
I can now continue training my VGG16 model on Keras 2.2.4. I just needed to set the shape to [None, 256, 256, 3], and the configuration to change was at ['config']['layers'][0]['config'] instead of at ['config'][0]['config'].
As a follow-up to @SamuelBucheliZ's comment on the 14th: the presence of InputLayer definitely triggers the issue. In my case, repeating the input_shape in both the InputLayer and Dense constructors does the trick (no need to use batch_input_shape).
Something like:
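The commenter's snippet is not included above; a minimal sketch of what is being described (shape, layer size, and file name are placeholders):

# Hypothetical sketch: input_shape given to both the InputLayer and the Dense layer.
from keras import Sequential
from keras.engine import InputLayer
from keras.layers import Dense
from keras.models import load_model

model = Sequential()
model.add(InputLayer(input_shape=(5,)))
model.add(Dense(10, activation='linear', input_shape=(5,)))  # input_shape repeated
model.compile(loss='mse', optimizer='sgd')
model.save('test.h5')
load_model('test.h5')  # reloads without the ValueError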
@abiodun-ayodeji, I get this error on Colab too. I tried that, but it did not help:
ValueError: You are trying to load a weight file containing 23 layers into a model with 32 layers.
I had this issue using Google Colab. I reinstalled Keras 2.4.3 and TensorFlow 2.3.1, and the issue was resolved.
Still have this issue in 2021. Happy New Year!
Had the same problem. What worked was to stop using anything from plain keras and just use tf.keras everywhere.
No, because I am using a model from other people; he said he saved the model with 2.1.6, but it does not work with mine…
If you use multiple GPUs to train your model, you may get this problem. You can use this code to train: parallel_model = keras.utils.multi_gpu_model(model, 2).
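A minimal sketch of that suggestion (Keras 2.2.x API; it needs at least two GPUs, the model and data here are placeholders, and the Keras docs advise saving the template model rather than the wrapper returned by multi_gpu_model):

# Hypothetical sketch: train through the multi-GPU wrapper, save the template model.
import numpy as np
import keras
from keras import Sequential
from keras.layers import Dense

model = Sequential([Dense(10, input_shape=(5,))])
parallel_model = keras.utils.multi_gpu_model(model, gpus=2)
parallel_model.compile(loss='mse', optimizer='sgd')
parallel_model.fit(np.zeros((8, 5)), np.zeros((8, 10)), epochs=1)
model.save('test.h5')  # save the template model, not the parallel wrapper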
@Ketan14a So you “solve” the problem with load_model() simply by not using it? What if you don’t know the structure of the model?
I had the same problem: I cannot load an HDF5 model saved earlier; it fails with "ValueError: You are trying to load a weight file containing [X] layers into a model with 0 layers".
Downgrading to 2.1.0 solved my problem.
This requires passing the input shape. So if you need a flexible shape (as in text processing, or simply if you have images of several different shapes), it won't work. This issue does not happen when saving in the TensorFlow format.
Maybe it would be better if the Keras HDF5 saving format had the same property as the TensorFlow saving format?
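A minimal sketch of that point (tf.keras 2.x; the architecture, flexible spatial shape, and directory name are just illustrations):

# Hypothetical sketch: a model with unspecified spatial dimensions, saved in the
# TensorFlow SavedModel format rather than HDF5.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(None, None, 3)),  # variable image size
    tf.keras.layers.Conv2D(8, 3, activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1),
])
model.compile(loss='mse', optimizer='sgd')
model.save('flexible_model')                                  # SavedModel directory, no .h5
reloaded = tf.keras.models.load_model('flexible_model')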
I had a similar bug and don't know why this issue was closed. Hello from 2020!
Same issue in 2.3.1. I don't know why this long-standing issue has been closed and not looked at. I went ahead and suggested Keras to my team instead of our PyTorch, and here I am with this bug.
Not sure if it also helps you, but I could circumvent the issue by installing Keras 2.1.6
Note: You have to save the model using this version of Keras in order to be able to load it. Models saved using the latest versions didn’t work for me.
Yep, downgrading to Keras 2.1.x also solved the problem for me, too, just as reported by @jeffreynghm and @juliojj.