keras: ValueError: Unable to create group (Name already exists) with model.save_weights()

This is a similar issue to https://github.com/keras-team/keras/issues/6005, but I believe it is caused by the way h5py creates groups. In particular, if a layer named foo comes after a layer named foo/bar in a network, h5py throws an exception, because creating the group foo/bar implicitly creates an intermediate group named foo, so the later create_group('foo') collides with it. The same does not occur if foo comes first. To reproduce, see the snippet below.

from keras import layers, models

# This raises an exception.
input_layer = layers.Input((None, None, 3), name='test_input')
x = layers.Conv2D(1, 1, name='conv1/conv')(input_layer)
x = layers.BatchNormalization(name='conv1/bn')(x)
x = layers.Activation('relu', name='conv1')(x)
models.Model(inputs=input_layer, outputs=x).save_weights('test.h5')

# This doesn't raise an exception
input_layer = layers.Input((None, None, 3), name='test_input')
x = layers.Conv2D(1, 1, name='conv1')(input_layer)
x = layers.BatchNormalization(name='conv1/bn')(x)
x = layers.Activation('relu', name='conv1/relu')(x)
models.Model(inputs=input_layer, outputs=x).save_weights('test.h5')
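
For reference, the underlying h5py behavior can be reproduced without Keras. A minimal sketch (the file name is arbitrary):

import h5py

with h5py.File('groups.h5', 'w') as f:
    f.create_group('conv1/bn')  # implicitly creates an intermediate group named 'conv1'
    f.create_group('conv1')     # ValueError: Unable to create group (name already exists)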

Perhaps we could provide a more helpful error message in keras/engine/saving.py? For example, changing part of save_weights_to_hdf5_group to the following would help trace the offending layer name.

for layer in layers:
    try:
        g = group.create_group(layer.name)
    except ValueError as e:
        raise ValueError(
            'An error occurred while creating the weights group for layer '
            '"{0}": {1}'.format(layer.name, e))
    symbolic_weights = layer.weights
    weight_values = K.batch_get_value(symbolic_weights)

Happy to create a PR if this is helpful.

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 9
  • Comments: 17 (3 by maintainers)

Most upvoted comments

Hello, I also have this issue with TensorFlow 2.0.0. If some of you still want to use the .h5 format, I’ve found a potential fix. Since the problem lies in the order in which the h5py groups are created (a new group’s name cannot be a prefix of a previously created group’s name), it is possible to sort the layers by name before saving them. This change worked for me:

File: $CUSTOM_PATH/tensorflow_core/python/keras/saving/hdf5_format.py
Function: save_weights_to_hdf5_group

  for layer in layers:
    g = f.create_group(layer.name)
    weights = _legacy_weights(layer)
    weight_values = K.batch_get_value(weights)
    weight_names = [w.name.encode('utf8') for w in weights]
    save_attributes_to_hdf5_group(g, 'weight_names', weight_names)
    for name, val in zip(weight_names, weight_values):
      param_dset = g.create_dataset(name, val.shape, dtype=val.dtype)
      if not val.shape:
        # scalar
        param_dset[()] = val
      else:
        param_dset[:] = val

replaced by:

  for layer in sorted(layers, key=lambda layer: layer.name):
    g = f.create_group(layer.name)
    weights = _legacy_weights(layer)
    weight_values = K.batch_get_value(weights)
    weight_names = [w.name.encode('utf8') for w in weights]
    save_attributes_to_hdf5_group(g, 'weight_names', weight_names)
    for name, val in zip(weight_names, weight_values):
      param_dset = g.create_dataset(name, val.shape, dtype=val.dtype)
      if not val.shape:
        # scalar
        param_dset[()] = val
      else:
        param_dset[:] = val

With this modification I was able to save my model in .h5 format and then load it from scratch and run inference. I can open a pull request if you think it’s a good idea.
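
Note that sorting by name works because a group name always sorts lexicographically before any longer name that it prefixes, so 'conv1' is created before 'conv1/bn' and 'conv1/conv'. A quick illustrative check:

names = ['conv1/bn', 'conv1', 'conv1/conv']
print(sorted(names))  # ['conv1', 'conv1/bn', 'conv1/conv']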

I met the same error and solved it by saving the model in the .tf format instead of .h5. Also, I am using TensorFlow 2.0, where the default saving format is already .tf.
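
A minimal sketch of saving in the TensorFlow formats instead of HDF5 (assuming TF 2.x; the toy model and the paths are placeholders):

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# TensorFlow checkpoint format for the weights: any non-HDF5 path,
# or save_format='tf', selects it.
model.save_weights('weights_ckpt/ckpt', save_format='tf')

# SavedModel format for the full model.
model.save('saved_model_dir', save_format='tf')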

This issue has a larger effect than suggested at the top of the thread. It actually completely prevents saving the weights of a Keras model that uses tf.ones_like.

from tensorflow.keras import Input, Model
from tensorflow.keras import backend as K

in_layer = Input((None, None, 3), name="test_input")
ones = K.ones_like(in_layer)
model = Model(inputs=in_layer, outputs=ones)

model.save_weights("tmp/test_save_ones.keras")

Raises: ValueError: Unable to create group (name already exists)

This is because of the issue described above by @faustomorales and can be seen if we look at how ones_like gets added to the keras model:

[(l, l.name) for l in model.layers]

Output:

[(<tensorflow.python.keras.engine.input_layer.InputLayer at 0x7fb6e9fa53c8>,
  'test_input'),
 (<tensorflow.python.keras.engine.base_layer.TensorFlowOpLayer at 0x7fb6e9fa56d8>,
  'tf_op_layer_ones_like_2/Shape'),
 (<tensorflow.python.keras.engine.base_layer.TensorFlowOpLayer at 0x7fb6e9fa59e8>,
  'tf_op_layer_ones_like_2')]

I haven’t looked around for other places where this is happening, but I assume there are more cases where this causes problems. Given that this effectively breaks a core part of the Keras API in TensorFlow (the ability to save certain models in h5 format), I’d argue that this needs a more serious fix than just a better error message.

How can we replicate this fix (patching hdf5_format.py, as described above) in Colab?

Hello, I also ran into this issue. If you are using TensorFlow 2.0, you can change “.h5” to “.tf” and everything should save correctly.

Hello, I also ran into this issue. How did you do that, with model.save(xxxx.tf)?

I’m still getting this issue no matter what I try. Any ideas on what I’m doing wrong? I’m trying to save the YAMNet model so I can then convert it to tensorflow.js.

cls._yamnet = yamnet.yamnet_frames_model(params)
cls._yamnet.load_weights('yamnet.h5')
cls._yamnet_classes = yamnet.class_names('yamnet_class_map.csv')
YAMNetTest._yamnet.save('./foo/yamnet.h5')  # Unable to create group (name already exists)
# YAMNetTest._yamnet.save('./foo/yamnet')  # 'Tensor' object has no attribute '_datatype_enum'
# tfjs.converters.save_keras_model(YAMNetTest._yamnet, './foo')  # Unable to create group (name already exists)
# YAMNetTest._yamnet.save('./foo/foo.tf')  # 'Tensor' object has no attribute '_datatype_enum'
# tf.saved_model.save(YAMNetTest._yamnet, './foo')  # 'Tensor' object has no attribute '_datatype_enum'

Versions:

tensorboard          2.1.1      
tensorflow           2.1.0      
tensorflow-cpu       2.1.0      
tensorflow-estimator 2.1.0      
tensorflow-hub       0.7.0      
tensorflowjs         1.7.4  

One workaround is to wrap these kinds of operations in a Lambda layer.
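
For example, a minimal sketch of that workaround applied to the ones_like case above (assuming tf.keras in TF 2.x; the layer name 'ones' is arbitrary):

from tensorflow.keras import Input, Model, layers
from tensorflow.keras import backend as K

in_layer = Input((None, None, 3), name='test_input')
# Wrapping the op in an explicitly named Lambda prevents Keras from inserting
# auto-named TensorFlowOpLayers such as 'tf_op_layer_ones_like/Shape', whose
# names collide when the HDF5 groups are created.
ones = layers.Lambda(lambda t: K.ones_like(t), name='ones')(in_layer)
model = Model(inputs=in_layer, outputs=ones)
model.save_weights('test_save_ones.h5')  # no "name already exists" error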