keras: Error with BatchNormalization when setting the default float type to 'float16'

Hi,

I am trying to train a basic network in Keras with float16 precision. However, it looks like there is a bug in BatchNormalization.

For example, if I run the following simple code:

import keras
keras.backend.set_floatx('float16')

input_layer = keras.layers.Input(shape=(16, 16, 3))
x = keras.layers.BatchNormalization(axis=3)(input_layer) # <<Fails here
x = keras.layers.Conv2D(32, (3, 3))(x)

I get the following error (the full traceback is at the end of this message):

TypeError: Value passed to parameter 'scale' has DataType float16 not in list of allowed values: float32

If I remove all BatchNormalization layers and keep only Conv2D, Activation, MaxPooling and Dense layers, there is no error and training runs fine. The same happens if I switch 'float16' back to 'float32'.
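
In the meantime, a possible workaround seems to be to keep only BatchNormalization in float32 and cast around it. Below is an untested sketch (the helper name is mine, not from Keras):

from keras.layers import BatchNormalization, Lambda
import keras.backend as K

def batch_norm_float32(x, axis=-1):
    # Hypothetical helper: run only BatchNormalization in float32 while
    # the rest of the model stays in float16.
    x = Lambda(lambda t: K.cast(t, 'float32'))(x)
    # Temporarily switch floatx so the layer's weights (gamma, beta,
    # moving mean/variance) are created as float32, which is what the
    # fused batch norm op expects.
    old_floatx = K.floatx()
    K.set_floatx('float32')
    x = BatchNormalization(axis=axis)(x)
    K.set_floatx(old_floatx)
    # Cast back down so the following layers keep running in float16.
    return Lambda(lambda t: K.cast(t, 'float16'))(x)

With this, the failing line above would become x = batch_norm_float32(input_layer, axis=3) (again, untested).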

I use Keras 2.1.4 with TensorFlow 1.4.0 as the backend, on a Linux machine with Python 3.5.2.

Thank you for your help.

Traceback (most recent call last):
  File "test_BN_fp16.py", line 6, in <module>
    x = keras.layers.BatchNormalization(axis=3)(input_layer) # <<Fails here
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py", line 617, in __call__
    output = self.call(inputs, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/keras/layers/normalization.py", line 181, in call
    epsilon=self.epsilon)
  File "/usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py", line 1824, in normalize_batch_in_training
    epsilon=epsilon)
  File "/usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py", line 1799, in _fused_normalize_batch_in_training
    data_format=tf_data_format)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/nn_impl.py", line 831, in fused_batch_norm
    name=name)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 2296, in _fused_batch_norm_v2
    is_training=is_training, name=name)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 609, in _apply_op_helper
    param_name=input_name)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 60, in _SatisfiesTypeConstraint
    ", ".join(dtypes.as_dtype(x).name for x in allowed_list)))
TypeError: Value passed to parameter 'scale' has DataType float16 not in list of allowed values: float32

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 24 (3 by maintainers)

Most upvoted comments

I’ve run into this issue while trying to use float16 as well. Is there a timeline for a release that makes float16 usable?

I’ve got float16 support for the tensorflow backend working fairly well here: https://github.com/bradklingensmith/keras/tree/batchnorm_float16_support

@flow-ra Hi, have you changed the epsilon in your keras.json too? I had problems with losses collapsing to 0.0 very quickly when my epsilon was at the default of 1e-7. Without much testing, a value of 1e-3 worked for me.
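
For reference, the same configuration can also be set programmatically instead of editing keras.json; a minimal sketch of the settings being discussed:

import keras.backend as K

# Equivalent to putting "floatx": "float16" and "epsilon": 1e-3 in
# ~/.keras/keras.json.
K.set_floatx('float16')
# The default fuzz factor of 1e-7 sits near the bottom of float16's
# subnormal range, so terms like (var + epsilon) lose almost all
# precision there; 1e-3 is comfortably representable in float16.
K.set_epsilon(1e-3)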

I’ve got float16 support for the tensorflow backend working fairly well here: https://github.com/bradklingensmith/keras/tree/batchnorm_float16_support

Hi… I’ve tried your implementation and I’m still getting an error with the following code:

from keras.models import Sequential
from keras.layers import Conv2D, BatchNormalization
import keras.backend as K

K.set_floatx('float16')  # default float type set to float16, as above

model = Sequential()
model.add(Conv2D(32, 3, input_shape=(224, 224, 3)))
model.add(BatchNormalization())

ValueError: Tensor conversion requested dtype float32 for Tensor with dtype float16: 'Tensor("batch_normalization_1/Const:0", shape=(32,), dtype=float16)'

Why does this happen?
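
One way to narrow it down might be to create the layer's variables without going through the failing call and check their dtypes; my guess (not confirmed) is that the Const in the message is one of the layer's float16 constants reaching an op that still expects float32. A quick diagnostic sketch, with shape values assuming the Conv2D above:

from keras.layers import BatchNormalization

# Build the layer directly on the Conv2D output shape so its variables
# are created without invoking the failing call() path.
bn = BatchNormalization()
bn.build((None, 222, 222, 32))
for w in bn.weights:
    print(w.name, w.dtype)  # shows which variables were created as float16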