tensorflow: [tf.keras] Stateful Metrics assorted errors.
I will break this issue down into several code snippets, each displaying a different error; 3 issues in total. @fchollet. All of these issues are relevant only to the tf.keras implementation; the keras implementation works as intended.
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- TensorFlow installed from (source or binary): binary.
- TensorFlow version (use command below): 1.9.0
- Python version: 3.6.5
- Bazel version (if compiling from source): n/a
- GCC/Compiler version (if compiling from source): n/a
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a
- Exact command to reproduce: n/a
Problem 1
Stateful metric values are incorrectly batch-averaged with multi-input/multi-output models. This happens for both the training and validation metrics.
Source code/logs
```python
import tensorflow as tf
from tensorflow.python.keras.datasets import mnist
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, UpSampling2D


class BatchCounter(tf.keras.layers.Layer):
    """Stateful metric that counts the number of batches seen in an epoch."""

    def __init__(self, name='batch_counter', **kwargs):
        super(BatchCounter, self).__init__(name=name, **kwargs)
        self.stateful = True
        self.batches = tf.keras.backend.variable(value=0, dtype='int32')

    def reset_states(self):
        # Called at the start of each epoch to reset the counter.
        tf.keras.backend.set_value(self.batches, 0)

    def __call__(self, y_true, y_pred):
        # Increment the counter by one on every batch.
        updates = [tf.keras.backend.update_add(
            self.batches, tf.keras.backend.variable(value=1, dtype='int32'))]
        self.add_update(updates)
        return self.batches
```
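Because the layer sets `self.stateful = True`, the training loop should report the metric's latest value rather than a running mean. As a rough illustration of the expected behavior (my own sketch, not the actual TensorFlow source), the display-time aggregation should branch on statefulness:

```python
# Illustrative sketch only (not the actual TensorFlow source): how
# per-batch metric values are expected to be aggregated for display.
def aggregate_for_display(batch_values, stateful):
    if stateful:
        # A stateful metric already accumulates its own state across
        # batches, so only the most recent value should be reported.
        return batch_values[-1]
    # Stateless metrics are averaged over all batches seen so far.
    return sum(batch_values) / len(batch_values)
```

The rest of the reproduction script: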
```python
batch_size = 100
num_classes = 10
epochs = 1

# Input image dimensions
img_rows, img_cols = 28, 28

# Data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1).astype('float32') / 255
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1).astype('float32') / 255
y_train = tf.keras.utils.to_categorical(y_train, num_classes)
y_test = tf.keras.utils.to_categorical(y_test, num_classes)

# Convolutional encoder
input_img = Input(shape=(img_rows, img_cols, 1))
conv_1 = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
pool_1 = MaxPooling2D((2, 2), padding='same')(conv_1)
conv_2 = Conv2D(8, (3, 3), activation='relu', padding='same')(pool_1)
pool_2 = MaxPooling2D((2, 2), padding='same')(conv_2)
conv_3 = Conv2D(8, (3, 3), activation='relu', padding='same')(pool_2)
encoded = MaxPooling2D((2, 2), padding='same')(conv_3)

# Classification head
flatten = Flatten()(encoded)
fc = Dense(128, activation='relu')(flatten)
softmax = Dense(num_classes, activation='softmax', name='classification')(fc)

# Decoder
conv_4 = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
up_1 = UpSampling2D((2, 2))(conv_4)
conv_5 = Conv2D(8, (3, 3), activation='relu', padding='same')(up_1)
up_2 = UpSampling2D((2, 2))(conv_5)
conv_6 = Conv2D(16, (3, 3), activation='relu')(up_2)  # valid padding: 16x16 -> 14x14
up_3 = UpSampling2D((2, 2))(conv_6)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same', name='autoencoder')(up_3)

model = Model(inputs=input_img, outputs=[softmax, decoded])
model.compile(loss={'classification': 'categorical_crossentropy',
                    'autoencoder': 'binary_crossentropy'},
              loss_weights={'classification': 1.0,
                            'autoencoder': 0.5},
              optimizer='adam',
              metrics={'classification': 'accuracy', 'autoencoder': BatchCounter()})

history = model.fit(x_train,
                    {'classification': y_train, 'autoencoder': x_train},
                    batch_size=batch_size,
                    epochs=epochs,
                    validation_data=(x_test, {'classification': y_test, 'autoencoder': x_test}),
                    verbose=1)
```
```
Epoch 1/1
60000/60000 [==============================] - 41s 677us/step - loss: 0.5086 - classification_loss: 0.4051 - autoencoder_loss: 0.2069 - classification_acc: 0.8755 - autoencoder_batch_counter: 299.7983 - val_loss: 0.2001 - val_classification_loss: 0.1242 - val_autoencoder_loss: 0.1518 - val_classification_acc: 0.9596 - val_autoencoder_batch_counter: 50.1000
```
`autoencoder_batch_counter` and `val_autoencoder_batch_counter` should always be 600 and 100 respectively (60000 training samples and 10000 validation samples at a batch size of 100). Instead, these metrics are being batch-averaged. This does not happen in the Keras implementation.
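As a sanity check, and as a possible stop-gap, here is a minimal sketch (assuming the model and data from the script above): hold a reference to the metric instance and read its backing variable directly after training, since the variable itself should never be averaged.

```python
# Workaround sketch (assumes the model/data defined above): keep a
# reference to the metric instance and read its variable directly,
# bypassing the batch-averaged value that model.fit() reports.
counter = BatchCounter()
model.compile(loss={'classification': 'categorical_crossentropy',
                    'autoencoder': 'binary_crossentropy'},
              loss_weights={'classification': 1.0, 'autoencoder': 0.5},
              optimizer='adam',
              metrics={'classification': 'accuracy', 'autoencoder': counter})
model.fit(x_train, {'classification': y_train, 'autoencoder': x_train},
          batch_size=batch_size, epochs=epochs, verbose=0)
print(tf.keras.backend.get_value(counter.batches))  # expected: 600 (60000 / 100)
```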
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Comments: 17 (7 by maintainers)
Commits related to this issue
- fixes issue in stateful metrics, where they are getting batch averaged Fixes issue: #20529 PiperOrigin-RevId: 206867787 — committed to tensorflow/tensorflow by raymond-yuan 6 years ago
@pavithrasv Do you want some help?
I’m very keen on using Stateful Metrics for production.