keras: EarlyStopping failing with `tfa.metrics.FBetaScore`
System information.
I am using Colab and running this Knowledge Distillation example with tfa.metrics.FBetaScore as my metric.
Describe the problem.
The EarlyStopping callback fails when used together with the tfa.metrics.FBetaScore metric.
Describe the current behavior.
It fails with the following error:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Describe the expected behavior.
It should work just like the other metrics do. The main issue is that the FBetaScore values are not averaged by the time this line is reached: instead of a scalar, the callback receives a NumPy array of batch-wise scores.
- Do you want to contribute a PR? (yes/no): Yes.
- If yes, please read this page for instructions
- Briefly describe your candidate solution (if contributing):
Introduce a simple check after this line:
if isinstance(current, np.ndarray):
    current = np.mean(current)
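To see why the guard is needed, here is a minimal NumPy sketch (the score values are made up) of what happens inside EarlyStopping when the monitored value is an array rather than a scalar:

```python
import numpy as np

# EarlyStopping compares the monitored value against the best value so far.
# That comparison is fine for a scalar, but FBetaScore can hand the callback
# an array of scores, and NumPy refuses to collapse an array to one bool.
current = np.array([0.91, 0.88, 0.95])  # made-up array-valued metric result
best = 0.90

try:
    if current > best:  # boolean array in an `if` -> ambiguous truth value
        pass
except ValueError as err:
    print(err)  # "The truth value of an array with more than one element is ambiguous..."

# The proposed guard: collapse array-valued results to their mean first.
if isinstance(current, np.ndarray):
    current = np.mean(current)

print(current > best)  # True
```

With the guard in place, the callback sees a single averaged score and the comparison behaves like any other scalar metric.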
Standalone code to reproduce the issue.
# Install TensorFlow Addons
!pip install tensorflow-addons
Code:
# Imports
import tensorflow_addons as tfa
import tensorflow as tf
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
# Define a simple model.
teacher = keras.Sequential(
    [
        keras.Input(shape=(28, 28, 1)),
        layers.Conv2D(256, (3, 3), strides=(2, 2), padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.MaxPooling2D(pool_size=(2, 2), strides=(1, 1), padding="same"),
        layers.Conv2D(512, (3, 3), strides=(2, 2), padding="same"),
        layers.Flatten(),
        layers.Dense(10),
    ],
    name="teacher",
)
# Prepare the train and test dataset.
batch_size = 64
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Scale data.
x_train = x_train.astype("float32") / 255.0
x_train = np.reshape(x_train, (-1, 28, 28, 1))
x_test = x_test.astype("float32") / 255.0
x_test = np.reshape(x_test, (-1, 28, 28, 1))
# One-hot encode the labels.
y_train = tf.one_hot(y_train, 10).numpy()
y_test = tf.one_hot(y_test, 10).numpy()
# Train the model.
teacher.compile(
    optimizer=keras.optimizers.Adam(),
    loss=keras.losses.CategoricalCrossentropy(from_logits=True),
    # FBetaScore takes an `average` argument, not `reduction`; with
    # average=None it reports per-class scores, i.e. an array.
    metrics=[tfa.metrics.FBetaScore(num_classes=10, beta=2.0, average=None)],
)
# Train and evaluate teacher on data.
teacher.fit(
    x_train,
    y_train,
    epochs=5,
    validation_data=(x_test, y_test),
    callbacks=[tf.keras.callbacks.EarlyStopping(patience=5, monitor="val_fbeta_score")],
)
Note
I have tried different averaging/reduction settings inside the FBetaScore metric, but that didn’t help.
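Until the callback itself averages array-valued metrics, one possible interim workaround is to wrap the metric so that its result is collapsed to a scalar before any callback sees it. The sketch below is plain NumPy only: MeanWrapper and FakePerClassMetric are hypothetical names used for illustration, not part of Keras or TensorFlow Addons.

```python
import numpy as np

class MeanWrapper:
    """Sketch of a wrapper that delegates to an inner metric and averages
    array-valued results, so callbacks that expect a scalar keep working."""

    def __init__(self, metric):
        self.metric = metric

    def result(self):
        value = self.metric.result()
        # Collapse per-class (or otherwise array-valued) scores to a scalar.
        return float(np.mean(value))

class FakePerClassMetric:
    """Stand-in for a metric that reports one score per class."""

    def result(self):
        return np.array([0.8, 0.9, 1.0])  # made-up per-class scores

wrapped = MeanWrapper(FakePerClassMetric())
print(wrapped.result())  # a single float close to 0.9
```

A real Keras version would subclass tf.keras.metrics.Metric and forward update_state and reset_state as well; the point here is only the scalar reduction in result().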
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Comments: 18 (16 by maintainers)