keras: Accuracy, fmeasure, precision, and recall all the same for binary classification problem (cut and paste example provided)

Keras 1.2.2, tf-gpu 0.12.1

Example code to show issue:

'''Trains a simple convnet on the MNIST dataset.

Gets to 99.25% test accuracy after 12 epochs
(there is still a lot of margin for parameter tuning).
16 seconds per epoch on a GRID K520 GPU.
'''

#from __future__ import print_function
import numpy as np
np.random.seed(1337)  # for reproducibility

from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K

batch_size = 128
nb_classes = 10
nb_epoch = 12

# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters = 32
# size of pooling area for max pooling
pool_size = (2, 2)
# convolution kernel size
kernel_size = (3, 3)

# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# make 2 categories
y_train = y_train>=5
y_test = y_test>=5

if K.image_dim_ordering() == 'th':
    X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
    X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
    X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, 2)
Y_test = np_utils.to_categorical(y_test, 2)

model = Sequential()

model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
                        border_mode='valid',
                        input_shape=input_shape))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(2))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='adadelta',
              metrics=['accuracy', 'f1score', 'precision', 'recall'])

model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
          verbose=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])

yields output:

Using TensorFlow backend.
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cublas64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cudnn64_5.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cufft64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library nvcuda.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library curand64_80.dll locally
X_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
Train on 60000 samples, validate on 10000 samples
Epoch 1/12
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:885] Found device 0 with properties: 
name: GeForce GTX TITAN Black
major: 3 minor: 5 memoryClockRate (GHz) 0.98
pciBusID 0000:01:00.0
Total memory: 6.00GiB
Free memory: 5.85GiB
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:906] DMA: 0 
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:916] 0:   Y 
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX TITAN Black, pci bus id: 0000:01:00.0)

  128/60000 [..............................] - ETA: 1686s - loss: 0.7091 - acc: 0.4688 - fmeasure: 0.4687 - precision: 0.4688 - recall: 0.4688
  384/60000 [..............................] - ETA: 567s - loss: 0.6981 - acc: 0.4922 - fmeasure: 0.4922 - precision: 0.4922 - recall: 0.4922 
  640/60000 [..............................] - ETA: 343s - loss: 0.6845 - acc: 0.5609 - fmeasure: 0.5609 - precision: 0.5609 - recall: 0.5609
 1024/60000 [..............................] - ETA: 217s - loss: 0.6654 - acc: 0.6143 - fmeasure: 0.6143 - precision: 0.6143 - recall: 0.6143
 1408/60000 [..............................] - ETA: 159s - loss: 0.6427 - acc: 0.6456 - fmeasure: 0.6456 - precision: 0.6456 - recall: 0.6456
 1792/60000 [..............................] - ETA: 126s - loss: 0.6226 - acc: 0.6629 - fmeasure: 0.6629 - precision: 0.6629 - recall: 0.6629

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 16
  • Comments: 60 (2 by maintainers)

Most upvoted comments

For those who come here later: since Keras 2.0, the fmeasure, precision, and recall metrics have been removed.

If you want to use them, you can check the repo's history or add this code:


from keras import backend as K

def mcor(y_true, y_pred):
    # Matthews correlation coefficient
    y_pred_pos = K.round(K.clip(y_pred, 0, 1))
    y_pred_neg = 1 - y_pred_pos

    y_pos = K.round(K.clip(y_true, 0, 1))
    y_neg = 1 - y_pos

    tp = K.sum(y_pos * y_pred_pos)
    tn = K.sum(y_neg * y_pred_neg)

    fp = K.sum(y_neg * y_pred_pos)
    fn = K.sum(y_pos * y_pred_neg)

    numerator = (tp * tn - fp * fn)
    denominator = K.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

    return numerator / (denominator + K.epsilon())


def precision(y_true, y_pred):
    """Precision metric.

    Only computes a batch-wise average of precision.

    Computes the precision, a metric for multi-label classification of
    how many selected items are relevant.
    """
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision

def recall(y_true, y_pred):
    """Recall metric.

    Only computes a batch-wise average of recall.

    Computes the recall, a metric for multi-label classification of
    how many relevant items are selected.
    """
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    recall = true_positives / (possible_positives + K.epsilon())
    return recall


def f1(y_true, y_pred):
    def recall(y_true, y_pred):
        """Recall metric.

        Only computes a batch-wise average of recall.

        Computes the recall, a metric for multi-label classification of
        how many relevant items are selected.
        """
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
        recall = true_positives / (possible_positives + K.epsilon())
        return recall

    def precision(y_true, y_pred):
        """Precision metric.

        Only computes a batch-wise average of precision.

        Computes the precision, a metric for multi-label classification of
        how many selected items are relevant.
        """
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
        precision = true_positives / (predicted_positives + K.epsilon())
        return precision
    precision = precision(y_true, y_pred)
    recall = recall(y_true, y_pred)
    return 2*((precision*recall)/(precision+recall+K.epsilon()))

# You can use it like this:
model.compile(loss='binary_crossentropy',
              optimizer= "adam",
              metrics=[mcor,recall, f1])

I’ve created a pull request to solve the problem (https://github.com/netrack/keras-metrics/pull/4); I hope it’ll be accepted soon. For those who want to use a custom method, I corrected unnir’s code as follows:

from keras import backend as K

def check_units(y_true, y_pred):
    if y_pred.shape[1] != 1:
        y_pred = y_pred[:, 1:2]
        y_true = y_true[:, 1:2]
    return y_true, y_pred

def precision(y_true, y_pred):
    y_true, y_pred = check_units(y_true, y_pred)
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision

def recall(y_true, y_pred):
    y_true, y_pred = check_units(y_true, y_pred)
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    recall = true_positives / (possible_positives + K.epsilon())
    return recall

def f1(y_true, y_pred):
    def recall(y_true, y_pred):
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
        recall = true_positives / (possible_positives + K.epsilon())
        return recall

    def precision(y_true, y_pred):
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
        precision = true_positives / (predicted_positives + K.epsilon())
        return precision
    y_true, y_pred = check_units(y_true, y_pred)
    precision = precision(y_true, y_pred)
    recall = recall(y_true, y_pred)
    return 2*((precision*recall)/(precision+recall+K.epsilon()))

# You can use it as follows:
model.compile(loss='binary_crossentropy',
              optimizer= "adam",
              metrics=[precision,recall, f1])

Same problem. I customized the metrics (precision, recall, and F1-measure). model.fit_generator and model.evaluate_generator also give the same precision, recall, and F1-measure.

keras==2.0.0 on Mac OS Sierra 10.12.4

Epoch 8/10 0s - loss: 0.0269 - binary_accuracy: 0.8320 - f1score: 0.8320 - precision: 0.8320 - recall: 0.8320
Epoch 9/10 0s - loss: 0.0488 - binary_accuracy: 0.6953 - f1score: 0.6953 - precision: 0.6953 - recall: 0.6953
Epoch 10/10 0s - loss: 0.0457 - binary_accuracy: 0.7148 - f1score: 0.7148 - precision: 0.7148 - recall: 0.7148
Start to evaluate.
binary_accuracy: 76.06%
f1score: 76.06%
precision: 76.06%
recall: 76.06%

I am also seeing the same scores coming through for custom metrics. The code below gave the following output for an epoch:

Epoch 1/20
72326/72326 [==============================] - 293s - loss: 0.4666 - acc: 0.8097 - precision: 0.8097 - recall: 0.8097 - f1_score: 0.8097 - val_loss: 0.4592 - val_acc: 0.8100 - val_precision: 0.8100 - val_recall: 0.8100 - val_f1_score: 0.8100
def f1_score(y_true, y_pred):

    # Count positive samples.
    c1 = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    c2 = K.sum(K.round(K.clip(y_pred, 0, 1)))
    c3 = K.sum(K.round(K.clip(y_true, 0, 1)))

    # If there are no true samples, fix the F1 score at 0.
    if c3 == 0:
        return 0

    # How many selected items are relevant?
    precision = c1 / c2

    # How many relevant items are selected?
    recall = c1 / c3

    # Calculate f1_score
    f1_score = 2 * (precision * recall) / (precision + recall)
    return f1_score


def precision(y_true, y_pred):

    # Count positive samples.
    c1 = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    c2 = K.sum(K.round(K.clip(y_pred, 0, 1)))
    c3 = K.sum(K.round(K.clip(y_true, 0, 1)))

    # If there are no true samples, fix the F1 score at 0.
    if c3 == 0:
        return 0

    # How many selected items are relevant?
    precision = c1 / c2

    return precision


def recall(y_true, y_pred):

    # Count positive samples.
    c1 = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    c3 = K.sum(K.round(K.clip(y_true, 0, 1)))

    # If there are no true samples, fix the F1 score at 0.
    if c3 == 0:
        return 0

    recall = c1 / c3

    return recall


model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy', precision, recall, f1_score])

@nsarafianos These metrics are only computed per-batch, since the values reported by Keras callbacks are per-batch averages. Once you’re trained, you can just use model.predict to go over the complete test set and compute your metrics in full.
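For example, here is a minimal sketch of that post-training computation (assuming scikit-learn is available, and model, X_test, and Y_test as in the script at the top of this issue):

import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

# Predict over the full test set, then take the argmax of the 2-unit softmax output.
y_prob = model.predict(X_test, batch_size=128)
y_pred = np.argmax(y_prob, axis=1)
y_true = np.argmax(Y_test, axis=1)

print('precision:', precision_score(y_true, y_pred))
print('recall:', recall_score(y_true, y_pred))
print('f1:', f1_score(y_true, y_pred))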

@hbb21st Hello, I had the same problem. In my case it was caused by using softmax for a binary classification problem with an output dimension of 2 ([0,1] or [1,0]). When I changed the output dimension to 1 ([0] or [1]) with a sigmoid activation function, it worked just fine.
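A minimal sketch of that change (assuming the same model body and the custom precision/recall/f1 functions from the snippets above): replace the 2-unit softmax head with a 1-unit sigmoid head, switch to binary_crossentropy, and train on the raw 0/1 labels instead of the one-hot matrices.

# Head: one unit with a sigmoid instead of Dense(2) + softmax
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='adadelta',
              metrics=['accuracy', precision, recall, f1])

# Fit on the 0/1 label vectors, not the to_categorical matrices.
model.fit(X_train, y_train.astype('float32'),
          batch_size=128, epochs=12,  # use nb_epoch=12 on Keras 1.x
          validation_data=(X_test, y_test.astype('float32')))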

@unnir If I use ‘binary_crossentropy’, the custom precision is correct, but when I use ‘categorical_crossentropy’ it has the same problem @moming2k described.

Any update yet? @unnir

Any update yet, @unnir? Did you find anything?

@unnir I did not mean that they do not work; what I was trying to say is that the numbers I get don’t make much sense to me. I have indeed normalized my data prior to feeding it into the neural network, and I am doing cross-validation to tune hyperparameters.

The relevant metrics are no longer supported in Keras 2.x. Closing for good housekeeping.

EQUALITY PROBLEM

I ran into exactly the same problem (accuracy, precision, recall, and f1score are all equal to each other, on both the training set and the validation set, for a balanced task) with another dataset, which made me look into this; let’s call it the EQUALITY PROBLEM.

I use: TensorFlow version 1.13.1, tf.keras version 2.2.4-tf.

I have combined all the replies and tried all the code above, and finally came up with two versions. The first version defines precision, recall, and f1score as above. The second version uses the precision, recall, and f1score defined in keras-metrics (which depends on keras); a sketch of that usage follows.
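The keras-metrics version looks roughly like this. This is a sketch only: the exact constructor names (precision()/recall() vs. binary_precision()/binary_recall()) depend on the keras-metrics release installed.

import keras_metrics  # pip install keras-metrics; note it targets keras, not tensorflow.keras

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy',
                       keras_metrics.precision(),
                       keras_metrics.recall()])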

CONCLUSION:

The following are the results of the first version: when I try “categorical classification using softmax with one-hot output”, I HAVE the EQUALITY PROBLEM. However, when I try “binary classification using sigmoid with 0-1 vector output”, I DO NOT have the EQUALITY PROBLEM.

Here is all my code:

"""
Created on Thu May  9 10:36:22 2019
# Example code to show issue:
Trains a simple convnet on the MNIST dataset.
"""

import numpy as np

from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras.utils import to_categorical
import tensorflow.keras.backend as K

import tensorflow as tf
print("tensorflow version:", tf.VERSION)
print("tensorflow keras version:", tf.keras.__version__)

def mcor(y_true, y_pred):
    # matthews_correlation
    y_pred_pos = K.round(K.clip(y_pred, 0, 1))
    y_pred_neg = 1 - y_pred_pos
    y_pos = K.round(K.clip(y_true, 0, 1))
    y_neg = 1 - y_pos
    tp = K.sum(y_pos * y_pred_pos)
    tn = K.sum(y_neg * y_pred_neg)
    fp = K.sum(y_neg * y_pred_pos)
    fn = K.sum(y_pos * y_pred_neg)
    numerator = (tp * tn - fp * fn)
    denominator = K.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return numerator / (denominator + K.epsilon())


def precision(y_true, y_pred):
    """ Precision metric.
    Only computes a batch-wise average of precision.
    Computes the precision, a metric for multi-label classification of
    how many selected items are relevant.
    """
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision


def recall(y_true, y_pred):
    """Recall metric.
    Only computes a batch-wise average of recall.
    Computes the recall, a metric for multi-label classification of
    how many relevant items are selected.
    """
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    recall = true_positives / (possible_positives + K.epsilon())
    return recall


def f1score(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    recall = true_positives / (possible_positives + K.epsilon())
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return 2*((precision * recall) / (precision+recall + K.epsilon()))

NB_BATCH = 128
NB_EPOCH = 11
NB_FILTER = 32  # number of convolutional filters to use
SZ_POOL = (2, 2)  # size of pooling area for max pooling
SZ_KERNEL = (3, 3)  # convolution kernel size

def get_mnist_bin_data():
    import tensorflow.keras.backend as K
    img_rows, img_cols = 28, 28  # input image dimensions
    # the data, shuffled and split between train and test sets
    (X_train, y_train), (X_test, y_test) = mnist.load_data()
    y_train = (y_train >= 5)  # make 2 categories
    y_test = (y_test >= 5)
    if K.image_data_format() == 'channels_first':  # Theano
        X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
        X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
        input_shape = (1, img_rows, img_cols)
    elif K.image_data_format() == 'channels_last':  # TensorFlow
        X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
        X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
        input_shape = (img_rows, img_cols, 1)
    print(input_shape)
    X_train = X_train.astype('float32')
    X_test = X_test.astype('float32')
    X_train /= 255
    X_test /= 255
    print('X_train shape:', X_train.shape)
    print(X_train.shape[0], 'train samples')
    print(X_test.shape[0], 'test samples')
    return X_train, X_test, y_train, y_test


def ann_cat_soft():
    np.random.seed(5400)  # for reproducibility
    X_train, X_test, y_train, y_test = get_mnist_bin_data()
    # convert class vectors to binary class matrices
    Y_train = to_categorical(y_train, 2)
    Y_test = to_categorical(y_test, 2)
    input_shape = X_train.shape[1:]
    model = Sequential()
    model.add(Conv2D(filters=NB_FILTER, kernel_size=SZ_KERNEL,
                     padding='valid', input_shape=input_shape))
    model.add(Activation('relu'))
    model.add(Conv2D(NB_FILTER, SZ_KERNEL))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=SZ_POOL))
    model.add(Dropout(rate=1-0.25))  # note: rate is the fraction dropped, so this drops 75% (the original script used Dropout(0.25))
    model.add(Flatten())
    model.add(Dense(128))
    model.add(Activation('relu'))
    model.add(Dropout(rate=0.5))
    model.add(Dense(2, activation='softmax'))
    model.compile(
        loss='categorical_crossentropy',
        optimizer='adadelta',
        metrics=[ mcor, 'accuracy', precision, recall, f1score])
    model.fit(
        X_train, Y_train, batch_size=NB_BATCH, epochs=NB_EPOCH,
        verbose=1, validation_data=(X_test, Y_test))
    score = model.evaluate(X_test, Y_test, verbose=0)
    print('Test score:', score[0])
    print('Test accuracy:', score[1])
    '''
    Accuracy, fmeasure, precision, and recall all the same for
    binary classification problem (cut and pasted example) on May 09 2019.
    '''


def ann_bin_sigm():
    np.random.seed(5400)  # for reproducibility
    X_train, X_test, y_train, y_test = get_mnist_bin_data()
    # convert class vectors to binary class matrices
    Y_train = y_train.astype('float32')
    Y_test = y_test.astype('float32')
    input_shape = X_train.shape[1:]
    model = Sequential()
    model.add(Conv2D(filters=NB_FILTER, kernel_size=SZ_KERNEL[0],
                     strides=SZ_KERNEL[1], padding='valid',
                     input_shape=input_shape))  # note: SZ_KERNEL[1] is used as the stride here, unlike in ann_cat_soft
    model.add(Activation('relu'))
    model.add(Conv2D(NB_FILTER, SZ_KERNEL[0], SZ_KERNEL[1]))  # third positional argument is also the stride
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=SZ_POOL))
    model.add(Dropout(rate=1-0.25))  # note: drops 75%, not 25%
    model.add(Flatten())
    model.add(Dense(128))
    model.add(Activation('relu'))
    model.add(Dropout(rate=0.5))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(
        loss='binary_crossentropy',
        optimizer='adadelta',
        metrics=[mcor, 'accuracy', precision, recall, f1score])
    model.fit(
        X_train, Y_train, batch_size=NB_BATCH, epochs=NB_EPOCH,
        verbose=1, validation_data=(X_test, Y_test))
    score = model.evaluate(X_test, Y_test, verbose=0)
    print('Test score:', score[0])
    print('Test accuracy:', score[1])

For the “categorical classification using softmax with one-hot output”, I get the following results, which show I have the EQUALITY PROBLEM.

ann_cat_soft()

Epoch 1/11 60000/60000 [==============================] - 67s 1ms/sample - loss: 0.2254 - mcor: 0.8140 - acc: 0.9070 - precision: 0.9070 - recall: 0.9070 - f1score: 0.9070 - val_loss: 0.0715 - val_mcor: 0.9539 - val_acc: 0.9767 - val_precision: 0.9770 - val_recall: 0.9770 - val_f1score: 0.9770
Epoch 2/11 60000/60000 [==============================] - 67s 1ms/sample - loss: 0.0995 - mcor: 0.9292 - acc: 0.9646 - precision: 0.9646 - recall: 0.9646 - f1score: 0.9646 - val_loss: 0.0497 - val_mcor: 0.9666 - val_acc: 0.9831 - val_precision: 0.9833 - val_recall: 0.9833 - val_f1score: 0.9833
Epoch 3/11 60000/60000 [==============================] - 65s 1ms/sample - loss: 0.0778 - mcor: 0.9470 - acc: 0.9735 - precision: 0.9735 - recall: 0.9735 - f1score: 0.9735 - val_loss: 0.0416 - val_mcor: 0.9693 - val_acc: 0.9852 - val_precision: 0.9847 - val_recall: 0.9847 - val_f1score: 0.9847
Epoch 4/11 60000/60000 [==============================] - 64s 1ms/sample - loss: 0.0683 - mcor: 0.9546 - acc: 0.9773 - precision: 0.9773 - recall: 0.9773 - f1score: 0.9773 - val_loss: 0.0371 - val_mcor: 0.9753 - val_acc: 0.9875 - val_precision: 0.9876 - val_recall: 0.9876 - val_f1score: 0.9876
Epoch 5/11 60000/60000 [==============================] - 66s 1ms/sample - loss: 0.0615 - mcor: 0.9587 - acc: 0.9793 - precision: 0.9793 - recall: 0.9793 - f1score: 0.9793 - val_loss: 0.0359 - val_mcor: 0.9759 - val_acc: 0.9878 - val_precision: 0.9879 - val_recall: 0.9879 - val_f1score: 0.9879
Epoch 6/11 60000/60000 [==============================] - 66s 1ms/sample - loss: 0.0563 - mcor: 0.9633 - acc: 0.9816 - precision: 0.9816 - recall: 0.9816 - f1score: 0.9816 - val_loss: 0.0342 - val_mcor: 0.9767 - val_acc: 0.9882 - val_precision: 0.9883 - val_recall: 0.9883 - val_f1score: 0.9883
Epoch 7/11 60000/60000 [==============================] - 67s 1ms/sample - loss: 0.0538 - mcor: 0.9632 - acc: 0.9816 - precision: 0.9816 - recall: 0.9816 - f1score: 0.9816 - val_loss: 0.0300 - val_mcor: 0.9802 - val_acc: 0.9900 - val_precision: 0.9901 - val_recall: 0.9901 - val_f1score: 0.9901
Epoch 8/11 60000/60000 [==============================] - 67s 1ms/sample - loss: 0.0529 - mcor: 0.9643 - acc: 0.9822 - precision: 0.9821 - recall: 0.9821 - f1score: 0.9821 - val_loss: 0.0307 - val_mcor: 0.9782 - val_acc: 0.9890 - val_precision: 0.9891 - val_recall: 0.9891 - val_f1score: 0.9891
Epoch 9/11 60000/60000 [==============================] - 68s 1ms/sample - loss: 0.0513 - mcor: 0.9663 - acc: 0.9832 - precision: 0.9832 - recall: 0.9832 - f1score: 0.9832 - val_loss: 0.0294 - val_mcor: 0.9780 - val_acc: 0.9896 - val_precision: 0.9890 - val_recall: 0.9890 - val_f1score: 0.9890
Epoch 10/11 60000/60000 [==============================] - 67s 1ms/sample - loss: 0.0477 - mcor: 0.9692 - acc: 0.9846 - precision: 0.9846 - recall: 0.9846 - f1score: 0.9846 - val_loss: 0.0291 - val_mcor: 0.9773 - val_acc: 0.9892 - val_precision: 0.9886 - val_recall: 0.9886 - val_f1score: 0.9886
Epoch 11/11 60000/60000 [==============================] - 66s 1ms/sample - loss: 0.0466 - mcor: 0.9681 - acc: 0.9840 - precision: 0.9841 - recall: 0.9841 - f1score: 0.9841 - val_loss: 0.0283 - val_mcor: 0.9794 - val_acc: 0.9896 - val_precision: 0.9897 - val_recall: 0.9897 - val_f1score: 0.9897
Test score: 0.028260348330519627
Test accuracy: 0.9792332

For the “binary classification using sigmoid with 0-1 vector output”, I get the following results, which show I DO NOT have the EQUALITY PROBLEM.

ann_bin_sigm()

Train on 60000 samples, validate on 10000 samples
Epoch 1/11 60000/60000 [==============================] - 4s 61us/sample - loss: 0.5379 - mcor: 0.4488 - acc: 0.7237 - precision: 0.7249 - recall: 0.7078 - f1score: 0.7133 - val_loss: 0.3585 - val_mcor: 0.7453 - val_acc: 0.8715 - val_precision: 0.8549 - val_recall: 0.8889 - val_f1score: 0.8705
Epoch 2/11 60000/60000 [==============================] - 3s 50us/sample - loss: 0.4248 - mcor: 0.6232 - acc: 0.8109 - precision: 0.8206 - recall: 0.7878 - f1score: 0.8018 - val_loss: 0.2906 - val_mcor: 0.7892 - val_acc: 0.8945 - val_precision: 0.9033 - val_recall: 0.8764 - val_f1score: 0.8888
Epoch 3/11 60000/60000 [==============================] - 3s 50us/sample - loss: 0.3910 - mcor: 0.6602 - acc: 0.8298 - precision: 0.8411 - recall: 0.8053 - f1score: 0.8214 - val_loss: 0.2740 - val_mcor: 0.8137 - val_acc: 0.9083 - val_precision: 0.9019 - val_recall: 0.9054 - val_f1score: 0.9030
Epoch 4/11 60000/60000 [==============================] - 3s 49us/sample - loss: 0.3738 - mcor: 0.6764 - acc: 0.8380 - precision: 0.8476 - recall: 0.8173 - f1score: 0.8307 - val_loss: 0.2689 - val_mcor: 0.8199 - val_acc: 0.9089 - val_precision: 0.9223 - val_recall: 0.8899 - val_f1score: 0.9051
Epoch 5/11 60000/60000 [==============================] - 3s 48us/sample - loss: 0.3596 - mcor: 0.6866 - acc: 0.8434 - precision: 0.8523 - recall: 0.8233 - f1score: 0.8364 - val_loss: 0.2672 - val_mcor: 0.8241 - val_acc: 0.9108 - val_precision: 0.9250 - val_recall: 0.8916 - val_f1score: 0.9070
Epoch 6/11 60000/60000 [==============================] - 3s 49us/sample - loss: 0.3529 - mcor: 0.6949 - acc: 0.8475 - precision: 0.8567 - recall: 0.8277 - f1score: 0.8408 - val_loss: 0.2529 - val_mcor: 0.8334 - val_acc: 0.9165 - val_precision: 0.9274 - val_recall: 0.8987 - val_f1score: 0.9122
Epoch 7/11 60000/60000 [==============================] - 3s 48us/sample - loss: 0.3416 - mcor: 0.7108 - acc: 0.8551 - precision: 0.8640 - recall: 0.8371 - f1score: 0.8489 - val_loss: 0.2429 - val_mcor: 0.8415 - val_acc: 0.9199 - val_precision: 0.9257 - val_recall: 0.9101 - val_f1score: 0.9173
Epoch 8/11 60000/60000 [==============================] - 3s 49us/sample - loss: 0.3359 - mcor: 0.7142 - acc: 0.8569 - precision: 0.8673 - recall: 0.8360 - f1score: 0.8501 - val_loss: 0.2422 - val_mcor: 0.8401 - val_acc: 0.9197 - val_precision: 0.9152 - val_recall: 0.9215 - val_f1score: 0.9177
Epoch 9/11 60000/60000 [==============================] - 3s 47us/sample - loss: 0.3297 - mcor: 0.7222 - acc: 0.8609 - precision: 0.8717 - recall: 0.8403 - f1score: 0.8545 - val_loss: 0.2461 - val_mcor: 0.8440 - val_acc: 0.9232 - val_precision: 0.9146 - val_recall: 0.9275 - val_f1score: 0.9205
Epoch 10/11 60000/60000 [==============================] - 3s 47us/sample - loss: 0.3263 - mcor: 0.7270 - acc: 0.8634 - precision: 0.8735 - recall: 0.8444 - f1score: 0.8576 - val_loss: 0.2354 - val_mcor: 0.8534 - val_acc: 0.9274 - val_precision: 0.9242 - val_recall: 0.9249 - val_f1score: 0.9239
Epoch 11/11 60000/60000 [==============================] - 3s 48us/sample - loss: 0.3215 - mcor: 0.7281 - acc: 0.8638 - precision: 0.8724 - recall: 0.8467 - f1score: 0.8582 - val_loss: 0.2372 - val_mcor: 0.8529 - val_acc: 0.9257 - val_precision: 0.9314 - val_recall: 0.9165 - val_f1score: 0.9234
Test score: 0.23720481104850769
Test accuracy: 0.8519195

I find it very interesting, but I don’t know why. Can anyone explain why this happens? Thank you!

I got it. I’ve tried a binary classification on Google servers. It is all about how many units the last layer has. If you have only one, everything is okay, but if you have two of them it’s not working. On the other hand, binary classification using two units with a softmax activation function (probably that’s what you do as well) is often suggested for better convergence, as far as I know. You can check my code below; I will create a post under the keras-vis library’s issue. https://colab.research.google.com/drive/1lmQ-hWcN4tsGMicd4dKnSjeTD-BdgJuE Best
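A quick numeric illustration of why the per-batch formulas collapse when the last layer has two softmax units (a NumPy sketch, not from the thread): with one-hot targets and a softmax prediction rounded to 0/1, both the “predicted positives” and the “possible positives” equal the batch size, so precision, recall, and f1 all reduce to the fraction of correctly classified rows, i.e. the accuracy.

import numpy as np

# Toy batch: 4 samples, one-hot labels over 2 classes; 3 of the 4 predictions are correct.
y_true = np.array([[1, 0], [0, 1], [0, 1], [1, 0]], dtype=float)
y_pred = np.array([[1, 0], [0, 1], [1, 0], [1, 0]], dtype=float)  # rounded softmax output

tp = np.sum(np.round(np.clip(y_true * y_pred, 0, 1)))            # 3 (= number of correct rows)
predicted_positives = np.sum(np.round(np.clip(y_pred, 0, 1)))    # 4 (one "1" per row)
possible_positives = np.sum(np.round(np.clip(y_true, 0, 1)))     # 4 (one "1" per row)

precision = tp / predicted_positives                              # 0.75
recall = tp / possible_positives                                  # 0.75
accuracy = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_true, axis=1))  # 0.75
print(precision, recall, accuracy)                                # all equal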

I have the same issue: the custom metrics all give the same results for binary classification on unbalanced data, and I am very positive there is nothing wrong in the model. It looks like the best way is to use the built-in Keras metrics rather than implementing them on the backend. Let me know if any of you understands what’s wrong here.
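For example, recent tf.keras versions ship stateful Precision and Recall metrics that accumulate true/false positives over the whole epoch instead of averaging per-batch values (a sketch assuming TF 2.x and a 1-unit sigmoid output):

import tensorflow as tf

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy',
                       tf.keras.metrics.Precision(name='precision'),
                       tf.keras.metrics.Recall(name='recall')])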

@baharian I guess it has nothing to do with metrics. Do you have the result for the loss too?

Did you run the code I provided?

metrics.py is a month old. I just did the PyPI pull to get Keras 1.2.2, so I can’t see how that could be the issue.