How to implement a custom metric in Keras?
The problem is that y_pred and y_true are not NumPy arrays but either Theano or TensorFlow tensors. That's why you got this error.

You can define your own custom metrics, but you have to remember that their arguments are those tensors, not NumPy arrays.
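For example, a metric written purely with backend ops works on tensors directly (a minimal sketch, assuming the TensorFlow backend; the rmse metric here is illustrative, not part of the original answer):

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def rmse(y_true, y_pred):
    # only backend/tensor ops here -- no NumPy calls on y_true/y_pred
    return K.sqrt(K.mean(K.square(y_pred - y_true)))

# tensors go in, a scalar tensor comes out
value = rmse(tf.constant([0.0, 1.0]), tf.constant([0.0, 0.0]))
```

Passed via model.compile(..., metrics=[rmse]), Keras evaluates the function on tensors, which is exactly why a NumPy-based implementation fails there.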
Here I'm answering the OP's topic question rather than their exact problem. I'm doing this because this question shows up at the top when I google the topic.
You can implement a custom metric in two ways.
As mentioned in the Keras documentation:

import keras.backend as K

def mean_pred(y_true, y_pred):
    return K.mean(y_pred)

model.compile(optimizer='sgd',
              loss='binary_crossentropy',
              metrics=['accuracy', mean_pred])
But here you have to remember, as mentioned in Marcin Możejko's answer, that y_true and y_pred are tensors. So in order to calculate the metric correctly you need to use keras.backend functionality. Please look at this SO question for details: How to calculate F1 Macro in Keras?

Or you can implement it in a hacky way, as mentioned in a Keras GH issue. For that you need to use the callbacks argument of model.fit.

import numpy as np
import keras
from keras.optimizers import SGD
from sklearn.metrics import roc_auc_score

model = keras.models.Sequential()
# ...
sgd = SGD(lr=0.001, momentum=0.9)
model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])

class Metrics(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self._data = []

    def on_epoch_end(self, batch, logs={}):
        X_val, y_val = self.validation_data[0], self.validation_data[1]
        y_predict = np.asarray(model.predict(X_val))

        y_val = np.argmax(y_val, axis=1)
        y_predict = np.argmax(y_predict, axis=1)

        self._data.append({
            'val_rocauc': roc_auc_score(y_val, y_predict),
        })

    def get_data(self):
        return self._data

metrics = Metrics()
history = model.fit(X_train, y_train,
                    epochs=100,
                    validation_data=(X_val, y_val),
                    callbacks=[metrics])
metrics.get_data()
You can use model.predict() in your AUC metric function. However, this will iterate over batches, so you might be better off using model.predict_on_batch(). Assuming you have something like a softmax layer as output (something that outputs probabilities), you can use that together with sklearn.metrics to get the AUC.
from sklearn.metrics import roc_curve, auc

# from here
def sklearnAUC(test_labels, test_prediction):
    n_classes = 2
    # Compute ROC curve and ROC area for each class
    fpr = dict()
    tpr = dict()
    roc_auc = dict()
    for i in range(n_classes):
        # (actual labels, predicted probabilities)
        fpr[i], tpr[i], _ = roc_curve(test_labels[:, i], test_prediction[:, i])
        roc_auc[i] = auc(fpr[i], tpr[i])
    return round(roc_auc[0], 3), round(roc_auc[1], 3)
Now make your metric:

# gives a NumPy array like [[0.3, 0.7], [0.2, 0.8], ...]
Y_pred = model.predict_on_batch(X_test)
# Y_test looks something like [[0, 1], [1, 0], ...]
# auc1 and auc2 should be equal
auc1, auc2 = sklearnAUC(Y_test, Y_pred)
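As a quick sanity check on toy data (hypothetical values, not from the answer), the column-wise AUCs of complementary softmax outputs are indeed equal, here computed with sklearn's roc_auc_score, which matches the roc_curve/auc combination used above:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# toy one-hot labels and softmax-style predictions (hypothetical values)
Y_test = np.array([[0, 1], [1, 0], [0, 1], [1, 0], [0, 1]])
Y_pred = np.array([[0.3, 0.7], [0.6, 0.4], [0.7, 0.3], [0.9, 0.1], [0.2, 0.8]])

# AUC per class column
auc1 = round(roc_auc_score(Y_test[:, 0], Y_pred[:, 0]), 3)
auc2 = round(roc_auc_score(Y_test[:, 1], Y_pred[:, 1]), 3)
print(auc1, auc2)  # the two values match because the columns sum to 1
```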