Return number of epochs for EarlyStopping callback in Keras
You can also leverage the History() callback to find out how many epochs the training ran for. For example:
from keras.callbacks import History, EarlyStopping

history = History()
callbacks = [history, EarlyStopping(monitor='val_loss', patience=5, verbose=1, min_delta=1e-4)]
model.fit_generator(...., callbacks=callbacks)
# One entry is recorded per epoch actually run
number_of_epochs_it_ran = len(history.history['loss'])
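(In fact, Keras attaches a History callback to every training run automatically and returns it from fit/fit_generator, so history = model.fit_generator(....) gives you the same record even without creating History() yourself.)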
Subtracting the patience value from the total number of epochs - as suggested in this comment - might not work in some situations. For instance, if you set epochs=100 and patience=20, and the best accuracy/loss value is found at epoch 90, training will run until epoch 100. With this approach you would therefore get a wrong number (100 - 20 = 80).
Moreover, as noted in this comment, EarlyStopping.stopped_epoch only gives you the epoch at which training stopped, NOT the epoch at which the best weights were saved. This distinction matters when you set restore_best_weights=True or rely on ModelCheckpoint to save the best model before stopping the training.
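For completeness, here is a minimal sketch of such a setup (the file path and patience value are illustrative):

from keras.callbacks import EarlyStopping, ModelCheckpoint

# Restore the weights from the best epoch when training stops early
early_stopping = EarlyStopping(monitor='val_loss', patience=20,
                               restore_best_weights=True, verbose=1)
# Independently persist the best model to disk
checkpoint = ModelCheckpoint('best_model.h5', monitor='val_loss',
                             save_best_only=True, verbose=1)

model.fit(...., callbacks=[early_stopping, checkpoint])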
Therefore my solution is to get the index of the best value in the model's history array. Assuming that the monitored metric is the validation accuracy and relying on numpy, here is some code:
import numpy as np

model.fit(...)
# Epoch-by-epoch record of the monitored metric
hist = model.history.history['val_acc']
# Zero-based index of the epoch with the highest validation accuracy
n_epochs_best = np.argmax(hist)
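Note that np.argmax returns a zero-based index, so the human-readable epoch number is n_epochs_best + 1. If the monitored metric is a loss, use np.argmin instead; and on recent Keras/TensorFlow versions the history key is 'val_accuracy' rather than 'val_acc'.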
Use the EarlyStopping.stopped_epoch attribute: keep the callback in a separate variable, say callback, and check callback.stopped_epoch after training has stopped.
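A minimal sketch of that, reusing the settings from the first answer:

from keras.callbacks import EarlyStopping

callback = EarlyStopping(monitor='val_loss', patience=5, verbose=1)
model.fit(...., callbacks=[callback])
# stopped_epoch stays 0 if early stopping never triggered
print(callback.stopped_epoch)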