Keras callback keeps skipping checkpoint saves, claiming val_acc is missing

val_acc is indeed missing, but not because of a bug: there is no validation data for Keras to compute it on. Add some through the validation_data parameter of fit, or use validation_split.
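For a concrete picture, here is a minimal self-contained sketch (the toy model and random data are illustrative assumptions, not from the question) showing how validation_split gives ModelCheckpoint a validation metric to monitor. Note that recent tf.keras names the metric val_accuracy rather than the older val_acc:

```python
import os
import tempfile

import numpy as np
from tensorflow import keras

# Toy stand-ins for the question's X_modified / Y_modified
X = np.random.rand(100, 8).astype("float32")
y = np.random.randint(0, 2, size=(100, 1))

model = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

filepath = os.path.join(tempfile.mkdtemp(), "best.h5")
# Recent tf.keras reports 'val_accuracy'; very old Keras used 'val_acc'
checkpoint = keras.callbacks.ModelCheckpoint(filepath,
                                             monitor="val_accuracy",
                                             save_best_only=True,
                                             mode="max")

# validation_split holds out the last 20% of samples,
# so val_* metrics exist and the checkpoint can be saved
history = model.fit(X, y, validation_split=0.2, epochs=2, batch_size=10,
                    callbacks=[checkpoint], verbose=0)
print(sorted(history.history))
```

With the split in place the history contains val_accuracy and val_loss, and the checkpoint file is written as soon as the monitored metric improves.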


You are trying to checkpoint the model using the following code:

# Save the checkpoint in the /output folder
filepath = "output/text-gen-best.hdf5"

# Keep only a single checkpoint, the best over validation accuracy.
checkpoint = ModelCheckpoint(filepath,
                            monitor='val_acc',
                            verbose=1,
                            save_best_only=True,
                            mode='max')

ModelCheckpoint uses the monitor argument to decide whether or not to save the model. In your code that is val_acc, so it will save the weights only when val_acc improves.

Now, in your fit code,

model.fit(X_modified, Y_modified, epochs=100, batch_size=50, callbacks=[checkpoint])

you haven't provided any validation data, so val_acc is never computed. ModelCheckpoint can't save the weights because the value it is told to monitor doesn't exist.
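You can see the gap directly: without validation data the training history contains no val_* keys at all, so there is nothing for the callback to compare against. (The toy model and data below are illustrative assumptions, not from the question.)

```python
import numpy as np
from tensorflow import keras

# Toy stand-ins for X_modified / Y_modified
X = np.random.rand(60, 8).astype("float32")
y = np.random.randint(0, 2, size=(60, 1))

model = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# No validation_data and no validation_split: Keras never computes
# a validation metric, so a callback monitoring val_acc would skip saving
history = model.fit(X, y, epochs=1, batch_size=10, verbose=0)
print(sorted(history.history))  # only training metrics, no 'val_*' keys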

In order to checkpoint based on val_acc, you must provide some validation data, like this:

model.fit(X_modified, Y_modified, validation_data=(X_valid, y_valid), epochs=100, batch_size=50, callbacks=[checkpoint])

If, for whatever reason, you don't want to use validation data but still want checkpointing, change ModelCheckpoint to work off acc or loss instead, like this:

# Save the checkpoint in the /output folder
filepath = "output/text-gen-best.hdf5"

# Keep only a single checkpoint, the best over training accuracy.
checkpoint = ModelCheckpoint(filepath,
                            monitor='acc',
                            verbose=1,
                            save_best_only=True,
                            mode='max')

Keep in mind that you have to change mode to min if you are going to monitor the loss, since a lower loss is an improvement.
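As a sketch, the loss-monitoring variant would look like this (same filepath as above; only monitor and mode change):

```python
from tensorflow.keras.callbacks import ModelCheckpoint

# Monitor training loss instead of a validation metric; mode='min'
# because a *decrease* in loss counts as an improvement
checkpoint = ModelCheckpoint("output/text-gen-best.hdf5",
                             monitor="loss",
                             verbose=1,
                             save_best_only=True,
                             mode="min")
```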