CNN with keras, accuracy not improving
The issue is caused by a mismatch between the number of output classes (three) and your choice of final-layer activation (sigmoid) and loss function (binary cross-entropy).
The sigmoid function 'squashes' real values into the range [0, 1], but it is designed for binary (two-class) problems only. For multiple classes you need something like the softmax function. Softmax is a generalised version of sigmoid (the two are equivalent in the two-class case).
The loss function also needs to be changed to one that can handle multiple classes - categorical cross-entropy will work in this case.
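As a rough illustration (plain NumPy, with made-up numbers, not taken from your code), softmax turns a vector of raw scores into a probability distribution over the three classes, and categorical cross-entropy then compares that distribution against a one-hot target:

import numpy as np

logits = np.array([2.0, 0.5, -1.0])             # raw scores for 3 classes (made-up values)
probs = np.exp(logits) / np.exp(logits).sum()   # softmax: non-negative, sums to 1
target = np.array([1.0, 0.0, 0.0])              # one-hot target for class 0
loss = -np.sum(target * np.log(probs))          # categorical cross-entropy, here -log(probs[0])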
In terms of code, if you modify the model definition and compilation to the version below, it should work.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Dropout, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))  # input_shape as defined for your images
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(3))               # one output unit per class
model.add(Activation('softmax'))  # softmax instead of sigmoid

model.compile(loss='categorical_crossentropy',  # multi-class loss
              optimizer='rmsprop',
              metrics=['accuracy'])
Finally, you need to specify class_mode='categorical' in your data generators. That ensures the targets are formatted as a 3-column one-hot matrix, with a one in the column corresponding to the correct class and zeroes elsewhere. This target format is what the categorical_crossentropy loss function expects.
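For reference, a minimal sketch of the generator setup (the directory path, image size and batch size here are placeholders, not taken from your code):

from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
    'data/train',               # placeholder path: one sub-folder per class
    target_size=(150, 150),     # placeholder image size
    batch_size=32,
    class_mode='categorical')   # yields one-hot targets for categorical_crossentropy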
Minor correction:
model.add(Dense(1))
Should be:
model.add(Dense(3))
It has to match the number of classes in the output.
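As a quick sanity check (assuming the model defined above), the final layer's output width should equal the number of classes in your one-hot targets:

print(model.output_shape)  # expect (None, 3) for three classes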