Keras input_shape for conv2d and manually loaded images
Your input_shape is correct, i.e. input_shape=(286, 384, 1).
Reshape your input_image to 4D: [batch_size, img_height, img_width, number_of_channels]
input_image = input_image.reshape(85, 286, 384, 1)
and then call
model.fit(input_image, label)
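A minimal sketch of that reshape step for manually loaded grayscale images, assuming they are read with PIL into 2D numpy arrays (the file paths and the model/label variables are placeholders, not from the answer):
import numpy as np
from PIL import Image

image_files = ["img_000.png", "img_001.png"]  # hypothetical paths, 85 files in practice
images = [np.array(Image.open(f).convert("L")) for f in image_files]  # each array is (286, 384)

input_image = np.stack(images)                       # (n_images, 286, 384)
input_image = input_image.reshape(-1, 286, 384, 1)   # add the channel dimension

# model.fit(input_image, label)  # label: one entry (or one-hot row) per image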
Set the input_shape to (286, 384, 1). Now the model expects an input with 4 dimensions. This means that you have to reshape your image with .reshape(n_images, 286, 384, 1). Now you have added an extra dimension without changing the data, and your model is ready to run. Basically, you need to reshape your data to (n_images, x_shape, y_shape, channels).
The cool thing is that you can also use an RGB image as input. Just change channels to 3 (a sketch of this variant follows the example below).
Also check this answer: Keras input explanation: input_shape, units, batch_size, dim, etc
Example
import numpy as np
from keras.models import Sequential
from keras.layers.convolutional import Convolution2D
from keras.layers.core import Flatten, Dense, Activation
from keras.utils import np_utils

# Create the model: a single conv layer followed by a 2-class softmax
model = Sequential()
model.add(Convolution2D(32, kernel_size=(3, 3), activation='relu', input_shape=(286, 384, 1)))
model.add(Flatten())
model.add(Dense(2))
model.add(Activation('softmax'))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

# Create random data
n_images = 100
data = np.random.randint(0, 2, n_images * 286 * 384)
labels = np.random.randint(0, 2, n_images)
labels = np_utils.to_categorical(labels)

# Add the channel dimension to the images: (n_images, 286, 384, 1)
data = data.reshape(n_images, 286, 384, 1)

# Fit the model
model.fit(data, labels, verbose=1)
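For an RGB input only the channel dimension changes. Here is a minimal sketch of that variant (not from the original answer), reusing the imports, n_images, and labels from the example above; the name model_rgb is hypothetical:
# Same model, but with 3 input channels instead of 1
model_rgb = Sequential()
model_rgb.add(Convolution2D(32, kernel_size=(3, 3), activation='relu',
                            input_shape=(286, 384, 3)))
model_rgb.add(Flatten())
model_rgb.add(Dense(2))
model_rgb.add(Activation('softmax'))
model_rgb.compile(loss='binary_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])

# RGB data already has a channel axis, so no reshape is needed
# if it is loaded as (n_images, 286, 384, 3)
rgb_data = np.random.randint(0, 256, (n_images, 286, 384, 3))
model_rgb.fit(rgb_data, labels, verbose=1)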