keras: Getting ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (315, 720, 1280)

Hi,

I am trying to train a model on some grayscale images. The model I am using is:

from keras.models import Sequential
from keras.layers import (Activation, BatchNormalization, Conv2D, Dense,
                          Dropout, Flatten, MaxPooling2D)

model = Sequential()
model.add(Conv2D(8, (3, 3), input_shape=(720, 1280, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())

model.add(Conv2D(16, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())
#model.add(Dropout(0.6))

model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())

model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())
model.add(Dropout(0.5))

model.add(Flatten())  # this converts our 3D feature maps to 1D feature vectors
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))

Then I compile it:

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

and then fit it:

model.fit(np.array(X_train), np.array(y_train_cat), batch_size=32,
          epochs=10, verbose=1, validation_split=0.1)

Each image has shape (720, 1280) as a NumPy array, and I get the error:

ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (315, 720, 1280)

If I don't wrap the inputs in np.array in fit, I get the following error:

Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 315 arrays:

Can you please suggest what I should do? I have tried reshaping each image to 3D, something like (720, 1280, 1), but it's not working.
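For reference, a minimal sketch that reproduces the mismatch, using random arrays in place of the real images:

import numpy as np

# 315 grayscale images, each a 2D array of shape (720, 1280)
X_train = [np.random.rand(720, 1280) for _ in range(315)]

batch = np.array(X_train)
print(batch.shape)  # (315, 720, 1280) -- 3D, but the model expects 4D

# conv2d_1 was built with input_shape=(720, 1280, 1), so Keras expects
# (num_images, 720, 1280, 1): an explicit channel axis even for grayscale.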

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 15 (1 by maintainers)

Most upvoted comments

I used the numpy function to change the dimensions of the image. I'll send the exact function soon.

When reshaping test images, we should be careful to match the input size the model expects, e.g.:

model.predict(X_train.reshape(10, 28, 28, 1))  # for 10 input images

If we want to predict on a single image, the leading dimension should be 1 (one image), not the number of features (which could be anything, like 789 or 287, etc.):

model.predict(X_train.reshape(1, 28, 28, 3))
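As a sketch of the single-image case (the toy model and the 28x28 shape below are made up for illustration):

import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, Dense, Flatten

# Toy grayscale model; 28x28 inputs chosen only for the example
model = Sequential()
model.add(Conv2D(4, (3, 3), input_shape=(28, 28, 1)))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))

img = np.random.rand(28, 28)        # one grayscale image
batch = img.reshape(1, 28, 28, 1)   # leading 1 = a batch of one image
print(model.predict(batch).shape)   # (1, 1)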

You need X_train to have a fourth dimension, just like the error message says: X_train.reshape([-1, 720, 1280, 1]).
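Applied to the arrays from the question, a minimal sketch of that fix (assuming the model defined above):

import numpy as np

X = np.array(X_train)            # shape (315, 720, 1280)
X = X.reshape(-1, 720, 1280, 1)  # shape (315, 720, 1280, 1)
model.fit(X, np.array(y_train_cat), batch_size=32,
          epochs=10, verbose=1, validation_split=0.1)

Note that with a single sigmoid output unit, the labels should be a single binary column and the loss should be binary_crossentropy rather than categorical_crossentropy.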

This issue isn’t related to a bug/enhancement/feature request or other accepted types of issue.

To ask questions, please see the following resources:

Thanks!

If you think I made a mistake, please re-open this issue.

Hey @1q2q1q1q, use the np.resize function to solve that. I apologise for the late response. But use np.resize(img, (-1, <image shape>)). That should solve the issue.
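For what it's worth, a small sketch of adding the missing axes with reshape instead; unlike np.resize, reshape preserves the pixel data (np.resize fills the new shape by repeating or truncating the data, and it rejects -1 entries in the shape tuple):

import numpy as np

img = np.random.rand(720, 1280)       # one grayscale image
img4d = img.reshape(1, 720, 1280, 1)  # add batch and channel axes
# equivalently: img4d = img[np.newaxis, ..., np.newaxis]
print(img4d.shape)                    # (1, 720, 1280, 1)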