Error when checking model input: expected convolution2d_input_1 to have 4 dimensions, but got array with shape (32, 32, 3)

The input shape you have defined is the shape of a single sample. The model itself expects an array of samples as input (even if it's an array of length 1). Your output really should be 4-d, with the first dimension enumerating the samples, i.e. for a single image you should return a shape …
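A minimal sketch of the fix, assuming a single 32x32 RGB image held in a NumPy array (the names img and model below are placeholders, not from the original question): add a leading batch dimension before passing the array to the network.

import numpy as np

img = np.random.rand(32, 32, 3)        # stand-in for a single 32x32 RGB sample
batch = np.expand_dims(img, axis=0)    # shape (1, 32, 32, 3): an array of one sample
# equivalently: batch = img.reshape((1,) + img.shape)
# predictions = model.predict(batch)   # model is whatever network raised the error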

How to add and remove new layers in keras after loading weights?

You can take the output of the last model and create a new model; the lower layers remain the same.

model.summary()
model.layers.pop()
model.layers.pop()
model.summary()

x = MaxPooling2D()(model.layers[-1].output)
o = Activation('sigmoid', name="loss")(x)
model2 = Model(inputs=in_img, outputs=[o])
model2.summary()

Check How to use models from keras.applications for transfer learning?

Update on Edit: The new error is because you …
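For context, here is a self-contained sketch of the same idea on a small made-up model (the layer sizes and shapes are illustrative, not from the original question): take an intermediate tensor of the existing model and build a new model by attaching fresh layers on top of it.

from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Activation
from tensorflow.keras.models import Model

# original model: a toy convolutional stack
in_img = Input(shape=(28, 28, 1))
x = Conv2D(16, (3, 3), activation='relu')(in_img)
x = Conv2D(8, (3, 3), activation='relu')(x)
base = Model(inputs=in_img, outputs=x)
base.summary()

# grab an intermediate output and add new layers on top of it
y = MaxPooling2D()(base.layers[-2].output)   # skip the last Conv2D
o = Activation('sigmoid', name='loss')(y)
model2 = Model(inputs=in_img, outputs=o)
model2.summary()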

TimeDistributed(Dense) vs Dense in Keras – Same number of parameters

TimeDistributed(Dense) applies the same Dense layer to every time step during GRU/LSTM cell unrolling, so the error function is computed between the predicted label sequence and the actual label sequence (which is normally the requirement for sequence-to-sequence labeling problems). However, with return_sequences=False, the Dense layer is applied only once, at the last cell. This is normally …
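A short sketch of the parameter-count point (the dimensions below are made up for illustration): both variants own a single Dense kernel of shape (lstm_units, dense_units), so they have the same number of parameters; they differ only in how often that kernel is applied.

from tensorflow.keras.layers import Input, LSTM, Dense, TimeDistributed
from tensorflow.keras.models import Model

inp = Input(shape=(10, 8))                      # 10 time steps, 8 features

# variant 1: the same Dense applied at every time step
seq = LSTM(32, return_sequences=True)(inp)      # (batch, 10, 32)
out_seq = TimeDistributed(Dense(5))(seq)        # (batch, 10, 5)

# variant 2: Dense applied only to the last step
last = LSTM(32, return_sequences=False)(inp)    # (batch, 32)
out_last = Dense(5)(last)                       # (batch, 5)

Model(inp, out_seq).summary()    # the Dense contributes 32*5 + 5 = 165 params
Model(inp, out_last).summary()   # same 165 params for the Dense layer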

How do I get the weights of a layer in Keras?

If you want to get weights and biases of all layers, you can simply use:

for layer in model.layers:
    print(layer.get_config(), layer.get_weights())

This will print all information that's relevant. If you want the weights directly returned as numpy arrays, you can use:

first_layer_weights = model.layers[0].get_weights()[0]
first_layer_biases = model.layers[0].get_weights()[1]
second_layer_weights = model.layers[1].get_weights()[0]
second_layer_biases = model.layers[1].get_weights()[1]

etc.
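A small end-to-end sketch (the two-layer model below is made up for illustration) showing the array shapes you get back:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(16, activation='relu', input_shape=(8,)),
    Dense(3, activation='softmax'),
])

weights, biases = model.layers[0].get_weights()
print(weights.shape)   # (8, 16): kernel of the first Dense layer
print(biases.shape)    # (16,): its bias vector

# layers can also be looked up by name: model.get_layer(<layer_name>).get_weights()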

Negative dimension size caused by subtracting 3 from 1 for ‘conv2d_2/convolution’

By default, Convolution2D (https://keras.io/layers/convolutional/) expects the input to be in the format (samples, rows, cols, channels), which is "channels-last". Your data seems to be in the format (samples, channels, rows, cols). You should be able to fix this using the optional keyword data_format="channels_first" when declaring the Convolution2D layer:

model.add(Convolution2D(32, (3, 3), activation='relu', input_shape=(1, 28, 28), data_format="channels_first"))
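An alternative sketch, if you would rather keep the default channels-last layout: transpose the data instead of reconfiguring the layer (the array here is a placeholder, and Conv2D is the same layer the answer calls Convolution2D).

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D

x_train = np.random.rand(16, 1, 28, 28)          # stand-in for channels-first data
x_train = np.transpose(x_train, (0, 2, 3, 1))    # now (16, 28, 28, 1), channels-last

model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))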

Specify connections in NN (in keras)

The simplest way I can think of, if you have this matrix correctly shaped, is to derive from the Dense layer and simply add the matrix in the code, multiplying it by the original weights:

class CustomConnected(Dense):

    def __init__(self, units, connections, **kwargs):
        # this is matrix A
        self.connections = connections
        # initialize the original Dense with all the usual arguments
        super(CustomConnected, self).__init__(units, **kwargs)

    def call(self, inputs):
        …
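The excerpt stops before the call method. A complete sketch of the same idea (my own filling-in, not the original answer's exact code, and assuming the tf.keras API) masks the kernel with the 0/1 connection matrix before the usual matrix multiply:

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

class CustomConnected(Dense):

    def __init__(self, units, connections, **kwargs):
        super(CustomConnected, self).__init__(units, **kwargs)
        # connections: 0/1 matrix of shape (input_dim, units) selecting allowed weights
        self.connections = tf.constant(connections, dtype='float32')

    def call(self, inputs):
        # only the chosen connections contribute to the output
        output = tf.matmul(inputs, self.kernel * self.connections)
        if self.use_bias:
            output = tf.nn.bias_add(output, self.bias)
        if self.activation is not None:
            output = self.activation(output)
        return output

# usage: connect input i to output j only where mask[i, j] == 1
mask = np.array([[1., 0.], [0., 1.], [1., 1.]])   # 3 inputs -> 2 outputs
inp = Input(shape=(3,))
out = CustomConnected(2, mask)(inp)
model = Model(inp, out)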

When does keras reset an LSTM state?

Checking with some tests, I reached the following conclusion, which agrees with the documentation and with Nassim's answer. First, there isn't a single state in a layer, but one state per sample in the batch: there are batch_size parallel states in such a layer.

stateful=False

In a stateful=False case, all the states are …
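A brief sketch of how the two modes behave in practice (a minimal example assuming the tf.keras API; the batch size and shapes are made up): with stateful=False, the states are reset automatically after every batch, while with stateful=True they carry over from batch to batch until you call reset_states() yourself.

import numpy as np
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

batch_size, timesteps, features = 4, 10, 3

inp = Input(batch_shape=(batch_size, timesteps, features))   # fixed batch size required
h = LSTM(8, stateful=True)(inp)
out = Dense(1)(h)
model = Model(inp, out)
model.compile(optimizer='adam', loss='mse')

x = np.random.rand(batch_size, timesteps, features)
y = np.random.rand(batch_size, 1)

# states persist from one call to the next while stateful=True ...
model.train_on_batch(x, y)
model.train_on_batch(x, y)   # continues from the previous batch's final state

# ... until you clear them explicitly, e.g. at the end of a long sequence
model.reset_states()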