Keras, how do I predict after I trained a model?

model.predict() expects the first parameter to be a numpy array. You supply a list, which does not have the shape attribute a numpy array has. Otherwise your code looks fine, except that you are doing nothing with the prediction. Make sure you store it in a variable, for example like this:

prediction = model.predict(np.array(tk.texts_to_sequences(text)))
print(prediction)
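A minimal sketch of the full flow, assuming tk is a Tokenizer already fitted on the training texts and the model was trained on padded integer sequences of length maxlen (both names are assumptions, not from the original question):

import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences

texts = ["some new sentence to classify"]    # raw strings, not yet tokenized
seqs = tk.texts_to_sequences(texts)          # list of lists of word indices
x = pad_sequences(seqs, maxlen=maxlen)       # numpy array with a proper shape attribute
prediction = model.predict(x)                # shape (n_samples, n_outputs)
print(prediction)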

Keras: how to save the training history attribute of the history object

What I use is the following:

with open('/trainHistoryDict', 'wb') as file_pi:
    pickle.dump(history.history, file_pi)

In this way I save the history as a dictionary in case I want to plot the loss or accuracy later on. Later, when you want to load the history again, you can use:

with open('/trainHistoryDict', 'rb') as file_pi:
    history = pickle.load(file_pi)

… Read more
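A self-contained sketch of the same idea, assuming a compiled Keras model and training data x_train, y_train are already in scope; the file name is just an example:

import pickle

history = model.fit(x_train, y_train, epochs=10, validation_split=0.1)

# history.history is a plain dict, e.g. {'loss': [...], 'val_loss': [...], ...},
# so it pickles cleanly even though the History object itself may not
with open('trainHistoryDict.pkl', 'wb') as file_pi:
    pickle.dump(history.history, file_pi)

# later, in a fresh session:
with open('trainHistoryDict.pkl', 'rb') as file_pi:
    loaded_history = pickle.load(file_pi)
print(loaded_history['loss'])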

Keras split train test set when using ImageDataGenerator

Keras has now added a train/validation split from a single directory using ImageDataGenerator:

train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    validation_split=0.2)  # set validation split

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary',
    subset='training')  # set as training data

validation_generator = train_datagen.flow_from_directory(
    train_data_dir,  # same directory as training data
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary',
    subset='validation')

… Read more
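A sketch of how the two generators are then consumed during training, assuming the variables above and a compiled model are in scope (epochs is a placeholder). Newer tf.keras versions accept generators in model.fit directly; older Keras versions use model.fit_generator with the same arguments:

model.fit(
    train_generator,
    steps_per_epoch=train_generator.samples // batch_size,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // batch_size,
    epochs=epochs)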

How to reduce a fully-connected (`"InnerProduct"`) layer using truncated SVD

Some linear-algebra background: Singular Value Decomposition (SVD) is a decomposition of any matrix W into three matrices, W = U S V*, where U and V are orthonormal matrices and S is diagonal with elements of decreasing magnitude on the diagonal. One of the interesting properties of SVD is that it allows you to easily approximate … Read more
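A minimal numpy sketch of the idea (the shapes are illustrative, not from the original question): keeping only the top k singular values replaces the single weight matrix W with two smaller factors, which is exactly what splitting one InnerProduct layer into two achieves:

import numpy as np

W = np.random.randn(1000, 4096)    # weight matrix of the fully-connected layer, (out_dim, in_dim)
k = 128                            # number of singular values to keep

U, s, Vt = np.linalg.svd(W, full_matrices=False)

# rank-k approximation: W ~= (U_k * s_k) @ Vt_k
A = U[:, :k] * s[:k]               # (out_dim, k) -> second, smaller InnerProduct layer
B = Vt[:k, :]                      # (k, in_dim)  -> first InnerProduct layer

W_approx = A @ B
print(np.linalg.norm(W - W_approx) / np.linalg.norm(W))  # relative approximation error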

How to create caffe.deploy from train.prototxt

There are two main differences between a "train" prototxt and a "deploy" one: 1. Inputs: While for training the data is fixed to a pre-processed training dataset (lmdb/HDF5 etc.), deploying the net requires it to process other inputs in a more "random" fashion. Therefore, the first change is to remove the input layers (layers that push … Read more
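As a rough illustration of why the deploy inputs must be free-form, here is a pycaffe sketch of running a converted deploy net on a single arbitrary input; the file names, the input size, and the output blob name 'prob' are assumptions:

import caffe
import numpy as np

net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

# any input can be fed in: reshape the data blob to the incoming batch,
# then copy a preprocessed image into it (a random array stands in here)
net.blobs['data'].reshape(1, 3, 224, 224)
net.blobs['data'].data[...] = np.random.rand(1, 3, 224, 224)

out = net.forward()
print(out['prob'].shape)   # class probabilities; 'prob' is the usual softmax output name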

How to implement Grad-CAM on a trained network

One thing I don't get: if you have your own classifier (2), why then use imagenet_utils.decode_predictions? I'm not sure if my following answer will satisfy you or not, but here are some pointers.

DataSet

import tensorflow as tf
import numpy as np

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

# train set / data
x_train = … Read more
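For the Grad-CAM step itself, here is a short sketch of the usual tf.keras recipe, assuming model is a trained CNN and last_conv_name is the name of its last convolutional layer (both names are assumptions here):

import tensorflow as tf
import numpy as np

def grad_cam(model, image, last_conv_name, class_index=None):
    # model that maps the input to (last conv feature maps, predictions)
    grad_model = tf.keras.models.Model(
        model.inputs, [model.get_layer(last_conv_name).output, model.output])

    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        class_score = preds[:, class_index]

    # gradient of the class score w.r.t. the conv feature maps
    grads = tape.gradient(class_score, conv_out)
    # channel-wise importance weights: global average of the gradients
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # weighted sum of the feature maps, then ReLU and normalization
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    cam = cam / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()

The returned map can then be resized to the input resolution and overlaid on the image as a heatmap.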

Fine Tuning of GoogLeNet Model

Assuming you are trying to do image classification, these should be the steps for finetuning a model:

1. Classification layer
The original classification layer "loss3/classifier" outputs predictions for 1000 classes (its num_output is set to 1000). You'll need to replace it with a new layer with an appropriate num_output. Replacing the classification layer: change the layer's name … Read more
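A sketch of the corresponding training step in pycaffe, assuming the prototxt has already been edited as described; the solver and weights file names are placeholders. Renaming the classifier layer means copy_from skips it, so it starts from a fresh initialization while every other layer keeps the pretrained GoogLeNet weights:

import caffe

caffe.set_mode_gpu()

# solver.prototxt points at the edited train prototxt with the renamed classifier layer
solver = caffe.SGDSolver('solver.prototxt')

# weights are copied only into layers whose names match the pretrained net,
# so the renamed classification layer keeps its random initialization
solver.net.copy_from('bvlc_googlenet.caffemodel')

solver.solve()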