How can I visualize the weights (variables) in a CNN in TensorFlow?

To visualize the weights, you can use a tf.image_summary() op to transform a convolutional filter (or a slice of a filter) into a summary proto, write that proto to a log file using a tf.train.SummaryWriter, and visualize the log using TensorBoard.

Let’s say you have the following (simplified) program:

import tensorflow as tf

# A conv2d filter must be 4-D: [height, width, in_channels, out_channels].
filter = tf.Variable(tf.truncated_normal([8, 8, 3, 16]))
images = tf.placeholder(tf.float32, shape=[None, 28, 28, 3])

conv = tf.nn.conv2d(images, filter, strides=[1, 1, 1, 1], padding="SAME")

# More ops...
loss = ...
optimizer = tf.train.GradientDescentOptimizer(0.01)
train_op = optimizer.minimize(loss)

# Transpose the filter to [out_channels, height, width, in_channels] so that
# each of the 16 output channels is rendered as one 8x8 RGB image.
filter_summary = tf.image_summary('filter', tf.transpose(filter, [3, 0, 1, 2]),
                                  max_images=16)

sess = tf.Session()
sess.run(tf.initialize_all_variables())
summary_writer = tf.train.SummaryWriter('/tmp/logs', sess.graph_def)
for i in range(10000):
  sess.run(train_op)  # (feed the training batch via feed_dict in a real program)
  if i % 10 == 0:
    # Evaluate the summary op and log the result every 10 steps.
    summary_writer.add_summary(sess.run(filter_summary), i)

After doing this, you can start TensorBoard to visualize the logs in /tmp/logs, and you will be able to see a visualization of the filter.
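If TensorBoard was installed along with TensorFlow, it can typically be launched from a shell and pointed at that same log directory, for example:

tensorboard --logdir=/tmp/logs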

Note that this trick visualizes depth-3 filters as RGB images (to match the channels of the input image). If you have deeper filters, or they don't make sense to interpret as color channels, you can use the tf.split() op to split the filter on the depth dimension and generate one image summary per depth slice, as sketched below.
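Here is a minimal sketch of that splitting approach, using the same legacy API as above. The deeper filter shape, the 32 output channels, and the names deep_filter/depth_slices are hypothetical, and this older version of tf.split is assumed to take its arguments as (split_dim, num_split, value):

# Hypothetical deeper filter: 8x8 spatial extent, 64 input channels, 32 output channels.
deep_filter = tf.Variable(tf.truncated_normal([8, 8, 64, 32]))

# Split along the input-depth dimension (axis 2) into 64 slices of shape [8, 8, 1, 32].
depth_slices = tf.split(2, 64, deep_filter)

filter_summaries = []
for d, depth_slice in enumerate(depth_slices):
  # Transpose each slice to [32, 8, 8, 1] so that image_summary renders each
  # output channel as a separate grayscale image.
  imgs = tf.transpose(depth_slice, [3, 0, 1, 2])
  filter_summaries.append(
      tf.image_summary('filter_depth_%d' % d, imgs, max_images=32))

Each of these summaries can then be evaluated and written in the same way as filter_summary in the loop above.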
