How to add an attention mechanism in Keras?

If you want attention along the time dimension, then this part of your code seems correct to me:

activations = LSTM(units, return_sequences=True)(embedded)

# compute an importance score for each timestep
attention = Dense(1, activation='tanh')(activations)
attention = Flatten()(attention)
attention = Activation('softmax')(attention)
# repeat the weights across the feature axis and realign to (batch, time, units)
attention = RepeatVector(units)(attention)
attention = Permute([2, 1])(attention)

# weight each timestep's activations by its attention score (Keras 1 merge API)
sent_representation = merge([activations, attention], mode="mul")
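For reference, here is a rough sketch of the same computation in current tf.keras. The sizes below (vocab_size, embed_dim, units, max_length) are placeholders rather than values from your model, and Multiply() stands in for merge(..., mode="mul"); treat it as a sketch of the idea, not a drop-in replacement.

import tensorflow as tf
from tensorflow.keras import layers

vocab_size, embed_dim, units, max_length = 10000, 128, 64, 100  # placeholder sizes

inputs = layers.Input(shape=(max_length,))
embedded = layers.Embedding(vocab_size, embed_dim)(inputs)

# (batch_size, max_length, units)
activations = layers.LSTM(units, return_sequences=True)(embedded)

# one score per timestep, softmax-normalised over the time axis
attention = layers.Dense(1, activation='tanh')(activations)  # (batch, max_length, 1)
attention = layers.Flatten()(attention)                      # (batch, max_length)
attention = layers.Activation('softmax')(attention)          # weights sum to 1 per sample

# broadcast the weights across the feature axis
attention = layers.RepeatVector(units)(attention)             # (batch, units, max_length)
attention = layers.Permute([2, 1])(attention)                 # (batch, max_length, units)

# element-wise product of activations and attention weights
sent_representation = layers.Multiply()([activations, attention])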

At this point you have worked out the attention weights, a tensor of shape (batch_size, max_length):

attention = Activation('softmax')(attention)
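As a quick sanity check (with illustrative values only, not your data), the softmax over the flattened scores gives one weight per timestep, and the weights for each sample sum to 1:

import tensorflow as tf

scores = tf.random.normal((2, 5))         # (batch_size=2, max_length=5), illustrative
weights = tf.nn.softmax(scores, axis=-1)  # same normalisation as Activation('softmax')
print(weights.shape)                      # (2, 5)
print(tf.reduce_sum(weights, axis=-1))    # each row sums to ~1.0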

I haven't seen this particular line before, so I can't say for sure whether it is correct:

K.sum(xin, axis=-2)
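For what it's worth, summing over axis=-2 of a (batch_size, max_length, units) tensor collapses the time axis, so wrapped in a Lambda layer it would produce one (batch_size, units) vector per sample, i.e. a weighted sum of the timestep activations. A small sketch with illustrative shapes, using tf.reduce_sum in place of K.sum:

import tensorflow as tf
from tensorflow.keras import layers

weighted = tf.ones((2, 5, 3))                  # (batch=2, max_length=5, units=3), illustrative
print(tf.reduce_sum(weighted, axis=-2).shape)  # (2, 3) -> one (units,) vector per sample

# the same reduction as a layer, e.g. applied to sent_representation above
sum_over_time = layers.Lambda(lambda xin: tf.reduce_sum(xin, axis=-2))

Whether that pooled vector is what you want depends on what the rest of your model expects.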

