Specify connections in NN (in Keras)

The simplest way I can think of, if you have this matrix correctly shaped, is to subclass the Dense layer and simply multiply the original weights by the matrix in the code:

from keras.layers import Dense

class CustomConnected(Dense):

    def __init__(self, units, connections, **kwargs):

        # this is matrix A
        self.connections = connections

        # initialize the original Dense with all the usual arguments
        super(CustomConnected, self).__init__(units, **kwargs)

    def call(self, inputs):

        # mask the kernel before calling the original call:
        self.kernel = self.kernel * self.connections

        # call (and return) the original calculations:
        return super(CustomConnected, self).call(inputs)

Using:

model.add(CustomConnected(units,matrixA))
model.add(CustomConnected(hidden_dim2, matrixB,activation='tanh')) #can use all the other named parameters...

Notice that all the neurons/units still have a bias added at the end. The argument use_bias=False will still work if you don't want biases. You can also do exactly the same thing with a vector B, for instance, and mask the original biases with self.bias = self.bias * vectorB, as sketched below.
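
If you want to try that, a rough sketch could be the subclass below (CustomConnectedWithBiasMask and bias_mask are just illustrative names of my own, and it shares the property-changing caveat I mention further down):

class CustomConnectedWithBiasMask(CustomConnected):

    def __init__(self, units, connections, bias_mask, **kwargs):
        # bias_mask plays the role of vector B, shaped (units,)
        self.bias_mask = bias_mask
        super(CustomConnectedWithBiasMask, self).__init__(units, connections, **kwargs)

    def call(self, inputs):
        # mask the bias the same way the kernel is masked
        if self.use_bias:
            self.bias = self.bias * self.bias_mask
        return super(CustomConnectedWithBiasMask, self).call(inputs)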

Hint for testing: use different input and output dimensions, so you can be sure that your matrix A has the correct shape.
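
For instance, a quick check of that kind could look like this (input_dim = 5 and units = 8 are arbitrary numbers picked just for the sketch):

import numpy as np
from keras.models import Sequential

input_dim = 5   # number of input features
units = 8       # number of output units (different from input_dim on purpose)

# binary connection matrix A, shaped like the Dense kernel: (input_dim, units)
# matrixA[i, j] == 1 means input i feeds unit j
matrixA = np.random.randint(0, 2, size=(input_dim, units)).astype('float32')

model = Sequential()
model.add(CustomConnected(units, matrixA, input_shape=(input_dim,), activation='relu'))
model.compile(optimizer='adam', loss='mse')

# a wrongly shaped matrixA would fail here, which is the point of the test
print(model.predict(np.random.rand(10, input_dim)))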


I just realized that my code is potentially buggy, because I'm changing a property (self.kernel) that is used by the original Dense layer. If weird behaviors or error messages appear, you can try another call method:

def call(self, inputs):
    # apply the connection mask directly in the matrix multiplication
    output = K.dot(inputs, self.kernel * self.connections)
    if self.use_bias:
        output = K.bias_add(output, self.bias)
    if self.activation is not None:
        output = self.activation(output)
    return output

Where K comes from import keras.backend as K.

You may also go further and set a custom get_weights() method if you want to see the weights masked with your matrix (this is not necessary in the first approach above).
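
If you want to try that, one possible sketch (assuming connections is a numpy array; this override is my own addition, not something Keras requires) is to add a method like this to the CustomConnected class:

    def get_weights(self):
        # return the kernel already masked by the connection matrix,
        # so the inspected weights match what the layer actually uses
        weights = super(CustomConnected, self).get_weights()
        weights[0] = weights[0] * self.connections  # weights[0] is the kernel
        return weights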
