Keras Loss Function with Additional Dynamic Parameter

OK. Here is an example.

from keras.layers import Input, Dense, Conv2D, MaxPool2D, Flatten
from keras.models import Model
from keras.losses import categorical_crossentropy

def sample_loss(y_true, y_pred, is_weight):
    # scale the usual crossentropy by a per-sample weight tensor
    return is_weight * categorical_crossentropy(y_true, y_pred)

x = Input(shape=(32, 32, 3), name="image_in")
y_true = Input(shape=(10,), name="y_true")       # ground truth enters as an input
is_weight = Input(shape=(1,), name="is_weight")  # extra loss parameter, also an input
f = Conv2D(16, (3, 3), padding='same')(x)
f = MaxPool2D((2, 2), padding='same')(f)
f = Conv2D(32, (3, 3), padding='same')(f)
f = MaxPool2D((2, 2), padding='same')(f)
f = Conv2D(64, (3, 3), padding='same')(f)
f = MaxPool2D((2, 2), padding='same')(f)
f = Flatten()(f)
y_pred = Dense(10, activation='softmax', name="y_pred")(f)
model = Model(inputs=[x, y_true, is_weight], outputs=y_pred, name="train_only")
model.add_loss(sample_loss(y_true, y_pred, is_weight))
model.compile(loss=None, optimizer="sgd")  # loss is attached via add_loss()
model.summary()

Note that since you've added the loss through add_loss(), you don't have to pass one through compile(loss=...).
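
For comparison, here is a sketch of a closure-based alternative (not from the original answer): wrap the extra tensor so the inner function keeps the (y_true, y_pred) signature that compile(loss=...) expects. In that setup, y_true stays on the target end and only is_weight becomes an extra input.

def weighted_crossentropy(is_weight):
    # capture the extra tensor at definition time; Keras only ever
    # calls the inner function with (y_true, y_pred)
    def loss(y_true, y_pred):
        return is_weight * categorical_crossentropy(y_true, y_pred)
    return loss

# hypothetical usage with a two-input model:
# model2 = Model(inputs=[x, is_weight], outputs=y_pred)
# model2.compile(loss=weighted_crossentropy(is_weight), optimizer="sgd")
# model2.fit([a, a_is_weight], a_true)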

As for training the model, nothing special is needed except that y_true moves to the input end. See below:

import numpy as np
a = np.random.randn(8, 32, 32, 3)
a_true = np.eye(10)[np.random.randint(0, 10, size=8)]  # one-hot dummy labels for crossentropy
a_is_weight = np.random.randint(0, 2, size=(8, 1))
model.fit([a, a_true, a_is_weight])
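
As a quick sanity check (my own sketch, assuming your Keras version lets evaluate() run on an add_loss-only model): because the loss is is_weight * crossentropy, all-zero weights should drive the reported loss to (roughly) zero, while all-one weights give the plain crossentropy.

# zero weights kill the loss term entirely
loss_zero = model.evaluate([a, a_true, np.zeros((8, 1))])
# unit weights reduce it to ordinary categorical crossentropy
loss_one = model.evaluate([a, a_true, np.ones((8, 1))])
print(loss_zero, loss_one)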

Finally, you can make a testing model (which shares all weights with the training model) for easier use, i.e.

test_model = Model(inputs=x, outputs=y_pred, name="test_only")  # reuses the layers built above
a_pred = test_model.predict(a)
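
A quick way to convince yourself the weights really are shared (again a sketch, not part of the original): both Model objects wrap the same layer instances, so training one trains the other.

# the named Dense layer is literally the same object in both models
assert test_model.get_layer("y_pred") is model.get_layer("y_pred")
print(np.argmax(a_pred, axis=1))  # hard class labels from the shared softmax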
