Holding variables constant during optimization
tf.stop_gradient(tensor) might be what you are looking for. The tensor will be treated as a constant for gradient computation purposes, so you can create two losses in which different parts of the graph are treated as constants. The other (and often better) option is to create two optimizers but have each explicitly optimize only a subset of the variables via the var_list argument, e.g. train_a = tf.train.GradientDescentOptimizer(0.1).minimize(loss_a, var_list=[A]) and a corresponding train_b for the other variables, as in the sketch below.
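
Here is a minimal sketch of both approaches, using the TF1-style API from the answer. The variables A and B, the target value 3.0, and the losses loss_a/loss_b are hypothetical placeholders for whatever your model actually uses:

```python
import tensorflow as tf

# Hypothetical setup: two scalar variables sharing one loss.
A = tf.Variable(1.0, name="A")
B = tf.Variable(2.0, name="B")
loss = tf.square(A * B - 3.0)

# Option 1: tf.stop_gradient -- B is treated as a constant inside
# loss_a, so minimizing loss_a only produces gradients for A
# (and vice versa for loss_b).
loss_a = tf.square(A * tf.stop_gradient(B) - 3.0)
loss_b = tf.square(tf.stop_gradient(A) * B - 3.0)

# Option 2: one optimizer per variable subset via var_list.
train_a = tf.train.GradientDescentOptimizer(0.1).minimize(loss, var_list=[A])
train_b = tf.train.GradientDescentOptimizer(0.1).minimize(loss, var_list=[B])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_a)  # updates A only; B is held constant
    sess.run(train_b)  # updates B only; A is held constant
```

The var_list route keeps a single loss graph and is usually simpler; tf.stop_gradient is handy when the "constant" part differs within one loss expression rather than at the level of whole variables.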