Holding variables constant during optimizer

tf.stop_gradient(tensor) might be what you are looking for. The tensor will be treated as a constant for gradient-computation purposes. You can create two losses, each with different parts treated as constants. The other (and often better) option is to create two optimizers but explicitly optimize only subsets of variables, e.g. train_a = tf.train.GradientDescentOptimizer(0.1).minimize(loss_a, var_list=[A]) train_b … Read more
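Both approaches can be sketched with the TF 2.x eager API (the variables `a` and `b` here are hypothetical stand-ins; `apply_gradients` over a restricted gradient list plays the role of `minimize(..., var_list=[...])`):

```python
import tensorflow as tf

# Two hypothetical variables: we want to update `b` while holding `a` fixed.
a = tf.Variable(2.0)
b = tf.Variable(3.0)

# Option 1: tf.stop_gradient blocks the gradient flowing into `a`.
with tf.GradientTape() as tape:
    loss = tf.stop_gradient(a) * b
grad_a, grad_b = tape.gradient(loss, [a, b])
print(grad_a)         # None: `a` was treated as a constant
print(float(grad_b))  # 2.0, the current value of `a`

# Option 2: compute gradients for `b` only and apply just those,
# the eager analogue of passing var_list=[b] to an optimizer.
opt = tf.keras.optimizers.SGD(learning_rate=0.1)
with tf.GradientTape() as tape:
    loss = a * b
(grad_b2,) = tape.gradient(loss, [b])
opt.apply_gradients([(grad_b2, b)])
print(float(a))  # still 2.0: `a` was never touched
```

After the update, `b` moves to 3.0 - 0.1 * 2.0 = 2.8 while `a` stays at its initial value.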

How can I run Tensorflow on one single core?

To run Tensorflow on a single CPU thread, I use: session_conf = tf.ConfigProto( intra_op_parallelism_threads=1, inter_op_parallelism_threads=1) sess = tf.Session(config=session_conf) device_count limits the number of CPUs being used, not the number of cores or threads. tensorflow/tensorflow/core/protobuf/config.proto says: message ConfigProto { // Map from device type name (e.g., "CPU" or "GPU") to maximum // number of devices … Read more
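In TF 2.x, where `ConfigProto` and `tf.Session` are gone, the same two knobs are exposed under `tf.config.threading` (a sketch; note these setters must run before TensorFlow executes any ops, or they raise a RuntimeError):

```python
import tensorflow as tf

# TF 2.x equivalents of the intra/inter_op_parallelism_threads settings
# above; call these right after import, before any ops run.
tf.config.threading.set_intra_op_parallelism_threads(1)
tf.config.threading.set_inter_op_parallelism_threads(1)

print(tf.config.threading.get_intra_op_parallelism_threads())  # 1
print(tf.config.threading.get_inter_op_parallelism_threads())  # 1
```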

Tensorflow doesn’t seem to see my gpu

I came across this same issue in Jupyter notebooks. This could be an easy fix: $ pip uninstall tensorflow $ pip install tensorflow-gpu You can check whether it worked with tf.test.gpu_device_name(). Update 2020: TensorFlow 2.0+ ships with GPU support, so pip install tensorflow should be enough.
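A quick way to verify, sketched with the TF 2.x device-listing API alongside the `tf.test` call mentioned above:

```python
import tensorflow as tf

# The modern check: an empty list means TensorFlow cannot see a GPU.
gpus = tf.config.list_physical_devices('GPU')
print(gpus)

# The older check still works: returns '' when no GPU is visible,
# or a device string like '/device:GPU:0' when one is.
print(repr(tf.test.gpu_device_name()))
```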

TensorFlow, why there are 3 files after saving the model?

Try this: with tf.Session() as sess: saver = tf.train.import_meta_graph('/tmp/model.ckpt.meta') saver.restore(sess, "/tmp/model.ckpt") The TensorFlow save method saves three kinds of files because it stores the graph structure separately from the variable values. The .meta file describes the saved graph structure, so you need to import it before restoring the checkpoint (otherwise it doesn't know what variables … Read more
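To see where the three files come from, here is a minimal sketch using the tf.compat.v1 API (the variable `v` and the temporary directory are hypothetical; in TF 1.x the same code works with `tf.train.Saver` directly):

```python
import os
import tempfile
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

# A throwaway graph with one variable, just to have something to save.
v = tf1.get_variable("v", shape=[2], initializer=tf1.zeros_initializer())
saver = tf1.train.Saver()

ckpt = os.path.join(tempfile.mkdtemp(), "model.ckpt")
with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    saver.save(sess, ckpt)

# .meta holds the graph structure, .data-* the variable values,
# .index the mapping between the two; 'checkpoint' tracks the latest save.
print(sorted(os.listdir(os.path.dirname(ckpt))))
```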

Tensorflow crashes with CUBLAS_STATUS_ALLOC_FAILED

On TensorFlow 2.2, none of the other answers worked when I hit the CUBLAS_STATUS_ALLOC_FAILED problem. I found a solution at https://www.tensorflow.org/guide/gpu: import tensorflow as tf gpus = tf.config.experimental.list_physical_devices('GPU') if gpus: try: # Currently, memory growth needs to be the same across GPUs for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True) logical_gpus = tf.config.experimental.list_logical_devices('GPU') print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical … Read more
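For reference, the full memory-growth pattern from the linked guide looks like this (a sketch; it is safe to run on a CPU-only machine, where `gpus` is simply an empty list and the block is skipped):

```python
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Memory growth must be set the same way for every GPU,
        # and before any GPU has been initialized.
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Raised if memory growth is set after GPU initialization.
        print(e)
```

With memory growth enabled, TensorFlow allocates GPU memory on demand instead of grabbing nearly all of it up front, which is what typically triggers CUBLAS_STATUS_ALLOC_FAILED when another process already holds part of the memory.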