What is the proper way to install TensorFlow on Apple M1 in 2022

Distilling the official directions from Apple (as of 13 July 2022), one would create an environment using the following YAML:

```yaml
# tf-metal-arm64.yaml
name: tf-metal
channels:
  - apple
  - conda-forge
dependencies:
  - python=3.9  ## specify desired version
  - pip
  - tensorflow-deps
  ## uncomment for use with Jupyter
  ## - ipykernel
  ## PyPI packages
  - pip:
      - tensorflow-macos
```

… Read more
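Assuming the YAML above is saved as tf-metal-arm64.yaml, the environment is created and activated with the usual conda commands. A sketch; the final pip step assumes you also want Apple's Metal GPU plugin, which its directions install from PyPI:

```shell
# create and activate the environment defined by the YAML above
conda env create -f tf-metal-arm64.yaml
conda activate tf-metal

# GPU acceleration plugin (per Apple's instructions)
python -m pip install tensorflow-metal
```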

Working with multiple graphs in TensorFlow

Your product is a global variable, and you've set it to point to "g2/MatMul". In particular, try print(product) and you'll see Tensor("g2/MatMul:0", shape=(1, 1), dtype=float32). So the system takes "g2/MatMul:0", since that's the Tensor's name, and tries to find it in the graph g1, since that's the graph you set for the session. Incidentally … Read more
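The lookup failure hinges on the ":0" tensor-name convention. A toy parser (an illustration of the naming scheme, not TensorFlow code) shows how a name like "g2/MatMul:0" splits into an operation name and an output index; the op name is then resolved inside whichever graph the session was given:

```python
def parse_tensor_name(name):
    """Split a TensorFlow-style tensor name into (op_name, output_index)."""
    op_name, _, index = name.rpartition(':')
    return op_name, int(index)

# the op lives in graph g2, so looking this name up in g1 fails
print(parse_tensor_name('g2/MatMul:0'))  # ('g2/MatMul', 0)
```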

Create keras callback to save model predictions and targets for each batch during training

NOTE: this answer is outdated and only works with TF1. Check @bers’s answer for a solution tested on TF2. After model compilation, the placeholder tensor for y_true is in model.targets and y_pred is in model.outputs. To save the values of these placeholders at each batch, you can: First copy the values of these tensors into … Read more
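The underlying pattern is framework-independent: a callback object accumulates (targets, predictions) pairs as the training loop hands them over at each batch boundary. A dependency-free sketch of that pattern (the class and loop here are illustrative stand-ins, not Keras API):

```python
class BatchRecorder:
    """Collects per-batch targets and predictions, mimicking a Keras callback."""
    def __init__(self):
        self.targets = []
        self.outputs = []

    def on_batch_end(self, y_true, y_pred):
        # copy the values so later batches cannot mutate earlier records
        self.targets.append(list(y_true))
        self.outputs.append(list(y_pred))

recorder = BatchRecorder()
# stand-in training loop: each "batch" yields targets and predictions
for y_true, y_pred in [([1, 0], [0.9, 0.1]), ([0, 1], [0.2, 0.8])]:
    recorder.on_batch_end(y_true, y_pred)

print(len(recorder.targets))  # 2
```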

Using Gensim Fasttext model with LSTM nn in keras

Here is the procedure to incorporate the fasttext model inside an LSTM Keras network:

```python
# define dummy data and preprocess it
docs = ['Well done', 'Good work', 'Great effort', 'nice work', 'Excellent',
        'Weak', 'Poor effort', 'not good', 'poor work', 'Could have done better']
docs = [d.lower().split() for d in docs]
# train fasttext from gensim api
```

… Read more
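Before the LSTM part, the key step is turning the trained model's word vectors into an embedding matrix indexed by a vocabulary. A dependency-free sketch of that mapping, with a made-up two-dimensional dict standing in for the gensim FastText vectors:

```python
# toy stand-in for the trained model's word -> vector lookup
word_vectors = {'well': [0.1, 0.2], 'done': [0.3, 0.4], 'good': [0.5, 0.6]}

# vocabulary index; index 0 is reserved for padding, as Keras expects
vocab = {word: i + 1 for i, word in enumerate(sorted(word_vectors))}

# embedding matrix: row i holds the vector of the word with index i
dim = 2
matrix = [[0.0] * dim for _ in range(len(vocab) + 1)]
for word, i in vocab.items():
    matrix[i] = word_vectors[word]

print(vocab['done'], matrix[vocab['done']])  # 1 [0.3, 0.4]
```

This matrix is what would be handed to a Keras Embedding layer as fixed weights.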

How to convert .pb to TFLite format?

I am making a wild guess here: maybe you entered input_arrays=input, which may not be true. Use this script to find the names of the input and output arrays of the frozen inference graph:

```python
import tensorflow as tf

gf = tf.GraphDef()
m_file = open('frozen_inference_graph.pb', 'rb')
gf.ParseFromString(m_file.read())
with open('somefile.txt', 'a') as the_file:
    for n in gf.node:
        the_file.write(n.name + '\n')
```

… Read more
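Once the names are known from somefile.txt, they can be passed to the TF1 command-line converter. A sketch with placeholder array names (substitute the ones the script printed; the exact flags assume the TF1.x tflite_convert tool):

```shell
# INPUT_NAME / OUTPUT_NAME are placeholders; use the names from somefile.txt
tflite_convert \
  --graph_def_file=frozen_inference_graph.pb \
  --output_file=model.tflite \
  --input_arrays=INPUT_NAME \
  --output_arrays=OUTPUT_NAME
```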

TensorFlow: How to measure how much GPU memory each tensor takes?

Now that issue 1258 has been closed, you can enable memory logging in Python by setting an environment variable before importing TensorFlow:

```python
import os
os.environ['TF_CPP_MIN_VLOG_LEVEL'] = '3'
import tensorflow as tf
```

There will be a lot of logging as a result of this. You'll want to grep the results to find the appropriate lines. For example: grep MemoryLogTensorAllocation … Read more

TensorFlow: Dst tensor is not initialized

In short, this error message is generated when there is not enough memory to handle the batch size. Expanding on Steven's link (I cannot post comments yet), here are a few tricks to monitor/control memory usage in TensorFlow: To monitor memory usage during runs, consider logging run metadata. You can then see the memory usage … Read more
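A back-of-the-envelope estimate also helps when picking a batch size: a dense float32 tensor costs 4 bytes per element, so its footprint grows linearly with the batch dimension. A small helper (illustrative arithmetic, not a TensorFlow API):

```python
def tensor_bytes(shape, dtype_size=4):
    """Approximate memory of a dense tensor: product of dims times bytes/element."""
    n = 1
    for d in shape:
        n *= d
    return n * dtype_size

# e.g. a batch of 64 RGB images at 224x224 in float32
print(tensor_bytes((64, 224, 224, 3)) / 2**20)  # 36.75 (MiB)
```

Halving the batch size halves that figure, which is often enough to clear the error.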

Keras Loss Function with Additional Dynamic Parameter

OK. Here is an example:

```python
from keras.layers import Input, Dense, Conv2D, MaxPool2D, Flatten
from keras.models import Model
from keras.losses import categorical_crossentropy

def sample_loss(y_true, y_pred, is_weight):
    return is_weight * categorical_crossentropy(y_true, y_pred)

x = Input(shape=(32, 32, 3), name='image_in')
y_true = Input(shape=(10,), name='y_true')
is_weight = Input(shape=(1,), name='is_weight')
f = Conv2D(16, (3, 3), padding='same')(x)
f = MaxPool2D((2, 2), padding='same')(f)
```

… Read more
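What sample_loss computes can be checked numerically without Keras. A plain-Python version of the same weighted cross-entropy, with math.log standing in for the backend op:

```python
import math

def categorical_crossentropy(y_true, y_pred):
    # cross-entropy of a one-hot target against predicted probabilities
    return -sum(t * math.log(p) for t, p in zip(y_true, y_pred))

def sample_loss(y_true, y_pred, is_weight):
    return is_weight * categorical_crossentropy(y_true, y_pred)

# a zero weight removes the example from the loss entirely
print(sample_loss([1, 0], [0.5, 0.5], 0.0))  # 0.0
```

This is why feeding is_weight as an extra input lets individual samples be up- or down-weighted per batch.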

Holding variables constant during optimizer

tf.stop_gradient(tensor) might be what you are looking for. The tensor will be treated as constant for gradient computation purposes, so you can create two losses with different parts treated as constants. The other (and often better) option is to create two optimizers but explicitly optimize only subsets of variables, e.g.

```python
train_a = tf.train.GradientDescentOptimizer(0.1).minimize(loss_a, var_list=[A])
```

train_b … Read more
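The var_list idea is easy to see with plain gradient descent on f(a, b) = (a - 1)^2 + (b - 2)^2: an optimizer restricted to a moves only a and leaves b untouched. A framework-free numeric sketch:

```python
def grad(a, b):
    # gradients of f(a, b) = (a - 1)**2 + (b - 2)**2
    return 2 * (a - 1), 2 * (b - 2)

a, b, lr = 0.0, 0.0, 0.1
for _ in range(100):
    ga, _ = grad(a, b)
    a -= lr * ga          # only a is updated; b is held constant

print(round(a, 3), b)  # 1.0 0.0
```

A second loop updating only b would play the role of train_b, alternating which subset of variables moves.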