Looking for the source code of gen_nn_ops in TensorFlow

You can’t find this source in the repository because the file is automatically generated by Bazel. If you build from source, you’ll see it inside bazel-genfiles. It’s also present in your local distribution, which you can locate using the inspect module. The file contains automatically generated Python wrappers around the underlying C++ implementations, so it basically consists of a bunch of one-line functions. A shortcut to find the underlying C++ implementation of such a generated Python op is to convert the snake-case name to camel case, i.e. conv2d_backprop_input -> Conv2DBackpropInput.
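As an illustration of that shortcut, here is a rough heuristic for the conversion (my own sketch, not TensorFlow's exact naming rule, but it works for names like this one):

import re

def snake_to_op_name(snake):
    # Capitalize each underscore-separated chunk, then uppercase any letter
    # that immediately follows a digit, so "conv2d" becomes "Conv2D".
    camel = ''.join(part.capitalize() for part in snake.split('_'))
    return re.sub(r'(\d)([a-z])', lambda m: m.group(1) + m.group(2).upper(), camel)

print(snake_to_op_name('conv2d_backprop_input'))   # Conv2DBackpropInput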

# figure out where gen_nn_ops is
print(tf.nn.conv2d_transpose.__globals__['gen_nn_ops'])

from tensorflow.python.ops import gen_nn_ops
import inspect
inspect.getsourcefile(gen_nn_ops.conv2d_backprop_input)
'/Users/yaroslav/anaconda/lib/python3.5/site-packages/tensorflow/python/ops/gen_nn_ops.py'
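inspect can also show you the body of one of those generated wrappers directly, which confirms that it is just a thin dispatch to the C++ op registered under the camel-case name (the exact shape of the generated code varies by TensorFlow version):

import inspect
from tensorflow.python.ops import gen_nn_ops

# Print the generated Python wrapper for Conv2DBackpropInput; it simply
# forwards its arguments to the registered C++ op.
print(inspect.getsource(gen_nn_ops.conv2d_backprop_input))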

If you care to find out how this file really came about, you can follow the trail of Bazel dependencies in the BUILD files. For instance, to find the Bazel target that generated it, run this from the TensorFlow source tree:

fullname=$(bazel query tensorflow/python/ops/gen_nn_ops.py)
bazel query "attr('srcs', $fullname, ${fullname//:*/}:*)"

//tensorflow/python:nn_ops_gen

Going to the BUILD file inside tensorflow/python, you see that this is a target of type tf_gen_op_wrapper_private_py, a macro which in turn calls tf_gen_op_wrapper_py from tensorflow/tensorflow.bzl, which looks like this:

def tf_gen_op_wrapper_py(name, out=None, hidden=None, visibility=None, deps=[],
                         ...):
  ...
  native.cc_binary(
      name = tool_name,
      ...

This native.cc_binary construct declares a Bazel target for a helper binary; the rule then executes that binary (the “tool”) with some arguments to produce the generated Python file. With a couple more steps you can find that the “tool” here is compiled from tensorflow/python/framework/python_op_gen_main.cc

The reason for this complication is that TensorFlow was designed to be language agnostic. In an ideal world, each op would be described in ops.pbtxt, and each op would have one implementation per hardware type registered with REGISTER_KERNEL_BUILDER, so all implementations would be done in C++/CUDA/assembly and become automatically available to all language front-ends. There would be an equivalent of the “python_op_gen_main” translator for every language, and all client library code would be generated automatically.

However, because Python is so dominant, there was pressure to add features on the Python side. So now there are two kinds of ops: pure TensorFlow ops, seen in files like gen_nn_ops.py, and Python-only ops, in files like nn_ops.py, which typically wrap ops from the automatically generated files like gen_nn_ops.py but add extra features/syntactic sugar. Also, originally all names were camel case, but it was decided that the public-facing release should be PEP 8 compliant, using the more common Python naming style, which is the reason for the camel-case/snake-case mismatch between the C++ and Python interfaces of the same op.
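You can observe both layers from an ordinary program. The snippet below is a small sketch assuming TF 1.x graph mode, with made-up shapes: tf.nn.conv2d_transpose is one of the Python-side “sugar” ops from nn_ops.py, but the node it actually adds to the graph is the generated op from gen_nn_ops.py, recorded under its camel-case C++ name:

import tensorflow as tf

# Python-level op defined in nn_ops.py (syntax sugar)...
value = tf.placeholder(tf.float32, [1, 4, 4, 8])
filt = tf.placeholder(tf.float32, [3, 3, 16, 8])
y = tf.nn.conv2d_transpose(value, filt, output_shape=[1, 8, 8, 16],
                           strides=[1, 2, 2, 1], padding='SAME')

# ...but the graph node it creates is the generated op, under its C++ name.
print(y.op.type)   # Conv2DBackpropInput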
