The problem here is that each call to `run_inference_on_image()` adds nodes to the same default graph, which eventually exceeds the maximum size of a serialized `GraphDef` (2 GB). There are at least two ways to fix this:
- The easy but slow way is to use a different default graph for each call to `run_inference_on_image()`:

  ```python
  for image in list_of_images:
      # ...
      with tf.Graph().as_default():
          current_features = run_inference_on_image(images_folder + "/" + image)
      # ...
  ```
- The more involved but more efficient way is to modify `run_inference_on_image()` to run on multiple images. Move your `for` loop inside that function so that it surrounds the `sess.run()` call; the model is then constructed only once instead of on every call, which should make processing each image much faster.