tensorflow: Breakpoints do not stop inside tf.function

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04
  • TensorFlow installed from (source or binary): binary (pip)
  • TensorFlow version (use command below): 2.2.0
  • Python version: 3.7
  • CUDA/cuDNN version: CUDA 10.1 / cuDNN 7
  • GPU model and memory: GeForce RTX 2060 6GB GDDR6

Describe the current behavior

If you set a breakpoint on the line print('Dummy function') in the code below, the debugger will not stop there.

import tensorflow as tf

def read_tfrecord(x):
    print('Dummy function')
    return x

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(dataset)
dataset = dataset.map(lambda x: read_tfrecord(x))

Describe the expected behavior

The code execution should stop at that line and you should be able to debug that function.

Standalone code to reproduce the issue

import tensorflow as tf

def read_tfrecord(x):
    print('Dummy function')
    return x

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(dataset)
dataset = dataset.map(lambda x: read_tfrecord(x))

Other info / logs

First I thought it was a problem with the debugger that I was using, so I created an issue there (https://github.com/jupyterlab/debugger/issues/435).

Then they said it might be due to the underlying debugger being used (ptvsd / debugpy), since the problem was also present in Visual Studio Code and both editors rely on the same debugger, so I created an issue there as well (https://github.com/microsoft/debugpy/issues/228).

And they now point out that it might be related to how TensorFlow is built, so maybe the problem comes from TensorFlow and the way it creates its threads; see this comment: https://github.com/microsoft/debugpy/issues/228#issuecomment-624908204

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Reactions: 4
  • Comments: 20 (9 by maintainers)

Most upvoted comments

Sorry for hijacking this thread, but the solution does not work for me with TensorFlow 2.4.1, PyCharm and Windows. Using the example posted above with an additional tf.config.run_functions_eagerly(True) at the top yields the following warning:

“UserWarning: Even though the tf.config.experimental_run_functions_eagerly option is set, this option does not apply to tf.data functions. tf.data functions are still traced and executed as graphs”
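As a possible workaround for the tf.data case (not from this thread, just a minimal sketch): wrapping the mapped function in tf.py_function keeps its body out of the traced dataset graph, so it runs as plain Python for every element and a breakpoint inside it has a chance to be hit.

import tensorflow as tf

def read_tfrecord(x):
    print('Dummy function')  # breakpoint here: the body now runs as plain Python
    return x

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])

# tf.py_function keeps the mapped body out of the traced dataset graph,
# so it executes eagerly for every element during iteration.
dataset = dataset.map(lambda x: tf.py_function(read_tfrecord, inp=[x], Tout=tf.int32))

for item in dataset:
    print(item)

Newer releases (TF 2.5 and later, if I remember the version right) also provide tf.data.experimental.enable_debug_mode(), which makes tf.data transformations run eagerly for debugging.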

Hmm, https://github.com/tensorflow/tensorflow/issues/30653 seems to give more information

On TF 2.2 it was tf.config.experimental_run_functions_eagerly(True). I think the point is:

To get performant and portable models, use tf.function to make graphs out of your programs.

When using @tf.function, you can temporarily turn off graph execution with tf.config.experimental_run_functions_eagerly. This will effectively run the annotated code eagerly, without transformation. Since AutoGraph has semantics consistent with Eager, it’s an effective way to debug the code step-by-step.
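For reference, a minimal sketch (mine, not from the thread) of how that toggle is used while debugging; on TF 2.3+ the non-experimental tf.config.run_functions_eagerly exists as well:

import tensorflow as tf

# Force tf.function-decorated code to run eagerly so the Python body
# (and any breakpoints in it) executes on every call.
tf.config.experimental_run_functions_eagerly(True)

@tf.function
def add_one(x):
    y = x + 1  # a breakpoint here is hit on every call while eager execution is forced
    return y

print(add_one(tf.constant(1)))

# Restore normal graph execution once done debugging.
tf.config.experimental_run_functions_eagerly(False)

Note that, as the warning quoted above says, this toggle does not apply to tf.data map functions.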

The other thing is that even if it is not eager, the debugger should hit the breakpoints anyway after the graph has been created and while the code is executing.

If the code is transformed, debugging with the original breakpoint doesn’t make sense.
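To make that last point concrete, here is a small sketch (mine, not from the thread): Python-level side effects in a tf.function body only run while the function is being traced, so a breakpoint placed there is never reached again once the compiled graph is executing.

import tensorflow as tf

@tf.function
def f(x):
    print('tracing')       # Python side effect: runs only while the function is being traced
    tf.print('executing')  # graph op: runs on every call of the compiled graph
    return x + 1

f(tf.constant(1))  # first call triggers tracing, so both messages appear
f(tf.constant(2))  # same input signature, no retrace: only 'executing' appears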