tensorflow: [Autograph] Inconsistent behaviour with lambda variable in loop

Issue Type

Bug

Source

source

Tensorflow Version

master

Custom Code

No

OS Platform and Distribution

No response

Mobile device

No response

Python version

No response

Bazel version

No response

GCC/Compiler version

No response

CUDA/cuDNN version

No response

GPU model and memory

No response

Current Behaviour?

Lambdas that capture a loop variable behave inconsistently between pure Python and graph mode (tf.function): in eager Python, late binding means every lambda sees the loop variable's final value, while under AutoGraph each lambda ends up with its own binding. See:
https://docs.python.org/3/faq/programming.html#why-do-lambdas-defined-in-a-loop-with-different-values-all-return-the-same-result
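For reference, the late-binding behavior the linked FAQ describes can be shown without TensorFlow at all; the second list uses the usual default-argument idiom to bind the value at definition time:

```python
# All three lambdas share the same variable `i`, which is 2 after the loop.
fns = [lambda: i for i in range(3)]
print([f() for f in fns])  # [2, 2, 2]

# A default argument evaluates at definition time, binding each value.
fns = [lambda i=i: i for i in range(3)]
print([f() for f in fns])  # [0, 1, 2]
```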

Standalone code to reproduce the issue

import tensorflow as tf

def test_a():
  fns = []
  for i in range(3):
    fns.append(lambda: print(i))
  for f in fns:
    f()

@tf.function
def test_b():
  fns = []
  for i in range(3):
    fns.append(lambda: print(i))
  for f in fns:
    f()

def test_c():
  fns = []
  for i in range(3):
    fns.append(lambda i=i: print(i))
  for f in fns:
    f()

@tf.function 
def test_d():
  fns = []
  for i in range(3):
    fns.append(lambda i=i: print(i))
  for f in fns:
    f()

test_a() 
print("=="*10)
tf.config.run_functions_eagerly(False)
test_b()
print("=="*10)
tf.config.run_functions_eagerly(True)
test_b()
print("=="*10)
test_c() 
print("=="*10)
tf.config.run_functions_eagerly(False)
test_d()
print("=="*10)
tf.config.run_functions_eagerly(True)
test_d()

Output:

2
2
2
====================
0
1
2
====================
2
2
2
====================
0
1
2
====================
0
1
2
====================
0
1
2

Relevant log output

test_b wrongly works “as expected” in graph mode. The AutoGraph-converted code below shows why: the loop body is extracted into a nested function, so `i` becomes a fresh local on each iteration and each lambda closes over its own binding:

# coding=utf-8
def tf__test():
    with ag__.FunctionScope('test', 'fscope', ag__.ConversionOptions(recursive=True, user_requested=True, optional_features=(), internal_convert_user_code=True)) as fscope:
        fns = []

        def get_state():
            return ()

        def set_state(block_vars):
            pass

        def loop_body(itr):
            i = itr
            ag__.converted_call(ag__.ld(fns).append, (ag__.autograph_artifact((lambda : ag__.ld(print)(ag__.ld(i)))),), None, fscope)
        i = ag__.Undefined('i')
        ag__.for_stmt(ag__.converted_call(ag__.ld(range), (3,), None, fscope), None, loop_body, get_state, set_state, (), {'iterate_names': 'i'})

        def get_state_1():
            return ()

        def set_state_1(block_vars):
            pass

        def loop_body_1(itr_1):
            f = itr_1
            ag__.converted_call(ag__.ld(f), (), None, fscope)
        f = ag__.Undefined('f')
        ag__.for_stmt(ag__.ld(fns), None, loop_body_1, get_state_1, set_state_1, (), {'iterate_names': 'f'})
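The effect of that transformation can be mimicked in plain Python (a sketch; `loop_body` and `make_fns_like_autograph` are hypothetical names chosen to mirror the generated code). Because the loop body runs as a function call, each call creates a new cell for `i`, which is why graph mode prints 0 1 2 instead of 2 2 2:

```python
# Pure-Python analogue of AutoGraph's loop extraction: the loop body
# becomes a nested function, so each call binds a fresh local `i`.
def make_fns_like_autograph():
    fns = []

    def loop_body(itr):
        i = itr                  # new cell on every call
        fns.append(lambda: i)    # each lambda captures its own `i`

    for itr in range(3):
        loop_body(itr)
    return fns

print([f() for f in make_fns_like_autograph()])  # [0, 1, 2]
```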

About this issue

  • Original URL
  • State: open
  • Created 2 years ago
  • Reactions: 5
  • Comments: 23 (11 by maintainers)

Most upvoted comments

Are there any instructions on how to hide this warning? 😛

python3.7/site-packages/tensorflow/python/autograph/pyct/static_analysis/liveness.py:83: Analyzer.lamba_check (from tensorflow.python.autograph.pyct.static_analysis.liveness) is deprecated and will be removed after 2023-09-23.
Instructions for updating:
Lambda fuctions will be no more assumed to be used in the statement where they are used, or at least in the same block. https://github.com/tensorflow/tensorflow/issues/56089