tensorflow: Incorrect INVALID_ARGUMENT error is thrown with `iter(dataset)` call
Issue Type
Bug
Have you reproduced the bug with TF nightly?
Yes
Source
source
Tensorflow Version
Nightly build version: 2.12.0-dev20230105 GIT Version: v1.12.1-87220-gbf3a8ec10be
Custom Code
Yes
OS Platform and Distribution
Linux 4799aa259243 5.10.147 x86_64 GNU/Linux
Mobile device
N/A
Python version
3.8
Bazel version
N/A
GCC/Compiler version
N/A
CUDA/cuDNN version
libcudnn8=8.6.0.163-1+cuda11.8
GPU model and memory
Tesla T4 15109MiB
Current Behaviour?
When `iter(dataset)` is executed, an INVALID_ARGUMENT message is logged stating that a value must be fed for placeholder tensor 'Placeholder/_0'. No value should need to be fed here, since the dataset is built entirely from an eager tensor.
Standalone code to reproduce the issue
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '0'  # 0 = show all C++ runtime logs, including INFO
import numpy as np
import tensorflow as tf

INPUT_SIZE = (1, 224, 224, 3)
tf.get_logger().setLevel('INFO')

data = tf.random.uniform(INPUT_SIZE)
dataset = tf.data.Dataset.from_tensor_slices(data)
dataset = dataset.repeat()
dataset = dataset.batch(32)
dataset = dataset.repeat()
dataset = dataset.prefetch(tf.data.AUTOTUNE)

data = next(iter(dataset))  # emits the INVALID_ARGUMENT placeholder messages below
Relevant log output
2023-01-06 00:14:49.489765: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1614] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 10904 MB memory: -> device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5
2023-01-06 00:14:49.517784: I tensorflow/core/common_runtime/executor.cc:1195] [/device:CPU:0] Executor start aborting: INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_0' with dtype float and shape [1,224,224,3]
[[{{node Placeholder/_0}}]]
2023-01-06 00:14:49.518018: I tensorflow/core/common_runtime/executor.cc:1195] [/device:CPU:0] Executor start aborting: INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_0' with dtype float and shape [1,224,224,3]
[[{{node Placeholder/_0}}]]
About this issue
- Original URL
- State: open
- Created a year ago
- Comments: 28 (15 by maintainers)
@SuryanarayanaY @mohantym I think there’s a misunderstanding on how to reproduce:
Docker Image: https://hub.docker.com/layers/tensorflow/tensorflow/nightly-gpu/images/sha256-ff8e778a1cb6811df47a550f9eea53fc4b164236f54b5e342f561e9ca7d66edb?context=explore
Or in a more direct fashion:
This bug exists in both:
- the tensorflow/tensorflow:nightly-gpu Docker container
- tf-nightly-gpu

as @pavanimajety was explaining.
IMPORTANT: If you try to reproduce in Google Colab, the C++ logs are not shown, so you won’t see the problem there. You need to reproduce outside of Google Colab.
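To make that concrete, here is a minimal sketch of a local (non-Colab) run, assuming either the tf-nightly-gpu package or the nightly-gpu image; the key point is that `TF_CPP_MIN_LOG_LEVEL` must be set to 0 before `import tensorflow` so the C++ executor INFO logs reach stderr:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '0'  # 0 = do not filter any C++ runtime logs
import tensorflow as tf

# Same pipeline as the standalone repro above; with the C++ logs visible,
# iterating the dataset prints the INVALID_ARGUMENT "Placeholder/_0" INFO lines.
data = tf.random.uniform((1, 224, 224, 3))
dataset = tf.data.Dataset.from_tensor_slices(data).repeat().batch(32).repeat()
dataset = dataset.prefetch(tf.data.AUTOTUNE)
next(iter(dataset))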
@mohantym I am able to reproduce consistently with the nightly build, or with any commit from roughly the first week of December onward. I believe it doesn’t matter that the issue is not reproducible in 2.11, since I am on the latest releases of TF, CUDA and cuDNN.
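As a sanity check, the exact build under test can be confirmed at runtime (sketch):

import tensorflow as tf

# Confirm which build is actually being exercised.
print(tf.version.VERSION)      # e.g. 2.12.0-dev20230105
print(tf.version.GIT_VERSION)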
Hi @mohantym, I fixed the typo in the issue. I used the nightly build: 2.12.0-dev20230105
@SuryanarayanaY it’s not about creating an error. It’s about a highly confusing message. I think we can all agree that this error message should at least be rephrased so that it doesn’t read as an “error logged as INFO”, and probably be moved to a DEBUG log. INFO logs are supposed to be user-oriented and/or calls to action for the end user. This message only creates confusion and unnecessary GitHub issues/tickets. Let’s clean that up before the next release (coming very soon) @reedwm