tensorflow: Exception when concatenating empty flattened layer
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes, attached in colab.
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Colab & local Ubuntu 20.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: Not tried
- TensorFlow installed from (source or binary): Colab: bundled; local: installed via pip
- TensorFlow version (use command below): Colab: v2.3.0-0-gb36436b087, local: 2.2.0
- Python version: local: 3.8.2
Describe the current behavior
Using the Keras bundled with TensorFlow, given one input with shape (None, num > 0) and a second input with shape (None, num2 > 0, 0), the second input is flattened to shape (None, 0). Concatenating the two then fails during training, because TensorFlow assumes the flattened tensor has double the number of rows. Interestingly, the model summary after compiling reports the correct shapes. A minimal code example is in this colab.
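The expected semantics can be illustrated with plain NumPy: flattening a (batch, k, 0) array per sample preserves the batch dimension, so concatenation along the feature axis works. (The shapes 5, 1, and 3 below are placeholders, not taken from the colab.)

```python
import numpy as np

a = np.zeros((5, 1), dtype=np.float32)     # first input, shape (batch, 1)
b = np.zeros((5, 3, 0), dtype=np.float32)  # second input, shape (batch, 3, 0)

# Flatten per sample: (5, 3, 0) -> (5, 0), keeping the batch dimension.
# (np.prod of the trailing dims is 0; reshape(-1) cannot infer a size-0 dim.)
b_flat = b.reshape(b.shape[0], int(np.prod(b.shape[1:])))

# Concatenating a zero-width array along axis 1 is well defined.
out = np.concatenate([a, b_flat], axis=1)
print(b_flat.shape)  # (5, 0)
print(out.shape)     # (5, 1)
```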
The exception in colab is,
InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [5,1] vs. shape[1] = [10,0]
[[node functional_5/concatenate_2/concat (defined at <ipython-input-3-f5c5cd6d4c13>:29) ]] [Op:__inference_train_function_1468]
Function call stack:
train_function
While locally, I get
tensorflow.python.framework.errors_impl.InvalidArgumentError: All dimensions except 1 must match. Input 1 has shape [256 0] and doesn't match input 0 with shape [128 10].
[[{{node training/Adam/gradients/gradients/concatenate_1/concat_grad/ConcatOffset}}]]
Describe the expected behavior
The flattened version of the input with a zero dimension should keep the same number of rows (the batch dimension) as the other input, so the concatenation should succeed.
Standalone code to reproduce the issue
Colab example.
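The colab itself is not included here, so the following is a hypothetical reconstruction of the repro from the description above (the layer sizes 1 and 3 are placeholders; the original notebook's values are not shown):

```python
import numpy as np
import tensorflow as tf

inp1 = tf.keras.Input(shape=(1,))        # shape (None, 1)
inp2 = tf.keras.Input(shape=(3, 0))      # shape (None, 3, 0)
flat = tf.keras.layers.Flatten()(inp2)   # flattened to shape (None, 0)
merged = tf.keras.layers.Concatenate()([inp1, flat])
out = tf.keras.layers.Dense(1)(merged)

model = tf.keras.Model([inp1, inp2], out)
model.compile(optimizer="adam", loss="mse")
model.summary()  # the summary reports the correct (None, 0) shape

x1 = np.zeros((5, 1), dtype=np.float32)
x2 = np.zeros((5, 3, 0), dtype=np.float32)
y = np.zeros((5, 1), dtype=np.float32)
try:
    model.fit([x1, x2], y, epochs=1, verbose=0)
except tf.errors.InvalidArgumentError as e:
    # In graph mode this raises e.g. "ConcatOp : Dimensions of inputs
    # should match: shape[0] = [5,1] vs. shape[1] = [10,0]"
    print("fit failed:", type(e).__name__)
```

Note that the model builds and summarizes correctly; the mismatch only surfaces once `fit` traces and runs the training function.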
Other info / logs
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
106 def _method_wrapper(self, *args, **kwargs):
107 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
--> 108 return method(self, *args, **kwargs)
109
110 # Running inside `run_distribute_coordinator` already.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1096 batch_size=batch_size):
1097 callbacks.on_train_batch_begin(step)
-> 1098 tmp_logs = train_function(iterator)
1099 if data_handler.should_sync:
1100 context.async_wait()
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
778 else:
779 compiler = "nonXla"
--> 780 result = self._call(*args, **kwds)
781
782 new_tracing_count = self._get_tracing_count()
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
838 # Lifting succeeded, so variables are initialized and we can run the
839 # stateless function.
--> 840 return self._stateless_fn(*args, **kwds)
841 else:
842 canon_args, canon_kwds = \
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs)
2827 with self._lock:
2828 graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
-> 2829 return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access
2830
2831 @property
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _filtered_call(self, args, kwargs, cancellation_manager)
1846 resource_variable_ops.BaseResourceVariable))],
1847 captured_inputs=self.captured_inputs,
-> 1848 cancellation_manager=cancellation_manager)
1849
1850 def _call_flat(self, args, captured_inputs, cancellation_manager=None):
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
1922 # No tape is watching; skip to running the function.
1923 return self._build_call_outputs(self._inference_function.call(
-> 1924 ctx, args, cancellation_manager=cancellation_manager))
1925 forward_backward = self._select_forward_and_backward_functions(
1926 args,
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in call(self, ctx, args, cancellation_manager)
548 inputs=args,
549 attrs=attrs,
--> 550 ctx=ctx)
551 else:
552 outputs = execute.execute_with_cancellation(
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
58 ctx.ensure_initialized()
59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 60 inputs, attrs, num_outputs)
61 except core._NotOkStatusException as e:
62 if name is not None:
InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [5,1] vs. shape[1] = [10,0]
[[node functional_5/concatenate_2/concat (defined at <ipython-input-3-f5c5cd6d4c13>:29) ]] [Op:__inference_train_f
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 19 (13 by maintainers)
The issue is reproducible in TF 2.11. However, to be more specific, the issue exists only in graph mode, i.e. model.compile(..., run_eagerly=False), which is the default for model.compile(). If we switch to eager execution with model.compile(..., run_eagerly=True), there is no error. Please refer to the attached gist.
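A sketch of the workaround described above, using the same hypothetical placeholder shapes as earlier (1 and 3 are not from the original notebook): compiling with run_eagerly=True reportedly sidesteps the graph-mode shape error.

```python
import numpy as np
import tensorflow as tf

inp1 = tf.keras.Input(shape=(1,))
inp2 = tf.keras.Input(shape=(3, 0))
flat = tf.keras.layers.Flatten()(inp2)
merged = tf.keras.layers.Concatenate()([inp1, flat])
model = tf.keras.Model([inp1, inp2], tf.keras.layers.Dense(1)(merged))

# The default (run_eagerly=False, graph mode) fails; eager execution
# checks the concat shapes per batch and reportedly does not.
model.compile(optimizer="adam", loss="mse", run_eagerly=True)

x1 = np.zeros((5, 1), dtype=np.float32)
x2 = np.zeros((5, 3, 0), dtype=np.float32)
y = np.zeros((5, 1), dtype=np.float32)
model.fit([x1, x2], y, epochs=1, verbose=0)
```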