tfx: "Type already exists with different properties error" when running TFX 0.21.2 and TFX 0.23.0 in same kubeflow
One of the users of my Kubeflow cluster is using TFX 0.21.4 and another is using TFX 0.23.0.
After any pipeline that uses TFX 0.23.0 has run, the pipelines that use TFX 0.21.4 fail with the following error during BigQueryExampleGen:
Traceback (most recent call last):
  File "/tfx-src/tfx/orchestration/kubeflow/container_entrypoint.py", line 382, in <module>
    main()
  File "/tfx-src/tfx/orchestration/kubeflow/container_entrypoint.py", line 375, in main
    execution_info = launcher.launch()
  File "/tfx-src/tfx/orchestration/launcher/base_component_launcher.py", line 197, in launch
    self._exec_properties)
  File "/tfx-src/tfx/orchestration/launcher/base_component_launcher.py", line 166, in _run_driver
    component_info=self._component_info)
  File "/tfx-src/tfx/components/base/base_driver.py", line 289, in pre_execution
    contexts=contexts)
  File "/tfx-src/tfx/orchestration/metadata.py", line 601, in update_execution
    registered_artifacts_ids=registered_output_artifact_ids))
  File "/tfx-src/tfx/orchestration/metadata.py", line 538, in _artifact_and_event_pairs
    a.set_mlmd_artifact_type(self._prepare_artifact_type(a.artifact_type))
  File "/tfx-src/tfx/orchestration/metadata.py", line 184, in _prepare_artifact_type
    artifact_type=artifact_type, can_add_fields=True)
  File "/opt/venv/lib/python3.6/site-packages/ml_metadata/metadata_store/metadata_store.py", line 268, in put_artifact_type
    self._call('PutArtifactType', request, response)
  File "/opt/venv/lib/python3.6/site-packages/ml_metadata/metadata_store/metadata_store.py", line 131, in _call
    return self._call_method(method_name, request, response)
  File "/opt/venv/lib/python3.6/site-packages/ml_metadata/metadata_store/metadata_store.py", line 162, in _call_method
    raise _make_exception(e.details(), e.code().value[0])
tensorflow.python.framework.errors_impl.AlreadyExistsError: Type already exists with different properties.
I completely wiped my Kubeflow cluster and metadata to test from scratch, and the same thing happens. Is there any way to avoid this short of forcing everyone onto the same TFX version?
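For readers trying to understand the failure, here is a minimal, hedged sketch (not taken from the issue) of what the traceback boils down to at the ml-metadata layer: the newer TFX release registers an artifact type with an extra property, and the older release's `put_artifact_type(..., can_add_fields=True)` call then fails because its request omits that property. The type and property names below are made up for illustration.

```python
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2

# In-memory store for the demo; the cluster itself uses a shared MLMD backend.
store = metadata_store.MetadataStore(metadata_store_pb2.ConnectionConfig(
    fake_database=metadata_store_pb2.FakeDatabaseConfig()))

# The "TFX 0.23.0" job registers the type with an extra property.
newer = metadata_store_pb2.ArtifactType(name="Examples")
newer.properties["span"] = metadata_store_pb2.INT
newer.properties["version"] = metadata_store_pb2.INT  # property added in the newer release
store.put_artifact_type(newer, can_add_fields=True)

# The "TFX 0.21.4" job then registers the same type name without that property.
older = metadata_store_pb2.ArtifactType(name="Examples")
older.properties["span"] = metadata_store_pb2.INT
try:
    # can_add_fields only permits adding properties; omitting one that is
    # already stored makes MLMD reject the request.
    store.put_artifact_type(older, can_add_fields=True)
except Exception as e:
    print(type(e).__name__, ":", e)  # AlreadyExistsError: Type already exists with different properties.
```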
About this issue
- State: closed
- Created 4 years ago
- Comments: 30 (18 by maintainers)
Commits related to this issue
- Supports forward compatibility when evolving tfx artifact types. Previously when the new tfx releases changes artifact types [1], the old release jobs cannot run against the same mlmd backend. The ch... — committed to tensorflow/tfx by tfx-copybara 4 years ago
- Supports forward compatibility when evolving tfx artifact types. Previously when the new tfx releases changes artifact types [1], the old release jobs cannot run against the same mlmd backend. The ch... — committed to tensorflow/tfx by tfx-copybara 4 years ago
- Supports forward compatibility when evolving tfx artifact types. Previously when the new tfx releases changes artifact types [1], the old release jobs cannot run against the same mlmd backend. The ch... — committed to tensorflow/tfx by tfx-copybara 4 years ago
- Supports forward compatibility when evolving tfx artifact types. Previously when the new tfx releases changes artifact types [1], the old release jobs cannot run against the same mlmd backend. The ch... — committed to tensorflow/tfx by tfx-copybara 4 years ago
- Supports forward compatibility when evolving tfx artifact types. Previously when the new tfx releases changes artifact types [1], the old release jobs cannot run against the same mlmd backend. The ch... — committed to tensorflow/tfx by dhruvesh09 4 years ago
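The commits above describe making the old-release registration tolerant of artifact types that a newer release has already extended. As a hedged sketch of that idea only (the actual patch is in the linked commits and may work differently), newer ml-metadata releases expose a `can_omit_fields` flag on `put_artifact_type` that allows a request to leave out properties the stored type already has:

```python
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2

store = metadata_store.MetadataStore(metadata_store_pb2.ConnectionConfig(
    fake_database=metadata_store_pb2.FakeDatabaseConfig()))

# Type as registered by the newer release, with an extra "version" property.
newer = metadata_store_pb2.ArtifactType(name="Examples")
newer.properties["span"] = metadata_store_pb2.INT
newer.properties["version"] = metadata_store_pb2.INT
store.put_artifact_type(newer, can_add_fields=True)

# Older client re-registers the type without "version". With can_omit_fields
# the stored properties are kept and the call succeeds instead of raising
# AlreadyExistsError. Requires an ml_metadata release that has this argument.
older = metadata_store_pb2.ArtifactType(name="Examples")
older.properties["span"] = metadata_store_pb2.INT
store.put_artifact_type(older, can_add_fields=True, can_omit_fields=True)
```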
Sounds good. We are working on this and will do patch releases for 0.21.x and 0.22.x.
Is that an “expected” incompatibility that could be introduced again in future versions? It seems acceptable, although annoying, that within a single pipeline the upgrade is a one-way process, but for it to affect completely unrelated pipelines is quite bad.
How would I go about creating and using a parallel metadata DB?
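In case it helps while the patch releases are pending, here is a hedged sketch of one way to point one team's pipelines at a parallel metadata DB with the Kubeflow runner: start from the default `KubeflowMetadataConfig` and override the database name. The field names follow the MySQL-based config that TFX 0.21-era `get_default_kubeflow_metadata_config()` returns and may differ in releases that default to the gRPC metadata service; `metadb_tfx_023` and `my_pipeline` are placeholders.

```python
from tfx.orchestration.kubeflow import kubeflow_dag_runner

# Reuse the cluster defaults, then override only the database name so the
# 0.23.0 pipelines register their artifact types in their own MLMD schema.
metadata_config = kubeflow_dag_runner.get_default_kubeflow_metadata_config()
metadata_config.mysql_db_name.value = 'metadb_tfx_023'  # placeholder DB name

runner_config = kubeflow_dag_runner.KubeflowDagRunnerConfig(
    kubeflow_metadata_config=metadata_config)

# my_pipeline stands for your tfx.orchestration.pipeline.Pipeline object.
kubeflow_dag_runner.KubeflowDagRunner(config=runner_config).run(my_pipeline)
```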