mlflow: Tracking Server not working as a proxy for localhost
Willingness to contribute
No. I cannot contribute a bug fix at this time.
MLflow version
1.25.1
System information
- localhost: Git Bash
- Remote host: Kubernetes pod
- Artifact destination: AWS S3
- Python 3.7.2
Describe the problem
I am having a similar issue to the one posted here: https://github.com/mlflow/mlflow/issues/5659. Unfortunately, the solution provided there hasn't worked for me. When I run a modeling script on the remote host, the artifacts are stored in S3 properly. When I run the same script from localhost, I get:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I have determined that, when running on localhost, the client expects to use local AWS credentials instead of the ones configured on the Tracking Server. Hoping somebody has more suggestions of things to try or look for.
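A quick way to confirm which path the client takes is to inspect the scheme of the artifact URI the server hands back. This is a minimal sketch; the tracking URI and experiment name are placeholders:

import mlflow
from urllib.parse import urlparse

mlflow.set_tracking_uri("http://<tracking-server>:5000")  # placeholder
mlflow.set_experiment("scheme-check")  # placeholder

with mlflow.start_run():
    scheme = urlparse(mlflow.get_artifact_uri()).scheme
    print(scheme)
    # "s3" -> the client uploads directly to S3 and needs local AWS
    # credentials (which matches the NoCredentialsError above);
    # "mlflow-artifacts" (or "http"/"https") -> uploads are proxied
    # through the tracking server, which uses its own credentials.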
Tracking information
The tracking and artifact URIs look as expected; not sharing them for security reasons.
Code to reproduce issue
import matplotlib.pyplot as plt
import mlflow

mlflow.set_tracking_uri("masked")
mlflow.set_experiment("masked")

with mlflow.start_run():
    ...  # model training code elided
    plt.savefig("plot.png")
    print(mlflow.get_tracking_uri())
    print(mlflow.get_artifact_uri())
    mlflow.log_artifact("plot.png")
Other info / logs
botocore.exceptions.NoCredentialsError: Unable to locate credentials
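This error is raised client-side by botocore: the run's artifact URI points directly at s3://, so boto3 looks for credentials on the local machine. If direct S3 access from localhost is acceptable as a stopgap, credentials can be supplied through boto3's standard environment variables (placeholder values shown):

import os

# Placeholders; substitute real credentials for your AWS account.
os.environ["AWS_ACCESS_KEY_ID"] = "<access-key-id>"
os.environ["AWS_SECRET_ACCESS_KEY"] = "<secret-access-key>"
os.environ["AWS_DEFAULT_REGION"] = "<region>"

Set these before the first artifact call so boto3 picks them up when MLflow creates its S3 client.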
What component(s) does this bug affect?
- area/artifacts: Artifact stores and artifact logging
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages
- area/examples: Example code
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/pipelines: Pipelines, Pipeline APIs, Pipeline configs, Pipeline Templates
- area/projects: MLproject format, project running backends
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/server-infra: MLflow Tracking server backend
- area/tracking: Tracking Service, tracking client APIs, autologging
What interface(s) does this bug affect?
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
- area/windows: Windows support
What language(s) does this bug affect?
- language/r: R APIs and clients
- language/java: Java APIs and clients
- language/new: Proposals for new client languages
What integration(s) does this bug affect?
- integrations/azure: Azure and Azure ML integrations
- integrations/sagemaker: SageMaker integrations
- integrations/databricks: Databricks integrations
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Reactions: 2
- Comments: 68 (36 by maintainers)
@harupy
Yes, I deleted/recreated new experiments each time. The key is apparently to not use the --default-artifact-root argument, as you stated in a previous comment (I hadn't tried this before). This works for me, thanks.

@harupy that solved it. Thanks 👍
@bkolb249 Can you launch the server with this command:
and create a new experiment, and then log artifacts?
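The exact command isn't preserved above, but given the resolution (omit --default-artifact-root so the server proxies artifact access), a 1.25-era launch would look roughly like this; the backend store URI and bucket are placeholders:

mlflow server \
  --backend-store-uri postgresql://<user>:<password>@<host>/<db> \
  --serve-artifacts \
  --artifacts-destination s3://<bucket>/mlflow-artifacts \
  --host 0.0.0.0 --port 5000

With this setup, experiments created afterwards record mlflow-artifacts:/ locations, and clients upload through the server instead of needing their own AWS credentials.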