mlflow: [BUG] Artifacts are not logged on the server, but locally


System information

  • Have I written custom code (as opposed to using a stock example script provided in MLflow): no
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
  • MLflow installed from (source or binary): pip install
  • MLflow version (run mlflow --version): 1.4.0
  • Python version: 3.7.3 locally, 3.8.0 server
  • npm version, if running the dev UI:
  • Exact command to reproduce:

On the server

    mlflow server --port $PORT \
        --backend-store-uri sqlite:///${MNTP}/${TRACKING_URI}/${DB_FILE_NAME} \
        --default-artifact-root file://${MNTP}/${ARTIFACT_URI} \
        --host 0.0.0.0 \
        -w 1

and locally

    import mlflow

    remote_server_uri = "https://<uri>:<port>"
    mlflow.set_tracking_uri(remote_server_uri)
    with open('file.txt', 'w') as f:
        f.write('hello')
    mlflow.log_artifact("file.txt")

Describe the problem

The file ‘file.txt’ gets written on the client machine, at the path ${MNTP}/${ARTIFACT_URI} on the client's own filesystem. I would expect it to be stored on the server, at the path ${MNTP}/${ARTIFACT_URI} on the server's filesystem.

Other info / logs

There are no error logs. This is simply not the expected behaviour.

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Comments: 17 (2 by maintainers)

Most upvoted comments

From the docs, I was under the impression that the tracking server would handle remote artifact storage, but I’m having issues with this as well.

I got it working by specifying the artifact location when creating the experiment.

You basically just call create_experiment as described in the docs and pass artifact_location (an S3 bucket, FTP server, etc.), or set it via the CLI. Artifacts for the runs of that experiment are then sent to that location.