mlflow: artifacts not shown in UI

System information

  • Have I written custom code (as opposed to using a stock example script provided in MLflow):
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
  • MLflow installed from (source or binary): pip
  • MLflow version (run mlflow --version): mlflow, version 0.8.2
  • Python version: Python 3.6.8 :: Anaconda, Inc.
  • npm version (if running the dev UI): -
  • Exact command to reproduce: mlflow ui --file-store .

Describe the problem

I have logged artifacts and can clearly see them in the file system. /artifacts contains:

  • /images with a sample for each epoch
  • architecture.txt with a small description of the architecture.

But when I visit the mlflow-ui, it just shows: No artifacts recorded.

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Comments: 33 (1 by maintainers)

Most upvoted comments

I have the same problem: artifacts are not shown in the UI. I'm running with --backend-store-uri pointing at Postgres and --default-artifact-root set to a local directory. I do see artifacts in the specified location, but not in the UI.

I have the same problem: the artifacts exist physically, but while the UI shows the metrics and parameters of each run, it doesn't display the files from the artifacts.

I run it in Docker and Kubernetes, if that information helps.

I encountered the same issue as @geoHeil; according to this Stack Overflow answer, it's currently the expected behavior.

I was able to solve the issue by editing the meta file path. Double-check your meta file's path and make sure it is a valid path.

Ok, so I had this issue. It turns out that, in my case, it is the client that stores the artifacts, not the server. This means that if you have your artifact root set like:

artifact_location: /home/mlruns/1

… then the client will store the artifacts on its own /home/mlruns/1 folder, not on the server’s. Of course, if the server is running on a different file system than the client, then this is invisible to the server…

I assume that one must use some shared location, such as S3 or similar. (I haven't tested this yet, but it seems plausible.)

Update: this was exactly the issue:

  • it is the client that stores to the artifact store, based on the path the user gives it, e.g. S3 or whatever
    • if the server is configured with a local filesystem path, the client will try writing to that path on the client's own filesystem
  • the solution is to use some shared storage that is accessible to both client and server, e.g. S3 (this means the client needs to be allowed to write directly to this storage, of course).
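The setup described above might look like the following sketch; the Postgres connection string, bucket name, and hostname are placeholders, not values from this thread:

```shell
# Server side: run metadata in Postgres, artifacts rooted in shared storage
# (the connection string and bucket are illustrative placeholders).
mlflow server \
    --backend-store-uri postgresql://user:password@db:5432/mlflow \
    --default-artifact-root s3://my-mlflow-bucket/artifacts \
    --host 0.0.0.0

# Client side: point at the tracking server. Because the *client* uploads
# artifacts directly to S3, it needs its own S3 credentials.
export MLFLOW_TRACKING_URI=http://mlflow-server:5000
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
python train.py
```

With this layout, both the client (for writing) and the UI (for listing) resolve the same S3 location, so the artifacts appear in the UI.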

I had the same issue. I was using a relative path, so I changed it to an absolute path, and that fixed it. To check the paths:

print('tracking uri:', mlflow.get_tracking_uri())
print('artifact uri:', mlflow.get_artifact_uri())

also checked meta.yaml
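To automate that meta.yaml check, here is a minimal stdlib sketch — a run's meta.yaml is flat `key: value` lines, so plain string parsing suffices; the helper name is mine, not part of MLflow:

```python
import os

def check_meta(meta_path):
    """Report the artifact_uri recorded in a run's meta.yaml and whether
    it resolves to an existing local directory."""
    artifact_uri = None
    with open(meta_path) as f:
        for line in f:
            if line.startswith("artifact_uri:"):
                # split on the first ':' only, so 'file://...' stays intact
                artifact_uri = line.split(":", 1)[1].strip()
    if artifact_uri is None:
        return None, False
    # Strip a file:// scheme before checking the local filesystem.
    local = artifact_uri[len("file://"):] if artifact_uri.startswith("file://") else artifact_uri
    return artifact_uri, os.path.isdir(local)
```

If the second value comes back False for a run, the UI has nothing it can list for that run.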

I had the same issue when using file storage; once I switched to absolute paths, it worked perfectly.

mlflow server --backend-store-uri /private/tmp/ttt/expstore --default-artifact-root /private/tmp/ttt/expstore

When logging to mlflow:

import os
from mlflow import log_metric, log_param, log_artifact

# Log a parameter (key-value pair)
log_param("param1", 5)

# Log a metric; metrics can be updated throughout the run
log_metric("foo", 1)
log_metric("foo", 2)
log_metric("foo", 3)

# Log an artifact (output file)
with open("output.txt", "w") as f:
    f.write("Hello world!")
log_artifact("output.txt")

The lines above work great when running locally, but against the server they fail to upload the artifact: only the metrics and parameters are transferred, and the artifact is missing.
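That behavior can be simulated without a server: with a local-path artifact root, the client treats the run's artifact_uri as a path on its *own* filesystem, so the copy "succeeds" while the server's directory stays empty. A stdlib sketch of this failure mode (paths and run ID are illustrative assumptions, not MLflow internals):

```python
import os
import shutil
import tempfile

# The server hands back an artifact_uri rooted at its --default-artifact-root.
# For a plain local path, the client interprets it as a client-local path.
client_root = tempfile.mkdtemp()  # stands in for the client's filesystem
artifact_uri = os.path.join(client_root, "mlflow", "1", "run123", "artifacts")

def log_artifact(local_file, artifact_uri):
    # Mirrors what a local-file artifact repository does: copy into the path.
    os.makedirs(artifact_uri, exist_ok=True)
    shutil.copy(local_file, artifact_uri)

src = os.path.join(client_root, "output.txt")
with open(src, "w") as f:
    f.write("Hello world!")
log_artifact(src, artifact_uri)

# The file now exists on the client, but a server on another machine would
# find its own /mlflow/1/run123/artifacts empty.
print(os.listdir(artifact_uri))  # → ['output.txt']
```

This is why the run's metadata (sent over REST) shows up in the UI while the artifact (written directly by the client) does not.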

docker-compose.yml

mlflow:
   build:
     context: my-contet
     dockerfile: mlflow.Dockerfile
   container_name: mlflow
   expose: 
      - "5000"
   ports:
     - "5000:5000"

with the Dockerfile

FROM python:3.7.4

RUN pip install mlflow

RUN mkdir /mlflow/

CMD mlflow server \
    --backend-store-uri /mlflow \
    --default-artifact-root /mlflow \
    --host 0.0.0.0
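One way to make this container setup work is to bind-mount the container's /mlflow directory so that the client on the host and the server inside the container resolve the same absolute path. A sketch of the compose service with an added volumes entry (the host path /mlflow is an assumption):

```yaml
mlflow:
   build:
     context: my-contet
     dockerfile: mlflow.Dockerfile
   container_name: mlflow
   expose:
      - "5000"
   ports:
     - "5000:5000"
   volumes:
     # Bind-mount so the server's /mlflow and the host's /mlflow are the
     # same directory, letting artifacts written by either side be seen
     # by the other.
     - /mlflow:/mlflow
```

This only helps when client and server share a machine; across machines, shared storage such as S3 is still needed.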

I currently have the same problem using mlflow 1.1.0. I use the default locations for the backend and artifact store, and the artifacts appear in the file system for each run, but they do not show up in the UI when I start mlflow ui.

The training and logging are done within a Docker container; the call to the UI is done afterwards outside of Docker.

EDIT: Just inspected the meta.yaml of a specific run and found out that the artifact_uri was set to an invalid path!
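When a run's meta.yaml carries a stale artifact_uri, one repair is to rewrite that line in place. A hedged stdlib sketch — the helper is hypothetical, not an MLflow API, and it keeps a .bak copy of the original file first:

```python
import shutil

def fix_artifact_uri(meta_path, new_uri):
    """Rewrite the artifact_uri line of a run's meta.yaml in place,
    keeping a .bak backup. Treats meta.yaml as flat 'key: value' lines."""
    shutil.copy(meta_path, meta_path + ".bak")  # back up before editing
    with open(meta_path) as f:
        lines = f.readlines()
    with open(meta_path, "w") as f:
        for line in lines:
            if line.startswith("artifact_uri:"):
                f.write("artifact_uri: " + new_uri + "\n")
            else:
                f.write(line)
```

Restart the UI afterwards; it reads meta.yaml on listing, so the corrected path takes effect immediately.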