mlflow: artifacts not shown in UI
System information
- Have I written custom code (as opposed to using a stock example script provided in MLflow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- MLflow installed from (source or binary): pip
- MLflow version (run `mlflow --version`): mlflow, version 0.8.2
- Python version: Python 3.6.8 :: Anaconda, Inc.
- npm version (if running the dev UI): -
- Exact command to reproduce: `mlflow ui --file-store .`
Describe the problem
I have logged artifacts and can clearly see them in the file system. `/artifacts` contains:
- `/images` with a sample for each epoch
- `architecture.txt` with a small description of the architecture
But when I visit the MLflow UI, it just shows: No artifacts recorded.
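For context, a stdlib-only sketch of why this can happen (directory names and the stale URI below are hypothetical): with the file store, the UI locates artifacts through the `artifact_uri` recorded in each run's `meta.yaml`, not by scanning the run directory, so files that exist on disk are still reported as "No artifacts recorded" when that URI does not resolve.

```python
import os
import tempfile

# Build a minimal file-store layout like the one mlflow creates:
# experiment 0, one run, a meta.yaml, and an artifacts directory.
root = tempfile.mkdtemp()
run_dir = os.path.join(root, "0", "abc123")
artifacts = os.path.join(run_dir, "artifacts")
os.makedirs(artifacts)
open(os.path.join(artifacts, "architecture.txt"), "w").close()

# meta.yaml points at a path from another machine (hypothetical).
with open(os.path.join(run_dir, "meta.yaml"), "w") as f:
    f.write("run_uuid: abc123\n")
    f.write("artifact_uri: /some/old/machine/mlruns/0/abc123/artifacts\n")

def artifact_uri(meta_path):
    """Extract artifact_uri from meta.yaml without a YAML parser."""
    with open(meta_path) as f:
        for line in f:
            if line.startswith("artifact_uri:"):
                return line.split(":", 1)[1].strip()
    return None

uri = artifact_uri(os.path.join(run_dir, "meta.yaml"))
print(os.path.isdir(artifacts))  # the files really are on disk
print(os.path.isdir(uri))        # but the recorded URI is dead
```

The UI trusts `uri`, so the run shows no artifacts even though `artifacts/` is populated.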
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Comments: 33 (1 by maintainers)
I have the same problem: artifacts are not shown in the UI. I'm running with a Postgres `--backend-store-uri`, and `--default-artifact-root` is a local directory. I do see artifacts in the specified location, but not in the UI.
I have the same problem: the artifacts exist physically, but while the UI shows the metrics and parameters of each run, it doesn't display the files from the artifacts.
I run it in Docker and Kubernetes, if that information helps.
I encounter the same issue as @geoHeil; according to this Stack Overflow answer, it's currently the expected behavior.
I was able to solve the issue by editing the meta file path. Double-check your meta file's path and make sure it is valid.
Ok, so I had this issue. It turns out that what was happening in my case is that it is the client that stores the artifacts, not the server. This means that if you have your artifact root set like:
… then the client will store the artifacts in its own `/home/mlruns/1` folder, not on the server's. Of course, if the server is running on a different file system than the client, then this is invisible to the server. I assume one must use some shared location, such as S3 or similar (haven't tested this yet, but it seems plausible).
Update: this was exactly the issue:
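The client-versus-server mismatch can be illustrated with stdlib alone (the two working directories below are hypothetical): a relative artifact root is resolved against whichever process interprets it, so the client and the server end up looking at different directories unless the root is absolute and shared.

```python
import os

# A relative artifact root such as "mlruns/1" is resolved against the
# current working directory of whichever process reads it.
rel_root = "mlruns/1"

client_cwd = "/home/user/project"  # where the training client runs
server_cwd = "/srv/mlflow"         # where the tracking server runs

client_view = os.path.join(client_cwd, rel_root)
server_view = os.path.join(server_cwd, rel_root)

print(client_view)  # /home/user/project/mlruns/1
print(server_view)  # /srv/mlflow/mlruns/1

# The two processes resolve the same relative root to different
# directories, which is why a shared store (e.g. s3://...) or an
# absolute path visible to both sides is needed.
```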
I had the same issue. I used a relative path, so I changed it to an absolute path, and that fixed it. To check the paths, I also checked `meta.yaml`.
I had the same issue when using the file storage; once I switched to absolute paths, it worked perfectly.
When logging to mlflow:
The lines above work great when running locally, but they fail to upload the artifact: only the metadata and parameters are transferred, and the artifact is missing.
docker-compose.yml
with DOCKERFILE
I currently have the same problem using mlflow 1.1.0. I use the default locations for the backend and artifact store, and the artifacts appear in the file system for each run. But they do not show up in the UI when I start `mlflow ui`. The training and logging is done within a Docker container; the call to the UI is done afterwards, outside of Docker.
EDIT: Just inspected the `meta.yaml` in a specific run and found out that the `artifact_uri` was set to an invalid path!