mlflow: [BUG] meta.yaml not being created when using SQLite as tracking URI

Issues Policy acknowledgement

  • I have read and agree to submit bug reports in accordance with the issues policy

Willingness to contribute

Yes. I would be willing to contribute a fix for this bug with guidance from the MLflow community.

MLflow version

mlflow, version 1.28.0

System information

  • OS: Windows 10, 64-bit
  • Python version: 3.9

Describe the problem

The code provided below is what I am using. When I run it, the meta.yaml file is not created at all, so I am unable to use the mlflow ui command or serve the model via mlflow models serve.

I can access the MLflow server and register models there, but that's it; there is no way to serve the models.
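
For context, a quick way to confirm where the run metadata actually went is to query the SQLite backend directly. This is a minimal sketch, assuming the tracking database is the mlflow.db file created by the script below; with a SQL backend, the experiment metadata lives in the database while ./mlruns only receives the artifact files, which is likely why no meta.yaml appears there.

from mlflow.tracking import MlflowClient

# Query the SQLite backend directly rather than the ./mlruns file store
client = MlflowClient(tracking_uri="sqlite:///mlflow.db")
experiment = client.get_experiment_by_name("Logistic Regression")
print(experiment.experiment_id, experiment.artifact_location)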

Tracking information

No response

Code to reproduce issue


import warnings
import logging

import numpy as np
import pandas as pd

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn import metrics

import mlflow
import mlflow.sklearn

from preprocess import preprocess

# Point the tracking store at a local SQLite database
mlflow.set_tracking_uri("sqlite:///mlflow.db")

logging.basicConfig(level=logging.WARN)
logger = logging.getLogger(__name__)

experiment = mlflow.set_experiment(experiment_name="Logistic Regression")
print("Experiment_id: {}".format(experiment.experiment_id))

X, y = preprocess()  # any classification data in (features, labels) format

if __name__ == "__main__":
    warnings.filterwarnings("ignore")
    np.random.seed(40)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=3)

    with mlflow.start_run(run_name="Train Run") as run:
        log_reg = LogisticRegression(random_state=3, penalty="elasticnet", solver="saga", l1_ratio=0.5)
        log_reg.fit(X_train, y_train)

        y_pred = log_reg.predict(X_test)
        acc_log_clf = metrics.accuracy_score(y_test, y_pred)
        mlflow.log_metric("Accuracy", acc_log_clf)
        print("Accuracy score:", acc_log_clf)

        # Log the trained model and register it in the model registry
        mlflow.sklearn.log_model(
            sk_model=log_reg,
            artifact_path="sklearn-model",
            registered_model_name="sk-learn-logistic-reg-model",
        )
        print("Model saved in run %s" % mlflow.active_run().info.run_id)


Stack trace

mlflow ui
INFO:waitress:Serving on http://127.0.0.1:5000
WARNING:root:Malformed experiment '1'. Detailed error Yaml file '.\mlruns\1\meta.yaml' does not exist.
Traceback (most recent call last):
  File "C:\Python Scripts\mlops\env\lib\site-packages\mlflow\store\tracking\file_store.py", line 270, in list_experiments
    experiment = self._get_experiment(exp_id, view_type)
  File "C:\Python Scripts\mlops\env\lib\site-packages\mlflow\store\tracking\file_store.py", line 394, in _get_experiment
    meta = FileStore._read_yaml(experiment_dir, FileStore.META_DATA_FILE_NAME)
  File "C:\Python Scripts\mlops\env\lib\site-packages\mlflow\store\tracking\file_store.py", line 1049, in _read_yaml
    return _read_helper(root, file_name, attempts_remaining=retries)
  File "C:\Python Scripts\mlops\env\lib\site-packages\mlflow\store\tracking\file_store.py", line 1042, in _read_helper
    result = read_yaml(root, file_name)
  File "C:\Python Scripts\mlops\env\lib\site-packages\mlflow\utils\file_utils.py", line 181, in read_yaml
    raise MissingConfigException("Yaml file '%s' does not exist." % file_path)
mlflow.exceptions.MissingConfigException: Yaml file '.\mlruns\1\meta.yaml' does not exist.

Other info / logs

No response

What component(s) does this bug affect?

  • area/artifacts: Artifact stores and artifact logging
  • area/build: Build and test infrastructure for MLflow
  • area/docs: MLflow documentation pages
  • area/examples: Example code
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
  • area/projects: MLproject format, project running backends
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/server-infra: MLflow Tracking server backend
  • area/tracking: Tracking Service, tracking client APIs, autologging

What interface(s) does this bug affect?

  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/docker: Docker use across MLflow’s components, such as MLflow Projects and MLflow Models
  • area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
  • area/windows: Windows support

What language(s) does this bug affect?

  • language/r: R APIs and clients
  • language/java: Java APIs and clients
  • language/new: Proposals for new client languages

What integration(s) does this bug affect?

  • integrations/azure: Azure and Azure ML integrations
  • integrations/sagemaker: SageMaker integrations
  • integrations/databricks: Databricks integrations

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Comments: 19 (6 by maintainers)

Most upvoted comments

To consolidate, in case someone is having similar issues on Windows: use

set MLFLOW_TRACKING_URI=sqlite:///mlflow.db 

Followed by

mlflow models serve --env-manager local --model-uri runs:/%run-id%/%artifact-model-name-used-while-saving%

Also, to access the mlflow ui while using SQLite as the backend store:


mlflow ui --backend-store-uri sqlite:///mlflow.db
mlflow models serve --env-manager local --model-uri runs:/e2fc4c05beea4b9a89096037b58921cd/sklearn-model
2022/12/07 11:46:59 INFO mlflow.models.flavor_backend_registry: Selected backend for flavor 'python_function'
2022/12/07 11:46:59 INFO mlflow.pyfunc.backend: === Running command 'waitress-serve --host=127.0.0.1 --port=5000 --ident=mlflow mlflow.pyfunc.scoring_server.wsgi:app'
INFO:waitress:Serving on http://127.0.0.1:5000

It's working; now I need to test it out. Thank you @harupy @WeichenXu123
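
Once the scoring server is up, it can be queried over the /invocations endpoint. A minimal sketch in Python, assuming MLflow 1.x payload conventions and placeholder column names (MLflow 2.x instead expects the frame wrapped in a "dataframe_split" key):

import requests

# Placeholder feature names/values; use the columns produced by preprocess()
payload = {
    "columns": ["feature_1", "feature_2"],
    "data": [[0.1, 0.2]],
}

resp = requests.post(
    "http://127.0.0.1:5000/invocations",
    json=payload,  # for MLflow 2.x, send {"dataframe_split": payload} instead
)
print(resp.status_code, resp.text)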

@ArchanGhosh Never mind the comment above. When running mlflow ui, you need to specify the tracking URI using --backend-store-uri like:

mlflow ui --backend-store-uri sqlite:///mlflow.db