mlflow: [BUG] Error when opening the registered-models page in the UI when using the file store backend
Thank you for submitting an issue. Please refer to our issue policy for additional information about bug reports. For help with debugging your code, please refer to Stack Overflow.
Please fill in this bug report template to ensure a timely and thorough response.
## Willingness to contribute
The MLflow Community encourages bug fix contributions. Would you or another member of your organization be willing to contribute a fix for this bug to the MLflow code base?
- Yes. I can contribute a fix for this bug independently.
- Yes. I would be willing to contribute a fix for this bug with guidance from the MLflow community.
- No. I cannot contribute a bug fix at this time.
## System information

- Have I written custom code (as opposed to using a stock example script provided in MLflow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- MLflow installed from (source or binary):
- MLflow version (run `mlflow --version`):
- Python version:
- npm version, if running the dev UI:
- Exact command to reproduce:
## Describe the problem
Describe the problem clearly here. Include descriptions of the expected behavior and the actual behavior.
When I tried to open the registered-models page in the UI, I got an error.
The error is caused by a failed API request to list registered models (`/2.0/preview/mlflow/registered-models/list`); the response is:
```json
{
  "error_code": "INVALID_PARAMETER_VALUE",
  "message": "Invalid value for request parameter max_results.It must be at most 1000, but got value 50000"
}
```
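The 400 response above suggests the server rejects any `max_results` over the cap before querying the store. A minimal JavaScript illustration of that check (the real validation lives in MLflow's Python backend; the constant name is taken from the description below, and the function name here is hypothetical):

```javascript
// Illustration only: sketch of server-side validation that would
// produce the INVALID_PARAMETER_VALUE error shown above.
const SEARCH_REGISTERED_MODEL_MAX_RESULTS_DEFAULT = 1000; // assumed cap

function validateMaxResults(maxResults) {
  if (maxResults > SEARCH_REGISTERED_MODEL_MAX_RESULTS_DEFAULT) {
    const err = new Error(
      `Invalid value for request parameter max_results. It must be at most ` +
        `${SEARCH_REGISTERED_MODEL_MAX_RESULTS_DEFAULT}, but got value ${maxResults}`
    );
    err.error_code = 'INVALID_PARAMETER_VALUE';
    throw err;
  }
}
```

With this check in place, a request with `max_results: 50000` fails exactly as the UI observed, while `max_results: 1000` passes.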
It looks like `list_registered_models` calls `search_registered_models` with `max_results = 50000`, which exceeds `SEARCH_REGISTERED_MODEL_MAX_RESULTS_DEFAULT` (= 1000). I'm not sure if this is the right approach, but specifying `max_results` in `model-registry/actions.js` resolves the error:
```js
// model-registry/actions.js
export const listRegisteredModelsApi = (id = getUUID()) => ({
  type: LIST_REGISTRED_MODELS,
  payload: wrapDeferred(Services.listRegisteredModels, { max_results: 1000 }),
  meta: { id },
});
```
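Capping `max_results` at 1000 means registries with more than 1000 models would be truncated. A possible follow-up, sketched below under the assumption that the list endpoint supports `page_token`/`next_page_token` pagination, would be to page through results instead (`listFn` stands in for `Services.listRegisteredModels`; this is not the actual MLflow fix):

```javascript
// Hypothetical sketch: fetch all registered models in pages that
// respect the server-side cap, rather than requesting 50000 at once.
// `listFn` is assumed to accept { max_results, page_token } and
// resolve to { registered_models, next_page_token }.
const MAX_RESULTS_PER_PAGE = 1000; // server-side cap from the error message

async function fetchAllRegisteredModels(listFn) {
  const models = [];
  let pageToken;
  do {
    const data = await listFn({
      max_results: MAX_RESULTS_PER_PAGE,
      ...(pageToken ? { page_token: pageToken } : {}),
    });
    models.push(...(data.registered_models || []));
    pageToken = data.next_page_token;
  } while (pageToken);
  return models;
}
```

The loop stops when the server omits `next_page_token`, so a single-page registry costs exactly one request.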
## Code to reproduce issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem.
## Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
## What component(s), interfaces, languages, and integrations does this bug affect?

### Components

- [ ] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [ ] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: Local serving, model deployment tools, spark UDFs
- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging

### Interface

- [ ] `area/uiux`: Front-end, user experience, JavaScript, plotting
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support

### Language

- [ ] `language/r`: R APIs and clients
- [ ] `language/java`: Java APIs and clients

### Integrations

- [ ] `integrations/azure`: Azure and Azure ML integrations
- [ ] `integrations/sagemaker`: SageMaker integrations
## About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 1
- Comments: 15 (9 by maintainers)
@harupy Confirmed this just now - installing mlflow 1.9.0 and running the UI doesn’t show any error on the registered models page for me.
I’ll track this issue in the UI PR and close when that merges. Thanks (and let me know if I’m missing some particular configuration here)!
Thanks @cafeal! I’ll take a look when I have some time. Don’t worry about being new at it! We’re very thankful for contributions.
I think, for the time being, proceeding with a frontend fix would make sense.
That sounds like a better thing to do @harupy! If you end up making a PR for this soon, I’d be happy to review