mlflow: [BUG] Error with `mlflow models serve --no-conda` under Windows 10

Willingness to contribute

The MLflow Community encourages bug fix contributions. Would you or another member of your organization be willing to contribute a fix for this bug to the MLflow code base?

  • Yes. I can contribute a fix for this bug independently.
  • Yes. I would be willing to contribute a fix for this bug with guidance from the MLflow community.
  • No. I cannot contribute a bug fix at this time.

System information

  • Have I written custom code (as opposed to using a stock example script provided in MLflow): NO
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
  • MLflow installed from (source or binary): Binary
  • MLflow version (run mlflow --version): 1.10.0
  • Python version: 3.8.5
  • npm version, if running the dev UI:
  • Exact command to reproduce: mlflow models serve -m runs:/75614813307443a48a8c6fb80b9959d5/model --no-conda

Describe the problem

Expected behavior: mlflow models serve starts a local scoring server for the model. Actual behavior: adding the switch --no-conda to mlflow models serve fails with a TypeError (full traceback below).

Code to reproduce issue

Generate a trained model on Linux and copy it over to Windows 10 (see #3331: mlflow run --no-conda works well on Windows 10, but the trained model is not saved, with or without the --no-conda switch): mlflow run https://github.com/mlflow/mlflow-example.git -P alpha=5.0 --no-conda

Serve trained model on Windows 10: mlflow models serve -m runs:/75614813307443a48a8c6fb80b9959d5/model --no-conda

Other info / logs

Full traceback:

2020/08/26 09:40:47 INFO mlflow.models.cli: Selected backend for flavor 'python_function'
2020/08/26 09:40:47 INFO mlflow.pyfunc.backend: === Running command 'waitress-serve --host=127.0.0.1 --port=5000 --ident=mlflow mlflow.pyfunc.scoring_server.wsgi:app'
Traceback (most recent call last):
  File "z:\miniconda3\envs\autorouting_v1\lib\runpy.py", line 195, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "z:\miniconda3\envs\autorouting_v1\lib\runpy.py", line 88, in _run_code
    exec(code, run_globals)
  File "Z:\miniconda3\envs\autorouting_v1\Scripts\mlflow.exe\__main__.py", line 7, in <module>
  File "z:\miniconda3\envs\autorouting_v1\lib\site-packages\click\core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "z:\miniconda3\envs\autorouting_v1\lib\site-packages\click\core.py", line 782, in main
    rv = self.invoke(ctx)
  File "z:\miniconda3\envs\autorouting_v1\lib\site-packages\click\core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "z:\miniconda3\envs\autorouting_v1\lib\site-packages\click\core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "z:\miniconda3\envs\autorouting_v1\lib\site-packages\click\core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "z:\miniconda3\envs\autorouting_v1\lib\site-packages\click\core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "z:\miniconda3\envs\autorouting_v1\lib\site-packages\mlflow\models\cli.py", line 55, in serve
    return _get_flavor_backend(model_uri,
  File "z:\miniconda3\envs\autorouting_v1\lib\site-packages\mlflow\pyfunc\backend.py", line 98, in serve
    subprocess.Popen([command.split(" ")], env=command_env).wait()
  File "z:\miniconda3\envs\autorouting_v1\lib\subprocess.py", line 854, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "z:\miniconda3\envs\autorouting_v1\lib\subprocess.py", line 1247, in _execute_child
    args = list2cmdline(args)
  File "z:\miniconda3\envs\autorouting_v1\lib\subprocess.py", line 549, in list2cmdline
    for arg in map(os.fsdecode, seq):
  File "z:\miniconda3\envs\autorouting_v1\lib\os.py", line 818, in fsdecode
    filename = fspath(filename)  # Does type-checking of `filename`.
TypeError: expected str, bytes or os.PathLike object, not list

What component(s), interfaces, languages, and integrations does this bug affect?

Components

  • area/artifacts: Artifact stores and artifact logging
  • area/build: Build and test infrastructure for MLflow
  • area/docs: MLflow documentation pages
  • area/examples: Example code
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/projects: MLproject format, project running backends
  • area/scoring: Local serving, model deployment tools, spark UDFs
  • area/server-infra: MLflow server, JavaScript dev server
  • area/tracking: Tracking Service, tracking client APIs, autologging

Interface

  • area/uiux: Front-end, user experience, JavaScript, plotting
  • area/docker: Docker use across MLflow’s components, such as MLflow Projects and MLflow Models
  • area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
  • area/windows: Windows support

Language

  • language/r: R APIs and clients
  • language/java: Java APIs and clients
  • language/new: Proposals for new client languages

Integrations

  • integrations/azure: Azure and Azure ML integrations
  • integrations/sagemaker: SageMaker integrations
  • integrations/databricks: Databricks integrations

About this issue

  • Original URL
  • State: open
  • Created 4 years ago
  • Reactions: 1
  • Comments: 20 (9 by maintainers)

Most upvoted comments

According to the documentation of subprocess.Popen:

On Windows, if args is a sequence, it will be converted to a string in a manner described in Converting an argument sequence to a string on Windows. This is because the underlying CreateProcess() operates on strings.

The error under --no-conda is caused by line 98 of mlflow/pyfunc/backend.py (the serve method, mlflow 1.10.0):

subprocess.Popen([command.split(" ")], env=command_env).wait()

where command is a string such as waitress-serve --host=127.0.0.1 --port=5000 --ident=mlflow mlflow.pyfunc.scoring_server.wsgi:app. Note the extra brackets: command.split(" ") is already a list, so wrapping it in another list yields a one-element sequence whose element is a list rather than a string. A workaround is to pass the string command directly to subprocess.Popen():

subprocess.Popen(command, env=command_env).wait()
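The failure can be reproduced without starting a server. On Windows, subprocess.Popen converts a sequence of arguments with subprocess.list2cmdline(), which os.fsdecode()s each element and therefore requires every element to be a string; the nested list produced by the extra brackets triggers the TypeError in the traceback. A minimal sketch (the command string is copied from the log above; this only exercises the argument conversion, it does not launch waitress):

```python
import subprocess

command = ("waitress-serve --host=127.0.0.1 --port=5000 "
           "--ident=mlflow mlflow.pyfunc.scoring_server.wsgi:app")

# Buggy form from backend.py: the extra brackets wrap the argument
# list in another list, so the single "argument" is a list, not a str.
bad_args = [command.split(" ")]

try:
    # list2cmdline() maps os.fsdecode over the sequence and rejects
    # the nested list, exactly as in the traceback above.
    subprocess.list2cmdline(bad_args)
except TypeError as exc:
    print(exc)  # expected str, bytes or os.PathLike object, not list

# Fixed form: drop the extra brackets so each element is a string ...
good_args = command.split(" ")
print(subprocess.list2cmdline(good_args) == command)  # True
# ... or, as in the workaround above, pass the command string itself:
# subprocess.Popen(command, env=command_env).wait()
```

Since none of the individual arguments contain spaces, list2cmdline() round-trips the fixed argument list back to the original command string unchanged.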

This workaround was successfully tested with:

  • Python 3.5, 3.6, 3.7, and 3.8
  • Anaconda Prompt and Anaconda Powershell Prompt

This should be fixed now, since the pull request was accepted and merged. Big thanks to @harupy!

When will this be fixed? It seems to me that all that's needed is for the workaround of @mpbrigham to be merged into the repo. I just ran into this bug, and the workaround also fixed the issue for me.