model_analyzer: Fail to analyze ensemble model: "inference.ModelConfig" should not have multiple "scheduling_choice" oneof fields
When I use model-analyzer to analyze an ensemble model in local launch mode, it always fails with the following error:
root@dl:/inference# model-analyzer profile --checkpoint-directory checkpoints -m $PWD/model_repo --profile-models quartznet-ensemble --output-model-repository-path=/output_repo/temp --override-output-model-repository --client-protocol grpc --run-config-search-max-concurrency 800 --run-config-search-max-instance-count 2 --run-config-search-max-preferred-batch-size 64
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/google/protobuf/json_format.py", line 538, in _ConvertFieldValuePair
raise ParseError('Message type "{0}" should not have multiple '
google.protobuf.json_format.ParseError: Message type "inference.ModelConfig" should not have multiple "scheduling_choice" oneof fields.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/model-analyzer", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.8/dist-packages/model_analyzer/entrypoint.py", line 315, in main
analyzer.profile(client=client)
File "/usr/local/lib/python3.8/dist-packages/model_analyzer/analyzer.py", line 104, in profile
self._model_manager.run_model(model=model)
File "/usr/local/lib/python3.8/dist-packages/model_analyzer/model_manager.py", line 84, in run_model
self._run_model_with_search(model)
File "/usr/local/lib/python3.8/dist-packages/model_analyzer/model_manager.py", line 138, in _run_model_with_search
self._run_model_config_sweep(model, search_model_config=True)
File "/usr/local/lib/python3.8/dist-packages/model_analyzer/model_manager.py", line 167, in _run_model_config_sweep
self._run_config_generator.generate_run_config_for_model_sweep(
File "/usr/local/lib/python3.8/dist-packages/model_analyzer/config/run/run_config_generator.py", line 98, in generate_run_config_for_model_sweep
model_config = ModelConfig.create_from_dictionary(
File "/usr/local/lib/python3.8/dist-packages/model_analyzer/triton/model/model_config.py", line 117, in create_from_dictionary
protobuf_message = json_format.ParseDict(model_dict,
File "/usr/local/lib/python3.8/dist-packages/google/protobuf/json_format.py", line 454, in ParseDict
parser.ConvertMessage(js_dict, message)
File "/usr/local/lib/python3.8/dist-packages/google/protobuf/json_format.py", line 485, in ConvertMessage
self._ConvertFieldValuePair(value, message)
File "/usr/local/lib/python3.8/dist-packages/google/protobuf/json_format.py", line 599, in _ConvertFieldValuePair
raise ParseError(str(e))
google.protobuf.json_format.ParseError: Message type "inference.ModelConfig" should not have multiple "scheduling_choice" oneof fields.
The model repository I used can be downloaded here.
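For context on what the parser is rejecting: in Triton's model_config.proto, `dynamic_batching`, `sequence_batching`, and `ensemble_scheduling` are all members of the `scheduling_choice` oneof, so a config dict passed to `json_format.ParseDict` may set at most one of them. The sketch below is a hypothetical pre-check (not part of model-analyzer) that reproduces the same rule in plain Python, so you can see which fields in a generated config conflict before the protobuf parser raises:

```python
# Hypothetical helper: spot why json_format.ParseDict would reject a
# model config dict. In Triton's model_config.proto, these three fields
# belong to the "scheduling_choice" oneof, so at most one may be set.
SCHEDULING_ONEOF = ("dynamic_batching", "sequence_batching", "ensemble_scheduling")

def conflicting_scheduling_fields(model_dict):
    """Return the scheduling_choice oneof members present in the dict."""
    return [f for f in SCHEDULING_ONEOF if f in model_dict]

# Example: an ensemble config to which a dynamic_batching section has
# also been added (as model-analyzer's config sweep did here) trips the
# protobuf oneof check, matching the ParseError in the traceback above.
config = {
    "name": "quartznet-ensemble",
    "platform": "ensemble",
    "ensemble_scheduling": {"step": []},
    "dynamic_batching": {},
}
print(conflicting_scheduling_fields(config))
```

If the helper returns more than one field name, `json_format.ParseDict` on `inference.ModelConfig` will raise the same "should not have multiple scheduling_choice oneof fields" error shown in the traceback.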
About this issue
- State: closed
- Created 3 years ago
- Reactions: 2
- Comments: 16 (5 by maintainers)
23.02 has been released. Ensemble support is available.
@jmuppave Ensemble model support is in the repository, but was not in the 23.01 release. It will be in the 23.02 release, which should come out in the next few days.
@dhaval24 it’s on our short-term roadmap since it is a high-priority feature. I can share a more granular timeline with you over the email thread we have.
@okanlv We don’t have any updates regarding the ensemble support. We’ll update this issue as soon as more information is available.