label-studio-ml-backend: get_result_from_job_id AssertionError while initializing redeployed LS/ML NER backend
I am trying to deploy the NER example model trained on my local machine, along with the Label Studio project, to another machine. I’ve gone through the following steps:
- Recreated the Label Studio and ML Backend environments on the target machine, matching the original setup
- Copied the folder containing the model itself (a folder named with just integers) into the target machine's ML Backend folder
- Exported the project content (data, annotations and predictions) through the Label Studio API in JSON format (using the ...export?exportType=JSON&download_all_tasks=true endpoint; see the sketch after this list)
- Imported the project JSON file into the newly created Label Studio project
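Roughly, the export/import round trip looked like the following; this is only a minimal sketch using Python requests, where the host, API token and project IDs are placeholders and the exact endpoint paths may differ between Label Studio versions:

```python
import requests

LS_URL = "http://localhost:8080"      # placeholder host
TOKEN = "<your-api-token>"            # placeholder API token
SRC_PROJECT, DST_PROJECT = 1, 1       # placeholder project IDs
HEADERS = {"Authorization": f"Token {TOKEN}"}

# Export all tasks (data, annotations, predictions) from the source project as JSON.
resp = requests.get(
    f"{LS_URL}/api/projects/{SRC_PROJECT}/export",
    params={"exportType": "JSON", "download_all_tasks": "true"},
    headers=HEADERS,
)
resp.raise_for_status()
with open("project_export.json", "wb") as f:
    f.write(resp.content)

# Import the exported file into the newly created project on the target instance.
with open("project_export.json", "rb") as f:
    resp = requests.post(
        f"{LS_URL}/api/projects/{DST_PROJECT}/import",
        files={"file": ("project_export.json", f, "application/json")},
        headers=HEADERS,
    )
resp.raise_for_status()
```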
When trying to initialize and pair LS and the ML Backend on the new machine, I am getting:
```
[2022-05-30 10:18:56,133] [ERROR] [label_studio_ml.model::get_result_from_last_job::128] 1647350146 job returns exception:
Traceback (most recent call last):
  File "/Users/user/Projects/label-studio-ml-backend/label_studio_ml/model.py", line 126, in get_result_from_last_job
    result = self.get_result_from_job_id(job_id)
  File "/Users/user/Projects/label-studio-ml-backend/label_studio_ml/model.py", line 108, in get_result_from_job_id
    assert isinstance(result, dict)
AssertionError
```

and it keeps repeating for each job.
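Judging from the traceback, the failing path roughly loads job_result.json from the job directory and asserts that the result is a dict, so every cached job whose result file is not present on the new machine trips the assert. This is only my paraphrase, not the actual code in label_studio_ml/model.py; the real methods take a job_id and live on the model class:

```python
import json
import os

# Rough paraphrase of the failing code path, reconstructed from the traceback
# above -- not the library source, which may differ between versions.

def _get_result_from_job_id(job_dir):
    result_file = os.path.join(job_dir, "job_result.json")
    if not os.path.exists(result_file):   # missing result file -> the caller gets None
        return None
    with open(result_file) as f:
        return json.load(f)

def get_result_from_job_id(job_dir):
    result = _get_result_from_job_id(job_dir)
    assert isinstance(result, dict)       # the AssertionError from the traceback
    return result
```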
Should any additional steps be performed when deploying the project/model to another environment?
I’ve tried the following LS versions (1.1.1, my initial one, and 1.4.1post1, the most recent one) with the most current ML Backend code base. Both source and target environments use Python 3.8 on macOS.
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Comments: 23 (10 by maintainers)
Hi @TrueWodzu
Yes, but the intention of this error is to show a message in case anybody tries to load a model that wasn’t trained successfully. I will add a flag so anybody can ignore such errors in the future.
Hello, I have the same problem:
I’ve created a custom backend based on the example in mmdetection.py. I don’t use active learning (I think; I did not set that up). Every time I switch to the next image for annotation, I get this output in the console:
The exception is raised because _get_result_from_job_id returns None, which happens because os.path.exists(result_file) returns False:
What is strange to me is that I do have a job_result.json file in the required directory, but perhaps it is not there yet when the check occurs? It must be created later. The contents of the file is an empty JSON.
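To narrow it down, I ran a small standalone check over the backend's work directory (the path is a placeholder, pass your own); it reports, for each numeric job folder, whether job_result.json exists and whether it parses to a non-empty dict:

```python
import json
import os
import sys

# Pass the ML backend's work directory as the first argument (placeholder default: ".").
model_dir = sys.argv[1] if len(sys.argv) > 1 else "."

for name in sorted(os.listdir(model_dir)):
    job_dir = os.path.join(model_dir, name)
    if not os.path.isdir(job_dir) or not name.isdigit():
        continue
    result_file = os.path.join(job_dir, "job_result.json")
    if not os.path.exists(result_file):
        print(f"{name}: job_result.json MISSING")
        continue
    try:
        with open(result_file) as f:
            result = json.load(f)
    except (json.JSONDecodeError, OSError) as exc:
        print(f"{name}: job_result.json unreadable ({exc})")
        continue
    status = "looks fine" if isinstance(result, dict) and result else "empty or unexpected"
    print(f"{name}: job_result.json {status} -> {result!r}")
```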
And here is my predict() method:
But I don't think this is due to predict(); as I've said earlier, the check if not os.path.exists(result_file): is failing for some reason.

I run into the exact same problem with my custom backend.
I am in the process of upgrading my system to the latest LS and backend. Everything was working fine with LS 1.1.1 and the backend from a year ago.
After training, another job is sent for some reason, and then train_output is cleared, causing the backend to lose the knowledge of the last trained model. I already set LABEL_STUDIO_ML_BACKEND_V2_DEFAULT = True.
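As a stopgap I am considering something like the sketch below (untested, and it assumes the older LabelStudioMLBase API where self.train_output is populated from the last training job; MyBackend and MODEL_ROOT are my own placeholder names): if train_output arrives empty, fall back to the newest job folder that still contains a job_result.json.

```python
import json
import os

from label_studio_ml.model import LabelStudioMLBase


class MyBackend(LabelStudioMLBase):
    # Placeholder: directory where the numeric job folders live.
    MODEL_ROOT = os.path.dirname(os.path.abspath(__file__))

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Workaround: if the server sends an empty train_output, recover the
        # newest job result from disk instead of forgetting the trained model.
        if not self.train_output:
            self.train_output = self._latest_job_result() or {}

    def _latest_job_result(self):
        candidates = []
        for name in os.listdir(self.MODEL_ROOT):
            result_file = os.path.join(self.MODEL_ROOT, name, "job_result.json")
            if name.isdigit() and os.path.exists(result_file):
                candidates.append((os.path.getmtime(result_file), result_file))
        if not candidates:
            return None
        with open(max(candidates)[1]) as f:
            return json.load(f)

    def predict(self, tasks, **kwargs):
        # ... load weights from self.train_output (e.g. a stored model path) ...
        return []
```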