MONAILabel: RuntimeError when "brats_mri_segmentation_v0.2.1" from monaibundle is used.
Describe the bug The MONAI Label server gives the following error when “brats_mri_segmentation_v0.2.1” is used for brain tumor segmentation.
RuntimeError: Given groups=1, weight of size [16, 4, 3, 3, 3], expected input[1, 240, 240, 240, 160] to have 4 channels, but got 240 channels instead
To Reproduce Steps to reproduce the behavior:
- pip install monailabel
- monailabel apps --download --name monaibundle --output apps
- monailabel datasets --download --name Task01_BrainTumour --output datasets
- monailabel start_server --app apps/monaibundle --studies datasets/Task01_BrainTumour/imagesTr --conf models brats_mri_segmentation_v0.2.1
- Run the model in 3D Slicer with any image from the dataset.
Expected behavior Segmentation should be displayed in 3D Slicer.
Environment
Ensuring you use the relevant python executable, please paste the output of:
python -c 'import monai; monai.config.print_debug_info()'
================================
Printing MONAI config...
================================
MONAI version: 1.0.0
Numpy version: 1.22.4
Pytorch version: 1.12.1+cpu
MONAI flags: HAS_EXT = False, USE_COMPILED = False, USE_META_DICT = False
MONAI rev id: 170093375ce29267e45681fcec09dfa856e1d7e7
MONAI __file__: C:\Users\Admin\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\__init__.py
Optional dependencies:
Pytorch Ignite version: 0.4.10
Nibabel version: 4.0.2
scikit-image version: 0.19.3
Pillow version: 9.2.0
Tensorboard version: 2.10.0
gdown version: 4.5.1
TorchVision version: 0.13.1+cpu
tqdm version: 4.64.0
lmdb version: 1.3.0
psutil version: 5.9.1
pandas version: 1.4.3
einops version: 0.4.1
transformers version: NOT INSTALLED or UNKNOWN VERSION.
mlflow version: NOT INSTALLED or UNKNOWN VERSION.
pynrrd version: 0.4.3
About this issue
- State: open
- Created 2 years ago
- Reactions: 1
- Comments: 57 (1 by maintainers)
Those 4-channel NIfTI images in BRATS are complete nonsense, because 4 completely independent images are resampled and dumped into a single image file. This misuse is possible in NIfTI (although it breaks several rules of the standard and you lose the information about what kind of images are in the file), but it is not even possible in DICOM. If you want to store these images in DICOM, you need to create a separate series for each channel.
@diazandr3s
Thank you for answering my question.
I downloaded the BraTS2021 dataset as you mentioned.
Should I run apps/radiology with the BraTS2021 dataset?
After starting the MONAI Label server with the command ‘monailabel start_server --app apps/radiology --studies datasets/Task01_BrainTumour/imagesTr --conf models segmentation’ in Windows PowerShell, I can’t run it in 3D Slicer,
because the radiology app doesn’t provide a segmentation model for brain tumors.
Person that I posted in Project-MONAI/model-zoo#239 is also me.
@SachidanandAlle OK. I will try.
Hi @diazandr3s
Thanks for the clarification.
I went ahead and trained a model with the converted images (i.e., images converted to a single modality). The following are the changes I made in the config files before training the model.
The model was trained successfully for 300 epochs with an average Dice score of around 81. But when I tried inference, only one of the labels was being segmented.
Is there anything I have missed here?
Hi @diazandr3s ,
Thanks for the reply
I think the images converted to DICOM do not retain the 4 modalities. I have tried 2 ways to convert the images.
Is there a way to preserve the modalities when converting to DICOM?
Hi @diazandr3s ,
Many thanks for the suggestions,
I have seen that dataset too, but it does not contain 3D images, and it does not have annotations either. We would have to annotate the hemorrhages ourselves, which might lead to wrong labeling. I was hoping to find a dataset already annotated by experts, like the Task01_BrainTumour dataset or the INSTANCE 2022 dataset.
In case I don’t find any pre-annotated dataset, as a last resort I will attempt to label the segmentations using 3D Slicer. There are a couple of questions in this section.
Hi @diazandr3s I have registered for the challenge and they are asking to sign an agreement and send by email. I have done that too but did not get any reply from them.
Although no brain hemorrhage segmentation model (using CT images) is available in MONAI Label, it shouldn’t be difficult for you to create one from a public dataset like this one: https://instance.grand-challenge.org/
You may find this useful as well: https://github.com/Project-MONAI/MONAILabel/discussions/1055#discussioncomment-3830237
Regarding the brain tumor segmentation model (using MR images), you could use the same Task01_BrainTumour dataset but with a single modality.
Hope this helps,
Hi @diazandr3s
Thanks for the video. I have tried the suggestions and got a prediction. The segmentation looks fine in the 3D view, but nothing comes up in the other slice views.
Hi @PranayBolloju,
I have tried this model myself and I’ve got the same error.
I’ve also changed the LoadImage args and managed to get a prediction. I think the quality of the model can be easily improved. Please watch this video:
https://user-images.githubusercontent.com/11991079/194732741-6d55c171-0eb6-4661-97fc-8fa0004897be.mp4
One thing you could do is first update both the inference and train files (add ensure_channel_first arg) and then re-train the model using the Task01_BrainTumour dataset.
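The change described above might look like this in the bundle’s transform config — a sketch only, as the exact file layout and surrounding keys depend on the bundle version:

```json
{
    "_target_": "LoadImaged",
    "keys": "image",
    "ensure_channel_first": true
}
```

The same argument would go into both the inference and train transform chains so that training and prediction see the data in the same channel-first layout.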
Please follow these steps: https://github.com/Project-MONAI/MONAILabel/discussions/1055#discussioncomment-3830237
BTW, there is another unsolved issue regarding multimodality/multiparametric images in Slicer. When a NIfTI file has more than one modality, Slicer reads only one.
NIfTI can be messy, and that’s why I made Slicer not consider the orientation. Ugly solution 😕
MONAI Label does support multiparametric images, but Slicer can’t read multiple images packed into a single NIfTI file. More on this here: https://github.com/Project-MONAI/MONAILabel/pull/729#discussion_r872369612
Hi @tangy5 ,
Thanks for the response. Can you suggest a way to preprocess the data, i.e., transpose the images?
Hi @PranayBolloju ,
For the BRATS bundle, each data sample contains a 4-channel input volume. The brats_mri_segmentation_v0.2.1 bundle needs a pre-processing step for BRATS data later than 2018. For the data you downloaded from Task01, the four MRI modalities are already in one NIfTI file, but the channel dimension is last, e.g., (240, 240, 160, 4): the 4 modalities sit at index 3. A solution is to preprocess the data to be compatible with the bundle input: transpose the image to (4, 240, 240, 160).
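The transpose described above can be sketched with NumPy on a synthetic array (shapes taken from this thread; in practice the array would be read from the Task01 NIfTI files, e.g. with nibabel):

```python
import numpy as np

# Synthetic stand-in for one Task01_BrainTumour volume: four MRI
# modalities stacked along the LAST axis (channel-last layout).
img = np.zeros((240, 240, 160, 4), dtype=np.float32)

# The bundle expects channel-FIRST input, so move axis 3 to the front.
img_cf = np.moveaxis(img, -1, 0)

print(img_cf.shape)  # (4, 240, 240, 160)
```

`np.moveaxis` returns a view, so this is cheap; the result would then be saved back out (or applied as a transform) before feeding the bundle.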
Thanks for reporting this. We’d better add a note in the bundle README or on the MONAI Label side to remind users about pre-processing BRATS data. Hope this helps to solve your problem.