anomalib: [Bug]: EfficientAD training on its own dataset reports an error

Describe the bug

I encountered an error while training my own dataset using EfficientAD. I only made modifications to the dataset section of the configuration file provided by the official EfficientAD repository. Based on the same modifications, I was able to train models like CFA and PatchCore successfully, but I encountered an error specifically when training EfficientAD.

The yaml file for my efficientad is as follows (I only changed the dataset section, the rest is consistent)

dataset:
  name: mydata
  format: folder
  path: ./MyDataset/HC_ZT_ROI
  normal_dir: normal #  name of the folder containing normal images.
  abnormal_dir: abnormal #  name of the folder containing abnormal images.
  normal_test_dir: null #  name of the folder containing normal test images.
  task: classification
  mask: null
  extensions: null
  train_batch_size: 32
  test_batch_size: 32
  num_workers: 0
  image_size: 500 # dimensions to which images are resized (mandatory)
  center_crop: null # dimensions to which images are center-cropped after resizing (optional)
  normalization: null # data distribution to which the images will be normalized: [none, imagenet]
  transform_config:
    train: null
    eval: null
  test_split_mode: from_dir # options: [from_dir, synthetic]
  test_split_ratio: 0.2 # fraction of train images held out for testing (usage depends on test_split_mode)
  val_split_mode: same_as_test # options: [same_as_test, from_test, synthetic]
  val_split_ratio: 0.5 # fraction of train/test images held out for validation (usage depends on val_split_mode)
  tiling:
    apply: false
    tile_size: null
    stride: null
    remove_border_count: 0
    use_random_tiling: False
    random_tile_count: 16


I have tentatively determined that the cause of my error is the parameter "normalization".

When I set normalization: imagenet or normalization: none, the exact error message is:

2023-07-13 13:56:58,938 - anomalib.models.efficientad.lightning_model - INFO - Load pretrained teacher model from pre_trained\efficientad_pretrained_weights\pretrained_teacher_small.pth
Traceback (most recent call last):
  File "E:/Code/Anomalib/0.6.0/anomalib-0-6-0/tools/train.py", line 82, in <module>
    train(args)
  File "E:/Code/Anomalib/0.6.0/anomalib-0-6-0/tools/train.py", line 59, in train
    model = get_model(config)
  File "E:\Code\Anomalib\0.6.0\anomalib-0-6-0\src\anomalib\models\__init__.py", line 106, in get_model
    model = getattr(module, f"{_snake_to_pascal_case(config.model.name)}Lightning")(config)
  File "E:\Code\Anomalib\0.6.0\anomalib-0-6-0\src\anomalib\models\efficientad\lightning_model.py", line 289, in __init__
    super().__init__(
  File "E:\Code\Anomalib\0.6.0\anomalib-0-6-0\src\anomalib\models\efficientad\lightning_model.py", line 95, in __init__
    self.prepare_imagenette_data()
  File "E:\Code\Anomalib\0.6.0\anomalib-0-6-0\src\anomalib\models\efficientad\lightning_model.py", line 121, in prepare_imagenette_data
    imagenet_dataset = ImageFolder(imagenet_dir, transform=TransformsWrapper(t=self.data_transforms_imagenet))
  File "C:\ProgramData\anaconda3\envs\HC_Anomalib\lib\site-packages\torchvision\datasets\folder.py", line 310, in __init__
    super().__init__(
  File "C:\ProgramData\anaconda3\envs\HC_Anomalib\lib\site-packages\torchvision\datasets\folder.py", line 145, in __init__
    classes, class_to_idx = self.find_classes(self.root)
  File "C:\ProgramData\anaconda3\envs\HC_Anomalib\lib\site-packages\torchvision\datasets\folder.py", line 219, in find_classes
    return find_classes(directory)
  File "C:\ProgramData\anaconda3\envs\HC_Anomalib\lib\site-packages\torchvision\datasets\folder.py", line 43, in find_classes
    raise FileNotFoundError(f"Couldn't find any class folder in {directory}.")
FileNotFoundError: Couldn't find any class folder in datasets\imagenette.
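The traceback shows that EfficientAD points torchvision's ImageFolder at datasets\imagenette, and ImageFolder requires at least one class subdirectory under its root. A minimal re-creation of torchvision's find_classes behavior (a sketch, not the exact torchvision source) reproduces the same error on an empty directory:

```python
import tempfile
from pathlib import Path

def find_classes(directory):
    # Sketch of torchvision.datasets.folder.find_classes: every immediate
    # subdirectory of `directory` is treated as one class.
    classes = sorted(d.name for d in Path(directory).iterdir() if d.is_dir())
    if not classes:
        raise FileNotFoundError(f"Couldn't find any class folder in {directory}.")
    return classes

empty = tempfile.mkdtemp()  # no subfolders, like a missing/empty imagenette dir
try:
    find_classes(empty)
except FileNotFoundError as e:
    print(e)
```

So the error means the expected Imagenette class folders (e.g. datasets/imagenette/n01440764/...) are not present under that path.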

Then I referred to the related answer in #1148, and I see that @alexriedel1 explains that it should be set to normalization: null.

When I set normalization: null, the exact error message is:

C:\ProgramData\anaconda3\envs\HC_Anomalib\python.exe E:/Code/Anomalib/0.6.0/anomalib-0-6-0/tools/train.py
E:\Code\Anomalib\0.6.0\anomalib-0-6-0\src\anomalib\config\config.py:275: UserWarning: config.project.unique_dir is set to False. This does not ensure that your results will be written in an empty directory and you may overwrite files.
  warn(
Global seed set to 42
2023-07-13 14:08:23,153 - anomalib.data - INFO - Loading the datamodule
Traceback (most recent call last):
  File "E:/Code/Anomalib/0.6.0/anomalib-0-6-0/tools/train.py", line 82, in <module>
    train(args)
  File "E:/Code/Anomalib/0.6.0/anomalib-0-6-0/tools/train.py", line 57, in train
    datamodule = get_datamodule(config)
  File "E:\Code\Anomalib\0.6.0\anomalib-0-6-0\src\anomalib\data\__init__.py", line 116, in get_datamodule
    datamodule = Folder(
  File "E:\Code\Anomalib\0.6.0\anomalib-0-6-0\src\anomalib\data\folder.py", line 270, in __init__
    normalization=InputNormalizationMethod(normalization),
  File "C:\ProgramData\anaconda3\envs\HC_Anomalib\lib\enum.py", line 339, in __call__
    return cls.__new__(cls, value)
  File "C:\ProgramData\anaconda3\envs\HC_Anomalib\lib\enum.py", line 663, in __new__
    raise ve_exc
ValueError: None is not a valid InputNormalizationMethod
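This second failure is a plain enum-conversion error: the YAML value null arrives in Python as None, and the normalization enum only accepts its string values. A minimal stand-in for anomalib's InputNormalizationMethod (an assumption: in 0.6.0 its valid values are the strings "none" and "imagenet") shows the same behavior:

```python
from enum import Enum

class InputNormalizationMethod(str, Enum):
    # Stand-in for anomalib.data.utils.InputNormalizationMethod (assumed values).
    NONE = "none"
    IMAGENET = "imagenet"

print(InputNormalizationMethod("none"))  # accepted: the string "none"
try:
    InputNormalizationMethod(None)       # YAML `null` becomes Python None
except ValueError as e:
    print(e)                             # same ValueError as in the traceback
```

In other words, the config must spell out the string none (or imagenet); the literal null cannot be converted to the enum.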

Dataset

Folder

Model

Other (please specify in the field below)

Steps to reproduce the behavior

Train EfficientAD on a custom Folder dataset using the configuration above.

OS information

Anomalib: 0.6.0, torch: 1.12.1+cu113, OS: Windows

Expected behavior

Hello @alexriedel1 @nelson1425, as the people most familiar with EfficientAD, could you answer the following questions?

1: What is the cause of this error in EfficientAD, and how should I fix it?

2: Is the performance of EfficientAD really as good as reported in the paper? In fact, I am most concerned about its speed. The paper mentions that EfficientAD-M reaches 269 FPS and EfficientAD-S reaches 614 FPS. Is this really achievable in practice? If not, what FPS does your implementation reach for different image sizes? (I realize this may be affected and limited by specific hardware.)

3: What are the advantages of EfficientAD over the other models in Anomalib, and which situations is it best suited for?

Looking forward to your answer, thanks!

Screenshots

No response

Pip/GitHub

pip

What version/branch did you use?

No response

Configuration YAML

-

Logs

-

Code of Conduct

  • I agree to follow this project’s Code of Conduct

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Comments: 21 (11 by maintainers)

Most upvoted comments

Looking forward to hearing from them, and thank you very much for your patience, again! @blaz-r

Thank you very much @blaz-r

Hello @alexriedel1, @blaz-r, thank you very much for your help and patience in answering. I have successfully trained it after setting the batch size to 1. But I have a question: if the maximum value is 2**24, then when I train with batch_size set to 32 and image_size set to 500, logically 500*500*32 < 2**24, so it should fit. Why the error?

The quantile calculation is not based on the input image but on feature maps from the teacher model. The tensor shape for 500x500 images at batch size 32 is [32, 384, 117, 117] -> 168,210,432 elements > 2**24.
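The arithmetic behind this answer can be checked directly: the raw input pixels are indeed under the limit, but the teacher feature-map tensor is an order of magnitude over it.

```python
# Element counts vs. torch.quantile's 2**24 input limit.
input_pixels = 32 * 500 * 500         # what the question assumed quantile sees
feature_elems = 32 * 384 * 117 * 117  # the teacher feature-map tensor [32, 384, 117, 117]
limit = 2 ** 24

print(input_pixels)   # 8,000,000  -> below the limit
print(feature_elems)  # 168,210,432 -> far above the limit
print(limit)          # 16,777,216
```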


This indeed seems like a bug that was already addressed in one PR, but it appears it was only fixed in the lightning model. I believe quantile should also be implemented differently when calculating d_hard. Maybe @nelson1425 can confirm. The problem is that torch.quantile only works with inputs of up to 2**24 elements.

I think this will need to be fixed the same way as was done in the lightning model. If you are able to fix this, a PR would be very welcome.
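A hedged sketch of the kind of fix being discussed: randomly subsample the flattened tensor before calling torch.quantile so its input stays under 2**24 elements. `safe_quantile` is a hypothetical helper for illustration, not anomalib API, and the subsampling makes the result an estimate rather than an exact quantile on large inputs.

```python
import torch

def safe_quantile(t: torch.Tensor, q: float, max_elems: int = 2**24 - 1) -> torch.Tensor:
    """Quantile of `t` that respects torch.quantile's 2**24 input-size limit
    by randomly subsampling oversized tensors (hypothetical helper)."""
    flat = t.flatten()
    if flat.numel() > max_elems:
        # Random subsample keeps the quantile estimate close while
        # staying under the input-size limit.
        idx = torch.randperm(flat.numel(), device=flat.device)[:max_elems]
        flat = flat[idx]
    return torch.quantile(flat, q)

small = torch.arange(101, dtype=torch.float32)
print(safe_quantile(small, 0.9).item())  # 90.0 (no subsampling needed here)
```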