anomalib: [Bug]: EfficientAD - RuntimeError: Calculated padded input size per channel: (7 x 7). Kernel size: (8 x 8).

Describe the bug

I’m trying to train an EfficientAD model (tried medium and small) but am running into this issue. I have used a similar config for other models successfully (apart from the model-specific settings). If my interpretation is correct, the output size of one of the Encoder layers doesn’t match the expected input size of the following (last) layer.
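
If that reading is right, the failure can be reproduced in isolation with a plain Conv2d whose kernel is larger than its input. A minimal, self-contained sketch (channel counts are illustrative, not copied from anomalib):

import torch
import torch.nn as nn

# An 8x8 kernel applied to a 7x7 feature map fails, because the kernel
# cannot be larger than the (padded) input.
conv = nn.Conv2d(64, 64, kernel_size=8, stride=1, padding=0)
x = torch.randn(1, 64, 7, 7)  # 7x7 spatial size, as reported in the traceback below
conv(x)  # RuntimeError: Calculated padded input size per channel: (7 x 7). Kernel size: (8 x 8). ...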

Dataset

Folder

Model

Other (please specify in the field below)

Steps to reproduce the behavior

Train EfficientAD using my config in anomalib 0.6.0.
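
For reference, the run was launched notebook-style. A hedged sketch of the usual anomalib 0.x entry-point flow (the config path is a placeholder):

from pytorch_lightning import Trainer

from anomalib.config import get_configurable_parameters
from anomalib.data import get_datamodule
from anomalib.models import get_model
from anomalib.utils.callbacks import get_callbacks

# Load the YAML below, build the Folder datamodule and the EfficientAD model,
# then start training; trainer.fit is where the traceback originates.
config = get_configurable_parameters(model_name="efficientad", config_path="config.yaml")
datamodule = get_datamodule(config)
model = get_model(config)
trainer = Trainer(**config.trainer, callbacks=get_callbacks(config))
trainer.fit(model=model, datamodule=datamodule)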

OS information

  • OS: Windows 10
  • Python version: 3.8.16
  • Anomalib version: 0.6.0
  • PyTorch version:
  • CUDA/cuDNN version: 11.6
  • GPU models and configuration: RTX 3050 Ti
  • Any other relevant information: custom data

Expected behavior

Training should continue normally

Screenshots

No response

Pip/GitHub

pip

What version/branch did you use?

0.6.0

Configuration YAML

dataset:
  abnormal_dir: bad
  center_crop: null
  extensions: null
  format: folder
  image_size: 256  # mandatory
  mask: ground_truth
  name: vespa
  normal_dir: good
  normal_test_dir: null
  normalization: imagenet
  num_workers: 8
  path: ./../data/images/data_structure_anomalib
  split_ratio: 0.2
  task: classification
  test_batch_size: 16
  test_split_mode: from_dir
  test_split_ratio: 0.2
  tiling:
    apply: false
    random_tile_count: 16
    remove_border_count: 0
    stride: null
    tile_size: null
    use_random_tiling: false
  train_batch_size: 16
  transform_config:
    eval: c:\Users\benedict\dev\VespaUseCase\machine_learning\src\d01_config\transform_config_1.yaml
    train: null
  val_split_mode: from_test
  val_split_ratio: 0.5
logging:
  log_graph: false
  logger: []
metrics:
  image:
  - F1Score
  - AUROC
  pixel:
  - F1Score
  - AUROC
  threshold:
    manual_image: null
    manual_pixel: null
    method: adaptive
model:
  lr: 0.0001
  model_size: small
  name: efficientad
  normalization_method: min_max
  padding: false
  teacher_out_channels: 384
  weight_decay: 1.0e-05
optimization:
  export_mode: onnx
project:
  path: ./results
  seed: 42
trainer:
  accelerator: auto
  accumulate_grad_batches: 1
  auto_lr_find: false
  auto_scale_batch_size: false
  benchmark: false
  check_val_every_n_epoch: 1
  default_root_dir: null
  detect_anomaly: false
  deterministic: false
  devices: 1
  enable_checkpointing: true
  enable_model_summary: true
  enable_progress_bar: true
  fast_dev_run: false
  gradient_clip_algorithm: norm
  gradient_clip_val: 0
  limit_predict_batches: 1.0
  limit_test_batches: 1.0
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  log_every_n_steps: 50
  max_epochs: 200
  max_steps: -1
  max_time: null
  min_epochs: null
  min_steps: null
  move_metrics_to_cpu: false
  multiple_trainloader_mode: max_size_cycle
  num_nodes: 1
  num_sanity_val_steps: 0
  overfit_batches: 0.0
  plugins: null
  precision: 32
  profiler: null
  reload_dataloaders_every_n_epochs: 0
  replace_sampler_ddp: true
  strategy: null
  sync_batchnorm: false
  track_grad_norm: -1
  val_check_interval: 1.0
visualization:
  image_save_path: null
  log_images: false
  mode: full
  save_images: true
  show_images: false

Albumentations Config YAML

__version__: 1.3.1
transform:
  __class_fullname__: Compose
  additional_targets: {}
  bbox_params: null
  is_check_shapes: true
  keypoint_params: null
  p: 1.0
  transforms:
  - __class_fullname__: Resize
    always_apply: true
    height: 256
    interpolation: 1
    p: 1
    width: 256
  - __class_fullname__: RandomBrightnessContrast
    always_apply: false
    brightness_by_max: true
    brightness_limit:
    - -0.3
    - 0.3
    contrast_limit:
    - -0.3
    - 0.3
    p: 0.5
  - __class_fullname__: RandomSunFlare
    always_apply: false
    angle_lower: 0
    angle_upper: 1
    flare_roi:
    - 0.3
    - 0.4
    - 0.7
    - 0.6
    num_flare_circles_lower: 1
    num_flare_circles_upper: 2
    p: 0.3
    src_color:
    - 255
    - 255
    - 255
    src_radius: 100
  - __class_fullname__: Rotate
    always_apply: false
    border_mode: 4
    crop_border: false
    interpolation: 1
    limit:
    - -10
    - 10
    mask_value: null
    p: 0.5
    rotate_method: largest_box
    value: null
  - __class_fullname__: Affine
    always_apply: false
    cval: 0
    cval_mask: 0
    fit_output: false
    interpolation: 1
    keep_ratio: false
    mask_interpolation: 0
    mode: 0
    p: 0.3
    rotate:
    - 0.0
    - 0.0
    rotate_method: largest_box
    scale:
      x:
      - 1.0
      - 1.0
      y:
      - 1.0
      - 1.0
    shear:
      x: &id001
      - -5
      - 5
      y: *id001
    translate_percent: null
    translate_px:
      x:
      - 0
      - 0
      y:
      - 0
      - 0
  - __class_fullname__: Normalize
    always_apply: true
    max_pixel_value: 255.0
    mean:
    - 0.485
    - 0.456
    - 0.406
    p: 1.0
    std:
    - 0.229
    - 0.224
    - 0.225
  - __class_fullname__: ToTensorV2
    always_apply: true
    p: 1.0
    transpose_mask: false
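
A quick way to sanity-check what this serialized eval pipeline actually produces is to load it back with Albumentations and apply it to a dummy image. A hedged sketch (the path and dummy resolution are placeholders; data_format="yaml" requires PyYAML):

import numpy as np
import albumentations as A

transform = A.load("transform_config_1.yaml", data_format="yaml")

dummy = np.zeros((1024, 768, 3), dtype=np.uint8)   # arbitrary input resolution
out = transform(image=dummy)["image"]              # ToTensorV2 returns a CHW torch.Tensor
print(out.shape)  # expected (3, 256, 256); anything smaller than 256 breaks EfficientAD's encoder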

Logs

RuntimeError                              Traceback (most recent call last)
Cell In[16], line 1
----> 1 train_results = trainer.fit(model=model, datamodule=datamodule)

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\trainer\trainer.py:608, in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
    606 model = self._maybe_unwrap_optimized(model)
    607 self.strategy._lightning_module = model
--> 608 call._call_and_handle_interrupt(
    609     self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
    610 )

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\trainer\call.py:38, in _call_and_handle_interrupt(trainer, trainer_fn, *args, **kwargs)
     36         return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
     37     else:
---> 38         return trainer_fn(*args, **kwargs)
     40 except _TunerExitException:
     41     trainer._call_teardown_hook()

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\trainer\trainer.py:650, in Trainer._fit_impl(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
    643 ckpt_path = ckpt_path or self.resume_from_checkpoint
    644 self._ckpt_path = self._checkpoint_connector._set_ckpt_path(
    645     self.state.fn,
    646     ckpt_path,  # type: ignore[arg-type]
    647     model_provided=True,
    648     model_connected=self.lightning_module is not None,
    649 )
--> 650 self._run(model, ckpt_path=self.ckpt_path)
    652 assert self.state.stopped
    653 self.training = False

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\trainer\trainer.py:1112, in Trainer._run(self, model, ckpt_path)
   1108 self._checkpoint_connector.restore_training_state()
   1110 self._checkpoint_connector.resume_end()
-> 1112 results = self._run_stage()
   1114 log.detail(f"{self.__class__.__name__}: trainer tearing down")
   1115 self._teardown()

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\trainer\trainer.py:1191, in Trainer._run_stage(self)
   1189 if self.predicting:
   1190     return self._run_predict()
-> 1191 self._run_train()

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\trainer\trainer.py:1214, in Trainer._run_train(self)
   1211 self.fit_loop.trainer = self
   1213 with torch.autograd.set_detect_anomaly(self._detect_anomaly):
-> 1214     self.fit_loop.run()

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\loops\loop.py:199, in Loop.run(self, *args, **kwargs)
    197 try:
    198     self.on_advance_start(*args, **kwargs)
--> 199     self.advance(*args, **kwargs)
    200     self.on_advance_end()
    201     self._restarting = False

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\loops\fit_loop.py:267, in FitLoop.advance(self)
    265 self._data_fetcher.setup(dataloader, batch_to_device=batch_to_device)
    266 with self.trainer.profiler.profile("run_training_epoch"):
--> 267     self._outputs = self.epoch_loop.run(self._data_fetcher)

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\loops\loop.py:199, in Loop.run(self, *args, **kwargs)
    197 try:
    198     self.on_advance_start(*args, **kwargs)
--> 199     self.advance(*args, **kwargs)
    200     self.on_advance_end()
    201     self._restarting = False

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\loops\epoch\training_epoch_loop.py:213, in TrainingEpochLoop.advance(self, data_fetcher)
    210     self.batch_progress.increment_started()
    212     with self.trainer.profiler.profile("run_training_batch"):
--> 213         batch_output = self.batch_loop.run(kwargs)
    215 self.batch_progress.increment_processed()
    217 # update non-plateau LR schedulers
    218 # update epoch-interval ones only when we are at the end of training epoch

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\loops\loop.py:199, in Loop.run(self, *args, **kwargs)
    197 try:
    198     self.on_advance_start(*args, **kwargs)
--> 199     self.advance(*args, **kwargs)
    200     self.on_advance_end()
    201     self._restarting = False

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\loops\batch\training_batch_loop.py:88, in TrainingBatchLoop.advance(self, kwargs)
     84 if self.trainer.lightning_module.automatic_optimization:
     85     optimizers = _get_active_optimizers(
     86         self.trainer.optimizers, self.trainer.optimizer_frequencies, kwargs.get("batch_idx", 0)
     87     )
---> 88     outputs = self.optimizer_loop.run(optimizers, kwargs)
     89 else:
     90     outputs = self.manual_loop.run(kwargs)

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\loops\loop.py:199, in Loop.run(self, *args, **kwargs)
    197 try:
    198     self.on_advance_start(*args, **kwargs)
--> 199     self.advance(*args, **kwargs)
    200     self.on_advance_end()
    201     self._restarting = False

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\loops\optimization\optimizer_loop.py:202, in OptimizerLoop.advance(self, optimizers, kwargs)
    199 def advance(self, optimizers: List[Tuple[int, Optimizer]], kwargs: OrderedDict) -> None:
    200     kwargs = self._build_kwargs(kwargs, self.optimizer_idx, self._hiddens)
--> 202     result = self._run_optimization(kwargs, self._optimizers[self.optim_progress.optimizer_position])
    203     if result.loss is not None:
    204         # automatic optimization assumes a loss needs to be returned for extras to be considered as the batch
    205         # would be skipped otherwise
    206         self._outputs[self.optimizer_idx] = result.asdict()

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\loops\optimization\optimizer_loop.py:249, in OptimizerLoop._run_optimization(self, kwargs, optimizer)
    241         closure()
    243 # ------------------------------
    244 # BACKWARD PASS
    245 # ------------------------------
    246 # gradient update with accumulated gradients
    247 else:
    248     # the `batch_idx` is optional with inter-batch parallelism
--> 249     self._optimizer_step(optimizer, opt_idx, kwargs.get("batch_idx", 0), closure)
    251 result = closure.consume_result()
    253 if result.loss is not None:
    254     # if no result, user decided to skip optimization
    255     # otherwise update running loss + reset accumulated loss
    256     # TODO: find proper way to handle updating running loss

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\loops\optimization\optimizer_loop.py:370, in OptimizerLoop._optimizer_step(self, optimizer, opt_idx, batch_idx, train_step_and_backward_closure)
    362     rank_zero_deprecation(
    363         "The NVIDIA/apex AMP implementation has been deprecated upstream. Consequently, its integration inside"
    364         " PyTorch Lightning has been deprecated in v1.9.0 and will be removed in v2.0.0."
   (...)
    367         " return True."
    368     )
    369     kwargs["using_native_amp"] = isinstance(self.trainer.precision_plugin, MixedPrecisionPlugin)
--> 370 self.trainer._call_lightning_module_hook(
    371     "optimizer_step",
    372     self.trainer.current_epoch,
    373     batch_idx,
    374     optimizer,
    375     opt_idx,
    376     train_step_and_backward_closure,
    377     on_tpu=isinstance(self.trainer.accelerator, TPUAccelerator),
    378     **kwargs,  # type: ignore[arg-type]
    379     using_lbfgs=is_lbfgs,
    380 )
    382 if not should_accumulate:
    383     self.optim_progress.optimizer.step.increment_completed()

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\trainer\trainer.py:1356, in Trainer._call_lightning_module_hook(self, hook_name, pl_module, *args, **kwargs)
   1353 pl_module._current_fx_name = hook_name
   1355 with self.profiler.profile(f"[LightningModule]{pl_module.__class__.__name__}.{hook_name}"):
-> 1356     output = fn(*args, **kwargs)
   1358 # restore current_fx when nested context
   1359 pl_module._current_fx_name = prev_fx_name

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\core\module.py:1754, in LightningModule.optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, optimizer_closure, on_tpu, using_lbfgs)
   1675 def optimizer_step(
   1676     self,
   1677     epoch: int,
   (...)
   1683     using_lbfgs: bool = False,
   1684 ) -> None:
   1685     r"""
   1686     Override this method to adjust the default way the :class:`~pytorch_lightning.trainer.trainer.Trainer` calls
   1687     each optimizer.
   (...)
   1752 
   1753     """
-> 1754     optimizer.step(closure=optimizer_closure)

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\core\optimizer.py:169, in LightningOptimizer.step(self, closure, **kwargs)
    166     raise MisconfigurationException("When `optimizer.step(closure)` is called, the closure should be callable")
    168 assert self._strategy is not None
--> 169 step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
    171 self._on_after_step()
    173 return step_output

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\strategies\strategy.py:234, in Strategy.optimizer_step(self, optimizer, opt_idx, closure, model, **kwargs)
    232 # TODO(fabric): remove assertion once strategy's optimizer_step typing is fixed
    233 assert isinstance(model, pl.LightningModule)
--> 234 return self.precision_plugin.optimizer_step(
    235     optimizer, model=model, optimizer_idx=opt_idx, closure=closure, **kwargs
    236 )

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\plugins\precision\precision_plugin.py:119, in PrecisionPlugin.optimizer_step(self, optimizer, model, optimizer_idx, closure, **kwargs)
    117 """Hook to run the optimizer step."""
    118 closure = partial(self._wrap_closure, model, optimizer, optimizer_idx, closure)
--> 119 return optimizer.step(closure=closure, **kwargs)

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\torch\optim\lr_scheduler.py:69, in LRScheduler.__init__.<locals>.with_counter.<locals>.wrapper(*args, **kwargs)
     67 instance._step_count += 1
     68 wrapped = func.__get__(instance, cls)
---> 69 return wrapped(*args, **kwargs)

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\torch\optim\optimizer.py:280, in Optimizer.profile_hook_step.<locals>.wrapper(*args, **kwargs)
    276         else:
    277             raise RuntimeError(f"{func} must return None or a tuple of (new_args, new_kwargs),"
    278                                f"but got {result}.")
--> 280 out = func(*args, **kwargs)
    281 self._optimizer_step_code()
    283 # call optimizer step post hooks

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\torch\optim\optimizer.py:33, in _use_grad_for_differentiable.<locals>._use_grad(self, *args, **kwargs)
     31 try:
     32     torch.set_grad_enabled(self.defaults['differentiable'])
---> 33     ret = func(self, *args, **kwargs)
     34 finally:
     35     torch.set_grad_enabled(prev_grad)

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\torch\optim\adamw.py:148, in AdamW.step(self, closure)
    146 if closure is not None:
    147     with torch.enable_grad():
--> 148         loss = closure()
    150 for group in self.param_groups:
    151     params_with_grad = []

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\plugins\precision\precision_plugin.py:105, in PrecisionPlugin._wrap_closure(self, model, optimizer, optimizer_idx, closure)
     92 def _wrap_closure(
     93     self,
     94     model: "pl.LightningModule",
   (...)
     97     closure: Callable[[], Any],
     98 ) -> Any:
     99     """This double-closure allows makes sure the ``closure`` is executed before the
    100     ``on_before_optimizer_step`` hook is called.
    101 
    102     The closure (generally) runs ``backward`` so this allows inspecting gradients in this hook. This structure is
    103     consistent with the ``PrecisionPlugin`` subclasses that cannot pass ``optimizer.step(closure)`` directly.
    104     """
--> 105     closure_result = closure()
    106     self._after_closure(model, optimizer, optimizer_idx)
    107     return closure_result

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\loops\optimization\optimizer_loop.py:149, in Closure.__call__(self, *args, **kwargs)
    148 def __call__(self, *args: Any, **kwargs: Any) -> Optional[Tensor]:
--> 149     self._result = self.closure(*args, **kwargs)
    150     return self._result.loss

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\loops\optimization\optimizer_loop.py:135, in Closure.closure(self, *args, **kwargs)
    134 def closure(self, *args: Any, **kwargs: Any) -> ClosureResult:
--> 135     step_output = self._step_fn()
    137     if step_output.closure_loss is None:
    138         self.warning_cache.warn("`training_step` returned `None`. If this was on purpose, ignore this warning...")

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\loops\optimization\optimizer_loop.py:419, in OptimizerLoop._training_step(self, kwargs)
    410 """Performs the actual train step with the tied hooks.
    411 
    412 Args:
   (...)
    416     A ``ClosureResult`` containing the training step output.
    417 """
    418 # manually capture logged metrics
--> 419 training_step_output = self.trainer._call_strategy_hook("training_step", *kwargs.values())
    420 self.trainer.strategy.post_training_step()
    422 model_output = self.trainer._call_lightning_module_hook("training_step_end", training_step_output)

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\trainer\trainer.py:1494, in Trainer._call_strategy_hook(self, hook_name, *args, **kwargs)
   1491     return
   1493 with self.profiler.profile(f"[Strategy]{self.strategy.__class__.__name__}.{hook_name}"):
-> 1494     output = fn(*args, **kwargs)
   1496 # restore current_fx when nested context
   1497 pl_module._current_fx_name = prev_fx_name

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\strategies\strategy.py:378, in Strategy.training_step(self, *args, **kwargs)
    376 with self.precision_plugin.train_step_context():
    377     assert isinstance(self.model, TrainingStep)
--> 378     return self.model.training_step(*args, **kwargs)

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\anomalib\models\efficientad\lightning_model.py:217, in EfficientAD.training_step(***failed resolving arguments***)
    214     self.imagenet_iterator = iter(self.imagenet_loader)
    215     batch_imagenet = next(self.imagenet_iterator)[0]["image"].to(self.device)
--> 217 loss_st, loss_ae, loss_stae = self.model(batch=batch["image"], batch_imagenet=batch_imagenet)
    219 loss = loss_st + loss_ae + loss_stae
    220 self.log("train_st", loss_st.item(), on_epoch=True, prog_bar=True, logger=True)

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\torch\nn\modules\module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\anomalib\models\efficientad\torch_model.py:286, in EfficientADModel.forward(self, batch, batch_imagenet)
    284 # Autoencoder and Student AE Loss
    285 aug_img = self.choose_random_aug_image(batch)
--> 286 ae_output_aug = self.ae(aug_img)
    288 with torch.no_grad():
    289     teacher_output_aug = self.teacher(aug_img)

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\torch\nn\modules\module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\anomalib\models\efficientad\torch_model.py:182, in AutoEncoder.forward(self, x)
    180 def forward(self, x):
    181     x = imagenet_norm_batch(x)
--> 182     x = self.encoder(x)
    183     x = self.decoder(x)
    184     return x

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\torch\nn\modules\module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\anomalib\models\efficientad\torch_model.py:112, in Encoder.forward(self, x)
    110 x = F.relu(self.enconv4(x))
    111 x = F.relu(self.enconv5(x))
--> 112 x = self.enconv6(x)
    113 return x

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\torch\nn\modules\module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\torch\nn\modules\conv.py:463, in Conv2d.forward(self, input)
    462 def forward(self, input: Tensor) -> Tensor:
--> 463     return self._conv_forward(input, self.weight, self.bias)

File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\torch\nn\modules\conv.py:459, in Conv2d._conv_forward(self, input, weight, bias)
    455 if self.padding_mode != 'zeros':
    456     return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
    457                     weight, bias, self.stride,
    458                     _pair(0), self.dilation, self.groups)
--> 459 return F.conv2d(input, weight, bias, self.stride,
    460                 self.padding, self.dilation, self.groups)

RuntimeError: Calculated padded input size per channel: (7 x 7). Kernel size: (8 x 8). Kernel size can't be greater than actual input size

Code of Conduct

  • I agree to follow this project’s Code of Conduct

Edit: added the mask dir to the config, which was missing.

Edit 2: added the content of transform_config_1.yaml below the Configuration YAML section.

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Comments: 33 (13 by maintainers)

Most upvoted comments

Epoch 0: 1%| | 22/2160 [01:06<1:47:10, 3.01s/it, loss=601, v_num=30, train_st_step=57.90, train_ae_step=0.881, train_stae_step=4.980, train_loss_step=63.80]

Changing

train_batch_size = 1
test_batch_size = 1

seems to have fixed the issue.

Thank you very much for helping me out here!

Btw, greetings from Berlin 😃

Can the image size be set to something other than 256?

Hmm, strange. Can you do some debugging, like printing len(maps_flat) in that particular function, and check its value?

In the Configuration YAML you set image_size to 256, while in transform_config_1 you set CenterCrop to 224. Both should have the same value, and both should be at least 256. Please also set normalization: null in the Configuration YAML and remove the Normalize transform from transform_config_1; normalization is already done inside EfficientAD by default.
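
(For intuition on why 224 fails while 256 works: the autoencoder's encoder repeatedly halves the spatial size before the final 8x8 convolution, so a 224 crop reaches that layer as 7x7. A back-of-the-envelope sketch, where the per-layer parameters are assumptions chosen to match the reported shapes rather than the literal anomalib definitions:)

def conv_out(size: int, kernel: int, stride: int, padding: int) -> int:
    # Standard Conv2d output-size formula (dilation = 1).
    return (size + 2 * padding - kernel) // stride + 1

for input_size in (256, 224):
    size = input_size
    for _ in range(5):  # assumed: five stride-2 downsampling convs (k=4, s=2, p=1)
        size = conv_out(size, kernel=4, stride=2, padding=1)
    print(input_size, "->", size)  # 256 -> 8 (8x8 kernel fits), 224 -> 7 (kernel 8 > 7 -> RuntimeError)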

I think you’re right, I didn’t remove the center cropping correctly. I’ll try it out again with the correct size of 256x256.

I’ll report back later on whether it works now, but thanks already!