ffcv: torch.nn.Module classes cannot be used in Pipeline
I tried to add color-jitter augmentation to the ImageNet training by inserting `torchvision.transforms.ColorJitter(.4, .4, .4)` right after `RandomHorizontalFlip` in the pipeline, but hit this error:
```
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Failed in nopython mode pipeline (step: nopython frontend)
Untyped global name 'self': Cannot determine Numba type of <class 'ffcv.transforms.module.ModuleWrapper'>

File "../ffcv/ffcv/transforms/module.py", line 25:
        def apply_module(inp, _):
            res = self.module(inp)
            ^

During: resolving callee type: type(CPUDispatcher(<function ModuleWrapper.generate_code.<locals>.apply_module at 0x7f921d4c98b0>))
During: typing of call at (2)

During: resolving callee type: type(CPUDispatcher(<function ModuleWrapper.generate_code.<locals>.apply_module at 0x7f921d4c98b0>))
During: typing of call at (2)

File "/home/chengxuz/ffcv-imagenet", line 2:
<source missing, REPL/exec in use?>
```
Any idea what’s happening here and how to fix it?
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Comments: 15 (4 by maintainers)
@vturrisi FFCV also has per-image randomness in its augmentations (so I think the only augmentations that don’t support this are the torchvision ones).
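To illustrate the per-image randomness point: an FFCV-style augmentation draws a fresh parameter for every image in a batch, rather than one parameter per call. A plain-NumPy sketch of that behavior (the function name and factor range are mine, not FFCV's API):

```python
import numpy as np

def per_image_brightness(batch, low=0.6, high=1.4, rng=None):
    """Apply an independent random brightness factor to each image.

    batch: uint8 array of shape (N, H, W, C).
    Each image n gets its own factor drawn from [low, high) --
    this is the per-image randomness FFCV transforms provide.
    """
    rng = rng or np.random.default_rng()
    out = np.empty_like(batch)
    for n in range(batch.shape[0]):
        factor = rng.uniform(low, high)  # fresh draw per image
        out[n] = np.clip(
            batch[n].astype(np.float32) * factor, 0, 255
        ).astype(np.uint8)
    return out
```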
Since it looks like all the FFCV-related problems here are solved, I’ll close this issue for now—feel free to re-open if there’s anything we missed!
Memory is only pre-allocated for FFCV transforms, so the torchvision transforms there are probably allocating memory at each iteration. Rewriting the torchvision transform as an FFCV one will fix this!
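For reference, the FFCV-native pattern is a kernel that writes into a pre-allocated destination buffer instead of allocating inside the loop, using only Numba-compilable NumPy operations. A rough sketch of what the compiled body of a custom brightness-jitter transform might look like (in real FFCV this body would be returned from a transform's `generate_code`; the names and signature here are illustrative assumptions):

```python
import numpy as np

def apply_brightness(images, dst, factors):
    """Numba-friendly jitter kernel: writes results into `dst`, a
    buffer that would be pre-allocated once up front, so nothing is
    allocated per iteration (unlike a wrapped torchvision transform).

    images:  (N, H, W, C) uint8 batch
    dst:     same-shape uint8 buffer, allocated ahead of time
    factors: (N,) float32 per-image brightness factors
    """
    for n in range(images.shape[0]):
        f = factors[n]
        for i in range(images.shape[1]):
            for j in range(images.shape[2]):
                for c in range(images.shape[3]):
                    v = images[n, i, j, c] * f
                    if v > 255.0:  # clip to the uint8 range
                        v = 255.0
                    dst[n, i, j, c] = np.uint8(v)
    return dst
```

The explicit loops look unidiomatic for NumPy, but this style is what Numba's nopython mode compiles well, which is why FFCV transforms avoid `self` references and per-call allocations.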