torchio: RuntimeError when trying to GET a torchio.Subject from a torchio.SubjectsDataset with multiple images of different spatial sizes

I am trying to construct a torchio.SubjectsDataset where each torchio.Subject has multiple images (different sequences). The potential problem is that these images all have DIFFERENT spatial sizes.

To bring them to the same spatial size, I've included a CropOrPad transform. But when I try to GET one subject from the dataset, I get the following error:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-79-1b7789238bcb> in <module>
----> 1 subj = dataset.train_set[0]
      2 #help(tio.Subject)
      3 subj.__dict__

/opt/conda/lib/python3.7/site-packages/torchio/data/dataset.py in __getitem__(self, index)
     83         # Apply transform (this is usually the bottleneck)
     84         if self._transform is not None:
---> 85             subject = self._transform(subject)
     86         return subject
     87 

/opt/conda/lib/python3.7/site-packages/torchio/transforms/transform.py in __call__(self, data)
    124             subject = copy.copy(subject)
    125         with np.errstate(all='raise', under='ignore'):
--> 126             transformed = self.apply_transform(subject)
    127         if self.keep is not None:
    128             for name, image in images_to_keep.items():

/opt/conda/lib/python3.7/site-packages/torchio/transforms/augmentation/composition.py in apply_transform(self, subject)
     45     def apply_transform(self, subject: Subject) -> Subject:
     46         for transform in self.transforms:
---> 47             subject = transform(subject)
     48         return subject
     49 

/opt/conda/lib/python3.7/site-packages/torchio/transforms/transform.py in __call__(self, data)
    124             subject = copy.copy(subject)
    125         with np.errstate(all='raise', under='ignore'):
--> 126             transformed = self.apply_transform(subject)
    127         if self.keep is not None:
    128             for name, image in images_to_keep.items():

/opt/conda/lib/python3.7/site-packages/torchio/transforms/augmentation/composition.py in apply_transform(self, subject)
     45     def apply_transform(self, subject: Subject) -> Subject:
     46         for transform in self.transforms:
---> 47             subject = transform(subject)
     48         return subject
     49 

/opt/conda/lib/python3.7/site-packages/torchio/transforms/transform.py in __call__(self, data)
    124             subject = copy.copy(subject)
    125         with np.errstate(all='raise', under='ignore'):
--> 126             transformed = self.apply_transform(subject)
    127         if self.keep is not None:
    128             for name, image in images_to_keep.items():

/opt/conda/lib/python3.7/site-packages/torchio/transforms/preprocessing/spatial/crop_or_pad.py in apply_transform(self, subject)
    238 
    239     def apply_transform(self, subject: Subject) -> Subject:
--> 240         padding_params, cropping_params = self.compute_crop_or_pad(subject)
    241         padding_kwargs = {'padding_mode': self.padding_mode}
    242         if padding_params is not None:

/opt/conda/lib/python3.7/site-packages/torchio/transforms/preprocessing/spatial/crop_or_pad.py in _compute_center_crop_or_pad(self, subject)
    157             subject: Subject,
    158             ) -> Tuple[Optional[TypeSixBounds], Optional[TypeSixBounds]]:
--> 159         source_shape = subject.spatial_shape
    160         # The parent class turns the 3-element shape tuple (w, h, d)
    161         # into a 6-element bounds tuple (w, w, h, h, d, d)

/opt/conda/lib/python3.7/site-packages/torchio/data/subject.py in spatial_shape(self)
    116             (181, 217, 181)
    117         """
--> 118         self.check_consistent_spatial_shape()
    119         return self.get_first_image().spatial_shape
    120 

/opt/conda/lib/python3.7/site-packages/torchio/data/subject.py in check_consistent_spatial_shape(self)
    294 
    295     def check_consistent_spatial_shape(self) -> None:
--> 296         self.check_consistent_attribute('spatial_shape')
    297 
    298     def check_consistent_orientation(self) -> None:

/opt/conda/lib/python3.7/site-packages/torchio/data/subject.py in check_consistent_attribute(self, attribute, relative_tolerance, absolute_tolerance, message)
    282                         }),
    283                     )
--> 284                     raise RuntimeError(message)
    285         except TypeError:
    286             # fallback for non-numeric values

RuntimeError: More than one value for "spatial_shape" found in subject images:
{'T1w': (416, 512, 36), 'T2w': (448, 512, 36)}

In summary, when it tries to apply the CropOrPad transformation, it fails in the following code block:

/opt/conda/lib/python3.7/site-packages/torchio/data/subject.py in check_consistent_spatial_shape(self)
    294 
    295     def check_consistent_spatial_shape(self) -> None:
--> 296         self.check_consistent_attribute('spatial_shape')
    297 
    298     def check_consistent_orientation(self) -> None:

How do I turn off check_consistent_attribute('spatial_shape')? In the end all subjects WILL BE the same size; the check just fires before CropOrPad gets a chance to run.

NOTE: The dataset is created SUCCESSFULLY; the code fails only when I GET one of its Subjects and the transformations are applied.
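One way to sidestep the subject-level consistency check is to bring every image to the target shape before assembling the Subject, so the check never sees mismatched shapes. The arithmetic behind a centre crop-or-pad is simple; below is a NumPy sketch that illustrates the idea (the helper name `center_crop_or_pad` is mine, and this is not torchio's actual implementation):

```python
import numpy as np

def center_crop_or_pad(volume: np.ndarray, target: tuple) -> np.ndarray:
    """Symmetrically crop and/or zero-pad a volume to `target` shape.

    A sketch of the idea behind torchio's CropOrPad, not its actual code.
    """
    out = volume
    for axis, (size, want) in enumerate(zip(volume.shape, target)):
        diff = want - size
        if diff > 0:  # pad: split the difference between the two sides
            before = diff // 2
            pad = [(0, 0)] * out.ndim
            pad[axis] = (before, diff - before)
            out = np.pad(out, pad, mode='constant')
        elif diff < 0:  # crop: remove voxels evenly from both sides
            start = (-diff) // 2
            slicer = [slice(None)] * out.ndim
            slicer[axis] = slice(start, start + want)
            out = out[tuple(slicer)]
    return out

# The two shapes from the error message above, brought to a common size:
t1 = np.zeros((416, 512, 36))
t2 = np.zeros((448, 512, 36))
print(center_crop_or_pad(t1, (448, 512, 36)).shape)  # (448, 512, 36)
print(center_crop_or_pad(t2, (448, 512, 36)).shape)  # (448, 512, 36)
```

In torchio itself, transforms also accept individual images, so the same idea can be applied per image (e.g. `tio.CropOrPad(target)(image)`) before constructing the Subject.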

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Comments: 22 (11 by maintainers)

Most upvoted comments

@fepegar it might still make sense to add them as static class attributes. That way you could do something like this at import time and everything else would work the same:

import torchio as tio
tio.Subject.relative_attribute_tolerance = 1e-5 # applies to all instances since it is a static attribute
tio.Subject.absolute_attribute_tolerance = 1e-5

That way the function could default to None and fall back to the class attributes when no other value is passed. Since it would affect every instance, you could set it once to change the tolerance globally.
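The pattern being proposed (an argument defaulting to None, with the method falling back to a class-level attribute) can be sketched in plain Python. The `Subject` class below is a minimal stand-in, not torchio's actual class:

```python
class Subject:
    # Class-level defaults; reassigning them affects every instance
    # that does not pass an explicit value.
    relative_attribute_tolerance = 1e-6
    absolute_attribute_tolerance = 1e-6

    def check_consistent_attribute(self, attribute, relative_tolerance=None,
                                   absolute_tolerance=None):
        # Fall back to the class attributes when no value is passed.
        if relative_tolerance is None:
            relative_tolerance = self.relative_attribute_tolerance
        if absolute_tolerance is None:
            absolute_tolerance = self.absolute_attribute_tolerance
        return relative_tolerance, absolute_tolerance

# One assignment at import time loosens the check globally:
Subject.relative_attribute_tolerance = 1e-5
print(Subject().check_consistent_attribute('spacing'))  # (1e-05, 1e-06)
```

An explicit argument still wins over the class-level default, so per-call overrides keep working.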

Awesome! Thanks @fepegar for the help!

I opened the images in Slicer and made one green and the other, magenta:

Screenshot

So they occupy the same physical space. I ran these lines in Python:

import torchio as tio
t1 = tio.ScalarImage('T1w')
t2 = tio.ScalarImage('T2w')
subject = tio.Subject(T1w=t1, T2w=t2)
cp = tio.CropOrPad((512, 512, 408))
subject = tio.Subject(T1w=cp(t1), T2w=cp(t2))
subject.plot(reorient=False)

Figure_1

As you can see, this is not what we would like. That's because the images are in different voxel spaces:

In [17]: t1
Out[17]: ScalarImage(shape: (1, 512, 512, 33); spacing: (0.47, 0.47, 5.00); orientation: LPS+; dtype: torch.ShortTensor; memory: 16.5 MiB)

In [18]: t2
Out[18]: ScalarImage(shape: (1, 512, 512, 408); spacing: (0.50, 0.50, 0.50); orientation: PIR+; dtype: torch.ShortTensor; memory: 204.0 MiB)

You want to resample them to the same space, using one of them as the reference. It's also good practice to use a standard orientation for all images, for which we can use ToCanonical:

transforms = tio.ToCanonical(), tio.Resample('T2w')
transform = tio.Compose(transforms)
fixed = transform(subject)
fixed.plot(reorient=False)

Figure_1

That’s better!

I recommend preprocessing the images before training, though, as resampling takes time.