torchgeo: ValueError: empty range for randrange()

When using RandomBatchGeoSampler, the following error occurs roughly 50% of the time. With no code changes, the same script runs perfectly fine the other 50% of the time.

code:

import torch
from torch.utils.data import DataLoader
from torchgeo.datasets import stack_samples
from torchgeo.samplers import RandomBatchGeoSampler

# ds is the GeoDataset built from the rasters attached at the end of this thread
sampler = RandomBatchGeoSampler(ds, size=1024, batch_size=5, length=5 * 5)
dl = DataLoader(ds, batch_sampler=sampler, collate_fn=stack_samples)

for idx, batch in enumerate(dl):
    for idx_s, image in enumerate(batch['image']):
        image = torch.squeeze(image)

error:

  File "/shared/ritwik/miniconda3/envs/dino/lib/python3.7/site-packages/torchgeo/samplers/batch.py", line 115, in __iter__
    bounding_box = get_random_bounding_box(bounds, self.size, self.res)
  File "/shared/ritwik/miniconda3/envs/dino/lib/python3.7/site-packages/torchgeo/samplers/utils.py", line 49, in get_random_bounding_box
    minx = random.randrange(int(width)) * res + bounds.minx
  File "/shared/ritwik/miniconda3/envs/dino/lib/python3.7/random.py", line 190, in randrange
    raise ValueError("empty range for randrange()")
ValueError: empty range for randrange()
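
For context, the failing call boils down to drawing a random pixel offset for the patch within the current tile or intersection. Below is a minimal, simplified stand-in for that line (not torchgeo's exact code, and the bounds, patch size, and resolution are made-up numbers): when a hit's extent leaves no room for the requested patch, the computed range collapses to zero and randrange() raises.

import random

# Simplified stand-in for the failing line in torchgeo/samplers/utils.py
def pick_minx(bounds_minx, bounds_maxx, patch, res):
    width = (bounds_maxx - bounds_minx - patch) // res  # valid pixel offsets for the patch
    return random.randrange(int(width)) * res + bounds_minx

try:
    # A hit exactly one patch wide leaves zero valid offsets, so randrange(0) fails
    pick_minx(bounds_minx=0.0, bounds_maxx=512.0, patch=512.0, res=0.5)
except ValueError as err:
    print(err)  # "empty range for randrange()" (exact wording varies by Python version)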


Most upvoted comments

I think there are a few possible places we could address this issue:

GeoDataset

The index is first created when you instantiate a GeoDataset (usually via RasterDataset or VectorDataset). In the case of adjacent tiles, one solution would be to merge those tiles into a single bounding box. However, this won’t work since:

  1. A GeoDataset index entry also needs to point to a file (although we could change that to a list of files; see the index sketch below)
  2. In general, we can’t anticipate situations such as dataset 1 containing a single tile A and dataset 2 containing a single tile B whose edges/faces touch but whose area of overlap is zero

So this location won’t work.
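
To make point 1 concrete, here is a rough sketch of the one-entry-per-file mapping, written against the rtree package that torchgeo builds its spatial index on. The tile names and coordinates are made up, and torchgeo's real index is 3D (it also stores a time range) and configured differently; this is only an illustration.

from rtree import index

idx = index.Index()  # 2D index, interleaved coords: (minx, miny, maxx, maxy)
idx.insert(0, (338000.0, 5677000.0, 339000.0, 5678000.0), obj="tile_a.tif")
idx.insert(1, (339000.0, 5677000.0, 340000.0, 5678000.0), obj="tile_b.tif")

# Every hit resolves back to exactly one file; merging the two adjacent tiles
# into a single bounding box would break this bbox -> file mapping.
for hit in idx.intersection((338500.0, 5677500.0, 339500.0, 5677600.0), objects=True):
    print(hit.object)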

IntersectionDataset

The first time we recompute the index is when we compute the intersection of two datasets. With the example data attached in this thread, this is where we get those pesky zero-area intersection bounding boxes (see the sketch below). Many users might consider this to be a bug, and it might make sense to remove bounding boxes with zero area. However, there are a couple of problems with this:

  1. Not all datasets will include volumetric/areal data; some may involve point data. We don’t want all intersections with these datasets to be empty
  2. Even if we remove bounding boxes with zero area, we will still have very small intersection bboxes (smaller than the query bbox), but we don’t know the size of the query bbox until sampling time
  3. Not everyone will use IntersectionDataset; they might only need UnionDataset or not need to combine datasets at all (ChesapeakeCVPR)

So this location won’t work either.
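
For illustration, here is how a zero-area intersection arises from two edge-adjacent tiles. The coordinates are made up, and plain arithmetic stands in for what IntersectionDataset effectively computes when it rebuilds its index:

# Two tiles that share an edge: tile A ends exactly where tile B begins
tile_a = dict(minx=338000.0, maxx=339000.0, miny=5677000.0, maxy=5678000.0)
tile_b = dict(minx=339000.0, maxx=340000.0, miny=5677000.0, maxy=5678000.0)

# Intersection of the two extents
minx = max(tile_a["minx"], tile_b["minx"])  # 339000.0
maxx = min(tile_a["maxx"], tile_b["maxx"])  # 339000.0
miny = max(tile_a["miny"], tile_b["miny"])
maxy = min(tile_a["maxy"], tile_b["maxy"])

width, height = maxx - minx, maxy - miny
print(width, height, width * height)  # 0.0 1000.0 0.0 -> a zero-area bbox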

GeoSampler

We compute the index that we sample from in the GeoSampler base class, but we can’t filter out intersections whose height/width is smaller than the patch in this class because:

  1. We don’t know the patch size at that time
  2. Not all geospatial samplers will want to do this (e.g. samplers for point data)

So this location won’t work either.

GeoSampler subclasses

I think this is where we’ll have to handle it (remove intersection bboxes smaller than the query bbox). This is the first time we know the size of the query bbox, and these samplers are already specific enough that they only work for volumetric/areal data. If someone wants to work with point data, they would already need to create a custom sampler. It’s a shame we need to iterate through the R-tree in 4 different places just to get a list of locations to sample from, but I don’t know of a different way to do this.
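
For concreteness, a rough sketch of that filtering inside a sampler subclass might look like the snippet below. It assumes torchgeo's convention of 6-value R-tree bounds (minx, maxx, miny, maxy, mint, maxt) and a patch size already converted to CRS units; the helper name and its arguments are placeholders, not the library's actual implementation.

from torchgeo.datasets import BoundingBox

def hits_large_enough(index, size_x, size_y):
    """Yield R-tree entries whose extent can contain a size_x-by-size_y patch (CRS units)."""
    for hit in index.intersection(index.bounds, objects=True):
        bounds = BoundingBox(*hit.bounds)
        if bounds.maxx - bounds.minx >= size_x and bounds.maxy - bounds.miny >= size_y:
            yield hit

A sampler's __iter__ would then draw random patches only from the surviving hits.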

Another question is what to do when the intersection bbox is non-empty but smaller than the query size (0 < bbox < size). Do we throw away those small regions of overlap, or do we still support sampling from them? The former is probably more efficient, but the latter may be important for some applications. It isn’t clear to me what a good default would be. For example, if I’m using GridGeoSampler, I probably want to make predictions for all regions of data, even if they are smaller than the query bbox.

Here are the files zipped individually. I had to downsample two of the raster files, and the error still occurs. The for-loop now runs one to four iterations before the error triggers. All of the files are required to reproduce the error.

dsm_data.py:

from torchgeo.datasets import RasterDataset

class DsmData(RasterDataset):
    filename_glob = "*.tif"

dop_data.py:

from torchgeo.datasets import RasterDataset

class DopData(RasterDataset):
    filename_glob = "*.tif"

dop10rgbi_32_338_5677_1_nw_0.5.zip dop10rgbi_32_338_5678_1_nw_0.5.zip ndom50_32338_5677_1_nw_2019.zip ndom50_32338_5678_1_nw_2019.zip