gammapy: Investigate failure in FluxPointsEstimator
Gammapy version: 1.0, 1.0.1
While running FluxPointsEstimator with multiple models present and reoptimize=True, a confusing error is sometimes reported:
```
File ~/Gammapy-dev/gammapy/gammapy/datasets/evaluator.py:301, in MapEvaluator._compute_flux_spatial(self)
    299 value = (values.quantity * weights).sum(axis=(1, 2), keepdims=True)
    300 else:
--> 301 value = self._compute_flux_spatial_geom(self.geom)
    303 return value

File ~/Gammapy-dev/gammapy/gammapy/datasets/evaluator.py:309, in MapEvaluator._compute_flux_spatial_geom(self, geom)
    307 if not self.model.spatial_model.is_energy_dependent:
    308     geom = geom.to_image()
--> 309 value = self.model.spatial_model.integrate_geom(geom)
    311 if self.psf and self.model.apply_irf["psf"]:
    312     value = self.apply_psf(value)

File ~/Gammapy-dev/gammapy/gammapy/modeling/models/spatial.py:245, in SpatialModel.integrate_geom(self, geom, oversampling_factor)
    243 # Finally stack result
    244 result._unit = integrated.unit
--> 245 result.stack(integrated)
    246 else:
    247     values = self.evaluate_geom(wcs_geom)

File ~/Gammapy-dev/gammapy/gammapy/maps/wcs/ndmap.py:878, in WcsNDMap.stack(self, other, weights, nan_to_num)
    876     cutout_slices = Ellipsis, slices[0], slices[1]
    877 else:
--> 878     raise ValueError(
    879         "Can only stack equivalent maps or cutout of the same map."
    880     )
    882 data = other.quantity[cutout_slices].to_value(self.unit)
    883 if nan_to_num:

ValueError: Can only stack equivalent maps or cutout of the same map.
```
I tried to produce a minimal notebook where the error shows up (in cell 24): https://github.com/AtreyeeS/test_notebooks/blob/master/fpe_fail.ipynb
From what I understood, the wcs_geom here (https://github.com/gammapy/gammapy/blob/18c2597f23816f465189f6dde4e6e75fd9e50fe6/gammapy/modeling/models/spatial.py#L227) ends up with a width of 0.0 deg x 0.0 deg when the spatial model position is very close to the edge of the allowed lat/lon range, probably leading to a situation with very little overlap with geom.
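To make the suspected failure mode concrete, here is a toy numpy-only analogue (not Gammapy code, and the helper name `stack_cutout` is made up): when the evaluation geometry degenerates to zero width, the resulting cutout has a zero-size shape and cannot be stacked back into the parent map, which would surface as the ValueError above.

```python
import numpy as np

def stack_cutout(parent, cutout, slices):
    """Toy analogue of stacking a cutout back into its parent map.

    Raises ValueError when the cutout is degenerate or does not match the
    sliced region, mimicking the hypothesized failure when the model
    position sits at the edge of the allowed lat/lon range and the
    evaluation geometry shrinks to 0.0 deg x 0.0 deg.
    """
    sub = parent[slices]
    if sub.shape != cutout.shape or 0 in cutout.shape:
        raise ValueError("Can only stack equivalent maps or cutout of the same map.")
    parent[slices] += cutout

parent = np.zeros((10, 10))

# Normal case: a 3x3 cutout stacks back into the parent without issue.
stack_cutout(parent, np.ones((3, 3)), (slice(0, 3), slice(0, 3)))

# Degenerate case: a 0x0 "cutout" (zero-width geometry) cannot be stacked.
try:
    stack_cutout(parent, np.ones((0, 0)), (slice(5, 5), slice(5, 5)))
except ValueError as err:
    print(err)
```

This is only a sketch of the suspected mechanism; the real check lives in `WcsNDMap.stack`, where the overlap slices are computed from the two geometries.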
The error is rather unstable and sometimes vanishes on re-running the script.
It would be nice to handle this error better, but I don't know how.
About this issue
- Original URL
- State: closed
- Created a year ago
- Comments: 15 (15 by maintainers)
I must say I can’t see a good reason why one would re-optimize the source parameters during flux estimation. Yet I can see many bad reasons 😃 .
I think the default should never be this behavior. It even confuses us.
This actually creates an issue for the upper-limit computation. Since the fitting is done per energy bin, the calculated upper limits may not even correspond to the same source (e.g. different sizes/positions in different bins!).
I propose that, along with improving the tutorials and the docstring, we add some information to the user guide about what a flux point estimation is supposed to mean, why spectral parameters must be frozen, and the implications of freezing spatial parameters.
Thanks @AtreyeeS for investigating. I think the answer here is not obvious, because there is often a strong correlation between e.g. the amplitude and the size of a model, so re-optimizing spatial parameters can make sense. However, if the overall fit is unstable, things might break. The advantage of the current solution is that users can explicitly freeze the spatial parameters beforehand; this puts the model in the same state when re-optimizing for the flux point computation. If we always fix the spatial parameters, it is not possible to unfreeze them, except if we introduce an option for this.
But I agree the most common approach is to re-optimize only the norms of nearby sources. So we could show this more often in the docs.
@AtreyeeS A first side comment: it seems the cutout for the Gaussian model becomes too small, at just 2x2 pixels… I am not sure what happens after PSF convolution with the geometry…