skpro: [BUG] cyclic boosting - sporadic test failures due to convergence failure
The recently added cyclic boosting estimator sporadically fails tests due to failed convergence of the loss, e.g.:

```
FAILED skpro/regression/tests/test_cyclic_boosting.py::test_cyclic_boosting_with_manual_paramaters - cyclic_boosting.utils.ConvergenceError: Your cyclic boosting training seems to be diverging. In the 9. iteration the current loss: 52.52700124396056, is greater than the trivial loss with just mean predictions: 20.816666666666666.
```
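For reference, a minimal sketch of how the sporadic failure can be reproduced locally. The estimator name and module path are assumptions inferred from the failing test path (only `cyclic_boosting.utils.ConvergenceError` is confirmed by the traceback above), and whether the loss actually diverges depends on the sampled data:

```python
import numpy as np
import pandas as pd
from cyclic_boosting.utils import ConvergenceError  # error type from the traceback
from skpro.regression.cyclic_boosting import CyclicBoosting  # assumed location

# Small random regression problem; only some seeds trigger the divergence,
# which is what makes the test failures sporadic.
rng = np.random.default_rng(0)
X = pd.DataFrame({"x1": rng.normal(size=60), "x2": rng.normal(size=60)})
y = pd.DataFrame({"y": rng.exponential(size=60)})

try:
    CyclicBoosting().fit(X, y)
except ConvergenceError as err:
    print(f"diverged: {err}")
```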
I’m considering a situation where you already have them; they do not necessarily need to be fitted independently.
OK, Cyclic Boosting 1.4.0 is out. @setoguchi-naoki will go ahead and make the relevant changes here (he already knows what needs to be done). In short:
Slight adaptations will be needed, namely getting rid of the loop over QPD calls. But we can do that once I’m done in Cyclic Boosting.
@fkiraly I have found an upstream solution: allowing QPD to take full arrays rather than individual quantile values. I’m confident that will fix this here. Give me a few days to work it out and build a new Cyclic Boosting release.
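To illustrate the kind of change discussed here, below is a minimal sketch of the vectorization pattern: replacing one QPD evaluation per sample with a single call on full arrays of quantile values. The stand-in distribution (a normal matched to the fitted median and 90th-percentile values) and all names are hypothetical; the actual cyclic_boosting QPDs (the J-QPD family) work differently:

```python
import numpy as np
from scipy.stats import norm

Z90 = norm.ppf(0.9)  # standard-normal 90th-percentile z-score

def qpd_ppf_scalar(q, qv_median, qv_high):
    """Percent-point function of one stand-in QPD, scalar inputs."""
    sigma = (qv_high - qv_median) / Z90
    return qv_median + sigma * norm.ppf(q)

def qpd_ppf_array(q, qv_median, qv_high):
    """Same computation, accepting full per-sample arrays at once."""
    qv_median = np.asarray(qv_median)
    sigma = (np.asarray(qv_high) - qv_median) / Z90
    return qv_median + sigma * norm.ppf(q)

# Per-sample fitted quantile values, as produced by the boosting step.
qv_median = np.array([1.0, 2.0, 3.0])
qv_high = np.array([2.0, 3.5, 5.0])

# Before: one QPD evaluation per sample in a Python-level loop.
loop_result = np.array(
    [qpd_ppf_scalar(0.75, m, h) for m, h in zip(qv_median, qv_high)]
)

# After: a single vectorized call over the full arrays.
vec_result = qpd_ppf_array(0.75, qv_median, qv_high)

assert np.allclose(loop_result, vec_result)
```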
I suppose it’s good, then, that we have stringent tests. FYI, I think we also just found a bug in sklearn (https://github.com/sktime/skpro/pull/192). They do not seem to be testing their probabilistic interfaces systematically!