kube-state-metrics: Expose CronJob FailedNeedsStart state in some way
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened: We had a test cluster that was down for a long time due to a power outage, so some CronJobs failed to start and entered a FailedNeedsStart state. It was a while before we noticed this.
What you expected to happen: We expected there to be some metric for this, but it seems there is not. Maybe I am just missing something, so please tell me if so.
How to reproduce it (as minimally and precisely as possible): I believe you can reproduce this by creating a CronJob scheduled to run soon, then shutting down your entire control plane until that time passes, but I have not bothered to do that.
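For illustration (not part of the original report): today the only visible signal for this condition appears to be the warning event the CronJob controller emits, which can be checked with something like the following; the reason string matches the FailedNeedsStart event described above.

```shell
# Look for FailedNeedsStart warning events across all namespaces. These are
# emitted when the controller has missed too many start times (> 100) and no
# startingDeadlineSeconds is set on the CronJob.
kubectl get events --all-namespaces --field-selector reason=FailedNeedsStart
```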
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`): 1.10.5
- kube-state-metrics image version: 1.3.1
About this issue
- State: closed
- Created 6 years ago
- Reactions: 9
- Comments: 35 (7 by maintainers)
Sorry to bump such an old issue, but in my case this helped a lot: https://world.hey.com/nathan/kubernetes-cronjob-freshness-monitoring-with-prometheus-7a32cbb0
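Since only the link is given, here is a minimal sketch of that style of freshness alert, assuming Prometheus is scraping the standard kube-state-metrics CronJob metrics (`kube_cronjob_next_schedule_time`, `kube_cronjob_spec_suspend`); the one-hour threshold, alert name, and group name are illustrative assumptions, not taken from the linked post.

```yaml
# Sketch of a CronJob freshness alerting rule. Assumes kube-state-metrics
# metrics are available in Prometheus; threshold and names are illustrative.
groups:
  - name: cronjob-freshness
    rules:
      - alert: CronJobMissedSchedule
        # kube_cronjob_next_schedule_time is derived from the schedule and the
        # last recorded schedule time, so it stops advancing when the controller
        # stops scheduling the job (including the FailedNeedsStart case above).
        # A value far in the past therefore indicates missed runs.
        expr: |
          (time() - kube_cronjob_next_schedule_time > 3600)
          and (kube_cronjob_spec_suspend == 0)
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: 'CronJob {{ $labels.namespace }}/{{ $labels.cronjob }} has not been scheduled for over an hour'
```

The suspend check keeps intentionally suspended CronJobs from firing the alert; the threshold should be set per CronJob schedule rather than a single global value.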
I shudder to think of the collective amount of human lifespan wasted telling stale bots like fejta not to close our issues.