kubernetes: StatefulSet does not upgrade to a newer version of manifests
What happened:
If a StatefulSet gets stuck on a failing init container, it is impossible to fix it via a redeploy.
What you expected to happen:
If a new version of the StatefulSet is deployed, a new set of pods should eventually be created.
How to reproduce it (as minimally and precisely as possible):
Here is a repository with all the necessary manifests: https://github.com/zerkms/statefulset-init-stuck
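For context, the setup is roughly the following (a minimal sketch only, not the exact contents of the linked repository; names and images are placeholders): a StatefulSet whose init container always fails, so its pod stays stuck in `Init:CrashLoopBackOff`.

```yaml
# Minimal sketch (placeholder names/images, not the linked repo's manifests):
# a StatefulSet whose init container deliberately fails, leaving the pod
# stuck in Init:CrashLoopBackOff.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: init-stuck
spec:
  serviceName: init-stuck
  replicas: 1
  selector:
    matchLabels:
      app: init-stuck
  template:
    metadata:
      labels:
        app: init-stuck
    spec:
      initContainers:
        - name: init
          image: busybox:1.31
          # Always-failing init container; correcting this command in a new
          # revision and re-applying is the redeploy that never takes effect.
          command: ["sh", "-c", "exit 1"]
      containers:
        - name: main
          image: busybox:1.31
          command: ["sh", "-c", "sleep 3600"]
```

After applying a corrected revision (for example, changing the init command so it succeeds), the expectation is that the controller recreates the pod from the new template; instead the stuck pod remains on the old revision.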
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`): 1.14.1
- Cloud provider or hardware configuration: bare metal
- OS (e.g: `cat /etc/os-release`): Ubuntu 18.04 (bionic)
- Kernel (e.g. `uname -a`): 4.15.0-50-generic #54-Ubuntu SMP Mon May 6 18:46:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
- Install tools: kubeadm
- Network plugin and version (if this is a network-related bug):
- Others:
About this issue
- State: open
- Created 5 years ago
- Comments: 39 (22 by maintainers)
This appears to be the same issue as https://github.com/kubernetes/kubernetes/issues/60164, which was repeatedly closed as stale despite having a lot of interest from the community.
It would be great to get someone from sig/apps or a more appropriate group to chime in here, since this is a very difficult issue to deal with.
I opened a PR and reproduced the issue with an e2e test in #78182.
I guess by ‘a never version’ you mean ‘a newer version’.