kubernetes: Container spec hash changed. Container will be killed and recreated.
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
/kind feature
What happened:
During the update from v1.7.1 to v1.8.0, all Docker containers were restarted because the container spec hash changed.
What you expected to happen:
No restart of the Docker containers, because they did not change at all.
How to reproduce it (as minimally and precisely as possible):
- get yourself a k8s cluster in v1.7.x
- update (at least the kubelet) to v1.8.0
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl get nodes):
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-22-250-20.eu-central-1.compute.internal Ready <none> 56m v1.8.0
ip-172-22-250-21.eu-central-1.compute.internal Ready <none> 55m v1.7.1
- Cloud provider or hardware configuration:
AWS
- OS (e.g. from /etc/os-release):
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=1409.2.0
VERSION_ID=1409.2.0
BUILD_ID=2017-06-19-2321
PRETTY_NAME="Container Linux by CoreOS 1409.2.0 (Ladybug)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
COREOS_BOARD="amd64-usr"
- Kernel (e.g. uname -a):
Linux ip-172-22-250-20.eu-central-1.compute.internal 4.11.6-coreos #1 SMP Mon Jun 19 22:57:42 UTC 2017 x86_64 Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz GenuineIntel GNU/Linux
- Install tools:
- Others:
About this issue
- State: closed
- Created 7 years ago
- Reactions: 9
- Comments: 18 (10 by maintainers)
We’ve hit this as well. Caused a cluster-wide restart of every container.
Just wondering, why don’t we calculate the container hash from the container image, since the container image is the only mutable field? I know we would have to do some refactoring to HashContainer(container *v1.Container), but wouldn’t it be more reliable?

@zhangxiaoyu-zidif thanks for digging out the old issue!
Yes, any new field added to the container spec will cause the hash to change. This is not a new issue, but new fields are added to the container spec rarely (compared with the pod spec), so it doesn’t happen often. As discussed in #23104, we need either a new way to generate the hash (e.g., using only updatable fields) or another method for the kubelet to examine and compare the state of the container against the spec.
Calling this (i.e., container restart) out in the 1.9 release note would be nice.
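To illustrate why hashing the whole spec breaks across upgrades, here is a minimal sketch. The struct names and fields are illustrative (not the real v1.Container type), and it uses %#v with FNV as a stand-in for the spew-based dump the kubelet actually feeds into its hasher:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// ContainerV1 models the container spec as an old kubelet sees it.
type ContainerV1 struct {
	Name  string
	Image string
}

// ContainerV2 models the same container after an upgrade added a field
// (analogous to MountPropagation appearing in v1.8).
type ContainerV2 struct {
	Name             string
	Image            string
	MountPropagation *string // nil by default, but still part of the dump
}

// hashOf mimics the kubelet approach: hash a deep dump of the whole
// value. The real code uses spew; %#v is a simplified stand-in.
func hashOf(v interface{}) uint32 {
	h := fnv.New32a()
	fmt.Fprintf(h, "%#v", v)
	return h.Sum32()
}

func main() {
	before := ContainerV1{Name: "web", Image: "nginx:1.13"}
	after := ContainerV2{Name: "web", Image: "nginx:1.13"}

	// Same logical container, different hash: the kubelet restarts it.
	fmt.Println(hashOf(before) == hashOf(after))
	// Hashing only the updatable field would be stable across upgrades.
	fmt.Println(hashOf(before.Image) == hashOf(after.Image))
}
```

Hashing only updatable fields, as proposed above, sidesteps the problem because newly added (and thus zero-valued) fields never enter the hash input.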
@hzxuzhonghu the result I tested was that the spew package also dumps newly added fields (with nil values) to the hasher.
MountPropagation is the newly added field. So if the container uses VolumeMounts, the new MountPropagationMode will be included in the hash input and the hash result changes. Other newly added fields are in the same situation.