kubernetes: Failed to update Node Allocatable Limits on 1.8.2
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened: We’ve deployed kube 1.8.2 and are seeing the following errors:
Failed to update Node Allocatable Limits "": failed to set supported cgroup subsystems for cgroup : Failed to set config for supported subsystems : failed to write 8201408512 to memory.limit_in_bytes: write /var/lib/docker/devicemapper/mnt/69641d63999364ce6bed9ff9a37e18922d1738df76d84db691a854f0e931e435/rootfs/sys/fs/cgroup/memory/memory.limit_in_bytes: invalid argument
kubelet: E1116 09:26:19.176810 10132 helpers.go:138] readString: Failed to read "/sys/fs/cgroup/memory/system.slice/docker-d0b90be4bc0f7a2dd8669b5955b3355c16140eaff3df8264cd6f3c9236218067.scope/memory.limit_in_bytes": read /sys/fs/cgroup/memory/system.slice/docker-d0b90be4bc0f7a2dd8669b5955b3355c16140eaff3df8264cd6f3c9236218067.scope/memory.limit_in_bytes: no such device
kubelet: E1116 09:20:02.379813 15458 helpers.go:138] readString: Failed to read "/sys/fs/cgroup/memory/user.slice/user-997.slice/session-192267.scope/memory.soft_limit_in_bytes": read /sys/fs/cgroup/memory/user.slice/user-997.slice/session-192267.scope/memory.soft_limit_in_bytes: no such device
This is somehow related to https://github.com/kubernetes/kubernetes/issues/42701, but we thought it was fixed in 1.8.
The cgroup driver we use is cgroupfs.
What you expected to happen:
No errors
How to reproduce it (as minimally and precisely as possible):
Use kubespray to install a multi-master Kubernetes cluster
Environment:
- Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2+coreos.0", GitCommit:"4c0769e81ab01f47eec6f34d7f1bb80873ae5c2b", GitTreeState:"clean", BuildDate:"2017-10-25T16:24:46Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2+coreos.0", GitCommit:"4c0769e81ab01f47eec6f34d7f1bb80873ae5c2b", GitTreeState:"clean", BuildDate:"2017-10-25T16:24:46Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration: AWS
- OS (e.g. from /etc/os-release): CentOS7
- Kernel (e.g. uname -a): 3.10.0-693.2.2.el7.x86_64
- Install tools: kubespray
About this issue
- State: closed
- Created 7 years ago
- Comments: 36 (4 by maintainers)
We have the same issue here, but on AKS. We are running a 5-node cluster and we get the following error for all 5 nodes. Here are the system infos:

and the error we get after updating from 1.8.1 to 1.8.7

I had the same issue. According to the docs, cgroups-per-qos is supposed to be enabled by default. This is how I resolved the issue:

Add/change --cgroups-per-qos=true --enforce-node-allocatable=pods on your KUBELET_ARGS line located inside /etc/kubernetes/kubelet so it looks something like this:

Then run sudo systemctl restart kubelet.service on your worker nodes.

I have the same issue on AKS (1.8.7) in westeurope. It is working again after restarting the nodes. Thanks to @otaviosoares
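The workaround above can be sketched as a short shell snippet. This is a sketch, not the exact fix: the sample file name kubelet.sample and its pre-existing KUBELET_ARGS contents are made up for illustration, since the real flags vary by install; on an actual worker node the file would be /etc/kubernetes/kubelet, edited with sudo, followed by the kubelet restart.

```shell
# Hypothetical stand-in for /etc/kubernetes/kubelet; the pre-existing
# flags on a real node will differ per install.
cat > kubelet.sample <<'EOF'
KUBELET_ARGS="--pod-manifest-path=/etc/kubernetes/manifests"
EOF

# Prepend the two flags from the workaround to the KUBELET_ARGS line.
sed -i 's/^KUBELET_ARGS="/KUBELET_ARGS="--cgroups-per-qos=true --enforce-node-allocatable=pods /' kubelet.sample

cat kubelet.sample

# On a real worker node, follow with:
#   sudo systemctl restart kubelet.service
```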