kubernetes: Kubelet immediately eats all of a node's memory when a livenessProbe command outputs infinite stdout
What happened: If a pod configures a livenessProbe command that writes to stdout indefinitely, kubelet quickly consumes all of the node's memory, which can end in OOM kills and the node going unhealthy.
What you expected to happen: Kubelet should only buffer a bounded amount of probe output; anything beyond that limit should simply be dropped.
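As a sketch of that expected behavior (this is not kubelet's actual code, and maxProbeOutput is a made-up constant for illustration), Go's io.LimitReader is one way to keep a bounded prefix of the probe output and drop the rest:

package main

import (
	"fmt"
	"io"
	"os/exec"
)

// maxProbeOutput is a hypothetical cap chosen purely for illustration.
const maxProbeOutput = 10 * 1024 // 10 KiB

func main() {
	// Run the same probe command the manifest below uses.
	cmd := exec.Command("yes")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Keep at most maxProbeOutput bytes; everything past the cap is dropped.
	kept, err := io.ReadAll(io.LimitReader(stdout, maxProbeOutput))
	if err != nil {
		panic(err)
	}

	// Stop the still-running probe command once enough output was collected.
	_ = cmd.Process.Kill()
	_ = cmd.Wait()

	fmt.Printf("kept %d bytes of probe output; memory usage stays bounded\n", len(kept))
}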
How to reproduce it (as minimally and precisely as possible): Create the following pod:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    readinessProbe:
      exec:
        command:
        - "yes"
      failureThreshold: 3
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    livenessProbe:
      exec:
        command:
        - "yes"
      failureThreshold: 3
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
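For reference, the same failure pattern can be demonstrated outside Kubernetes with a small standalone Go program that loosely mimics what an exec probe does: run the command and buffer its stdout in memory with no cap. This is only an illustration of the pattern, not kubelet's implementation, and it will briefly allocate a lot of memory when run:

package main

import (
	"bytes"
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Caution: this deliberately buffers a firehose of output in memory and
	// can allocate hundreds of megabytes (or more) within a single second.
	ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
	defer cancel()

	// Run the probe command and collect everything it writes to stdout,
	// with no limit on the buffer size.
	cmd := exec.CommandContext(ctx, "yes")
	var buf bytes.Buffer
	cmd.Stdout = &buf

	_ = cmd.Run() // returns once the context deadline kills `yes`

	fmt.Printf("buffered %d MiB of probe output in one second\n", buf.Len()/(1024*1024))
}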
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version): Reproduced on 1.6, 1.9 and 1.12
- Cloud provider or hardware configuration: AWS
- OS (e.g. from /etc/os-release): CoreOS 1800.7.0
- Kernel (e.g. uname -a): Linux ip-10-24-15-116.eu-west-1.compute.internal 4.14.63-coreos #1 SMP Wed Aug 15 22:26:16 UTC 2018 x86_64 Intel® Xeon® CPU E5-2686 v4 @ 2.30GHz GenuineIntel GNU/Linux
- Install tools: kube-aws (except for 1.12, which was using DCOS)
- Others:
/kind bug
Tested on 1.14. After creating the pod on an empty cluster, kubelet on the node where the pod is scheduled exhausts the node's entire memory, and before long the node becomes unresponsive.