kubernetes: StatefulSet/DaemonSet reports invalid status due to ResourceQuota.
What happened?
We limited the number of pods in a namespace to 2 via a ResourceQuota and then deployed a StatefulSet with 5 replicas. Only 2 pods were launched, which is the expected behavior, but the StatefulSet status shows only 1 pod under the READY column. It should show 2 pods.
$ kubectl get resourcequota -n test2
NAME       AGE   REQUEST     LIMIT
pj-quota   50m   pods: 2/2
$ kubectl get pod -n test2
NAME      READY   STATUS    RESTARTS   AGE
test2-0   1/1     Running   0          37m
test2-1   1/1     Running   0          36m
$ kubectl get sts -n test2
NAME    READY   AGE
test2   1/5     33m
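To confirm that the missing replicas are being rejected by the quota rather than failing for some other reason, the StatefulSet controller's FailedCreate events can be inspected (the exact event message varies by version):
$ kubectl get events -n test2 --field-selector reason=FailedCreate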
A similar issue also occurred with a DaemonSet. When we deployed a DaemonSet in a namespace with the same pod quota on a multi-node cluster (1 master node, 2 worker nodes), it deployed one pod on 2 of the 3 nodes, which is the expected behavior. But the DaemonSet status is wrong in the same way: it shows 1 pod under the CURRENT and UP-TO-DATE columns and 0 pods under the READY and AVAILABLE columns. It should show 2 pods in all of these columns.
$ kubectl get resourcequota -n test
NAME       AGE    REQUEST     LIMIT
pj-quota   141m   pods: 2/2
$ kubectl get pod -n test
NAME          READY   STATUS    RESTARTS   AGE
test2-lj58r   1/1     Running   0          5m19s
test2-xhq2m   1/1     Running   0          5m19s
$ kubectl get daemonset -n test
NAME    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
test2   3         1         0       1            0           <none>          5m57s
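The quota usage confirms that the namespace has exhausted its pod budget (the describe output shape may differ slightly by version):
$ kubectl describe resourcequota pj-quota -n test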
This happens only when a ResourceQuota is applied to the namespace; without it, both the StatefulSet and the DaemonSet report the expected status.
I also checked this behavior with a Deployment, and in that case the status is reported correctly:
$ kubectl get deployment -n test3
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
test-deployment   2/4     2            2           23s
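The Deployment manifest is not included in this report; a minimal equivalent that would reproduce the 2/4 output above (our reconstruction, reusing the same busybox pod template, applied to a namespace test3 carrying the same pods: "2" quota) could look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: test-deployment
  template:
    metadata:
      labels:
        app: test-deployment
    spec:
      containers:
      - name: cli1
        image: docker.io/library/busybox
        imagePullPolicy: Always
        command: ["sh", "-c", "while :; do echo hogehoge; sleep 5 & wait; done"]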
What did you expect to happen?
StatefulSet and DaemonSet should report the correct status when a ResourceQuota is applied to the namespace.
For this specific case, the status should look like:
StatefulSet:
$ kubectl get sts -n test2
NAME    READY   AGE
test2   2/5     33m
DaemonSet:
$ kubectl get daemonset -n test
NAME    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
test2   3         2         2       2            2           <none>          5m57s
How can we reproduce it (as minimally and precisely as possible)?
- Set up a multi-node cluster (1 master, 2 worker nodes) and verify it with the command shown below:
$ minikube start --nodes 3 -p multinode-demo
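Before continuing, verify that all three nodes are up and Ready:
$ kubectl get nodes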
- Create 2 namespaces, one for the StatefulSet and one for the DaemonSet:
$ kubectl create ns test
$ kubectl create ns test2
- Apply the ResourceQuota YAML below to both namespaces (the apply commands follow the manifest):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pj-quota
spec:
  hard:
    pods: "2"
- In namespace test2, apply the YAMLs below for the StatefulSet (the apply commands follow the manifests):
svc.yaml:
apiVersion: v1
kind: Service
metadata:
  name: test2
  labels:
    app: test2
spec:
  ports:
  - port: 80
  clusterIP: None
  selector:
    app: test2
sts.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: test2
  name: test2
spec:
  replicas: 5
  serviceName: "test2"
  selector:
    matchLabels:
      app: test2
  template:
    metadata:
      labels:
        app: test2
    spec:
      containers:
      - command:
        - sh
        - -c
        - "while :; do echo hogehoge; sleep 5 & wait; done"
        image: docker.io/library/busybox
        imagePullPolicy: Always
        name: cli1
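Assuming the manifests are saved as svc.yaml and sts.yaml (the filenames are ours), apply them:
$ kubectl apply -f svc.yaml -n test2
$ kubectl apply -f sts.yaml -n test2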
Now check the status:
$ kubectl get pod -n test2
$ kubectl get sts -n test2
- In namespace test, apply the YAML below for the DaemonSet (the apply command follows the manifest):
daemonset.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: test2
  name: test2
spec:
  selector:
    matchLabels:
      app: test2
  template:
    metadata:
      labels:
        app: test2
    spec:
      containers:
      - command:
        - sh
        - -c
        - "while :; do echo hogehoge; sleep 5 & wait; done"
        image: docker.io/library/busybox
        imagePullPolicy: Always
        name: cli1
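Assuming the manifest is saved as daemonset.yaml (the filename is ours), apply it:
$ kubectl apply -f daemonset.yaml -n test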
Now check the status:
$ kubectl get pod -n test
$ kubectl get daemonset -n test
Anything else we need to know?
No response
Kubernetes version
$ kubectl version
Client Version: v1.24.3
Server Version: v1.24.1
$ minikube version
minikube version: v1.26.0
commit: f4b412861bb746be73053c9f6d2895f12cf78565
OS version
N/A
About this issue
- State: closed
- Created 2 years ago
- Comments: 36 (36 by maintainers)
After looking at #113726, I think it is OK to just close this now, as that PR is not really related to this issue but is just cleanup.
/close
Hi @gjkim42
I think all the related PRs for this issue are merged except #113726.
For DaemonSets:
- Resolved in v1.27 (master): #113787
- Cherry-picked to v1.26: #114819
- Cherry-picked to v1.25: #114818
For StatefulSets:
- Resolved in v1.25: #109694
- Cherry-picked to v1.23: #112084
- Cherry-picked to v1.24: #112083
Am I right?
Hi @gjkim42 IMO, someone from the Apps team can help with that.
IMHO, if we think this issue is important enough to document on the website, we may rather consider cherry-picking it.
I am not against it, but I think we could choose either cherry-pick https://github.com/kubernetes/kubernetes/pull/109694 or just leave it.
DaemonSet has the exact same issue: it cannot update its status if pod creation fails.
I’ll propose a PR to fix it.
The issue with StatefulSet has already been addressed since Kubernetes v1.25 by https://github.com/kubernetes/kubernetes/pull/109694.
I'll cherry-pick it to release-1.24 so that it is fixed in the next 1.24.x release.
For the daemonset, I’ll look further.
/assign