kubernetes: PodTopologySpreadConstraints doesn't work
What happened: The Kubernetes cluster has three worker nodes and one master node; their labels are as follows:
[root@master-3 yaml]# kubectl get node -l'zone' --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master Ready worker 4d3h v1.18.6 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,zone=1
slave-3.146 Ready worker 15d v1.18.6 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=slave-3.146,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,zone=2
zbw-0.211 Ready worker 15d v1.18.6 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=zbw-0.211,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=proxy,zone=3
[root@master-3 yaml]#
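For reference, the zone labels above can be applied with plain kubectl label commands (a sketch assuming they were set by hand, using the node names from the listing):

kubectl label node master zone=1
kubectl label node slave-3.146 zone=2
kubectl label node zbw-0.211 zone=3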
When I tried to use a StatefulSet to verify topologySpreadConstraints, the result was not what I expected.
[root@master-3 yaml]# kubectl get pod -ntest -owide -l'foo=bar'
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pause-0 1/1 Running 0 5m5s 172.31.126.231 zbw-0.211 <none> <none>
pause-1 1/1 Running 0 4m54s 172.31.126.224 zbw-0.211 <none> <none>
pause-2 1/1 Running 0 4m37s 172.31.126.246 zbw-0.211 <none> <none>
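One way to check how the scheduler handled the constraint is to look at the scheduling events for one of the pods (a sketch; the namespace and pod names match the output above):

kubectl -n test describe pod pause-1 | grep -A10 Events
kubectl -n test get events --field-selector involvedObject.name=pause-1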
What you expected to happen: I expected the pods to be spread across different worker nodes, but all of them ended up on the same one.
How to reproduce it (as minimally and precisely as possible): Here is my YAML file:
apiVersion: v1
kind: Service
metadata:
  name: pause
  namespace: test
spec:
  clusterIP: None
  selector:
    foo: bar
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pause
  namespace: test
  labels:
    foo: bar
spec:
  # podManagementPolicy: Parallel
  selector:
    matchLabels:
      foo: bar
  serviceName: pause
  replicas: 3
  template:
    metadata:
      name: pause
      labels:
        foo: bar
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              foo: bar
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: zone
                    operator: Exists
      containers:
        - name: pause
          image: lenhattan86/pause:3.1
          imagePullPolicy: Always
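With maxSkew: 1, whenUnsatisfiable: DoNotSchedule, and topologyKey: zone, the scheduler should allow only one matching pod per zone until every zone whose nodes match the nodeAffinity has one, so the expected placement here is one pod per node. A quick sanity check is whether the scheduler is running with the EvenPodsSpread feature gate switched off (a sketch, assuming a kubeadm static-pod scheduler on the node named master; the gate is on by default in stock v1.18, so the grep only matches if it was set explicitly):

kubectl -n kube-system get pod kube-scheduler-master -o yaml | grep -i feature-gates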
Anything else we need to know?: Issue 91152 reports a similar symptom. Here are some references: advance-usage, pod-topology-spread-constraints.
Environment:
- Kubernetes version (use kubectl version):
  Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"e56f0e6297981b70192ae06db03f6c92301eb704", GitTreeState:"clean", BuildDate:"2020-07-17T05:36:26Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
  Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:51:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration:
- OS (e.g: cat /etc/os-release):
  NAME="CentOS Linux" VERSION="7 (Core)" ID="centos" ID_LIKE="rhel fedora" VERSION_ID="7" PRETTY_NAME="CentOS Linux 7 (Core)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:centos:centos:7" HOME_URL="https://www.centos.org/" BUG_REPORT_URL="https://bugs.centos.org/" CENTOS_MANTISBT_PROJECT="CentOS-7" CENTOS_MANTISBT_PROJECT_VERSION="7" REDHAT_SUPPORT_PRODUCT="centos" REDHAT_SUPPORT_PRODUCT_VERSION="7"
- Kernel (e.g. uname -a): Linux master-3.145 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
- Install tools: kubeadm
- Network plugin and version (if this is a network-related bug):
- Others:
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 21 (21 by maintainers)
@alculquicondor @Huang-Wei @knight42 First of all, thanks for your help. The EvenPodsSpread feature was not working because I used the wrong kubeadm: the kubeadm I used had been modified by my colleague, and the feature was disabled in it. I tried the same manifest on the play-with-k8s website and it works there. I am very sorry for the trouble caused, and this issue can be closed.
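For anyone who runs into the same symptom with a customized kubeadm, here is a minimal sketch of a kubeadm ClusterConfiguration (v1beta2, the config API used by v1.18) that turns the gate back on explicitly for the API server and scheduler; stock v1.18 already enables it by default, so this is only needed when a modified build disables it:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.6
apiServer:
  extraArgs:
    # EvenPodsSpread is beta and on by default in v1.18; set it explicitly
    # only if a custom build or config has disabled it
    feature-gates: "EvenPodsSpread=true"
scheduler:
  extraArgs:
    feature-gates: "EvenPodsSpread=true"

This can be passed to kubeadm init --config, or the equivalent --feature-gates flag can be added directly to the static pod manifests under /etc/kubernetes/manifests.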