kubernetes: ReplicaSet not spread amongst failure domains
/kind bug
I create a GKE cluster with three zones, one node in each, then create a Deployment/ReplicaSet with 3 replicas via `kubectl run`.
What happened: Two pods end up on node 1 and one pod on node 2.
What you expected to happen: One pod on each of the three nodes.
How to reproduce it (as minimally and precisely as possible): Use `kubectl run` to create a deployment. (If you don't get this outcome, delete the deployment and try again until you do.)
```
$ kubectl get nodes --show-labels
NAME                                             STATUS    AGE       VERSION        LABELS
gke-gke-highlights-regional-pool-62b7bad4-26v0   Ready     10m       v1.7.8-gke.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=regional-pool,failure-domain.beta.kubernetes.io/region=europe-west1,failure-domain.beta.kubernetes.io/zone=europe-west1-c,kubernetes.io/hostname=gke-gke-highlights-regional-pool-62b7bad4-26v0
gke-gke-highlights-regional-pool-884432cb-qw4n   Ready     10m       v1.7.8-gke.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=regional-pool,failure-domain.beta.kubernetes.io/region=europe-west1,failure-domain.beta.kubernetes.io/zone=europe-west1-b,kubernetes.io/hostname=gke-gke-highlights-regional-pool-884432cb-qw4n
gke-gke-highlights-regional-pool-ca7f6c0d-dbsr   Ready     8m        v1.7.8-gke.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=regional-pool,failure-domain.beta.kubernetes.io/region=europe-west1,failure-domain.beta.kubernetes.io/zone=europe-west1-d,kubernetes.io/hostname=gke-gke-highlights-regional-pool-ca7f6c0d-dbsr

$ kubectl run hello-web --image=gcr.io/google-samples/hello-app:1.0 --port=8080 --replicas=3
deployment "hello-web" created

$ kubectl describe pod | grep Node:
Node:           gke-gke-highlights-regional-pool-ca7f6c0d-dbsr/10.132.0.14
Node:           gke-gke-highlights-regional-pool-884432cb-qw4n/10.132.0.15
Node:           gke-gke-highlights-regional-pool-ca7f6c0d-dbsr/10.132.0.14
```
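Not part of the original report, but a common workaround while the scheduler's default spreading is only a soft preference: add preferred pod anti-affinity on the zone label so replicas repel each other across failure domains. A sketch (the `app: hello-web` labels and the manifest shape are assumptions to match the `kubectl run` example above; on a 1.7/1.8 cluster the Deployment `apiVersion` would be `apps/v1beta1` or `extensions/v1beta1` rather than `apps/v1`):

```yaml
# Sketch only: preferred (soft) anti-affinity nudges the scheduler to
# place replicas with the same "app" label in different zones.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              # Zone label used by the nodes in this report; newer
              # clusters use topology.kubernetes.io/zone instead.
              topologyKey: failure-domain.beta.kubernetes.io/zone
              labelSelector:
                matchLabels:
                  app: hello-web
      containers:
      - name: hello-app
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
```

Using `requiredDuringSchedulingIgnoredDuringExecution` instead would make the spread a hard constraint, at the cost of leaving replicas unschedulable if a zone is unavailable.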
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`): 1.7.8 and 1.8.3:
```
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.6", GitCommit:"4bc5e7f9a6c25dc4c03d4d656f2cefd21540e28c", GitTreeState:"clean", BuildDate:"2017-09-14T06:55:55Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.8-gke.0", GitCommit:"a7061d4b09b53ab4099e3b5ca3e80fb172e1b018", GitTreeState:"clean", BuildDate:"2017-10-10T18:48:45Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.3-gke.0", GitCommit:"86d3ac5eaf57223302c95e7d9fc1aeff55fb0c15", GitTreeState:"clean", BuildDate:"2017-11-08T21:42:58Z", GoVersion:"go1.8.3b4", Compiler:"gc", Platform:"linux/amd64"}
```
- Cloud provider or hardware configuration: GKE
- OS (e.g. from /etc/os-release): COS
About this issue
- Original URL
- State: open
- Created 7 years ago
- Comments: 20 (10 by maintainers)
@hakman That flake is probably #89178