kubernetes: 0/x nodes are available: 1 node(s) had volume node affinity conflict
I have a PV, a PVC, and a pod that mounts the PV. Creating the pod fails with: `0/5 nodes are available: 1 node(s) had volume node affinity conflict, 4 node(s) didn't match node selector`.
my pv:
```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: gamelanguage-log-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: "Retain"
  hostPath:
    path: "/backoffice-logs/gamelanguage-svc"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node4
```
my pvc:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gamelanguage-log-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
```
and my pod:
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: gamelanguage-svc
  labels:
    role: gamelanguage-svc
    environment: test
spec:
  volumes:
    - name: gamelanguage-log-storage
      persistentVolumeClaim:
        claimName: gamelanguage-log-pvc
  containers:
    - name: gamelanguage-svc
      image: ubuntu
      command: ["/bin/bash", "-ec", "do stuff"]
      volumeMounts:
        - mountPath: "/logs/gamelanguage-svc"
          name: gamelanguage-log-storage
  restartPolicy: Never
  nodeSelector:
    "kubernetes.io/hostname": node2
```
I am using a manual StorageClass, and the cluster was created by kubespray on 5 Ubuntu machines. What might be the reason? I've read answers suggesting that different zones might be the problem, but I am not using GKE or AWS, so I don't think this is a region issue.
I think it's because the `nodeSelector` in your pod doesn't match the `nodeAffinity` in your `PersistentVolume`. You're requesting a volume on node4 but asking to schedule the pod on node2, so Kubernetes won't ever be able to bind them together; one option, sketched below, is simply to make the two agree.
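A minimal sketch of that option, assuming the data really does have to live on node4 (where the hostPath is): point the pod's `nodeSelector` at the same node the PV's `nodeAffinity` names.

```yaml
# Hypothetical fragment of the pod spec above: the nodeSelector now names the
# same node as the PV's nodeAffinity (node4), so both constraints can be
# satisfied on a single node.
spec:
  nodeSelector:
    kubernetes.io/hostname: node4
```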
Alternatively, edit your StorageClass and change its `volumeBindingMode` to `WaitForFirstConsumer`; this will make sure the volume is bound on the node where the pod is scheduled.
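For reference, a minimal sketch of such a StorageClass, assuming the `manual` class is statically provisioned (no dynamic provisioner). Note that `volumeBindingMode` is immutable on an existing StorageClass, so in practice it has to be deleted and recreated rather than edited in place:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual                             # matches storageClassName in the PV/PVC above
provisioner: kubernetes.io/no-provisioner  # static provisioning; PVs are created by hand
volumeBindingMode: WaitForFirstConsumer    # delay binding until a consuming pod is scheduled
```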
Is no one experiencing the same issue?
I am hitting this same issue and getting volume node affinity conflicts.
Sorry if this rotten issue gets reopened.
I am using StatefulSets and my pod doesn't have a nodeSelector defined.
/remove-lifecycle rotten
I am also facing this issue. It looks like a bug in AKS: my nodes are labeled with failure-domain/zone=0, but the PVs are being tagged for westus2-2 or -3.
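A hypothetical illustration of that AKS mismatch, using the legacy zone label key (assumed here to be `failure-domain.beta.kubernetes.io/zone`): a PV whose `nodeAffinity` requires a zone that no node actually carries can never be satisfied, which surfaces as exactly this conflict.

```yaml
# Hypothetical nodeAffinity of a dynamically provisioned PV on AKS: it requires
# zone westus2-2, but if every node is labeled with zone "0", no node can ever
# satisfy the expression, and pods using the volume fail with a node affinity conflict.
nodeAffinity:
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        operator: In
        values:
        - westus2-2
```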