kubernetes: local storage PV can be created even if no such path exists
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened: I was using the local storage feature and found that a PV can always be created even if its local path does not exist. This causes my pod to fail to start.
What you expected to happen: Can we make the PV creation fail?
How to reproduce it (as minimally and precisely as possible):
- Two nodes in the cluster, with the local persistent volume feature enabled by feature gate (see the flag sketch after the node listing).
root@k8s001:~/cases/local-storage# kubectl get nodes
NAME STATUS AGE VERSION
k8s001 Ready 29d v1.8.0-alpha.1.685+950a09d982286a
k8s004 Ready 5h v1.8.0-alpha.1.685+950a09d982286a
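For reference, the alpha local volume support sits behind the PersistentLocalVolumes feature gate; a minimal sketch of the flag, assuming it is passed to the kube-apiserver, kube-scheduler, and kubelet (where exactly it is set, e.g. a systemd unit or kubeadm config, is cluster-specific):

--feature-gates=PersistentLocalVolumes=true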
- Create a PV with a non-existent local path; note that /mnt/disks/vol11 does not exist on any of the hosts in the cluster.
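This can be confirmed on each node before creating the PV (illustrative check):

ls /mnt/disks/vol11    # fails: No such file or directory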
root@k8s001:~/cases/local-storage-fail# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
  annotations:
    "volume.alpha.kubernetes.io/node-affinity": '{
        "requiredDuringSchedulingIgnoredDuringExecution": {
          "nodeSelectorTerms": [
            { "matchExpressions": [
                { "key": "kubernetes.io/hostname",
                  "operator": "In",
                  "values": ["k8s001"]
                }
              ]}
          ]}
      }'
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol11
root@k8s001:~/cases/local-storage-fail# kubectl create -f ./pv.yaml
persistentvolume "example-local-pv" created
root@k8s001:~/cases/local-storage-fail# kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
example-local-pv 1Gi RWO Delete Available local-storage 1s
- Create a PVC based on the PV
root@k8s001:~/cases/local-storage-fail# cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-storage
root@k8s001:~/cases/local-storage-fail# kubectl create -f ./pvc.yaml
persistentvolumeclaim "example-local-claim" created
root@k8s001:~/cases/local-storage-fail# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
example-local-claim Bound example-local-pv 1Gi RWO local-storage 3s
- Create a pod with the PVC; the pod fails to start because the PV's local path does not exist.
root@k8s001:~/cases/local-storage-fail# cat local.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx:1.8.1
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: example-local-claim
root@k8s001:~/cases/local-storage-fail# kubectl create -f ./local.yaml
pod "test-pd" created
root@k8s001:~/cases/local-storage-fail# kubectl describe pods test-pd
Name: test-pd
Namespace: default
Node: k8s001/192.168.56.11
Start Time: Tue, 04 Jul 2017 17:09:03 +0800
Labels: <none>
Annotations: <none>
Status: Pending
IP:
Containers:
test-container:
Container ID:
Image: nginx:1.8.1
Image ID:
Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/test-pd from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-8557l (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
test-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: example-local-claim
ReadOnly: false
default-token-8557l:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-8557l
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
7s 7s 1 default-scheduler Normal Scheduled Successfully assigned test-pd to k8s001
7s 7s 1 kubelet, k8s001 Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-8557l"
7s 3s 4 kubelet, k8s001 Warning FailedMount MountVolume.SetUp failed for volume "example-local-pv" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: /mnt/disks/vol11 /var/lib/kubelet/pods/6c9e7845-6098-11e7-a6a3-08002759511a/volumes/kubernetes.io~local-volume/example-local-pv [bind]
Output: mount: special device /mnt/disks/vol11 does not exist
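As a workaround, making the path exist on the node lets the kubelet's mount retry succeed and the pod start; a sketch (a bare directory is enough for the bind mount to succeed, though for real use you would back it with a disk; /dev/sdb1 below is hypothetical):

mkdir -p /mnt/disks/vol11            # create the PV path on k8s001
# mount /dev/sdb1 /mnt/disks/vol11   # optional: back it with a real device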
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version):
root@k8s001:~/cases/local-storage# kubectl version
Client Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.0-alpha.1.685+950a09d982286a", GitCommit:"950a09d982286a32a045f956d91680e0defd71dd", GitTreeState:"clean", BuildDate:"2017-07-02T11:23:13Z", GoVersion:"go1.8.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.0-alpha.1.685+950a09d982286a", GitCommit:"950a09d982286a32a045f956d91680e0defd71dd", GitTreeState:"clean", BuildDate:"2017-07-02T11:23:13Z", GoVersion:"go1.8.1", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration:
root@k8s001:~/cases/local-storage# cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04 LTS"
NAME="Ubuntu"
VERSION="16.04 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
UBUNTU_CODENAME=xenial
- OS (e.g. from /etc/os-release):
- Kernel (e.g. uname -a):
root@k8s001:~/cases/local-storage# uname -a
Linux k8s001 4.4.0-83-generic #106-Ubuntu SMP Mon Jun 26 17:54:43 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
- Install tools: manually.
- Others:
Maintainer responses:
Validating this may be challenging. The validation path in the API server would have to evaluate the node selector, reach each node it matches (possibly more than one; PV node affinity could later be used for regions or zones spanning many nodes), gain access to the host system, and check that the path exists.
If you use the external local storage provisioner to manage and create your local PVs, then you will not have this issue.
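The provisioner creates a PV only for mount points it actually discovers under its configured discovery directory on each node, so a PV pointing at a missing path cannot appear. A sketch of preparing such a discovery mount (the /mnt/disks discovery path and the tmpfs backing are illustrative; the discovery directory comes from the provisioner's configuration):

mkdir -p /mnt/disks/vol11
mount -t tmpfs -o size=1G vol11 /mnt/disks/vol11    # tmpfs for testing; use a real disk in production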
This is working as intended. Right now, we trust the admin to create the 'right' PV.