k3s: PVC not working with K3S
Describe the bug
Pods using PVCs are not starting.
To Reproduce
Run kubectl apply -f busypvc.yaml, where busypvc.yaml is:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: busyboxpv
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busyboxpv
spec:
  selector:
    matchLabels:
      app: busyboxpv
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: busyboxpv
    spec:
      containers:
        - image: busybox
          name: busyboxpv
          command: ["sleep", "60000"]
          volumeMounts:
            - name: busyboxpv
              mountPath: /mnt
      volumes:
        - name: busyboxpv
          persistentVolumeClaim:
            claimName: busyboxpv
On another cluster the busybox container is running within about 10 seconds, but here it stays in a Pending state forever:
/ # kubectl get pods
NAME READY STATUS RESTARTS AGE
busyboxpv-77665c79f4-f2fhp 0/1 Pending 0 3m44s
A describe of the PVC gives:
/ # kubectl describe pvc busyboxpv
Name: busyboxpv
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"busyboxpv","namespace":"default"},"spec":{"accessMo...
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 9s (x19 over 4m20s) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
Mounted By: busyboxpv-77665c79f4-f2fhp
Expected behavior
The busybox pod should be running with the defined PVC mounted.
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Reactions: 4
- Comments: 24 (7 by maintainers)
k3s doesn’t come with a default storage class. We are looking at including https://github.com/rancher/local-path-provisioner by default which just uses local disk, such that PVCs will at least work by default. You can try that storage class or install another third party one. More info here
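As a quick sanity check before installing anything (standard kubectl commands, nothing k3s-specific): if the storage class list is empty and there are no pre-created PersistentVolumes, the claim has nothing to bind to or provision from, which is exactly what the FailedBinding event above reports.
# No output here means no dynamic provisioner / default storage class is installed
kubectl get storageclass
# No pre-created PersistentVolumes means a bare PVC can never bind
kubectl get pv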
I was able to get the local provisioner working. It’d be great if it were enabled by default.
sudo mkdir /opt/local-path-provisioner
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

I can confirm PVCs work with the local-path provisioner as described above; following the instructions in the corresponding README works just fine. For very small deployments, this may be all you ever need (it is for me). As long as this or another solution isn’t included and enabled by default, could this be mentioned in the k3s README, at least in brief? I’d envision lots of people will hit this particular obstacle.
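Once the local-path storage class from the commands above is installed and marked as the default, the original busypvc.yaml should bind unchanged. If you prefer not to set a default class, here is a sketch of the same claim naming the class explicitly (assuming the class is called local-path, as in the patch command above):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: busyboxpv
spec:
  storageClassName: local-path   # explicit; not needed if local-path is the default class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi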
local-path-provisioner doesn’t work for me because it isn’t built for ARM
Any recommendations on what to use if “ReadWriteMany” is required? This doesn’t seem to support that access mode.
Just adding my 2 cents: I was able to get my PVCs working using NFS (required for my specific case). I followed https://github.com/kubernetes-incubator/external-storage/tree/master/nfs and, even after doing all the steps, I was still getting errors when binding. The solution was to install nfs-common on my Ubuntu 18.04 nodes: “sudo apt-get install nfs-common”.
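Regarding the ReadWriteMany question above: plain NFS does support that access mode. A minimal sketch of a statically provisioned NFS volume and matching claim (the server address, export path, and resource names below are placeholders, not taken from this thread; nfs-common is still required on every node, as noted):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-data              # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.10         # placeholder: your NFS server
    path: /exports/shared        # placeholder: your exported directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""           # empty string: bind to the static PV above, skip dynamic provisioning
  resources:
    requests:
      storage: 1Gi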
FWIW, I am able to successfully run rook-ceph in a k3s-provisioned cluster (0.9.1). It may be related to setting the proper CSI path and installing rook using CSI instead of flexvolume (CSI is now the default in rook v0.9.x +):
csi.kubeletDirPath: /var/lib/rancher/k3s/agent/kubelet

See https://github.com/billimek/k8s-gitops/blob/master/rook-ceph/chart/rook-ceph-chart.yaml#L15-L17 for context.
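Spelled out as a Helm values fragment (only the key quoted above, nested under csi; the rest of the chart’s values are left at their defaults):
csi:
  kubeletDirPath: /var/lib/rancher/k3s/agent/kubelet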
Fixed with 1.10, tested it with k3d v1.3.4!
The PVC gets created under a path like:
If you create a file in /mnt, you will see it there.
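A quick way to see that end to end, assuming the local-path provisioner with its default base directory /opt/local-path-provisioner from the commands above (the exact per-PVC subdirectory name varies between provisioner versions):
# Pick a pod from the busyboxpv deployment in the example above
POD=$(kubectl get pods -l app=busyboxpv -o jsonpath='{.items[0].metadata.name}')
# Write a test file into the mounted volume
kubectl exec "$POD" -- sh -c 'echo hello > /mnt/hello.txt'
# On the node, the file shows up under the provisioner's base directory
sudo find /opt/local-path-provisioner -name hello.txt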