rook: rook-ceph: failed to create clone from snapshot
Is this a bug report or feature request?
- Bug Report
Deviation from expected behavior:
When creating a PVC whose dataSource is a (ready) VolumeSnapshot, provisioning fails with the following relevant log output:
```
csi-provisioner I0929 21:31:46.758705 1 controller.go:1317] provision "test-namspace/test-pvc-clone" class "ssd-fs": started
csi-provisioner I0929 21:31:46.758921 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"test-namspace", Name:"test-pvc-clone", UID:"2471c4f6-242c-49a0-a0e6-8f5dc16b8903", APIVersion:"v1", ResourceVersion:"446481105", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "test-namspace/test-pvc-clone"
csi-provisioner W0929 21:31:46.956630 1 controller.go:943] Retrying syncing claim "2471c4f6-242c-49a0-a0e6-8f5dc16b8903", failure 0
csi-provisioner E0929 21:31:46.956702 1 controller.go:966] error syncing claim "2471c4f6-242c-49a0-a0e6-8f5dc16b8903": failed to provision volume with StorageClass "ssd-fs": rpc error: code = Aborted desc = pending
csi-provisioner I0929 21:31:46.956721 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"test-namspace", Name:"test-pvc-clone", UID:"2471c4f6-242c-49a0-a0e6-8f5dc16b8903", APIVersion:"v1", ResourceVersion:"446481105", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "ssd-fs": rpc error: code = Aborted desc = pending
csi-cephfsplugin E0929 21:31:46.956102 1 controllerserver.go:69] ID: 122 Req-ID: pvc-2471c4f6-242c-49a0-a0e6-8f5dc16b8903 failed to create clone from snapshot csi-snap-7fee3c7d-2168-11ec-b737-de52ee8d938f: pending
```
The PVC stays in status Pending.
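The `Aborted desc = pending` response from csi-cephfsplugin appears to correspond to a CephFS subvolume clone that is still (or stuck) in a pending state. A sketch for inspecting that clone from the Rook Ceph toolbox; `myfs` and `csi-vol-<uuid>` are placeholders for the actual CephFS volume and the CSI-managed clone subvolume:

```shell
# Run inside the Rook Ceph toolbox pod.
# List the CSI-managed subvolumes to find the clone's name:
ceph fs subvolume ls myfs --group_name=csi

# Check the clone's state (e.g. in-progress, complete, failed):
ceph fs clone status myfs csi-vol-<uuid> --group_name=csi
```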
Expected behavior:
The PV should be provisioned and the PVC should enter status Bound.
How to reproduce it (minimal and precise):
```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: test-pvc-snap
  namespace: test-namespace
spec:
  volumeSnapshotClassName: ssd-fs-snapclass
  source:
    persistentVolumeClaimName: test-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-clone
  namespace: test-namespace
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ssd-fs
  resources:
    requests:
      storage: 64Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: test-pvc-snap
```
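Before creating the clone PVC, the snapshot should report `readyToUse: true`. One way to confirm that with plain kubectl, using the names from the manifests above:

```shell
# Should print "true" once the snapshot is ready to be used as a dataSource
kubectl -n test-namespace get volumesnapshot test-pvc-snap \
  -o jsonpath='{.status.readyToUse}'
```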
Environment:
- OS (e.g. from /etc/os-release): Debian 10
- Kernel (e.g. `uname -a`): 4.19.0-17-amd64
- Cloud provider or hardware configuration: bare-metal
- Rook version (use `rook version` inside of a Rook Pod): v1.6.4
- Storage backend version (e.g. for ceph do `ceph -v`): ceph/ceph:v15.2.13
- Kubernetes version (use `kubectl version`): v1.19.15
- Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): kubeadm
- Storage backend status (e.g. for Ceph use `ceph health` in the Rook Ceph toolbox): HEALTH_WARN mons are allowing insecure global_id reclaim; 1 MDSs report oversized cache; 1 clients failing to respond to cache pressure; mons a,f are low on available space
About this issue
- State: closed
- Created 3 years ago
- Comments: 16 (11 by maintainers)
@gmartynov you need to pass `--group_name=csi` when running subvolume commands
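For illustration, a sketch of what that looks like from the Rook Ceph toolbox; `myfs` stands in for the actual CephFS volume name and `csi-vol-<uuid>` for a CSI-managed subvolume:

```shell
# Without --group_name the default (no-group) namespace is queried,
# so the CSI-managed subvolumes and their snapshots do not show up.
ceph fs subvolume ls myfs --group_name=csi
ceph fs subvolume snapshot ls myfs csi-vol-<uuid> --group_name=csi
```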