ceph-csi: Can't dynamically create PersistentVolume after PVC is created: Operation not permitted

Hello everyone. I've got an issue with Ceph. My setup:

  • RKE cluster v2.3.5: 3 etcd, 2 controlplane, 10 worker nodes. The etcd and controlplane nodes have only an Internal NIC; the worker nodes have 3 NICs: Internal (internal access only), External (public access), Storage (for Ceph access)
  • Kubernetes 1.16.7
  • ceph-csi deployed per the manual, with this client in the Ceph configuration:
client.kubernetes
    key: <key>
    caps: [mgr] allow *
    caps: [mon] allow *
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=rbd
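
For reference, a client with these caps can be created and verified with the standard ceph auth commands; a minimal sketch, assuming the client name and pool shown above:

ceph auth get-or-create client.kubernetes \
    mgr 'allow *' \
    mon 'allow *' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd'
# confirm the caps actually in effect for the key ceph-csi uses
ceph auth get client.kubernetes
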
$ kubectl get all | grep csi
pod/csi-rbdplugin-67t8s               3/3   Running  0     22s
pod/csi-rbdplugin-h85qc               3/3   Running  0     22s
pod/csi-rbdplugin-j2sfn               3/3   Running  0     22s
pod/csi-rbdplugin-jbxq8               3/3   Running  0     22s
pod/csi-rbdplugin-k895x               3/3   Running  0     22s
pod/csi-rbdplugin-mfdgw               3/3   Running  0     22s
pod/csi-rbdplugin-provisioner-6956bdfdf9-6nwcj   6/6   Running  0     31s
pod/csi-rbdplugin-provisioner-6956bdfdf9-b8hsv   6/6   Running  0     31s
pod/csi-rbdplugin-provisioner-6956bdfdf9-wjr6k   6/6   Running  0     31s
pod/csi-rbdplugin-psks9               3/3   Running  0     22s
pod/csi-rbdplugin-wd5mn               3/3   Running  0     22s
pod/csi-rbdplugin-wdcc4               3/3   Running  0     22s
pod/csi-rbdplugin-wh8tm               3/3   Running  0     22s
service/csi-metrics-rbdplugin     ClusterIP  10.43.70.156  <none>    8080/TCP,8090/TCP  23s
service/csi-rbdplugin-provisioner   ClusterIP  10.43.118.252  <none>    8080/TCP,8090/TCP  32s
daemonset.apps/csi-rbdplugin  10    10    10   10      10     <none>     23s
deployment.apps/csi-rbdplugin-provisioner   3/3   3      3      32s
replicaset.apps/csi-rbdplugin-provisioner-6956bdfdf9   3     3     3    33s

After applying this PVC:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
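
the claim never provisions. The failure events can be read with kubectl describe, and the provisioner sidecar logs pulled directly (a sketch; the csi-provisioner container name is assumed from the standard ceph-csi provisioner Deployment):

kubectl describe pvc rbd-pvc
kubectl logs deploy/csi-rbdplugin-provisioner -c csi-provisioner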

The csi-provisioner log shows:

I0330 15:23:53.381131    1 controller.go:1199] provision "default/rbd-pvc" class "csi-rbd-sc": started
I0330 15:23:53.386399    1 controller.go:494] CreateVolumeRequest {Name:pvc-b8f3d915-b79d-4971-9bc9-3d3b8e6afe08 CapacityRange:required_bytes:1073741824 VolumeCapabilities:[mount:<fs_type:"ext4" mount_flags:"discard" > access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[clusterID:00f829de-9c40-4a0b-b8f2-8e270d3bded7 csi.storage.k8s.io/controller-expand-secret-name:csi-rbd-secret csi.storage.k8s.io/controller-expand-secret-namespace:default csi.storage.k8s.io/fstype:ext4 csi.storage.k8s.io/node-stage-secret-name:csi-rbd-secret csi.storage.k8s.io/node-stage-secret-namespace:default csi.storage.k8s.io/provisioner-secret-name:csi-rbd-secret csi.storage.k8s.io/provisioner-secret-namespace:default imageFeatures:layering pool:rbd] Secrets:map[] VolumeContentSource:<nil> AccessibilityRequirements:<nil> XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0330 15:23:53.386781    1 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"rbd-pvc", UID:"b8f3d915-b79d-4971-9bc9-3d3b8e6afe08", APIVersion:"v1", ResourceVersion:"5630315", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/rbd-pvc"
I0330 15:23:53.398947    1 connection.go:180] GRPC call: /csi.v1.Controller/CreateVolume
I0330 15:23:53.399004    1 connection.go:181] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-b8f3d915-b79d-4971-9bc9-3d3b8e6afe08","parameters":{"clusterID":"00f829de-9c40-4a0b-b8f2-8e270d3bded7","imageFeatures":"layering","pool":"rbd"},"secrets":"**stripped**","volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["discard"]}},"access_mode":{"mode":1}}]}
I0330 15:23:54.519042    1 connection.go:183] GRPC response: {}
I0330 15:23:54.519615    1 connection.go:184] GRPC error: rpc error: code = Internal desc = failed to get IOContext: failed to get connection: connecting failed: rados: ret=1, Operation not permitted
I0330 15:23:54.519669    1 controller.go:1016] Final error received, removing PVC b8f3d915-b79d-4971-9bc9-3d3b8e6afe08 from claims in progress
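
For context, rados: ret=1 is EPERM, i.e. the cluster rejects the connection the provisioner opens with the provisioner secret. The CreateVolumeRequest above implies a StorageClass along these lines (reconstructed from the logged parameters; reclaimPolicy is an assumption):

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: 00f829de-9c40-4a0b-b8f2-8e270d3bded7
  pool: rbd
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
mountOptions:
  - discard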

I can create a PersistentVolume in the Rancher UI and attach it to the created StorageClass, and it shows as created by the external provisioner rbd.csi.ceph.com. The network connection is fine and the credentials work.

From a test pod (kubectl run ceph-test --image=ceph/ceph:v15.2 --restart=Never -n default):

[root@ceph-test-d6b968c66-n49jl /]# rbd -n client.kubernetes --keyring /etc/ceph/keyring info rbd/pvctest
rbd image 'pvctest':
	size 1 GiB in 256 objects
	order 22 (4 MiB objects)
	snapshot_count: 0
	id: 1ed9de615f3bc8
	block_name_prefix: rbd_data.1ed9de615f3bc8
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	op_features:
	flags:
	create_timestamp: Mon Mar 30 15:53:40 2020
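
Since rbd info succeeds with the same identity, it is specifically the provisioner's RADOS connection that is rejected. The equivalent low-level check can be run from the same test pod (a sketch, assuming /etc/ceph/ceph.conf inside the pod points at the same monitors):

# open a plain RADOS connection as client.kubernetes, as the CSI driver does
ceph -n client.kubernetes --keyring /etc/ceph/keyring -s
# list objects in the pool the StorageClass targets
rados -n client.kubernetes --keyring /etc/ceph/keyring -p rbd ls | head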

About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 21 (6 by maintainers)

Most upvoted comments

Yes, this issue is not present even in the 2.0.1 or 2.0.0 releases.
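
If the problem only appeared after moving past the 2.0.x releases, pinning the manifests to a released tag is a possible workaround; a sketch, assuming the standard ceph-csi manifests (image quay.io/cephcsi/cephcsi, cephcsi container named csi-rbdplugin):

kubectl set image deployment/csi-rbdplugin-provisioner csi-rbdplugin=quay.io/cephcsi/cephcsi:v2.0.1
kubectl set image daemonset/csi-rbdplugin csi-rbdplugin=quay.io/cephcsi/cephcsi:v2.0.1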