rook: Users created by create-external-cluster-resources.py --restricted-auth-permissions won't work for CephFS
Is this a bug report or feature request?
- Bug Report
Deviation from expected behavior:
When creating a new deployment that uses a PVC referencing my CephFS storage class, the PVC errors out with the following warning: `failed to provision volume with StorageClass "rook-cephfs": rpc error: code = Internal desc = rados: ret=-1, Operation not permitted`
Using the admin keyring instead of the restricted users works fine.
Expected behavior: I expect the PVC (and volume) to be successfully created.
How to reproduce it (minimal and precise): Create an external Ceph cluster and generate the secrets with create-external-cluster-resources.py in --restricted-auth-permissions mode, then create a PVC with the rook-cephfs storage class.
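For the last step, a minimal PVC manifest might look like the following (the name, namespace, and storage class are taken from the report; the access mode and size are arbitrary placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-cephfs
  namespace: default
spec:
  accessModes:
    - ReadWriteMany        # placeholder; any supported mode reproduces the error
  resources:
    requests:
      storage: 1Gi         # placeholder size
  storageClassName: rook-cephfs
```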
Additional Notes: While this should work in theory, has someone actually tested this and managed to get it working? This might be related to https://github.com/ceph/ceph-csi/issues/2506 (though that issue is about subvolumes, which sound like a good approach to multi-tenancy CephFS; maybe take a closer look at this in the future?)
File(s) to submit:
Ceph users:

```
client.csi-cephfs-node-co-staging-k8s
        key: somkey
        caps: [mds] allow rw
        caps: [mgr] allow rw
        caps: [mon] allow r
        caps: [osd] allow rw tag cephfs data=cephfs.co-staging-k8s_fs.data
client.csi-cephfs-provisioner-co-staging-k8s
        key: somkey
        caps: [mgr] allow rw
        caps: [mon] allow r
        caps: [osd] allow rw tag cephfs metadata=cephfs.co-staging-k8s_fs.meta
```
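A listing like the one above can be retrieved on the external cluster with the standard `ceph auth get` command (shown here only as a way to inspect the caps; output abbreviated):

```console
$ ceph auth get client.csi-cephfs-provisioner-co-staging-k8s
```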
`kubectl describe` on the PVC:

```console
$ kubectl describe pvc test-cephfs
Name:          test-cephfs
Namespace:     default
StorageClass:  rook-cephfs
Status:        Pending
Volume:
Labels:        cattle.io/creator=norman
Annotations:   field.cattle.io/creatorId: u-qgg6x4f6xn
               volume.beta.kubernetes.io/storage-provisioner: rook-ceph-external.cephfs.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    test-cephfs-87d6c4f65-7wp9m
Events:
  Type     Reason                Age                   From                                                                                                                              Message
  ----     ------                ----                  ----                                                                                                                              -------
  Warning  ProvisioningFailed    58m (x14 over 77m)    rook-ceph-external.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-b55f5fc54-jwwrs_006741d9-8a23-442a-9c04-8749b082362f  failed to provision volume with StorageClass "rook-cephfs": rpc error: code = Internal desc = rados: ret=-1, Operation not permitted
  Normal   Provisioning          3m54s (x29 over 77m)  rook-ceph-external.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-b55f5fc54-jwwrs_006741d9-8a23-442a-9c04-8749b082362f  External provisioner is provisioning volume for claim "default/test-cephfs"
  Normal   ExternalProvisioning  2m16s (x303 over 77m) persistentvolume-controller                                                                                                waiting for a volume to be created, either by external provisioner "rook-ceph-external.cephfs.csi.ceph.com" or manually created by system administrator
```
Storage class:

```console
$ kubectl get storageclasses.storage.k8s.io rook-cephfs -o yaml
```

```yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2021-11-22T10:59:49Z"
  managedFields:
  - apiVersion: storage.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:allowVolumeExpansion: {}
      f:parameters:
        .: {}
        f:clusterID: {}
        f:csi.storage.k8s.io/controller-expand-secret-name: {}
        f:csi.storage.k8s.io/controller-expand-secret-namespace: {}
        f:csi.storage.k8s.io/node-stage-secret-name: {}
        f:csi.storage.k8s.io/node-stage-secret-namespace: {}
        f:csi.storage.k8s.io/provisioner-secret-name: {}
        f:csi.storage.k8s.io/provisioner-secret-namespace: {}
        f:fsName: {}
        f:pool: {}
      f:provisioner: {}
      f:reclaimPolicy: {}
      f:volumeBindingMode: {}
    manager: HashiCorp
    operation: Update
    time: "2021-11-22T10:59:49Z"
  name: rook-cephfs
  resourceVersion: "8123725"
  uid: 0952fd2a-6315-443c-af7d-f10f01315e0b
parameters:
  clusterID: rook-ceph-external
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph-external
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph-external
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph-external
fsName: co-staging-k8s_fs
pool: cephfs.co-staging-k8s_fs.data
provisioner: rook-ceph-external.cephfs.csi.ceph.com
reclaimPolicy: Retain
volumeBindingMode: Immediate
```
Environment:
- OS (e.g. from /etc/os-release): Debian Bullseye
- Kernel (e.g. `uname -a`): 5.14.0-1-amd64
- Rook version (use `rook version` inside of a Rook Pod): 2.7.8
- CSI versions: k8s.gcr.io/sig-storage/csi-attacher:v3.3.0, k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0, k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0, k8s.gcr.io/sig-storage/csi-resizer:v1.3.0, k8s.gcr.io/sig-storage/csi-snapshotter:v4.2.0, quay.io/cephcsi/cephcsi:v3.4.0
- Storage backend version (e.g. for ceph do `ceph -v`): 16.2.6
- Kubernetes version (use `kubectl version`): v1.20.11
- Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): Rancher/Terraform managed K8s cluster
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Comments: 21 (14 by maintainers)
Commits related to this issue
- security: use correct osd tags for restricted caps The create-external-cluster-resources.py with --create-external-cluster-resources set to true sets a wrong osd application tag. It should use the fs... — committed to CO-lhageman/rook by CO-lhageman 3 years ago
- security: use correct osd tags for restricted caps The create-external-cluster-resources.py with --create-external-cluster-resources set to true sets a wrong osd application tag. It should use the fs... — committed to rook/rook by CO-lhageman 3 years ago
- security: use correct osd tags for restricted caps The create-external-cluster-resources.py with --create-external-cluster-resources set to true sets a wrong osd application tag. It should use the fs... — committed to parth-gr/rook by CO-lhageman 3 years ago
@CO-lhageman @parth-gr Sorry I could not get to this issue as I was busy with other things. Yes, the issue is with the caps restriction: the caps set for the users were incorrect w.r.t. the osd application tag. The osd application tag takes the key/value pair `data|metadata=<filesystem-name>`, not `data|metadata=<pool-name>`. Change the original user caps in your case accordingly.
This should work. Please validate.
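Applying that rule to the users shown above, the corrected osd caps would presumably tag the filesystem name (`co-staging-k8s_fs`, the `fsName` from the storage class) instead of the pool names. A sketch of the corresponding `ceph auth caps` invocations, under that assumption:

```console
# Hypothetical reconstruction: swap the pool-name tags for the filesystem name,
# keeping the other caps unchanged.
$ ceph auth caps client.csi-cephfs-node-co-staging-k8s \
    mds 'allow rw' mgr 'allow rw' mon 'allow r' \
    osd 'allow rw tag cephfs data=co-staging-k8s_fs'
$ ceph auth caps client.csi-cephfs-provisioner-co-staging-k8s \
    mgr 'allow rw' mon 'allow r' \
    osd 'allow rw tag cephfs metadata=co-staging-k8s_fs'
```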
I changed the names to match the created Ceph users as a workaround; that issue should not be related to this one.
@parth-gr I can confirm it works without issues with RBD images.