ceph-csi: CephFS CSI driver does not update the quota when the PVC requested size is resized

Describe the bug

I changed the requested size in pvc.yaml. The original PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fileshare
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
  storageClassName: csi-cephfs-sc

Then I changed the storage request to:

storage: 20Mi
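
For reference, the same resize can be triggered without editing the file, for example with kubectl patch (claim name and namespace as in the example above):

kubectl patch pvc fileshare -p '{"spec":{"resources":{"requests":{"storage":"20Mi"}}}}'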

The PV and PVC objects in Kubernetes reflect the new size:

k get pv
pvc-60f6f968-050b-4a28-9112-db8de1747e6b   20Mi       RWX            Delete           Bound       default/fileshare                               csi-cephfs-sc            120m
k get pvc
fileshare        Bound    pvc-60f6f968-050b-4a28-9112-db8de1747e6b   20Mi       RWX            csi-cephfs-sc   121m

but the quota on the CephFS directory has not changed:

getfattr -n ceph.quota.max_bytes 5f858910-7fcf-408d-a14b-4c192aaa546e/
# file: 5f858910-7fcf-408d-a14b-4c192aaa546e/
ceph.quota.max_bytes="10485760"
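
As a point of comparison (not a fix), the quota xattr itself can be set manually on the subvolume directory with setfattr, which should take effect assuming the mount has permission to set quota attributes; 20971520 bytes corresponds to the requested 20Mi:

setfattr -n ceph.quota.max_bytes -v 20971520 5f858910-7fcf-408d-a14b-4c192aaa546e/
getfattr -n ceph.quota.max_bytes 5f858910-7fcf-408d-a14b-4c192aaa546e/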

Environment details

  • Image/version of Ceph CSI driver: 2.1.0
  • Helm chart version: 2.1.0
  • Kubernetes cluster version: 1.16.9
  • Logs (k logs ceph-csi-cephfs-provisioner-6d7cb9978b-b58nh -c csi-resizer):
I0502 15:37:39.591857       1 controller.go:225] Started PVC processing "default/fileshare"
I0502 15:37:39.612923       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"fileshare", UID:"60f6f968-050b-4a28-9112-db8de1747e6b", APIVersion:"v1", ResourceVersion:"79966", FieldPath:""}): type: 'Normal' reason: 'Resizing' External resizer is resizing volume pvc-60f6f968-050b-4a28-9112-db8de1747e6b
I0502 15:37:39.622313       1 connection.go:182] GRPC call: /csi.v1.Controller/ControllerExpandVolume
I0502 15:37:39.622335       1 connection.go:183] GRPC request: {"capacity_range":{"required_bytes":20971520},"secrets":"***stripped***","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["debug"]}},"access_mode":{"mode":5}},"volume_id":"0001-0024-bcd0d202-fba8-4352-b25d-75c89258d5ab-0000000000000001-22b377c5-8c7b-11ea-b1ba-ea0c41f9366d"}
I0502 15:37:41.505427       1 connection.go:185] GRPC response: {"capacity_bytes":20971520}
I0502 15:37:41.506125       1 connection.go:186] GRPC error: <nil>
I0502 15:37:41.506139       1 controller.go:364] Resize volume succeeded for volume "pvc-60f6f968-050b-4a28-9112-db8de1747e6b", start to update PV's capacity
I0502 15:37:41.543608       1 controller.go:370] Update capacity of PV "pvc-60f6f968-050b-4a28-9112-db8de1747e6b" to 20Mi succeeded
I0502 15:37:41.551288       1 controller.go:399] Resize PVC "default/fileshare" finished
I0502 15:37:41.551661       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"fileshare", UID:"60f6f968-050b-4a28-9112-db8de1747e6b", APIVersion:"v1", ResourceVersion:"79966", FieldPath:""}): type: 'Normal' reason: 'VolumeResizeSuccessful' Resize volume succeeded
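
The resizer reports success, so the gap is presumably in the CephFS plugin's call to Ceph. With log verbosity raised, something like the following shows which ceph fs subvolume command the provisioner actually ran (csi-cephfsplugin is the container name used by the ceph-csi Helm chart; adjust if yours differs):

k logs ceph-csi-cephfs-provisioner-6d7cb9978b-b58nh -c csi-cephfsplugin | grep -i subvolume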

Steps to reproduce

  • install the CephFS CSI driver from Helm chart 2.1.0
  • configure the storage class
  • create a PVC and wait for the PV to be created
  • change the requested storage size in the PVC
  • check the quotas on the CephFS directory (a condensed command sequence follows)
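
A condensed version of these steps as a shell sketch (the subvolume mount path is a placeholder; the storage class must have allowVolumeExpansion enabled for any PVC resize to be accepted):

kubectl get sc csi-cephfs-sc -o jsonpath='{.allowVolumeExpansion}'   # must print true
kubectl apply -f pvc.yaml
kubectl get pvc fileshare                                            # wait until STATUS is Bound
kubectl patch pvc fileshare -p '{"spec":{"resources":{"requests":{"storage":"20Mi"}}}}'
getfattr -n ceph.quota.max_bytes /path/to/cephfs/subvolume/dir       # quota still shows the old value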

About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 23 (4 by maintainers)

Most upvoted comments

I also had the same issue. In my case, I had deployed Ceph CSI v2.0 using Rook v1.2.7 (Kubernetes v1.18). Testing across various Ceph versions, resizing worked normally in Ceph v14.2.2 and v14.2.7, but not in Ceph v14.2.8.

In Ceph v14.2.8, the ceph fs subvolume resize command was introduced (cephfs: mgr/volumes: fs subvolume resize command, pr#31332).

However, the ceph-csi code still uses the ceph fs subvolume create command when resizing a CephFS volume: https://github.com/ceph/ceph-csi/blob/0ca07e465776a656c52e9d51dddd690a292d5fd2/internal/cephfs/controllerserver.go#L364-L367

On Ceph v14.2.8, I tried resizing a CephFS volume created with ceph-csi using both the ceph fs subvolume create command and the ceph fs subvolume resize command via the ceph client.

As a result, the ceph fs subvolume create command did not change the quota, but the ceph fs subvolume resize command changed the quota.

# try to cephFS subvolume resize (100MiB -> 2GiB)
# ceph fs subvolume create command
$ ceph fs subvolume create myfs csi-vol-da1a12e5-85ca-11ea-8eb5-0242ac110008 2147483648 --group_name csi --mode 777
$ getfattr -n ceph.quota.max_bytes  /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f8c257fe-b0e8-4bfa-828c-86085f262136/globalmount
getfattr: Removing leading '/' from absolute path names
# file: var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f8c257fe-b0e8-4bfa-828c-86085f262136/globalmount
ceph.quota.max_bytes="104857600"

# ceph fs subvolume resize command
$ ceph fs subvolume resize myfs csi-vol-da1a12e5-85ca-11ea-8eb5-0242ac110008 2147483648 --group_name csi --no_shrink
[
    {
        "bytes_used": 119537664
    },
    {
        "bytes_quota": 2147483648
    },
    {
        "bytes_pcent": "5.57"
    }
]
$ getfattr -n ceph.quota.max_bytes  /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f8c257fe-b0e8-4bfa-828c-86085f262136/globalmount
getfattr: Removing leading '/' from absolute path names
# file: var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f8c257fe-b0e8-4bfa-828c-86085f262136/globalmount
ceph.quota.max_bytes="2147483648"

It seems that the ceph fs subvolume create command no longer works for resizing as the Ceph version goes up.

It has to be fixed in the CSI layer so that ceph fs subvolume resize is used for the affected Ceph versions, i.e. 14.2.8 and later; a rough sketch of the gating follows.
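
Conceptually the fix is a version gate around the resize call. As an illustration in shell (the real change belongs in the Go code linked above; the version detection here is only a sketch, while the two ceph fs subvolume commands are the ones demonstrated earlier):

# illustrative only: choose the command based on the running Ceph version
SUBVOL=csi-vol-da1a12e5-85ca-11ea-8eb5-0242ac110008
NEW_SIZE=2147483648
ceph_ver=$(ceph version | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -1)
if printf '14.2.8\n%s\n' "$ceph_ver" | sort -V -c >/dev/null 2>&1; then
  # 14.2.8 and later: the dedicated resize command updates the quota
  ceph fs subvolume resize myfs "$SUBVOL" "$NEW_SIZE" --group_name csi --no_shrink
else
  # older releases: re-issuing create with the new size still applies the quota
  ceph fs subvolume create myfs "$SUBVOL" "$NEW_SIZE" --group_name csi
fi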

Yeah, that’s pending on the CSI side.