rook: How can I resize pv created by "ceph.rook.io/block"

How can I resize a PV while preserving its data? The PV is created dynamically through a StorageClass with provisioner: ceph.rook.io/block.

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 18 (5 by maintainers)

Most upvoted comments

Log in to the rook toolbox pod and run rbd resize <pv_id> --size=<size, e.g. 1T>. Then either go to the node where this PV is mapped (if it is in use), or run rbd map <pv> && mount /dev/... /mnt in the toolbox pod. Then run xfs_growfs on the mount point, and unmount and unmap if you did this in the toolbox. For running pods there is no need to stop the pod; this can be done on an active one.

Not sure how to do this for ext4; I've only done it for XFS.
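
A minimal sketch of those steps as run from the toolbox pod, assuming an XFS-formatted image named pvc-example in a pool named replicapool (both placeholder names, not from the original comment):

# grow the RBD image backing the PV
rbd resize replicapool/pvc-example --size=1T
# map and mount it in the toolbox (only if no node currently has it mapped)
DEVICE=$(rbd map replicapool/pvc-example)
mount $DEVICE /mnt
# grow the XFS filesystem to fill the resized image
xfs_growfs /mnt
# clean up
umount /mnt
rbd unmap replicapool/pvc-example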

I tried resizing (extending the size; I haven't tried shrinking the image) an ext4-formatted volume. Here are the steps that worked for me (a verification sketch follows the list):

  1. Stop the pods which are currently using the volume.

  2. Get the RBD image name:

kubectl describe pv <pv-name> | grep RBDImage

  3. Log in to the rook ceph tools container:

# replace "rook-ceph" with the rook namespace
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pods | grep tool | tr -s ' ' | cut -d ' ' -f 1) bash

  4. In the ceph tools container:

# resize the rbd image
rbd resize <rbd-image-name> --size=<size, e.g. 10G>

# map the rbd image
DEVICE=$(rbd map <rbd-image-name>)

# check the ext4 filesystem, then grow it to fill the resized image
e2fsck -f $DEVICE
resize2fs $DEVICE

# unmap
rbd unmap <rbd-image-name>

  5. Restart the pod.
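
A quick way to confirm the pod sees the new size after the restart (a minimal sketch; the pod name and mount path are placeholders, not from the original thread):

# run df inside the restarted pod against the volume's mount path
kubectl exec -it <pod-name> -- df -h <mount-path>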

Hi, here is a list of commands executed in the toolbox pod for xfs:

rbd resize pvc-0785a7e1-731b-11e9-aa6e-0050568292c7 --size=200G --pool=replicapool
rbd map replicapool/pvc-0785a7e1-731b-11e9-aa6e-0050568292c7
mount /dev/rbd0 /mnt
xfs_growfs /mnt
umount /mnt
rbd unmap replicapool/pvc-0785a7e1-731b-11e9-aa6e-0050568292c7

which seems to work (xfs_growfs reported: blocks changed from 2621440 to 52428800). A status check gives me this:

rbd info replicapool/pvc-0785a7e1-731b-11e9-aa6e-0050568292c7
rbd image 'pvc-0785a7e1-731b-11e9-aa6e-0050568292c7':
	size 200 GiB in 51200 objects
	order 22 (4 MiB objects)
	id: f4116b8b4567
	block_name_prefix: rbd_data.f4116b8b4567
	format: 2
	features: layering
	op_features: 
	flags: 
	create_timestamp: Fri May 10 11:59:14 2019

rbd du replicapool/pvc-0785a7e1-731b-11e9-aa6e-0050568292c7
warning: fast-diff map is not enabled for pvc-0785a7e1-731b-11e9-aa6e-0050568292c7. operation may be slow.
NAME                                     PROVISIONED    USED 
pvc-0785a7e1-731b-11e9-aa6e-0050568292c7     200 GiB 8.2 GiB 

However my pod still thinks it is out of disk. What am I missing?

EDIT I had to go to the node the pod was on. I ran cat /etc/mtab and found my volume:

/dev/rbd0 /var/lib/kubelet/plugins/ceph.rook.io/rook-ceph-system/mounts/pvc-0785a7e1-731b-11e9-aa6e-0050568292c7 xfs rw,seclabel,relatime,attr2,inode64,sunit=8192,swidth=8192,noquota 0 0

I then ran xfs_growfs /var/lib/kubelet/plugins/ceph.rook.io/rook-ceph-system/mounts/pvc-0785a7e1-731b-11e9-aa6e-0050568292c7 and the pod started again.
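
In general, that flow can be reproduced with placeholders (the pod, PVC, and mount path below are hypothetical, and the kubelet plugin path can differ between Rook versions):

# find which node the pod is running on
kubectl get pod <pod-name> -o wide
# on that node, locate the rbd mount for the PVC
grep <pvc-name> /etc/mtab
# grow the XFS filesystem in place against the mounted path
xfs_growfs <mount-path-from-mtab>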

regards

Another simple method (with ext4):

  • On the Ceph dashboard, edit the image and change the size
  • Delete the deployment using the volume
  • Run this job: https://pastebin.com/viKFsdW5
  • Re-deploy your app with the volume

If you add the resize2fs utility to your app container, you can simply run resize2fs /dev/rbd[number] inside it after resizing the image in the Ceph dashboard, which avoids service downtime.
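
A hedged sketch of that approach (the pod name and rbd device number are hypothetical, and this assumes the mapped rbd device is visible inside the container, as the comment above describes):

# after growing the image via the Ceph dashboard, grow ext4 online from inside the pod
kubectl exec -it <pod-name> -- resize2fs /dev/rbd0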

Is there any tutorial/documentation on how to get this working?

Yup, that’s fine. It doesn’t matter much after it’s been allocated.