kubernetes: Resizing the persistent volume in Azure AKS doesn't reflect the change at pod level.

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened: I am currently running k8s version 1.11.2 on Azure AKS. I tried resizing the persistent volume by setting allowVolumeExpansion: true on the storage class and editing the PVC to the desired size.
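For reference, this is roughly what I did, expressed as kubectl patch commands (equivalent to editing the objects by hand; managed-premium and mongo-0 are the names from my cluster):

# allow expansion on the existing storage class
kubectl patch storageclass managed-premium -p '{"allowVolumeExpansion": true}'

# bump the PVC request from 2Ti to 3Ti (same effect as kubectl edit pvc mongo-0)
kubectl patch pvc mongo-0 -p '{"spec": {"resources": {"requests": {"storage": "3Ti"}}}}'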

After restarting the pod, the size of the PVC changed to the desired size, i.e. from 2Ti -> 3Ti:

kubectl get pvc
NAME        STATUS    VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mongo-0     Bound     pvc-xxxx   3Ti        RWO            managed-premium   1h

But when I log in to the pod and run df -h, the disk size still shows 2Ti:

kubectl exec -it mongo-0 -- bash
root@mongo-0:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdc        2.0T  372M  2.0T   1% /mongodb

/sig storage

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 32 (11 by maintainers)

Most upvoted comments

Hello, is there a way to get the pod to wait until the disk resize happens? Right now, after I kill the pod, it re-launches and the pvc never has time to resize.

This is troublesome because I am trying to resize a StatefulSet one pod at a time. The StatefulSet is a Kafka cluster and it requires a certain number of brokers to be online. I can't just shut down the entire thing to resize the nodes.

I also cannot add new nodes with a bigger size, since there is no way to change the volumeClaimTemplates in a StatefulSet.

If I reduce the number of replicas by 1, I can resize the last instance, but I can't resize the second-to-last instance. I would essentially need to shut down half the cluster to be able to resize a disk, which isn't possible.

Is there a way to prevent a pod from running / mounting so I can resize one by one?

I had the same issue with a Kafka deployment on AKS. I did find a hacky way to get the PVCs resized. For each PVC, I resized the request. As stated, the Kafka pods were bound to their respective PVCs, so Azure would not increase the disk size. I placed a NoSchedule taint on every node in the cluster, preventing any new pods from scheduling, and then deleted all Kafka pods. They did not restart because of the taint, which allowed AKS/Azure to apply the new size request. I then removed the taint from all nodes, and all Kafka pods started and extended the filesystem on the resized disks. It only took a few minutes for Kafka to become healthy again. Hope this helps in similar situations.
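For anyone trying the same thing, a rough sketch of the commands (the taint key and the app=kafka label selector are just examples; use whatever matches your cluster):

# taint every node so no new pods can be scheduled
for node in $(kubectl get nodes -o name); do
  kubectl taint "$node" resize=pending:NoSchedule
done

# delete the Kafka pods; they stay Pending because of the taint,
# which leaves the disks detached so Azure can apply the new size
kubectl delete pod -l app=kafka

# once the disks report the new size, remove the taint and let the pods reschedule
for node in $(kubectl get nodes -o name); do
  kubectl taint "$node" resize:NoSchedule-
done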

@andyzhangx The way Kubernetes is designed, you should be able to resize the PVC, restart the pod, and have the new size take effect. Your implementation should work as follows to be of any use:

1. Resize issued inside Kubernetes.
2. Resize set to pending within Azure.
3. Pod deleted, causing the disk to detach.
4. Disk is resized in Azure.
5. Pod starts as expected with the resized disk.

Having to completely delete a Deployment is a no-go for most users, especially if they use an operator.
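In kubectl terms, the flow being asked for is just this (reusing the mongo-0 example from this issue; the resize is expected to happen automatically between the delete and the restart):

# request the new size on the PVC
kubectl patch pvc mongo-0 -p '{"spec": {"resources": {"requests": {"storage": "3Ti"}}}}'

# delete the pod; the controller recreates it and the disk should come back at the new size
kubectl delete pod mongo-0
kubectl exec -it mongo-0 -- df -h /mongodb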

@pydevops I had already deleted all the pods using the PVC before I started the kubectl edit pvc.... That’s why the disks have been detached from the VM. And what do you mean by “the new pod”? So k8s is waiting for me to launch a pod and mount the PVC, right?

Yes, correct.

@sharkymcdongles thanks for the info. It’s expected that the disk size has not changed yet. You need to delete the pod, or even the Deployment/StatefulSet, make sure the disk is in an unattached state, and then run the kubectl edit pvc ... operation again. After that operation, you can check in the Azure portal that the disk size has changed to the new size. That’s the correct way, since an Azure disk’s size cannot be changed while it is attached to a VM.
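If you want to verify outside the portal, something like this with the Azure CLI shows both the attach state and the size (the resource group and disk name below are placeholders; with AKS the dynamically provisioned disk normally lives in the MC_* node resource group):

# placeholders: substitute your node resource group and the disk backing the PVC
az disk show \
  --resource-group MC_myResourceGroup_myAKSCluster_eastus \
  --name kubernetes-dynamic-pvc-xxxx \
  --query '{state: diskState, sizeGb: diskSizeGb}' -o table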