rook: Resizing OSD (Virtual Disks) Does Not Increase Available Capacity
Is this a bug report or feature request?
- Bug Report
Deviation from expected behavior:
- Increasing the OSD size (by resizing the virtual disks) increases total capacity and used capacity rather than available capacity.
Expected behavior:
- Increasing the OSD size (by resizing the virtual disks) should increase total capacity and available capacity.
Details:
- Increased total capacity on 7/11 by 200 GB (50 GB * 4 OSDs) from ~400 GB to ~600 GB.
- Increased total capacity on 7/13 by 200 GB (50 GB * 4 OSDs) from ~600 GB to ~800 GB (see the verification commands below).
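For reference, a quick way to confirm where the new space actually lands, assuming the default `rook-ceph` namespace and toolbox deployment name:

```sh
# Open a shell in the Rook toolbox (names assume a default Rook install)
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash

# Cluster-wide totals: TOTAL, USED, and AVAIL
ceph df

# Per-OSD breakdown: SIZE should reflect the resized disks,
# and after a successful expansion AVAIL (not USE) should grow
ceph osd df tree
```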
Screenshot:
- The image from Grafana shows the capacity increases over the past few days. When total capacity increases, you would expect available capacity to increase as well, but instead used capacity increases.
Environment:
- OS (e.g. from /etc/os-release): Ubuntu 22.04.2 LTS
- Kernel (e.g. `uname -a`): Linux dev-master0 5.15.0-70-generic #77-Ubuntu SMP Tue Mar 21 14:02:37 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
- Cloud provider or hardware configuration: On-premise, self-managed K8s running on VMware.
- Rook version (use `rook version` inside of a Rook Pod): v1.11.4
- Storage backend version (e.g. for ceph do `ceph -v`): 17.2.6
- Kubernetes version (use `kubectl version`): v1.26.4
- Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): Vanilla
- Storage backend status (e.g. for Ceph use `ceph health` in the Rook Ceph toolbox): HEALTH_WARN 1 nearfull osd(s); 12 pool(s) nearfull
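The nearfull warning is consistent with the symptom above: the resized space is being counted as used rather than available. The thresholds and per-OSD utilization can be inspected from the toolbox, for example:

```sh
# Which OSDs and pools are nearfull, and at what utilization
ceph health detail
ceph osd df

# The ratios that trigger the nearfull/full warnings
ceph osd dump | grep -i ratio
```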
About this issue
- State: closed
- Created a year ago
- Comments: 16 (7 by maintainers)
@satoru-takeuchi @travisn
I was able to fix this by replacing the OSDs. Thanks for everything!
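For anyone finding this later, a rough sketch of the replacement flow, one OSD at a time; the namespace, OSD id (0), deployment name, and device path are examples for this cluster, not a definitive procedure:

```sh
# Stop the operator so it does not re-create the OSD mid-procedure
kubectl -n rook-ceph scale deploy rook-ceph-operator --replicas=0

# From the toolbox: take the OSD out and let data migrate off it
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd out osd.0
# ...wait until `ceph status` reports the PGs active+clean...

# Remove the OSD from the cluster and delete its deployment
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- \
  ceph osd purge 0 --yes-i-really-mean-it
kubectl -n rook-ceph delete deploy rook-ceph-osd-0

# On the node: wipe the disk (destructive!) so Rook re-provisions
# it at the new size on the next operator pass
sgdisk --zap-all /dev/sdb

# Restart the operator; it re-creates the OSD on the resized disk
kubectl -n rook-ceph scale deploy rook-ceph-operator --replicas=1
```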
@satoru-takeuchi
Each node has 2 virtual disks. Disk 1 (/dev/sda) is used by the operating system. Disk 2 (/dev/sdb) is used by Ceph for OSD. Disk 2 on each of these 4 worker nodes was resized as described in the issue. Not sure if this is what you were asking about. Please let me know if there is anything else I can provide. Thank you!
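In case it is useful to others on VMware: after growing the virtual disk, the guest kernel has to pick up the new size before Ceph can see it. A minimal check, assuming SCSI disks and `/dev/sdb` as described above:

```sh
# Ask the SCSI layer to re-read the device size after the VMware resize
echo 1 > /sys/block/sdb/device/rescan

# Confirm the kernel now reports the enlarged disk
lsblk /dev/sdb
```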