rook: ceph-volume lvm batch is not creating OSDs on partitions in latest Nautilus v14.2.15 and Octopus v15.2.8
Is this a bug report or feature request?
- Bug Report
Deviation from expected behavior:
In the latest Nautilus (v14.2.15) and Octopus (v15.2.8) releases, ceph-volume lvm batch no longer allows an OSD to be created on a raw partition.
In the integration tests we are seeing this failure with the following output in the OSD prepare job:
2020-12-17 06:43:11.875525 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/sda1 --report
2020-12-17 06:43:12.515146 D | exec: usage: ceph-volume lvm batch [-h] [--db-devices [DB_DEVICES [DB_DEVICES ...]]]
2020-12-17 06:43:12.515335 D | exec: [--wal-devices [WAL_DEVICES [WAL_DEVICES ...]]]
2020-12-17 06:43:12.515413 D | exec: [--journal-devices [JOURNAL_DEVICES [JOURNAL_DEVICES ...]]]
2020-12-17 06:43:12.515480 D | exec: [--auto] [--no-auto] [--bluestore] [--filestore]
2020-12-17 06:43:12.515619 D | exec: [--report] [--yes]
2020-12-17 06:43:12.515976 D | exec: [--format {json,json-pretty,pretty}] [--dmcrypt]
2020-12-17 06:43:12.516441 D | exec: [--crush-device-class CRUSH_DEVICE_CLASS]
2020-12-17 06:43:12.521079 D | exec: [--no-systemd]
2020-12-17 06:43:12.521237 D | exec: [--osds-per-device OSDS_PER_DEVICE]
2020-12-17 06:43:12.521308 D | exec: [--data-slots DATA_SLOTS]
2020-12-17 06:43:12.521394 D | exec: [--block-db-size BLOCK_DB_SIZE]
2020-12-17 06:43:12.521460 D | exec: [--block-db-slots BLOCK_DB_SLOTS]
2020-12-17 06:43:12.521524 D | exec: [--block-wal-size BLOCK_WAL_SIZE]
2020-12-17 06:43:12.521575 D | exec: [--block-wal-slots BLOCK_WAL_SLOTS]
2020-12-17 06:43:12.521656 D | exec: [--journal-size JOURNAL_SIZE]
2020-12-17 06:43:12.521738 D | exec: [--journal-slots JOURNAL_SLOTS] [--prepare]
2020-12-17 06:43:12.521813 D | exec: [--osd-ids [OSD_IDS [OSD_IDS ...]]]
2020-12-17 06:43:12.521880 D | exec: [DEVICES [DEVICES ...]]
2020-12-17 06:43:12.521930 D | exec: ceph-volume lvm batch: error: /dev/sda1 is a partition, please pass LVs or raw block devices
Expected behavior: Raw partitions have been working and are expected to continue working.
How to reproduce it (minimal and precise):
Attempt to create an OSD on a partition with v15.2.8.
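For reference, a minimal sketch of reproducing this outside of Rook, assuming the Ceph v15.2.8 binaries are available on the node and that /dev/sda1 is a placeholder for any spare raw partition:

```sh
# Sketch only: ceph-volume and /dev/sda1 are assumed to exist on the node.
# This is the same invocation the Rook OSD prepare job runs (see the log above).
ceph-volume lvm batch --prepare --bluestore --yes \
    --osds-per-device 1 /dev/sda1 --report

# On the affected releases (newer than v14.2.12 / v15.2.7) the argument
# parser rejects the partition:
#   ceph-volume lvm batch: error: /dev/sda1 is a partition, please pass LVs or raw block devices
# On v14.2.12 / v15.2.7 the partition is accepted.
```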
If you need to create OSDs on partitions, you’ll need to use Ceph v14.2.12 or v15.2.7 while we are following up on the issue.
Hi,
I have rook-ceph v1.5.8 and I'm getting this issue if I use anything other than the ceph/ceph:v15.2.7 image in my CephCluster definition. At least with these versions of Ceph I'm facing this issue:
It seems there is a regression after v15.2.7 😦. But in fact, isn't it a pure Ceph issue? To summarize for others coming here: use ceph/ceph:v15.2.7 for the CephCluster image.
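A minimal sketch of how that pin looks in a CephCluster manifest (the name, namespace, and storage section below are illustrative placeholders, not a complete spec; only the cephVersion.image line matters here):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph          # placeholder name/namespace
  namespace: rook-ceph
spec:
  cephVersion:
    # Pin to the last release where ceph-volume still accepts partitions.
    image: ceph/ceph:v15.2.7
  dataDirHostPath: /var/lib/rook
  storage:
    useAllNodes: true
    useAllDevices: false
    devices:
      - name: "sda1"       # example raw partition, as in the report above
```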
@DjangoCalendar You can upgrade and keep using your existing OSDs. But if you want to set up a fresh OSD, it won't work on partitions.
@travisn does this issue affect upgrades as well?
Let’s say I am using:
With this setup I am running on raw partitions.
Will I be able to upgrade to, for example, a version such as:
To ask it more briefly: does this issue affect only fresh deployments, or upgrades as well?