ceph-ansible: OSD node add fails (ceph-volume: Failed to find physical volume)
I have been using Ceph Nautilus (v14.2.13) for the last couple of months, installed with ceph-ansible version 4. Recently I ran into an OSD node failure. While re-adding the OSD node after reinstalling the OS, the playbook generates the error below:
**Deployment Node:**
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: list indices must be integers, not str
fatal: [ceph1.bol-online.com]: FAILED! => changed=false
module_stderr: |
Traceback (most recent call last):
File "<stdin>", line 114, in <module>
File "<stdin>", line 106, in ansiballz_main
File "<stdin>", line 49, in invoke_module
File "/tmp/ansible_ceph_volume_payload_FSVLCL/_main__.py", line 685, in <module>
File "/tmp/ansible_ceph_volume_payload_FSVLCL/__main__.py", line 681, in main
File "/tmp/ansible_ceph_volume_payload_FSVLCL/__main__.py", line 648, in run_module
TypeError: list indices must be integers, not str
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
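The TypeError means the module is indexing a list with a string key. A likely trigger (my assumption; the traceback alone does not prove it) is the change in ceph-volume 14.2.13 where the JSON output of `ceph-volume lvm batch --report` went from a dict to a bare list, so older ceph-ansible code that indexes it by key breaks. A minimal sketch of the pattern, with made-up report shapes:

```python
import json

# Illustrative shapes only, not exact ceph-volume output: older releases
# wrapped the plan in a dict, while 14.2.13+ emits a bare list.
old_report = json.loads('{"osds": [{"data": "/dev/sdk"}], "vgs": []}')
new_report = json.loads('[{"data": {"path": "/dev/sdk"}}]')

print(old_report["osds"])  # fine: dict indexed by a string key
print(new_report["osds"])  # raises the TypeError seen above
```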
**OSD Node Error:**
[2020-11-16 17:36:04,838][ceph_volume.process][INFO ] Running command: /bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdk
[2020-11-16 17:36:04,845][ceph_volume.process][INFO ] stdout NAME="sdk" KNAME="sdk" MAJ:MIN="8:160" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="INTEL SSDSC2KB01" SIZE="1.8T" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2020-11-16 17:36:04,846][ceph_volume.process][INFO ] Running command: /sbin/blkid -p /dev/sdk
[2020-11-16 17:36:04,857][ceph_volume.process][INFO ] Running command: /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdk
[2020-11-16 17:36:04,885][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/sdk".
[2020-11-16 17:36:04,886][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/sdk
[2020-11-16 17:36:04,911][ceph_volume.process][INFO ] stderr unable to read label for /dev/sdk: (2) No such file or directory
[2020-11-16 17:36:04,912][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/sdk
[2020-11-16 17:36:04,937][ceph_volume.process][INFO ] stderr unable to read label for /dev/sdk: (2) No such file or directory
[2020-11-16 17:36:04,937][ceph_volume.process][INFO ] Running command: /sbin/udevadm info --query=property /dev/sdk
[2020-11-16 17:36:04,943][ceph_volume.process][INFO ] stdout DEVLINKS=/dev/disk/by-id/wwn-0x55cd2e415222cdf8 /dev/disk/by-id/scsi-355cd2e415222cdf8 /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:11:0
[2020-11-16 17:36:04,944][ceph_volume.process][INFO ] stdout DEVNAME=/dev/sdk
[2020-11-16 17:36:04,944][ceph_volume.process][INFO ] stdout DEVPATH=/devices/pci0000:00/0000:00:02.2/0000:02:00.0/host0/target0:0:11/0:0:11:0/block/sdk
[2020-11-16 17:36:04,944][ceph_volume.process][INFO ] stdout DEVTYPE=disk
[2020-11-16 17:36:04,944][ceph_volume.process][INFO ] stdout ID_BUS=scsi
[2020-11-16 17:36:04,944][ceph_volume.process][INFO ] stdout ID_MODEL=INTEL_SSDSC2KB01
[2020-11-16 17:36:04,944][ceph_volume.process][INFO ] stdout ID_MODEL_ENC=INTEL\x20SSDSC2KB01
[2020-11-16 17:36:04,944][ceph_volume.process][INFO ] stdout ID_PATH=pci-0000:02:00.0-scsi-0:0:11:0
[2020-11-16 17:36:04,945][ceph_volume.process][INFO ] stdout ID_PATH_TAG=pci-0000_02_00_0-scsi-0_0_11_0
[2020-11-16 17:36:04,945][ceph_volume.process][INFO ] stdout ID_REVISION=0120
[2020-11-16 17:36:04,945][ceph_volume.process][INFO ] stdout ID_SCSI=1
[2020-11-16 17:36:04,945][ceph_volume.process][INFO ] stdout ID_SCSI_SERIAL=PHYF006002QA1P9DGN
[2020-11-16 17:36:04,945][ceph_volume.process][INFO ] stdout ID_SERIAL=355cd2e415222cdf8
[2020-11-16 17:36:04,945][ceph_volume.process][INFO ] stdout ID_SERIAL_SHORT=55cd2e415222cdf8
[2020-11-16 17:36:04,945][ceph_volume.process][INFO ] stdout ID_TYPE=disk
[2020-11-16 17:36:04,945][ceph_volume.process][INFO ] stdout ID_VENDOR=ATA
[2020-11-16 17:36:04,945][ceph_volume.process][INFO ] stdout ID_VENDOR_ENC=ATA\x20\x20\x20\x20\x20
[2020-11-16 17:36:04,946][ceph_volume.process][INFO ] stdout ID_WWN=0x55cd2e415222cdf8
[2020-11-16 17:36:04,946][ceph_volume.process][INFO ] stdout ID_WWN_WITH_EXTENSION=0x55cd2e415222cdf8
[2020-11-16 17:36:04,946][ceph_volume.process][INFO ] stdout MAJOR=8
[2020-11-16 17:36:04,946][ceph_volume.process][INFO ] stdout MINOR=160
[2020-11-16 17:36:04,946][ceph_volume.process][INFO ] stdout SUBSYSTEM=block
[2020-11-16 17:36:04,946][ceph_volume.process][INFO ] stdout TAGS=:systemd:
[2020-11-16 17:36:04,946][ceph_volume.process][INFO ] stdout USEC_INITIALIZED=15991838
[2020-11-16 17:36:04,947][ceph_volume.process][INFO ] Running command: /sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sdh -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-11-16 17:36:04,973][ceph_volume.process][INFO ] Running command: /bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SE
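Note that the stderr line `Failed to find physical volume "/dev/sdk"` is logged at INFO level by ceph-volume's device scan: pvs exits non-zero when a disk carries no LVM metadata, which is the expected state for a freshly reinstalled node, so it is not the fatal error by itself. A minimal sketch of that check (my own illustration, not ceph-volume's code):

```python
import subprocess

def is_lvm_pv(device):
    """Return True if `device` is already an LVM physical volume.

    Mirrors the pvs invocation in the log above: pvs prints
    'Failed to find physical volume' and exits non-zero when the
    disk carries no LVM metadata.
    """
    result = subprocess.run(
        ["pvs", "--noheadings", "--readonly", "-o", "vg_name", device],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0 and bool(result.stdout.strip())

print(is_lvm_pv("/dev/sdk"))  # False on a blank disk
```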
About this issue
- State: closed
- Created 4 years ago
- Comments: 24 (4 by maintainers)
@dsavineau Currently I am using Ubuntu 18.04 for my Ceph cluster (ceph version 14.2.8 (2d095e947a02261ce61424021bb43bd3022d35cb) nautilus (stable)). Do you suggest I move to a newer Ceph Nautilus release (like 14.2.12), since --no-auto did not work for version 14.2.8?
@sli720 it's not in the yml file. It's in ceph_volume.py, and I can see that the command is being passed.
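If the report-format change is indeed the cause, the command can pass while the parsing of its JSON output still fails; a version-tolerant module would need to accept both shapes before indexing. A hedged sketch (`normalize_batch_report` is a hypothetical helper, not ceph-ansible's actual code):

```python
def normalize_batch_report(report):
    """Accept both JSON shapes of `ceph-volume lvm batch --report`.

    Older ceph-volume wrapped the plan in a dict with an "osds" key;
    14.2.13+ returns a bare list of planned OSDs (shapes assumed, see
    the lead-in above).
    """
    if isinstance(report, dict):
        return report.get("osds", [])
    if isinstance(report, list):
        return report
    raise TypeError("unexpected report type: %r" % type(report))
```

If so, moving to a ceph-ansible release that understands the new format, or keeping ceph-volume below 14.2.13, should sidestep the module failure.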