lvm-localpv: fails to format

What steps did you take and what happened:

  Type     Reason            Age               From               Message
  ----     ------            ---               ----               -------
  Warning  FailedScheduling  6m2s              default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: selectedNode annotation reset for PVC "elasticsearch-elasticsearch-cdm-4qo1qel7-1"
  Normal   Scheduled         16s               default-scheduler  Successfully assigned openshift-logging/elasticsearch-cdm-4qo1qel7-1-6db94d4d88-lwtv7 to alp-dts-g-c01oco09
  Warning  FailedMount       5s (x5 over 13s)  kubelet            MountVolume.SetUp failed for volume "pvc-c9073859-fd54-4890-b444-b96e6f46dea1" : rpc error: code = Internal desc = failed to format and mount the volume error: mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t xfs -o defaults /dev/datavg/pvc-c9073859-fd54-4890-b444-b96e6f46dea1 /var/lib/kubelet/pods/0c34d38c-88f0-4a1c-bf6f-02e6b3ab05cd/volumes/kubernetes.io~csi/pvc-c9073859-fd54-4890-b444-b96e6f46dea1/mount
Output: mount: /var/lib/kubelet/pods/0c34d38c-88f0-4a1c-bf6f-02e6b3ab05cd/volumes/kubernetes.io~csi/pvc-c9073859-fd54-4890-b444-b96e6f46dea1/mount: wrong fs type, bad option, bad superblock on /dev/mapper/datavg-pvc--c9073859--fd54--4890--b444--b96e6f46dea1, missing codepage or helper program, or other error.

because:

 mkfs.xfs /dev/datavg/pvc-c9073859-fd54-4890-b444-b96e6f46dea1
mkfs.xfs: /dev/datavg/pvc-c9073859-fd54-4890-b444-b96e6f46dea1 appears to contain an existing filesystem (xfs).
mkfs.xfs: Use the -f option to force overwrite.

Maybe it should force by default, or some notes should be added to the docs.
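
For anyone hitting the same error, a rough manual workaround (just a sketch, assuming the device path from the log above and that the volume holds no data worth keeping) is to inspect and then wipe the stale signature by hand before the mount is retried:

# Inspect the LV for leftover filesystem signatures (read-only checks).
blkid /dev/datavg/pvc-c9073859-fd54-4890-b444-b96e6f46dea1
wipefs --no-act /dev/datavg/pvc-c9073859-fd54-4890-b444-b96e6f46dea1

# WARNING: this erases any data on the LV; only do it on a fresh/empty volume.
wipefs --all /dev/datavg/pvc-c9073859-fd54-4890-b444-b96e6f46dea1

# Alternatively, format directly with -f to overwrite the old superblock.
mkfs.xfs -f /dev/datavg/pvc-c9073859-fd54-4890-b444-b96e6f46dea1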

What did you expect to happen: formatting should happen

Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]

Environment:

  • LVM Driver version
  • Kubernetes version (use kubectl version):
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):

About this issue

  • Original URL
  • State: open
  • Created 3 years ago
  • Comments: 23 (2 by maintainers)

Most upvoted comments

This behavior is due to compatibility issues between the container and the host operating system. The openebs/lvm-localpv 0.6.0 release already erases the fs signatures on the LVM volume before creating the volume; the fix was merged via #88.

Hmm, then how come I experience this problem with 0.8.0? BTW, when you format, do you pass the -f (force) option?

Yes, we have been passing the -f (force) option since the 0.6.0 release.

Then it’s a bit surprising to meet this in the current release, for two reasons:

  1. If volumes are wiped at creation, the superblock should already be gone and the bug should not surface.
  2. If formatting is forced with -f, mkfs should ignore the existing filesystem and proceed anyway.

I’ll try to provoke this in a third cluster when I have time.
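
To illustrate point 2, here is a quick sketch on a throwaway loop device (the file name and size are made up for the example): a forced mkfs.xfs goes through even when an old filesystem signature is present, so if -f is really passed, the error above should not occur:

# Create a scratch loop device (purely for illustration; run as root).
truncate -s 1G /tmp/scratch.img
LOOP=$(losetup --find --show /tmp/scratch.img)

# The first format leaves an xfs signature behind.
mkfs.xfs "$LOOP"

# A second plain format refuses, which is exactly the driver's error above...
mkfs.xfs "$LOOP"        # mkfs.xfs: ... appears to contain an existing filesystem (xfs).

# ...while a forced format overwrites the old superblock and succeeds.
mkfs.xfs -f "$LOOP"

# Clean up the scratch device.
losetup -d "$LOOP" && rm /tmp/scratch.img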

This behavior is due to compatibility issues between the container and the host operating system. The openebs/lvm-localpv 0.6.0 release already erases the fs signatures on the LVM volume before creating the volume; the fix was merged via #88. This issue can be reproduced by performing the following steps:

  • Create a volume (PVC) with ext4 fs and launch a pod.
  • Delete the pod and the volume (PVC).
  • Create a volume with xfs fs and launch a pod; the issue is then reproducible. Note: if the volume is created again with the same fs as the previous one, the application is able to access it. (A rough sketch of these steps follows below.)
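
A minimal sketch of that sequence, assuming an lvm-localpv StorageClass with the usual storage/volgroup/fsType parameters; the class, PVC, and pod names and sizes below are placeholders, not taken from the report:

# Step 0: two StorageClasses that differ only in fsType (all names are placeholders).
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata: {name: lvm-ext4}
provisioner: local.csi.openebs.io
parameters: {storage: "lvm", volgroup: "lvmvg", fsType: "ext4"}
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata: {name: lvm-xfs}
provisioner: local.csi.openebs.io
parameters: {storage: "lvm", volgroup: "lvmvg", fsType: "xfs"}
EOF

# Step 1: PVC from the ext4 class plus a pod that mounts it.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata: {name: repro-pvc}
spec:
  storageClassName: lvm-ext4
  accessModes: [ReadWriteOnce]
  resources: {requests: {storage: 1Gi}}
---
apiVersion: v1
kind: Pod
metadata: {name: repro-pod}
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts: [{name: data, mountPath: /data}]
  volumes:
  - name: data
    persistentVolumeClaim: {claimName: repro-pvc}
EOF

# Step 2: delete the pod and the PVC; the LV goes away but its extents may be reused.
kubectl delete pod repro-pod && kubectl delete pvc repro-pvc

# Step 3: recreate the same PVC and pod, this time with storageClassName: lvm-xfs.
# If the new LV lands on the same extents, the mount fails with the
# "wrong fs type, bad option, bad superblock" error from the report above.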

Note that the safest approach is to do the wipe at creation time too.

@davidkarlsen that was the planned item for LVM LocalPV. We already wipe the lvm partition when we delete the volume. From the error, it looks like you already had some partition before and the new volume landed at the same offset. We need to clear the fs at creation time as well; we had planned this and somehow missed implementing it. Will take care of adding this enhancement.
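
In shell terms, the creation-time cleanup described above would amount to something like the following sketch (the driver itself is Go and may implement this differently; the VG and LV names are placeholders):

# Hypothetical creation-time cleanup, sketched in shell.
VG=lvmvg
LV=pvc-example

# Create the logical volume as the driver would.
lvcreate --name "$LV" --size 1G "$VG"

# Clear any stale filesystem signatures left by a previous volume that happened
# to occupy the same extents, so the later mkfs never sees an old superblock.
wipefs --all "/dev/$VG/$LV"

# lvcreate can also handle this itself at creation time via -W/--wipesignatures y.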