mayastor: MSP operator cannot find the file-backed store (aio) or block device when creating an MSP

Describe the bug The MSP operator cannot find a file-backed store or a formatted disk on the node. Using disks: ["aio:///tmp/disk1.img"] and/or disks: ["/dev/sdz"] results in the warning message "The block device(s): /dev/sdz can not be found", and the MSP remains stuck in the Creating state.

To Reproduce

cat <<EOF | kubectl create -f -
apiVersion: "openebs.io/v1alpha1"
kind: MayastorPool
metadata:
  name: filepool-2
  namespace: mayastor
spec:
  node: endor
  disks: ["aio:///tmp/disk1.img"]
EOF
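
The block-device case from the description fails the same way; a minimal variant of the above (the pool name here is arbitrary; /dev/sdz as in the description):

cat <<EOF | kubectl create -f -
apiVersion: "openebs.io/v1alpha1"
kind: MayastorPool
metadata:
  name: blockpool-1   # hypothetical name, for illustration
  namespace: mayastor
spec:
  node: endor
  disks: ["/dev/sdz"]
EOF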

Expected behavior The MSP is created using either a file-backed store or a block device.

OS info (please complete the following information):

  • Distro: Fedora 33
  • Kernel version: 5.12.9-200.fc33.x86_64
  • MayaStor revision: v1.0.0

Additional context

k describe mayastorpool filepool-2 -n mayastor
Name:         filepool-2
Namespace:    mayastor
Labels:       <none>
Annotations:  <none>
API Version:  openebs.io/v1alpha1
Kind:         MayastorPool
Metadata:
  Creation Timestamp:  2022-02-07T22:47:15Z
  Finalizers:
    io.mayastor.pool/cleanup
  Generation:  1
  Managed Fields:
    API Version:  openebs.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:disks:
        f:node:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2022-02-07T22:47:15Z
    API Version:  openebs.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:available:
        f:capacity:
        f:state:
        f:used:
    Manager:      Mayastor pool operator
    Operation:    Update
    Time:         2022-02-07T22:47:16Z
    API Version:  openebs.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"io.mayastor.pool/cleanup":
    Manager:         unknown
    Operation:       Update
    Time:            2022-02-07T22:47:16Z
  Resource Version:  4045401
  UID:               abed04b8-302f-492f-9465-2a47974870f3
Spec:
  Disks:
    aio:///tmp/disk1.img
  Node:  endor
Status:
  Available:  0
  Capacity:   0
  State:      Creating
  Used:       0
Events:
  Type  Reason   Age   From          Message
  ----  ------   ----  ----          -------
  Warn  Missing  72s   msp-operator  The block device(s): aio:///tmp/disk1.img can not be found
  Warn  Missing  71s   msp-operator  The block device(s): aio:///tmp/disk1.img can not be found


Most upvoted comments

Thank you for looking into it @tiagolobocastro!

I’ve just put together a setup that uses a loopback device, as per @hickersonj’s method, and in that case the storage pools immediately came online.

For anyone trying this too, the process looks something like this:

# Create file
dd if=/dev/zero of=/var/openebs/mayastor/images/disk1.img bs=4M count={{ (disk_gb / 4) * 1024 }}

# Create partition scheme (g, n, accept defaults, w)
sudo fdisk /var/openebs/mayastor/images/disk1.img

# Setup the loop device (and set this to run on startup somehow, TBA)
sudo losetup /dev/loop0 /var/openebs/mayastor/images/disk1.img
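
For the "run on startup" part, one option (not from the thread; the unit name and paths are just examples) is a oneshot systemd unit that attaches the loop device at boot:

# /etc/systemd/system/mayastor-loop.service -- unit name is arbitrary
[Unit]
Description=Attach loop device for the Mayastor pool image
After=local-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/losetup /dev/loop0 /var/openebs/mayastor/images/disk1.img

[Install]
WantedBy=multi-user.target

Enable it with: sudo systemctl enable --now mayastor-loop.service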

Then your mayastor pools look like this:

apiVersion: openebs.io/v1alpha1
kind: MayastorPool
metadata:
  name: worker1-pool1
  namespace: mayastor
spec:
  disks: ["/dev/loop0"]
  node: worker1
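To confirm the pool actually came online (pool name as above):

kubectl -n mayastor get mayastorpool worker1-pool1
# State should move from Creating to Online once the loop device is visible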

Adding the suggestion from @hickersonj and @adamcharnock to the v1.0.1 patch release notes as a workaround for this known issue.

The validation problem that has been identified in the operator will be addressed in a release after v1.0.1.

@tiagolobocastro I don’t think so. The ansible playbook was already using dd, but maybe I’ll speed things up and change it to fallocate now that I have things working.

Update: Use of fallocate now tested, code above updated.

I’m back! After much doubt and cursing, I can also confirm that this is now working.

For anyone else coming across this: for a while I still had exactly the same problem, and when I checked the mayastor-io-engine logs I could see reports of "ENOSPC: No space left on device", which surfaced against the DiskPool as the "... can not be found" error.
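
To check for the same symptom, something like this works (the pod name is a placeholder for whichever io-engine/mayastor pod runs on the affected node):

kubectl -n mayastor get pods -o wide                 # find the pod on the affected node
kubectl -n mayastor logs <io-engine-pod> | grep -i enospc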

In my case I was messing up how I was fdisking the files. I’ll leave some ansible tasks here in case they help anyone:

Edited: Now uses fallocate as per @tiagolobocastro’s suggestion below

- name: Create mayastor disk image
  tags: [ 'k8s_app_mayastor' ]
  notify: partition-disk
  command:
    creates: "{{ disk_file }}"
    # We add an extra 4M at the end just to keep everything happy
    # (otherwise we seem to get issues when fdisking the file)
    cmd: "fallocate -l {{ (size_gb|int * 1024) + 4 }}MiB {{ disk_file }}"
    # EDITED. Command was previously this:
    # cmd: "dd if=/dev/zero of={{ disk_file }} bs=4M count={{ ((size_gb|int)/4 * 1024)|round(0, 'ceil')|int + 1 }}"

# Handlers

- name: Setup disk partition
  listen: partition-disk
  tags: [ 'k8s_app_mayastor' ]
  vars:
    sector_size: 512
    size: "{{ (size_gb|int) * 1024 * 1024 * 1024 / sector_size }}"
  command:
    cmd: sfdisk {{ disk_file }}
    stdin: |
      label: gpt
      label-id: 10000000-0000-4000-{{ "%04d" | format(host_number|int) }}-{{ "%012d" | format(disk_number|int) }}
      device: {{ disk_file }}
      unit: sectors
      first-lba: 2048
      last-lba: {{ size|int - 1 }}
      
      {{ disk_file }}1 : start=        2048, size=   {{ size|int - 2048 }}
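
For reference, a non-Ansible sketch of the same two steps, worked through for a 10 GiB image (my own rendering of the tasks above; paths are examples):

# 10 GiB plus the extra 4 MiB of headroom mentioned above: (10 * 1024) + 4 = 10244 MiB
sudo fallocate -l 10244MiB /var/openebs/mayastor/images/disk1.img

# GPT label; 10 GiB = 10 * 1024^3 / 512 = 20971520 sectors of 512 bytes,
# so last-lba is 20971519 and the partition spans 20971520 - 2048 sectors
sudo sfdisk /var/openebs/mayastor/images/disk1.img <<'EOF'
label: gpt
unit: sectors
first-lba: 2048
last-lba: 20971519

/var/openebs/mayastor/images/disk1.img1 : start=2048, size=20969472
EOF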

Yes this is working, example:

> ssh 10.0.0.114 sudo fallocate -l 200MiB /var/local/io-engine/disk.img
> NODE=ksnode-2 DISK=aio:///var/local/io-engine/disk.img NAMESPACE=mayastor envsubst < ~/git/kube-helper/storage-pool.yaml | kubectl create -f -
> kubectl -n mayastor get dsp
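
The storage-pool.yaml template itself isn’t shown in the thread; given the variables passed to envsubst, it presumably looks something like the manifests above with placeholders (a guess on my part; the "get dsp" call hints the kind may be DiskPool in this build):

apiVersion: "openebs.io/v1alpha1"
kind: MayastorPool   # or DiskPool, depending on the release
metadata:
  name: pool-on-${NODE}   # hypothetical naming scheme
  namespace: ${NAMESPACE}
spec:
  node: ${NODE}
  disks: ["${DISK}"]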

Note: There’s currently a bug where we set the bdev UUID to 0 for these files, though it shouldn’t have any real impact, as the pool uuid itself is auto-generated and not 0; it’s more of an annoyance if you use the cli to list… I’ll get this fixed too.