rook: skipping device loop0: Failed to complete 'lsblk /dev/loop0': exit status 1.

Is this a bug report or feature request?

  • Bug Report

Deviation from expected behavior: PVCs are not being provisioned.

Expected behavior: PVCs are provisioned and bound.

How to reproduce it (minimal and precise):

Create a new RKE cluster on a clean Ubuntu 18.04 install.

nodes:
  # 3 nodes here
services:
  kubelet:
    extra_args:
      volume-plugin-dir: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
    extra_binds:
      - /usr/libexec/kubernetes/kubelet-plugins/volume/exec:/usr/libexec/kubernetes/kubelet-plugins/volume/exec
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
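The `extra_binds` entry above only helps if the flexvolume plugin directory is actually present (and populated by the Rook agent) on each node, so it can be worth checking it there. A minimal hedged sketch, using the path from the RKE config above; `check_dir` is a helper introduced here for illustration:

```shell
# Path from the RKE kubelet extra_args/extra_binds above.
PLUGIN_DIR=/usr/libexec/kubernetes/kubelet-plugins/volume/exec

# Hypothetical helper: report whether a directory exists on this node.
check_dir() { [ -d "$1" ] && echo "present: $1" || echo "missing: $1"; }

check_dir "$PLUGIN_DIR"
```

Run this on each node; if the directory is missing, the kubelet has nowhere to find the flex driver and volume mounts will fail.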

Apply demo configs

cd cluster/examples/kubernetes/ceph
kubectl create -f common.yaml
kubectl create -f operator.yaml
kubectl create -f cluster.yaml
cd ../
kubectl create -f storageclass.yaml
kubectl create -f mysql.yaml

Environment:

  • OS (e.g. from /etc/os-release): NAME="Ubuntu" VERSION="18.04.2 LTS (Bionic Beaver)"
  • Kernel (e.g. uname -a): Linux rook-1 4.15.0-47-generic #50-Ubuntu SMP Wed Mar 13 10:44:52 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  • Cloud provider or hardware configuration: Hetzner VMs
  • Rook version (use rook version inside of a Rook Pod): rook: v1.0.1
  • Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-30T21:39:16Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:19:22Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): rke
  • Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox):
[root@rook-2 /]# ceph status
  cluster:
    id:     cbef2087-a161-43d5-933b-b0161f69b62d
    health: HEALTH_WARN
            Reduced data availability: 100 pgs inactive
 
  services:
    mon: 3 daemons, quorum a,b,c (age 21h)
    mgr: a(active, since 12h)
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   1 pools, 100 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             100 unknown
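The `osd: 0 osds` line matches the `lsblk` error at the top of this issue: Rook probes each candidate device with `lsblk` and skips any device the probe cannot read, so no OSDs were ever created. A minimal sketch of that probe, assuming only that a device is skipped when `lsblk <dev>` exits nonzero (the `check_device` helper is illustrative, not Rook's actual code):

```shell
# Sketch of Rook's device probe: a device is skipped when `lsblk <dev>`
# exits nonzero, as /dev/loop0 did in the log at the top of this issue.
check_device() {
  if lsblk "$1" >/dev/null 2>&1; then
    echo "usable: $1"
  else
    echo "skipped: $1"
  fi
}

check_device /dev/loop0
```

Running this against each node's block devices shows quickly whether any device would survive the probe; if none do, the cluster has no raw disk to turn into an OSD, which is exactly the situation the fix below addresses.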

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Comments: 23 (8 by maintainers)

Most upvoted comments

@solohin I have resolved this issue; it is not a bug, just a change in the default storage type.

Fix: update the cluster YAML config (the diff here is against cluster-test.yaml; the same change applies to cluster.yaml):

$ git diff
diff --git a/cluster/examples/kubernetes/ceph/cluster-test.yaml b/cluster/examples/kubernetes/ceph/cluster-test.yaml
index 64e08cac..388d3b06 100644
--- a/cluster/examples/kubernetes/ceph/cluster-test.yaml
+++ b/cluster/examples/kubernetes/ceph/cluster-test.yaml
@@ -31,9 +31,10 @@ spec:
     useAllDevices: false
     deviceFilter:
     config:
-      databaseSizeMB: "1024" # this value can be removed for environments with normal sized disks (100 GB or larger)
-      journalSizeMB: "1024"  # this value can be removed for environments with normal sized disks (20 GB or larger)
-      osdsPerDevice: "1" # this value can be overridden at the node or device level
+      storeType: filestore
+      # databaseSizeMB: "1024" # this value can be removed for environments with normal sized disks (100 GB or larger)
+      # journalSizeMB: "1024"  # this value can be removed for environments with normal sized disks (20 GB or larger)
+      # osdsPerDevice: "1" # this value can be overridden at the node or device level
     directories:
     - path: /var/lib/rook
 #    nodes:

In Rook v1.0 the default storeType changed to bluestore, so I set it back to filestore in order to use a directory on the existing disk instead of a raw device.

Now the PV and PVC work like a charm:

$ kubectl --kubeconfig=/Users/xiaods/Desktop/rook/y.yaml get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                    STORAGECLASS      REASON    AGE
persistentvolume/pvc-a82bc45d-7a55-11e9-84ee-960000264859   20Gi       RWO            Delete           Bound     default/mysql-pv-claim   rook-ceph-block             9s

NAME                                   STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
persistentvolumeclaim/mysql-pv-claim   Bound     pvc-a82bc45d-7a55-11e9-84ee-960000264859   20Gi       RWO            rook-ceph-block   11s

@solohin Got it; you need to resolve the Rook setup without any extra disk.