rook: Timeout expired waiting for volumes to attach/mount for pod (rook plugin not found)

k8s version: v1.9, env: VirtualBox, OS: CoreOS

It is a single-node Kubernetes cluster. I followed the steps below:

  1. Followed https://rook.io/docs/rook/v0.5/k8s-pre-reqs.html and updated the kubelet with the following (a full drop-in sketch follows this list):
Environment="RKT_OPTS=--volume modprobe,kind=host,source=/usr/sbin/modprobe \
  --mount volume=modprobe,target=/usr/sbin/modprobe \
  --volume lib-modules,kind=host,source=/lib/modules \
  --mount volume=lib-modules,target=/lib/modules \
  --uuid-file-save=/var/run/kubelet-pod.uuid"
  2. Installed the Ceph utility:
 rbd -v
ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)
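
For reference, here is a minimal sketch of how that RKT_OPTS environment is typically applied on Container Linux, assuming the kubelet runs through kubelet-wrapper as a systemd unit; the drop-in path is illustrative, not taken from this issue:

# /etc/systemd/system/kubelet.service.d/10-rook.conf (hypothetical drop-in path)
[Service]
Environment="RKT_OPTS=--volume modprobe,kind=host,source=/usr/sbin/modprobe \
  --mount volume=modprobe,target=/usr/sbin/modprobe \
  --volume lib-modules,kind=host,source=/lib/modules \
  --mount volume=lib-modules,target=/lib/modules \
  --uuid-file-save=/var/run/kubelet-pod.uuid"

Then reload and restart the kubelet so the new mounts take effect:

sudo systemctl daemon-reload
sudo systemctl restart kubelet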

All Rook pods are running, but the MySQL pod fails with the error 'timeout expired waiting for volumes to attach/mount for pod'.

➜  kubectl get pod -n rook-system
NAME                             READY     STATUS    RESTARTS   AGE
rook-agent-rqw6j                 1/1       Running   0          21m
rook-operator-5457d48c94-bhh2z   1/1       Running   0          22m
➜   kubectl get pod -n rook
NAME                             READY     STATUS    RESTARTS   AGE
rook-api-848df956bf-fhmg2        1/1       Running   0          20m
rook-ceph-mgr0-cfccfd6b8-8brxz   1/1       Running   0          20m
rook-ceph-mon0-xdd77             1/1       Running   0          21m
rook-ceph-mon1-gntgh             1/1       Running   0          20m
rook-ceph-mon2-srmg8             1/1       Running   0          20m
rook-ceph-osd-84wmn              1/1       Running   0          20m
➜   kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                    STORAGECLASS   REASON    AGE
pvc-6a4c5c2a-127d-11e8-a846-080027b424ef   20Gi       RWO           Delete          Bound     default/mysql-pv-claim   rook-block               15m
➜  kubectl get pvc
NAME             STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
mysql-pv-claim   Bound     pvc-6a4c5c2a-127d-11e8-a846-080027b424ef   20Gi       RWO           rook-block     15m
➜  kubectl get pods
NAME                               READY     STATUS              RESTARTS   AGE
wordpress-mysql-557ffc4f69-8zxsq   0/1       ContainerCreating   0          16m

Error when I describe the pod: FailedMount Unable to mount volumes for pod "wordpress-mysql-557ffc4f69-8zxsq_default(6a932df1-127d-11e8-a846-080027b424ef)": timeout expired waiting for volumes to attach/mount for pod "default"/"wordpress-mysql-557ffc4f69-8zxsq". list of unattached/unmounted volumes=[mysql-persistent-storage]
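
For context, the volume named in that error is wired to the PVC roughly as in the standard Rook WordPress/MySQL example (a sketch of that example's deployment spec, not copied from this cluster):

      containers:
      - name: mysql
        image: mysql:5.6
        volumeMounts:
        - name: mysql-persistent-storage   # the volume the kubelet cannot attach
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim        # the PVC shown as Bound above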

I also added the following option to rook-operator.yaml:

        - name: FLEXVOLUME_DIR_PATH
          value: "/var/lib/kubelet/volumeplugins"
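
In context, that env var sits in the operator container's env list, roughly like this (a sketch assuming the stock rook-operator deployment layout; only the FLEXVOLUME_DIR_PATH entry is the addition):

    spec:
      containers:
      - name: rook-operator
        image: rook/rook:v0.7.0   # illustrative tag
        env:
        # tell the operator where the kubelet expects flexvolume plugins,
        # so the agent installs the driver there instead of the default path
        - name: FLEXVOLUME_DIR_PATH
          value: "/var/lib/kubelet/volumeplugins"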

Could you please help with this? Let me know if you need further details. I checked similar issues, but their solutions did not work for me.

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Reactions: 1
  • Comments: 34 (18 by maintainers)

Most upvoted comments

Hi everyone,

I've finally figured out why Rook was not working on my setup.

Using the default deploy mode, kubespray reconfigures the volume plugin location to /var/lib/kubelet/volume-plugins.

However, the Rook agent still deploys the flexvolume plugin in the default location: /usr/libexec/kubernetes/kubelet-plugins/volume/exec.

Thus, when trying to mount the volume in the pod, the plugin is not reachable.

I tried my Rook manifests on minikube and on a kubeadm-based kubespray deployment (which uses the default location), and they work as expected.

So it seems that Rook does not support k8s clusters deployed with a custom volume plugin directory.
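
One way to confirm the mismatch (a hedged sketch; --volume-plugin-dir is the standard kubelet flag, but your paths may differ):

# check which plugin directory the kubelet was started with
ps aux | grep kubelet | tr ' ' '\n' | grep -- --volume-plugin-dir

# compare against where the Rook agent installed the flexvolume driver
ls /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
ls /var/lib/kubelet/volume-plugins/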

Thanks @jbw976 for pointing to the kubelet logs; it was the key to my debugging.

@aolwas There is documentation available for Rook to change the flexvolume plugin path here: https://rook.io/docs/rook/master/flexvolume.html (see the FLEXVOLUME_DIR_PATH env var for the rook-operator). After adding the env var to the operator, you need to delete the rook-agent DaemonSet and restart the operator so that it recreates the rook-agent DaemonSet.

Let us know if that fixes your issue with the flexvolume plugin path in the rook-agent.
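
That procedure boils down to something like the following (a sketch; the namespace and names assume the stock rook-system manifests shown earlier in this thread):

# delete the agent DaemonSet so the operator can recreate it with the new path
kubectl -n rook-system delete daemonset rook-agent

# restart the operator by deleting its pod; its Deployment recreates it
kubectl -n rook-system delete pod -l app=rook-operator

# verify the new agent picked up the custom flexvolume dir
kubectl -n rook-system logs -l app=rook-operator | grep -i flexvolume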

Let’s consider backporting #2068 to 0.8 to resolve this issue

Same error here

  • k8s 1.9.3 deployed with kubespray on Openstack cluster
  • Calico network
  • rook operator v0.6.2 deployed with helm
  • rook cluster from the docs' example (bluestore in /var/lib/rook on every node)
  • pool with 1 replica

PVC and PV creation work fine, but I get a timeout when mounting the PVC in a pod.
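
For reference, the pool and storage class from those doc examples look roughly like this (a sketch of the v0.6-era manifests, consistent with the rook.io/block provisioner and replicapool pool visible in the logs below; not copied from this cluster):

apiVersion: rook.io/v1alpha1
kind: Pool
metadata:
  name: replicapool
  namespace: rook
spec:
  replicated:
    size: 1   # single replica, as described above
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-block
provisioner: rook.io/block
parameters:
  pool: replicapool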

We also ran into the same issue on our 1.11.2 test cluster. Same symptoms as described in this issue, and restarting the kubelet on the affected node helped.
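
The restart itself is just the following (assuming a systemd-managed kubelet on the affected node):

# on the node where the pod is stuck in ContainerCreating
sudo systemctl restart kubelet

# then watch the stuck pod recover
kubectl get pod -w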

@laevilgenius I noticed you fixed https://github.com/kubernetes/kubernetes/issues/60694 locally. Can you confirm this is indeed the root cause of this issue?

Has the common issues entry on this topic been useful for anyone yet? There are a few parts of it that may give some guidance: https://rook.io/docs/rook/master/common-problems.html#pod-using-rook-storage-is-not-running

The rook-agent logs, and sometimes the kubelet logs for the node the pod is scheduled on, can be especially useful here too.
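
Gathering those logs looks roughly like this (a sketch; the agent pod name comes from kubectl get pod, and journalctl assumes a systemd kubelet):

# rook-agent runs as a DaemonSet, one pod per node; pick the one on the affected node
kubectl -n rook-system get pod -o wide | grep rook-agent
kubectl -n rook-system logs rook-agent-rqw6j

# kubelet logs on the node the pod was scheduled to
journalctl -u kubelet --since "10 min ago" | grep -i -e flexvolume -e rook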

Here's an example of the exact same issue, same setup as @aolwas:

K8s v1.9.3, Calico, pool with replica size 1; made sure to delete /var/lib/rook after numerous attempts to reinstall.

uname -r
4.13.0-36-generic
Rook version: 
helm install --namespace rook --name rook --version v0.7.0-10.g3bcee98 .
kubectl describe po/wordpress-55cbcdd99b-k5dzz
Name:           wordpress-55cbcdd99b-k5dzz
Namespace:      default
Node:           srv-eu1/10.0.0.11
Start Time:     Thu, 08 Mar 2018 19:46:00 +0100
Labels:         app=wordpress
                pod-template-hash=1176788556
                tier=frontend
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  ReplicaSet/wordpress-55cbcdd99b
Containers:
  wordpress:
    Container ID:
    Image:          wordpress:4.6.1-apache
    Image ID:
    Port:           80/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      WORDPRESS_DB_HOST:      wordpress-mysql
      WORDPRESS_DB_PASSWORD:  changeme
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-pw2xh (ro)
      /var/www/html from wordpress-persistent-storage (rw)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  wordpress-persistent-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  wp-pv-claim
    ReadOnly:   false
  default-token-pw2xh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-pw2xh
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     <none>
Events:
  Type     Reason                 Age               From               Message
  ----     ------                 ----              ----               -------
  Warning  FailedScheduling       4m (x2 over 4m)   default-scheduler  pod has unbound PersistentVolumeClaims (repeated 2 times)
  Normal   Scheduled              4m                default-scheduler  Successfully assigned wordpress-55cbcdd99b-k5dzz to srv-eu1
  Normal   SuccessfulMountVolume  4m                kubelet, srv-eu1   MountVolume.SetUp succeeded for volume "default-token-pw2xh"
  Warning  FailedMount            17s (x2 over 2m)  kubelet, srv-eu1   Unable to mount volumes for pod "wordpress-55cbcdd99b-k5dzz_default(f1d2edf1-2300-11e8-9b2c-901b0e95a1e0)": timeout expired waiting for volumes to attach/mount for pod "default"/"wordpress-55cbcdd99b-k5dzz". list of unattached/unmounted volumes=[wordpress-persistent-storage]
kubectl describe pvc/wp-pv-claim
Name:          wp-pv-claim
Namespace:     default
StorageClass:  rook-block
Status:        Bound
Volume:        pvc-f1b7e7a3-2300-11e8-9b2c-901b0e95a1e0
Labels:        app=wordpress
Annotations:   control-plane.alpha.kubernetes.io/leader={"holderIdentity":"8c350380-2300-11e8-8359-c27a04ee987b","leaseDurationSeconds":15,"acquireTime":"2018-03-08T18:45:59Z","renewTime":"2018-03-08T18:46:01Z","lea...
               pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-provisioner=rook.io/block
Finalizers:    []
Capacity:      20Gi
Access Modes:  RWO
Events:
  Type    Reason                 Age              From                                                                               Message
  ----    ------                 ----             ----                                                                               -------
  Normal  ExternalProvisioning   5m (x2 over 5m)  persistentvolume-controller                                                        waiting for a volume to be created, either by external provisioner "rook.io/block" or manually created by system administrator
  Normal  Provisioning           5m               rook.io/block rook-operator-7d886c8df7-p4x6g 8c350380-2300-11e8-8359-c27a04ee987b  External provisioner is provisioning volume for claim "default/wp-pv-claim"
  Normal  ProvisioningSucceeded  5m               rook.io/block rook-operator-7d886c8df7-p4x6g 8c350380-2300-11e8-8359-c27a04ee987b  Successfully provisioned volume pvc-f1b7e7a3-2300-11e8-9b2c-901b0e95a1e0
kubectl describe pv/pvc-f1b7e7a3-2300-11e8-9b2c-901b0e95a1e0
Name:            pvc-f1b7e7a3-2300-11e8-9b2c-901b0e95a1e0
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by=rook.io/block
StorageClass:    rook-block
Status:          Bound
Claim:           default/wp-pv-claim
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        20Gi
Message:
Source:
    Type:    FlexVolume (a generic volume resource that is provisioned/attached using an exec based plugin)
    Driver:      Options:  %v

    FSType:                                                                                                                                  rook.io/rook
    SecretRef:
    ReadOnly:                                                                                                                                <nil>
%!(EXTRA bool=false, map[string]string=map[image:pvc-f1b7e7a3-2300-11e8-9b2c-901b0e95a1e0 pool:replicapool storageClass:rook-block])Events:  <none>
kubectl -n rook exec rook-tools -- ceph -s
  cluster:
    id:     727ef60b-8a8a-47e4-9b6a-5a775f6596fb
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum rook-ceph-mon1,rook-ceph-mon2,rook-ceph-mon0
    mgr: rook-ceph-mgr0(active)
    osd: 2 osds: 2 up, 2 in

  data:
    pools:   1 pools, 100 pgs
    objects: 6 objects, 51 bytes
    usage:   43013 MB used, 850 GB / 892 GB avail
    pgs:     100 active+clean
kubectl logs -f rook-operator-7d886c8df7-p4x6g -n rook
2018-03-08 18:43:05.898973 I | rook: starting Rook v0.7.0-10.g3bcee98 with arguments '/usr/local/bin/rook operator'
2018-03-08 18:43:05.899030 I | rook: flag values: --help=false, --log-level=INFO, --mon-healthcheck-interval=45s, --mon-out-timeout=5m0s
2018-03-08 18:43:05.899830 I | rook: starting operator
2018-03-08 18:43:05.906079 I | op-k8sutil: returning version v1.9.3 instead of v1.9.3+coreos.0
2018-03-08 18:43:09.215785 I | op-k8sutil: creating cluster role rook-agent
2018-03-08 18:43:09.333562 I | op-agent: getting flexvolume dir path from FLEXVOLUME_DIR_PATH env var
2018-03-08 18:43:09.333577 I | op-agent: flexvolume dir path env var FLEXVOLUME_DIR_PATH is not provided. Defaulting to: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
2018-03-08 18:43:09.333582 I | op-agent: discovered flexvolume dir path from source default. value: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
2018-03-08 18:43:09.405113 I | op-agent: rook-agent daemonset started
2018-03-08 18:43:09.406617 I | operator: rook-provisioner started
2018-03-08 18:43:09.406630 I | op-cluster: start watching clusters in all namespaces
2018-03-08 18:43:51.246731 I | op-cluster: starting cluster in namespace rook
2018-03-08 18:43:57.298451 I | op-mon: start running mons
2018-03-08 18:43:57.300475 I | exec: Running command: ceph-authtool --create-keyring /var/lib/rook/rook/mon.keyring --gen-key -n mon. --cap mon 'allow *'
2018-03-08 18:43:57.498425 I | exec: Running command: ceph-authtool --create-keyring /var/lib/rook/rook/client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mgr 'allow *' --cap mds 'allow'
2018-03-08 18:43:57.703699 I | op-mon: creating mon secrets for a new cluster
2018-03-08 18:43:57.800075 I | op-mon: saved mon endpoints to config map map[data: maxMonId:-1 mapping:{"node":{},"port":{}}]
2018-03-08 18:43:57.800343 I | cephmon: writing config file /var/lib/rook/rook/rook.config
2018-03-08 18:43:57.800442 I | cephmon: generated admin config in /var/lib/rook/rook
2018-03-08 18:43:57.899434 I | op-mon: Found 2 running nodes without mons
2018-03-08 18:43:57.996917 I | op-mon: mon rook-ceph-mon0 running at 10.233.47.49:6790
2018-03-08 18:43:58.006831 I | op-mon: saved mon endpoints to config map map[maxMonId:2 mapping:{"node":{"rook-ceph-mon0":{"Name":"srv-eu1","Hostname":"srv-eu1","Address":"10.0.0.11"},"rook-ceph-mon1":{"Name":"srv-eu2","Hostname":"srv-eu2","Address":"10.0.0.12"},"rook-ceph-mon2":{"Name":"srv-eu1","Hostname":"srv-eu1","Address":"10.0.0.11"}},"port":{}} data:rook-ceph-mon0=10.233.47.49:6790]
2018-03-08 18:43:58.007074 I | cephmon: writing config file /var/lib/rook/rook/rook.config
2018-03-08 18:43:58.007160 I | cephmon: generated admin config in /var/lib/rook/rook
2018-03-08 18:43:58.010185 I | cephmon: writing config file /var/lib/rook/rook/rook.config
2018-03-08 18:43:58.010257 I | cephmon: generated admin config in /var/lib/rook/rook
2018-03-08 18:43:58.098522 I | op-mon: mons created: 1
2018-03-08 18:43:58.098557 I | op-mon: waiting for mon quorum
2018-03-08 18:43:58.098757 I | exec: Running command: ceph mon_status --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/761476367
2018-03-08 18:44:09.899727 I | op-mon: Ceph monitors formed quorum
2018-03-08 18:44:09.923979 I | op-mon: mon rook-ceph-mon0 running at 10.233.47.49:6790
2018-03-08 18:44:09.932781 I | op-mon: mon rook-ceph-mon1 running at 10.233.29.104:6790
2018-03-08 18:44:09.939251 I | op-mon: saved mon endpoints to config map map[data:rook-ceph-mon0=10.233.47.49:6790,rook-ceph-mon1=10.233.29.104:6790 maxMonId:2 mapping:{"node":{"rook-ceph-mon0":{"Name":"srv-eu1","Hostname":"srv-eu1","Address":"10.0.0.11"},"rook-ceph-mon1":{"Name":"srv-eu2","Hostname":"srv-eu2","Address":"10.0.0.12"},"rook-ceph-mon2":{"Name":"srv-eu1","Hostname":"srv-eu1","Address":"10.0.0.11"}},"port":{}}]
2018-03-08 18:44:09.939478 I | cephmon: writing config file /var/lib/rook/rook/rook.config
2018-03-08 18:44:09.939546 I | cephmon: generated admin config in /var/lib/rook/rook
2018-03-08 18:44:09.939976 I | cephmon: writing config file /var/lib/rook/rook/rook.config
2018-03-08 18:44:09.940036 I | cephmon: generated admin config in /var/lib/rook/rook
2018-03-08 18:44:09.943597 I | op-mon: replicaset rook-ceph-mon0 already exists
2018-03-08 18:44:09.945865 I | op-mon: mons created: 2
2018-03-08 18:44:09.945880 I | op-mon: waiting for mon quorum
2018-03-08 18:44:09.945974 I | exec: Running command: ceph mon_status --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/075027746
2018-03-08 18:44:11.607707 W | op-mon: failed to find initial monitor rook-ceph-mon1 in mon map
2018-03-08 18:44:16.607881 I | exec: Running command: ceph mon_status --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/643249689
2018-03-08 18:44:22.600092 I | op-mon: Ceph monitors formed quorum
2018-03-08 18:44:22.709809 I | op-mon: mon rook-ceph-mon0 running at 10.233.47.49:6790
2018-03-08 18:44:22.724581 I | op-mon: mon rook-ceph-mon1 running at 10.233.29.104:6790
2018-03-08 18:44:22.732459 I | op-mon: mon rook-ceph-mon2 running at 10.233.45.89:6790
2018-03-08 18:44:22.741383 I | op-mon: saved mon endpoints to config map map[data:rook-ceph-mon0=10.233.47.49:6790,rook-ceph-mon1=10.233.29.104:6790,rook-ceph-mon2=10.233.45.89:6790 maxMonId:2 mapping:{"node":{"rook-ceph-mon0":{"Name":"srv-eu1","Hostname":"srv-eu1","Address":"10.0.0.11"},"rook-ceph-mon1":{"Name":"srv-eu2","Hostname":"srv-eu2","Address":"10.0.0.12"},"rook-ceph-mon2":{"Name":"srv-eu1","Hostname":"srv-eu1","Address":"10.0.0.11"}},"port":{}}]
2018-03-08 18:44:22.741631 I | cephmon: writing config file /var/lib/rook/rook/rook.config
2018-03-08 18:44:22.741723 I | cephmon: generated admin config in /var/lib/rook/rook
2018-03-08 18:44:22.742964 I | cephmon: writing config file /var/lib/rook/rook/rook.config
2018-03-08 18:44:22.743031 I | cephmon: generated admin config in /var/lib/rook/rook
2018-03-08 18:44:22.747207 I | op-mon: replicaset rook-ceph-mon0 already exists
2018-03-08 18:44:22.753191 I | op-mon: replicaset rook-ceph-mon1 already exists
2018-03-08 18:44:22.755555 I | op-mon: mons created: 3
2018-03-08 18:44:22.755567 I | op-mon: waiting for mon quorum
2018-03-08 18:44:22.755659 I | exec: Running command: ceph mon_status --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/128899748
2018-03-08 18:44:24.206563 W | op-mon: initial monitor rook-ceph-mon0 is not in quorum list
2018-03-08 18:44:29.206959 I | exec: Running command: ceph mon_status --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/149706163
2018-03-08 18:44:30.806884 W | op-mon: initial monitor rook-ceph-mon2 is not in quorum list
2018-03-08 18:44:35.807028 I | exec: Running command: ceph mon_status --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/331359094
2018-03-08 18:44:37.402227 I | op-mon: Ceph monitors formed quorum
2018-03-08 18:44:37.406414 I | op-cluster: creating initial crushmap
2018-03-08 18:44:37.406428 I | cephclient: setting crush tunables to firefly
2018-03-08 18:44:37.406519 I | exec: Running command: ceph osd crush tunables firefly --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format plain --out-file /tmp/104659549
2018-03-08 18:44:39.000514 I | exec: adjusted tunables profile to firefly
2018-03-08 18:44:39.000733 I | cephclient: succeeded setting crush tunables to profile firefly:
2018-03-08 18:44:39.001086 I | exec: Running command: crushtool -c /tmp/115711000 -o /tmp/561291927
2018-03-08 18:44:39.300888 I | exec: Running command: ceph osd setcrushmap -i /tmp/561291927 --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/596828938
2018-03-08 18:44:41.204955 I | exec: 3
2018-03-08 18:44:41.205084 I | op-cluster: created initial crushmap
2018-03-08 18:44:41.206941 I | op-mgr: start running mgr
2018-03-08 18:44:41.208367 I | exec: Running command: ceph auth get-or-create-key mgr.rook-ceph-mgr0 mon allow * --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/176429281
2018-03-08 18:44:42.906535 I | exec: Running command: ceph mgr module enable prometheus --force --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/487563980
2018-03-08 18:44:44.610925 I | op-mgr: rook-ceph-mgr0 service started
2018-03-08 18:44:44.703887 I | op-mgr: rook-ceph-mgr0 deployment started
2018-03-08 18:44:44.703900 I | op-api: starting the Rook api
2018-03-08 18:44:44.719227 I | op-api: API service running at 10.233.7.77:8124
2018-03-08 18:44:44.728779 I | op-k8sutil: creating role rook-api in namespace rook
2018-03-08 18:44:44.786864 I | op-api: api deployment started
2018-03-08 18:44:44.786877 I | op-osd: start running osds in namespace rook
2018-03-08 18:44:44.795799 I | op-k8sutil: creating role rook-ceph-osd in namespace rook
2018-03-08 18:44:44.869227 I | exec: Running command: ceph osd set noscrub --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/607330747
2018-03-08 18:44:47.406456 I | exec: noscrub is set
2018-03-08 18:44:47.406611 I | exec: Running command: ceph osd set nodeep-scrub --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/096727518
2018-03-08 18:44:49.504762 I | exec: nodeep-scrub is set
2018-03-08 18:44:49.511063 I | op-osd: osd daemon set started
2018-03-08 18:44:49.511168 I | exec: Running command: ceph osd unset noscrub --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/946778021
2018-03-08 18:44:51.919912 I | exec: noscrub is unset
2018-03-08 18:44:51.920058 I | exec: Running command: ceph osd unset nodeep-scrub --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/529257152
2018-03-08 18:44:54.006552 I | exec: nodeep-scrub is unset
2018-03-08 18:44:54.006631 I | op-cluster: Done creating rook instance in namespace rook
2018-03-08 18:44:54.017069 I | op-pool: start watching pool resources in namespace rook
2018-03-08 18:44:54.017086 I | op-object: start watching object store resources in namespace rook
2018-03-08 18:44:54.017093 I | op-file: start watching filesystem resource in namespace rook
2018-03-08 18:44:54.096309 I | op-k8sutil: returning version v1.9.3 instead of v1.9.3+coreos.0
2018-03-08 18:44:54.203085 I | op-cluster: added finalizer to cluster rook
2018-03-08 18:44:54.203209 I | op-cluster: update event for cluster rook
2018-03-08 18:44:54.203218 I | op-cluster: update event for cluster rook is not supported
2018-03-08 18:44:54.203224 I | op-cluster: update event for cluster rook
2018-03-08 18:44:54.203228 I | op-cluster: update event for cluster rook is not supported
2018-03-08 18:44:54.203234 I | op-cluster: update event for cluster rook
2018-03-08 18:44:54.203239 I | op-cluster: update event for cluster rook is not supported
2018-03-08 18:45:11.025351 I | op-pool: creating pool replicapool in namespace rook
2018-03-08 18:45:11.025456 I | exec: Running command: ceph osd crush rule create-simple replicapool default host --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/392418591
2018-03-08 18:45:13.501274 I | exec: Running command: ceph osd pool create replicapool 0 replicated replicapool --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/726812146
2018-03-08 18:45:15.570854 I | exec: pool 'replicapool' created
2018-03-08 18:45:15.570990 I | exec: Running command: ceph osd pool set replicapool size 1 --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/411756713
2018-03-08 18:45:17.805062 I | exec: set pool 1 size to 1
2018-03-08 18:45:17.805212 I | exec: Running command: ceph osd pool application enable replicapool replicapool --yes-i-really-mean-it --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/194917364
2018-03-08 18:45:19.905761 I | exec: enabled application 'replicapool' on pool 'replicapool'
2018-03-08 18:45:19.905850 I | cephclient: creating pool replicapool succeeded, buf:
2018-03-08 18:45:19.905858 I | op-pool: created pool replicapool
2018-03-08 18:45:48.603967 I | op-provisioner: creating volume with configuration {pool:replicapool clusterName:rook fstype:}
2018-03-08 18:45:48.603987 I | exec: Running command: rbd create replicapool/pvc-eb112dda-2300-11e8-9b2c-901b0e95a1e0 --size 20480 --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring
2018-03-08 18:45:50.498517 I | exec: Running command: rbd ls -l replicapool --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json
2018-03-08 18:45:50.898477 I | op-provisioner: Rook block image created: pvc-eb112dda-2300-11e8-9b2c-901b0e95a1e0
2018-03-08 18:45:50.898521 I | op-provisioner: successfully created Rook Block volume &FlexVolumeSource{Driver:rook.io/rook,FSType:,SecretRef:nil,ReadOnly:false,Options:map[string]string{image: pvc-eb112dda-2300-11e8-9b2c-901b0e95a1e0,pool: replicapool,storageClass: rook-block,},}
2018-03-08 18:45:59.809113 I | op-provisioner: creating volume with configuration {pool:replicapool clusterName:rook fstype:}
2018-03-08 18:45:59.809132 I | exec: Running command: rbd create replicapool/pvc-f1b7e7a3-2300-11e8-9b2c-901b0e95a1e0 --size 20480 --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring
2018-03-08 18:46:00.034861 I | exec: Running command: rbd ls -l replicapool --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json
2018-03-08 18:46:00.599656 I | op-provisioner: Rook block image created: pvc-f1b7e7a3-2300-11e8-9b2c-901b0e95a1e0
2018-03-08 18:46:00.599682 I | op-provisioner: successfully created Rook Block volume &FlexVolumeSource{Driver:rook.io/rook,FSType:,SecretRef:nil,ReadOnly:false,Options:map[string]string{image: pvc-f1b7e7a3-2300-11e8-9b2c-901b0e95a1e0,pool: replicapool,storageClass: rook-block,},}
