kubernetes: Cinder volume doesn't get attached to the pod

Kubernetes version (use kubectl version): Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0+coreos.0", GitCommit:"278a1f7034bdba61cba443722647da1a8204a6fc", GitTreeState:"clean", BuildDate:"2016-09-26T20:48:37Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: Kolla OpenStack (Mitaka) with libvirt
  • OS (e.g. from /etc/os-release): CoreOS stable (1122.2.0)
  • Kernel (e.g. uname -a): 4.7.0-coreos
  • Install tools: none
  • Others: none

What happened: The Cinder volume gets attached to the Kubernetes worker node but not to the pod:

I1012 11:35:28.757270       1 reconciler.go:168] Started AttachVolume for volume "kubernetes.io/cinder/762bf306-9748-4be0-a6c0-914821001116" to node "k8s-worker-1"
I1012 11:35:30.205320       1 attacher.go:95] Attach operation successful: volume "762bf306-9748-4be0-a6c0-914821001116" attached to node "69283384-874d-4b64-a2e3-ae628f79ec6d".
I1012 11:35:30.512329       1 attacher.go:104] Attach volume "762bf306-9748-4be0-a6c0-914821001116" to instance "69283384-874d-4b64-a2e3-ae628f79ec6d" failed with volume 762bf306-9748-4be0-a6c0-914821001116 is not attached to 69283384-874d-4b64-a2e3-ae628f79ec6d
E1012 11:35:30.512479       1 nestedpendingoperations.go:253] Operation for "\"kubernetes.io/cinder/762bf306-9748-4be0-a6c0-914821001116\"" failed. No retries permitted until 2016-10-12 11:35:31.012447432 +0000 UTC (durationBeforeRetry 500ms). Error: Failed to attach volume "vol01" on node "k8s-worker-1" with: volume 762bf306-9748-4be0-a6c0-914821001116 is not attached to 69283384-874d-4b64-a2e3-ae628f79ec6d
I1012 11:35:30.512632       1 event.go:217] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"mypod2", UID:"f9bb7961-906f-11e6-a1fb-fa163e94e8c1", APIVersion:"v1", ResourceVersion:"335533", FieldPath:""}): type: 'Warning' reason: 'FailedMount' Failed to attach volume "vol01" on node "k8s-worker-1" with: volume 762bf306-9748-4be0-a6c0-914821001116 is not attached to 69283384-874d-4b64-a2e3-ae628f79ec6d
I1012 11:35:31.062210       1 reconciler.go:168] Started AttachVolume for volume "kubernetes.io/cinder/762bf306-9748-4be0-a6c0-914821001116" to node "k8s-worker-1"
E1012 11:35:32.745559       1 openstack_volumes.go:64] Failed to attach 762bf306-9748-4be0-a6c0-914821001116 volume to 69283384-874d-4b64-a2e3-ae628f79ec6d compute
I1012 11:35:32.745596       1 attacher.go:97] Attach volume "762bf306-9748-4be0-a6c0-914821001116" to instance "69283384-874d-4b64-a2e3-ae628f79ec6d" failed with Expected HTTP response code [200] when accessing [POST http://openstack-ha:8774/v2/470e93b0a7d649c28c62353c5f36c93b/servers/69283384-874d-4b64-a2e3-ae628f79ec6d/os-volume_attachments], but got 400 instead {"badRequest": {"message": "Invalid volume: volume '762bf306-9748-4be0-a6c0-914821001116' status must be 'available'. Currently in 'in-use'", "code": 400}}
E1012 11:35:32.745724       1 nestedpendingoperations.go:253] Operation for "\"kubernetes.io/cinder/762bf306-9748-4be0-a6c0-914821001116\"" failed. No retries permitted until 2016-10-12 11:35:33.74569124 +0000 UTC (durationBeforeRetry 1s). Error: Failed to attach volume "vol01" on node "k8s-worker-1" with: Expected HTTP response code [200] when accessing [POST http://openstack-ha:8774/v2/470e93b0a7d649c28c62353c5f36c93b/servers/69283384-874d-4b64-a2e3-ae628f79ec6d/os-volume_attachments], but got 400 instead {"badRequest": {"message": "Invalid volume: volume '762bf306-9748-4be0-a6c0-914821001116' status must be 'available'. Currently in 'in-use'", "code": 400}}
I1012 11:35:32.745861       1 event.go:217] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"mypod2", UID:"f9bb7961-906f-11e6-a1fb-fa163e94e8c1", APIVersion:"v1", ResourceVersion:"335533", FieldPath:""}): type: 'Warning' reason: 'FailedMount' Failed to attach volume "vol01" on node "k8s-worker-1" with: Expected HTTP response code [200] when accessing [POST http://openstack-ha:8774/v2/470e93b0a7d649c28c62353c5f36c93b/servers/69283384-874d-4b64-a2e3-ae628f79ec6d/os-volume_attachments], but got 400 instead {"badRequest": {"message": "Invalid volume: volume '762bf306-9748-4be0-a6c0-914821001116' status must be 'available'. Currently in 'in-use'", "code": 400}}

The pod hangs in 'ContainerCreating' state and the kubelet complains that the volume is "not yet attached". After manually detaching the Cinder volume from the OpenStack instance, it gets attached again and the error repeats.
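
For illustration, the manual detach can be done with the standard OpenStack CLI (a sketch; the server and volume IDs are the ones from the logs above):

$ cinder list        # the volume is reported as 'in-use' even though the pod never got it
$ nova volume-detach 69283384-874d-4b64-a2e3-ae628f79ec6d 762bf306-9748-4be0-a6c0-914821001116
$ cinder list        # the volume goes back to 'available' until the attach controller retries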

What you expected to happen: The Cinder volume gets mounted into the pod and the pod goes to the 'Running' state.

How to reproduce it (as minimally and precisely as possible): Use OpenStack as the cloud provider for Kubernetes 1.4.0 and create a pod with a Cinder-backed volume, either through a PersistentVolumeClaim or directly in the pod definition (see the example below).
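
For example, a minimal pod definition referencing the Cinder volume directly might look like the sketch below (the pod/volume names and the volume ID are taken from the logs above; the container image, mount path, and fsType are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: mypod2
spec:
  containers:
  - name: app                  # hypothetical container
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: vol01
      mountPath: /data         # arbitrary mount path
  volumes:
  - name: vol01
    cinder:
      volumeID: 762bf306-9748-4be0-a6c0-914821001116
      fsType: ext4             # assumed filesystem type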

About this issue

  • State: closed
  • Created 8 years ago
  • Comments: 16 (6 by maintainers)

Most upvoted comments

Found my issue. I had cloud-provider set on the kube-controller-manager, API server, etc., but I was missing it on the kubelet. This caused the weird behavior of being able to attach volumes to worker nodes, but the worker nodes were unable to do anything with the mounted storage. I logged https://github.com/kubernetes/kubernetes/issues/42013 to address it. @mikebryant those changes are necessary as well, +1 from me.
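
Concretely, that means starting the kubelet with the same cloud-provider flags as the control-plane components, roughly like this (a sketch; the cloud.conf path is a placeholder for wherever your OpenStack credentials live):

$ kubelet \
    --cloud-provider=openstack \
    --cloud-config=/etc/kubernetes/cloud.conf \
    <other kubelet flags unchanged>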

As a workaround until #41498 is merged, we added udev rules to create additional links that match the patterns Kubernetes currently expects:

$ cat /etc/udev/rules.d/99-qemu-vd.rules 
ACTION=="add|change", ENV{ID_MODEL}=="*QEMU*", KERNEL=="sd*[!0-9]", SYMLINK+="disk/by-id/virtio-$env{ID_SERIAL_SHORT}"
ACTION=="add|change", ENV{ID_MODEL}=="*QEMU*", KERNEL=="sd*[0-9]", SYMLINK+="disk/by-id/virtio-$env{ID_SERIAL_SHORT}-part%n"
$ sudo udevadm trigger --attr-match=subsystem=block
$ sudo udevadm control --reload-rules
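
As far as we can tell, the underlying mismatch is that QEMU exposes the Cinder disk as a SCSI device (so only scsi-0QEMU_QEMU_HARDDISK_<serial> links appear under /dev/disk/by-id), while the in-tree Cinder plugin looks for a virtio-<volume-id> link; the rules above simply alias one name to the other. After reloading and triggering udev, the new links can be verified with:

$ ls -l /dev/disk/by-id/virtio-*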

I'm experiencing the same issue. Any news on that one?

Similar to what @eviloop mentioned.

A couple of additional details. Note that the by-id names below contain only the first 20 characters of the volume UUID, since the serial exposed to the guest is truncated:

dmesg:
[176553.462158] scsi 2:0:0:2: Direct-Access     QEMU     QEMU HARDDISK    2.0. PQ: 0 ANSI: 5
...
$ ls -l /dev/disk/by-id
lrwxrwxrwx 1 root root  9 Feb 20 02:11 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_205be308-85b6-4124-b -> ../../sdb
lrwxrwxrwx 1 root root  9 Feb 21 03:05 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_84f6b94e-699b-49c5-8 -> ../../sdc