kubernetes: Azure disk fails to attach and mount, causing rescheduled pod to stall following node disruption

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.): Yes

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):

Azure, disk, detach, attach, ContainerCreating


Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug report

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:34:56Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: Azure
  • OS (e.g. from /etc/os-release): Debian GNU/Linux 8 (jessie)
  • Kernel (e.g. uname -a): 4.4.0-28-generic
  • Install tools: Azure ACS
  • Others:

What happened: We have a small k8s cluster deployed on Azure through Azure Container Service. The configuration includes some database server pods, each managed by a Deployment (a single pod per Deployment) and using dynamic provisioning of volumes via PersistentVolumeClaim and StorageClass as the storage backend. For some cloud-provider-related reason, some cluster nodes went down, as shown by this example kube-controller-manager log entry: I0524 00:36:58.826461 1 nodecontroller.go:608] NodeController detected that zone westeurope::1 is now in state FullDisruption. Judging by the logs that followed, the nodes came back up and k8s rescheduled the pods. However, some DB server pods seem to have been unable to attach/mount their persistent volumes, and they end up stuck in the ContainerCreating state with messages such as:

Events:
  FirstSeen     LastSeen        Count   From                            SubObjectPath   Type            Reason          Message
  ---------     --------        -----   ----                            -------------   --------        ------          -------
  1h            3s              36      kubelet, k8s-agent-4ba79e32-1                   Warning         FailedMount   Unable to mount volumes for pod "mongodb-deployment-1225361271-d1z50_default(b39cc50b-4098-11e7-beb6-000d3a290a1b)": timeout expired waiting for volumes to
attach/mount for pod "default"/"mongodb-deployment-1225361271-d1z50". list of unattached/unmounted volumes=[mongo-data]
  1h            3s              36      kubelet, k8s-agent-4ba79e32-1                   Warning         FailedSync      Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"mongodb-deployment-1225361271-d1z50". list of unattached
/unmounted volumes=[mongo-data]

We tried to force pod deletion in some cases to understand what was going on. The pods were recreated (as they are managed by the Deployments), and some volumes were correctly reattached while others were not. It thus seems that:

  • some volumes may never be detached from their previous hosts, or at least the underlying Azure storage leases are not released
  • the rescheduled pods keep attempting to attach and mount the volumes unsuccessfully, apparently because they fail to acquire the lease on the Azure disk
  • the behavior is not consistent, since a forced pod restart “worked” in some cases but not in all of them
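The two failure modes above show up in the controller-manager log under distinct Azure error codes (AcquireDiskLeaseFailed and AttachDiskWhileBeingDetached). As an illustration only (this helper is hypothetical, not part of Kubernetes or any official tooling), the affected disks can be pulled out of a saved log like this:

```python
import re

# Illustrative triage helper: extract which disks are stuck, and why, from a
# saved kube-controller-manager log. The regex targets the Code="..." and
# disk '<name>.vhd' fragments of the quoted error messages.
ERR = re.compile(
    r'Code="(?P<code>AcquireDiskLeaseFailed|AttachDiskWhileBeingDetached)"'
    r".*?disk '(?P<disk>[^']+)'"
)

def stuck_disks(log_lines):
    """Map each failing .vhd disk name to the set of Azure error codes seen."""
    found = {}
    for line in log_lines:
        m = ERR.search(line)
        if m:
            found.setdefault(m.group("disk"), set()).add(m.group("code"))
    return found
```

Grouping the errors per disk this way makes the inconsistency between volumes visible at a glance: some disks only ever report a stuck lease, others alternate between the two codes.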

What you expected to happen: Rescheduled pods being able to attach and mount the PVs.

How to reproduce it (as minimally and precisely as possible): We have not been able to reproduce an exactly similar situation yet. Similar error messages showed up after shutting down and restarting some of the cluster’s k8s agent instances, but those eventually resolved themselves on their own.

Example message shown on the dashboard:

Failed to attach volume "pvc-eebe3ea2-25e6-11e7-beb6-000d3a290a1b" on node "k8s-agent-4ba79e32-2" with: Attach volume "XXXXXX-dynamic-pvc-eebe3ea2-25e6-11e7-beb6-000d3a290a1b.vhd" to instance "k8s-agent-4BA79E32-2" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=200 -- Original Error: Long running operation terminated with status 'Failed': Code="AcquireDiskLeaseFailed" Message="Failed to acquire lease while creating disk 'XXXXXX-dynamic-pvc-eebe3ea2-25e6-11e7-beb6-000d3a290a1b.vhd' using blob with URI https://XXXXXXstandardstorage.blob.core.windows.net/vhds/XXXXXX-dynamic-pvc-eebe3ea2-25e6-11e7-beb6-000d3a290a1b.vhd. Blob is already in use."
Further entries from the dashboard events view:

  • controller-manager (sub-object: -, count: 8, first seen: 2017-05-25T08:57 UTC, last seen: 2017-05-25T09:02 UTC, warning): Unable to mount volumes for pod "XXXXXX-deployment-2280494963-38lj9_default(14aaed41-4128-11e7-a262-000d3a290a1b)": timeout expired waiting for volumes to attach/mount for pod "default"/"XXXXXX-deployment-2280494963-38lj9". list of unattached/unmounted volumes=[XXXXXX-data]
  • kubelet k8s-agent-4ba79e32-2 (sub-object: -, count: 4, first seen: 2017-05-25T08:58 UTC, last seen: 2017-05-25T09:05 UTC, warning): Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"XXXXXX-deployment-2280494963-38lj9". list of unattached/unmounted volumes=[XXXXXX-data]

Our setup is the following:

  • a 4-node k8s cluster (3 agents + 1 master) on Azure deployed via ACS
  • several database pods managed via deployments and using dynamically provisioned storage (PVC + StorageClass)
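For context, the dynamic provisioning setup above boils down to manifests roughly like the following (a minimal sketch for k8s 1.5/1.6 era; the class name, claim name, size, and skuName are hypothetical, not our actual values):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1   # v1beta1 on k8s 1.5/1.6
metadata:
  name: azure-standard
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-data
  annotations:
    # pre-1.6 clusters select the class via this beta annotation
    volume.beta.kubernetes.io/storage-class: azure-standard
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Each database Deployment then references its claim as a volume, so a rescheduled pod depends on the controller successfully detaching the .vhd from the old node and attaching it to the new one.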

Anything else we need to know:

Here is a (filtered) kube-controller-manager log:

E0524 00:53:33.979078       1 azure_storage.go:65] azure attach failed, err: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=200 -- Original Error: Long running operation terminated with status 'Failed': Code="AcquireDiskLeaseFailed" Message="Failed to acquire lease while creating disk 'XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd' using blob with URI https://XXXXXXXstandardstorage.blob.core.windows.net/vhds/XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd. Blob is already in use."
E0524 00:53:37.075006       1 azure_storage.go:65] azure attach failed, err: compute.VirtualMachinesClient#CreateOrUpdate: Failure responding to request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=409 Code="AttachDiskWhileBeingDetached" Message="Cannot attach data disk 'XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd' to VM 'k8s-agent-4BA79E32-1' because the disk is currently being detached. Please wait until the disk is completely detached and then try again."
I0524 00:53:37.075135       1 attacher.go:110] Attach volume "https://XXXXXXXstandardstorage.blob.core.windows.net/vhds/XXXXXXX-dynamic-pvc-eed5bb6d-25e6-11e7-beb6-000d3a290a1b.vhd" to instance "k8s-agent-4BA79E32-1" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure responding to request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=409 Code="AttachDiskWhileBeingDetached" Message="Cannot attach data disk 'XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd' to VM 'k8s-agent-4BA79E32-1' because the disk is currently being detached. Please wait until the disk is completely detached and then try again."
I0524 00:53:37.075952       1 event.go:217] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"orientdb-deployment-923453513-g0r2b", UID:"b4ecbb07-4019-11e7-beb6-000d3a290a1b", APIVersion:"v1", ResourceVersion:"4738054", FieldPath:""}): type: 'Warning' reason: 'FailedMount' Failed to attach volume "pvc-eed5bb6d-25e6-11e7-beb6-000d3a290a1b" on node "k8s-agent-4ba79e32-1" with: Attach volume "XXXXXXX-dynamic-pvc-eed5bb6d-25e6-11e7-beb6-000d3a290a1b.vhd" to instance "k8s-agent-4BA79E32-1" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure responding to request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=409 Code="AttachDiskWhileBeingDetached" Message="Cannot attach data disk 'XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd' to VM 'k8s-agent-4BA79E32-1' because the disk is currently being detached. Please wait until the disk is completely detached and then try again."
E0524 00:53:37.076042       1 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/azure-disk/XXXXXXX-dynamic-pvc-eed5bb6d-25e6-11e7-beb6-000d3a290a1b.vhd\"" failed. No retries permitted until 2017-05-24 00:54:09.07528436 +0000 UTC (durationBeforeRetry 32s). Error: Failed to attach volume "pvc-eed5bb6d-25e6-11e7-beb6-000d3a290a1b" on node "k8s-agent-4ba79e32-1" with: Attach volume "XXXXXXX-dynamic-pvc-eed5bb6d-25e6-11e7-beb6-000d3a290a1b.vhd" to instance "k8s-agent-4BA79E32-1" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure responding to request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=409 Code="AttachDiskWhileBeingDetached" Message="Cannot attach data disk 'XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd' to VM 'k8s-agent-4BA79E32-1' because the disk is currently being detached. Please wait until the disk is completely detached and then try again."
I0524 00:53:54.316036       1 attacher.go:110] Attach volume "https://XXXXXXXstandardstorage.blob.core.windows.net/vhds/XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd" to instance "k8s-agent-4BA79E32-0" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=200 -- Original Error: Long running operation terminated with status 'Failed': Code="AcquireDiskLeaseFailed" Message="Failed to acquire lease while creating disk 'XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd' using blob with URI https://XXXXXXXstandardstorage.blob.core.windows.net/vhds/XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd. Blob is already in use."
E0524 00:53:54.316247       1 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/azure-disk/XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd\"" failed. No retries permitted until 2017-05-24 00:53:55.31614553 +0000 UTC (durationBeforeRetry 1s). Error: Failed to attach volume "pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b" on node "k8s-agent-4ba79e32-0" with: Attach volume "XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd" to instance "k8s-agent-4BA79E32-0" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=200 -- Original Error: Long running operation terminated with status 'Failed': Code="AcquireDiskLeaseFailed" Message="Failed to acquire lease while creating disk 'XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd' using blob with URI https://XXXXXXXstandardstorage.blob.core.windows.net/vhds/XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd. Blob is already in use."
I0524 00:53:54.316314       1 event.go:217] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"mongodb-deployment-1225361271-6k1lx", UID:"b5b165b6-4019-11e7-beb6-000d3a290a1b", APIVersion:"v1", ResourceVersion:"4738080", FieldPath:""}): type: 'Warning' reason: 'FailedMount' Failed to attach volume "pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b" on node "k8s-agent-4ba79e32-0" with: Attach volume "XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd" to instance "k8s-agent-4BA79E32-0" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=200 -- Original Error: Long running operation terminated with status 'Failed': Code="AcquireDiskLeaseFailed" Message="Failed to acquire lease while creating disk 'XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd' using blob with URI https://XXXXXXXstandardstorage.blob.core.windows.net/vhds/XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd. Blob is already in use."
I0524 00:53:55.395612       1 reconciler.go:213] Started AttachVolume for volume "kubernetes.io/azure-disk/XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd" to node "k8s-agent-4ba79e32-0"
E0524 00:54:00.812549       1 azure_storage.go:65] azure attach failed, err: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=200 -- Original Error: Long running operation terminated with status 'Failed': Code="AcquireDiskLeaseFailed" Message="Failed to acquire lease while creating disk 'XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd' using blob with URI https://XXXXXXXstandardstorage.blob.core.windows.net/vhds/XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd. Blob is already in use."
I0524 00:54:00.812607       1 azure_storage.go:69] failed to acquire disk lease, try detach
E0524 00:54:10.965224       1 azure_storage.go:65] azure attach failed, err: compute.VirtualMachinesClient#CreateOrUpdate: Failure responding to request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=409 Code="AttachDiskWhileBeingDetached" Message="Cannot attach data disk 'XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd' to VM 'k8s-agent-4BA79E32-1' because the disk is currently being detached. Please wait until the disk is completely detached and then try again."
I0524 00:54:10.965376       1 attacher.go:110] Attach volume "https://XXXXXXXstandardstorage.blob.core.windows.net/vhds/XXXXXXX-dynamic-pvc-eed5bb6d-25e6-11e7-beb6-000d3a290a1b.vhd" to instance "k8s-agent-4BA79E32-1" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure responding to request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=409 Code="AttachDiskWhileBeingDetached" Message="Cannot attach data disk 'XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd' to VM 'k8s-agent-4BA79E32-1' because the disk is currently being detached. Please wait until the disk is completely detached and then try again."
E0524 00:54:10.965733       1 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/azure-disk/XXXXXXX-dynamic-pvc-eed5bb6d-25e6-11e7-beb6-000d3a290a1b.vhd\"" failed. No retries permitted until 2017-05-24 00:55:14.965698079 +0000 UTC (durationBeforeRetry 1m4s). Error: Failed to attach volume "pvc-eed5bb6d-25e6-11e7-beb6-000d3a290a1b" on node "k8s-agent-4ba79e32-1" with: Attach volume "XXXXXXX-dynamic-pvc-eed5bb6d-25e6-11e7-beb6-000d3a290a1b.vhd" to instance "k8s-agent-4BA79E32-1" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure responding to request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=409 Code="AttachDiskWhileBeingDetached" Message="Cannot attach data disk 'XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd' to VM 'k8s-agent-4BA79E32-1' because the disk is currently being detached. Please wait until the disk is completely detached and then try again."
I0524 00:54:10.965845       1 event.go:217] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"orientdb-deployment-923453513-g0r2b", UID:"b4ecbb07-4019-11e7-beb6-000d3a290a1b", APIVersion:"v1", ResourceVersion:"4738054", FieldPath:""}): type: 'Warning' reason: 'FailedMount' Failed to attach volume "pvc-eed5bb6d-25e6-11e7-beb6-000d3a290a1b" on node "k8s-agent-4ba79e32-1" with: Attach volume "XXXXXXX-dynamic-pvc-eed5bb6d-25e6-11e7-beb6-000d3a290a1b.vhd" to instance "k8s-agent-4BA79E32-1" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure responding to request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=409 Code="AttachDiskWhileBeingDetached" Message="Cannot attach data disk 'XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd' to VM 'k8s-agent-4BA79E32-1' because the disk is currently being detached. Please wait until the disk is completely detached and then try again."
I0524 00:54:21.111111       1 attacher.go:110] Attach volume "https://XXXXXXXstandardstorage.blob.core.windows.net/vhds/XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd" to instance "k8s-agent-4BA79E32-0" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=200 -- Original Error: Long running operation terminated with status 'Failed': Code="AcquireDiskLeaseFailed" Message="Failed to acquire lease while creating disk 'XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd' using blob with URI https://XXXXXXXstandardstorage.blob.core.windows.net/vhds/XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd. Blob is already in use."
E0524 00:54:21.111248       1 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/azure-disk/XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd\"" failed. No retries permitted until 2017-05-24 00:54:23.111221475 +0000 UTC (durationBeforeRetry 2s). Error: Failed to attach volume "pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b" on node "k8s-agent-4ba79e32-0" with: Attach volume "XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd" to instance "k8s-agent-4BA79E32-0" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=200 -- Original Error: Long running operation terminated with status 'Failed': Code="AcquireDiskLeaseFailed" Message="Failed to acquire lease while creating disk 'XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd' using blob with URI https://XXXXXXXstandardstorage.blob.core.windows.net/vhds/XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd. Blob is already in use."
I0524 00:54:21.111803       1 event.go:217] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"mongodb-deployment-1225361271-6k1lx", UID:"b5b165b6-4019-11e7-beb6-000d3a290a1b", APIVersion:"v1", ResourceVersion:"4738080", FieldPath:""}): type: 'Warning' reason: 'FailedMount' Failed to attach volume "pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b" on node "k8s-agent-4ba79e32-0" with: Attach volume "XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd" to instance "k8s-agent-4BA79E32-0" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=200 -- Original Error: Long running operation terminated with status 'Failed': Code="AcquireDiskLeaseFailed" Message="Failed to acquire lease while creating disk 'XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd' using blob with URI https://XXXXXXXstandardstorage.blob.core.windows.net/vhds/XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd. Blob is already in use."
I0524 00:54:23.185104       1 reconciler.go:213] Started AttachVolume for volume "kubernetes.io/azure-disk/XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd" to node "k8s-agent-4ba79e32-0"
E0524 00:54:41.756587       1 azure_storage.go:65] azure attach failed, err: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=200 -- Original Error: Long running operation terminated with status 'Failed': Code="AcquireDiskLeaseFailed" Message="Failed to acquire lease while creating disk 'XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd' using blob with URI https://XXXXXXXstandardstorage.blob.core.windows.net/vhds/XXXXXXX-dynamic-pvc-f71bb8b5-25dd-11e7-beb6-000d3a290a1b.vhd. Blob is already in use."
I0524 00:54:41.756666       1 azure_storage.go:69] failed to acquire disk lease, try detach
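The growing durationBeforeRetry values in the log above (1s, 2s, … 32s, 1m4s across the two volumes) are the attach/detach controller’s exponential backoff between attach attempts. A rough sketch of that schedule (illustrative only; the initial value and cap here are assumptions, not the actual Kubernetes implementation):

```python
def backoff_schedule(initial=1.0, factor=2.0, cap=128.0, attempts=8):
    """Yield the delay in seconds before each successive retry.

    Mirrors the doubling pattern visible in the durationBeforeRetry log
    entries; the exact initial delay and cap are assumed values.
    """
    delay = initial
    for _ in range(attempts):
        yield min(delay, cap)  # delays double until they reach the cap
        delay *= factor
```

Under this pattern the controller keeps retrying at ever longer intervals, which is why the pods stay in ContainerCreating for over an hour while the underlying disk lease remains stuck.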

I must admit I am unsure whether this is a Kubernetes issue with Azure Storage or an ACS-related issue. Happy to help/assist in understanding this.

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 3
  • Comments: 25 (11 by maintainers)

Most upvoted comments

Is there any update on this? I am still seeing this issue while running OpenShift 3.6/Kubernetes 1.6 on Azure.

I’ve been able to verify that the disk is unmounted from the node. But according to Azure, the disk is still leased to the node. I’ve manually broken the lease, but that leaves it in the Broken status. So Kubernetes still won’t attempt to mount it to the next node.

Barring an actual solution, does anyone know how to set the disk back to an Available status?

I’m experiencing the same problems with Azure Disks and Tectonic.

Getting a 409:

failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure responding to request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=409 Code="AttachDiskWhileBeingDetached" Message="Cannot attach data disk 'kubernetes-dynamic-pvc-xxxxxxxxxxxxxxxxxx' to VM 'xxxxxxxxx-worker-1' because the disk is currently being detached or the last detach operation failed. Please wait until the disk is completely detached and then try again or delete/detach the disk explicitly again."

@christianhjelmslund yes, it would. 1.16.3 on VMSS would still have that issue if you attach multiple disks to one node at the same time; please patch with fix #85158 if possible.

@DonMartin76 this bug only exists with blob-based VMs in v1.8.x and v1.9.x, so if you specify ManagedDisks when creating the k8s cluster, it won’t have this issue.

    "agentPoolProfiles": [
      {
        ...
        "storageProfile": "ManagedDisks",
        ...
      }
    ]

@danielshiplett It seems the only way to release the disk is to reboot the node to which it is attached (or wait for some condition that will eventually do it).