kubernetes: unable to mount the volume on Windows node VMs

What happened: Unable to mount the volume on a Windows node VM.

The mount is failing with:

E0625 00:01:07.109001 1760 mount_windows.go:236] mklink failed: fork/exec C:\Windows\system32\cmd.exe: invalid argument, output: ""

Steps:

  1. Created a StorageClass, PVC, and Pod (see the sketch after this list).
  2. Noted the node on which the pod was created.
  3. Deleted the pod.
  4. Cordoned the node noted in step 2.
  5. Recreated the pod using the same claim.
  6. Pod creation fails with a FailedMount event.
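A minimal reproduction sketch of the steps above, assuming illustrative manifest file names (storageclass.yaml, pvc.yaml, pod.yaml) for the StorageClass, PVC, and Pod; the file names and the node placeholder are assumptions, while the pod name "aspnet" is taken from the kubelet logs below:

  kubectl apply -f storageclass.yaml -f pvc.yaml   # step 1: create the StorageClass and PVC
  kubectl apply -f pod.yaml                        # step 1: create the pod ("aspnet" in the logs below)
  kubectl get pod aspnet -o wide                   # step 2: note the node the pod was scheduled on
  kubectl delete pod aspnet                        # step 3: delete the pod
  kubectl cordon <node-from-step-2>                # step 4: cordon that node
  kubectl apply -f pod.yaml                        # step 5: recreate the pod with the same claim
  kubectl describe pod aspnet                      # step 6: events show FailedMount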

What you expected to happen: The volume should be mounted on the node VM.

How to reproduce it (as minimally and precisely as possible): See the steps above.

Anything else we need to know?:

Kubelet logs:

E0625 00:00:29.874854    1760 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/vsphere-volume/[vsanDatastore] 64ac5c5e-1447-dc56-80b1-e4434bbb68dc/d33d55d7-7191-422f-a2f6-2ee49079436a.internal-dynamic-pvc-85ad18e4-46e7-48ea-9e83-57148de58110.vmdk podName: nodeName:}" failed. No retries permitted until 2020-06-25 00:01:01.8748544 +0000 GMT m=+95776.642056701 (durationBeforeRetry 32s). Error: "MountVolume.MountDevice failed for volume \"pvc-85ad18e4-46e7-48ea-9e83-57148de58110\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] 64ac5c5e-1447-dc56-80b1-e4434bbb68dc/d33d55d7-7191-422f-a2f6-2ee49079436a.internal-dynamic-pvc-85ad18e4-46e7-48ea-9e83-57148de58110.vmdk\") pod \"aspnet\" (UID: \"06d7f8e9-187f-4b38-b41a-5bce7a82f5ba\") : fork/exec C:\\Windows\\system32\\cmd.exe: invalid argument"
W0625 00:00:30.177808    1760 setters.go:162] replacing cloudprovider-reported hostname of 1f90280e-01be-4e49-aa66-5045023e264c with overridden hostname of 30.0.0.15
W0625 00:00:40.190831    1760 setters.go:162] replacing cloudprovider-reported hostname of 1f90280e-01be-4e49-aa66-5045023e264c with overridden hostname of 30.0.0.15
W0625 00:00:50.195721    1760 setters.go:162] replacing cloudprovider-reported hostname of 1f90280e-01be-4e49-aa66-5045023e264c with overridden hostname of 30.0.0.15
W0625 00:01:00.205454    1760 setters.go:162] replacing cloudprovider-reported hostname of 1f90280e-01be-4e49-aa66-5045023e264c with overridden hostname of 30.0.0.15
I0625 00:01:01.949547    1760 operation_generator.go:551] MountVolume.WaitForAttach entering for volume "pvc-85ad18e4-46e7-48ea-9e83-57148de58110" (UniqueName: "kubernetes.io/vsphere-volume/[vsanDatastore] 64ac5c5e-1447-dc56-80b1-e4434bbb68dc/d33d55d7-7191-422f-a2f6-2ee49079436a.internal-dynamic-pvc-85ad18e4-46e7-48ea-9e83-57148de58110.vmdk") pod "aspnet" (UID: "06d7f8e9-187f-4b38-b41a-5bce7a82f5ba") DevicePath "/dev/disk/by-id/wwn-0x6000c2935a21d624dd0c764c49f374b3"
I0625 00:01:04.320912    1760 attacher.go:183] Successfully found attached VMDK "[vsanDatastore] 64ac5c5e-1447-dc56-80b1-e4434bbb68dc/d33d55d7-7191-422f-a2f6-2ee49079436a.internal-dynamic-pvc-85ad18e4-46e7-48ea-9e83-57148de58110.vmdk".
I0625 00:01:04.320912    1760 operation_generator.go:560] MountVolume.WaitForAttach succeeded for volume "pvc-85ad18e4-46e7-48ea-9e83-57148de58110" (UniqueName: "kubernetes.io/vsphere-volume/[vsanDatastore] 64ac5c5e-1447-dc56-80b1-e4434bbb68dc/d33d55d7-7191-422f-a2f6-2ee49079436a.internal-dynamic-pvc-85ad18e4-46e7-48ea-9e83-57148de58110.vmdk") pod "aspnet" (UID: "06d7f8e9-187f-4b38-b41a-5bce7a82f5ba") DevicePath "2"
I0625 00:01:04.320912    1760 attacher.go:212] vsphere MountDevice2\var\lib\kubelet\plugins\kubernetes.io\vsphere-volume\mounts\[vsanDatastore] 64ac5c5e-1447-dc56-80b1-e4434bbb68dc\d33d55d7-7191-422f-a2f6-2ee49079436a.internal-dynamic-pvc-85ad18e4-46e7-48ea-9e83-57148de58110.vmdk
E0625 00:01:07.109001    1760 mount_windows.go:236] mklink failed: fork/exec C:\Windows\system32\cmd.exe: invalid argument, output: ""

Environment:

  • Kubernetes version (use kubectl version):
kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:52:00Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.5+vmware.1", GitCommit:"f8b685623a975d20a9b35685c24611c44d464b3b", GitTreeState:"clean", BuildDate:"2020-05-01T21:32:30Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: vSphere Cloud Provider

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 38 (35 by maintainers)

Most upvoted comments

Verified in a new worker node with the flag set to OnlineAll; it did fix the issue. Thank you!
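
The OnlineAll value referenced here presumably corresponds to the Windows SAN disk policy, which controls whether newly attached disks are brought online automatically. A minimal sketch of checking and changing it on the Windows worker node, assuming an elevated PowerShell session; this restates the commenter's fix and is not a command sequence from the original report:

  # On the Windows worker node (elevated session):
  # show the current policy applied to newly attached disks
  Get-StorageSetting | Select-Object NewDiskPolicy
  # bring newly attached disks online automatically
  Set-StorageSetting -NewDiskPolicy OnlineAll

The same policy can also be changed with diskpart (san policy=OnlineAll) or baked into the node VM template so new workers come up with it already set.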