csi-digitalocean: Resize PVC failed: FailedMount, Invalid argument
What did you do? (required. The issue will be closed when not provided.)
I have a PVC of size 100Gi, and I want to resize it to 105Gi following the volume-expansion guide.
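For reference, a resize like this is normally requested by patching the claim's storage request and then restarting the consuming pod (a rough sketch of the steps, assuming the standard kubectl approach; the exact commands in the guide may differ):

kubectl -n $NS patch pvc tikv-myapp-tidb-tikv-0 --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"105Gi"}}}}'
# once the PVC reports FileSystemResizePending, restart the pod so the
# filesystem resize can finish on the node:
kubectl -n $NS delete pod myapp-tidb-tikv-0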
What did you expect to happen?
- The PVC $NS/tikv-myapp-tidb-tikv-0 is resized from 100Gi to 105Gi
- The PV pvc-ad8cf8ee-236b-42b6-a881-dcc3cdbdf191 is resized from 100Gi to 105Gi
- The (DO console) volume pvc-ad8cf8ee-236b-42b6-a881-dcc3cdbdf191 is resized from 100Gi to 105Gi

But after I restarted the pod, the PV and the volume in the DO console did resize, while the PVC did not.
Configuration (MUST fill this out):
- system logs:
kubectl -n $NS describe pod myapp-tidb-tikv-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> tidb-scheduler Successfully assigned myapp-tidb/myapp-tidb-tikv-0 to pool-ngx-app-1p0c
Warning FailedMount 12m kubelet, pool-ngx-app-1p0c Unable to attach or mount volumes: unmounted volumes=[tikv], unattached volumes=[config startup-script default-token-spjgt annotations tikv]: timed out waiting for the condition
Warning FailedMount 6m10s (x2 over 16m) kubelet, pool-ngx-app-1p0c Unable to attach or mount volumes: unmounted volumes=[tikv], unattached volumes=[tikv config startup-script default-token-spjgt annotations]: timed out waiting for the condition
Warning FailedMount 2m2s (x5 over 14m) kubelet, pool-ngx-app-1p0c Unable to attach or mount volumes: unmounted volumes=[tikv], unattached volumes=[annotations tikv config startup-script default-token-spjgt]: timed out waiting for the condition
Warning FailedMount 2m (x16 over 18m) kubelet, pool-ngx-app-1p0c MountVolume.MountDevice failed for volume "pvc-ad8cf8ee-236b-42b6-a881-dcc3cdbdf191" : rpc error: code = Internal desc = mounting failed: exit status 255 cmd: 'mount -t ext4 /dev/disk/by-id/scsi-0DO_Volume_pvc-ad8cf8ee-236b-42b6-a881-dcc3cdbdf191 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ad8cf8ee-236b-42b6-a881-dcc3cdbdf191/globalmount' output: "mount: mounting /dev/disk/by-id/scsi-0DO_Volume_pvc-ad8cf8ee-236b-42b6-a881-dcc3cdbdf191 on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ad8cf8ee-236b-42b6-a881-dcc3cdbdf191/globalmount failed: Invalid argument\n"
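When the ext4 mount fails with "Invalid argument" like this, the concrete reason is usually only visible in the node's kernel log. A hypothetical way to dig further on the affected node (not part of the original report):

dmesg | grep -i ext4 | tail
blkid /dev/disk/by-id/scsi-0DO_Volume_pvc-ad8cf8ee-236b-42b6-a881-dcc3cdbdf191
dumpe2fs -h /dev/disk/by-id/scsi-0DO_Volume_pvc-ad8cf8ee-236b-42b6-a881-dcc3cdbdf191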
kubectl -n $NS describe pvc tikv-myapp-tidb-tikv-0
Name: tikv-myapp-tidb-tikv-0
Namespace: myapp-tidb
StorageClass: do-ssd
Status: Bound
Volume: pvc-ad8cf8ee-236b-42b6-a881-dcc3cdbdf191
Labels: app.kubernetes.io/component=tikv
app.kubernetes.io/instance=myapp-tidb
app.kubernetes.io/managed-by=tidb-operator
app.kubernetes.io/name=tidb-cluster
tidb.pingcap.com/cluster-id=6803691304543823427
tidb.pingcap.com/store-id=6
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
tidb.pingcap.com/pod-name: myapp-tidb-tikv-0
volume.beta.kubernetes.io/storage-provisioner: dobs.csi.digitalocean.com
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 100Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: myapp-tidb-tikv-0
Conditions:
Type Status LastProbeTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
FileSystemResizePending True Mon, 01 Jan 0001 00:00:00 +0000 Fri, 13 Mar 2020 14:00:19 +0000 Waiting for user to (re-)start a pod to finish file system resize of volume on node.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 25m (x2 over 25m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "dobs.csi.digitalocean.com" or manually created by system administrator
Normal Provisioning 25m dobs.csi.digitalocean.com_master-k8s-ngx-app-cluster-do-0-sgp1_c7f7b0ad-94cc-49a1-8be1-10f0431eb3cb External provisioner is provisioning volume for claim "myapp-tidb/tikv-myapp-tidb-tikv-0"
Normal ProvisioningSucceeded 25m dobs.csi.digitalocean.com_master-k8s-ngx-app-cluster-do-0-sgp1_c7f7b0ad-94cc-49a1-8be1-10f0431eb3cb Successfully provisioned volume pvc-ad8cf8ee-236b-42b6-a881-dcc3cdbdf191
Warning ExternalExpanding 22m volume_expand Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
Normal Resizing 22m external-resizer dobs.csi.digitalocean.com External resizer is resizing volume pvc-ad8cf8ee-236b-42b6-a881-dcc3cdbdf191
Normal FileSystemResizeRequired 22m external-resizer dobs.csi.digitalocean.com Require file system resize of volume on node
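The requested vs. actually expanded size can be compared directly on the claim (a sketch using standard PVC fields; spec shows what was asked for, status what the resize has reached so far):

kubectl -n $NS get pvc tikv-myapp-tidb-tikv-0 \
  -o jsonpath='requested={.spec.resources.requests.storage} actual={.status.capacity.storage}{"\n"}'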
kubectl describe pv pvc-ad8cf8ee-236b-42b6-a881-dcc3cdbdf191
Name: pvc-ad8cf8ee-236b-42b6-a881-dcc3cdbdf191
Labels: app.kubernetes.io/component=tikv
app.kubernetes.io/instance=myapp-tidb
app.kubernetes.io/managed-by=tidb-operator
app.kubernetes.io/name=tidb-cluster
app.kubernetes.io/namespace=myapp-tidb
tidb.pingcap.com/cluster-id=6803691304543823427
tidb.pingcap.com/store-id=6
Annotations: pv.kubernetes.io/provisioned-by: dobs.csi.digitalocean.com
tidb.pingcap.com/pod-name: myapp-tidb-tikv-0
Finalizers: [kubernetes.io/pv-protection external-attacher/dobs-csi-digitalocean-com]
StorageClass: do-ssd
Status: Bound
Claim: myapp-tidb/tikv-myapp-tidb-tikv-0
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 105Gi
Node Affinity: <none>
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: dobs.csi.digitalocean.com
VolumeHandle: 8fd26b90-6532-11ea-9953-0a58ac14a251
ReadOnly: false
VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1584067308672-8081-dobs.csi.digitalocean.com
Events: <none>
kubectl describe sc do-ssd
Name: do-ssd
IsDefaultClass: Yes
Annotations: kubectl.kubernetes.io/last-applied-configuration={"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"do-ssd"},"provisioner":"dobs.csi.digitalocean.com","reclaimPolicy":"Retain","volumeBindingMode":"Immediate"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner: dobs.csi.digitalocean.com
Parameters: <none>
AllowVolumeExpansion: True
MountOptions: <none>
ReclaimPolicy: Retain
VolumeBindingMode: Immediate
Events: <none>
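For completeness, the StorageClass above corresponds to roughly this manifest (reconstructed from the last-applied-configuration annotation shown above):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: do-ssd
provisioner: dobs.csi.digitalocean.com
allowVolumeExpansion: true
reclaimPolicy: Retain
volumeBindingMode: Immediate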
- CSI Version: -
- Kubernetes Version:
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:30:10Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.6", GitCommit:"72c30166b2105cd7d3350f2c28a219e6abcd79eb", GitTreeState:"clean", BuildDate:"2020-01-18T23:23:21Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider/framework version, if applicable (such as Rancher): digitalocean
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 18
Thanks for the additional data point; I'll take it into account. The multi-attachment error could be for the same reason or a different one. Chances are it's another symptom of the same underlying problem.
Thank you so much for the elaborate description. I’ll give it a try and report back.
Hi @timoreimann , I will create a new cluster and post my steps later.