rook: MountVolume.SetUp failed for volume
Hi, everybody
```
$ cat /etc/centos-release
CentOS Linux release 7.4.1708 (Core)
$ uname -ar
Linux k8s-master11 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
```
```
$ k get nodes
NAME           STATUS   ROLES    AGE    VERSION
k8s-master11   Ready    master   111d   v1.18.0
k8s-master12   Ready    master   111d   v1.18.0
k8s-master13   Ready    master   111d   v1.18.0
k8s-worker14   Ready    <none>   111d   v1.18.0
k8s-worker15   Ready    <none>   111d   v1.18.0
```
```
[root@Ceph-1 ~]# ceph -s
  cluster:
    id:     4050a101-f634-4873-80b4-3c5c00c46073
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum Ceph-1,Ceph-2,Ceph-3
    mgr: Ceph-1(active), standbys: Ceph-3, Ceph-2
    mds: cephfs-1/1/1 up {0=Ceph-3=up:active}, 2 up:standby
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   2 pools, 256 pgs
    objects: 273 objects, 176MiB
    usage:   268GiB used, 932GiB / 1.17TiB avail
    pgs:     256 active+clean

[root@Ceph-1 ~]# ceph -v
ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
```
`cat ceph-secret.yaml`

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-ops
data:
  key: xxxxxxxxxxxxxxxxxx
```
`cat ceph-pv.yaml`

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv
  namespace: kube-ops
spec:
  capacity:
    storage: 300Gi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - 172.168.1.28:6789
      - 172.168.1.29:6789
      - 172.168.1.30:6789
    user: admin
    secretRef:
      name: ceph-secret
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pv-claim
  namespace: kube-ops
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 300Gi
```
`cat redis.yml`

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: kube-ops
  labels:
    app: redis-ci
spec:
  selector:
    matchLabels:
      name: redis
  template:
    metadata:
      labels:
        name: redis
    spec:
      containers:
        - name: redis
          image: 172.168.1.16/library/redis:v1
          imagePullPolicy: IfNotPresent
          ports:
            - name: redis
              containerPort: 6379
          volumeMounts:
            - mountPath: /data
              name: cephfs-pv-claim
          livenessProbe:
            exec:
              command:
                - redis-cli
                - ping
            initialDelaySeconds: 60
            timeoutSeconds: 30
          readinessProbe:
            exec:
              command:
                - redis-cli
                - ping
            initialDelaySeconds: 30
            timeoutSeconds: 20
      volumes:
        - name: cephfs-pv-claim
          persistentVolumeClaim:
            claimName: cephfs-pv-claim
```
```
$ k describe -n kube-ops pods
Type     Reason       Age                  From                    Message
----     ------       ----                 ----                    -------
Normal   Scheduled    <unknown>            default-scheduler       Successfully assigned kube-ops/redis-66f9494c6f-6kkkz to k8s-worker14
Warning  FailedMount  3m51s (x5 over 15m)  kubelet, k8s-worker14   Unable to attach or mount volumes: unmounted volumes=[cephfs-pv-claim], unattached volumes=[cephfs-pv-claim default-token-gtn77]: timed out waiting for the condition
Warning  FailedMount  94s (x2 over 12m)    kubelet, k8s-worker14   Unable to attach or mount volumes: unmounted volumes=[cephfs-pv-claim], unattached volumes=[default-token-gtn77 cephfs-pv-claim]: timed out waiting for the condition
Warning  FailedMount  48s (x16 over 17m)   kubelet, k8s-worker14   MountVolume.SetUp failed for volume "cephfs-pv" : CephFS: mount failed: mount failed: fork/exec /usr/bin/systemd-run: invalid argument
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/323f5a38-f5f0-48b0-a3d3-edb90a54d7d6/volumes/kubernetes.io~cephfs/cephfs-pv --scope -- mount -t ceph -o <masked>,<masked> 172.168.1.28:6789,172.168.1.29:6789,172.168.1.30:6789:/ /var/lib/kubelet/pods/323f5a38-f5f0-48b0-a3d3-edb90a54d7d6/volumes/kubernetes.io~cephfs/cephfs-pv
Output:
```
```
$ k logs -f -n kube-ops redis-66f9494c6f-6kkkz
Error from server (BadRequest): container "redis" in pod "redis-66f9494c6f-6kkkz" is waiting to start: ContainerCreating
```
```
[root@k8s-worker14 ~]# cat /var/log/messages
Aug 12 20:42:47 k8s-worker14 kubelet: E0812 20:42:47.671082 12835 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/cephfs/323f5a38-f5f0-48b0-a3d3-edb90a54d7d6-cephfs-pv podName:323f5a38-f5f0-48b0-a3d3-edb90a54d7d6 nodeName:}" failed. No retries permitted until 2020-08-12 20:44:49.670913359 +0800 CST m=+9667424.843439048 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"cephfs-pv\" (UniqueName: \"kubernetes.io/cephfs/323f5a38-f5f0-48b0-a3d3-edb90a54d7d6-cephfs-pv\") pod \"redis-66f9494c6f-6kkkz\" (UID: \"323f5a38-f5f0-48b0-a3d3-edb90a54d7d6\") : CephFS: mount failed: mount failed: fork/exec /usr/bin/systemd-run: invalid argument\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/323f5a38-f5f0-48b0-a3d3-edb90a54d7d6/volumes/kubernetes.io~cephfs/cephfs-pv --scope -- mount -t ceph -o <masked>,<masked> 172.168.1.28:6789,172.168.1.29:6789,172.168.1.30:6789:/ /var/lib/kubelet/pods/323f5a38-f5f0-48b0-a3d3-edb90a54d7d6/volumes/kubernetes.io~cephfs/cephfs-pv\nOutput: "
```
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 16 (3 by maintainers)
Commits related to this issue
- ceph-secret secret use stringData https://github.com/rook/rook/issues/6053 — committed to rocdove/examples by rocdove 2 years ago
Found it!
This is not a workaround; this is the solution!
Your Ceph secret likely contains the raw Ceph key directly under `data:`, as in the `ceph-secret.yaml` above. But values under `data:` in a Kubernetes Secret are required to already be base64-encoded for serialization purposes. So when you define the secret that way, Kubernetes base64-*decodes* your key before handing it to the mount helper, and the decoded bytes are not what `mount.ceph` expects in its `-o` flag. Thus, either base64-encode the Ceph key once more under `data:`, or put it verbatim under `stringData:`.
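Per the commit linked above, a sketch of the corrected secret using `stringData` (the key value here is a placeholder; substitute your real key):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-ops
stringData:
  # Paste the output of `ceph auth get-key client.admin` verbatim here;
  # Kubernetes base64-encodes stringData values itself on write.
  key: AQD-placeholder-key==
```

With `stringData`, decoding at mount time yields back exactly the key you pasted, so `mount.ceph` receives a valid secret.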