Kubernetes 1.16.3. Error: "failed to create rbd image: executable file not found in $PATH, command output: "
Issue #71904 still occurs on a freshly installed k8s 1.16.3 cluster set up with the kubeadm tool. What happened: Could not create a PVC.
/open
What you expected to happen: The PVC is created.
How to reproduce it (as minimally and precisely as possible):
sc.yml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd-data
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.x.x.1:6789, 10.x.x.2:6789, 10.x.x.3:6789
  adminId: kube
  adminSecretName: ceph-secret
  adminSecretNamespace: "kube-system"
  pool: rbd_data
  userId: kube
  userSecretName: ceph-secret-user
pvc.yml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-3gb
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: rbd-data
kubectl describe pvc test-3gb:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 2m5s (x20 over 30m) persistentvolume-controller Failed to provision volume with StorageClass "rbd-data": failed to create rbd image: executable file not found in $PATH, command output:
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T17:01:15Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:13:49Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration: private network installation
- OS (e.g. cat /etc/os-release):
kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-1.test.k8s Ready master 4d8h v1.16.3 10.x.x.x <none> CentOS Linux 7 (Core) 3.10.0-1062.el7.x86_64 docker://18.9.5
master-2.test.k8s Ready master 4d7h v1.16.3 10.x.x.x <none> CentOS Linux 7 (Core) 3.10.0-1062.el7.x86_64 docker://18.9.5
master-3.test.k8s Ready master 4d7h v1.16.3 10.x.x.x <none> CentOS Linux 7 (Core) 3.10.0-1062.el7.x86_64 docker://18.9.5
node-1.test.k8s Ready node 4d6h v1.16.3 10.x.x.x <none> CentOS Linux 7 (Core) 3.10.0-1062.el7.x86_64 docker://18.9.5
node-2.test.k8s Ready node 4d7h v1.16.3 10.x.x.x <none> CentOS Linux 7 (Core) 3.10.0-1062.el7.x86_64 docker://18.9.5
- Kernel (e.g. uname -a): Linux master-1.test.k8s 3.10.0-1062.el7.x86_64 #1 SMP Wed Aug 7 18:08:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
- Install tools: kubeadm 1.16.3
- Network plugin and version (if this is a network-related bug): Calico
- Others:
About this issue
- State: closed
- Created 5 years ago
- Comments: 29 (13 by maintainers)
I’ve noticed that kubeadm uses image: k8s.gcr.io/kube-controller-manager, which doesn’t contain rbd, yet the RBD plugin is marked here as an internal provisioner. I think kubeadm has to use another image with rbd, or k8s.gcr.io/kube-controller-manager should have rbd in it.
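One way to confirm the binary is actually missing is to exec into the controller-manager static pod; this is just a sketch, and the pod name below assumes kubeadm's default naming on this cluster:
# If the image ships without the ceph-common tools, this fails with
# "executable file not found in $PATH", matching the provisioning error above.
kubectl -n kube-system exec kube-controller-manager-master-1.test.k8s -- rbd --version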
Workaround: changing
image: k8s.gcr.io/kube-controller-manager:v1.16.3
to
image: gcr.io/google_containers/hyperkube:v1.16.3
in /etc/kubernetes/manifests/kube-controller-manager.yaml
solves the issue, but it is only a !workaround!. kubeadm has to provision the right image for kube-controller-manager.
Please fix the documentation if ceph/rbd is not working. Don’t let more users get annoyed.
https://kubernetes.io/docs/concepts/storage/storage-classes/
thanks…
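The image swap from the workaround above can be applied on each control-plane node; a minimal sketch, assuming the default kubeadm manifest path and an otherwise unmodified manifest (the kubelet restarts the static pod automatically once the file changes):
# Swap the controller-manager image for the hyperkube image, which per the comment above includes rbd.
sed -i 's|image: k8s.gcr.io/kube-controller-manager:v1.16.3|image: gcr.io/google_containers/hyperkube:v1.16.3|' \
  /etc/kubernetes/manifests/kube-controller-manager.yaml
# Verify the pod came back with the new image.
kubectl -n kube-system get pods -l component=kube-controller-manager -o wide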
For anyone coming across this particular issue, I moved my cluster to Ceph-CSI. Migration is quite painful, but I was able to do it without having to resort to a data restore: set the PV to Retain, scale the Deployment to 0, delete the PVC, create a new PVC with the new StorageClass, delete the newly created RBD image, and rename the old image to the new name. StatefulSets were a bit more involved but worked basically the same.
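For reference, a rough sketch of those migration steps with kubectl and the rbd CLI (every resource, pool, and image name below is an illustrative placeholder, not a value from this issue):
# 1. Make sure the old PV is not deleted when its claim goes away.
kubectl patch pv pvc-<old-pv-uid> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
# 2. Stop the workload that uses the volume.
kubectl scale deployment my-app --replicas=0
# 3. Replace the claim with one that uses the Ceph-CSI StorageClass (same name and size).
kubectl delete pvc test-3gb
kubectl apply -f pvc-test-3gb-csi.yaml
# 4. On a Ceph admin host: drop the empty image the CSI provisioner just created and
#    rename the original image to the name the new PV points at.
rbd rm rbd_data/csi-vol-<new-uid>
rbd rename rbd_data/kubernetes-dynamic-pvc-<old-uid> rbd_data/csi-vol-<new-uid>
# 5. Scale the workload back up.
kubectl scale deployment my-app --replicas=1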
Is there any way to upgrade to 1.19 without breaking Ceph support?
@humblec Was there a conclusion on this issue?
@humblec We are going in the other direction: we are pushing to remove things and end up with distroless images. If you need specific changes, the recommendation is to roll your own. (The problem being addressed is the security posture / the tons of CVEs in the images we ship.)