containerd: Kubeadm unknown service runtime.v1alpha2.RuntimeService
Problem
Following the official Kubernetes installation instructions for containerd, kubeadm init fails with unknown service runtime.v1alpha2.RuntimeService.
# Commands from https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd
apt-get update && apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
apt-get update && apt-get install -y containerd.io
# Configure containerd
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd
kubeadm init
...
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2020-09-24T11:49:16Z" level=fatal msg="getting status of runtime failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
Solution:
rm /etc/containerd/config.toml
systemctl restart containerd
kubeadm init
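To confirm the CRI endpoint is back before re-running kubeadm, a quick check with crictl (a sketch; assumes crictl is installed and containerd's default socket path):
# Query the RuntimeService that the preflight check was failing on
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info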
Versions:
- Ubuntu 20.04 (focal)
- containerd.io 1.3.7
- kubectl 1.19.2
- kubeadm 1.19.2
- kubelet 1.19.2
About this issue
- State: closed
- Created 4 years ago
- Reactions: 191
- Comments: 43 (5 by maintainers)
Links to this issue
Commits related to this issue
- Bug workaround: Kubernetes v1.24.0 will not init See link https://github.com/containerd/containerd/issues/4581#issuecomment-1128256599 Same fix for older k8s works on latest 1.24.0 — committed to BuildAndDestroy/container_toolkit by BuildAndDestroy 2 years ago
- Fixed https://github.com/containerd/containerd/issues/4581 — committed to rhys96/vagrant-kubernetes-cluster by deleted user 2 years ago
- Error fix related to [this issue](https://github.com/containerd/containerd/issues/4581). — committed to samjtro/k8s-init by samjtro 2 years ago
- Node-playbook && Master FIXED github.com/containerd/containerd/issues/4581 — committed to ThauMish/Kube-Vagrant-Ansible by ThauMish a year ago
- fix(debian kube-mast): :bug: fix unknown service runtime.v1alpha2.RuntimeService. when trying to install the kube mast on a debian image you get " unknown service runtime.v1alpha2.RuntimeService." du... — committed to shaharby7/kubernetes-under-the-hood by shaharby7 a year ago
Thanks! You helped me with this solution!
I followed the official instructions here https://kubernetes.io/docs/setup/production-environment/container-runtimes/ and was getting a similar error.
I checked /etc/containerd/config.toml and saw disabled_plugins = [].
Note: the only thing I changed in config.toml was setting systemd to true; that was different from what the docs describe (maybe this was the problem?).
Deleting this config.toml as given in the first post and restarting the containerd service solved it, and kubeadm could proceed.
Apparently it was my fault this time. My Ansible playbook did not override the config.toml file as I expected. Sorry for taking up your time; the default installation instructions work great.
In the config.toml file installed by the containerd.io package there is the line disabled_plugins = ["cri"], which I am guessing is creating the issue. That may be a bad default setting to have in the containerd.io package, but that is for another issue/bug. Closing.
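If you would rather keep the packaged config than delete it, a single edit re-enables the plugin. A sketch, assuming the stock containerd.io config at the usual path:
# Re-enable the CRI plugin that the containerd.io package disables by default
sudo sed -i 's/disabled_plugins = \["cri"\]/disabled_plugins = []/' /etc/containerd/config.toml
sudo systemctl restart containerd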
Heads up, this just happened to me on a clean install of Kubernetes v1.24.0 on Ubuntu 20.04.4 LTS. The original fix helped me as well.
This comment saved my day 👍. The default Docker configuration removes CRI.
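A quick way to check whether your installation is affected (plain grep, nothing containerd-specific):
# If this prints disabled_plugins = ["cri"], the CRI plugin is turned off
grep disabled_plugins /etc/containerd/config.toml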
CentOS 7, Linux 5.19.1-1.el7.elrepo.x86_64
Case 1: Kubernetes 1.2x binary installation reports this error. Solution:
Case 2: kubespray installs Kubernetes 1.25.3 and reports this error. Solution:
Then rerun kubespray.
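A sketch of what those two solutions presumably look like, based on the original fix in this thread (the kubespray inventory path is an example, not from this comment):
# On each node: remove the broken config and restart containerd
sudo rm /etc/containerd/config.toml
sudo systemctl restart containerd
# Case 2 only: rerun kubespray afterwards
ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml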
It finally got fixed for me.
Fix: update containerd to the latest version and fix the toml file with the change below:
disabled_plugins = ["cri"] -> disabled_plugins = [""]
Done. Thank me later 😉
Got this error too.
The following steps worked for me:
I struggled with this too. The solution for me was to comment out this line (presumably the disabled_plugins = ["cri"] entry) in /etc/containerd/config.toml.
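For reference, the same edit done non-interactively, assuming the line in question is the disabled_plugins entry:
# Comment out the disabled_plugins line so the CRI plugin loads again
sudo sed -i 's/^disabled_plugins/#disabled_plugins/' /etc/containerd/config.toml
sudo systemctl restart containerd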
I just ran apt-get upgrade and now my control plane and all workers are failing to run containerd, and thus also kubelet. The logs for sudo service containerd status show:
It seems the apt-get upgrade reverted my changes to /etc/containerd/config.toml and set SystemdCgroup back to false, as well as systemd_cgroup. Why does this keep on reverting? Additionally, why are these defaulted to false? It seems like perhaps there should be some enhanced logic in the generated default config that detects whether systemd is in use and sets those values to true?
EDIT / Update + solution: in my case, I had to set SystemdCgroup = true and systemd_cgroup = false. Leaving systemd_cgroup = true resulted in an error on containerd startup.
Hi folks, my bash history from the new control plane is below; it works.
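The history itself is not reproduced here; below is a sketch of the sequence this thread converges on, not the commenter's exact commands:
# Replace the Docker-oriented packaged config with containerd's own defaults
containerd config default | sudo tee /etc/containerd/config.toml
# Switch runc to the systemd cgroup driver (SystemdCgroup = true, as discussed above)
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo kubeadm init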
Got the same issue and fixed it by installing the containerd.io package from the Docker repository instead of the one from Ubuntu's repository. See: https://docs.docker.com/engine/install/ubuntu/#set-up-the-repository
I have Ubuntu 22.04.2 on VMs and Raspberry Pis.
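For reference, the repository setup from that Docker guide looks like this (a sketch; commands follow the linked docs and may change upstream):
# Add Docker's official GPG key and apt repository, then install containerd.io
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y containerd.io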
Also, it seems there is presently an issue retrieving the gpg key from https://packages.cloud.google.com/apt/doc/apt-key.gpg
@hamedsol Thanks for the hint. I was finally able to init the cluster after installing the containerd binaries by following this guide: https://www.itzgeek.com/how-tos/linux/ubuntu-how-tos/install-containerd-on-ubuntu-22-04.html
Thou Saved The Day
Thanks for the reply, this is the only solution that worked for me.
rm /etc/containerd/config.toml
systemctl restart containerd
kubeadm init
Unfortunately the config in the containerd.io package has, since forever, had a bad configuration for Kubernetes tools. The bad configuration is that it installs a version of the config for containerd that is only good for Docker. This config needs to be replaced, at least with the default containerd config, and you can modify it from there if you like.
"containerd config default > /etc/containerd/config.toml" will overwrite Docker's version of the config and replace it with containerd's version, which also works just fine for Docker. Then restart containerd.