cri-o: error: did not receive slice as parent

Running hack/local-up-cluster.sh

E0912 17:28:03.825920   27272 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = cri-o configured with systemd cgroup manager, but did not receive slice as parent: /kubepods/pod6f71cc2a-9809-11e7-9b63-7085c20cf2ab

Running rc1 cri-o package from the repos on Fedora 26.

@mrunalp @derekwaynecarr

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 24 (13 by maintainers)

Most upvoted comments

Hi guys, I've finally found a solution. Add --cgroup-driver=systemd to KUBELET_KUBEADM_ARGS in /var/lib/kubelet/kubeadm-flags.env, then:

systemctl daemon-reload
systemctl restart kubelet

and it should work.
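
For illustration, the file might end up looking roughly like this (the runtime flags are just placeholders borrowed from the script further down; the only addition that matters here is --cgroup-driver=systemd):

# /var/lib/kubelet/kubeadm-flags.env (illustrative; your generated flags will differ)
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock"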

Did you set CGROUP_DRIVER=systemd for kube?
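
For the hack/local-up-cluster.sh case, a minimal sketch, assuming the script picks CGROUP_DRIVER up from the environment:

# run from the kubernetes source tree
CGROUP_DRIVER=systemd ./hack/local-up-cluster.sh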

Made a small script for this that works for me with k8s 1.20 and cri-o 1.20. I ran this on all nodes:

# more info at https://github.com/cri-o/cri-o/blob/master/tutorials/kubeadm.md
KUBELET_EXTRA_ARGS="--feature-gates=\"AllAlpha=false,RunAsGroup=true\" --container-runtime=remote --cgroup-driver=systemd --container-runtime-endpoint='unix:///var/run/crio/crio.sock' --runtime-request-timeout=5m"
# add kubelet extra args to /etc/default/kubelet if not already there
grep -qxF "KUBELET_EXTRA_ARGS=${KUBELET_EXTRA_ARGS}" /etc/default/kubelet \
|| echo "KUBELET_EXTRA_ARGS=${KUBELET_EXTRA_ARGS}" \
| sudo tee /etc/default/kubelet
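
After the file is written, the same reload/restart as in the earlier comment applies:

sudo systemctl daemon-reload
sudo systemctl restart kubelet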


Hi, after adding cgroup-manager: systemd to /etc/crictl.yaml, should it work? I am very confused by my errors…

We should probably change this if we are going to default to systemd, but if we stick with the default of cgroupfs then it is fine.

I don't know how to modify it when using systemd; is there a sample?

In your /etc/crio/crio.conf, edit the line

cgroup_manager = "$MANAGER"

where $MANAGER is either cgroupfs or systemd
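
For a concrete systemd sample (a minimal sketch; cgroup_manager sits in the [crio.runtime] table of crio.conf, and CRI-O has to be restarted to pick the change up):

# /etc/crio/crio.conf
[crio.runtime]
cgroup_manager = "systemd"

# restart CRI-O so the new cgroup manager takes effect
sudo systemctl restart crio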