cri-o: Error log: Failed to create existing container

Description
I checked the similar issues #3259 and #4465 , but the workarounds there did not fix it for me.
In my case, the error messages start appearing after restarting crio & kubelet.
The messages disappear after a reboot, but is there a way to resolve this without rebooting?

Steps to reproduce the issue:

  1. systemctl stop kubelet
  2. systemctl restart crio
  3. systemctl start kubelet
  4. journalctl -xeu kubelet -f
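
For convenience, filtering the kubelet journal for just this error makes the once-per-minute cadence easy to see (a small sketch, assuming kubelet runs as the systemd unit named kubelet):

journalctl -u kubelet -f | grep "Failed to create existing container"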

Describe the results you received:
I tested with both cgroup drivers, and in each case the kubelet logged the errors and warnings below every minute.

# kubelet & crio cgroup driver set to systemd:

kubelet[16645]: E1119 12:07:05.393398   16645 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e900d5d_3559_4bc2_9b52_761eb5ed3c3f.slice/crio-217e7ff7799d62d02282afd1a0f2a8bbba85e62c8614ad4bf0aa9c029f2a0661.scope: Error finding container 217e7ff7799d62d02282afd1a0f2a8bbba85e62c8614ad4bf0aa9c029f2a0661: Status 404 returned error &{%!s(*http.body=&{0xc001297aa0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x55fd88bf3020) %!s(func() error=0x55fd88bf2fa0)}

# kubelet & crio cgroup driver set to cgroupfs:

kubelet[7608]: E1203 12:30:49.996168    7608 manager.go:1123] Failed to create existing container: /kubepods/burstable/pod5f65e3d8d489e8fea295a7cd01aff842/crio-a3e66d2af18c551b3a281b5369de361a19ed1ea1ed508563fbdf94366cb321eb: Error finding container a3e66d2af18c551b3a281b5369de361a19ed1ea1ed508563fbdf94366cb321eb: Status 404 returned error &{%!s(*http.body=&{0xc000b6cfd8 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x55fe06c23f40) %!s(func() error=0x55fe06c23ec0)}
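
For reference, the effective cgroup driver on each side can be double-checked like this (a quick sketch; the kubelet config path assumes a kubeadm-style install):

# kubelet's configured cgroup driver (path may differ on other setups)
grep cgroupDriver /var/lib/kubelet/config.yaml

# CRI-O's cgroup manager, taken from the effective configuration
crio config 2>/dev/null | grep cgroup_manager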

Describe the results you expected:
No “Failed to create existing container” errors or warnings in the kubelet logs.

Additional information you deem important (e.g. issue happens only occasionally):
I received these logs every 60 seconds after restarting crio and kubelet.

Output of crio --version:

crio version 1.22.1
Version:          1.22.1
GitCommit:        63ca93845d5fe05cdca826367afcb601ece8d7ad
GitTreeState:     clean
BuildDate:        2021-11-11T20:24:17Z
GoVersion:        go1.16.8
Compiler:         gc
Platform:         linux/amd64
Linkmode:         dynamic
BuildTags:        exclude_graphdriver_devicemapper, seccomp
SeccompEnabled:   true
AppArmorEnabled:  false

Additional environment details (AWS, VirtualBox, physical, etc.):
OS: CentOS 7
Kubernetes v1.22.1

crio configuration:

...
[crio.runtime]
conmon_cgroup = "system.slice"
cgroup_manager = "systemd"
...

Kubelet configuration:

apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerPolicy: static
cpuManagerReconcilePeriod: 5s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
kubeReserved:
  cpu: 500m
kubeletCgroups: /systemd/system.slice
logging: {}
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 28 (7 by maintainers)

Most upvoted comments

So I got bitten by this, and decided to troubleshoot it a bit. It looks like when kubelet is not running and you delete a pod from cri-o directly (using crictl rmp $POD_ID), its corresponding systemd slice is left around, for some reason:

# systemctl stop kubelet
# crictl pods -v | grep -A1 -B1 'Name: kube-proxy'
ID: aa2f0c23f03449a0113926e4f6bb73f9cbf8f7cef0365c0e7ea4cc325a7bf432
Name: kube-proxy-mpt7q
UID: 2e0090b5-3385-4d73-93fa-31e5e17999c7
# systemctl -t slice | grep 2e0090b5
  UNIT                                                              LOAD   ACTIVE SUB    DESCRIPTION                                                                             
  kubepods-besteffort-pod2e0090b5_3385_4d73_93fa_31e5e17999c7.slice loaded active active libcontainer container kubepods-besteffort-pod2e0090b5_3385_4d73_93fa_31e5e17999c7.slice
# crictl stopp aa2f0c23f03449a0113926e4f6bb73f9cbf8f7cef0365c0e7ea4cc325a7bf432
Stopped sandbox aa2f0c23f03449a0113926e4f6bb73f9cbf8f7cef0365c0e7ea4cc325a7bf432
# crictl rmp aa2f0c23f03449a0113926e4f6bb73f9cbf8f7cef0365c0e7ea4cc325a7bf432
Removed sandbox aa2f0c23f03449a0113926e4f6bb73f9cbf8f7cef0365c0e7ea4cc325a7bf432
# systemctl -t slice | grep 2e0090b5
  kubepods-besteffort-pod2e0090b5_3385_4d73_93fa_31e5e17999c7.slice loaded active active libcontainer container kubepods-besteffort-pod2e0090b5_3385_4d73_93fa_31e5e17999c7.slice

and looking at the system logs, there is no corresponding

Apr 24 09:52:21 node-4 systemd[1]: Removed slice libcontainer container kubepods-besteffort-podc660be45_144b_4f1a_9819_e07d5f779161.slice.

message, which does appear when kubelet is running and a pod is deleted through the k8s API server.

This is a bit surprising: given that the slice description is libcontainer container kubepods-besteffort-pod2e0090b5_3385_4d73_93fa_31e5e17999c7.slice, I would have expected runc to manage its lifecycle, but the only explanation for the behaviour above is that it is actually the kubelet that manages the slices?

Later edit: turns out that kubelet also uses libcontainer to manage systemd cgroups, hence the slice name (which is set here, btw: https://github.com/opencontainers/runc/blob/255fe4099ed06edf5416c6af7dd736fcd8f3c5d2/libcontainer/cgroups/systemd/v1.go#L177 ).
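
For what it's worth, the description string can be verified directly on a live slice (unit name taken from the example above):

systemctl show -p Description kubepods-besteffort-pod2e0090b5_3385_4d73_93fa_31e5e17999c7.slice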

So in order to fix this, after stopping kubelet and deleting all the pods with crictl rmp, I also manually removed the leftover slices:

# list all kubepods-* slices and stop each one (stopping a transient slice removes it)
for slice in $(systemctl -t slice | grep -E '^\s*kubepods' | awk '{ print $1 }'); do \
  echo "deleting slice $slice..."; \
  systemctl stop "$slice"; \
done
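
Before starting kubelet again, a quick sanity check that nothing is left over:

systemctl -t slice | grep kubepods || echo "no leftover kubepods slices"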

I then started kubelet, which created new pods, and the error messages went away.

I've been digging into this for a while. I've discovered that the UIDs of the pods are actually there:

grep "\($( kubectl get pods -A -ojsonpath='{.items[*].metadata.uid}' | sed -e 's/-/_/g' -e 's/ /\\|/g' )\)" /var/log/syslog \
  | tail -n 10

But kubelet says nope and keeps returning 404:

Jan 25 14:23:13 astrohost2 kubelet[1708301]: E0125 14:23:13.263181 1708301 manager.go:1121] Failed to create existing container: /pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod112d0a72_753c_42be_a57a_d57ab0ee749f.slice/crio-2bf9ddb89e7bf9926787d8ac69e04cc9ba5de941cb75f833f376ec1a2274fd1e.scope: Error finding container 2bf9ddb89e7bf9926787d8ac69e04cc9ba5de941cb75f833f376ec1a2274fd1e: Status 404 returned error can't find the container with id 2bf9ddb89e7bf9926787d8ac69e04cc9ba5de941cb75f833f376ec1a2274fd1e
Jan 25 14:23:13 astrohost2 kubelet[1708301]: E0125 14:23:13.265862 1708301 manager.go:1121] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2275207_f415_4f62_ad5c_b5ae625b1dbb.slice/crio-7b01662e5efd97bbfb17a4ccb9511b819ea3556d63c2677866bcd52f5d57e2c4.scope: Error finding container 7b01662e5efd97bbfb17a4ccb9511b819ea3556d63c2677866bcd52f5d57e2c4: Status 404 returned error can't find the container with id 7b01662e5efd97bbfb17a4ccb9511b819ea3556d63c2677866bcd52f5d57e2c4
Jan 25 14:23:13 astrohost2 kubelet[1708301]: E0125 14:23:13.266447 1708301 manager.go:1121] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf1e2a88_c98b_4c4b_a9c5_988267d71b9a.slice/crio-5460e61413dc9abd06c4e22595b13afb2fa16c4907e8585ee0d5d28865b5457f.scope: Error finding container 5460e61413dc9abd06c4e22595b13afb2fa16c4907e8585ee0d5d28865b5457f: Status 404 returned error can't find the container with id 5460e61413dc9abd06c4e22595b13afb2fa16c4907e8585ee0d5d28865b5457f
Jan 25 14:23:13 astrohost2 kubelet[1708301]: E0125 14:23:13.269040 1708301 manager.go:1121] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea4220c3_0501_4c60_828f_4dbc4d643abc.slice/crio-d66e197a527ab1953ac59628624a3a0448a645f76bd9be1d6337dfc82b3b56de.scope: Error finding container d66e197a527ab1953ac59628624a3a0448a645f76bd9be1d6337dfc82b3b56de: Status 404 returned error can't find the container with id d66e197a527ab1953ac59628624a3a0448a645f76bd9be1d6337dfc82b3b56de
Jan 25 14:23:13 astrohost2 kubelet[1708301]: E0125 14:23:13.269808 1708301 manager.go:1121] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod769cdbbb_7fb3_4aed_9a67_35d2b94f8b24.slice/crio-6bce81b4e7f23c72c161978d70991ae3a83ceaffc5938eadd2e88f26440e38ca.scope: Error finding container 6bce81b4e7f23c72c161978d70991ae3a83ceaffc5938eadd2e88f26440e38ca: Status 404 returned error can't find the container with id 6bce81b4e7f23c72c161978d70991ae3a83ceaffc5938eadd2e88f26440e38ca
Jan 25 14:23:13 astrohost2 kubelet[1708301]: E0125 14:23:13.271797 1708301 manager.go:1121] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36203036_b6d4_4fc7_9a46_7a706c598569.slice/crio-66d3e8db27cf7806c82a146097385264f9058d369e0dd0e7471c868700d588a2.scope: Error finding container 66d3e8db27cf7806c82a146097385264f9058d369e0dd0e7471c868700d588a2: Status 404 returned error can't find the container with id 66d3e8db27cf7806c82a146097385264f9058d369e0dd0e7471c868700d588a2
Jan 25 14:23:13 astrohost2 kubelet[1708301]: E0125 14:23:13.274054 1708301 manager.go:1121] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8cae3df_9879_436e_8b3f_01a8f93d13e6.slice/crio-0875f1b433322432fc65265a8892117dc454a573c82e2492d112acc70ac0a5d3.scope: Error finding container 0875f1b433322432fc65265a8892117dc454a573c82e2492d112acc70ac0a5d3: Status 404 returned error can't find the container with id 0875f1b433322432fc65265a8892117dc454a573c82e2492d112acc70ac0a5d3
Jan 25 14:23:13 astrohost2 kubelet[1708301]: E0125 14:23:13.274245 1708301 manager.go:1121] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde9a9d59_de39_4039_b6da_07cd3d578d4b.slice/crio-e980c55f3384ce9b4ace5c29635090ce1bc06432a98b5cf7363701d3fcb9080d.scope: Error finding container e980c55f3384ce9b4ace5c29635090ce1bc06432a98b5cf7363701d3fcb9080d: Status 404 returned error can't find the container with id e980c55f3384ce9b4ace5c29635090ce1bc06432a98b5cf7363701d3fcb9080d
Jan 25 14:23:13 astrohost2 kubelet[1708301]: E0125 14:23:13.277477 1708301 manager.go:1121] Failed to create existing container: /pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e1cbcaf_2668_4572_ab28_08449e1489f7.slice/crio-0d90a6c89261b5b898ea73b2deb83f7940182b6aa5c94323f2d026fad73a5241.scope: Error finding container 0d90a6c89261b5b898ea73b2deb83f7940182b6aa5c94323f2d026fad73a5241: Status 404 returned error can't find the container with id 0d90a6c89261b5b898ea73b2deb83f7940182b6aa5c94323f2d026fad73a5241
Jan 25 14:23:13 astrohost2 kubelet[1708301]: E0125 14:23:13.280810 1708301 manager.go:1121] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36203036_b6d4_4fc7_9a46_7a706c598569.slice/crio-f1a1b0205748e4da1b4f513470d1a196d75da5a76cfbaa60a475cfe6e09a5845.scope: Error finding container f1a1b0205748e4da1b4f513470d1a196d75da5a76cfbaa60a475cfe6e09a5845: Status 404 returned error can't find the container with id f1a1b0205748e4da1b4f513470d1a196d75da5a76cfbaa60a475cfe6e09a5845

Is there a way to manually simulate the request kubelet makes against the CRI and verify it?

I figured the only way to solve this is to manually step through the process.
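
As a rough way to step through it by hand, something like this checks whether CRI-O still knows about one of the container IDs from the logs above. The crictl call goes through the CRI; the curl call only approximates the cAdvisor-style lookup the kubelet relies on, and the socket path and /containers/{id} endpoint are assumptions on my part:

CID=2bf9ddb89e7bf9926787d8ac69e04cc9ba5de941cb75f833f376ec1a2274fd1e

# ask CRI-O through the CRI: does it know this container at all?
crictl inspect "$CID" > /dev/null && echo "known to CRI-O" || echo "not found"

# approximation of the cAdvisor-style lookup against the CRI-O socket
# (socket path and endpoint are assumptions, not taken from this issue)
curl -s --unix-socket /var/run/crio/crio.sock "http://localhost/containers/$CID"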

Version Info

# crio --version
crio version 1.26.1
Version:        1.26.1
GitCommit:      unknown
GitCommitDate:  unknown
GitTreeState:   clean
BuildDate:      2023-01-10T21:29:18Z
GoVersion:      go1.19
Compiler:       gc
Platform:       linux/amd64
Linkmode:       dynamic
BuildTags:      
  apparmor
  seccomp
  containers_image_ostree_stub
  exclude_graphdriver_btrfs
  exclude_graphdriver_devicemapper
  containers_image_openpgp
LDFlags:          -s -w -X github.com/cri-o/cri-o/internal/pkg/criocli.DefaultsPath="" -X github.com/cri-o/cri-o/internal/version.buildDate=2023-01-10T21:29:18Z 
SeccompEnabled:   true
AppArmorEnabled:  true
Dependencies:     
  
# kubelet --version
Kubernetes v1.26.0

Edit: Just upgraded kubelet to 1.26.1 and the results are still the same.

Here is CRI-O's debug-level log:
crio_debug.txt


crictl pods:

POD ID              CREATED             STATE               NAME                                      NAMESPACE           ATTEMPT             RUNTIME
8c25dcb52ba15       30 seconds ago      Ready               calico-kube-controllers-588575d68-v52zm   calico-system       0                   (default)
74664d211c6ab       31 seconds ago      Ready               calico-apiserver-66c76fb49-lnkmz          calico-apiserver    0                   (default)
e398c46dd2223       31 seconds ago      Ready               cadvisor-4nzpt                            monitoring-vip      0                   (default)
98cfeec86731f       33 seconds ago      Ready               calico-node-5wtt5                         calico-system       0                   (default)
7e8281ca8ab95       34 seconds ago      Ready               coredns-78fcd69978-4dr9p                  kube-system         0                   (default)
e619b4d979cf1       34 seconds ago      Ready               calico-apiserver-66c76fb49-kd6rb          calico-apiserver    0                   (default)
514ce4613d143       34 seconds ago      Ready               testpod                                   default             0                   (default)
e2e67bb8c368f       34 seconds ago      Ready               tigera-operator-b78466769-52jvp           tigera-operator     0                   (default)
bb9d4faa2ed29       35 seconds ago      Ready               kube-proxy-r8mcf                          kube-system         0                   (default)
b1854d14380eb       35 seconds ago      Ready               calico-typha-6d87ccd479-xqnpb             calico-system       0                   (default)
e95e5b6eff713       35 seconds ago      Ready               coredns-78fcd69978-bms47                  kube-system         0                   (default)
a3afd8cd974ed       43 seconds ago      Ready               kube-apiserver-k8s-2-lab01                kube-system         0                   (default)
0c0fc1a7e6cec       43 seconds ago      Ready               etcd-k8s-2-lab01                          kube-system         0                   (default)
d8ec8adbea85b       43 seconds ago      Ready               kube-scheduler-k8s-2-lab01                kube-system         0                   (default)
e9fd416fb7bb2       43 seconds ago      Ready               kube-controller-manager-k8s-2-lab01       kube-system         0                   (default)

crictl ps -a:

CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID
627f358178ac4       docker.io/library/nginx@sha256:2e87d9ff130deb0c2d63600390c3f2370e71e71841573990d54579bc35046203   27 seconds ago      Running             testpod                   9                   514ce4613d143
359783bdf5078       e4ee02aeee09f1176937af0970505802390c206cdfd8ea929c8ea7936facdb47                                  33 seconds ago      Running             calico-kube-controllers   10                  8c25dcb52ba15
118b953aac7f7       ea725dc15d717ece6550940f909d6c662da85822c7583e4ac66c1d4220bf39a4                                  34 seconds ago      Running             calico-apiserver          11                  74664d211c6ab
430a9936c8f67       9b7965ed45041110fad9da8c6946875ec50b3d5814b46a52cb2a2362bf346e31                                  34 seconds ago      Running             calico-node               15                  98cfeec86731f
0984dace7bc5b       8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44                                  34 seconds ago      Running             coredns                   9                   e95e5b6eff713
11e01abca9194       2d9c916993645ccc053c127e4f7481dc79b118fcaf881ecba8ac100e4c71e1b4                                  34 seconds ago      Running             cadvisor                  9                   e398c46dd2223
7db66c365550a       2c8aa43a8d6d53ade09e44fd9cdf52637fa2a9cac1682ea7aebbf79dbff7d1a0                                  36 seconds ago      Exited              install-cni               15                  98cfeec86731f
add42145d1c3a       8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44                                  36 seconds ago      Running             coredns                   9                   7e8281ca8ab95
4d5dd1c5f3f51       ea725dc15d717ece6550940f909d6c662da85822c7583e4ac66c1d4220bf39a4                                  36 seconds ago      Running             calico-apiserver          10                  e619b4d979cf1
9ab0148bb2d29       0d3b19c2d4d5e514f4d8aaa85c98dff13b1594847aebbdadd9ef33cba7a3761e                                  36 seconds ago      Exited              flexvol-driver            15                  98cfeec86731f
be738ea759782       53d842c232878f957c11644a2265e52ab20afae9a0a0ed2a5556e9180b45bc2d                                  38 seconds ago      Running             tigera-operator           20                  e2e67bb8c368f
aff1986acdda3       36c4ebbc9d979f15a0316c6dde446c556250d397e2085375cfbaf2660272d912                                  38 seconds ago      Running             kube-proxy                15                  bb9d4faa2ed29
fafaeb8f1af37       96705d7d0eb9d769d722ce942f5166e589b5a50e3cffccbcd557a425f4615eaf                                  38 seconds ago      Running             calico-typha              15                  b1854d14380eb
81038b1cfd5fb       f30469a2491a5580699fc45c6416bc22f0540ed4d66672e9079f8883b9083e2c                                  46 seconds ago      Running             kube-apiserver            22                  a3afd8cd974ed
e9c7c7e26ebd4       0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba                                  46 seconds ago      Running             etcd                      62                  0c0fc1a7e6cec
0522fdc27226c       6e002eb89a88130791169af169a48aa8a029271dfb8d8183b11122c045565433                                  46 seconds ago      Running             kube-controller-manager   23                  e9fd416fb7bb2
9ea8f6b6a0098       aca5ededae9c8e47a73d5e43ef6666083ada2df2fdb9c8dc69c5803a92f5a307                                  46 seconds ago      Running             kube-scheduler            22                  d8ec8adbea85b

Also, the following workaround did not work,

    systemctl stop kubelet
    systemctl restart crio
    crictl rmp -fa
    systemctl start kubelet

and there were some errors from crictl rmp -fa:

Stopped sandbox e9fd416fb7bb24cbf8b76f8f305a26befeec5a28f17810515f5607b53ccb8638
Removed sandbox e9fd416fb7bb24cbf8b76f8f305a26befeec5a28f17810515f5607b53ccb8638
Stopped sandbox 0c0fc1a7e6cec0a549392ca03fb2d46eb3b73b9dd53ef5a22c897c348bbd50cd
Stopped sandbox 98cfeec86731fd268fb5802a15d17bee0a5a76f61d45e5132f490084169a11de
Stopped sandbox b1854d14380eb5e5381a90c08d7aa504f38c1bdacd8f996d5e63231921fb2df3
Stopped sandbox e2e67bb8c368f3d27f2ae5e1b5aa3631059edfceb776e910f1eda61d18c76b4c
Stopped sandbox bb9d4faa2ed2905b5b4898a9a5578b0d87d675f05f5d38b47ecc2742c583ba02
Removed sandbox 0c0fc1a7e6cec0a549392ca03fb2d46eb3b73b9dd53ef5a22c897c348bbd50cd
Removed sandbox b1854d14380eb5e5381a90c08d7aa504f38c1bdacd8f996d5e63231921fb2df3
Removed sandbox e2e67bb8c368f3d27f2ae5e1b5aa3631059edfceb776e910f1eda61d18c76b4c
Removed sandbox bb9d4faa2ed2905b5b4898a9a5578b0d87d675f05f5d38b47ecc2742c583ba02
Removed sandbox 98cfeec86731fd268fb5802a15d17bee0a5a76f61d45e5132f490084169a11de
Stopped sandbox d8ec8adbea85b6629ec6d29a97ea3f3ef4cb65701900602230ca768a8cd3acb6
Removed sandbox d8ec8adbea85b6629ec6d29a97ea3f3ef4cb65701900602230ca768a8cd3acb6
Stopped sandbox a3afd8cd974edadfdf61e507aac6a3c761a441697522f66266a712d1f9bf15ec
Removed sandbox a3afd8cd974edadfdf61e507aac6a3c761a441697522f66266a712d1f9bf15ec
stopping the pod sandbox "514ce4613d143eab255e81876f06d390442f5aa52ff89295b7d1bfd1d6260152" failed: rpc error: code = Unknown desc = failed to destroy network for pod sandbox k8s_testpod_default_dfb863ea-0a17-4dbd-86d8-6a9a10679b06_0(514ce4613d143eab255e81876f06d390442f5aa52ff89295b7d1bfd1d6260152): error removing pod default_testpod from CNI network "k8s-pod-network": error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: connection refused
stopping the pod sandbox "7e8281ca8ab95f27d738a770f6e48af25dde81e65284ef484cc95c9cd77715b5" failed: rpc error: code = Unknown desc = failed to destroy network for pod sandbox k8s_coredns-78fcd69978-4dr9p_kube-system_615fe8e1-7d2f-448c-a16e-1b40a788e8b0_0(7e8281ca8ab95f27d738a770f6e48af25dde81e65284ef484cc95c9cd77715b5): error removing pod kube-system_coredns-78fcd69978-4dr9p from CNI network "k8s-pod-network": error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: connection refused
stopping the pod sandbox "8c25dcb52ba15508f84a16ac38303cad2b5da2c3aceddefabfd9705ad38d564b" failed: rpc error: code = Unknown desc = failed to destroy network for pod sandbox k8s_calico-kube-controllers-588575d68-v52zm_calico-system_07c6a4ce-0c49-4306-b079-c93ecabfc4ad_0(8c25dcb52ba15508f84a16ac38303cad2b5da2c3aceddefabfd9705ad38d564b): error removing pod calico-system_calico-kube-controllers-588575d68-v52zm from CNI network "k8s-pod-network": error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: connection refused
stopping the pod sandbox "e398c46dd2223c9e6299701e1c2a273e808033479ac179ffe435f9fb65364d6e" failed: rpc error: code = Unknown desc = failed to destroy network for pod sandbox k8s_cadvisor-4nzpt_monitoring-vip_c74896a3-8a95-46b1-b40c-ef42581fdc78_0(e398c46dd2223c9e6299701e1c2a273e808033479ac179ffe435f9fb65364d6e): error removing pod monitoring-vip_cadvisor-4nzpt from CNI network "k8s-pod-network": error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: connection refused
stopping the pod sandbox "e95e5b6eff713bcde3ca703cce2e7b93819e5f4a3094efce2dc8096d0e1d1708" failed: rpc error: code = Unknown desc = failed to destroy network for pod sandbox k8s_coredns-78fcd69978-bms47_kube-system_82edebf2-3f33-4b50-8c66-33017038e917_0(e95e5b6eff713bcde3ca703cce2e7b93819e5f4a3094efce2dc8096d0e1d1708): error removing pod kube-system_coredns-78fcd69978-bms47 from CNI network "k8s-pod-network": error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: connection refused
stopping the pod sandbox "e619b4d979cf1672b95b0f20c0e2e14270134141c0f3c5d63808c4b656b3075f" failed: rpc error: code = Unknown desc = failed to destroy network for pod sandbox k8s_calico-apiserver-66c76fb49-kd6rb_calico-apiserver_fcee8550-5fcf-47ae-b95e-7e5a335d6593_0(e619b4d979cf1672b95b0f20c0e2e14270134141c0f3c5d63808c4b656b3075f): error removing pod calico-apiserver_calico-apiserver-66c76fb49-kd6rb from CNI network "k8s-pod-network": error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: connection refused
stopping the pod sandbox "74664d211c6ab3012d8418bc20f1b1f70dde8eec0631ae0976e8f4736857620f" failed: rpc error: code = Unknown desc = failed to destroy network for pod sandbox k8s_calico-apiserver-66c76fb49-lnkmz_calico-apiserver_b5400a96-57f4-4fc6-9922-b6902d3a6956_0(74664d211c6ab3012d8418bc20f1b1f70dde8eec0631ae0976e8f4736857620f): error removing pod calico-apiserver_calico-apiserver-66c76fb49-lnkmz from CNI network "k8s-pod-network": error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: connection refused

after crictl rmp -fa

crictl pods:

POD ID              CREATED              STATE               NAME                                      NAMESPACE           ATTEMPT             RUNTIME
57d6853040e02       About a minute ago   Ready               calico-typha-6d87ccd479-xqnpb             calico-system       0                   (default)
aa46b93601d0b       About a minute ago   Ready               tigera-operator-b78466769-52jvp           tigera-operator     0                   (default)
9518a8bb43f76       About a minute ago   Ready               calico-node-5wtt5                         calico-system       0                   (default)
0f14f1619cc6e       About a minute ago   Ready               kube-proxy-r8mcf                          kube-system         0                   (default)
bdfc2a0b42df0       About a minute ago   Ready               kube-apiserver-k8s-2-lab01                kube-system         0                   (default)
f01885034c979       About a minute ago   Ready               etcd-k8s-2-lab01                          kube-system         0                   (default)
cab2588f63200       About a minute ago   Ready               kube-controller-manager-k8s-2-lab01       kube-system         0                   (default)
2da61ea8df741       About a minute ago   Ready               kube-scheduler-k8s-2-lab01                kube-system         0                   (default)
8c25dcb52ba15       3 minutes ago        Ready               calico-kube-controllers-588575d68-v52zm   calico-system       0                   (default)
74664d211c6ab       3 minutes ago        Ready               calico-apiserver-66c76fb49-lnkmz          calico-apiserver    0                   (default)
e398c46dd2223       3 minutes ago        Ready               cadvisor-4nzpt                            monitoring-vip      0                   (default)
7e8281ca8ab95       3 minutes ago        Ready               coredns-78fcd69978-4dr9p                  kube-system         0                   (default)
e619b4d979cf1       3 minutes ago        Ready               calico-apiserver-66c76fb49-kd6rb          calico-apiserver    0                   (default)
514ce4613d143       3 minutes ago        Ready               testpod                                   default             0                   (default)
e95e5b6eff713       3 minutes ago        Ready               coredns-78fcd69978-bms47                  kube-system         0                   (default)

crictl ps -a:

CONTAINER           IMAGE                                                                                             CREATED              STATE               NAME                      ATTEMPT             POD ID
a2c805ce1fc4b       9b7965ed45041110fad9da8c6946875ec50b3d5814b46a52cb2a2362bf346e31                                  About a minute ago   Running             calico-node               16                  9518a8bb43f76
d769204b606cd       96705d7d0eb9d769d722ce942f5166e589b5a50e3cffccbcd557a425f4615eaf                                  About a minute ago   Running             calico-typha              16                  57d6853040e02
21bf10af4fc84       2c8aa43a8d6d53ade09e44fd9cdf52637fa2a9cac1682ea7aebbf79dbff7d1a0                                  About a minute ago   Exited              install-cni               16                  9518a8bb43f76
bc21f13c87bc1       53d842c232878f957c11644a2265e52ab20afae9a0a0ed2a5556e9180b45bc2d                                  About a minute ago   Running             tigera-operator           21                  aa46b93601d0b
10761e66574c7       0d3b19c2d4d5e514f4d8aaa85c98dff13b1594847aebbdadd9ef33cba7a3761e                                  About a minute ago   Exited              flexvol-driver            16                  9518a8bb43f76
3ff8241ec812f       36c4ebbc9d979f15a0316c6dde446c556250d397e2085375cfbaf2660272d912                                  About a minute ago   Running             kube-proxy                16                  0f14f1619cc6e
6ae3ce7f680b5       6e002eb89a88130791169af169a48aa8a029271dfb8d8183b11122c045565433                                  About a minute ago   Running             kube-controller-manager   24                  cab2588f63200
1fce1145b1c09       0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba                                  About a minute ago   Running             etcd                      63                  f01885034c979
743caf16265cb       aca5ededae9c8e47a73d5e43ef6666083ada2df2fdb9c8dc69c5803a92f5a307                                  About a minute ago   Running             kube-scheduler            23                  2da61ea8df741
39511099cf712       f30469a2491a5580699fc45c6416bc22f0540ed4d66672e9079f8883b9083e2c                                  About a minute ago   Running             kube-apiserver            23                  bdfc2a0b42df0
627f358178ac4       docker.io/library/nginx@sha256:2e87d9ff130deb0c2d63600390c3f2370e71e71841573990d54579bc35046203   3 minutes ago        Running             testpod                   9                   514ce4613d143
359783bdf5078       e4ee02aeee09f1176937af0970505802390c206cdfd8ea929c8ea7936facdb47                                  3 minutes ago        Running             calico-kube-controllers   10                  8c25dcb52ba15
118b953aac7f7       ea725dc15d717ece6550940f909d6c662da85822c7583e4ac66c1d4220bf39a4                                  3 minutes ago        Running             calico-apiserver          11                  74664d211c6ab
0984dace7bc5b       8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44                                  3 minutes ago        Running             coredns                   9                   e95e5b6eff713
11e01abca9194       2d9c916993645ccc053c127e4f7481dc79b118fcaf881ecba8ac100e4c71e1b4                                  3 minutes ago        Running             cadvisor                  9                   e398c46dd2223
add42145d1c3a       8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44                                  3 minutes ago        Running             coredns                   9                   7e8281ca8ab95
4d5dd1c5f3f51       ea725dc15d717ece6550940f909d6c662da85822c7583e4ac66c1d4220bf39a4                                  3 minutes ago        Running             calico-apiserver          10                  e619b4d979cf1