KubeVirt virt-handler Init:CrashLoopBackOff
What happened: I was installing the KubeVirt operator following the tutorial on this page: https://kubevirt.io/2019/KubeVirt_k8s_crio_from_scratch_installing_KubeVirt.html
But the virt-handler pod won't start and I don't understand why.
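For context, the install in that tutorial boils down to applying the KubeVirt operator and custom-resource manifests for a pinned release, roughly as below (a sketch based on the standard upstream release layout; the tutorial's exact steps may differ):

```shell
# Pin the release, apply the operator manifest, then the KubeVirt custom resource
export KUBEVIRT_VERSION=v0.52.0
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr.yaml
```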
root@debian:~# kubectl get pods -n kubevirt
NAME                               READY   STATUS       RESTARTS         AGE
virt-api-77df5c4f87-7mqv4          1/1     Running      1 (17m ago)      27m
virt-api-77df5c4f87-wcq44          1/1     Running      1 (17m ago)      27m
virt-controller-749d8d99d4-56gb7   1/1     Running      1 (17m ago)      27m
virt-controller-749d8d99d4-78j6x   1/1     Running      1 (17m ago)      27m
virt-handler-4w99d                 0/1     Init:Error   14 (5m18s ago)   27m
virt-operator-564f568975-g9wh4     1/1     Running      1 (17m ago)      31m
virt-operator-564f568975-wnpz8     1/1     Running      1 (17m ago)      31m
root@debian:~# kubectl logs virt-handler-4w99d -n kubevirt
Error from server (BadRequest): container "virt-handler" in pod "virt-handler-4w99d" is waiting to start: PodInitializing
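`kubectl logs` defaults to the pod's main container, which is still waiting on the init container, hence the PodInitializing error. The failing init container's own output can be requested explicitly with `-c`:

```shell
# virt-launcher is the init container that keeps crashing; ask for its logs directly
kubectl logs virt-handler-4w99d -n kubevirt -c virt-launcher

# Logs from the previous (crashed) attempt
kubectl logs virt-handler-4w99d -n kubevirt -c virt-launcher --previous
```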
root@debian:~# kubectl describe pod virt-handler-4w99d -n kubevirt
Name: virt-handler-4w99d
Namespace: kubevirt
Priority: 1000000000
Priority Class Name: kubevirt-cluster-critical
Node: debian/172.16.16.13
Start Time: Wed, 18 May 2022 16:33:05 +0100
Labels: app.kubernetes.io/component=kubevirt
app.kubernetes.io/managed-by=virt-operator
app.kubernetes.io/version=v0.52.0
controller-revision-hash=f68858c57
kubevirt.io=virt-handler
pod-template-generation=1
prometheus.kubevirt.io=true
Annotations: cni.projectcalico.org/containerID: 97dd02deebc33d2714172adde8aaae83a3d33d668d3555057012b74f3717e25f
cni.projectcalico.org/podIP: 192.168.245.224/32
cni.projectcalico.org/podIPs: 192.168.245.224/32
kubevirt.io/install-strategy-identifier: 72d62fe25180ebc296d7a30b4ba2508933d9c2fe
kubevirt.io/install-strategy-registry: quay.io/kubevirt
kubevirt.io/install-strategy-version: v0.52.0
Status: Pending
IP: 192.168.245.224
IPs:
IP: 192.168.245.224
Controlled By: DaemonSet/virt-handler
Init Containers:
virt-launcher:
Container ID: containerd://8a3b93bab9cafb06ae1e4cd0ab7cae040e87cf88b0cb7af92b5029bac23c8e0e
Image: quay.io/kubevirt/virt-launcher:v0.52.0
Image ID: quay.io/kubevirt/virt-launcher@sha256:7138d7de949a86955718e07edb90381b3abf1dd2e642d55c0db66fb15b21719b
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
Args:
node-labeller.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 18 May 2022 17:00:28 +0100
Finished: Wed, 18 May 2022 17:00:28 +0100
Ready: False
Restart Count: 14
Environment: <none>
Mounts:
/var/lib/kubevirt-node-labeller from node-labeller (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5spst (ro)
Containers:
virt-handler:
Container ID:
Image: quay.io/kubevirt/virt-handler:v0.52.0
Image ID:
Port: 8443/TCP
Host Port: 0/TCP
Command:
virt-handler
--port
8443
--hostname-override
$(NODE_NAME)
--pod-ip-address
$(MY_POD_IP)
--max-metric-requests
3
--console-server-port
8186
--graceful-shutdown-seconds
315
-v
2
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Requests:
cpu: 10m
memory: 230Mi
Liveness: http-get https://:8443/healthz delay=15s timeout=10s period=45s #success=1 #failure=3
Readiness: http-get https://:8443/healthz delay=15s timeout=10s period=20s #success=1 #failure=3
Environment:
NODE_NAME: (v1:spec.nodeName)
MY_POD_IP: (v1:status.podIP)
Mounts:
/etc/podinfo from podinfo (rw)
/etc/virt-handler/clientcertificates from kubevirt-virt-handler-certs (ro)
/etc/virt-handler/servercertificates from kubevirt-virt-handler-server-certs (ro)
/pods from kubelet-pods-shortened (rw)
/profile-data from profile-data (rw)
/var/lib/kubelet/device-plugins from device-plugin (rw)
/var/lib/kubelet/pods from kubelet-pods (rw)
/var/lib/kubevirt from virt-lib-dir (rw)
/var/lib/kubevirt-node-labeller from node-labeller (rw)
/var/run/kubevirt from virt-share-dir (rw)
/var/run/kubevirt-libvirt-runtimes from libvirt-runtimes (rw)
/var/run/kubevirt-private from virt-private-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5spst (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
kubevirt-virt-handler-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubevirt-virt-handler-certs
Optional: true
kubevirt-virt-handler-server-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubevirt-virt-handler-server-certs
Optional: true
profile-data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
libvirt-runtimes:
Type: HostPath (bare host directory volume)
Path: /var/run/kubevirt-libvirt-runtimes
HostPathType:
virt-share-dir:
Type: HostPath (bare host directory volume)
Path: /var/run/kubevirt
HostPathType:
virt-lib-dir:
Type: HostPath (bare host directory volume)
Path: /var/lib/kubevirt
HostPathType:
virt-private-dir:
Type: HostPath (bare host directory volume)
Path: /var/run/kubevirt-private
HostPathType:
device-plugin:
Type: HostPath (bare host directory volume)
Path: /var/lib/kubelet/device-plugins
HostPathType:
kubelet-pods-shortened:
Type: HostPath (bare host directory volume)
Path: /var/lib/kubelet/pods
HostPathType:
kubelet-pods:
Type: HostPath (bare host directory volume)
Path: /var/lib/kubelet/pods
HostPathType:
node-labeller:
Type: HostPath (bare host directory volume)
Path: /var/lib/kubevirt-node-labeller
HostPathType:
podinfo:
Type: DownwardAPI (a volume populated by information about the pod)
Items:
metadata.annotations['k8s.v1.cni.cncf.io/network-status'] -> network-status
kube-api-access-5spst:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: CriticalAddonsOnly op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason          Age                   From               Message
  ----     ------          ----                  ----               -------
  Normal   Scheduled       32m                   default-scheduler  Successfully assigned kubevirt/virt-handler-4w99d to debian
  Normal   Pulled          30m (x5 over 32m)     kubelet            Container image "quay.io/kubevirt/virt-launcher:v0.52.0" already present on machine
  Normal   Created         30m (x5 over 32m)     kubelet            Created container virt-launcher
  Normal   Started         30m (x5 over 32m)     kubelet            Started container virt-launcher
  Warning  BackOff         27m (x25 over 32m)    kubelet            Back-off restarting failed container
  Normal   SandboxChanged  21m (x2 over 22m)     kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          19m (x4 over 21m)     kubelet            Container image "quay.io/kubevirt/virt-launcher:v0.52.0" already present on machine
  Normal   Created         19m (x4 over 21m)     kubelet            Created container virt-launcher
  Normal   Started         19m (x4 over 21m)     kubelet            Started container virt-launcher
  Warning  BackOff         102s (x93 over 21m)   kubelet            Back-off restarting failed container
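The describe output shows the `virt-launcher` init container (which runs node-labeller.sh) exiting with code 1 on every attempt. Since the discussion below points at AppArmor, it is worth checking the Debian host for AppArmor denials around the crash times (a diagnostic sketch, not taken from the original thread):

```shell
# Is AppArmor enforcing any profiles on this host?
sudo aa-status

# Look for denial messages in the kernel log
sudo dmesg | grep -i 'apparmor'
sudo journalctl -k | grep -i 'denied'
```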
What you expected to happen: The virt-handler pod to initialize and run.
Environment:
- KubeVirt version (use `virtctl version`): v0.52.0
- Kubernetes version (use `kubectl version`): v1.23.6
- VM or VMI specifications: N/A
- Cloud provider or hardware configuration: N/A
- OS (e.g. from /etc/os-release): Debian GNU/Linux 11 (bullseye)
- Kernel (e.g. `uname -a`): 5.10.0-13-amd64
- Install tools: N/A
- Others: N/A
Thanks, now it's working.
I'm not aware of anyone working on that at the moment. But AppArmor support in KubeVirt would definitely bring a lot of value to the project. Contributions are more than welcome here.
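The exact fix is not recorded in this excerpt, but a commonly reported workaround for node-labeller crashes on Debian hosts is to disable the host's libvirtd AppArmor profile, along these lines (a host-level change assuming Debian's stock apparmor package paths; treat it as a workaround, not an official KubeVirt fix):

```shell
# Unload the libvirtd profile and keep it from being reloaded on boot
sudo ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/
sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd
```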