kubevirt: virt-handler fails to start in a kind k8s cluster on x86_64

What happened: I followed the instructions at https://kubevirt.io/quickstart_kind/ to create a kind k8s cluster and then tried to deploy KubeVirt on it, but virt-handler failed during initialization, as shown below:

kubevirt             virt-handler-w9qs7                           0/1     Init:CrashLoopBackOff   5 (116s ago)   5m16s

I could not find any helpful logs. Here is the output from `kubectl describe pod`:
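For anyone hitting the same state, the crashing container is the `virt-launcher` init container, so its output has to be pulled explicitly; a sketch of the diagnostics I would try (pod name taken from this report, substitute your own):

```shell
# Logs of the crashing init container, including the previous (failed) run.
kubectl logs -n kubevirt virt-handler-w9qs7 -c virt-launcher --previous

# node-labeller.sh may be killed before it can log anything; in that case
# check the kind node's kernel log for AppArmor/seccomp denials.
# (kind nodes are Docker containers, so dmesg there shows the host kernel log.)
docker exec kind-control-plane dmesg | grep -iE 'apparmor|denied'
```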

Name:                 virt-handler-w9qs7
Namespace:            kubevirt
Priority:             1000000000
Priority Class Name:  kubevirt-cluster-critical
Node:                 kind-control-plane/172.18.0.2
Start Time:           Wed, 27 Apr 2022 04:17:17 +0000
Labels:               app.kubernetes.io/component=kubevirt
                      app.kubernetes.io/managed-by=virt-operator
                      app.kubernetes.io/version=v0.52.0
                      controller-revision-hash=f68858c57
                      kubevirt.io=virt-handler
                      pod-template-generation=1
                      prometheus.kubevirt.io=true
Annotations:          kubevirt.io/install-strategy-identifier: 72d62fe25180ebc296d7a30b4ba2508933d9c2fe
                      kubevirt.io/install-strategy-registry: quay.io/kubevirt
                      kubevirt.io/install-strategy-version: v0.52.0
Status:               Pending
IP:                   10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  DaemonSet/virt-handler
Init Containers:
  virt-launcher:
    Container ID:  containerd://3f0a455aff959ec1d039891b61da45105c5424fbf9eca32235e3a0a26439218a
    Image:         quay.io/kubevirt/virt-launcher:v0.52.0
    Image ID:      quay.io/kubevirt/virt-launcher@sha256:7138d7de949a86955718e07edb90381b3abf1dd2e642d55c0db66fb15b21719b
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
    Args:
      node-labeller.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 27 Apr 2022 04:28:32 +0000
      Finished:     Wed, 27 Apr 2022 04:28:33 +0000
    Ready:          False
    Restart Count:  7
    Environment:    <none>
    Mounts:
      /var/lib/kubevirt-node-labeller from node-labeller (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gw5b2 (ro)
Containers:
  virt-handler:
    Container ID:
    Image:         quay.io/kubevirt/virt-handler:v0.52.0
    Image ID:
    Port:          8443/TCP
    Host Port:     0/TCP
    Command:
      virt-handler
      --port
      8443
      --hostname-override
      $(NODE_NAME)
      --pod-ip-address
      $(MY_POD_IP)
      --max-metric-requests
      3
      --console-server-port
      8186
      --graceful-shutdown-seconds
      315
      -v
      2
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      10m
      memory:   230Mi
    Liveness:   http-get https://:8443/healthz delay=15s timeout=10s period=45s #success=1 #failure=3
    Readiness:  http-get https://:8443/healthz delay=15s timeout=10s period=20s #success=1 #failure=3
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
      MY_POD_IP:   (v1:status.podIP)
    Mounts:
      /etc/podinfo from podinfo (rw)
      /etc/virt-handler/clientcertificates from kubevirt-virt-handler-certs (ro)
      /etc/virt-handler/servercertificates from kubevirt-virt-handler-server-certs (ro)
      /pods from kubelet-pods-shortened (rw)
      /profile-data from profile-data (rw)
      /var/lib/kubelet/device-plugins from device-plugin (rw)
      /var/lib/kubelet/pods from kubelet-pods (rw)
      /var/lib/kubevirt from virt-lib-dir (rw)
      /var/lib/kubevirt-node-labeller from node-labeller (rw)
      /var/run/kubevirt from virt-share-dir (rw)
      /var/run/kubevirt-libvirt-runtimes from libvirt-runtimes (rw)
      /var/run/kubevirt-private from virt-private-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gw5b2 (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kubevirt-virt-handler-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubevirt-virt-handler-certs
    Optional:    true
  kubevirt-virt-handler-server-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubevirt-virt-handler-server-certs
    Optional:    true
  profile-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  libvirt-runtimes:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/kubevirt-libvirt-runtimes
    HostPathType:
  virt-share-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/kubevirt
    HostPathType:
  virt-lib-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubevirt
    HostPathType:
  virt-private-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/kubevirt-private
    HostPathType:
  device-plugin:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/device-plugins
    HostPathType:
  kubelet-pods-shortened:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods
    HostPathType:
  kubelet-pods:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods
    HostPathType:
  node-labeller:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubevirt-node-labeller
    HostPathType:
  podinfo:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.annotations['k8s.v1.cni.cncf.io/network-status'] -> network-status
  kube-api-access-gw5b2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 CriticalAddonsOnly op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  13m                   default-scheduler  Successfully assigned kubevirt/virt-handler-w9qs7 to kind-control-plane
  Normal   Pulling    13m                   kubelet            Pulling image "quay.io/kubevirt/virt-launcher:v0.52.0"
  Normal   Pulled     13m                   kubelet            Successfully pulled image "quay.io/kubevirt/virt-launcher:v0.52.0" in 18.821255689s
  Normal   Created    11m (x5 over 13m)     kubelet            Created container virt-launcher
  Normal   Started    11m (x5 over 13m)     kubelet            Started container virt-launcher
  Normal   Pulled     11m (x4 over 13m)     kubelet            Container image "quay.io/kubevirt/virt-launcher:v0.52.0" already present on machine
  Warning  BackOff    3m40s (x45 over 13m)  kubelet            Back-off restarting failed container

What you expected to happen: KubeVirt starts successfully in the kind k8s cluster.

How to reproduce it (as minimally and precisely as possible): Just follow the instructions at https://kubevirt.io/quickstart_kind/.
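For reference, the quickstart boils down to roughly these commands (version pinned to the one in this report; release URLs follow KubeVirt's standard layout):

```shell
# Create the kind cluster (kind v0.12.0 in this report).
kind create cluster

# Deploy the KubeVirt operator and its custom resource (v0.52.0 here).
export VERSION=v0.52.0
kubectl create -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml"
kubectl create -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml"

# Watch the rollout; virt-handler enters Init:CrashLoopBackOff at this point.
kubectl get pods -n kubevirt -w
```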

Environment:

  • KubeVirt version (use virtctl version): v0.52.0
  • Kubernetes version (use kubectl version): 1.23.4
  • OS (e.g. from /etc/os-release): Ubuntu 18.10
  • Kernel (e.g. uname -a): Linux dell 4.18.0-25-generic
  • Install tools: kind 0.12.0 (https://github.com/kubernetes-sigs/kind/releases)

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Comments: 15 (7 by maintainers)

Most upvoted comments

I switched to CentOS 7.9 and everything works fine.

I found a solution. virt-handler now works on my Kubernetes cluster after these steps:

  1. https://github.com/kubevirt/kubevirt/issues/4303#issuecomment-830365183
  2. https://github.com/kubevirt/kubevirt/issues/4303#issuecomment-839052345
  3. Execute sudo systemctl reload apparmor.service or reboot
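
The steps above point at host AppArmor confinement as the culprit; a minimal way to confirm AppArmor is what is killing the init container before touching any profiles (my own suggestion, not taken from the linked comments):

```shell
# Is AppArmor active on this host at all?
sudo aa-status | head -n 5

# Any recent AppArmor denials in the kernel log?
sudo dmesg | grep -i 'apparmor.*denied' | tail -n 20

# After adjusting the profiles per the linked comments, pick up the changes.
sudo systemctl reload apparmor.service
```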

Hi @zhlhahaha, I believe we need to adjust the script a little bit. Do you want to have a look?