longhorn: longhorn-manager pod is crashing with CrashLoopBackOff

Hello, I am trying to deploy Longhorn on a Kubernetes cluster in a vLAB environment, but the longhorn-manager pod keeps failing.

Packages used: kubelet v1.24.3-00, containerd v1.24.3-00, kubectl v1.24.3-00, kubeadm v1.24.3-00, Longhorn v1.3.1

Node OS: Ubuntu 20.04.3 LTS. Internet connectivity via a local Squid proxy server.

```
# kubectl get all
NAME                                               READY   STATUS             RESTARTS        AGE
pod/longhorn-admission-webhook-8498c969d4-9992g    1/1     Running            0               24m
pod/longhorn-admission-webhook-8498c969d4-mlxx2    1/1     Running            0               24m
pod/longhorn-conversion-webhook-78fc4df57c-b92b5   1/1     Running            0               24m
pod/longhorn-conversion-webhook-78fc4df57c-gmntz   1/1     Running            0               24m
pod/longhorn-driver-deployer-855dfc74ff-tnxpq      0/1     Init:0/1           0               24m
pod/longhorn-manager-58lsq                         0/1     CrashLoopBackOff   9 (3m33s ago)   24m
pod/longhorn-manager-687ds                         0/1     CrashLoopBackOff   9 (3m28s ago)   24m
pod/longhorn-manager-j9nsm                         0/1     CrashLoopBackOff   9 (3m21s ago)   24m
pod/longhorn-ui-58b8cc8d8b-dg5rh                   1/1     Running            0               24m

NAME                                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/longhorn-admission-webhook    ClusterIP   10.99.165.112   <none>        9443/TCP   24m
service/longhorn-backend              ClusterIP   10.102.158.48   <none>        9500/TCP   24m
service/longhorn-conversion-webhook   ClusterIP   10.101.199.31   <none>        9443/TCP   24m
service/longhorn-engine-manager       ClusterIP   None            <none>        <none>     24m
service/longhorn-frontend             ClusterIP   10.99.27.208    <none>        80/TCP     24m
service/longhorn-replica-manager      ClusterIP   None            <none>        <none>     24m

NAME                              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/longhorn-manager   3         3         0       3            0           <none>          24m

NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/longhorn-admission-webhook    2/2     2            2           24m
deployment.apps/longhorn-conversion-webhook   2/2     2            2           24m
deployment.apps/longhorn-driver-deployer      0/1     1            0           24m
deployment.apps/longhorn-ui                   1/1     1            1           24m

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/longhorn-admission-webhook-8498c969d4    2         2         2       24m
replicaset.apps/longhorn-conversion-webhook-78fc4df57c   2         2         2       24m
replicaset.apps/longhorn-driver-deployer-855dfc74ff      1         1         0       24m
replicaset.apps/longhorn-ui-58b8cc8d8b                   1         1         1       24m
```

```
kubectl describe pod/longhorn-manager-58lsq

Name:         longhorn-manager-58lsq
Namespace:    longhorn-system
Priority:     0
Node:         node1/192.168.239.21
Start Time:   Mon, 22 Aug 2022 14:44:50 +0000
Labels:       app=longhorn-manager
              app.kubernetes.io/instance=longhorn
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=longhorn
              app.kubernetes.io/version=v1.3.1
              controller-revision-hash=77b984974f
              helm.sh/chart=longhorn-1.3.1
              pod-template-generation=1
Annotations:  cni.projectcalico.org/containerID: f8b8b98e75632307376a279b30650b238c0fa8730563acefb359ea81e236b143
              cni.projectcalico.org/podIP: 172.16.229.89/32
              cni.projectcalico.org/podIPs: 172.16.229.89/32
Status:       Running
IP:           172.16.229.89
IPs:
  IP:           172.16.229.89
Controlled By:  DaemonSet/longhorn-manager
Init Containers:
  wait-longhorn-admission-webhook:
    Container ID:  containerd://cf538d487b84c0185891c285ae78de317772e21337d0ce870b750265ba002e0a
    Image:         longhornio/longhorn-manager:v1.3.1
    Image ID:      docker.io/longhornio/longhorn-manager@sha256:8cb9792ada5b4a2b8ff8b67b878b654c25d0318a81b07c01e786c753d4c46523
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      while [ $(curl -m 1 -s -o /dev/null -w "%{http_code}" -k https://longhorn-admission-webhook:9443/v1/healthz) != "200" ]; do echo waiting; sleep 2; done
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 22 Aug 2022 14:44:51 +0000
      Finished:     Mon, 22 Aug 2022 14:44:58 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fsvp9 (ro)
Containers:
  longhorn-manager:
    Container ID:  containerd://922de27020a84289b3ecbc24ac5ec766e2147b81a77d630b06b7aba7aac42121
    Image:         longhornio/longhorn-manager:v1.3.1
    Image ID:      docker.io/longhornio/longhorn-manager@sha256:8cb9792ada5b4a2b8ff8b67b878b654c25d0318a81b07c01e786c753d4c46523
    Port:          9500/TCP
    Host Port:     0/TCP
    Command:
      longhorn-manager
      -d
      daemon
      --engine-image
      longhornio/longhorn-engine:v1.3.1
      --instance-manager-image
      longhornio/longhorn-instance-manager:v1_20220808
      --share-manager-image
      longhornio/longhorn-share-manager:v1_20220808
      --backing-image-manager-image
      longhornio/backing-image-manager:v3_20220808
      --manager-image
      longhornio/longhorn-manager:v1.3.1
      --service-account
      longhorn-service-account
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 22 Aug 2022 15:06:10 +0000
      Finished:     Mon, 22 Aug 2022 15:06:10 +0000
    Ready:          False
    Restart Count:  9
    Readiness:      tcp-socket :9500 delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAMESPACE:  longhorn-system (v1:metadata.namespace)
      POD_IP:          (v1:status.podIP)
      NODE_NAME:       (v1:spec.nodeName)
    Mounts:
      /host/dev/ from dev (rw)
      /host/proc/ from proc (rw)
      /tls-files/ from longhorn-grpc-tls (rw)
      /var/lib/longhorn/ from longhorn (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fsvp9 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  dev:
    Type:          HostPath (bare host directory volume)
    Path:          /dev/
    HostPathType:
  proc:
    Type:          HostPath (bare host directory volume)
    Path:          /proc/
    HostPathType:
  longhorn:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/longhorn/
    HostPathType:
  longhorn-grpc-tls:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  longhorn-grpc-tls
    Optional:    true
  kube-api-access-fsvp9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  25m                  default-scheduler  Successfully assigned longhorn-system/longhorn-manager-58lsq to node1
  Normal   Pulled     25m                  kubelet            Container image "longhornio/longhorn-manager:v1.3.1" already present on machine
  Normal   Created    25m                  kubelet            Created container wait-longhorn-admission-webhook
  Normal   Started    25m                  kubelet            Started container wait-longhorn-admission-webhook
  Normal   Created    24m (x4 over 25m)    kubelet            Created container longhorn-manager
  Normal   Started    24m (x4 over 25m)    kubelet            Started container longhorn-manager
  Normal   Pulled     23m (x5 over 25m)    kubelet            Container image "longhornio/longhorn-manager:v1.3.1" already present on machine
  Warning  BackOff    5s (x122 over 25m)   kubelet            Back-off restarting failed container
```

```
kubectl logs pod/longhorn-manager-58lsq

Defaulted container "longhorn-manager" out of: longhorn-manager, wait-longhorn-admission-webhook (init)
W0822 15:06:10.307900       1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
time="2022-08-22T15:06:10Z" level=info msg="cannot list the content of the src directory /var/lib/rancher/longhorn/engine-binaries for the copy, will do nothing: Failed to execute: nsenter [--mount=/host/proc/1/ns/mnt --net=/host/proc/1/ns/net bash -c ls /var/lib/rancher/longhorn/engine-binaries/], output , stderr, ls: cannot access '/var/lib/rancher/longhorn/engine-binaries/': No such file or directory\n, error exit status 2"
I0822 15:06:10.353730       1 leaderelection.go:248] attempting to acquire leader lease longhorn-system/longhorn-manager-upgrade-lock...
I0822 15:06:10.370533       1 leaderelection.go:258] successfully acquired lease longhorn-system/longhorn-manager-upgrade-lock
time="2022-08-22T15:06:10Z" level=info msg="Start upgrading"
time="2022-08-22T15:06:10Z" level=info msg="setting default-engine-image not found"
time="2022-08-22T15:06:10Z" level=error msg="Upgrade failed: upgrade API version failed: cannot create CRDAPIVersionSetting: Internal error occurred: failed calling webhook \"validator.longhorn.io\": failed to call webhook: Post \"https://longhorn-admission-webhook.longhorn-system.svc:9443/v1/webhook/validaton?timeout=10s\": Forbidden"
time="2022-08-22T15:06:10Z" level=info msg="Upgrade leader lost: node1"
time="2022-08-22T15:06:10Z" level=fatal msg="Error starting manager: upgrade API version failed: cannot create CRDAPIVersionSetting: Internal error occurred: failed calling webhook \"validator.longhorn.io\": failed to call webhook: Post \"https://longhorn-admission-webhook.longhorn-system.svc:9443/v1/webhook/validaton?timeout=10s\": Forbidden"
```
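For context on where that failing `Post` originates: the request to `https://longhorn-admission-webhook.longhorn-system.svc:9443/...` is issued by the kube-apiserver while it evaluates Longhorn's ValidatingWebhookConfiguration, not by the longhorn-manager pod itself, so the `Forbidden` reflects whatever network path the apiserver's request takes. A quick way to see which service the webhook points at (just a sketch; the resource name is the one that appears later in this thread):

```
$ kubectl get validatingwebhookconfiguration longhorn-webhook-validator -o yaml | grep -B2 -A6 clientConfig
```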

```
# kubectl get all -n kube-system
NAME                                           READY   STATUS    RESTARTS       AGE
pod/calico-kube-controllers-555bc4b957-4r4bh   1/1     Running   2 (36m ago)    3d1h
pod/calico-node-9xlhh                          1/1     Running   2 (36m ago)    3d1h
pod/calico-node-q2hxk                          1/1     Running   2 (35m ago)    3d1h
pod/calico-node-xgp9j                          1/1     Running   2 (34m ago)    3d1h
pod/coredns-6d4b75cb6d-b8rm4                   1/1     Running   2 (36m ago)    3d1h
pod/coredns-6d4b75cb6d-nf7lj                   1/1     Running   2 (36m ago)    3d1h
pod/etcd-node1                                 1/1     Running   2 (36m ago)    3d1h
pod/kube-apiserver-node1                       1/1     Running   2 (36m ago)    3d1h
pod/kube-controller-manager-node1              1/1     Running   2 (36m ago)    3d1h
pod/kube-proxy-cw664                           1/1     Running   2 (34m ago)    3d1h
pod/kube-proxy-mhxw7                           1/1     Running   2 (36m ago)    3d1h
pod/kube-proxy-w47rv                           1/1     Running   2 (35m ago)    3d1h
pod/kube-scheduler-node1                       1/1     Running   2 (36m ago)    3d1h

NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   3d1h

NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/calico-node   3         3         3       3            3           kubernetes.io/os=linux   3d1h
daemonset.apps/kube-proxy    3         3         3       3            3           kubernetes.io/os=linux   3d1h

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/calico-kube-controllers   1/1     1            1           3d1h
deployment.apps/coredns                   2/2     2            2           3d1h

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/calico-kube-controllers-555bc4b957   1         1         1       3d1h
replicaset.apps/coredns-6d4b75cb6d                   2         2         2       3d1h
```

Any help would be highly appreciated.


Most upvoted comments

I did one more experiment: I removed the proxy configuration from every node and re-created Longhorn, but no luck.

Well, I found the solution… or the bug in my configuration… 😃

Previously, I was using a custom dnsDomain that was also configured as the search directive in /etc/resolv.conf on the host VMs. The resolv.conf on the hosts was sending all lookups for that domain to the external DNS server defined via the nameserver directive.

```
*** Not working ***
$ grep dnsDomain ClusterConfiguration.yaml
dnsDomain: mydomain.lab
$
```

```
*** DNS client configuration on all nodes ***
$ cat /etc/resolv.conf
nameserver 192.168.239.24
search mydomain.lab
$
```
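With that combination, the webhook's in-cluster name ends in the same domain as the host's search directive, so it is worth confirming that the name is actually answered by CoreDNS and not by the external nameserver. A minimal check from a throwaway pod (a sketch only; the FQDN is built from the dnsDomain above):

```
$ kubectl run dnscheck --rm -it --image=busybox --restart=Never -- \
    nslookup longhorn-admission-webhook.longhorn-system.svc.mydomain.lab
# Expected: the service ClusterIP (10.99.165.112 above) answered by CoreDNS
# (10.96.0.10), not a record from the external nameserver 192.168.239.24.
```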

I changed the dnsDomain directive in the cluster bootstrap configuration back to the default, and the latest Longhorn version works fine after this change.

```
*** Working fine after changing this to something other than the host VM domain ***
$ grep dnsDomain ClusterConfiguration.yaml-1
dnsDomain: cluster.local
$
```

No change was made to the host VM resolv.conf.
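For completeness, the networking block of the kubeadm configuration now looks roughly like this (only dnsDomain comes from my files; the subnets are assumptions inferred from the ClusterIPs and Calico pod IPs shown earlier):

```yaml
# Sketch of the kubeadm ClusterConfiguration networking section after the change.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  dnsDomain: cluster.local      # previously mydomain.lab, which collided with the host search domain
  serviceSubnet: 10.96.0.0/12   # assumption, inferred from the ClusterIPs above
  podSubnet: 172.16.0.0/16      # assumption, inferred from the Calico pod IPs above
```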

An update on this issue: no actual solution yet, but I found one workaround and was able to install the latest Longhorn version this way.

```
$ kubectl get ValidatingWebhookConfiguration -n longhorn-system
NAME                         WEBHOOKS   AGE
longhorn-webhook-validator   1          4m
$ kubectl delete ValidatingWebhookConfiguration longhorn-webhook-validator -n longhorn-system
warning: deleting cluster-scoped resources, not scoped to the provided namespace
validatingwebhookconfiguration.admissionregistration.k8s.io "longhorn-webhook-validator" deleted
$
```
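Keep in mind this only bypasses Longhorn's admission validation, so treat it as a stopgap rather than a fix. A quick way to confirm the managers actually come up afterwards (sketch):

```
$ kubectl -n longhorn-system rollout status daemonset/longhorn-manager
$ kubectl -n longhorn-system get pods -l app=longhorn-manager
```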

Yes, I tried it earlier but it didn't work. I also tested removing the proxy configuration completely from /etc/profile and the containerd.service files and rebooted all the nodes, but the longhorn-manager pods still fail with the same error.
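In case it helps anyone who wants to keep the proxy in place: the usual alternative to removing it entirely is to keep HTTP(S)_PROXY but extend NO_PROXY so in-cluster addresses and the cluster DNS domain never go through Squid. A rough sketch of such a containerd drop-in (the proxy URL and ranges below are assumptions for a lab like this, not values taken from my nodes); the same exclusions would apply to any proxy variables exported in /etc/profile:

```
# /etc/systemd/system/containerd.service.d/http-proxy.conf (illustrative sketch only)
[Service]
Environment="HTTP_PROXY=http://proxy.mydomain.lab:3128"
Environment="HTTPS_PROXY=http://proxy.mydomain.lab:3128"
Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,172.16.0.0/16,192.168.239.0/24,.svc,.cluster.local"
```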