k3s: Unable to start the k3s service on the master node

Hi,

After a power outage, I've noticed that my master nodes can't boot up properly. Could you please help me? Here is the startup journal from one of the masters:

Oct 11 15:44:19 localhost k3s[3494]: I1011 15:44:19.292576    3494 server.go:659] external host was not specified, using 192.168.1.242
Oct 11 15:44:19 localhost k3s[3494]: I1011 15:44:19.296031    3494 server.go:196] Version: v1.20.4+k3s1
Oct 11 15:44:19 localhost k3s[3494]: I1011 15:44:19.399832    3494 shared_informer.go:240] Waiting for caches to sync for node_authorizer
Oct 11 15:44:19 localhost k3s[3494]: I1011 15:44:19.422451    3494 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Oct 11 15:44:19 localhost k3s[3494]: I1011 15:44:19.422797    3494 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Oct 11 15:44:19 localhost k3s[3494]: I1011 15:44:19.438570    3494 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Oct 11 15:44:19 localhost k3s[3494]: I1011 15:44:19.438781    3494 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Oct 11 15:44:19 localhost k3s[3494]: I1011 15:44:19.739131    3494 instance.go:289] Using reconciler: lease
Oct 11 15:44:20 localhost k3s[3494]: I1011 15:44:20.047468    3494 rest.go:131] the default service ipfamily for this cluster is: IPv4
Oct 11 15:44:21 localhost k3s[3494]: I1011 15:44:21.093158    3494 trace.go:205] Trace[1098700547]: "List etcd3" key:/secrets,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 15:44:19.930) (total time: 1162ms):
Oct 11 15:44:21 localhost k3s[3494]: Trace[1098700547]: [1.162803764s] [1.162803764s] END
Oct 11 15:44:21 localhost k3s[3494]: time="2021-10-11T15:44:21.211867275+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:21 http: TLS handshake error from 192.168.1.247:40382: remote error: tls: bad certificate"
Oct 11 15:44:21 localhost k3s[3494]: time="2021-10-11T15:44:21.291388240+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:21 http: TLS handshake error from 192.168.1.246:32898: remote error: tls: bad certificate"
Oct 11 15:44:21 localhost k3s[3494]: I1011 15:44:21.833274    3494 trace.go:205] Trace[907858497]: "List etcd3" key:/apiextensions.k8s.io/customresourcedefinitions,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 15:44:19.477) (total time: 2355ms):
Oct 11 15:44:21 localhost k3s[3494]: Trace[907858497]: [2.355249555s] [2.355249555s] END
Oct 11 15:44:22 localhost k3s[3494]: I1011 15:44:22.348477    3494 trace.go:205] Trace[1364057824]: "List etcd3" key:/apiextensions.k8s.io/customresourcedefinitions,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 15:44:19.469) (total time: 2878ms):
Oct 11 15:44:22 localhost k3s[3494]: Trace[1364057824]: [2.878577676s] [2.878577676s] END
Oct 11 15:44:22 localhost k3s[3494]: time="2021-10-11T15:44:22.409754393+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:22 http: TLS handshake error from 192.168.1.243:57622: remote error: tls: bad certificate"
Oct 11 15:44:22 localhost k3s[3494]: time="2021-10-11T15:44:22.516127365+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:22 http: TLS handshake error from 192.168.1.244:57702: remote error: tls: bad certificate"
Oct 11 15:44:22 localhost k3s[3494]: time="2021-10-11T15:44:22.572661771+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:22 http: TLS handshake error from 192.168.1.245:53968: remote error: tls: bad certificate"
Oct 11 15:44:23 localhost k3s[3494]: W1011 15:44:23.609847    3494 genericapiserver.go:419] Skipping API batch/v2alpha1 because it has no resources.
Oct 11 15:44:23 localhost k3s[3494]: W1011 15:44:23.751256    3494 genericapiserver.go:419] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
Oct 11 15:44:23 localhost k3s[3494]: W1011 15:44:23.906221    3494 genericapiserver.go:419] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Oct 11 15:44:24 localhost k3s[3494]: W1011 15:44:24.021273    3494 genericapiserver.go:419] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Oct 11 15:44:24 localhost k3s[3494]: W1011 15:44:24.069812    3494 genericapiserver.go:419] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Oct 11 15:44:24 localhost k3s[3494]: W1011 15:44:24.150211    3494 genericapiserver.go:419] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Oct 11 15:44:24 localhost k3s[3494]: W1011 15:44:24.185404    3494 genericapiserver.go:419] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
Oct 11 15:44:24 localhost k3s[3494]: W1011 15:44:24.256981    3494 genericapiserver.go:419] Skipping API apps/v1beta2 because it has no resources.
Oct 11 15:44:24 localhost k3s[3494]: W1011 15:44:24.257151    3494 genericapiserver.go:419] Skipping API apps/v1beta1 because it has no resources.
Oct 11 15:44:24 localhost k3s[3494]: I1011 15:44:24.384797    3494 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Oct 11 15:44:24 localhost k3s[3494]: I1011 15:44:24.384989    3494 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.493498252+02:00" level=info msg="Waiting for API server to become available"
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.493904193+02:00" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --port=10251 --profiling=false --secure-port=0"
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.502425570+02:00" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.580013076+02:00" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.580396830+02:00" level=info msg="To join node to cluster: k3s agent -s https://192.168.1.242:6443 -t ${NODE_TOKEN}"
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.593023712+02:00" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.593392725+02:00" level=info msg="Run: k3s kubectl"
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.594513204+02:00" level=info msg="Module overlay was already loaded"
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.594706018+02:00" level=info msg="Module nf_conntrack was already loaded"
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.594843259+02:00" level=info msg="Module br_netfilter was already loaded"
Oct 11 15:44:24 localhost k3s[3494]: time="2021-10-11T15:44:24.594965447+02:00" level=info msg="Module iptable_nat was already loaded"
Oct 11 15:44:25 localhost k3s[3494]: time="2021-10-11T15:44:25.131056520+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:25 http: TLS handshake error from 127.0.0.1:46744: remote error: tls: bad certificate"
Oct 11 15:44:25 localhost k3s[3494]: time="2021-10-11T15:44:25.319794741+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:25 http: TLS handshake error from 127.0.0.1:46752: remote error: tls: bad certificate"
Oct 11 15:44:25 localhost k3s[3494]: time="2021-10-11T15:44:25.761972793+02:00" level=info msg="certificate CN=noldork3sm2 signed by CN=k3s-server-ca@1611348520: notBefore=2021-01-22 20:48:40 +0000 UTC notAfter=2022-10-11 13:44:25 +0000 UTC"
Oct 11 15:44:25 localhost k3s[3494]: time="2021-10-11T15:44:25.875424624+02:00" level=info msg="certificate CN=system:node:noldork3sm2,O=system:nodes signed by CN=k3s-client-ca@1611348520: notBefore=2021-01-22 20:48:40 +0000 UTC notAfter=2022-10-11 13:44:25 +0000 UTC"
Oct 11 15:44:26 localhost k3s[3494]: time="2021-10-11T15:44:26.120051459+02:00" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
Oct 11 15:44:26 localhost k3s[3494]: time="2021-10-11T15:44:26.122832317+02:00" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
Oct 11 15:44:26 localhost k3s[3494]: time="2021-10-11T15:44:26.283677581+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:26 http: TLS handshake error from 192.168.1.247:40390: remote error: tls: bad certificate"
Oct 11 15:44:26 localhost k3s[3494]: time="2021-10-11T15:44:26.422107128+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:26 http: TLS handshake error from 192.168.1.246:32906: remote error: tls: bad certificate"
Oct 11 15:44:26 localhost k3s[3494]: time="2021-10-11T15:44:26.562304764+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:26 localhost k3s[3494]: time="2021-10-11T15:44:26.869943503+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:27 localhost k3s[3494]: time="2021-10-11T15:44:27.147306089+02:00" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
Oct 11 15:44:27 localhost k3s[3494]: time="2021-10-11T15:44:27.519192677+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:27 http: TLS handshake error from 192.168.1.243:57630: remote error: tls: bad certificate"
Oct 11 15:44:27 localhost k3s[3494]: time="2021-10-11T15:44:27.612221981+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:27 http: TLS handshake error from 192.168.1.244:57710: remote error: tls: bad certificate"
Oct 11 15:44:27 localhost k3s[3494]: time="2021-10-11T15:44:27.682761192+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:27 http: TLS handshake error from 192.168.1.245:53976: remote error: tls: bad certificate"
Oct 11 15:44:27 localhost k3s[3494]: time="2021-10-11T15:44:27.902814419+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:27 localhost k3s[3494]: time="2021-10-11T15:44:27.964965849+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:28 localhost k3s[3494]: time="2021-10-11T15:44:28.087406450+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:28 localhost k3s[3494]: time="2021-10-11T15:44:28.161383137+02:00" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
Oct 11 15:44:29 localhost k3s[3494]: time="2021-10-11T15:44:29.181541574+02:00" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
Oct 11 15:44:30 localhost k3s[3494]: time="2021-10-11T15:44:30.191166566+02:00" level=info msg="Containerd is now running"
Oct 11 15:44:30 localhost k3s[3494]: time="2021-10-11T15:44:30.704789001+02:00" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
Oct 11 15:44:30 localhost k3s[3494]: time="2021-10-11T15:44:30.863986948+02:00" level=info msg="Handling backend connection request [noldork3sm2]"
Oct 11 15:44:30 localhost k3s[3494]: time="2021-10-11T15:44:30.871063103+02:00" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/912de41a65c99bc4d50bbb78e6106f3acbf3a70b8dead77b4c4ebc6755b4f9d6/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=noldork3sm2 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --register-with-taints=k3s-controlplane=true:NoSchedule --resolv-conf=/etc/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Oct 11 15:44:30 localhost k3s[3494]: time="2021-10-11T15:44:30.877528837+02:00" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=noldork3sm2 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Oct 11 15:44:30 localhost k3s[3494]: Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Oct 11 15:44:30 localhost k3s[3494]: Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Oct 11 15:44:30 localhost k3s[3494]: W1011 15:44:30.885477    3494 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
Oct 11 15:44:31 localhost systemd[1]: Started Kubernetes systemd probe.
Oct 11 15:44:31 localhost k3s[3494]: I1011 15:44:31.110173    3494 server.go:412] Version: v1.20.4+k3s1
Oct 11 15:44:31 localhost systemd[1]: run-rd3dafb4b84874f43849d3a6be96122d2.scope: Succeeded.
Oct 11 15:44:31 localhost k3s[3494]: time="2021-10-11T15:44:31.397245120+02:00" level=info msg="Node CIDR assigned for: noldork3sm2"
Oct 11 15:44:31 localhost k3s[3494]: I1011 15:44:31.399130    3494 flannel.go:92] Determining IP address of default interface
Oct 11 15:44:31 localhost k3s[3494]: I1011 15:44:31.406546    3494 flannel.go:105] Using interface with name eth0 and address 192.168.1.242
Oct 11 15:44:31 localhost k3s[3494]: I1011 15:44:31.439786    3494 kube.go:117] Waiting 10m0s for node controller to sync
Oct 11 15:44:31 localhost k3s[3494]: I1011 15:44:31.440206    3494 kube.go:300] Starting kube subnet manager
Oct 11 15:44:31 localhost k3s[3494]: time="2021-10-11T15:44:31.542731591+02:00" level=info msg="labels have already set on node: noldork3sm2"
Oct 11 15:44:31 localhost k3s[3494]: time="2021-10-11T15:44:31.622141811+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:31 http: TLS handshake error from 192.168.1.247:40418: remote error: tls: bad certificate"
Oct 11 15:44:31 localhost k3s[3494]: E1011 15:44:31.766657    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:k3s-controller" cannot list resource "namespaces" in API group "" at the cluster scope
Oct 11 15:44:31 localhost k3s[3494]: E1011 15:44:31.858410    3494 node.go:161] Failed to retrieve node info: nodes "noldork3sm2" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Oct 11 15:44:31 localhost k3s[3494]: E1011 15:44:31.899419    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
Oct 11 15:44:31 localhost k3s[3494]: time="2021-10-11T15:44:31.945105707+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:31 http: TLS handshake error from 192.168.1.246:32934: remote error: tls: bad certificate"
Oct 11 15:44:31 localhost k3s[3494]: E1011 15:44:31.969123    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:k3s-controller" cannot list resource "pods" in API group "" at the cluster scope
Oct 11 15:44:31 localhost k3s[3494]: time="2021-10-11T15:44:31.994156966+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:32 localhost k3s[3494]: I1011 15:44:32.194272    3494 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
Oct 11 15:44:32 localhost k3s[3494]: time="2021-10-11T15:44:32.326172029+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:32 localhost k3s[3494]: I1011 15:44:32.443535    3494 kube.go:124] Node controller sync successful
Oct 11 15:44:32 localhost k3s[3494]: I1011 15:44:32.449585    3494 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
Oct 11 15:44:32 localhost k3s[3494]: time="2021-10-11T15:44:32.943749512+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:32 http: TLS handshake error from 192.168.1.244:57738: remote error: tls: bad certificate"
Oct 11 15:44:33 localhost k3s[3494]: E1011 15:44:33.011722    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:k3s-controller" cannot list resource "namespaces" in API group "" at the cluster scope
Oct 11 15:44:33 localhost k3s[3494]: time="2021-10-11T15:44:33.020476322+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:33 http: TLS handshake error from 192.168.1.243:57658: remote error: tls: bad certificate"
Oct 11 15:44:33 localhost k3s[3494]: E1011 15:44:33.113077    3494 node.go:161] Failed to retrieve node info: nodes "noldork3sm2" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Oct 11 15:44:33 localhost k3s[3494]: time="2021-10-11T15:44:33.138814897+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:33 http: TLS handshake error from 192.168.1.245:54004: remote error: tls: bad certificate"
Oct 11 15:44:33 localhost k3s[3494]: time="2021-10-11T15:44:33.165318095+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:33 localhost k3s[3494]: E1011 15:44:33.316889    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
Oct 11 15:44:33 localhost k3s[3494]: time="2021-10-11T15:44:33.362522247+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:33 localhost k3s[3494]: E1011 15:44:33.504235    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:k3s-controller" cannot list resource "pods" in API group "" at the cluster scope
Oct 11 15:44:33 localhost k3s[3494]: time="2021-10-11T15:44:33.517797286+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:34 localhost k3s[3494]: time="2021-10-11T15:44:34.612139961+02:00" level=info msg="Waiting for cloudcontroller rbac role to be created"
Oct 11 15:44:35 localhost k3s[3494]: E1011 15:44:35.321816    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:k3s-controller" cannot list resource "pods" in API group "" at the cluster scope
Oct 11 15:44:35 localhost k3s[3494]: E1011 15:44:35.331517    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
Oct 11 15:44:35 localhost k3s[3494]: E1011 15:44:35.383080    3494 node.go:161] Failed to retrieve node info: nodes "noldork3sm2" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Oct 11 15:44:35 localhost k3s[3494]: E1011 15:44:35.681164    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:k3s-controller" cannot list resource "namespaces" in API group "" at the cluster scope
Oct 11 15:44:35 localhost k3s[3494]: time="2021-10-11T15:44:35.762841849+02:00" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
Oct 11 15:44:37 localhost k3s[3494]: time="2021-10-11T15:44:37.028444908+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:37 http: TLS handshake error from 192.168.1.247:40442: remote error: tls: bad certificate"
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.212074    3494 nvidia.go:81] Error reading "/sys/bus/pci/devices/": open /sys/bus/pci/devices/: no such file or directory
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.224774    3494 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.226547    3494 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.226918    3494 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu1/online: open /sys/devices/system/cpu/cpu1/online: no such file or directory
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.227352    3494 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu2/online: open /sys/devices/system/cpu/cpu2/online: no such file or directory
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.227908    3494 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu3/online: open /sys/devices/system/cpu/cpu3/online: no such file or directory
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.229521    3494 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.229719    3494 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.229883    3494 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.230040    3494 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping
Oct 11 15:44:37 localhost k3s[3494]: E1011 15:44:37.230103    3494 machine.go:72] Cannot read number of physical cores correctly, number of cores set to 0
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.231754    3494 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.231935    3494 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.232094    3494 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping
Oct 11 15:44:37 localhost k3s[3494]: W1011 15:44:37.232238    3494 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping
Oct 11 15:44:37 localhost k3s[3494]: E1011 15:44:37.232299    3494 machine.go:86] Cannot read number of sockets correctly, number of sockets set to 0
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.235732    3494 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.238433    3494 container_manager_linux.go:287] container manager verified user specified cgroup-root exists: []
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.239216    3494 container_manager_linux.go:292] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.239972    3494 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.240099    3494 container_manager_linux.go:323] [topologymanager] Initializing Topology Manager with none policy and container-level scope
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.240173    3494 container_manager_linux.go:328] Creating device plugin manager: true
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.242412    3494 kubelet.go:265] Adding pod path: /var/lib/rancher/k3s/agent/pod-manifests
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.242668    3494 kubelet.go:276] Watching apiserver
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.243392    3494 kubelet.go:453] Kubelet client is not nil
Oct 11 15:44:37 localhost k3s[3494]: time="2021-10-11T15:44:37.243663523+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.251012    3494 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.3-k3s3, apiVersion: v1alpha2
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.255086    3494 server.go:1177] Started kubelet
Oct 11 15:44:37 localhost k3s[3494]: E1011 15:44:37.287874    3494 cri_stats_provider.go:376] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
Oct 11 15:44:37 localhost k3s[3494]: E1011 15:44:37.288058    3494 kubelet.go:1292] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.289573    3494 server.go:148] Starting to listen on 0.0.0.0:10250
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.307930    3494 server.go:410] Adding debug handlers to kubelet server.
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.333512    3494 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.375940    3494 volume_manager.go:271] Starting Kubelet Volume Manager
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.401189    3494 desired_state_of_world_populator.go:142] Desired state populator starts to run
Oct 11 15:44:37 localhost k3s[3494]: time="2021-10-11T15:44:37.499390402+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:37 http: TLS handshake error from 192.168.1.246:32958: remote error: tls: bad certificate"
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.547089    3494 kubelet.go:449] kubelet nodes not sync
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.547440    3494 kubelet.go:449] kubelet nodes not sync
Oct 11 15:44:37 localhost k3s[3494]: I1011 15:44:37.712551    3494 kubelet_node_status.go:71] Attempting to register node noldork3sm2
Oct 11 15:44:38 localhost k3s[3494]: time="2021-10-11T15:44:38.376591610+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:38 http: TLS handshake error from 192.168.1.244:57762: remote error: tls: bad certificate"
Oct 11 15:44:38 localhost k3s[3494]: time="2021-10-11T15:44:38.466238574+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:38 localhost k3s[3494]: time="2021-10-11T15:44:38.521111595+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:38 http: TLS handshake error from 192.168.1.243:57682: remote error: tls: bad certificate"
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.570206    3494 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.42.2.0/24
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.594525    3494 kubelet_network.go:77] Setting Pod CIDR:  -> 10.42.2.0/24
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.616714    3494 cpu_manager.go:193] [cpumanager] starting with none policy
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.616859    3494 cpu_manager.go:194] [cpumanager] reconciling every 10s
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.617089    3494 state_mem.go:36] [cpumanager] initializing new in-memory state store
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.625968    3494 state_mem.go:88] [cpumanager] updated default cpuset: ""
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.626139    3494 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.626312    3494 policy_none.go:43] [cpumanager] none policy: Start
Oct 11 15:44:38 localhost k3s[3494]: W1011 15:44:38.634372    3494 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.639185    3494 plugin_manager.go:114] Starting Kubelet Plugin Manager
Oct 11 15:44:38 localhost k3s[3494]: time="2021-10-11T15:44:38.685001955+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:38 http: TLS handshake error from 192.168.1.245:54028: remote error: tls: bad certificate"
Oct 11 15:44:38 localhost k3s[3494]: time="2021-10-11T15:44:38.862148835+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.903514    3494 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.903914    3494 status_manager.go:158] Starting to sync pod status with apiserver
Oct 11 15:44:38 localhost k3s[3494]: I1011 15:44:38.904137    3494 kubelet.go:1829] Starting kubelet main sync loop.
Oct 11 15:44:38 localhost k3s[3494]: E1011 15:44:38.904614    3494 kubelet.go:1853] skipping pod synchronization - PLEG is not healthy: pleg has yet to be successful
Oct 11 15:44:39 localhost k3s[3494]: time="2021-10-11T15:44:39.028526715+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:39 localhost k3s[3494]: I1011 15:44:39.128733    3494 reconciler.go:157] Reconciler: start to sync state
Oct 11 15:44:39 localhost k3s[3494]: E1011 15:44:39.140997    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:k3s-controller" cannot list resource "pods" in API group "" at the cluster scope
Oct 11 15:44:39 localhost k3s[3494]: time="2021-10-11T15:44:39.175366145+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:39 localhost k3s[3494]: E1011 15:44:39.461582    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
Oct 11 15:44:39 localhost k3s[3494]: E1011 15:44:39.526817    3494 node.go:161] Failed to retrieve node info: nodes "noldork3sm2" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Oct 11 15:44:40 localhost k3s[3494]: time="2021-10-11T15:44:40.823311390+02:00" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
Oct 11 15:44:41 localhost k3s[3494]: E1011 15:44:41.640768    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:k3s-controller" cannot list resource "namespaces" in API group "" at the cluster scope
Oct 11 15:44:42 localhost k3s[3494]: time="2021-10-11T15:44:42.274294551+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:42 http: TLS handshake error from 192.168.1.247:40466: remote error: tls: bad certificate"
Oct 11 15:44:42 localhost k3s[3494]: time="2021-10-11T15:44:42.440869505+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:43 localhost k3s[3494]: time="2021-10-11T15:44:43.514190368+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:43 http: TLS handshake error from 192.168.1.246:32982: remote error: tls: bad certificate"
Oct 11 15:44:43 localhost k3s[3494]: time="2021-10-11T15:44:43.844702540+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:43 localhost k3s[3494]: time="2021-10-11T15:44:43.905892693+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:43 http: TLS handshake error from 192.168.1.244:57786: remote error: tls: bad certificate"
Oct 11 15:44:44 localhost k3s[3494]: time="2021-10-11T15:44:44.092561406+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:44 http: TLS handshake error from 192.168.1.243:57706: remote error: tls: bad certificate"
Oct 11 15:44:44 localhost k3s[3494]: time="2021-10-11T15:44:44.124497041+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:44 localhost k3s[3494]: time="2021-10-11T15:44:44.223655170+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:44 http: TLS handshake error from 192.168.1.245:54052: remote error: tls: bad certificate"
Oct 11 15:44:44 localhost k3s[3494]: time="2021-10-11T15:44:44.446750283+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:44 localhost k3s[3494]: time="2021-10-11T15:44:44.574290005+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:45 localhost k3s[3494]: time="2021-10-11T15:44:45.652838945+02:00" level=info msg="Waiting for cloudcontroller rbac role to be created"
Oct 11 15:44:45 localhost k3s[3494]: time="2021-10-11T15:44:45.888170771+02:00" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
Oct 11 15:44:47 localhost k3s[3494]: E1011 15:44:47.039253    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:k3s-controller" cannot list resource "pods" in API group "" at the cluster scope
Oct 11 15:44:47 localhost k3s[3494]: time="2021-10-11T15:44:47.470846416+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:47 http: TLS handshake error from 192.168.1.247:40490: remote error: tls: bad certificate"
Oct 11 15:44:47 localhost k3s[3494]: time="2021-10-11T15:44:47.666725195+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:48 localhost k3s[3494]: I1011 15:44:48.185482    3494 trace.go:205] Trace[1021595152]: "Create" url:/api/v1/namespaces/default/events,user-agent:k3s/v1.20.4+k3s1 (linux/arm) kubernetes/838a906,client:127.0.0.1 (11-Oct-2021 15:44:38.179) (total time: 10005ms):
Oct 11 15:44:48 localhost k3s[3494]: Trace[1021595152]: [10.005268937s] [10.005268937s] END
Oct 11 15:44:48 localhost k3s[3494]: E1011 15:44:48.192958    3494 node.go:161] Failed to retrieve node info: nodes "noldork3sm2" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Oct 11 15:44:48 localhost k3s[3494]: E1011 15:44:48.310307    3494 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"noldork3sm2.16acfdd9ed048b6f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"noldork3sm2", UID:"noldork3sm2", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"noldork3sm2"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0512dd14f31196f, ext:20715723711, loc:(*time.Location)(0x5917578)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0512dd14f31196f, ext:20715723711, loc:(*time.Location)(0x5917578)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "noldork3sm2.16acfdd9ed048b6f" is forbidden: not yet ready to handle request' (will not retry!)
Oct 11 15:44:48 localhost k3s[3494]: E1011 15:44:48.335745    3494 controller.go:187] failed to update lease, error: Put "https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/noldork3sm2?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Oct 11 15:44:48 localhost k3s[3494]: I1011 15:44:48.361750    3494 trace.go:205] Trace[1287239117]: "Create" url:/api/v1/nodes,user-agent:k3s/v1.20.4+k3s1 (linux/arm) kubernetes/838a906,client:127.0.0.1 (11-Oct-2021 15:44:38.327) (total time: 10034ms):
Oct 11 15:44:48 localhost k3s[3494]: Trace[1287239117]: [10.034476968s] [10.034476968s] END
Oct 11 15:44:48 localhost k3s[3494]: E1011 15:44:48.366844    3494 kubelet_node_status.go:93] Unable to register node "noldork3sm2" with API server: nodes "noldork3sm2" is forbidden: not yet ready to handle request
Oct 11 15:44:48 localhost k3s[3494]: I1011 15:44:48.539107    3494 trace.go:205] Trace[1957906643]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/noldork3sm2,user-agent:k3s/v1.20.4+k3s1 (linux/arm) kubernetes/838a906,client:127.0.0.1 (11-Oct-2021 15:44:38.534) (total time: 10003ms):
Oct 11 15:44:48 localhost k3s[3494]: Trace[1957906643]: [10.00398902s] [10.00398902s] END
Oct 11 15:44:48 localhost k3s[3494]: I1011 15:44:48.573959    3494 trace.go:205] Trace[349769923]: "GuaranteedUpdate etcd3" type:*coordination.Lease (11-Oct-2021 15:44:38.559) (total time: 10014ms):
Oct 11 15:44:48 localhost k3s[3494]: Trace[349769923]: [10.014266289s] [10.014266289s] END
Oct 11 15:44:48 localhost k3s[3494]: I1011 15:44:48.576147    3494 kubelet_node_status.go:71] Attempting to register node noldork3sm2
Oct 11 15:44:48 localhost k3s[3494]: E1011 15:44:48.602808    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
Oct 11 15:44:48 localhost k3s[3494]: time="2021-10-11T15:44:48.894876995+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:48 http: TLS handshake error from 192.168.1.246:33006: remote error: tls: bad certificate"
Oct 11 15:44:49 localhost k3s[3494]: time="2021-10-11T15:44:49.167525108+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:49 http: TLS handshake error from 192.168.1.244:57810: remote error: tls: bad certificate"
Oct 11 15:44:49 localhost k3s[3494]: time="2021-10-11T15:44:49.253824006+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:49 localhost k3s[3494]: time="2021-10-11T15:44:49.388213202+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:49 localhost k3s[3494]: time="2021-10-11T15:44:49.501691650+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:49 http: TLS handshake error from 192.168.1.243:57730: remote error: tls: bad certificate"
Oct 11 15:44:49 localhost k3s[3494]: time="2021-10-11T15:44:49.627186657+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:49 http: TLS handshake error from 192.168.1.245:54076: remote error: tls: bad certificate"
Oct 11 15:44:49 localhost k3s[3494]: time="2021-10-11T15:44:49.878580737+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:49 localhost k3s[3494]: time="2021-10-11T15:44:49.981068208+02:00" level=error msg="runtime core not ready"
Oct 11 15:44:50 localhost k3s[3494]: time="2021-10-11T15:44:50.968247637+02:00" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
Oct 11 15:44:52 localhost k3s[3494]: E1011 15:44:52.293654    3494 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:k3s-controller" cannot list resource "namespaces" in API group "" at the cluster scope
Oct 11 15:44:52 localhost k3s[3494]: time="2021-10-11T15:44:52.700332454+02:00" level=info msg="Cluster-Http-Server 2021/10/11 15:44:52 http: TLS handshake error from 192.168.1.247:40514: remote error: tls: bad certificate"
Oct 11 15:44:52 localhost k3s[3494]: I1011 15:44:52.776908    3494 trace.go:205] Trace[802387318]: "GuaranteedUpdate etcd3" type:*core.Node (11-Oct-2021 15:44:32.726) (total time: 20050ms):
Oct 11 15:44:52 localhost k3s[3494]: Trace[802387318]: [20.05053322s] [20.05053322s] END
Oct 11 15:44:52 localhost k3s[3494]: I1011 15:44:52.777779    3494 trace.go:205] Trace[1432266599]: "Patch" url:/api/v1/nodes/noldork3sm2/status,user-agent:k3s/v1.20.4+k3s1 (linux/arm) kubernetes/838a906,client:127.0.0.1 (11-Oct-2021 15:44:32.724) (total time: 20052ms):
Oct 11 15:44:52 localhost k3s[3494]: Trace[1432266599]: ---"About to apply patch" 10010ms (15:44:00.762)
Oct 11 15:44:52 localhost k3s[3494]: Trace[1432266599]: [20.05269579s] [20.05269579s] END
Oct 11 15:44:52 localhost k3s[3494]: time="2021-10-11T15:44:52.835532430+02:00" level=fatal msg="flannel exited: failed to acquire lease: nodes \"noldork3sm2\" is forbidden: not yet ready to handle request"
Oct 11 15:44:53 localhost systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Oct 11 15:44:53 localhost systemd[1]: k3s.service: Failed with result 'exit-code'.
Oct 11 15:44:53 localhost systemd[1]: Failed to start Lightweight Kubernetes.

It’s a Raspberry Pi cluster running v1.20.4+k3s1, with two master nodes and five workers.

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 24 (9 by maintainers)

Most upvoted comments

SD cards are pretty easy to burn out with the sort of repetitive writes that the Kubernetes datastore performs. As they fill up and/or exhaust their write cycles, performance dips while the controller hunts for fresh cells that can still be written to. In general I recommend against using them for anything critical. If this is a Raspberry Pi, I usually recommend picking up a USB SATA or NVMe device to use for storage on the server node.
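A rough way to gauge whether the card's write path has degraded is to time a burst of small synchronous writes, which mimics the fsync-heavy pattern of the Kubernetes datastore. This is a hedged sketch, not part of the original thread; the scratch file path and sizes are arbitrary, and for a meaningful result the file should live on the filesystem that backs /var/lib/rancher/k3s:

```shell
# Write 4 MiB in 4 KiB blocks, syncing each block to the device
# (oflag=dsync), then report throughput. A healthy card or SSD should
# sustain this at MB/s; a worn SD card often drops to KB/s.
dd if=/dev/zero of=/tmp/k3s-io-test bs=4k count=1024 oflag=dsync 2>&1 | tail -n 1

# Clean up the scratch file.
rm -f /tmp/k3s-io-test
```

A very low figure here would support the wear theory and suggest migrating the datastore to external USB storage before rejoining the node.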