k3s server crashes on startup with ipset v7.1: Cannot open session to kernel.
Version: k3s version v0.10.0 (f9888ca3)
Describe the bug
k3s server crashes on startup with “ipset v7.1: Cannot open session to kernel.”
To Reproduce
Run the install.sh script, then start the server with k3s server.
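The steps above, sketched as shell. Assumption (not stated in the report): the install.sh referred to is the one the k3s project serves from get.k3s.io. The guard keeps the sketch from installing anything unless you opt in, since the install needs root and network access:

```shell
#!/bin/sh
# Reproduction sketch. Does nothing unless RUN_REPRO=1 is set.
if [ "${RUN_REPRO:-0}" = "1" ]; then
  # install.sh as served by the k3s project (assumption: the same script
  # this report refers to)
  curl -sfL https://get.k3s.io | sh -
  # then start the server; on this board it crashes with
  # "ipset ...: Cannot open session to kernel."
  k3s server
else
  echo "dry run: set RUN_REPRO=1 to install k3s and start the server"
fi
```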
Expected behavior
k3s starts and keeps running.

Actual behavior
k3s crashes on startup.

Additional context
The device is a Coral Dev Board (aarch64) with 1 GB RAM and 4x 1.5 GHz cores.
OS: Mendel GNU/Linux 3 (Chef)
Kernel: Linux xenial-shrimp 4.9.51-imx

terminal output:
# k3s server
INFO[2019-10-25T16:14:23.021874359Z] Starting k3s v0.10.0 (f9888ca3)
INFO[2019-10-25T16:14:23.030106985Z] Kine listening on unix://kine.sock
INFO[2019-10-25T16:14:23.034439838Z] Fetching bootstrap data from etcd
INFO[2019-10-25T16:14:23.112319123Z] Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
Flag --basic-auth-file has been deprecated, Basic authentication mode is deprecated and will be removed in a future release. It is not recommended for production environments.
I1025 16:14:23.114883 10226 server.go:650] external host was not specified, using 10.100.34.91
I1025 16:14:23.116001 10226 server.go:162] Version: v1.16.2-k3s.1
I1025 16:14:23.139064 10226 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I1025 16:14:23.139402 10226 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I1025 16:14:23.142489 10226 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I1025 16:14:23.142730 10226 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I1025 16:14:23.264858 10226 master.go:259] Using reconciler: lease
I1025 16:14:23.359099 10226 rest.go:115] the default service ipfamily for this cluster is: IPv4
W1025 16:14:24.544964 10226 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
W1025 16:14:24.621309 10226 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1025 16:14:24.699489 10226 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1025 16:14:24.714405 10226 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1025 16:14:24.763767 10226 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1025 16:14:24.848953 10226 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1025 16:14:24.849084 10226 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1025 16:14:24.891012 10226 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I1025 16:14:24.891128 10226 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
INFO[2019-10-25T16:14:24.912033055Z] Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0
INFO[2019-10-25T16:14:24.916101308Z] Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true
I1025 16:14:24.936095 10226 controllermanager.go:161] Version: v1.16.2-k3s.1
I1025 16:14:24.937684 10226 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I1025 16:14:24.939704 10226 server.go:143] Version: v1.16.2-k3s.1
I1025 16:14:24.940119 10226 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W1025 16:14:24.945219 10226 authorization.go:47] Authorization is disabled
W1025 16:14:24.945474 10226 authentication.go:79] Authentication is disabled
I1025 16:14:24.945646 10226 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I1025 16:14:34.069325 10226 secure_serving.go:123] Serving securely on 127.0.0.1:6444
I1025 16:14:34.069517 10226 available_controller.go:383] Starting AvailableConditionController
I1025 16:14:34.069590 10226 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1025 16:14:34.069800 10226 controller.go:81] Starting OpenAPI AggregationController
I1025 16:14:34.071302 10226 apiservice_controller.go:94] Starting APIServiceRegistrationController
I1025 16:14:34.071413 10226 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1025 16:14:34.071511 10226 autoregister_controller.go:140] Starting autoregister controller
I1025 16:14:34.071539 10226 cache.go:32] Waiting for caches to sync for autoregister controller
I1025 16:14:34.072598 10226 crd_finalizer.go:274] Starting CRDFinalizer
I1025 16:14:34.075837 10226 controller.go:85] Starting OpenAPI controller
I1025 16:14:34.076246 10226 customresource_discovery_controller.go:208] Starting DiscoveryController
I1025 16:14:34.076721 10226 naming_controller.go:288] Starting NamingConditionController
I1025 16:14:34.077002 10226 establishing_controller.go:73] Starting EstablishingController
I1025 16:14:34.077297 10226 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I1025 16:14:34.077583 10226 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1025 16:14:34.137103 10226 crdregistration_controller.go:111] Starting crd-autoregister controller
I1025 16:14:34.137298 10226 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
E1025 16:14:34.262650 10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1025 16:14:34.263201 10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1025 16:14:34.263485 10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1025 16:14:34.263754 10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1025 16:14:34.264168 10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1025 16:14:34.264411 10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1025 16:14:34.264653 10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1025 16:14:34.264884 10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1025 16:14:34.265110 10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1025 16:14:34.265356 10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1025 16:14:34.265586 10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
INFO[2019-10-25T16:14:34.274158675Z] Running cloud-controller-manager --allocate-node-cidrs=true --allow-untagged-cloud=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m --secure-port=0
Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances.
I1025 16:14:34.278846 10226 cache.go:39] Caches are synced for AvailableConditionController controller
I1025 16:14:34.291702 10226 controllermanager.go:117] Version: v1.16.2-k3s.1
W1025 16:14:34.291862 10226 controllermanager.go:129] detected a cluster without a ClusterID. A ClusterID will be required in the future. Please tag your cluster to avoid any future issues
E1025 16:14:34.300465 10226 core.go:85] Failed to start service controller: the cloud provider does not support external load balancers
W1025 16:14:34.300612 10226 controllermanager.go:241] Skipping "service"
W1025 16:14:34.300652 10226 core.go:103] configure-cloud-routes is set, but cloud provider does not support routes. Will not configure cloud provider routes.
W1025 16:14:34.300685 10226 controllermanager.go:241] Skipping "route"
I1025 16:14:34.307486 10226 node_controller.go:71] Sending events to api server.
I1025 16:14:34.307843 10226 controllermanager.go:244] Started "cloud-node"
INFO[2019-10-25T16:14:34.311086630Z] Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz
INFO[2019-10-25T16:14:34.312697756Z] Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml
INFO[2019-10-25T16:14:34.313479078Z] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml
INFO[2019-10-25T16:14:34.313960639Z] Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml
INFO[2019-10-25T16:14:34.314266400Z] Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml
INFO[2019-10-25T16:14:34.314537841Z] Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml
I1025 16:14:34.317231 10226 node_lifecycle_controller.go:77] Sending events to api server
I1025 16:14:34.321917 10226 controllermanager.go:244] Started "cloud-node-lifecycle"
E1025 16:14:34.337341 10226 controller.go:158] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I1025 16:14:34.338303 10226 shared_informer.go:204] Caches are synced for crd-autoregister
I1025 16:14:34.373163 10226 cache.go:39] Caches are synced for autoregister controller
I1025 16:14:34.374000 10226 cache.go:39] Caches are synced for APIServiceRegistrationController controller
INFO[2019-10-25T16:14:34.511729298Z] Listening on :6443
INFO[2019-10-25T16:14:34.716317338Z] Starting k3s.cattle.io/v1, Kind=Addon controller
INFO[2019-10-25T16:14:34.716715139Z] Starting k3s.cattle.io/v1, Kind=ListenerConfig controller
INFO[2019-10-25T16:14:34.717631582Z] Node token is available at /var/lib/rancher/k3s/server/node-token
INFO[2019-10-25T16:14:34.717768863Z] To join node to cluster: k3s agent -s https://10.100.34.91:6443 -t ${NODE_TOKEN}
INFO[2019-10-25T16:14:34.716496739Z] Waiting for master node startup: node "" not found
I1025 16:14:34.882087 10226 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
INFO[2019-10-25T16:14:34.932885375Z] Starting /v1, Kind=Endpoints controller
INFO[2019-10-25T16:14:34.933278977Z] Starting helm.cattle.io/v1, Kind=HelmChart controller
INFO[2019-10-25T16:14:34.933381937Z] Starting /v1, Kind=Pod controller
INFO[2019-10-25T16:14:34.933412057Z] Starting batch/v1, Kind=Job controller
INFO[2019-10-25T16:14:34.933440377Z] Starting /v1, Kind=Node controller
INFO[2019-10-25T16:14:34.933467257Z] Starting /v1, Kind=Service controller
I1025 16:14:34.975419 10226 controller.go:606] quota admission added evaluator for: helmcharts.helm.cattle.io
INFO[2019-10-25T16:14:35.054199275Z] Wrote kubeconfig /etc/rancher/k3s/k3s.yaml
INFO[2019-10-25T16:14:35.054269715Z] Run: k3s kubectl
INFO[2019-10-25T16:14:35.054290955Z] k3s is up and running
I1025 16:14:35.069890 10226 controller.go:107] OpenAPI AggregationController: Processing item
I1025 16:14:35.069995 10226 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1025 16:14:35.070086 10226 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1025 16:14:35.103700 10226 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
INFO[2019-10-25T16:14:35.251017050Z] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log
INFO[2019-10-25T16:14:35.251356531Z] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
INFO[2019-10-25T16:14:35.271350034Z] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused"
INFO[2019-10-25T16:14:36.272515804Z] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused"
I1025 16:14:36.596223 10226 plugins.go:100] No cloud provider specified.
I1025 16:14:36.605289 10226 shared_informer.go:197] Waiting for caches to sync for tokens
I1025 16:14:36.707957 10226 shared_informer.go:204] Caches are synced for tokens
I1025 16:14:36.944494 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I1025 16:14:36.944654 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I1025 16:14:36.944786 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
I1025 16:14:36.944908 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
I1025 16:14:36.944983 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
I1025 16:14:36.945067 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I1025 16:14:36.945200 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I1025 16:14:36.945314 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for listenerconfigs.k3s.cattle.io
I1025 16:14:36.945398 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
I1025 16:14:36.945479 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
I1025 16:14:36.945560 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I1025 16:14:36.946024 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
I1025 16:14:36.946146 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
I1025 16:14:36.946224 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io
I1025 16:14:36.946291 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
I1025 16:14:36.946386 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I1025 16:14:36.946451 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
I1025 16:14:36.946520 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for addons.k3s.cattle.io
I1025 16:14:36.946725 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
I1025 16:14:36.946869 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
I1025 16:14:36.946958 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
I1025 16:14:36.947054 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
I1025 16:14:36.947141 10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I1025 16:14:36.947242 10226 controllermanager.go:534] Started "resourcequota"
I1025 16:14:36.947273 10226 resource_quota_controller.go:271] Starting resource quota controller
I1025 16:14:36.947308 10226 shared_informer.go:197] Waiting for caches to sync for resource quota
I1025 16:14:36.947383 10226 resource_quota_monitor.go:303] QuotaMonitor running
I1025 16:14:36.996017 10226 controllermanager.go:534] Started "disruption"
I1025 16:14:36.996115 10226 disruption.go:333] Starting disruption controller
I1025 16:14:36.996542 10226 shared_informer.go:197] Waiting for caches to sync for disruption
I1025 16:14:37.021092 10226 controllermanager.go:534] Started "clusterrole-aggregation"
I1025 16:14:37.021542 10226 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
I1025 16:14:37.021642 10226 shared_informer.go:197] Waiting for caches to sync for ClusterRoleAggregator
I1025 16:14:37.045776 10226 controllermanager.go:534] Started "endpoint"
I1025 16:14:37.045847 10226 endpoints_controller.go:176] Starting endpoint controller
I1025 16:14:37.045922 10226 shared_informer.go:197] Waiting for caches to sync for endpoint
I1025 16:14:37.144303 10226 controllermanager.go:534] Started "horizontalpodautoscaling"
I1025 16:14:37.144579 10226 horizontal.go:156] Starting HPA controller
I1025 16:14:37.144664 10226 shared_informer.go:197] Waiting for caches to sync for HPA
E1025 16:14:37.169363 10226 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1025 16:14:37.169418 10226 controllermanager.go:526] Skipping "service"
I1025 16:14:37.193822 10226 controllermanager.go:534] Started "serviceaccount"
I1025 16:14:37.193916 10226 serviceaccounts_controller.go:116] Starting service account controller
I1025 16:14:37.194277 10226 shared_informer.go:197] Waiting for caches to sync for service account
I1025 16:14:37.223736 10226 controllermanager.go:534] Started "job"
I1025 16:14:37.223827 10226 job_controller.go:143] Starting job controller
I1025 16:14:37.223907 10226 shared_informer.go:197] Waiting for caches to sync for job
I1025 16:14:37.256308 10226 controllermanager.go:534] Started "replicaset"
W1025 16:14:37.256417 10226 controllermanager.go:513] "endpointslice" is disabled
I1025 16:14:37.256367 10226 replica_set.go:182] Starting replicaset controller
I1025 16:14:37.256669 10226 shared_informer.go:197] Waiting for caches to sync for ReplicaSet
INFO[2019-10-25T16:14:37.278092427Z] module br_netfilter was already loaded
INFO[2019-10-25T16:14:37.278611548Z] module overlay was already loaded
INFO[2019-10-25T16:14:37.278732149Z] module nf_conntrack was already loaded
INFO[2019-10-25T16:14:37.298179169Z] Connecting to proxy url="wss://10.100.34.91:6443/v1-k3s/connect"
INFO[2019-10-25T16:14:37.306763036Z] Handling backend connection request [xenial-shrimp]
WARN[2019-10-25T16:14:37.310168407Z] Disabling CPU quotas due to missing cpu.cfs_period_us
INFO[2019-10-25T16:14:37.310702288Z] Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/807a6004690ef344efe993a6867bb3dacbb1ffaa584eedbe54bb206743c599cf/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=xenial-shrimp --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/systemd/user.slice/user-1000.slice --node-labels= --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/systemd/user.slice/user-1000.slice --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key
W1025 16:14:37.311320 10226 server.go:208] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I1025 16:14:37.390558 10226 server.go:406] Version: v1.16.2-k3s.1
W1025 16:14:37.428539 10226 proxier.go:597] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1025 16:14:37.451726 10226 proxier.go:597] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
I1025 16:14:37.485111 10226 controllermanager.go:534] Started "ttl"
I1025 16:14:37.485757 10226 ttl_controller.go:116] Starting TTL controller
I1025 16:14:37.485906 10226 shared_informer.go:197] Waiting for caches to sync for TTL
I1025 16:14:37.486195 10226 flannel.go:91] Determining IP address of default interface
I1025 16:14:37.486957 10226 flannel.go:104] Using interface with name eth0 and address 10.100.34.91
I1025 16:14:37.491210 10226 kube.go:117] Waiting 10m0s for node controller to sync
I1025 16:14:37.491521 10226 kube.go:300] Starting kube subnet manager
W1025 16:14:37.499783 10226 proxier.go:597] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
I1025 16:14:37.522530 10226 server.go:637] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
I1025 16:14:37.523327 10226 container_manager_linux.go:272] container manager verified user specified cgroup-root exists: []
I1025 16:14:37.523442 10226 container_manager_linux.go:277] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/user.slice/user-1000.slice SystemCgroupsName: KubeletCgroupsName:/systemd/user.slice/user-1000.slice ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
I1025 16:14:37.523796 10226 fake_topology_manager.go:29] [fake topologymanager] NewFakeManager
I1025 16:14:37.523871 10226 container_manager_linux.go:312] Creating device plugin manager: true
I1025 16:14:37.523994 10226 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider: &{kubelet.sock /var/lib/rancher/k3s/agent/kubelet/device-plugins/ map[] {0 0} <nil> {{} [0 0 0]} 0x25f7550 0x614f9b8 0x25f7d20 map[] map[] map[] map[] map[] 0x40066bec00 [] 0x614f9b8}
I1025 16:14:37.524147 10226 state_mem.go:36] [cpumanager] initializing new in-memory state store
I1025 16:14:37.524464 10226 state_mem.go:84] [cpumanager] updated default cpuset: ""
I1025 16:14:37.524548 10226 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
I1025 16:14:37.524634 10226 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider: &{{0 0} 0x614f9b8 10000000000 0x40077748a0 <nil> <nil> <nil> <nil> map[]}
I1025 16:14:37.524946 10226 kubelet.go:312] Watching apiserver
W1025 16:14:37.559316 10226 proxier.go:597] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
INFO[2019-10-25T16:14:37.602768081Z] addresses labels has already been set succesfully on node: xenial-shrimp
I1025 16:14:37.636676 10226 kuberuntime_manager.go:207] Container runtime containerd initialized, version: v1.3.0-k3s.2, apiVersion: v1alpha2
I1025 16:14:37.637926 10226 server.go:1066] Started kubelet
I1025 16:14:37.653004 10226 node_ipam_controller.go:94] Sending events to api server.
I1025 16:14:37.653455 10226 server.go:145] Starting to listen on 0.0.0.0:10250
I1025 16:14:37.656905 10226 server.go:354] Adding debug handlers to kubelet server.
I1025 16:14:37.662016 10226 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
I1025 16:14:37.662189 10226 status_manager.go:156] Starting to sync pod status with apiserver
I1025 16:14:37.662395 10226 kubelet.go:1822] Starting kubelet main sync loop.
I1025 16:14:37.662486 10226 kubelet.go:1839] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I1025 16:14:37.667192 10226 volume_manager.go:249] Starting Kubelet Volume Manager
I1025 16:14:37.698503 10226 desired_state_of_world_populator.go:131] Desired state populator starts to run
E1025 16:14:37.735242 10226 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
E1025 16:14:37.735463 10226 kubelet.go:1302] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
I1025 16:14:37.787227 10226 kubelet.go:1839] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I1025 16:14:37.787396 10226 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
I1025 16:14:37.808085 10226 kuberuntime_manager.go:961] updating runtime config through cri with podcidr 10.42.0.0/24
I1025 16:14:37.818028 10226 node.go:135] Successfully retrieved node IP: 10.100.34.91
I1025 16:14:37.818124 10226 server_others.go:150] Using iptables Proxier.
I1025 16:14:37.822690 10226 kubelet_network.go:77] Setting Pod CIDR: -> 10.42.0.0/24
I1025 16:14:37.826993 10226 kubelet_node_status.go:72] Attempting to register node xenial-shrimp
E1025 16:14:37.830120 10226 kubelet_network_linux.go:57] Failed to ensure marking rule for KUBE-MARK-DROP: error appending rule: exit status 1: iptables: No chain/target/match by that name.
I1025 16:14:37.856071 10226 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
FATA[2019-10-25T16:14:37.861994691Z] ipset v6.30: Cannot open session to kernel.
syslog:
k3s[11482]: time="2019-10-25T14:00:07.518271281Z" level=info msg="Starting k3s v0.10.0 (f9888ca3)"
Oct 25 14:00:07 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:07.534233287Z" level=info msg="Kine listening on unix://kine.sock"
Oct 25 14:00:07 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:07.535872843Z" level=info msg="Fetching bootstrap data from etcd"
Oct 25 14:00:07 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:07.632438434Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Oct 25 14:00:07 xenial-shrimp k3s[11482]: Flag --basic-auth-file has been deprecated, Basic authentication mode is deprecated and will be removed in a future release. It is not recommended for production environments.
Oct 25 14:00:07 xenial-shrimp k3s[11482]: I1025 14:00:07.637571 11482 server.go:650] external host was not specified, using 10.100.36.171
Oct 25 14:00:07 xenial-shrimp k3s[11482]: I1025 14:00:07.639564 11482 server.go:162] Version: v1.16.2-k3s.1
Oct 25 14:00:07 xenial-shrimp k3s[11482]: I1025 14:00:07.660236 11482 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
Oct 25 14:00:07 xenial-shrimp k3s[11482]: I1025 14:00:07.661276 11482 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
Oct 25 14:00:07 xenial-shrimp k3s[11482]: I1025 14:00:07.664870 11482 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
Oct 25 14:00:07 xenial-shrimp k3s[11482]: I1025 14:00:07.665765 11482 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
Oct 25 14:00:07 xenial-shrimp k3s[11482]: I1025 14:00:07.774506 11482 master.go:259] Using reconciler: lease
Oct 25 14:00:07 xenial-shrimp k3s[11482]: I1025 14:00:07.874219 11482 rest.go:115] the default service ipfamily for this cluster is: IPv4
Oct 25 14:00:08 xenial-shrimp k3s[11482]: W1025 14:00:08.987641 11482 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
Oct 25 14:00:09 xenial-shrimp k3s[11482]: W1025 14:00:09.111903 11482 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Oct 25 14:00:09 xenial-shrimp k3s[11482]: W1025 14:00:09.190186 11482 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Oct 25 14:00:09 xenial-shrimp k3s[11482]: W1025 14:00:09.204780 11482 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Oct 25 14:00:09 xenial-shrimp k3s[11482]: W1025 14:00:09.253145 11482 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Oct 25 14:00:09 xenial-shrimp k3s[11482]: W1025 14:00:09.335838 11482 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
Oct 25 14:00:09 xenial-shrimp k3s[11482]: W1025 14:00:09.335960 11482 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
Oct 25 14:00:09 xenial-shrimp k3s[11482]: I1025 14:00:09.376907 11482 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
Oct 25 14:00:09 xenial-shrimp k3s[11482]: I1025 14:00:09.377030 11482 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
Oct 25 14:00:09 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:09.403295107Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Oct 25 14:00:09 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:09.411088010Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Oct 25 14:00:09 xenial-shrimp k3s[11482]: I1025 14:00:09.424191 11482 controllermanager.go:161] Version: v1.16.2-k3s.1
Oct 25 14:00:09 xenial-shrimp k3s[11482]: I1025 14:00:09.425660 11482 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Oct 25 14:00:09 xenial-shrimp k3s[11482]: I1025 14:00:09.430550 11482 server.go:143] Version: v1.16.2-k3s.1
Oct 25 14:00:09 xenial-shrimp k3s[11482]: I1025 14:00:09.430823 11482 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Oct 25 14:00:09 xenial-shrimp k3s[11482]: W1025 14:00:09.435856 11482 authorization.go:47] Authorization is disabled
Oct 25 14:00:09 xenial-shrimp k3s[11482]: W1025 14:00:09.435955 11482 authentication.go:79] Authentication is disabled
Oct 25 14:00:09 xenial-shrimp k3s[11482]: I1025 14:00:09.435984 11482 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.471459 11482 secure_serving.go:123] Serving securely on 127.0.0.1:6444
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.472602 11482 apiservice_controller.go:94] Starting APIServiceRegistrationController
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.472693 11482 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.472760 11482 available_controller.go:383] Starting AvailableConditionController
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.472777 11482 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.473221 11482 crd_finalizer.go:274] Starting CRDFinalizer
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.476504 11482 autoregister_controller.go:140] Starting autoregister controller
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.476546 11482 cache.go:32] Waiting for caches to sync for autoregister controller
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.484739 11482 controller.go:85] Starting OpenAPI controller
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.484935 11482 customresource_discovery_controller.go:208] Starting DiscoveryController
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.485015 11482 naming_controller.go:288] Starting NamingConditionController
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.485083 11482 establishing_controller.go:73] Starting EstablishingController
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.485149 11482 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.485213 11482 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.486620 11482 crdregistration_controller.go:111] Starting crd-autoregister controller
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.486650 11482 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.476505 11482 controller.go:81] Starting OpenAPI AggregationController
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.546404 11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.552092 11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.553287 11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.554298 11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.555111 11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.555858 11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.556584 11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.557310 11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.558024 11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.558788 11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.559527 11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:18.619392084Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --allow-untagged-cloud=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m --secure-port=0"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances.
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.651785 11482 controllermanager.go:117] Version: v1.16.2-k3s.1
Oct 25 14:00:18 xenial-shrimp k3s[11482]: W1025 14:00:18.651912 11482 controllermanager.go:129] detected a cluster without a ClusterID. A ClusterID will be required in the future. Please tag your cluster to avoid any future issues
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.660305 11482 node_lifecycle_controller.go:77] Sending events to api server
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.660483 11482 controllermanager.go:244] Started "cloud-node-lifecycle"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:18.666401499Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.668609 11482 core.go:85] Failed to start service controller: the cloud provider does not support external load balancers
Oct 25 14:00:18 xenial-shrimp k3s[11482]: W1025 14:00:18.668709 11482 controllermanager.go:241] Skipping "service"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: W1025 14:00:18.668737 11482 core.go:103] configure-cloud-routes is set, but cloud provider does not support routes. Will not configure cloud provider routes.
Oct 25 14:00:18 xenial-shrimp k3s[11482]: W1025 14:00:18.668755 11482 controllermanager.go:241] Skipping "route"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.675272 11482 node_controller.go:71] Sending events to api server.
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.675448 11482 controllermanager.go:244] Started "cloud-node"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:18.676765996Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:18.677583555Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:18.678223033Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:18.678727872Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:18.679033271Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.682062 11482 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.683725 11482 cache.go:39] Caches are synced for autoregister controller
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.695167 11482 cache.go:39] Caches are synced for AvailableConditionController controller
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.697338 11482 shared_informer.go:204] Caches are synced for crd-autoregister
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.724270 11482 node_controller.go:148] NodeAddress: Error fetching by providerID: unimplemented Error fetching by NodeName: Failed to find node xenial-shrimp: node "xenial-shrimp" not found
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.841435 11482 controller.go:158] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Oct 25 14:00:18 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:18.855913719Z" level=info msg="Listening on :6443"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.157591728Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.158539366Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.162847477Z" level=info msg="Waiting for master node startup: node \"\" not found"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.164638593Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.164783312Z" level=info msg="To join node to cluster: k3s agent -s https://10.100.36.171:6443 -t ${NODE_TOKEN}"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: I1025 14:00:19.271677 11482 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.362110753Z" level=info msg="Starting /v1, Kind=Endpoints controller"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.362448673Z" level=info msg="Starting /v1, Kind=Pod controller"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.362170753Z" level=info msg="Starting /v1, Kind=Service controller"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.362207353Z" level=info msg="Starting /v1, Kind=Node controller"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.363283751Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.365188986Z" level=info msg="Starting batch/v1, Kind=Job controller"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: I1025 14:00:19.416274 11482 controller.go:606] quota admission added evaluator for: helmcharts.helm.cattle.io
Oct 25 14:00:19 xenial-shrimp k3s[11482]: I1025 14:00:19.471660 11482 controller.go:107] OpenAPI AggregationController: Processing item
Oct 25 14:00:19 xenial-shrimp k3s[11482]: I1025 14:00:19.472659 11482 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Oct 25 14:00:19 xenial-shrimp k3s[11482]: I1025 14:00:19.473418 11482 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Oct 25 14:00:19 xenial-shrimp k3s[11482]: I1025 14:00:19.492432 11482 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.623724811Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.623862211Z" level=info msg="Run: k3s kubectl"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.623888611Z" level=info msg="k3s is up and running"
Oct 25 14:00:19 xenial-shrimp systemd[1]: Started Lightweight Kubernetes.
Oct 25 14:00:19 xenial-shrimp k3s[11482]: I1025 14:00:19.775366 11482 controller.go:606] quota admission added evaluator for: events.events.k8s.io
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.821790851Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.823209967Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.824555765Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
Oct 25 14:00:20 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:20.829722844Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
Oct 25 14:00:20 xenial-shrimp k3s[11482]: I1025 14:00:20.943276 11482 plugins.go:100] No cloud provider specified.
Oct 25 14:00:20 xenial-shrimp k3s[11482]: W1025 14:00:20.950266 11482 controllermanager.go:513] "bootstrapsigner" is disabled
Oct 25 14:00:20 xenial-shrimp k3s[11482]: W1025 14:00:20.950403 11482 controllermanager.go:513] "tokencleaner" is disabled
Oct 25 14:00:20 xenial-shrimp k3s[11482]: I1025 14:00:20.951526 11482 shared_informer.go:197] Waiting for caches to sync for tokens
Oct 25 14:00:20 xenial-shrimp k3s[11482]: I1025 14:00:20.999731 11482 node_ipam_controller.go:94] Sending events to api server.
Oct 25 14:00:21 xenial-shrimp k3s[11482]: I1025 14:00:21.052406 11482 shared_informer.go:204] Caches are synced for tokens
Oct 25 14:00:21 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:21.834017762Z" level=info msg="module br_netfilter was already loaded"
Oct 25 14:00:21 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:21.834309001Z" level=info msg="module overlay was already loaded"
Oct 25 14:00:21 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:21.834354961Z" level=info msg="module nf_conntrack was already loaded"
Oct 25 14:00:21 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:21.853657758Z" level=info msg="Connecting to proxy" url="wss://10.100.36.171:6443/v1-k3s/connect"
Oct 25 14:00:21 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:21.861900779Z" level=info msg="Handling backend connection request [xenial-shrimp]"
Oct 25 14:00:21 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:21.865070572Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
Oct 25 14:00:21 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:21.866219330Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/807a6004690ef344efe993a6867bb3dacbb1ffaa584eedbe54bb206743c599cf/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=xenial-shrimp --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/systemd/system.slice --node-labels= --read-only-port=0 --resolv-conf=/etc/resolv.conf --runtime-cgroups=/systemd/system.slice --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Oct 25 14:00:21 xenial-shrimp k3s[11482]: W1025 14:00:21.912832 11482 server.go:208] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
Oct 25 14:00:21 xenial-shrimp systemd[1]: Started Kubernetes systemd probe.
Oct 25 14:00:21 xenial-shrimp k3s[11482]: I1025 14:00:21.958174 11482 server.go:406] Version: v1.16.2-k3s.1
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.021837 11482 flannel.go:91] Determining IP address of default interface
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.028090 11482 flannel.go:104] Using interface with name wlan0 and address 10.100.36.171
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.036694 11482 kube.go:117] Waiting 10m0s for node controller to sync
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.036931 11482 kube.go:300] Starting kube subnet manager
Oct 25 14:00:22 xenial-shrimp k3s[11482]: W1025 14:00:22.045977 11482 proxier.go:597] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
Oct 25 14:00:22 xenial-shrimp k3s[11482]: W1025 14:00:22.102723 11482 proxier.go:597] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.140744 11482 server.go:637] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.142921 11482 container_manager_linux.go:272] container manager verified user specified cgroup-root exists: []
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.143976 11482 container_manager_linux.go:277] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/system.slice SystemCgroupsName: KubeletCgroupsName:/systemd/system.slice ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.145583 11482 fake_topology_manager.go:29] [fake topologymanager] NewFakeManager
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.146489 11482 container_manager_linux.go:312] Creating device plugin manager: true
Oct 25 14:00:22 xenial-shrimp k3s[11482]: W1025 14:00:22.146846 11482 proxier.go:597] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.147335 11482 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider: &{kubelet.sock /var/lib/rancher/k3s/agent/kubelet/device-plugins/ map[] {0 0} <nil> {{} [0 0 0]} 0x25f7550 0x614f9b8 0x25f7d20 map[] map[] map[] map[] map[] 0x4006695f50 [] 0x614f9b8}
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.166494 11482 state_mem.go:36] [cpumanager] initializing new in-memory state store
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.177055 11482 state_mem.go:84] [cpumanager] updated default cpuset: ""
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.178936 11482 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.180740 11482 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider: &{{0 0} 0x614f9b8 10000000000 0x4005e5fe60 <nil> <nil> <nil> <nil> map[]}
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.187181 11482 kubelet.go:312] Watching apiserver
Oct 25 14:00:22 xenial-shrimp k3s[11482]: W1025 14:00:22.221438 11482 proxier.go:597] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
Oct 25 14:00:22 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:22.268661829Z" level=info msg="addresses labels has already been set succesfully on node: xenial-shrimp"
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.331384 11482 kuberuntime_manager.go:207] Container runtime containerd initialized, version: v1.3.0-k3s.2, apiVersion: v1alpha2
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.332410 11482 server.go:1066] Started kubelet
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.377435 11482 server.go:145] Starting to listen on 0.0.0.0:10250
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.400269 11482 server.go:354] Adding debug handlers to kubelet server.
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.404303 11482 node.go:135] Successfully retrieved node IP: 10.100.36.171
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.404439 11482 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.404454 11482 server_others.go:150] Using iptables Proxier.
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.404483 11482 status_manager.go:156] Starting to sync pod status with apiserver
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.404523 11482 kubelet.go:1822] Starting kubelet main sync loop.
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.404572 11482 kubelet.go:1839] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Oct 25 14:00:22 xenial-shrimp k3s[11482]: E1025 14:00:22.404975 11482 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
Oct 25 14:00:22 xenial-shrimp k3s[11482]: E1025 14:00:22.405016 11482 kubelet.go:1302] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.409566 11482 volume_manager.go:249] Starting Kubelet Volume Manager
Oct 25 14:00:22 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:22.468459462Z" level=fatal msg="ipset v7.1: Cannot open session to kernel.\n"
Oct 25 14:00:22 xenial-shrimp systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Oct 25 14:00:22 xenial-shrimp systemd[1]: k3s.service: Unit entered failed state.
Oct 25 14:00:22 xenial-shrimp systemd[1]: k3s.service: Failed with result 'exit-code'.
Oct 25 14:00:27 xenial-shrimp systemd[1]: k3s.service: Service hold-off time over, scheduling restart.
Oct 25 14:00:27 xenial-shrimp systemd[1]: Stopped Lightweight Kubernetes.
I tried replacing ipset and iptables with different versions, but this results in the same error.
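Since swapping userspace binaries didn't help, the problem is likely on the kernel side: "Cannot open session to kernel" from ipset usually means the running kernel lacks the netfilter ip_set support (`CONFIG_IP_SET` / `CONFIG_NFNETLINK`), which is plausible for a vendor 4.9.51-imx kernel. A rough probe (run as root for the modprobe step; config file locations vary by distro):

```shell
# Check whether the running kernel can load the ip_set module.
# Failure here suggests the kernel was built without ip_set support.
if modprobe ip_set 2>/dev/null; then
  ipset_loadable=yes
else
  ipset_loadable=no
fi
echo "ip_set module loadable: $ipset_loadable"

# If the kernel config is exposed, grep it for the relevant options.
# Neither file is guaranteed to exist on every system.
for cfg in "/boot/config-$(uname -r)" /proc/config.gz; do
  [ -e "$cfg" ] || continue
  case "$cfg" in
    *.gz) zcat "$cfg" | grep -E '^CONFIG_(IP_SET|NFNETLINK)=' ;;
    *)    grep -E '^CONFIG_(IP_SET|NFNETLINK)=' "$cfg" ;;
  esac
done
```

If `CONFIG_IP_SET` is missing, no userspace ipset version will work; the kernel would need to be rebuilt with that option, or the feature that needs ipset avoided.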
About this issue
- State: closed
- Created 5 years ago
- Reactions: 3
- Comments: 19 (9 by maintainers)
Got a workaround with the server start option `--disable-network-policy`. ipset is used by the NetworkPolicy function, which can be disabled in most scenarios.
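When k3s was set up via install.sh, the flag can be applied directly on the command line or baked into the systemd unit with a drop-in. The paths below are the install script's defaults; adjust if yours differ. Note that with this flag the network-policy controller is not deployed, so NetworkPolicy objects are ignored.

```
# Run directly:
#   k3s server --disable-network-policy
#
# Or as a systemd drop-in, e.g. /etc/systemd/system/k3s.service.d/override.conf:
[Service]
ExecStart=
ExecStart=/usr/local/bin/k3s server --disable-network-policy
```

After adding the drop-in, reload and restart: `systemctl daemon-reload && systemctl restart k3s`.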