cilium: Unable to use Cilium in a k3s env

Is there an existing issue for this?

  • I have searched the existing issues

What happened?

Unable to configure Cilium as a kube-proxy replacement on a k3s environment with 1 master + 3 agents. The Cilium DaemonSet pods just fall into CrashLoopBackOff with the following error:

level=fatal msg="Error while creating daemon" error="error while initializing daemon: failed while reinitializing datapath: failed to setup base devices in mode tunnel: invalid argument" subsys=daemon

Current config:

---
debug:
  enabled: true

k8sServiceHost: 172.16.68.65
k8sServicePort: 6443

devices: eth0
enableRuntimeDeviceDetection: true

encryption:
  enabled: false
  nodeEncryption: false

ipam:
  mode: "cluster-pool"
  operator:
    clusterPoolIPv4PodCIDR: "10.69.0.0/16"
    clusterPoolIPv4MaskSize: 24

kubeProxyReplacement: "strict"
kubeProxyReplacementHealthzBindAddr: "[::]:10256"

operator:
  replicas: 1
  rollOutPods: false
  nodeSelector:
    node-role.kubernetes.io/master: "true"
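
For reference, with kubeProxyReplacement set to strict, the k3s server is also expected to run without its bundled flannel, network policy controller, and kube-proxy. A minimal sketch of the server flags (flag names taken from the k3s docs, not the exact install command used here):

k3s server \
  --flannel-backend=none \
  --disable-network-policy \
  --disable-kube-proxy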

Cilium Version

cilium-cli: v0.11.11 compiled with go1.18.3 on linux/amd64
cilium image (default): v1.11.6
cilium image (stable): v1.12.0
cilium image (running): -ci:latest

Kernel Version

Linux redacted 5.18.11-1.el7.elrepo.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Jul 12 09:44:11 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux

Kubernetes Version

WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2+k3s2", GitCommit:"a237260237b549b90dd3aae449de09231caf1351", GitTreeState:"clean", BuildDate:"2022-07-06T20:08:22Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2+k3s2", GitCommit:"a237260237b549b90dd3aae449de09231caf1351", GitTreeState:"clean", BuildDate:"2022-07-06T20:08:22Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}

Sysdump

🔍 Collecting sysdump with cilium-cli version: v0.11.11, args: [-n cilium sysdump]
🔍 Collecting Kubernetes nodes
🔍 Collect Kubernetes nodes
🔍 Collecting Kubernetes events
🔍 Collect Kubernetes version
🔍 Collecting Kubernetes pods
🔍 Collecting Kubernetes services
🔍 Collecting Kubernetes pods summary
🔍 Collecting Kubernetes endpoints
🔍 Collecting Kubernetes network policies
🔍 Collecting Cilium cluster-wide network policies
🔍 Collecting Kubernetes namespaces
🔍 Collecting Cilium network policies
🔍 Collecting Cilium local redirect policies
🔍 Collecting Cilium endpoints
🔍 Collecting Cilium egress NAT policies
🔍 Collecting Cilium nodes
🔍 Collecting Cilium identities
🔍 Collecting Ingresses
🔍 Collecting CiliumClusterwideEnvoyConfigs
🔍 Collecting CiliumEnvoyConfigs
🔍 Collecting Cilium etcd secret
🔍 Collecting the Cilium configuration
🔍 Collecting the Cilium daemonset(s)
🔍 Collecting the Hubble daemonset
🔍 Collecting the Hubble Relay configuration
🔍 Collecting the Hubble Relay deployment
🔍 Collecting the Hubble UI deployment
🔍 Collecting the Cilium operator deployment
🔍 Collecting the 'clustermesh-apiserver' deployment
⚠️ Deployment "hubble-relay" not found in namespace "kube-system" - this is expected if Hubble is not enabled
🔍 Collecting the CNI configuration files from Cilium pods
🔍 Collecting the CNI configmap
⚠️ Deployment "clustermesh-apiserver" not found in namespace "kube-system" - this is expected if 'clustermesh-apiserver' isn't enabled
🔍 Collecting gops stats from Cilium pods
🔍 Collecting gops stats from Hubble pods
🔍 Collecting gops stats from Hubble Relay pods
🔍 Collecting 'cilium-bugtool' output from Cilium pods
⚠️ Deployment "hubble-ui" not found in namespace "kube-system" - this is expected if Hubble UI is not enabled
🔍 Collecting logs from Cilium pods
🔍 Collecting logs from Cilium operator pods
🔍 Collecting logs from 'clustermesh-apiserver' pods
🔍 Collecting logs from Hubble pods
🔍 Collecting logs from Hubble Relay pods
🔍 Collecting logs from Hubble UI pods
🔍 Collecting platform-specific data
🔍 Collecting Hubble flows from Cilium pods
⚠️ The following tasks failed, the sysdump may be incomplete:
⚠️ [11] Collecting Cilium egress NAT policies: failed to collect Cilium egress NAT policies: the server could not find the requested resource (get ciliumegressnatpolicies.cilium.io)
⚠️ [12] Collecting Cilium local redirect policies: failed to collect Cilium local redirect policies: the server could not find the requested resource (get ciliumlocalredirectpolicies.cilium.io)
⚠️ [17] Collecting CiliumClusterwideEnvoyConfigs: failed to collect CiliumClusterwideEnvoyConfigs: the server could not find the requested resource (get ciliumclusterwideenvoyconfigs.cilium.io)
⚠️ [18] Collecting CiliumEnvoyConfigs: failed to collect CiliumEnvoyConfigs: the server could not find the requested resource (get ciliumenvoyconfigs.cilium.io)
⚠️ [20] Collecting the Cilium configuration: failed to collect the Cilium configuration: configmaps "cilium-config" not found
⚠️ [21] Collecting the Cilium daemonset(s): failed to find Cilium daemonsets with label "k8s-app=cilium" in namespace "kube-system"
⚠️ [23] Collecting the Hubble Relay configuration: failed to collect the Hubble Relay configuration: configmaps "hubble-relay-config" not found
⚠️ [26] Collecting the Cilium operator deployment: failed to collect the Cilium operator deployment: deployments.apps "cilium-operator" not found
⚠️ Please note that depending on your Cilium version and installation options, this may be expected

Relevant log output

level=fatal msg="Error while creating daemon" error="error while initializing daemon: failed while reinitializing datapath: failed to setup base devices in mode tunnel: invalid argument" subsys=daemon

Anything else?

Deployed with Helm, in a custom namespace (cilium).
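
Roughly like this (a sketch; the chart repo URL and release name are assumed, and values.yaml is the config shown above):

helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium \
  --namespace cilium --create-namespace \
  -f values.yaml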

Code of Conduct

  • I agree to follow this project’s Code of Conduct

About this issue

  • State: closed
  • Created 2 years ago
  • Comments: 16 (10 by maintainers)

Most upvoted comments

Awesome! Now we are able to bind on lo for MGMT stuff! I think we can close this issue since the problem is very specific to our infrastructure.

The cilium agent has an --mtu argument (see https://docs.cilium.io/en/stable/cmdref/cilium-agent):

--mtu int   Overwrite auto-detected MTU of underlying network

It doesn't seem to be supported by the Helm install, though, but you can change the cilium-config configmap like this:

kubectl edit configmap/cilium-config -n kube-system

and add, for example:

mtu: "1500"

then

kubectl rollout restart ds/cilium -n kube-system

I tried it, and it works.
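
If you prefer to do it non-interactively, something like this should be equivalent (a sketch; it assumes the same kube-system namespace and the standard cilium-config / cilium object names):

kubectl -n kube-system patch configmap cilium-config \
  --type merge -p '{"data":{"mtu":"1500"}}'
kubectl -n kube-system rollout restart ds/cilium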