cilium: Cilium segfaults with --enable-l7-proxy=false

Is there an existing issue for this?

  • I have searched the existing issues

What happened?

A user reports that the Cilium agent crashed with a nil-pointer segfault when the L7 proxy was disabled (`--enable-l7-proxy=false`).

Cilium Version

1.11.1

Kernel Version

n/a

Kubernetes Version

n/a

Sysdump

No response

Relevant log output

[cilium-ptl2k cilium-agent] panic: runtime error: invalid memory address or nil pointer dereference
[cilium-ptl2k cilium-agent] [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1f6243c]
[cilium-ptl2k cilium-agent]
[cilium-ptl2k cilium-agent] goroutine 887 [running]:
[cilium-ptl2k cilium-agent] github.com/cilium/cilium/pkg/proxy.(*Proxy).CreateOrUpdateRedirect(0x0, {0x2c53a30, 0xc00013bf00}, {0xc0044ae750, 0x4}, {0x2cb2e30, 0xc0009ac380}, 0xc0020f3fc0)

Anything else?

No response

Code of Conduct

  • I agree to follow this project’s Code of Conduct

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Comments: 17 (7 by maintainers)

Most upvoted comments

No L7 policies:

root@k3s01:~# kubectl get ciliumclusterwidenetworkpolicy --all-namespaces -oyaml | grep -i rules
root@k3s01:~# kubectl get ciliumnetworkpolicy --all-namespaces -oyaml | grep -i rules
root@k3s01:~#

@Preisschild Do you have any visibility annotations on your pods? https://docs.cilium.io/en/stable/policy/visibility/
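To answer the question above, a filter like the following can surface pods carrying a proxy-visibility annotation. The annotation key `io.cilium.proxy-visibility` is an assumption based on the 1.11-era docs; verify it against your Cilium version. Against a live cluster the input would come from the `kubectl` command shown in the comment; here we filter a sample of that output to show the shape of a match:

```shell
# Check pods for the Cilium proxy-visibility annotation. The key name,
# io.cilium.proxy-visibility, is what 1.11-era docs use; verify for your version.
# On a live cluster, feed this from:
#   kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.metadata.annotations.io\.cilium\.proxy-visibility}{"\n"}{end}'
# Here we filter sample output of that shape, keeping only annotated pods:
printf 'kube-system/hubble-ui-xyz\t\ndefault/web-0\t<Ingress/80/TCP/HTTP>\n' \
  | awk -F'\t' '$2 != "" {print $1, "=>", $2}'
```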

Simple attempt to fix this: https://github.com/cilium/cilium/pull/19092
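The stack trace shows `CreateOrUpdateRedirect` being called with receiver `0x0`: with the L7 proxy disabled the agent never constructs the proxy object, yet visibility handling still calls methods on it. A minimal Go sketch of the pattern, with hypothetical types (not the actual Cilium code), showing one way a nil-receiver guard can turn the SIGSEGV into a recoverable error:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical stand-in for pkg/proxy.Proxy; not the real Cilium type.
type Proxy struct{}

// In Go, calling a method on a nil pointer receiver is legal; the panic only
// happens when the method dereferences the receiver. Guarding against p == nil
// converts the crash from the stack trace into an ordinary error.
func (p *Proxy) CreateOrUpdateRedirect(name string) error {
	if p == nil {
		return errors.New("L7 proxy is disabled, cannot create redirect for " + name)
	}
	// ...a real implementation would set up the redirect here...
	return nil
}

func main() {
	var p *Proxy // nil, as when the agent runs with --enable-l7-proxy=false
	if err := p.CreateOrUpdateRedirect("visibility"); err != nil {
		fmt.Println("skipping redirect:", err)
	}
}
```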

Deployed via Helm with this values.yaml, single node.

(I intentionally garbled some IPv6 addresses in the copy/paste.)

bpf:
  hostRouting: false
  lbBypassFIBLookup: false
  lbExternalClusterIP: true
  masquerade: false
  tproxy: true
cluster:
  id: 1
  name: brg-k3s
containerRuntime:
  integration: containerd
  socketPath: /var/run/k3s/containerd/containerd.sock
devices: enp1s0f0
enableIPv4Masquerade: false
enableIPv6Masquerade: false
externalIPs:
  enabled: true
extraConfig:
  enable-endpoint-routes: "true"
  enable-local-node-route: "false"
  enable-node-port: "true"
  ipv4-node: 10.13.100.11
  ipv4-service-range: 10.13.208.0/20
  ipv6-node: 2a0e:xxxx::11
  ipv6-service-range: 2a0e:xxxxxxxx::/108
  node-port-algorithm: random
  node-port-mode: dsr
  node-port-acceleration: native
  bpf-lb-acceleration: native
  enable-xdp-prefilter: "true"
  enable-icmp-rules: "true"
fragmentTracking: true
hostPort:
  enabled: "true"
hostServices:
  enabled: "true"
  protocols: tcp,udp
installIptablesRules: true
installNoConntrackIptablesRules: false
ipMasqAgent:
  enabled: false
ipam:
  mode: cluster-pool
  operator:
    clusterPoolIPv4MaskSize: 24
    clusterPoolIPv4PodCIDRList:
    - 10.13.192.0/20
    clusterPoolIPv6MaskSize: 112
    clusterPoolIPv6PodCIDRList:
    - 2a0e:xxxxxx14::/112
    - 2a0e:xxxxxx14::1:0/112
    - 2a0e:xxxxxx14::2:0/112
ipv4:
  enabled: true
ipv6:
  enabled: true
k8sServiceHost: "api.k3s.local"
k8sServicePort: "6443"
kubeProxyReplacement: strict
l2NeighDiscovery:
  enabled: false
l7Proxy: true
loadBalancer:
  algorithm: random
  mode: dsr
localRedirectPolicy: false
nodePort:
  enabled: "true"
operator:
  replicas: 1
sessionAffinity: "true"
tunnel: disabled
image:
  repository: "quay.io/cilium/cilium"
  tag: "v1.11.1"
  useDigest: false
hubble:
  listenAddress: ":4244"
  relay:
    enabled: true
  ui:
    enabled: true

The “full” stack trace, with some preceding context:

[cilium-ptl2k cilium-agent] level=info msg="Restored endpoint" endpointID=3462 ipAddr="[10.13.192.217 2a0e:97c0:250:14::b6bb]" subsys=endpoint
[cilium-ptl2k cilium-agent] level=info msg="Rewrote endpoint BPF program" containerID=fa6a2b2bb9 datapathPolicyRevision=0 desiredPolicyRevision=65 endpointID=2189 identity=92217 ipv4=10.13.192.232 ipv6="2a0e:97c0:250:14::8ee7" k8sPodName=kube-system/hubble-ui-7c6789876c-cz7qq subsys=endpoint
[hubble-relay-85664d47c4-8x2pb] level=warning msg="Error while receiving peer change notification; will try again after the timeout has expired" connection timeout=30s error="rpc error: code = Unavailable desc = error reading from server: EOF" subsys=hubble-relay
[cilium-ptl2k cilium-agent] level=info msg="Restored endpoint" endpointID=2189 ipAddr="[10.13.192.232 2a0e:97c0:250:14::8ee7]" subsys=endpoint
[cilium-ptl2k cilium-agent] level=info msg="Rewrote endpoint BPF program" containerID=29402ba7d7 datapathPolicyRevision=0 desiredPolicyRevision=65 endpointID=740 identity=66382 ipv4=10.13.192.24 ipv6="2a0e:97c0:250:14::7927" k8sPodName=kube-system/hubble-relay-85664d47c4-8x2pb subsys=endpoint
[cilium-ptl2k cilium-agent] level=info msg="Restored endpoint" endpointID=740 ipAddr="[10.13.192.24 2a0e:97c0:250:14::7927]" subsys=endpoint
[cilium-ptl2k cilium-agent] level=info msg="Rewrote endpoint BPF program" containerID=ef2919a30d datapathPolicyRevision=0 desiredPolicyRevision=65 endpointID=311 identity=104767 ipv4=10.13.192.133 ipv6="2a0e:97c0:250:14::3ffb" k8sPodName=dns-recursive/recursive-dns-0 subsys=endpoint
[cilium-ptl2k cilium-agent] level=info msg="Restored endpoint" endpointID=311 ipAddr="[10.13.192.133 2a0e:97c0:250:14::3ffb]" subsys=endpoint
[cilium-ptl2k cilium-agent] panic: runtime error: invalid memory address or nil pointer dereference
[cilium-ptl2k cilium-agent] [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1f6243c]
[cilium-ptl2k cilium-agent]
[cilium-ptl2k cilium-agent] goroutine 887 [running]:
[cilium-ptl2k cilium-agent] github.com/cilium/cilium/pkg/proxy.(*Proxy).CreateOrUpdateRedirect(0x0, {0x2c53a30, 0xc00013bf00}, {0xc0044ae750, 0x4}, {0x2cb2e30, 0xc0009ac380}, 0xc0020f3fc0)
[cilium-ptl2k cilium-agent] 	/go/src/github.com/cilium/cilium/pkg/proxy/proxy.go:388 +0x9c
[cilium-ptl2k cilium-agent] github.com/cilium/cilium/pkg/endpoint.(*Endpoint).addVisibilityRedirects(0xc0009ac380, 0x1, 0x27db1bd, 0x6)
[cilium-ptl2k cilium-agent] 	/go/src/github.com/cilium/cilium/pkg/endpoint/bpf.go:384 +0x439
[cilium-ptl2k cilium-agent] github.com/cilium/cilium/pkg/endpoint.(*Endpoint).addNewRedirects(0xc0009ac380, 0x4179ab)
[cilium-ptl2k cilium-agent] 	/go/src/github.com/cilium/cilium/pkg/endpoint/bpf.go:470 +0x3c7
[cilium-ptl2k cilium-agent] github.com/cilium/cilium/pkg/endpoint.(*Endpoint).runPreCompilationSteps(0xc0009ac380, 0xc0011b9c00)
[cilium-ptl2k cilium-agent] 	/go/src/github.com/cilium/cilium/pkg/endpoint/bpf.go:834 +0x447
[cilium-ptl2k cilium-agent] github.com/cilium/cilium/pkg/endpoint.(*Endpoint).regenerateBPF(0xc0009ac380, 0xc0011b9c00)
[cilium-ptl2k cilium-agent] 	/go/src/github.com/cilium/cilium/pkg/endpoint/bpf.go:584 +0x19d
[cilium-ptl2k cilium-agent] github.com/cilium/cilium/pkg/endpoint.(*Endpoint).regenerate(0xc0009ac380, 0xc0011b9c00)
[cilium-ptl2k cilium-agent] 	/go/src/github.com/cilium/cilium/pkg/endpoint/policy.go:405 +0x7b3
[cilium-ptl2k cilium-agent] github.com/cilium/cilium/pkg/endpoint.(*EndpointRegenerationEvent).Handle(0xc0011b16f0, 0x85)
[cilium-ptl2k cilium-agent] 	/go/src/github.com/cilium/cilium/pkg/endpoint/events.go:53 +0x32c
[cilium-ptl2k cilium-agent] github.com/cilium/cilium/pkg/eventqueue.(*EventQueue).run.func1()
[cilium-ptl2k cilium-agent] 	/go/src/github.com/cilium/cilium/pkg/eventqueue/eventqueue.go:245 +0x13b
[cilium-ptl2k cilium-agent] sync.(*Once).doSlow(0xffffffffffffff38, 0x0)
[cilium-ptl2k cilium-agent] 	/usr/local/go/src/sync/once.go:68 +0xd2
[cilium-ptl2k cilium-agent] sync.(*Once).Do(...)
[cilium-ptl2k cilium-agent] 	/usr/local/go/src/sync/once.go:59
[cilium-ptl2k cilium-agent] github.com/cilium/cilium/pkg/eventqueue.(*EventQueue).run(0x0)
[cilium-ptl2k cilium-agent] 	/go/src/github.com/cilium/cilium/pkg/eventqueue/eventqueue.go:233 +0x45
[cilium-ptl2k cilium-agent] created by github.com/cilium/cilium/pkg/eventqueue.(*EventQueue).Run
[cilium-ptl2k cilium-agent] 	/go/src/github.com/cilium/cilium/pkg/eventqueue/eventqueue.go:229 +0x7b