cilium: Cilium agent stopped working after some time
Bug report
General Information
- Cilium version: v1.10.4
- Kernel version: 5.4.0-6-cloud-amd64 #1 SMP Debian 5.4.93-1 (2021-02-09) x86_64 GNU/Linux
- Orchestration system version in use: kubectl v1.21.0
- Generate and upload a system zip: Since `kubectl exec` and `kubectl logs` are not working due to the cilium issue, the output in the system zip is of limited use: cilium-sysdump-20211025-140612.zip
Bug
We run cilium in our kubernetes clusters. After a couple of days or weeks the metrics-server and coredns pods start crash looping. We found that a restart of the cilium-agent solves the problem for us.
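For anyone hitting the same symptom, this is roughly how we trigger the restart (a sketch assuming the default `k8s-app=cilium` label and `kube-system` namespace; the node name is a placeholder):

```sh
# delete the cilium agent pod on the affected node; the DaemonSet recreates it
kubectl -n kube-system delete pod \
  -l k8s-app=cilium \
  --field-selector spec.nodeName=<affected-node>
```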
After further investigation we saw:
```
# cilium status
KVStore:                 Ok   Disabled
Kubernetes:              Ok   1.20 (v1.20.11) [linux/amd64]
Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:    Probe   [ens4 10.250.0.23 (Direct Routing)]
Cilium:                  Ok   1.10.4 (v1.10.4-2a46fd6)
NodeMonitor:             Listening for events on 2 CPUs with 64x4096 of shared memory
Cilium health daemon:    Ok
IPAM:                    IPv4: 7/254 allocated from 100.96.2.0/24,
BandwidthManager:        Disabled
Host Routing:            Legacy
Masquerading:            BPF   [ens4]   100.96.2.0/24 [IPv4: Enabled, IPv6: Disabled]
Controller Status:       41/42 healthy
  Name               Last success    Last error   Count   Message
  cilium-health-ep   135h45m8s ago   5m59s ago    958     Get "http://100.96.2.165:4240/hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Proxy Status:            OK, ip 100.96.2.101, 0 redirects active on ports 10000-20000
Hubble:                  Ok   Current/Max Flows: 4095/4095 (100.00%), Flows/s: 2.59   Metrics: Ok
Encryption:              Disabled
Cluster health:          1/2 reachable   (2021-10-25T12:13:16Z)
  Name                                                              IP            Node        Endpoints
  shoot--core--cli-cilium-worker-vezh0-z1-5c5f5-p2kf9 (localhost)   10.250.0.23   reachable   unreachable
```
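The failing `cilium-health-ep` probe can be reproduced by hand from the affected node (a sketch; IP and port 4240 are taken from the error message above, which is cilium-health's HTTP endpoint):

```sh
# same request the health controller makes; this times out on the affected node
curl -sS --max-time 5 http://100.96.2.165:4240/hello
```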
```
# cilium-health status
Probe time:   2021-10-25T12:11:16Z
Nodes:
  shoot--core--cli-cilium-worker-vezh0-z1-5c5f5-p2kf9 (localhost):
    Host connectivity to 10.250.0.23:
      ICMP to stack:   OK, RTT=252.64µs
      HTTP to agent:   OK, RTT=236.72µs
    Endpoint connectivity to 100.96.2.165:
      ICMP to stack:   OK, RTT=290.486µs
      HTTP to agent:   Get "http://100.96.2.165:4240/hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  shoot--core--cli-cilium-worker-vezh0-z1-5c5f5-qgggc:
    Host connectivity to 10.250.0.24:
      ICMP to stack:   OK, RTT=1.373656ms
      HTTP to agent:   OK, RTT=575.961µs
    Endpoint connectivity to 100.96.3.182:
      ICMP to stack:   OK, RTT=437.056µs
      HTTP to agent:   OK, RTT=1.224137ms
```
On the affected node, pods cannot communicate locally with each other, while cross-node pod communication still works. If both nodes are affected, local pod communication fails on each node while cross-node pod communication keeps working. Since `kubectl exec` is broken on affected nodes, the symptom can be checked from the node itself, as sketched below.
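A minimal sketch of that check, entering a pod's network namespace via its container PID so no `kubectl exec` is needed (container ID, pod IP and port are placeholders):

```sh
# resolve the container's PID on the affected node (containerd/CRI-O via crictl)
PID=$(crictl inspect --output go-template --template '{{.info.pid}}' <container-id>)

# curl a neighbour pod on the same node from inside that pod's netns;
# this times out on an affected node while cross-node pod IPs still answer
nsenter -t "$PID" -n curl -sS --max-time 5 http://<other-local-pod-ip>:80
```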
This problem has been plaguing us since at least cilium v1.9.x.
Our readiness/liveness probe is not failing, although we would expect it to, given what the cilium health status shows:
```yaml
readinessProbe:
  httpGet:
    host: '127.0.0.1'
    path: /healthz
    port: 12345
    scheme: HTTP
    httpHeaders:
    - name: "brief"
      value: "true"
```
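The same endpoint the probe hits can be queried by hand from the node; in our case it keeps reporting healthy even while the node is broken (a sketch; host, port and header are taken from the probe spec above):

```sh
# mirror the kubelet readiness probe manually
curl -sS -H "brief: true" http://127.0.0.1:12345/healthz
```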
Cilium is configured with the following configmap:
```yaml
data:
  auto-direct-node-routes: "false"
  bpf-ct-global-any-max: "262144"
  bpf-ct-global-tcp-max: "524288"
  bpf-lb-external-clusterip: "false"
  bpf-lb-map-max: "65536"
  bpf-map-dynamic-size-ratio: "0.0025"
  bpf-nat-global-max: "524288"
  bpf-policy-map-max: "16384"
  cgroup-root: /run/cilium/cgroupv2
  cilium-endpoint-gc-interval: 5m0s
  cluster-name: default
  cluster-pool-ipv4-cidr: 100.96.0.0/11
  cluster-pool-ipv4-mask-size: "24"
  debug: "false"
  disable-cnp-status-updates: "true"
  enable-api-rate-limit: "false"
  enable-auto-protect-node-port-range: ""
  enable-bpf-clock-probe: "true"
  enable-bpf-masquerade: "true"
  enable-endpoint-health-checking: "true"
  enable-hubble: "true"
  enable-ipv4: "true"
  enable-ipv4-masquerade: "true"
  enable-ipv6: "false"
  enable-ipv6-masquerade: "true"
  enable-metrics: "true"
  enable-policy: default
  enable-remote-node-identity: "true"
  enable-session-affinity: "true"
  enable-well-known-identities: "false"
  enable-xt-socket-fallback: "true"
  hubble-disable-tls: "false"
  hubble-listen-address: :4244
  hubble-metrics: dns drop tcp flow port-distribution icmp http
  hubble-metrics-server: :9091
  hubble-socket-path: /var/run/cilium/hubble.sock
  hubble-tls-auto-enabled: "true"
  hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
  hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
  hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
  identity-allocation-mode: crd
  install-iptables-rules: "true"
  install-no-conntrack-iptables-rules: "false"
  ipam: cluster-pool
  kube-proxy-replacement: probe
  monitor-aggregation: medium
  monitor-aggregation-flags: all
  monitor-aggregation-interval: 5s
  node-port-bind-protection: ""
  operator-api-serve-addr: 127.0.0.1:9234
  operator-prometheus-serve-addr: :6942
  preallocate-bpf-maps: "false"
  prometheus-serve-addr: :9090
  sidecar-istio-proxy-image: cilium/istio_proxy
  tofqdns-enable-poller: "false"
  tunnel: vxlan
```
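To confirm what the running agent actually picked up from this configmap, the daemon's effective configuration can be dumped from a cilium pod (a sketch; since `kubectl exec` does not work on affected nodes, run it against a healthy one, with the pod name as a placeholder):

```sh
kubectl -n kube-system exec <cilium-pod> -- cilium config
```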
/cc: @scheererj
Meanwhile we have tried more debugging on this: we tried pwru and hubble observe, but the output doesn't really help us a lot. Does it give anybody else an idea what might be going wrong?
This is the working case:

```
root@shoot--ringdev--cilium-test-worker-ryg26-z1-5b6bb-t7n86:/home/cilium# hubble observe --ip 52.47.209.216
Feb 14 15:17:33.622: default/nginx:43946 -> 52.47.209.216:80 to-stack FORWARDED (TCP Flags: SYN)
Feb 14 15:17:33.634: default/nginx:43946 <- 52.47.209.216:80 to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Feb 14 15:17:33.634: default/nginx:43946 -> 52.47.209.216:80 to-stack FORWARDED (TCP Flags: ACK)
Feb 14 15:17:41.601: default/nginx:43946 -> 52.47.209.216:80 to-stack FORWARDED (TCP Flags: ACK, PSH)
Feb 14 15:17:41.613: default/nginx:43946 <- 52.47.209.216:80 to-endpoint FORWARDED (TCP Flags: ACK)
Feb 14 15:17:41.613: default/nginx:43946 <- 52.47.209.216:80 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Feb 14 15:17:41.613: default/nginx:43946 <- 52.47.209.216:80 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Feb 14 15:17:41.613: default/nginx:43946 -> 52.47.209.216:80 to-stack FORWARDED (TCP Flags: ACK, FIN)
```
This is the broken case:

```
root@shoot--ringdev--cilium-test-worker-ryg26-z1-5b6bb-hzcv7:/home/cilium# hubble observe --ip 52.47.209.216
Feb 16 08:05:53.405: default/nginx:33640 -> 52.47.209.216:80 to-stack FORWARDED (TCP Flags: SYN)
Feb 16 08:06:08.541: default/nginx:33722 -> 52.47.209.216:80 to-stack FORWARDED (TCP Flags: SYN)
Feb 16 08:06:28.651: default/nginx:33808 -> 52.47.209.216:80 to-stack FORWARDED (TCP Flags: SYN)
```
It seems as if the packet never reaches the "to-endpoint" stage. Why could that be? Is there any further data we could provide to help with this?
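The pwru traces below were captured with an invocation along these lines (a sketch, not the exact command; pwru's filter flags have changed between releases, so check `pwru --help` for the spelling in your version):

```sh
# trace kernel skb functions for TCP packets towards the test IP
pwru --filter-dst-ip 52.47.209.216 --filter-proto tcp
```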
We found the culprit. It seems `net.ipv4.conf.all.rp_filter` was reset to `1` by `systemd-sysctl.service` after some time of operation, which does not work at all with cilium (a workaround sketch follows after the traces below).

We had actually done the same working/broken comparison with pwru:
Output of a working connection:

```
markus@ubuntu:~/cilium$ cat pwru_output_working_outbound_connection.txt
SKB                PROCESS    FUNC                          TIMESTAMP
0xffff9e13d17588e0 [<empty>]  __ip_local_out                3602471645673
0xffff9e13d17588e0 [<empty>]  nf_hook_slow                  3602471658789
0xffff9e13d17588e0 [<empty>]  selinux_ipv4_output           3602471661603
0xffff9e13d17588e0 [<empty>]  ip_output                     3602471663809
0xffff9e13d17588e0 [<empty>]  nf_hook_slow                  3602471665399
0xffff9e13d17588e0 [<empty>]  selinux_ipv4_postroute        3602471666912
0xffff9e13d17588e0 [<empty>]  selinux_ip_postroute          3602471669128
0xffff9e13d17588e0 [<empty>]  ip_finish_output              3602471671434
0xffff9e13d17588e0 [<empty>]  __cgroup_bpf_run_filter_skb   3602471673878
0xffff9e13d17588e0 [<empty>]  __ip_finish_output            3602471675314
0xffff9e13d17588e0 [<empty>]  ip_finish_output2             3602471678802
0xffff9e13d17588e0 [<empty>]  dev_queue_xmit                3602471680466
0xffff9e13d17588e0 [<empty>]  __dev_queue_xmit              3602471682887
0xffff9e13d17588e0 [<empty>]  netdev_core_pick_tx           3602471684916
0xffff9e13d17588e0 [<empty>]  validate_xmit_skb             3602471686712
0xffff9e13d17588e0 [<empty>]  netif_skb_features            3602471688271
0xffff9e13d17588e0 [<empty>]  passthru_features_check       3602471690617
0xffff9e13d17588e0 [<empty>]  skb_network_protocol          3602471692836
0xffff9e13d17588e0 [<empty>]  skb_csum_hwoffload_help       3602471698711
0xffff9e13d17588e0 [<empty>]  validate_xmit_xfrm            3602471700974
0xffff9e13d17588e0 [<empty>]  dev_hard_start_xmit           3602471702556
0xffff9e13d17588e0 [<empty>]  __dev_kfree_skb_any           3602471851008
0xffff9e13d17588e0 [<empty>]  consume_skb                   3602471859514
0xffff9e13d17588e0 [<empty>]  skb_release_head_state        3602471860912
0xffff9e13d17588e0 [<empty>]  skb_release_data              3602471862607
0xffff9e13d17588e0 [<empty>]  kfree_skbmem                  3602471864289
0xffff9e13d17588e0 [<empty>]  __dev_forward_skb             3602471704729
0xffff9e13d17588e0 [<empty>]  skb_scrub_packet              3602471706317
0xffff9e13d17588e0 [<empty>]  eth_type_trans                3602471707820
0xffff9e13d17588e0 [<empty>]  netif_rx                      3602471709832
0xffff9e13d17588e0 [<empty>]  netif_rx_internal             3602471711471
0xffff9e13d17588e0 [<empty>]  enqueue_to_backlog            3602471714559
0xffff9e13d17588e0 [<empty>]  __netif_receive_skb           3602471718499
0xffff9e13d17588e0 [<empty>]  __netif_receive_skb_one_core  3602471720247
0xffff9e13d17588e0 [<empty>]  tcf_classify_ingress          3602471722038
0xffff9e13d17588e0 [<empty>]  skb_ensure_writable           3602471735724
0xffff9e13d17588e0 [<empty>]  skb_ensure_writable           3602471737304
2022/02/14 14:42:19 Perf event ring buffer full, dropped 42 samples
0xffff9e13d17588e0 [<empty>]  netdev_core_pick_tx           3602471830716
2022/02/14 14:42:19 Perf event ring buffer full, dropped 7 samples
0xffff9e13d17588e0 [<empty>]  skb_csum_hwoffload_help       3602471837543
0xffff9e14d7aae200 [<empty>]  __ip_local_out                3602483644372
0xffff9e14d7aae200 [<empty>]  nf_hook_slow                  3602483648945
0xffff9e14d7aae200 [<empty>]  selinux_ipv4_output           3602483651042
0xffff9e14d7aae200 [<empty>]  ip_output                     3602483652429
0xffff9e14d7aae200 [<empty>]  nf_hook_slow                  3602483653611
0xffff9e14d7aae200 [<empty>]  selinux_ipv4_postroute        3602483654861
0xffff9e14d7aae200 [<empty>]  selinux_ip_postroute          3602483656098
0xffff9e14d7aae200 [<empty>]  ip_finish_output              3602483657423
0xffff9e14d7aae200 [<empty>]  __cgroup_bpf_run_filter_skb   3602483658583
0xffff9e14d7aae200 [<empty>]  __ip_finish_output            3602483659740
0xffff9e14d7aae200 [<empty>]  ip_finish_output2             3602483660930
0xffff9e14d7aae200 [<empty>]  dev_queue_xmit                3602483662167
0xffff9e14d7aae200 [<empty>]  __dev_queue_xmit              3602483663332
0xffff9e14d7aae200 [<empty>]  netdev_core_pick_tx           3602483664402
0xffff9e14d7aae200 [<empty>]  validate_xmit_skb             3602483665587
0xffff9e14d7aae200 [<empty>]  netif_skb_features            3602483666574
0xffff9e14d7aae200 [<empty>]  passthru_features_check       3602483667726
0xffff9e14d7aae200 [<empty>]  skb_network_protocol          3602483668928
0xffff9e14d7aae200 [<empty>]  skb_csum_hwoffload_help       3602483670452
0xffff9e14d7aae200 [<empty>]  validate_xmit_xfrm            3602483671608
0xffff9e14d7aae200 [<empty>]  dev_hard_start_xmit           3602483672751
0xffff9e14d7aae200 [<empty>]  __dev_forward_skb             3602483673893
0xffff9e14d7aae200 [<empty>]  skb_scrub_packet              3602483675098
0xffff9e14d7aae200 [<empty>]  eth_type_trans                3602483676092
0xffff9e14d7aae200 [<empty>]  netif_rx                      3602483677645
0xffff9e14d7aae200 [<empty>]  netif_rx_internal             3602483678745
0xffff9e14d7aae200 [<empty>]  enqueue_to_backlog            3602483679967
0xffff9e14d7aae200 [<empty>]  __netif_receive_skb           3602483683470
0xffff9e14d7aae200 [<empty>]  __netif_receive_skb_one_core  3602483684654
0xffff9e14d7aae200 [<empty>]  tcf_classify_ingress          3602483686110
0xffff9e14d7aae200 [<empty>]  skb_ensure_writable           3602483690076
```
Output of a broken connection attempt:

```
markus@ubuntu:~/cilium$ cat pwru_output_broken_outbound_connection.txt
2022/02/16 08:06:28 Perf event ring buffer full, dropped 11 samples
0xffff8fcab4c026e0 [<empty>]  __ip_local_out                78702774685074
0xffff8fcab4c026e0 [<empty>]  nf_hook_slow                  78702774697740
0xffff8fcab4c026e0 [<empty>]  selinux_ipv4_output           78702774700184
0xffff8fcab4c026e0 [<empty>]  ip_output                     78702774701848
0xffff8fcab4c026e0 [<empty>]  nf_hook_slow                  78702774703291
0xffff8fcab4c026e0 [<empty>]  selinux_ipv4_postroute        78702774704900
0xffff8fcab4c026e0 [<empty>]  selinux_ip_postroute          78702774706517
0xffff8fcab4c026e0 [<empty>]  ip_finish_output              78702774708397
0xffff8fcab4c026e0 [<empty>]  __cgroup_bpf_run_filter_skb   78702774709844
0xffff8fcab4c026e0 [<empty>]  __ip_finish_output            78702774711662
0xffff8fcab4c026e0 [<empty>]  ip_finish_output2             78702774713333
0xffff8fcab4c026e0 [<empty>]  dev_queue_xmit                78702774714919
0xffff8fcab4c026e0 [<empty>]  __dev_queue_xmit              78702774716365
0xffff8fcab4c026e0 [<empty>]  netdev_core_pick_tx           78702774718311
0xffff8fcab4c026e0 [<empty>]  validate_xmit_skb             78702774720295
0xffff8fcab4c026e0 [<empty>]  netif_skb_features            78702774722023
0xffff8fcab4c026e0 [<empty>]  passthru_features_check       78702774723588
0xffff8fcab4c026e0 [<empty>]  skb_network_protocol          78702774725064
0xffff8fcab4c026e0 [<empty>]  skb_csum_hwoffload_help       78702774726529
0xffff8fcab4c026e0 [<empty>]  validate_xmit_xfrm            78702774728325
0xffff8fcab4c026e0 [<empty>]  dev_hard_start_xmit           78702774729609
0xffff8fcab4c026e0 [<empty>]  __dev_forward_skb             78702774731909
0xffff8fcab4c026e0 [<empty>]  skb_scrub_packet              78702774733474
0xffff8fcab4c026e0 [<empty>]  eth_type_trans                78702774734768
0xffff8fcab4c026e0 [<empty>]  netif_rx                      78702774736430
0xffff8fcab4c026e0 [<empty>]  netif_rx_internal             78702774737516
0xffff8fcab4c026e0 [<empty>]  enqueue_to_backlog            78702774738771
0xffff8fcab4c026e0 [<empty>]  __netif_receive_skb           78702774741134
0xffff8fcab4c026e0 [<empty>]  __netif_receive_skb_one_core  78702774742486
0xffff8fcab4c026e0 [<empty>]  tcf_classify_ingress          78702774744510
0xffff8fcab4c026e0 [<empty>]  skb_ensure_writable           78702774752333
0xffff8fcab4c026e0 [<empty>]  skb_ensure_writable           78702774753518
2022/02/16 08:06:28 Perf event ring buffer full, dropped 16 samples
0xffff8fcab4c026e0 [<empty>]  kfree_skb                     78702777131253
0xffff8fcab4c026e0 [<empty>]  skb_release_head_state        78702777142716
0xffff8fcab4c026e0 [<empty>]  skb_release_data              78702777145653
0xffff8fcab4c026e0 [<empty>]  kfree_skbmem                  78702777146907
0xffff8fcab4c026e0 [<empty>]  __copy_skb_header             78703803562832
0xffff8fcab4c026e0 [<empty>]  __ip_local_out                78703803576904
0xffff8fcab4c026e0 [<empty>]  nf_hook_slow                  78703803578646
0xffff8fcab4c026e0 [<empty>]  selinux_ipv4_output           78703803580584
0xffff8fcab4c026e0 [<empty>]  ip_output                     78703803581962
0xffff8fcab4c026e0 [<empty>]  nf_hook_slow                  78703803583167
0xffff8fcab4c026e0 [<empty>]  selinux_ipv4_postroute        78703803584437
0xffff8fcab4c026e0 [<empty>]  selinux_ip_postroute          78703803585978
0xffff8fcab4c026e0 [<empty>]  ip_finish_output              78703803587516
0xffff8fcab4c026e0 [<empty>]  __cgroup_bpf_run_filter_skb   78703803588657
0xffff8fcab4c026e0 [<empty>]  __ip_finish_output            78703803590579
0xffff8fcab4c026e0 [<empty>]  ip_finish_output2             78703803592456
0xffff8fcab4c026e0 [<empty>]  dev_queue_xmit                78703803593913
0xffff8fcab4c026e0 [<empty>]  __dev_queue_xmit              78703803595241
0xffff8fcab4c026e0 [<empty>]  netdev_core_pick_tx           78703803597137
0xffff8fcab4c026e0 [<empty>]  validate_xmit_skb             78703803599156
0xffff8fcab4c026e0 [<empty>]  netif_skb_features            78703803600723
0xffff8fcab4c026e0 [<empty>]  passthru_features_check       78703803602427
0xffff8fcab4c026e0 [<empty>]  skb_network_protocol          78703803603944
0xffff8fcab4c026e0 [<empty>]  skb_csum_hwoffload_help       78703803605583
0xffff8fcab4c026e0 [<empty>]  validate_xmit_xfrm            78703803607097
0xffff8fcab4c026e0 [<empty>]  dev_hard_start_xmit           78703803608344
0xffff8fcab4c026e0 [<empty>]  __dev_forward_skb             78703803610329
0xffff8fcab4c026e0 [<empty>]  skb_scrub_packet              78703803611829
0xffff8fcab4c026e0 [<empty>]  eth_type_trans                78703803613067
0xffff8fcab4c026e0 [<empty>]  netif_rx                      78703803614532
0xffff8fcab4c026e0 [<empty>]  netif_rx_internal             78703803615890
0xffff8fcab4c026e0 [<empty>]  enqueue_to_backlog            78703803617137
0xffff8fcab4c026e0 [<empty>]  __netif_receive_skb           78703803650643
0xffff8fcab4c026e0 [<empty>]  __netif_receive_skb_one_core  78703803652129
0xffff8fcab4c026e0 [<empty>]  tcf_classify_ingress          78703803653796
0xffff8fcab4c026e0 [<empty>]  skb_ensure_writable           78703803659963
2022/02/16 08:06:31 Perf event ring buffer full, dropped 21 samples
0xffff8fcab4c026e0 [<empty>]  __copy_skb_header             78705819566712
0xffff8fcab4c026e0 [<empty>]  __ip_local_out                78705819582060
0xffff8fcab4c026e0 [<empty>]  nf_hook_slow                  78705819584112
0xffff8fcab4c026e0 [<empty>]  selinux_ipv4_output           78705819586358
0xffff8fcab4c026e0 [<empty>]  ip_output                     78705819588039
0xffff8fcab4c026e0 [<empty>]  nf_hook_slow                  78705819589310
0xffff8fcab4c026e0 [<empty>]  selinux_ipv4_postroute        78705819590731
0xffff8fcab4c026e0 [<empty>]  selinux_ip_postroute          78705819595327
0xffff8fcab4c026e0 [<empty>]  ip_finish_output              78705819596948
0xffff8fcab4c026e0 [<empty>]  __cgroup_bpf_run_filter_skb   78705819598138
0xffff8fcab4c026e0 [<empty>]  __ip_finish_output            78705819600178
0xffff8fcab4c026e0 [<empty>]  ip_finish_output2             78705819602146
0xffff8fcab4c026e0 [<empty>]  dev_queue_xmit                78705819603798
0xffff8fcab4c026e0 [<empty>]  __dev_queue_xmit              78705819605174
0xffff8fcab4c026e0 [<empty>]  netdev_core_pick_tx           78705819606901
0xffff8fcab4c026e0 [<empty>]  validate_xmit_skb             78705819608447
0xffff8fcab4c026e0 [<empty>]  netif_skb_features            78705819610097
0xffff8fcab4c026e0 [<empty>]  passthru_features_check       78705819611626
0xffff8fcab4c026e0 [<empty>]  skb_network_protocol          78705819613016
0xffff8fcab4c026e0 [<empty>]  skb_csum_hwoffload_help       78705819614542
0xffff8fcab4c026e0 [<empty>]  validate_xmit_xfrm            78705819616349
0xffff8fcab4c026e0 [<empty>]  dev_hard_start_xmit           78705819617648
0xffff8fcab4c026e0 [<empty>]  __dev_forward_skb             78705819619339
0xffff8fcab4c026e0 [<empty>]  skb_scrub_packet              78705819620807
0xffff8fcab4c026e0 [<empty>]  eth_type_trans                78705819622022
0xffff8fcab4c026e0 [<empty>]  netif_rx                      78705819623431
0xffff8fcab4c026e0 [<empty>]  netif_rx_internal             78705819624542
0xffff8fcab4c026e0 [<empty>]  enqueue_to_backlog            78705819625814
0xffff8fcab4c026e0 [<empty>]  __netif_receive_skb           78705819662870
0xffff8fcab4c026e0 [<empty>]  __netif_receive_skb_one_core  78705819664461
0xffff8fcab4c026e0 [<empty>]  tcf_classify_ingress          78705819666373
0xffff8fcab4c026e0 [<empty>]  skb_ensure_writable           78705819672039
0xffff8fcab4c026e0 [<empty>]  skb_ensure_writable           78705819673163
0xffff8fcab4c026e0 [<empty>]  skb_ensure_writable           78705819674239
2022/02/16 08:06:31 Perf event ring buffer full, dropped 3 samples
0xffff8fcab4c026e0 [<empty>]  nf_hook_slow                  78705819679802
2022/02/16 08:06:31 Perf event ring buffer full, dropped 2 samples
0xffff8fcab4c026e0 [<empty>]  __inet_lookup_listener        78705819687192
2022/02/16 08:06:31 Perf event ring buffer full, dropped 5 samples
0xffff8fcab4c026e0 [<empty>]  fib_validate_source           78705819695577
2022/02/16 08:06:31 Perf event ring buffer full, dropped 4 samples
0xffff8fcab4c026e0 [<empty>]  skb_release_data              78705819701128
```
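This fits the rp_filter finding: the broken trace ends in `fib_validate_source`, which is where the kernel performs the reverse-path check, right before the skb is released. Because the effective rp_filter value is the maximum of `net.ipv4.conf.all.rp_filter` and the per-interface setting, `all=1` overrides the `0` cilium configures on its devices. A workaround sketch that pins the setting against future `systemd-sysctl` re-applies (drop-in path and priority are our choice, not an official recommendation):

```sh
# check the current value on the node
sysctl net.ipv4.conf.all.rp_filter

# persist rp_filter=0 so systemd-sysctl re-applies it instead of the distro default
cat <<'EOF' >/etc/sysctl.d/99-override-rp-filter.conf
net.ipv4.conf.all.rp_filter = 0
EOF
systemctl restart systemd-sysctl.service
```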