k3s: missing 'flanneld masq' rules

Environmental Info:

k3s version v1.22.13+k3s1 (3daf4ed4)
go version go1.16.10

Node(s) CPU architecture, OS, and Version:

Linux ip-172-31-15-30 5.4.0-1078-aws #84~18.04.1-Ubuntu SMP Fri Jun 3 12:59:49 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

Cluster Configuration: 1 server

Describe the bug: After upgrading k3s from v1.22.12 to v1.22.13 on my Ubuntu 18.04 server, I noticed that my pods could no longer reach the internet. After debugging, it turned out that the default 'flanneld masq' masquerade rules are not created on v1.22.13. On Ubuntu 20.04 and 22.04 the rules are present and everything works as usual.
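
A quick way to confirm the symptom before looking at iptables is to test egress from inside a throwaway pod. This is a minimal sketch and not part of the original report; the busybox image and the example.com target are arbitrary illustrative choices:

    # Start a throwaway pod and try to fetch an external URL.
    # With the 'flanneld masq' rules missing, the request times out
    # instead of printing the page.
    kubectl run egress-test --rm -i --restart=Never --image=busybox -- \
      wget -q -O- -T 5 http://example.com

On an affected node this is expected to time out, matching the "pods can't reach the internet" symptom described above.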

Steps To Reproduce:

  • On a freshly installed Ubuntu 18.04 machine, install the latest k3s from the v1.22 channel:
    curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=v1.22 sh -s - --write-kubeconfig-mode 644
  • Wait for all pods to become ready (~30 sec) and check the default masquerade rules in the nat table:
    iptables -vnL -t nat | grep 'MASQUERADE  all'
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0            0.0.0.0/0            mark match 0x2000/0x2000
    5   220 MASQUERADE  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service traffic requiring SNAT */
  • Then downgrade k3s to v1.22.12 and recheck the masquerade rules (a manual stopgap for the missing rules is sketched after this list):
    curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.22.12+k3s1 sh -s - --write-kubeconfig-mode 644
    iptables -vnL -t nat | grep 'MASQUERADE  all'
    2   141 MASQUERADE  all  --  *      *       10.42.0.0/16        !224.0.0.0/4          /* flanneld masq */
    0     0 MASQUERADE  all  --  *      *      !10.42.0.0/16         10.42.0.0/16         /* flanneld masq */
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0            0.0.0.0/0            mark match 0x2000/0x2000
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service traffic requiring SNAT */
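
Until a fixed release is installed, the two missing rules could be recreated by hand as a temporary stopgap. This is a minimal sketch based only on the v1.22.12 output above; it assumes the default 10.42.0.0/16 cluster CIDR, does not reproduce flannel's full rule set, and does not persist across reboots or k3s restarts:

    # Recreate the two 'flanneld masq' POSTROUTING rules shown in the
    # v1.22.12 output above (default 10.42.0.0/16 cluster CIDR assumed).
    iptables -t nat -A POSTROUTING -s 10.42.0.0/16 ! -d 224.0.0.0/4 \
      -m comment --comment "flanneld masq" -j MASQUERADE
    iptables -t nat -A POSTROUTING ! -s 10.42.0.0/16 -d 10.42.0.0/16 \
      -m comment --comment "flanneld masq" -j MASQUERADE

Downgrading to v1.22.12+k3s1, as in the last step above, remains the simpler way to restore the rules.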

About this issue

  • State: closed
  • Created 2 years ago
  • Comments: 21 (11 by maintainers)

Most upvoted comments

@rancher-max should we close this out