istio: iptables-restore v1.6.1: iptables-restore: unable to initialize table 'nat'
Bug description
When the istio-init container starts, it fails with: iptables-restore v1.6.1: iptables-restore: unable to initialize table 'nat'
istio-init startup log:
Environment:
------------
ENVOY_PORT=
INBOUND_CAPTURE_PORT=
ISTIO_INBOUND_INTERCEPTION_MODE=
ISTIO_INBOUND_TPROXY_MARK=
ISTIO_INBOUND_TPROXY_ROUTE_TABLE=
ISTIO_INBOUND_PORTS=
ISTIO_LOCAL_EXCLUDE_PORTS=
ISTIO_SERVICE_CIDR=
ISTIO_SERVICE_EXCLUDE_CIDR=
Variables:
----------
PROXY_PORT=15001
PROXY_INBOUND_CAPTURE_PORT=15006
PROXY_UID=1337
PROXY_GID=1337
INBOUND_INTERCEPTION_MODE=REDIRECT
INBOUND_TPROXY_MARK=1337
INBOUND_TPROXY_ROUTE_TABLE=133
INBOUND_PORTS_INCLUDE=*
INBOUND_PORTS_EXCLUDE=15090,15020
OUTBOUND_IP_RANGES_INCLUDE=*
OUTBOUND_IP_RANGES_EXCLUDE=
OUTBOUND_PORTS_EXCLUDE=
KUBEVIRT_INTERFACES=
ENABLE_INBOUND_IPV6=false
Writing following contents to rules file: /tmp/iptables-rules-1587065958218530124.txt999895894
* nat
-N ISTIO_REDIRECT
-N ISTIO_IN_REDIRECT
-N ISTIO_INBOUND
-N ISTIO_OUTPUT
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-port 15001
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-port 15006
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A ISTIO_INBOUND -p tcp --dport 22 -j RETURN
-A ISTIO_INBOUND -p tcp --dport 15090 -j RETURN
-A ISTIO_INBOUND -p tcp --dport 15020 -j RETURN
-A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A ISTIO_OUTPUT -o lo -s 127.0.0.6/32 -j RETURN
-A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --uid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --gid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
COMMIT
iptables-restore --noflush /tmp/iptables-rules-1587065958218530124.txt999895894
iptables-restore v1.6.1: iptables-restore: unable to initialize table 'nat'
Error occurred at line: 1
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
iptables-save
panic: exit status 2
goroutine 1 [running]:
istio.io/istio/tools/istio-iptables/pkg/dependencies.(*RealDependencies).RunOrFail(0xd819c0, 0x9739b8, 0x10, 0xc000084bc0, 0x2, 0x2)
istio.io/istio@/tools/istio-iptables/pkg/dependencies/implementation.go:44 +0x96
istio.io/istio/tools/istio-iptables/pkg/cmd.(*IptablesConfigurator).executeIptablesRestoreCommand(0xc0000f7d30, 0x7f88e067a601, 0x0, 0x0)
istio.io/istio@/tools/istio-iptables/pkg/cmd/run.go:474 +0x3aa
istio.io/istio/tools/istio-iptables/pkg/cmd.(*IptablesConfigurator).executeCommands(0xc0000f7d30)
istio.io/istio@/tools/istio-iptables/pkg/cmd/run.go:481 +0x45
istio.io/istio/tools/istio-iptables/pkg/cmd.(*IptablesConfigurator).run(0xc0000f7d30)
istio.io/istio@/tools/istio-iptables/pkg/cmd/run.go:428 +0x24e2
istio.io/istio/tools/istio-iptables/pkg/cmd.glob..func1(0xd5c740, 0xc0000b0900, 0x0, 0x10)
istio.io/istio@/tools/istio-iptables/pkg/cmd/root.go:56 +0x14e
github.com/spf13/cobra.(*Command).execute(0xd5c740, 0xc000098010, 0x10, 0x11, 0xd5c740, 0xc000098010)
github.com/spf13/cobra@v0.0.5/command.go:830 +0x2aa
github.com/spf13/cobra.(*Command).ExecuteC(0xd5c740, 0x40574f, 0xc00006a058, 0x0)
github.com/spf13/cobra@v0.0.5/command.go:914 +0x2fb
github.com/spf13/cobra.(*Command).Execute(...)
github.com/spf13/cobra@v0.0.5/command.go:864
istio.io/istio/tools/istio-iptables/pkg/cmd.Execute()
istio.io/istio@/tools/istio-iptables/pkg/cmd/root.go:284 +0x2d
main.main()
istio.io/istio@/tools/istio-iptables/main.go:22 +0x20
Expected behavior
istio-init should start properly and set up the pod network.
Steps to reproduce the bug
#install istio (description of installation provided below)
kubectl create namespace test
kubectl label namespace test istio-injection=enabled
kubectl -n test create deployment nginx --image=nginx
#deployment will fail to start on init
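To confirm the failure, the init container's log can be inspected. A hedged sketch (the app=nginx label lookup assumes the default label that kubectl create deployment applies):

```shell
# List the pods; the nginx pod should be stuck in Init:Error / Init:CrashLoopBackOff.
kubectl -n test get pods
# Grab the pod name (assumes the default app=nginx label from 'kubectl create deployment').
POD=$(kubectl -n test get pods -l app=nginx -o jsonpath='{.items[0].metadata.name}')
# The iptables-restore error shown above appears in the istio-init container's log.
kubectl -n test logs "$POD" -c istio-init
```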
Version (include the output of istioctl version --remote, kubectl version, and helm version if you used Helm)
#istio
client version: 1.5.1
control plane version: 1.5.1
data plane version: 1.5.1 (3 proxies)
#kubectl
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.1", GitCommit:"7879fc12a63337efff607952a323df90cdc7a335", GitTreeState:"clean", BuildDate:"2020-04-08T17:38:50Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.1", GitCommit:"7879fc12a63337efff607952a323df90cdc7a335", GitTreeState:"clean", BuildDate:"2020-04-08T17:30:47Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
#cni: weave v2.6.2
How was Istio installed?
istioctl manifest apply \
--set profile=demo \
--set values.gateways.istio-ingressgateway.sds.enabled=true \
--set values.global.k8sIngress.enabled=true \
--set values.global.k8sIngress.enableHttps=true \
--set values.global.k8sIngress.gatewayName=ingressgateway
Environment where bug was observed (cloud vendor, OS, etc)
Virtualbox VM cluster
1x master
3x worker
installed with kubeadm v1.18
CentOS 8.1 (SELinux enabled)
Possible conflicts: SELinux being enabled, and the iptables-legacy package.
Thanks in advance.
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 16 (2 by maintainers)
Istio uses iptables to intercept traffic by adding NAT rules, so the netfilter Linux kernel modules must be enabled. Run the module-loading command on all hosts to enable them.
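The load command itself was not preserved in this archive; a sketch of what it likely looked like, using modprobe. The exact module list (br_netfilter plus the iptables NAT-related modules) is an assumption based on this thread, so adjust it for your kernel:

```shell
# Load the netfilter modules that istio-init's NAT rules depend on.
# Module list is an assumption based on this thread; adjust as needed.
MODULES="br_netfilter iptable_nat iptable_mangle iptable_filter xt_REDIRECT xt_owner"
for m in $MODULES; do
  if modprobe "$m" 2>/dev/null; then
    echo "loaded: $m"
  else
    echo "could not load: $m (module missing, or not running as root)"
  fi
done
```

Note that modprobe only affects the running kernel; persisting across reboots is covered further down in the thread.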
In addition, CentOS 8 uses the nftables backend for iptables, so you cannot use the iptables command to inspect the rules in the pod.

We encountered this issue when deploying Kubeflow v1.3.1 and v1.4.1 through its manifests deployment method, which injects Istio sidecars. Our target environment is a downstream Kubernetes v1.21.9 cluster that is deployed by Rancher v2.6.3-patch1. Each node in our cluster has AlmaLinux 8.5 (one of the CentOS 8 derivatives) as its OS. SELinux and firewalld are both disabled on our nodes.
The kernel module loading approach suggested by @zackzhangkai above (and quoted here) was what fixed this issue for us:
We ran this command to load these modules on all of our nodes in the downstream Kubernetes cluster that have the worker role. (EDIT: These commands take effect immediately, and do not persist if the node is rebooted.) In order to get the module loads to persist across reboots, we created a file – /etc/modules-load.d/99-istio-modules.conf – on each worker node with the following contents:

This workaround is great, and we’ve documented it internally, but it would be really nice if this issue could be fixed at the Istio level, especially since we encountered it while trying to install a separate piece of software that happens to depend upon / use Istio.
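The contents of 99-istio-modules.conf were not preserved in this archive; a plausible sketch, assuming the same netfilter module list discussed in this thread (systemd-modules-load reads one module name per line from files under /etc/modules-load.d/). The sketch writes to /tmp so it runs without root; use the /etc path on a real node:

```shell
# Persist the module loads across reboots via systemd-modules-load.
# On a real worker node: CONF=/etc/modules-load.d/99-istio-modules.conf
CONF=/tmp/99-istio-modules.conf
cat > "$CONF" <<'EOF'
br_netfilter
iptable_nat
iptable_mangle
iptable_filter
xt_REDIRECT
xt_owner
EOF
echo "wrote $CONF"
```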
I was able to overcome the iptables nat issue by enabling CNI in conjunction with IPVS:
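The commands for that fix were not preserved here; a sketch of enabling the Istio CNI plugin via istioctl. The components.cni.enabled field is part of the IstioOperator API from roughly Istio 1.5 onward; switching kube-proxy to IPVS mode is separate cluster-level configuration (kube-proxy's mode setting), not something istioctl controls:

```shell
# Reinstall/upgrade Istio with the CNI plugin enabled, so traffic
# redirection is handled by the CNI plugin instead of the istio-init
# container running iptables inside each pod.
istioctl manifest apply \
  --set profile=demo \
  --set components.cni.enabled=true
```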