moby: Unable to create new networks in Docker 1.10 (+ iptables weirdness)

I upgraded one of my hosts to Docker 1.10 and found that the one container running on that host could no longer make network connections off-container. As step one of debugging, I decided to remove the container, remove the user-created bridge network, recreate the network, and re-launch the container… but recreating the network failed with an "unable to insert jump to DOCKER-ISOLATION rule in FORWARD chain" error.

$ docker network create logging
Error response from daemon: unable to insert jump to DOCKER-ISOLATION rule in FORWARD chain:  (iptables failed: iptables --wait -I FORWARD -j DOCKER-ISOLATION: iptables v1.4.21: Couldn't load target `DOCKER-ISOLATION':No such file or directory

Try `iptables -h' or 'iptables --help' for more information.
 (exit status 2))

I’ve tried everything I can think of to resolve this, to no avail. Of note: prior to the 1.10 upgrade, the host had all the right iptables rules in the FORWARD chain forwarding traffic to the DOCKER chain; since the upgrade, none of those rules are present, and they aren’t recreated when I start the Docker daemon.
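For anyone hitting this, a quick way to check which Docker chains survived an upgrade is to grep the output of iptables-save. A minimal sketch (the has_chain helper is hypothetical, and the ruleset shown is sample data, not output from this host; capture a real one with sudo iptables-save -t filter):

```shell
#!/bin/sh
# has_chain CHAIN RULESET: succeed if CHAIN is defined in iptables-save output,
# where chain definitions appear as lines like ":DOCKER - [0:0]".
has_chain() {
    printf '%s\n' "$2" | grep -q "^:$1 "
}

# Sample ruleset from a host whose DOCKER-ISOLATION chain has gone missing:
ruleset=':FORWARD ACCEPT [0:0]
:DOCKER - [0:0]'

has_chain DOCKER "$ruleset" && echo "DOCKER: present"
has_chain DOCKER-ISOLATION "$ruleset" || echo "DOCKER-ISOLATION: missing"
```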

Help! As of now, this docker host is functionally offline… ugh.

$ docker version
Client:
 Version:      1.10.0
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   590d5108
 Built:        Thu Feb  4 18:36:33 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.10.0
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   590d5108
 Built:        Thu Feb  4 18:36:33 2016
 OS/Arch:      linux/amd64
$ docker info
Containers: 3
 Running: 0
 Paused: 0
 Stopped: 3
Images: 149
Server Version: 1.10.0
Storage Driver: aufs
 Root Dir: /data/docker/aufs
 Backing Filesystem: extfs
 Dirs: 183
 Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
 Volume: local
 Network: bridge null host
Kernel Version: 3.13.0-77-generic
Operating System: Ubuntu 14.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.955 GiB
Name: FR-S-CCR-DOCK3
ID: OM2G:5GWL:DR43:5ZS5:RBJF:6FBA:MV5U:AOTY:XJAH:6QI4:7AJM:T33R

Host OS: Ubuntu 14.04 LTS x64, running virtualized via vSphere infrastructure.

Thanks a ton in advance.

About this issue

  • Original URL
  • State: closed
  • Created 8 years ago
  • Comments: 15 (6 by maintainers)

Most upvoted comments

Just restarting the Docker daemon fixed this for me!

What worked for me was systemctl restart docker (also systemctl restart balena if you're using balena; that's where I ran into this many times, maybe because I was switching between Docker and balena for testing). Anyway, I hope this helps future googlers.

@delfuego

I think the chains were somehow removed after being created during daemon boot.

I know it is not common since you are on Ubuntu, but are you using firewalld by any chance? We are aware of issues there. You may want to restart the daemon after disabling it (or after disabling ufw, if that is running).

Are you passing any options when starting the daemon? Any WARNING lines in the daemon logs? If you start the daemon with debug logging enabled, do you see the iptables-related log messages about chain creation?
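On Ubuntu 14.04, the Docker package reads daemon options from /etc/default/docker, so one way to enable debug logging is to set the daemon's -D flag there (a sketch; adjust if you start the daemon some other way):

```shell
# /etc/default/docker -- sourced by the Docker upstart job on Ubuntu 14.04
DOCKER_OPTS="-D"
```

After a `service docker restart`, the daemon's debug output (including the iptables chain-creation messages) should land in /var/log/upstart/docker.log on an upstart-based host.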

As a quick workaround, you can manually create the chain with iptables -t filter -N DOCKER-ISOLATION. You should then be able to create the network, assuming no other chains are missing.

@aboch Thanks for the info. This morning, despite manually creating the chain, I still got the same error. So I decided to take the scorched-earth approach: I stopped the Docker daemon, then flushed and removed ALL Docker-created chains from both the filter and nat tables (i.e., the DOCKER and DOCKER-ISOLATION chains, plus any rules in the other chains that referenced them).

# flush (-F) and delete (-X) Docker's chains; any FORWARD/PREROUTING rules
# jumping to them must be removed first or -X will refuse to delete the chain
iptables -t filter -F DOCKER
iptables -t filter -X DOCKER
iptables -t filter -F DOCKER-ISOLATION
iptables -t filter -X DOCKER-ISOLATION
iptables -t nat -F DOCKER
iptables -t nat -X DOCKER

Then I restarted the Docker daemon, and it re-created all the relevant chains in the right tables. Only then was I able to recreate my user-created network.

So, I’m back in business… but am baffled what happened. And I have two other Docker hosts in this cluster that I need to now upgrade to 1.10… so I’m dreading having to do all this manually, but we’ll see.

For googlers: this can be related to your firewall (in my case, shorewall). As far as I can see, it depends on start-up timing.

service shorewall stop
service docker stop
service shorewall start
service docker start

Starting the services in this order fixes it, and this will most probably happen with other firewalls too. (I am using shorewall 5.x with DOCKER=yes, so I am actually using the exact configuration that is supposed to work with Docker.)
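On systemd hosts, a more durable fix than restarting in the right order is to make the docker unit wait for the firewall. A sketch, assuming the firewall's unit is named shorewall.service: put the fragment below in a drop-in such as /etc/systemd/system/docker.service.d/firewall-order.conf, then run systemctl daemon-reload and restart docker.

```ini
[Unit]
# Start Docker only after the firewall has installed its ruleset, so the
# daemon's DOCKER/DOCKER-ISOLATION chains are not wiped after creation.
After=shorewall.service
Wants=shorewall.service
```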