moby: iptables failed - No chain/target/match by that name
Bug Report Info
docker version:
Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (Client): 786b29d
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d
OS/Arch (server): linux/amd64
docker info:
Containers: 41
Images: 172
Storage Driver: devicemapper
Pool Name: docker-253:2-4026535945-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 7.748 GB
Data Space Total: 107.4 GB
Data Space Available: 99.63 GB
Metadata Space Used: 12.55 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.135 GB
Udev Sync Supported: true
Deferred Removal Enabled: true
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-123.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 24
Total Memory: 125.6 GiB
Name: <hostname>
ID: <id>
uname -a:
Linux <hostname> 3.10.0-123.el7.x86_64 #1 SMP Mon Jun 30 12:09:22 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
Environment details (AWS, VirtualBox, physical, etc.): Physical; iptables version 1.4.21
How reproducible: Random
Steps to Reproduce:
- Start container with exposed ports mapped to host ports
- Stop container
- Repeat, good luck.
Actual Results:
Cannot start container <container id>: iptables failed: iptables --wait -t filter -A DOCKER ! -i docker0 -o docker0 -p tcp -d 172.17.0.23 --dport 4000 -j ACCEPT: iptables: No chain/target/match by that name.
Expected Results:
Container starts without a problem.
Additional info:
I’ll also mention these containers are being launched via Apache Mesos (0.23.0) using Marathon. Appears similar to #13914.
About this issue
- Original URL
- State: closed
- Created 9 years ago
- Reactions: 55
- Comments: 63 (9 by maintainers)
Commits related to this issue
- added description of a workaround for docker being unable to start the container https://github.com/docker/docker/issues/16816 — committed to ofayans/familyalbum by ofayans 8 years ago
Exactly the same issue here as @shayts7 is describing. Workaround for now is to restart the daemon:
I have met a similar problem and it was solved by running this command:
# iptables -t filter -N DOCKER
Hope it helps!

It happened to us as well, but in our case
iptables -t filter -L -v -n
showed that the DOCKER chain exists; only when checking the nat table with
iptables -t nat -L -v -n
did we find that the DOCKER chain had somehow disappeared. After restarting the docker daemon everything worked fine and we could see the DOCKER chain come back to the nat table.
If someone has a clue as to why the chain disappears, I'd be more than happy to hear about it.
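Since the chain can vanish from one table but not the other, a quick way to check both at once is a small script like the following (a sketch, assuming the stock iptables CLI; it must run as root, and the script name is made up):

```shell
#!/bin/sh
# check-docker-chains.sh: report whether the DOCKER chain exists
# in the filter and nat tables. `iptables -t <table> -L DOCKER`
# exits nonzero when the chain is missing.
for table in filter nat; do
    if iptables -t "$table" -n -L DOCKER >/dev/null 2>&1; then
        echo "DOCKER chain present in $table table"
    else
        echo "DOCKER chain MISSING from $table table"
    fi
done
```

If either line reports MISSING, restarting the docker daemon recreates the chains, as described above.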
Hello everyone,
I’m using coreos and have this problem too but only on my master.
Running
iptables -t nat -N DOCKER
solves the problem: pods are automatically created and everything is fine. I'm still trying to find out why this chain is removed on my master and not on my workers. Follow @fredrikaverpil. Thank you.
I tried this:
This issue occurs when I restart a container after stopping firewalld.
docker version: Docker version 1.9.1, build a34a1d5
docker info:
uname -a: Linux databus0 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
Provide additional environment details (AWS, VirtualBox, physical, etc.):
List the steps to reproduce the issue:
Describe the results you received:
Describe the results you expected: restart ok
Provide additional info you think is important:
----------END REPORT ---------
#ENEEDMOREINFO
Hi All,
I faced the same problem and this fixed it for me.
Run the command below; it will clear all chains.
Then restart the Docker service using the command below.
I hope it will work.
This worked for me on all CentOS 7.2 systems!
@veuncent docker creates the DOCKER chain in the iptables rules on startup; if some other system (such as firewalld) removes those rules after docker has started, this error can occur. Make sure the docker daemon is started after firewalld.
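On systemd-based distros, the "start docker after firewalld" ordering can be sketched as a drop-in like the one below (an assumption on my part: the drop-in path and file name are conventions, not anything the thread specifies; run as root):

```shell
# Create a systemd drop-in so docker.service is ordered after
# firewalld.service, preventing firewalld from wiping the DOCKER
# chain right after the daemon creates it.
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/after-firewalld.conf <<'EOF'
[Unit]
After=firewalld.service
EOF
# Pick up the drop-in and restart docker so it recreates its chains.
systemctl daemon-reload
systemctl restart docker
```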
Hello everyone!
I have faced this issue, and found that running my firewall script removes the DOCKER chain, which is why I get this error. Restarting the docker service fixes the problem, because docker recreates the chains used by its service.
To fix:
Still, it would be nice if docker checked for its chain (and recreated it if missing) whenever a create-container command is run.
Would it be possible to update it?
Sorry, I'm not able to contribute a pull request.
We were having this issue. For us it turned out docker was starting before our firewall persistence (iptables-persistent) and its rules were getting overwritten. I resolved it by removing the package, as we were using it for only one rule.
There are ways to keep them working side by side, either by ensuring docker starts after iptables-persistent (https://groups.google.com/forum/#!topic/docker-dev/4SfOwCOmw-E) or by adding whatever rules the docker service adds into the persistent iptables configuration (I didn't test this). May be of help, @Seraf, @shayts7.
This is not a docker bug, but maybe it should be addressed in the docs or something.
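The first option (starting docker after the persistence service) can be sketched the same way as the firewalld fix, with a drop-in like this (assumptions: systemd, and the unit is named netfilter-persistent.service on newer Debian/Ubuntu, iptables-persistent.service on older releases; run as root):

```shell
# Order docker.service after the persistent-firewall unit so the
# restored ruleset cannot overwrite the DOCKER chain.
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/after-firewall.conf <<'EOF'
[Unit]
After=netfilter-persistent.service
EOF
systemctl daemon-reload
```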
This seems to only happen on CentOS 7 for me.
This is what I did
- Stop firewalld
- Restart your machine
As long as you've passed --restart=always to your docker instance, when your machine reboots the docker instance should be running and the port should be bound. I believe this issue is specific to the CentOS 7 family, which uses firewalld instead of iptables.
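For reference, the restart policy mentioned above is set at container creation; a minimal example (the image name and port numbers are placeholders, not from this thread):

```shell
# --restart=always tells the daemon to bring the container, and
# therefore its port bindings and iptables rules, back up after a
# reboot or daemon restart. "my-image" and port 4000 are placeholders.
docker run -d --restart=always -p 4000:4000 my-image
```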
I have solved the issue by typing service iptables restart and then service docker restart. Hope it helps.

Hi there. I'm running a VM. INFO:
Docker Version:
So, I've been working for almost a week to solve this issue! My MAIN problem is that I've detected some random disconnects on my VPS; the disconnects affect all ports, losing all access! I did some research and found in the /var/log/firewalld logs the issues I will mention below. OUTPUT:
I have already executed these commands:
Then restarted the Docker service using the command below.
I have tried some of these commands, and even uninstalled docker to remove its configs, without much success… 👎 …
It is sad that this is happening! I have work to do in a production environment.

firewalld removes DOCKER's rules; running systemctl restart docker solves it.
Hi
I used a docker-compose command to start Elasticsearch, Logstash and Kibana; they ran normally for several hours, then the ELK stack stopped working properly. So I tried to restart the Elasticsearch, Logstash or Kibana containers but hit a similar problem.
Steps to reproduce the issue:
Describe the results you received: Error response from daemon: Cannot restart container <container ID>: driver failed programming external connectivity on endpoint dockerelk_elasticsearch_1 : (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 9300 -j DNAT --to-destination 172.18.0.2:9300 ! -i: iptables: No chain/target/match by that name.
Describe the results you expected: Docker runs normally without this problem and without needing a restart.
Additional information you deem important (e.g. issue happens only occasionally): The problem happened after several hours normal running.
Output of docker version:
Output of docker info:
Additional environment details (AWS, VirtualBox, physical, etc.):
uname -a: Linux scav-dev.fordme.com 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux
Try creating the chain in iptables by running
iptables -N DOCKER
and if that doesn't work, try upgrading docker and iptables.
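Putting the recurring advice in this thread together, the common recovery looks roughly like this (a sketch, not an official fix; run as root, and note the error can appear in either the filter or the nat table):

```shell
#!/bin/sh
# restore-docker-chains.sh: recreate the DOCKER chain in both
# tables if missing (-N fails harmlessly when the chain already
# exists), then restart the daemon so it repopulates its rules.
iptables -t filter -N DOCKER 2>/dev/null || true
iptables -t nat -N DOCKER 2>/dev/null || true
systemctl restart docker
```

On pre-systemd setups, `service docker restart` plays the same role as the final line.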