moby: Error response from daemon: Cannot start container (fork/exec /usr/sbin/iptables: cannot allocate memory)
We are running some CoreOS clusters. Over time (after about a week) we start seeing the following error.
# systemd error log
2014/10/13 16:12:08 Error response from daemon: Cannot start container f9e42f092597e46f5cf6a507d7e70662e6ef1035a8f01d95f56c1f2934234361: iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 172.31.31.159 --dport 49450 ! -i docker0 -j DNAT --to-destination 172.17.0.95:80: (fork/exec /usr/sbin/iptables: cannot allocate memory)
At this point we have to restart the Docker daemon and everything goes back to normal. Is there a memory leak? There appears to be enough memory available. Could it be because the machine has no swap?
Environment information
# top
top - 16:13:01 up 22 days, 7:56, 1 user, load average: 6.50, 5.44, 4.58
Tasks: 186 total, 3 running, 183 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 14.2%id, 1.3%wa, 33.7%hi, 0.0%si, 50.8%st
Mem: 4051204k total, 3886188k used, 165016k free, 3168k buffers
Swap: 0k total, 0k used, 0k free, 1681572k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20318 root 20 0 3449m 1.3g 10m S 0 34.0 18:27.87 docker
# memory stats
$ free
total used free shared buffers cached
Mem: 4051204 3884208 166996 0 3168 1681744
-/+ buffers/cache: 2199296 1851908
Swap: 0 0 0
# coreos version
$ cat /etc/lsb-release
DISTRIB_ID=CoreOS
DISTRIB_RELEASE=410.0.0
DISTRIB_CODENAME="Red Dog"
DISTRIB_DESCRIPTION="CoreOS 410.0.0"
# docker info
$ docker info
Containers: 22
Images: 332
Storage Driver: btrfs
Execution Driver: native-0.2
Kernel Version: 3.15.8+
# docker version
$ docker -v
Docker version 1.1.2, build d84a070
About this issue
- Original URL
- State: closed
- Created 10 years ago
- Comments: 134 (34 by maintainers)
Commits related to this issue
- YOLO - Looks like we are striking a bug - https://github.com/docker/docker/issues/8539 - changing logging to try and fix — committed to universityofadelaide/docker-apache2-php7 by singularo 8 years ago
This error was resolved after restarting the Docker daemon. The commands are:
$ sudo service docker stop
$ sudo service docker start
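On a systemd-based host such as CoreOS, the equivalent (assuming the unit is named docker.service) is:
$ sudo systemctl restart docker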
@thaJeztah When is 1.13 scheduled to be released? What is the workaround until then?
Restarting the Docker daemon resolved this error. That works for me, but the problem is intermittent.
Is this even fixed in 1.13? I'm seeing some interesting memory creep, and eventually docker top starts failing with the fork/exec error. I'm still trying to figure out where the memory creep is coming from.
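One generic way to confirm that the daemon process itself is what is growing (a sketch, not something suggested in this thread; it assumes the daemon binary is named dockerd, as on 1.12 and later, while older releases run as docker) is to sample its resident set size over time:
$ while true; do ps -o rss=,vsz= -p "$(pidof dockerd)"; sleep 60; done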
Closing this, as the original problem of not capping the attach stream was fixed in #17877 (v1.10). There was still an issue with long log lines (#18057) that corresponds to the report from @bioothod. The fix for that is in master (#22982). I've added it to the 1.12.2 milestone in case there is one.
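For anyone still on a pre-1.10 daemon, a mitigation that might reduce pressure on the uncapped attach-stream buffer (my assumption, not something confirmed in this thread) is simply not keeping stdout/stderr attached, i.e. running containers detached:
$ docker run -d --name web nginx
This does not change the logging path, so it would not help with the long-log-line issue mentioned above.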
Thanks for all the help, @jonathanperret!
Final note, for completeness: the issue was tracked as #18057; the commit I referenced (513ec73) was actually part of #22982 which was only merged a month ago.
@bioothod it looks like you're hitting the bug that was squashed in https://github.com/docker/docker/commit/513ec73831269947d38a644c278ce3cac36783b2. That commit was made back in May, but was not cherry-picked onto the 1.12.x branch, so I guess you'll have to wait for 1.13 to get the fix. I just tested a master dockerd binary from https://master.dockerproject.org/ and the unbounded memory usage does not occur anymore. By the way, there is a simpler test case that ends up crashing the 1.12.1 daemon but works fine on master.
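The test case itself isn't quoted here; as an illustrative sketch (assuming, per #18057, that the trigger is a container emitting one extremely long log line with no trailing newline), something like this should exercise the same path:
$ docker run --rm busybox sh -c 'yes x | tr -d "\n" | head -c 100000000'
On an affected daemon, dockerd resident memory should climb roughly in step with the line length; on a fixed build it stays bounded.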
I had been running fine for over a month on Docker 1.9.1 with default logging. Yesterday I enabled log_driver syslog in all my containers and within a couple of hours ran into this problem. Then it happened a second time soon after. Then it ran for some hours ok.
I haven’t yet tried 1.10 to see if it fixes the problem as per https://github.com/docker/docker/pull/17877
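For reference, enabling the syslog driver per container (the exact setup isn't shown above; my-image is a placeholder) looks roughly like:
$ docker run -d --log-driver=syslog my-image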
Just ran into this. Swap is not a fix; the daemon appears to be using a ridiculous amount of memory.
Sorry to say it, but the most obvious reason this issue should not be closed is that adding swap only addresses a symptom. If the Docker ecosystem wants to provide quality software, somebody with more knowledge of what is really going on here should dig into the root cause of these memory leaks. Sadly, I am not that person, otherwise I would do it. The fact is that the Docker daemon grows over time in memory, goroutines, and so on. Swapping memory will not help anybody in the long term.
+1 Having this issue. It seems to correlate with new image downloads.
Same problem with Docker 1.4.1. The Docker daemon is using almost 3 GB of virtual memory, and the system isn't willing to commit memory for the fork, leading to:
Error pulling image (gc-setup_teamcity_37) from example.com/gc-setup, ApplyLayer fork/exec /usr/bin/docker: cannot allocate memory
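A plausible mechanism for these fork/exec ENOMEM failures (my reading, not something stated in this comment): forking a daemon with a multi-gigabyte address space momentarily requires that much commit charge, and on a box with no swap the kernel's overcommit accounting can refuse it even though plenty of memory looks free. Adding swap or relaxing overcommit accounting only hides the symptom, as noted earlier in the thread, but as a stopgap it would look like:
$ sudo sysctl vm.overcommit_memory=1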