moby: Port forwarding problems on Ubuntu 14.04 with 100+ containers running
Output of `docker version`:
Client:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 13:12:04 UTC 2015
OS/Arch: linux/amd64
Server:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 13:12:04 UTC 2015
OS/Arch: linux/amd64
Output of `docker info`:
Containers: 138
Images: 293
Server Version: 1.9.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 569
Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-83-generic
Operating System: Ubuntu 14.04.4 LTS
CPUs: 10
Total Memory: 13.7 GiB
Name: dockertest.anttiviljami.com
ID: XOJO:TPPD:TXB4:7Z4Z:MH4L:LKFV:BMPG:3K4Z:ZPSF:Y7EU:JQMU:HW7P
Username: dockeruser
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Additional environment details (AWS, VirtualBox, physical, etc.): Upcloud scalable VPS
Steps to reproduce the issue:
- Have 100+ containers running, each with a unique port forwarded with `-p 127.0.0.1:<unique port between 30000-50000>:80`.
- Select one of the containers and run `docker kill` and `docker rm` on it. Then do a `docker run` again with all the same parameters, including the same unique port number originally used for the container.
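The steps above can be sketched as a script. This is a minimal reproduction sketch, assuming a Docker 1.9.x host and a hypothetical image named `webapp` that serves on port 80 (both names are placeholders):

```sh
#!/bin/sh
# Reproduction sketch; requires a Docker host, so it skips cleanly otherwise.
command -v docker >/dev/null 2>&1 || { echo "docker not available; skipping"; exit 0; }

# Start 100 containers, each bound to a unique loopback port (30000 + i).
i=0
while [ "$i" -lt 100 ]; do
  port=$((30000 + i))
  docker run -d --name "web$i" -p "127.0.0.1:${port}:80" webapp
  i=$((i + 1))
done

# Kill and remove one container, then recreate it with the same parameters
# and the same unique port.
docker kill web42 && docker rm web42
docker run -d --name web42 -p 127.0.0.1:30042:80 webapp

# At this point, per the report, some *other* port in the 30000-30099 range
# may also route to web42's port 80.
```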
Describe the results you received: After doing this, you will find that exactly one of the other assigned ports, which we didn't touch, now also points to the newly started container's port 80. It seems completely random which port gets mapped incorrectly; you have to find it by checking each assigned port individually.
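Checking each assigned port can be scripted. A minimal sweep sketch, using a local Python HTTP server as a stand-in for a container (an assumption; against real containers you would compare the response bodies, which should identify which backend actually answered):

```sh
#!/bin/sh
# Stand-in "container": a local HTTP server on one port in the range.
python3 -m http.server 39123 --bind 127.0.0.1 >/dev/null 2>&1 &
stand_in=$!
sleep 1

# Sweep a small demo range; the real range in this report is 30000-50000.
for port in $(seq 39120 39130); do
  # Closed ports make curl fail fast and get skipped; listeners print a line.
  code=$(curl -s -o /dev/null -m 1 -w '%{http_code}' "http://127.0.0.1:${port}/") || continue
  echo "port ${port} -> HTTP ${code}"
done

kill "$stand_in"
```

Against the live daemon, each forwarded port would be probed the same way and its response compared with the container it is supposed to map to.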
The workaround is simply to restart the Docker daemon; after that, without any configuration changes, all ports are magically mapped correctly again.
Describe the results you expected: The only port pointing to the new container should be the one assigned to it. None of the previous port forwardings should be affected.
Additional information you deem important (e.g. issue happens only occasionally): The Docker daemon is running with custom opts to bind running containers to the old default 172.17.42.1/16 address space:
--bip 172.17.42.1/16
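On Ubuntu 14.04 with Docker 1.9.x, daemon options like this are typically set in `/etc/default/docker`. A sketch of what that config might look like here (the exact file contents are an assumption, not taken from the report):

```sh
# /etc/default/docker (Ubuntu 14.04, Docker 1.9.x)
# Bind containers to the old default bridge address space, as described above.
DOCKER_OPTS="--bip 172.17.42.1/16"
```

After editing, the daemon is restarted with `sudo service docker restart`.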
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Reactions: 2
- Comments: 31 (12 by maintainers)
I specify explicit ports like so:
Yes. I always specify `--restart=always`, so the container wouldn't restart after stopping due to the `bind: address already in use` error. But yes, even after stopping the container, the host port was proxied to a different running container. This was all confirmed on 1.12.1.

I'm specifying `--bip 172.17.42.1/16` for the docker daemon.

I did, but I'm afraid I didn't keep the iptables config. I'll start up a daemon with `--userland-proxy=false` so I can get the rules for debugging next time this issue occurs.

Yes. I checked `netstat -lntp` output and confirmed nothing was listening on the port `docker run` was complaining about.

These seemingly unexplained `bind: address already in use` errors have been showing up more frequently in the last few weeks. Any specific data you want me to provide for when I run into one?

Thanks @anttiviljami! It would help us narrow down what can cause this.
Same problem:
I have multiple web application containers with ports 80 / 8000 / 8080, and all of them redirect only to the container with port 80. I checked the NAT rules and everything looks OK.
What do you suggest?
Thank you.