moby: Connecting container to multiple bridge networks breaks port forwarding from external IPs
It may just be a problem on my machine, but it seems that creating a container that sits on two networks somehow interferes with port forwarding. The forwarding works when accessing 127.0.0.1, but not when accessing the IP address of another interface.
Output of docker version:
Client:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 20f81dd
Built: Thu Mar 10 15:54:52 2016
OS/Arch: linux/amd64
Server:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 20f81dd
Built: Thu Mar 10 15:54:52 2016
OS/Arch: linux/amd64
Output of docker info:
Containers: 2
Running: 2
Paused: 0
Stopped: 0
Images: 68
Server Version: 1.10.3
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 304
Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 3.13.0-83-generic
Operating System: Ubuntu 14.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 31.36 GiB
Name: kryton
ID: MFLN:XRFL:372N:VPKR:3AWK:USXB:3Q3E:EAH2:66ZK:T5U2:NEOP:TSCH
Username: bmerry
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Additional environment details (AWS, VirtualBox, physical, etc.):
Physical machine
Steps to reproduce the issue:
1. Create a docker-compose.yml file with the following content:
```yaml
version: "2"
services:
  server:
    image: nginx:1.9.12
    networks:
      - front
      - back
    ports:
      - "8080:80"
networks:
  front:
  back:
```
In actual use there would be other services connected to the back network but not the front network, but they’re not necessary to demonstrate the bug.
2. With docker-compose 1.6.2, run docker-compose up.
3. From the host, run curl http://localhost:8080.
4. From the host, run curl http://IPADDRESS:8080, where IPADDRESS is an IP address of a non-local interface on the machine.
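For reference, steps 2–4 can be run as a script. This is only a sketch: eth0 is an assumed interface name, and IPADDRESS is derived from it.

```sh
# Sketch of steps 2-4; eth0 is an assumed interface name on the host.
docker-compose up -d

# Step 3: via loopback -- returns the nginx welcome page.
curl http://localhost:8080

# Step 4: via a non-local interface's address -- hangs until it times out.
IPADDRESS=$(ip -4 addr show eth0 | awk '/inet /{print $2}' | cut -d/ -f1)
curl --max-time 10 "http://${IPADDRESS}:8080"
```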
Describe the results you received: Step 3 spits out an HTML page from nginx. Step 4 outputs nothing and eventually times out.
Describe the results you expected: Step 4 should return the same HTML page as step 3.
Additional information you deem important (e.g. issue happens only occasionally):
I’m not running any other firewall software on this machine. If I remove the config line putting the service on the back network, then the problem disappears. Similarly, if I run docker network disconnect to disconnect the container from the back network, the problem disappears, and reconnecting it makes the problem come back.
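For the record, the disconnect/reconnect experiment looks like this. The network and container names below are assumptions based on Compose's default <project>_<name> naming, for a project directory called demo:

```sh
# Names are assumptions from Compose's default naming scheme.
docker network disconnect demo_back demo_server_1
curl "http://${IPADDRESS}:8080"                  # now succeeds

docker network connect demo_back demo_server_1
curl --max-time 10 "http://${IPADDRESS}:8080"    # times out again
```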
Output of iptables -vnL:
Chain INPUT (policy ACCEPT 4912 packets, 2311K bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER-ISOLATION all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DOCKER all -- * br-b81344fadd68 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * br-b81344fadd68 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 ACCEPT all -- br-b81344fadd68 !br-b81344fadd68 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- br-b81344fadd68 br-b81344fadd68 0.0.0.0/0 0.0.0.0/0
0 0 DOCKER all -- * br-a4d09867c7ea 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * br-a4d09867c7ea 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 ACCEPT all -- br-a4d09867c7ea !br-a4d09867c7ea 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- br-a4d09867c7ea br-a4d09867c7ea 0.0.0.0/0 0.0.0.0/0
0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 4538 packets, 674K bytes)
pkts bytes target prot opt in out source destination
Chain DOCKER (3 references)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- !docker0 docker0 0.0.0.0/0 172.17.0.2 tcp dpt:5000
0 0 ACCEPT tcp -- !br-a4d09867c7ea br-a4d09867c7ea 0.0.0.0/0 172.18.0.2 tcp dpt:80
Chain DOCKER-ISOLATION (1 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- br-a4d09867c7ea br-b81344fadd68 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- br-b81344fadd68 br-a4d09867c7ea 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- docker0 br-b81344fadd68 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- br-b81344fadd68 docker0 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- docker0 br-a4d09867c7ea 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- br-a4d09867c7ea docker0 0.0.0.0/0 0.0.0.0/0
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Output of iptables -t nat -vnL:
Chain PREROUTING (policy ACCEPT 8 packets, 536 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 2 packets, 272 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 528 packets, 34127 bytes)
pkts bytes target prot opt in out source destination
1 60 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT 529 packets, 34187 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * !br-b81344fadd68 172.19.0.0/16 0.0.0.0/0
0 0 MASQUERADE all -- * !br-a4d09867c7ea 172.18.0.0/16 0.0.0.0/0
0 0 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0
0 0 MASQUERADE tcp -- * * 172.17.0.2 172.17.0.2 tcp dpt:5000
0 0 MASQUERADE tcp -- * * 172.18.0.2 172.18.0.2 tcp dpt:80
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- br-b81344fadd68 * 0.0.0.0/0 0.0.0.0/0
0 0 RETURN all -- br-a4d09867c7ea * 0.0.0.0/0 0.0.0.0/0
0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:5000 to:172.17.0.2:5000
1 60 DNAT tcp -- !br-a4d09867c7ea * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 to:172.18.0.2:80
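Note that the port-8080 DNAT rule above did match one packet (presumably the SYN from the failing curl), so the request is being translated; the open question is what happens to the reply. A diagnostic sketch, where the container name is an assumption and the bridge name is taken from this particular run:

```sh
# Where did the container's single default gateway end up?
docker exec demo_server_1 ip route show default
# A gateway inside 172.19.0.0/16 (the back network) would mean replies to
# external clients leave via a different bridge than the DNAT'd request.

# Watch the bridge the DNAT targets while repeating the failing curl:
tcpdump -ni br-a4d09867c7ea tcp port 80
```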
@railnet
From the container’s perspective, being connected to multiple networks is like a server with multiple NICs. In such a case you end up with multiple default gateways in the host routing table, but only one will be used: the one at the top of the table. The end result is the same for a container, where libnetwork makes sure only one default gateway is programmed.
libnetwork chooses the default gateway for the container based on the priority associated with the network attachment point (Endpoint). Given that the UI does not yet provide a way for the user to set this priority, we fall back to the default logic, which is choosing the first network in alphabetical order… That is why it is choosing the back rather than the front network.

> Based on the current container’s default gateway
I am not sure I understand this. You can configure multiple routes, but the default is the default (0.0.0.0/0). If more than one default route is specified, only the one at the top will be used, and the order is dictated by their metric: the lowest metric comes first.
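As a plain-Linux illustration of that point (the addresses are documentation placeholders, not from this setup):

```sh
# Two default routes; the kernel uses the one with the lowest metric.
ip route add default via 192.0.2.1 dev eth0 metric 100
ip route add default via 198.51.100.1 dev eth1 metric 200

ip route show default
# default via 192.0.2.1 dev eth0 metric 100      <- wins
# default via 198.51.100.1 dev eth1 metric 200
```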
A container never becomes the gateway for a network. The network has its own gateway (for a bridge network it is the bridge interface). When you connect a container to a network, the network driver provides libnetwork with the default gateway to be programmed for the container. In most cases the network’s gateway is the one returned, unless you play with driver options and tell the bridge driver to return a custom chosen IP.
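You can check which gateway a container actually received with something like the following; the container name and the sample output are assumptions:

```sh
# Show the single default gateway libnetwork programmed for the container.
docker exec demo_server_1 ip route show default
# e.g. default via 172.19.0.1 dev eth1
# i.e. the back network's bridge, because "back" sorts before "front".
```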
Please try playing with the network names; 0front and 1back, as an example.
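In compose terms, that workaround would look roughly like this (a sketch, not verified here):

```yaml
version: "2"
services:
  server:
    image: nginx:1.9.12
    networks:
      - 0front   # sorts before 1back, so the default gateway lands on front
      - 1back
    ports:
      - "8080:80"
networks:
  0front:
  1back:
```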
@railnet Thank you for verifying! That’s very good news; we could get you unblocked.
The default-gateway pick-up logic has been in place since Docker started supporting containers connected to multiple networks. Clearly it has not been spelled out enough, even though some issues have been opened in docker/docker and libnetwork, so the blame is on us. We must specify it in the documentation if it is not already present.
Hi @aboch, my previous post was liable to misinterpretation, but your intro was clear and sharable, and it highlighted a perfect alignment in the expected goal. No comments about the explanation.
The core of the post is obviously the sentence “we fall back to the default logic, which is choosing the first network in alphabetical order”. Lack of knowledge of this part was the root cause of my failure.
Now I can confirm that it solves the problem for me, so the issue can be closed from my point of view. Your post was a guiding light for me. 😃 I have just verified the new configuration and it works as expected, with docker-compose 1.7.0 and docker engine 1.11.0.
Issue solved.
Have a great time
@railnet I’ve not actually tried it, but my understanding of --ip is that it specifies a host IP address to bind, i.e., only external connections coming in through that host interface will be forwarded.
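The per-port form of the same idea is the host-address prefix of -p; the address below is a placeholder:

```sh
# Publish the port only on one host address; connections arriving on other
# interfaces are not forwarded. 192.0.2.10 is a placeholder address.
docker run -d -p 192.0.2.10:8080:80 nginx:1.9.12
```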
I also have the same issue. I have found that it seems to be an intermittent problem, as restarting the compose project many times finally made the port forwarding work. I’m on Ubuntu 15.10.