moby: docker and ufw serious problems

Having installed ufw and set it to block all incoming traffic by default (sudo ufw default deny), I find that when I run Docker images that map ports to my host machine, those mapped ports are accessible from outside, even though I never allowed access to them.

Please note that DEFAULT_FORWARD_POLICY="ACCEPT", as described at http://docs.docker.io/en/latest/installation/ubuntulinux/#ufw, has not been enabled on this machine; DEFAULT_FORWARD_POLICY="DROP" is still set.

Any ideas what might be causing this?

Output of ufw status:

$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing)
New profiles: skip

To                         Action      From
--                         ------      ----
22                         ALLOW IN    Anywhere
443/tcp                    ALLOW IN    Anywhere
80/tcp                     ALLOW IN    Anywhere
5666                       ALLOW IN    95.xx.xx.xx
4949                       ALLOW IN    95.xx.xx.xx
22                         ALLOW IN    Anywhere (v6)
443/tcp                    ALLOW IN    Anywhere (v6)
80/tcp                     ALLOW IN    Anywhere (v6)

Here is the docker ps output for my rabbitmq container:

cf4028680530        188.xxx.xx.xx:5000/rabbitmq:latest           /bin/sh -c /usr/bin/   5 weeks ago         Up 5 days           0.0.0.0:15672->15672/tcp, 0.0.0.0:5672->5672/tcp   ecstatic_darwin/rabbitmq,focused_torvalds/rabbitmq,rabbitmq,sharp_bohr/rabbitmq,trusting_pike/rabbitm

Nmap test:

nmap -P0 example.com -p 15672

Starting Nmap 5.21 ( http://nmap.org ) at 2014-03-18 11:27 CET
Nmap scan report for example.com (188.xxx.xxx.xxx)
Host is up (0.048s latency).
PORT      STATE SERVICE
15672/tcp open  unknown

Nmap done: 1 IP address (1 host up) scanned in 0.09 seconds

General info:

  • Ubuntu 12.04 server
$ uname -a
Linux production 3.8.0-29-generic #42~precise1-Ubuntu SMP Wed Aug 14 16:19:23 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

$ docker version
Client version: 0.9.0
Go version (client): go1.2.1
Git commit (client): 2b3fdf2
Server version: 0.9.0
Git commit (server): 2b3fdf2
Go version (server): go1.2.1
Last stable version: 0.9.0

$ docker info
Containers: 12
Images: 315
Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Dirs: 339
WARNING: No swap limit support

About this issue

  • Original URL
  • State: closed
  • Created 10 years ago
  • Reactions: 75
  • Comments: 150 (18 by maintainers)

Most upvoted comments

After spending 2 hours reading various GitHub issues, I settled on the following workaround, which also works for custom container networks, based on this gist (HT @rubot):

Append the following at the end of /etc/ufw/after.rules (replace eth0 with your external facing interface):

# Put Docker behind UFW
*filter
:DOCKER-USER - [0:0]
:ufw-user-input - [0:0]

-A DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-USER -m conntrack --ctstate INVALID -j DROP
-A DOCKER-USER -i eth0 -j ufw-user-input
-A DOCKER-USER -i eth0 -j DROP
COMMIT

And undo any and all of:

  • Remove "iptables": "false" from /etc/docker/daemon.json
  • Revert to DEFAULT_FORWARD_POLICY="DROP" in /etc/default/ufw
  • Remove any docker related changes to /etc/ufw/before.rules

Be sure to test that everything comes up fine after a reboot.
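A quick way to verify the rules were installed (standard ufw/iptables commands; the chain should show the conntrack ACCEPT/DROP entries and the hand-off to ufw-user-input):

```shell
# Pick up the new after.rules, then inspect the chain
sudo ufw reload
sudo iptables -L DOCKER-USER -n -v --line-numbers
```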

I still believe Docker’s out of the box behavior is dangerous and many more people will continue to unintentionally expose internal services to the outside world due to Docker punching holes in otherwise safe iptables configs.

(edit: I didn’t see the need to set MANAGE_BUILTINS=no and IPV6=no, or to fiddle with /etc/ufw/before.init, not sure why @rubot did that)

Guys, this is a serious security issue. Why is there no hint in the documentation about it? Only by accident did I find out that my MySQL port is wide open to the world. I absolutely didn’t expect that, as I’ve used ufw before and it was reliable enough not to spend another thought on it. So I trusted the advice to change the forward policy to ACCEPT. I would never have expected that it basically suspends ufw completely.

I have been experimenting with this a few hours now. I think I got it figured out.

… the installation instructions on https://docs.docker.com/installation/ubuntulinux/ have a section “Enable UFW forwarding” which appears to be unnecessary.

The FORWARD chain does need its policy set to ACCEPT if you have --iptables=false. It only appears this is not needed because the Docker installation package auto-starts Docker and adds iptables rules to the FORWARD chain. When you afterwards add --iptables=false to your config and restart Docker, those rules are still there. After the next reboot these rules will be gone and your containers won’t be able to communicate unless you have the FORWARD chain policy set to ACCEPT.

What you need for a setup that allows filtering with UFW, inter-container networking, and outbound connectivity is:

  • start docker with --iptables=false
  • FORWARD chain policy set to ACCEPT
  • add the following NAT rule:
    iptables -t nat -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE
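One way to persist the NAT rule across reboots is to append a *nat section to /etc/ufw/after.rules, which ufw re-applies on start (a sketch; assumes Docker's default 172.17.0.0/16 bridge subnet):

```shell
# Appended to /etc/ufw/after.rules
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE
COMMIT
```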

So what is the story here? With Docker version 1.7.0, build 0baf609 on Ubuntu 14 this is still completely broken. Also, the installation instructions on https://docs.docker.com/installation/ubuntulinux/ have a section “Enable UFW forwarding” which appears to be unnecessary. Anyone installing Docker on an Ubuntu box exposes any forwarded ports from their containers to the outside world, and even worse, looking at the ufw rules gives no hint that this is occurring, which is, needless to say, pretty bad.

For the record, the solution from @VascoVisser worked for me with docker V1.10. Here are the files I had to change:

  • Set DEFAULT_FORWARD_POLICY="ACCEPT" in /etc/default/ufw

  • Set DOCKER_OPTS="--iptables=false" in /etc/default/docker

  • Add the following block with my custom bridge’s ip range to the top of /etc/ufw/before.rules:

    # nat Table rules
    *nat
    :POSTROUTING ACCEPT [0:0]
    
    # Forward traffic from eth1 through eth0.
    -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE
    
    # don't delete the 'COMMIT' line or these nat table rules won't be processed
    COMMIT
    

Note: I’m using a custom network for my docker containers, so you may have to change the 192.168.0.0 above to match your network range. The default is 172.17.0.0/16 as in Vasco’s comment above.

UPDATE: On Ubuntu 16.04 things are different, because Docker is started by systemd, so /etc/default/docker is ignored. The solution described here creates the file /etc/systemd/system/docker.service.d/noiptables.conf with this content:

[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --iptables=false

and issue systemctl daemon-reload afterwards.
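A short verification sequence (standard systemd commands; the exact ExecStart line depends on your Docker version):

```shell
sudo systemctl daemon-reload
systemctl cat docker          # should list noiptables.conf as a drop-in
sudo systemctl restart docker
# The daemon should now be running with the flag:
ps -ef | grep '[d]ocker daemon.*--iptables=false'
```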

Just wanted to update with some thoughts on changes we discussed amongst maintainers:

https://github.com/moby/moby/pull/45076 sends a warning to clients on container create when port forwards are requested for anything other than localhost with an option to disable via env var (on the daemon).

Also planning to abstract port forwarding logic and provide an implementation that uses IPVS NAT instead of iptables. This would mean dockerd is not poking holes in the firewall. It would be opt-in to not break existing users.

After this interface is well tested we plan to make this available as a plugin (exactly how is not determined yet, maybe an API over a unix socket or just a simple call to a host binary). There could potentially be an iptables/nftables version of this implementation that takes a better approach than the existing one does as well.

I can confirm that this is still an issue with docker 1.12.1 and UFW on Ubuntu 16.04

It’s unbelievably irresponsible of Docker to have not fixed this major security gotcha for 9 years. Strange…

Hi Folks,

here my solution/workaround. Most of the stuff is already written in the comments above.

I’ve tested this approach with Ubuntu 16.04 & Docker 1.13.1.

  1. Before installing Docker, create daemon.json in /etc/docker/ containing { "iptables": false }. If you do this after installing Docker, there are already iptables rules created by Docker during its first startup at the end of the installation process.

  2. Change UFW default forward policy to ACCEPT in the file /etc/default/ufw

  3. Add these after rules in /etc/ufw/after.rules:

*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE
COMMIT

and

*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING ! -o docker_gwbridge -s 172.18.0.0/16 -j MASQUERADE
COMMIT

The 2nd entry is to allow containers attached to an overlay network access to the internet. Edit/Update: in the initial comment there was a missing !; see the comment from @lsapan.

  4. Set further forwarding configuration for UFW in /etc/ufw/sysctl.conf:
          net/ipv4/ip_forward=1
          net/ipv6/conf/default/forwarding=1
          net/ipv6/conf/all/forwarding=1
  5. Disable all incoming traffic and allow only the IPs (ranges) you want to allow. Especially if you plan to run swarm mode, all involved nodes should be added to the allow list.

At the end, my UFW status looks like this:

# ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), allow (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
Anywhere                   ALLOW IN    xxx.xxx.xxx.xxx
Anywhere                   ALLOW IN    xxx.xxx.xxx.xxx
Anywhere                   ALLOW IN    xxx.xxx.xxx.xxx            
Anywhere                   ALLOW IN    xxx.xxx.xxx.xxx            
Anywhere                   ALLOW IN    xxx.xxx.xxx.xxx           
22/tcp                     LIMIT IN    Anywhere                  
22/tcp (v6)                LIMIT IN    Anywhere (v6) 
  6. Deploy services without the routing mesh! As already stated, it looks like swarm mode itself ignores iptables=false. When you start a service using --publish <port:port>, this again results in ports that are accessible from all around the world. To avoid this, you have to use the mode format of publish, e.g.:
docker service create \
--name myweb \
--replicas 5 \
--network testnet \
--publish mode=host,target=80,published=80 \
nginx

In this case, the port is published only on the node (host) and no iptables entries are created by swarm.

Now, you can add a UFW rule to allow some IPs to access these hosts on port 80. You can think about using dedicated “frontend” nodes running a proxy server to access services behind them which have no ports exposed/published.

Unfortunately compose file 3.0/3.1 doesn’t support this extended publish format but there is already a solution in sight: https://github.com/docker/docker/pull/30476 and several other PRs

So, I hope this helps.

@tsuna I like your solution. When I apply it for the first time and disable/enable ufw to apply the changes, everything works as expected. But if I then reload ufw or execute ufw disable/enable again, I get the following error from ufw, and ufw is then inactive:

$ sudo ufw reload
ERROR: Could not load logging rules
$ sudo ufw status verbose
Status: inactive

The problem goes away if I comment out the rule -A DOCKER-USER -i eth0 -j ufw-user-input. But of course this rule is required to make user-defined rules work. If I set MANAGE_BUILTINS=yes in /etc/default/ufw, it is also possible to restart/reload ufw. But after ufw has restarted, I must also restart the docker service to fix Docker’s iptables rules. Disabling logging in /etc/ufw/ufw.conf with LOGLEVEL=off has no effect.

Edit: I now think I understand what’s happening. The default setting for MANAGE_BUILTINS is no, which means that ufw will not touch any chains except its own. But by adjusting after.rules as @tsuna suggests, we are changing other chains, so ufw can’t clean up the rules correctly. I have decided to set MANAGE_BUILTINS to yes as a solution.

There is another iptables rule that is important in some cases: iptables -t nat -A PREROUTING ! -i docker0 -p udp --dport 3478 -j DNAT --to-destination 172.17.0.7:3478 (use -p udp or -p tcp depending on the port type; --dport and the IP are also variable, check with docker inspect).

If you don’t have this rule, containers will see the docker0 bridge IP as the source of incoming requests instead of the client’s real IP. My example makes a STUN server return the proper client IP instead of 172…
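A sketch of constructing that rule for a container on the default bridge (the container name stun is a placeholder; the inspect format shown only works for the default bridge network):

```shell
# Look up the container's IP, then DNAT incoming UDP 3478 directly to it,
# so the container sees the real client source address instead of docker0's IP
CONTAINER_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' stun)
sudo iptables -t nat -A PREROUTING ! -i docker0 -p udp --dport 3478 \
  -j DNAT --to-destination "${CONTAINER_IP}:3478"
```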

I don’t know, maybe this is no longer relevant, but when I create /etc/docker/daemon.json with this content:

{"iptables": false}

And restart Docker with sudo systemctl restart docker, it starts to work without any additional effort, so the ports are no longer available to the world.
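To confirm from the outside, the nmap probe from the original report can be repeated (example.com and 15672 are the placeholders used above; with the fix applied the port should report filtered or closed rather than open):

```shell
nmap -Pn example.com -p 15672
```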

So the question is: am I missing something, or is this just fine?

I’m going to close this. Docker includes support for a DOCKER-USER chain through which all traffic is configured to pass. This is where rules can be added that won’t be touched by Docker.

Thanks!

In my case I wanted to only allow a specific IP to connect to the exposed port. I’ve managed to do this with this rule.

It drops all connections to port <Port> if source IP is not <RemoteIP>. I suppose that if you would want to completely block all connections, then simply remove the ! -s <RemoteIP> bit.

iptables -I PREROUTING 1 -t mangle ! -s <RemoteIP> -p tcp --dport <Port> -j DROP
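Filled in with illustrative values (203.0.113.10 and MySQL’s port 3306 are examples only, not from the thread):

```shell
# Drop TCP traffic to port 3306 unless it comes from 203.0.113.10
sudo iptables -I PREROUTING 1 -t mangle ! -s 203.0.113.10 -p tcp --dport 3306 -j DROP
```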

Ufw only sets things in the filter table. Basically, the Docker traffic is diverted before that and goes through the nat table, so ufw is basically useless in this case; if you want to drop the traffic for a container, you need to add rules in the mangle/nat table.

http://cesarti.files.wordpress.com/2012/02/iptables.gif

Also planning to abstract port forwarding logic and provide an implementation that uses IPVS NAT instead of iptables. This would mean dockerd is not poking holes in the firewall. It would be opt-in to not break existing users.

@cpuguy83 I don’t think anyone is relying on such an extremely unexpected behavior, and even if they are relying on it unknowingly, they shouldn’t be. I would argue this is one of those cases where it’s definitely okay to break the current behavior, when the current behavior can be considered a blatant bug by any reasonable definition: why would a program (e.g. Docker) punch holes in the firewall?!

It should not be opt-in IMO, just fix it as a default. This is the sort of thing that plagues many people without them even being aware of it, therefore it’s precisely the kind of thing you want to have ‘fixed by default’, so to speak.

I created a discussion about what’s going on with custom iptables rules (whether created through ufw or manually) and what we can do to improve that. It’s available here:

Does anyone know if it’s possible to somehow use the new DOCKER-USER chain (without having to set the --iptables=false launch option) to get it to play nicely with ufw (i.e. the rules set up in ufw are respected and exposing a port with Docker doesn’t mean it’ll bypass ufw)?

@mikehaertl I’m surprised too. I just gave it a try and it worked. I’ll be watching it over the next few days and report back. Just to be sure, can you try an install from scratch with the versions I have reported above?

All I did was update my /etc/default/docker to have DOCKER_OPTS="--iptables=false" and setup basic UFW rules. Nothing complicated.

@cpuguy83 ufw is the standard firewall on Ubuntu which is probably used on tens of thousands of machines. Many developers will simply trust ufw. They will not expect that by default docker messes up their firewall and basically completely bypasses it.

Maybe a doc change is enough. But on the other hand, there’s nothing worse than a hacked server due to an open firewall, e.g. on MySQL port 3306. If Docker is taking potential security issues seriously, it should be far-sighted. Thus maybe --iptables=false should really become the default.

Hi, any update on this? I can’t find any official source on how to fix this. Currently I have a simple setup like:

/etc/default/ufw: DEFAULT_FORWARD_POLICY="ACCEPT"
/etc/default/docker: DOCKER_OPTS="--iptables=false"

ufw enable
ufw allow 22/tcp
ufw deny 80/tcp
ufw reload

host# docker run -it --rm -p 80:8000 ubuntu bash
container# apt-get update
container# python3 -m http.server

  1. I can reach the Internet from the container
  2. The Internet can reach the container via public-address:80

Am I missing something here? 10x

Hi @rubot

I found your typo: the port of jwilder/whoami is 8000, not 80.

docker run --rm -p 8000:80 jwilder/whoami

should be

docker run --rm -p 9999:8000 jwilder/whoami

curl dev:9999

Thanks!

Hi @rubot

At the beginning of this thread, @Soulou has a comment https://github.com/moby/moby/issues/4737#issuecomment-38044320

Ufw only sets things in the filter table. Basically, the Docker traffic is diverted before that and goes through the nat table, so ufw is basically useless in this case; if you want to drop the traffic for a container, you need to add rules in the mangle/nat table.

http://cesarti.files.wordpress.com/2012/02/iptables.gif

ufw allow 80

This means all published container services on port 80 are exposed to the public by default.

Maybe I misunderstand. But to be honest, that’s exactly what I would expect. And I think that’s the root of what this issue is all about. Why would you

  1. publish the container port to the host and then
  2. open this port in your firewall to the outside

if you don’t want to make the service accessible? If you really don’t want that, then you’d probably map the container port to some other port on the host that is denied from outside by ufw.

Hi @tsuna, thank you for your opinion.

In the case of using private IP addresses or ethernet cards, in my opinion it’s hard to say which solution is better. It depends on our requirements or network environments.

In some cases, it’s better to use ethernet cards to filter traffic. In our case, we have a complex network environment. We also don’t want all public/private networks to access the published container services, but only specific public/private IP addresses. So I use IP ranges in my solution. And people can easily modify these IP ranges to meet their requirements, including using ethernet cards.

But regarding ufw-user-input, I’ll keep my opinion unless we are using an older version of UFW which doesn’t support ufw route.

For example, if we were already using the following command to allow port 80 on the host:

ufw allow 80

This means all published container services on port 80 are exposed to the public by default. Maybe that’s not what we want.

I personally prefer using ufw-user-forward; I think this can prevent me from inadvertently exposing services that shouldn’t be exposed.

I found it too, and preferred not adding those 9 rules pertaining to the RFC 1918 address space because I don’t see the value. I felt better just dropping traffic originating from the external interface.

The only notable difference is that the workaround I used ties into the ufw-user-input chain whereas that one ties into ufw-user-forward. In my case the ufw-user-forward chain is empty, while ufw-user-input contains rules from my regular ufw config (e.g. open port 80/443 for nginx, 22 for SSH, etc.). So I felt it was better to tie into ufw-user-input.

ufw is an iptables manager. iptables rules need to be applied at every boot, or any time iptables is flushed. ufw stores its own rules and applies them when ufw is started.

Should we use this command verbatim?

This was a suggested command, I’m not 100% positive that this is the exact command you want to run. But basically you want to add a rule to DOCKER-USER which hands off the traffic to one of ufw’s chains.

Do we do this one time or on each host reboot?

Managing DOCKER-USER is up to the user. iptables rules are not persistent, so you’ll need to make sure the rules get applied any time the table is flushed (e.g. on reboot).

Is it safe to run this command repeatedly (eg during each deploy)? Or is it meant to be only run once? If once, how do we check if the required rule is already in place?

Do not run it repeatedly; it will just add unnecessary overhead.

Is the command above the only change needed, or do we set any config files/other settings?

The command would add the rule to iptables, which is what needs to be done. How you get it to iptables and make sure it is applied is up to you and would likely require some configuration file somewhere.
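One way to answer the “is the rule already in place” question is to guard the insert with iptables -C, which exits non-zero when no matching rule exists (a sketch, not an official recommendation):

```shell
# Insert the hand-off to ufw's input chain only if it is not already present
sudo iptables -C DOCKER-USER -j ufw-user-input 2>/dev/null \
  || sudo iptables -I DOCKER-USER 1 -j ufw-user-input
```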

On Sat, May 26, 2018 at 3:37 PM Tom J notifications@github.com wrote:

@cpuguy83 https://github.com/cpuguy83 Can you please elaborate in more detail about the recommended fix?

What I’ve see from you is:

iptables -I DOCKER-USER 1 -j ufw-user-input

  1. Should we use this command verbatim?
  2. Do we do this one time or on each host reboot?
  3. Is it safe to run this command repeatedly (eg during each deploy)? Or is it meant to be only run once? If once, how do we check if the required rule is already in place?
  4. After we run it, do we use ufw commands as usual (eg ufw allow 80/tcp) and it will work as expected?
  5. Is the command above the only change needed, or do we set any config files/other settings?

Thank you. Having clear guidance would be very helpful.


So here’s what I’m thinking:

Solution 1: We can add a configuration to the daemon like --iptables-insert-after=<some chain>

Solution 2: Have a dedicated chain (e.g. docker-user, or docker-pre) which we always insert after.

Solution 3: In addition to solution 2, include whatever chain name ufw uses for its early-on chains (can’t remember what they are off the top of my head).

Another workaround:

  1. Standard first step: add { "iptables": false } to /etc/docker/daemon.json and restart the docker service: sudo service docker restart
  2. Allow all Docker networks. By default I am allowing the docker0 network: ufw allow in on docker0
  3. Check bridge networks (ifconfig | grep br-) and add all of them: ufw allow in on br-12341234
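Steps 2 and 3 could be scripted along these lines (a sketch; assumes iproute2’s ip command and that custom Docker bridges are named br-*):

```shell
# Allow traffic in on the default bridge and every custom bridge network
sudo ufw allow in on docker0
for br in $(ip -o link show | awk -F': ' '{print $2}' | grep '^br-'); do
  sudo ufw allow in on "$br"
done
```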

Could somebody test this workaround?

@mikehaertl Here are my test results (note that I did not set up UFW, as our main challenge is figuring out where the iptables rules go wrong):

This is the script I have used to test things:

#!/usr/bin/env bash

# when running with --iptables=false, add this NAT rule manually:
# iptables -t nat -A POSTROUTING ! -o docker0 -s 172.19.0.0/16 -j MASQUERADE

# start container
docker network create --subnet=172.19.0.0/16 nginx-net
docker run -d -p 2000:80 --name nginx1 --net=nginx-net nginx:stable-alpine
docker run -d -p 3000:80 --name nginx2 --net=nginx-net nginx:stable-alpine

echo

# check external connectivity
docker exec nginx1 ping -c 2 google.com
echo
docker exec nginx2 ping -c 2 google.com
echo

# check cross-container connectivity
docker exec nginx1 ping -c 2 nginx2
echo
docker exec nginx2 ping -c 2 nginx1
echo

# cleanup
docker rm -f nginx1 nginx2
docker network rm nginx-net

Output for --iptables=true (default)

2506e33026df52e7b2d26100e59742cd4f46a28e4770d331bce5aa816d3696f3
5a2d89aea6668b0eed44cca201c09e223380867f99067d1f9be6cb25bcd0cc74
a534949a4ef89f1acc2f84563d7a415e3f4f5866b5f8b83447020cde39f86311

PING google.com (216.58.203.78): 56 data bytes
64 bytes from 216.58.203.78: seq=0 ttl=61 time=25.570 ms
64 bytes from 216.58.203.78: seq=1 ttl=61 time=25.707 ms

--- google.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 25.570/25.638/25.707 ms

PING google.com (216.58.203.78): 56 data bytes
64 bytes from 216.58.203.78: seq=0 ttl=61 time=25.593 ms
64 bytes from 216.58.203.78: seq=1 ttl=61 time=25.685 ms

--- google.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 25.593/25.639/25.685 ms

PING nginx2 (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.19.0.3: seq=1 ttl=64 time=0.086 ms

--- nginx2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.078/0.086 ms

PING nginx1 (172.19.0.2): 56 data bytes
64 bytes from 172.19.0.2: seq=0 ttl=64 time=0.119 ms
64 bytes from 172.19.0.2: seq=1 ttl=64 time=0.087 ms

--- nginx1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.087/0.103/0.119 ms

nginx1
nginx2

#### Nginx Access Log Entry
172.16.1.1 - - [14/Jul/2016:13:44:31 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/601.7.6 (KHTML, like Gecko) Version/9.1.2 Safari/601.7.6" "-"

Output for --iptables=false (after masquerading)

3046d89c4aba842c5a18be923706e21b9acf3b2992bcf83028b21f97abe64877
4bbf64ff3c942d603276eb5c17530e5b76a0569e7e43903bfb76241d129c7745
7f7bb78e29039b547f47e01307f66c689c0b2530ad48034b114103e458993b76

PING google.com (216.58.203.78): 56 data bytes
64 bytes from 216.58.203.78: seq=0 ttl=61 time=25.761 ms
64 bytes from 216.58.203.78: seq=1 ttl=61 time=25.336 ms

--- google.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 25.336/25.548/25.761 ms

PING google.com (216.58.203.78): 56 data bytes
64 bytes from 216.58.203.78: seq=0 ttl=61 time=26.078 ms
64 bytes from 216.58.203.78: seq=1 ttl=61 time=27.104 ms

--- google.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 26.078/26.591/27.104 ms

PING nginx2 (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.063 ms
64 bytes from 172.19.0.3: seq=1 ttl=64 time=0.072 ms

--- nginx2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.063/0.067/0.072 ms

PING nginx1 (172.19.0.2): 56 data bytes
64 bytes from 172.19.0.2: seq=0 ttl=64 time=0.045 ms
64 bytes from 172.19.0.2: seq=1 ttl=64 time=0.092 ms

--- nginx1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.045/0.068/0.092 ms

nginx1
nginx2

#### Nginx Access Log Entry
172.19.0.1 - - [14/Jul/2016:13:37:48 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/601.7.6 (KHTML, like Gecko) Version/9.1.2 Safari/601.7.6" "-"

So, I guess we do have a working solution of “--iptables=false + masquerading”. But this causes trouble with the nginx proxy (as reported by @hbokh as well), which no longer reports the correct client IP address and instead reports the gateway IP address of the bridge interface (nginx-net in the above example). Can somebody help solve the final puzzle in the iptables configuration?

As a sysadmin I fully agree with @mikehaertl. What I (we?) need is also a way to block specific misbehaving IP-addresses to ports opened up by Docker. That used to be easy with UFW and without Docker, but with Docker it is not. Workarounds like setting up a HAProxy-container in front or even “Docker Firewall Framework” (https://github.com/irsl/dfwfw) should not be necessary on default installations IMHO.

@mikehaertl Yep, definitely agree. I still don’t really know how to use UFW with Docker properly (after 5+ months of going through / opening issues etc.). Will try your solution later today though 👍.