nginx-proxy: Can't proxy to containers running in host network mode

When using nginx-proxy to proxy to a container running in host networking mode, I assume I also have to run nginx-proxy in host network mode (although I've tried it both ways without success), but I can't get it to work. Here's a sample compose file using the "web" image from the test suite:

version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy:test
    network_mode: "host"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./lib/ssl/dhparam.pem:/etc/nginx/dhparam/dhparam.pem:ro

  web1:
    image: web
    expose:
      - "81"
    environment:
      WEB_PORTS: 81
      VIRTUAL_HOST: web1.nginx-proxy.local

  web2:
    image: web
    expose:
      - "82"
    network_mode: "host"
    environment:
      WEB_PORTS: 82
      VIRTUAL_HOST: web2.nginx-proxy.local

After bringing this up with docker-compose -f test_network_mode_host.yml up -d, I try to curl each vhost:

$ curl localhost:80/port -H "Host: web1.nginx-proxy.local"
answer from port 81

$ curl localhost:80/port -H "Host: web2.nginx-proxy.local"
<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.13.8</center>
</body>
</html>

I can, however, get to web2 directly via localhost:

curl 127.0.0.1:82/port
answer from port 82

The problem seems to be in the upstream section for web2, which only contains "server 127.0.0.1 down;". Here's the full /etc/nginx/conf.d/default.conf:

# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
  default $http_x_forwarded_proto;
  ''      $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
  default $http_x_forwarded_port;
  ''      $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
  default upgrade;
  '' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
  default off;
  https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';
access_log off;
resolver 10.0.2.3;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
	server_name _; # This is just an invalid value which will never trigger on a real hostname.
	listen 80;
	access_log /var/log/nginx/access.log vhost;
	return 503;
}
# web1.nginx-proxy.local
upstream web1.nginx-proxy.local {
				## Can be connect with "test_sneakernet" network
			# test_web1_1
			server 172.18.0.3:81;
}
server {
	server_name web1.nginx-proxy.local;
	listen 80 ;
	access_log /var/log/nginx/access.log vhost;
	location / {
		proxy_pass http://web1.nginx-proxy.local;
	}
}
# web2.nginx-proxy.local
upstream web2.nginx-proxy.local {
				## Can be connect with "host" network
		# test_web2_1
			server 127.0.0.1 down;
}
server {
	server_name web2.nginx-proxy.local;
	listen 80 ;
	access_log /var/log/nginx/access.log vhost;
	location / {
		proxy_pass http://web2.nginx-proxy.local;
	}
}

Am I missing something in setting this up, or is it just not working the way it's supposed to?

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 21
  • Comments: 36

Most upvoted comments

Same issue… having trouble getting this to work with Home Assistant (which needs network_mode: host to do some UPnP discovery).

I'm currently skirting around this "bug"/"limitation" by using socat. I was previously using socat to handle redirection to a Raspberry Pi with Home Assistant on it, but since I've been consolidating a few things, I decided to move hass into a container. Similar to the OpenHAB cases in this thread, it's beneficial to use host-mode networking. Anyway, the short version of my solution is to use the following in one of my docker-compose.yml files:

  hass-socat:
    image: alpine/socat:latest
    container_name: hass-socat
    entrypoint: "socat tcp-listen:8122,fork,reuseaddr tcp-connect:192.168.1.110:8123"
    depends_on:
      - nginx-proxy
    environment:
      - LETSENCRYPT_HOST=home.example.com
      - LETSENCRYPT_EMAIL=email@example.com
      - VIRTUAL_PORT=8122
      - VIRTUAL_HOST=home.example.com
    network_mode: bridge
    ports:
      - 8122:8122
    restart: always

Here homeassistant is listening on the host in another stack on port 8123. The socat container handles the nginx/letsencrypt binding with this project (more or less how I had it working when the target was external); now it just points at the host IP of the Docker instance and uses a different port for its nginx virtual host. Works like a charm.
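A quick sanity check of this setup (hostnames, IP and ports taken from the snippet above; adjust to your environment): hit the socat listener directly from the Docker host first, then go through nginx-proxy with the virtual host header. With the letsencrypt companion in place the second request may answer with an HTTPS redirect rather than the page itself.

$ curl http://localhost:8122/                             # socat -> 192.168.1.110:8123 (Home Assistant)
$ curl -H "Host: home.example.com" http://localhost/      # nginx-proxy -> socat -> Home Assistant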

Had the same issue with Home Assistant running with network_mode: host; I couldn't get the upstream entry pointed at the correct IP and port.

I ended up creating a configuration file at /etc/nginx/conf.d/your.domain.com.conf specific to that host (your.domain.com:8123). In my docker-compose file I did not include VIRTUAL_HOST, and I mounted /etc/nginx/conf.d as a volume outside the container as well.

version: '3'
services:
  homeassistant:
    container_name: homeassistant
    image: homeassistant/home-assistant:stable
    privileged: true
    volumes:
      - /path/to/configs/hass:/config
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
      - LETSENCRYPT_HOST=your.domain.com
      - LETSENCRYPT_EMAIL=your@email.com
    network_mode: host

/etc/nginx/conf.d/your.domain.com.conf

# your.domain.com
upstream your.domain.com {
	# Cannot connect to network of this container
	server 10.0.0.4:8123;  # Host IP address and port
}
server {
	server_name your.domain.com;
	listen 80;
	access_log /var/log/nginx/access.log vhost;
	# Do not HTTPS redirect Let's Encrypt ACME challenge
	location /.well-known/acme-challenge/ {
		auth_basic off;
		allow all;
		root /usr/share/nginx/html;
		try_files $uri =404;
		break;
	}
	location / {
		return 301 https://$host$request_uri;
	}
}
server {
	server_name your.domain.com;
	listen 443 ssl http2;
	access_log /var/log/nginx/access.log vhost;
	ssl_session_timeout 5m;
	ssl_session_cache shared:SSL:50m;
	ssl_session_tickets off;
	ssl_certificate /etc/nginx/certs/your.domain.com.crt;
	ssl_certificate_key /etc/nginx/certs/your.domain.com.key;
	ssl_dhparam /etc/nginx/certs/your.domain.com.dhparam.pem;
	ssl_stapling on;
	ssl_stapling_verify on;
	ssl_trusted_certificate /etc/nginx/certs/your.domain.com.chain.pem;
	add_header Strict-Transport-Security "max-age=31536000" always;
	include /etc/nginx/vhost.d/default;
	location / {
		proxy_pass http://your.domain.com;
	}
}

This issue seems to have gotten stale, but I am running into this as well trying to get home assistant working properly. Without host networking mode, Hass can’t find things like my Plex server or Google homes.

I can run my nginx in bridge mode and have it proxy to a container in host mode. However, I've had to alter the template as I describe here:

https://github.com/jwilder/nginx-proxy/issues/832

I reported it quite a while ago, but I haven’t heard anything yet on a native solution.

Thanks @kariudo for the example. I modified it to work with non-bridge networks; maybe that's interesting for @anLizard as well. The secret ingredient is host.docker.internal:

    socat:
      image: alpine/socat:latest
      entrypoint: "socat tcp-listen:8122,fork,reuseaddr tcp-connect:host.docker.internal:8123"
      ports:
        - 8122:8122
      expose:
        - 8122
      restart: always
      extra_hosts:
        - "host.docker.internal:host-gateway"
      environment:
        - "VIRTUAL_HOST=hass.example.com"
        - "LETSENCRYPT_HOST=hass.example.com"
        - "VIRTUAL_PORT=8122"
      networks:
        internalbr:
          ipv4_address: 10.123.0.16
        default:

See also https://docs.docker.com/desktop/networking/#i-want-to-connect-from-a-container-to-a-service-on-the-host
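If this doesn't work for you, one thing worth verifying is that the host-gateway mapping actually ended up in the container's /etc/hosts (the host-gateway value requires Docker 20.10 or newer). Using the service name from the snippet above; the exact IP depends on your host:

$ docker-compose exec socat cat /etc/hosts | grep host.docker.internal
172.17.0.1      host.docker.internal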

@anLizard, rather than clogging up this issue, here is a more complete example of a docker-compose.yml showing how I handle this with nginx-proxy:

https://gist.github.com/kariudo/0e2531ef8165a6f8650cc81df56083a7

I can’t attest to other environments, but I can confirm this works for me quite well.

Thanks @Lif3line and @Kami for the first revision. This works like a charm for OpenHAB also.

I've modified nginx.tmpl as you commented:

  {{ if eq $host "fully.qualified.domain.name" }}
        server <server ip>:8443;
  {{ else }}
        # Cannot connect to network of this container
        server 127.0.0.1 down;
  {{ end }}

In my case:

  • “fully.qualified.domain.name” is “openhab.domain.tld”, the same as specified in docker-compose.yml for openhab
  • "server ip" is 172.17.0.1, as you commented; ifconfig shows it as the docker0 IP address

The docker-compose.yml for openhab is:

version: "3.8"

services:
  openhab:
    image: "openhab/openhab:3.0.1-debian"
    container_name: "openhab"
    network_mode: host
    restart: unless-stopped
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "./addons:/openhab/addons"
      - "./conf:/openhab/conf"
      - "./userdata:/openhab/userdata"
    environment:
      OPENHAB_HTTP_PORT: "8080"
      OPENHAB_HTTPS_PORT: "8443"
      EXTRA_JAVA_OPTS: "-Duser.timezone=Europe/Andorra"
      USER_ID: "997"                                   # value returned by 'id -u openhab'
      GROUP_ID: "997"                                  # value returned by 'id -g openhab'
      # My language is Catalan
      LC_ALL: "ca_ES.UTF-8"
      LANG: "ca_ES.UTF-8"
      LANGUAGE: "ca_ES.UTF-8"
      # NGINX-PROXY ENVIRONMENT VARIABLES: UPDATE ME
      VIRTUAL_HOST: "openhab.domain.tld"
      VIRTUAL_PORT: "8080"
      LETSENCRYPT_HOST: "openhab.domain.tld"
      LETSENCRYPT_EMAIL: "user@domain.com"
      # /END NGINX-PROXY ENVIRONMENT VARIABLES

That's all. Maybe this can help other OpenHAB users.

Thanks @Kami, your suggested solution was excellent. I had the same use-case as others; wanting to run Home Assistant with network_mode: host. In case anyone wants to replicate:

I found nginx fell over with the originally suggested patch, since it led to two server entries, but that's only a minor change:

diff --git a/nginx.tmpl b/nginx.tmpl
index 07e2b50..4c9c851 100644
--- a/nginx.tmpl
+++ b/nginx.tmpl
@@ -196,8 +196,12 @@ upstream {{ $upstream_name }} {
 					{{ template "upstream" (dict "Container" $container "Address" $address "Network" $containerNetwork) }}
 				{{ end }}
 			{{ else }}
-				# Cannot connect to network of this container
-				server 127.0.0.1 down;
+				{{ if eq $host "sub.domain.com" }}
+					server <docker internal ip>:8123;  
+				{{ else }}
+					# Cannot connect to network of this container
+					server 127.0.0.1 down;
+				{{ end }}
 			{{ end }}
 		{{ end }}
 	{{ end }}

The <docker internal ip> needs to be whatever ifconfig shows for Docker; normally that's under the heading docker0: or similar.
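For reference, two ways to look that address up on the Docker host (172.17.0.1 below is just the common default; your value may differ):

$ ip -4 addr show docker0 | grep inet
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
$ docker network inspect bridge --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'
172.17.0.1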

I had difficulty rebuilding the reverse proxy image from source, as well as with using the command: keyword in docker-compose to apply the patch at start-up, so I settled on building on top of the reverse proxy image:

FROM jwilder/nginx-proxy:alpine

# COPY nginx.tmpl nginx.tmpl # Alternative if you don't want to mess with patches

RUN apk --update add git
COPY hass_fix.patch hass_fix.patch
RUN git apply hass_fix.patch

where hass_fix.patch is the above patch file and must reside in the same directory as this Dockerfile.

The process was then:

  • docker build . on the folder with hass_fix.patch and the Dockerfile
  • docker tag <hash> host_mode_jwilder
    • Just to make it easier to reference later
  • Update my reverse_proxy image to run the new local host_mode_jwilder image
  • Updated Home Assistant image to run with network_mode: host
  • Everything else remained the same
    • e.g. Home Assistant VIRTUAL_HOST and LETSENCRYPT_HOST environment variables
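For anyone following along, the first two steps above can be collapsed into one command by tagging at build time (same example tag as in the list; adjust as needed):

$ docker build -t host_mode_jwilder .
$ docker images host_mode_jwilder        # confirm the patched image is available locally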

@YouDontGitMe I don't think this will work correctly due to how included config files are handled in nginx (I also tested a similar approach/workaround first).

This will only work if you have a single vhost served by nginx (aka your.domain.com).

If you have multiple vhosts, nginx will serve the certificate for your.domain.com for all the other vhosts as well, and it won't work, because the server block for your.domain.com will take precedence over the server blocks in default.conf, which is generated from the template in this repo.


By default, nginx.tmpl will generate an entry like this for a container which is using host networking:

upstream sub.domain.com {
				# Cannot connect to network of this container
				server 127.0.0.1 down;
}

But we want something like this:

upstream sub.domain.com {
				# home_assistant
				# Keep in mind that this needs to be the internal IP of the server
				# where the containers are running
				server <internal server ip>:8123;
				# Cannot connect to network of this container
				server 127.0.0.1 down;
}

Right now, my workaround involves a custom Dockerfile + Procfile for the nginx-proxy image, which uses sed to manipulate the default.conf entry for the vhost where Docker host networking is used.

This approach is definitely on the hacky side and nicer workarounds are possible (e.g. add some if statements to the template file itself, or add support for environment variables to manage more complex setups), but it works.

Here is my Dockerfile:

FROM jwilder/nginx-proxy:alpine

# Copy over custom config
COPY nginx.conf /etc/nginx/nginx.conf

COPY uploadsize.conf /etc/nginx/conf.d/uploadsize.conf

COPY fix-ha-vhost.sh /app/fix-ha-vhost.sh
COPY Procfile /app/Procfile

RUN chmod +x /app/fix-ha-vhost.sh

Procfile:

dockergen: docker-gen -watch -notify "/app/fix-ha-vhost.sh ; nginx -s reload" /app/nginx.tmpl /etc/nginx/conf.d/default.conf
nginx: nginx

fix-ha-vhost.sh:

#!/usr/bin/env bash
# Errors should not be fatal
set +e
# If the upstream server line is already present, do nothing (keeps the script
# idempotent across docker-gen reloads); otherwise inject the server line right
# after the "upstream sub.domain.com {" opening in the generated config.
grep '<internal ip>:8123' /etc/nginx/conf.d/default.conf || sed -i 's#upstream sub.domain.com {#upstream sub.domain.com {\n\t\t\t\tserver <internal ip>:8123;#g' /etc/nginx/conf.d/default.conf

EDIT: For completeness' sake, here is also a slightly nicer hack which only relies on a small change to the upstream nginx.tmpl.

diff --git a/nginx.tmpl b/nginx.tmpl
index 07e2b50..5284aa9 100644
--- a/nginx.tmpl
+++ b/nginx.tmpl
@@ -196,6 +196,10 @@ upstream {{ $upstream_name }} {
                                        {{ template "upstream" (dict "Container" $container "Address" $address "Network" $containerNetwork) }}
                                {{ end }}
                        {{ else }}
+                               {{ if eq $host "sub.domain.com" }}
+                               # Hack
+                               server 10.0.0.1:8123;
+                               {{ end }}
                                # Cannot connect to network of this container
                                server 127.0.0.1 down;
                        {{ end }}

I had a similar issue and fixed it using this configuration:

nginx-proxy/docker-compose.yml:


services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

wp1.local/docker-compose.yml

version: '3.7'

services:

  wordpress:
    build: ./
    restart: always
    links:
      - db:mysql
    ports:
      - "80"
    networks:
      - nginx-proxy_default
    environment:
      VIRTUAL_HOST: wp1.local
      VIRTUAL_PORT: 80
    working_dir: /var/www/html

  db:
    image: mysql:5.7
    restart: always
    ports:
      - "33067:3306"
    networks:
      - nginx-proxy_default

networks:
  nginx-proxy_default:
    external: true

Note the network name. I didn't create it manually; it is based on the nginx-proxy default network. I'm on a Mac, and after setting up the containers I added the following line to the end of my hosts file:

127.0.0.1 wp1.local
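With that hosts entry in place, a quick check from the same machine should now route through nginx-proxy to the WordPress container (a fresh WordPress install may answer with a redirect to its setup page rather than a 200):

$ curl -I http://wp1.local/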

Yep same problem here with Home Assistant. Broke my brain to fix it all morning 😃

@neographikal Were you able to find a solution for Home Assistant?

I’m going to chime in as yet another person trying to use Home Assistant with this container. Some services (HomeKit, in my case) don’t work unless the Home Assistant container is running in host networking mode – but doing that completely breaks the reverse proxy.

Same here. I have an OpenHAB container which has to be on the host network, but I still want a proxy in front of it for authentication.