compose: nginx proxy cannot resolve container hostnames (version 2 YAML)

I’ve attempted to migrate my stack to use version 2 docker-compose.yml and have run into a problem with network hostnames not being resolved by nginx.

My stack involves a reverse proxy (nginx + nginx-extras on debian:wheezy) that serves secure content via several other software components, which I won’t go into detail about (see config below).

In version 1, I used the environment variables from Docker links together with a Lua script to insert them into nginx.conf (using nginx-extras). This worked perfectly as a reverse proxy in front of the Docker containers.

In version 2, I am using the hostnames generated by the Docker network. I can successfully ping these hostnames from within the container; however, nginx is unable to resolve them.

2016/05/04 01:23:44 [error] 5#0: *3 no resolver defined to resolve ui, client: 10.0.2.2, server: , request: "GET / HTTP/1.1", host: "localhost"

Here is my current config:

docker-compose.yml:

version: '2'

services:
  # back-end
  api:
    build: .
    depends_on:
      - db
      - redis
      - worker
    environment:
      RAILS_ENV: development
    ports:
      - "3000:3000"
    volumes:
      - ./:/mmaps
      - /var/log/mmaps/api:/mmaps/log
    volumes_from:
      - apidata
    command: sh -c 'rm -rf /mmaps/tmp/pids/server.pid; rails server thin -b 0.0.0.0 -p 3000'

  # background process workers
  worker:
    build: .
    environment:
      RAILS_ENV: development
      QUEUE: "*"
      TERM_CHILD: "1"
    volumes:
      - ./:/mmaps
      - /var/log/mmaps/worker:/mmaps/log
    volumes_from:
      - apidata
    command: rake resque:work

  # front-end
  ui:
    image: magiandev/mmaps-ui:develop
    depends_on:
      - api
    ports:
      - "8080:80"
    volumes:
      - /var/log/mmaps/ui:/var/log/nginx

  # database
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: pewpewpew
    volumes_from: 
      - mysqldata
    volumes:
      - /var/log/mmaps/db:/var/log/mysql

  # key store
  redis:
    image: redis:2.8.13
    user: root
    command: ["redis-server", "--appendonly yes"]
    volumes_from:
      - redisdata
    volumes:
      - /var/log/mmaps/redis:/var/log/redis

  # websocket server
  monitor:
    image: magiandev/mmaps-monitor:develop
    depends_on:
      - api
    environment:
      NODE_ENV: development
    ports:
      - "8888:8888"

  # media server
  media:
    image: nginx:1.7.1
    volumes_from: 
      - apidata
    ports:
      - "3080:80"
    volumes:
      - ./docker/media/nginx.conf:/etc/nginx/nginx.conf:ro
      - /srv/mmaps/public:/usr/local/nginx/html:ro
      - /var/log/mmaps/mediapool:/usr/local/nginx/logs

  # reverse proxy
  proxy:
    build: docker/proxy
    ports:
      - "80:80"
      - "443:443"
    volumes: 
      - /var/log/mmaps/proxy:/var/log/nginx


  apidata:
    image: busybox:ubuntu-14.04
    volumes:
      - /srv/mmaps/public:/mmaps/public
    command: echo api data

  mysqldata:
    image: busybox:ubuntu-14.04
    volumes:
      - /srv/mmaps/db:/var/lib/mysql
    command: echo mysql data

  redisdata:
    image: busybox:ubuntu-14.04
    volumes:
      - /srv/mmaps/redis:/data
    command: echo redis data

  # master data
  # convenience container for backups
  data:
    image: busybox:ubuntu-14.04
    volumes_from:
      - apidata
      - mysqldata
      - redisdata
    command: echo mmaps data

nginx.conf

worker_processes  1;

events {
  worker_connections  1024;
}

http {
  # permanent redirect to https
  server {
    listen         80;
    rewrite        ^ https://$host$request_uri? permanent;
  }

  server {
    listen       443 ssl;
    ssl on;
    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    location / {
      proxy_pass http://ui:80$request_uri;
    }

    location /monitor/ {
      proxy_pass http://monitor:8888$request_uri;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
    }

    location /api/ {
      client_max_body_size 0;
      proxy_pass http://api:3000$request_uri;
    }

    location /files/ {
      client_max_body_size 0;
      proxy_pass http://media:80$request_uri;
    }

    location /mediapool/ {
      proxy_pass http://media:80$request_uri;
      add_header  X-Upstream  $upstream_addr;
      if ($request_uri ~ "^.*\/(.*\..*)\?download=true.*$"){
          set $fname $1;
          add_header Content-Disposition 'attachment; filename="$fname"';
      }
      proxy_pass_request_headers      on;
    }

    error_page   500 502 503 504  /50x.html;

    location = /50x.html {
      root   /var/www;
    }
  }
}

# stay in the foreground so Docker has a process to track
daemon off;

After some reading, I tried using dnsmasq and setting resolver 127.0.0.1 within nginx.conf, but I cannot get this to work:

2016/05/04 01:54:26 [error] 6#0: recv() failed (111: Connection refused) while resolving, resolver: 127.0.0.1:53

Is there a better way to configure nginx to proxy-pass to my containers that works with version 2?

About this issue

  • State: closed
  • Created 8 years ago
  • Reactions: 32
  • Comments: 38 (3 by maintainers)

Most upvoted comments

Ok, found a solution here: http://stackoverflow.com/questions/35744650/docker-network-nginx-resolver And here: https://github.com/docker/docker/issues/22652

Basically, Nginx needs a resolver to resolve hostnames. Installing dnsmasq is one way to accomplish this, but it’s not that straightforward to set up inside a container that’s already running nginx (you need an entrypoint script/supervisord). The superior and recommended way is to use a user-defined network, created with docker network create .... Containers that are part of that network (attached via the --network flag on docker run) get a DNS resolver added to them, available at 127.0.0.11.
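
A rough command-line sketch of that approach (the network name webnet and the proxy image name are made up for illustration; the ui image is taken from the compose file above):

# create a user-defined network; containers attached to it get
# Docker's embedded DNS resolver at 127.0.0.11
docker network create webnet

# attach the containers; "ui" then resolves by name inside the proxy container
docker run -d --network webnet --name ui magiandev/mmaps-ui:develop
docker run -d --network webnet --name proxy -p 80:80 -p 443:443 mmaps-proxy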

More information here: https://docs.docker.com/engine/userguide/networking/

@burnzoire, @thoeni, @MattMcFarland: I think this should work for you guys as well, so we might be able to close this issue!

I’m having the same problem.

I encountered a similar problem. Background: an OpenResty container + Lua + a Redis container.

Problem: within the OpenResty container, Lua scripts needed to make requests to some external web APIs as well as connect to Redis. Before I ran into the Redis connection problem, there was an HTTP connection problem, which I resolved by adding a resolver 114.114.114.114; directive to the http block in nginx.conf. After solving that, I found that Redis still couldn’t be reached; Redis is another container on the same network as the OpenResty container, so resolver 114.114.114.114; didn’t work for it.

Solution: add resolver 127.0.0.11; within the server block instead of the http block, and it worked.
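
A minimal sketch of that fix, using the api service from the original question (the variable in proxy_pass is what forces nginx to resolve the name at request time via the resolver):

server {
  listen 80;
  resolver 127.0.0.11;   # Docker's embedded DNS

  location /api/ {
    # using a variable makes nginx look up "api" per request instead of at startup
    set $api_upstream http://api:3000;
    proxy_pass $api_upstream;
  }
}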

Solution: add resolver 127.0.0.11; within the server block instead of the http block, and it worked.

worked for me too.

        ssl_certificate         /etc/tls/tls.crt;
        ssl_certificate_key     /etc/tls/tls.key;

        resolver                127.0.0.11;

        access_log              /var/log/nginx/access_log.log;

        location / {
                set             $upstream_app homer;
                set             $upstream_port 8080;
                set             $upstream_proto http;
                proxy_pass      $upstream_proto://$upstream_app:$upstream_port;
        }

        location /pihole/ {

@tomeady Does this help: Configure an upstream in Nginx

upstream backend {
    server container1;
}
server {
    location ~ ^/some_url/(.*)$ {
        proxy_pass http://backend/$1;
    }
}
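
Applied to the setup from the question, that might look roughly like this (an untested sketch; note that names in a plain upstream block are resolved once when nginx starts, so the api container has to be up before the proxy):

upstream api_backend {
  server api:3000;
}

server {
  listen 80;

  location /api/ {
    proxy_pass http://api_backend;
  }
}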

With this compose file

version: "2"

services:
  nginx:
    hostname: testnginx
    domainname: testnginx.local
    image: nginx:1.17.5
    volumes:
      - ./nginx/conf.d/:/etc/nginx/conf.d/
    ports:
      - "3080:80"
    networks:
      net:
        aliases:
          - testnginx
  api:
    hostname: testapi
    domainname: testapi.local
    image: hashicorp/http-echo
    command: ["-text", "Hello Docker compose"]
    ports:
      - 5678:5678
    networks:
      net:
        aliases:
          - testapi

networks:
  net:

And this nginx config:

server {
  listen 80;
  location /api/ {
    proxy_pass http://testapi:5678;
  }
}

I can then call the API with curl http://localhost:3080/api/.

I also found a duplicate issue #2925.

To be clear: for now, if you want to do this, use an alias. I’ll talk to the team about this issue and see if we can do something about it; seeing this comment, I think we should.

I’ve come up against this myself these past few weeks and found a really simple solution.

I just create each container with the --net-alias option:

docker create \
   --name MyContainer \
   --hostname ABC123 \
   --net-alias container1 \
   --expose 8080 \
   myreg/container

and this allows nginx to proxy correctly when using:

proxy_pass http://container1:8080;

Solution: add resolver 127.0.0.11; within the server block instead of the http block, and it worked.

This did it for me. For further context, 127.0.0.11 points to Docker’s embedded DNS server.

I use OpenResty, which is basically Nginx with Lua libraries, but it doesn’t work either, not even for localhost. Using the name of a container on the same network works, but the hosts defined in docker-compose as extra_hosts do not resolve (the same happens when writing them into /etc/hosts).

Same problem here. I can ping containers from within the nginx container, but the nginx server cannot resolve them.

It works for me.

Can anyone provide a straightforward step-by-step on this?

links: worked like a charm, but that’s deprecated. You can set up a custom network and reference it for each container you want on that network, and the container name will resolve to the container’s IP.
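
For reference, a minimal compose-file sketch of that suggestion (the network name appnet is made up; the images are reused from the earlier example):

version: '2'

services:
  proxy:
    image: nginx:1.17.5
    ports:
      - "80:80"
    networks:
      - appnet
  api:
    image: hashicorp/http-echo
    command: ["-text", "hello"]
    networks:
      - appnet

networks:
  appnet:

nginx inside the proxy container can then use proxy_pass http://api:5678; directly, since both services sit on the same user-defined network.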