gluetun: Help: Name resolution from Gluetun or stack sharing container to other containers on network does not work

TL;DR: Unable to resolve other containers on the same user-defined network using the built-in Docker DNS.

  1. Is this urgent?

    • Yes
    • No
  2. What VPN service provider are you using?

    • PIA
  3. What’s the version of the program?

    You are running on the bleeding edge of latest!

  4. What are you using to run the container?

    • Docker Compose
  5. Extra information

Logs:

Working example from container: alpine

$ docker exec -it alpine /bin/sh
/ # host jackett
jackett has address 172.18.0.2
/ # host gluetun
gluetun has address 172.18.0.5
/ # 

Example from container: gluetun where dns fails

$ docker exec -it gluetun /bin/sh
/ # host sonarr
Host sonarr not found: 3(NXDOMAIN)
/ # host jackett
Host jackett not found: 3(NXDOMAIN)
/ # host google.com
google.com has address 172.217.14.238

Configuration file:

version: "3.7"
services:
  gluetun:
    image: qmcgaw/private-internet-access
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    networks:
      - frontend
    ports:
      - 8000:8000/tcp # Built-in HTTP control server
      - 8080:8080/tcp #Qbittorrent
    # command:
    volumes:
      - /configs/vpn:/gluetun
    environment:
      # More variables are available, see the readme table
      - VPNSP=private internet access

      # Timezone for accurate logs times
      - TZ=America/Los_Angeles

      # All VPN providers
      - USER=username

      # All VPN providers but Mullvad
      - PASSWORD=pwd

      # All VPN providers but Mullvad
      - REGION=CA Vancouver
      
      - PORT_FORWARDING=on
      - PORT_FORWARDING_STATUS_FILE="/gluetun/forwarded_port"
      - PIA_ENCRYPTION=normal
      - GID=1000
      - UID=1000
      - FIREWALL_OUTBOUND_SUBNETS=192.168.1.0/24
    restart: always
    
  qbittorrent:
    image: linuxserver/qbittorrent
    container_name: qbittorrent
    network_mode: "service:gluetun"
    volumes:
      - /configs/qbt:/config
      - /media:/media
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
      - UMASK_SET=000
    restart: unless-stopped

  jackett:
    image: linuxserver/jackett
    container_name: jackett
    networks:
      - frontend
    ports:
      - 9117:9117/tcp #Jackett
    volumes:
      - /configs/jackett:/config
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
      - UMASK_SET=000
    restart: unless-stopped

  alpine:
    image: alpine
    networks:
      - frontend
    container_name: alpine
    command: tail -f /dev/null

networks:
  frontend:
    name: custom_net
    ipam:
      config:
        - subnet: "172.18.0.0/16"     

Host OS: Ubuntu 20.04 LTS

Hello, I am trying to set up my containers so that I can reach them by name. My setup consists of Gluetun on the “frontend” network. Qbittorrent shares the network stack with Gluetun. Two additional containers exist, Jackett and alpine. As the logs show, from the alpine (test) container I am able to resolve the names of the jackett and gluetun containers.

I am, however, unable to do this the other way around, i.e. resolve the names of jackett or alpine from the gluetun container. I am fairly sure this has something to do with the DoT setup, but I have tried various things to no avail.

192.168.1.0/24 is my local LAN. I left it in there so that traffic can still reach local LAN services. Any assistance would be appreciated.

About this issue

  • Original URL
  • State: open
  • Created 4 years ago
  • Reactions: 9
  • Comments: 71 (26 by maintainers)

Most upvoted comments

So I figured this would help in understanding the overall situation. I should add that “Can resolve internet names using UDP/53 – NO” is a good thing.


Thanks for closing that dusty issue!

Although actually I’m actively working on this at https://github.com/qdm12/dns/tree/v2.0.0-beta, which will soon replace Unbound and allow for such resolution 😉 It should close a bunch of related issues as well; let’s keep it open!

I have a (convoluted) solution in mind which relies ‘less’ on the OS:

  1. Detect the Docker DNS address at start, i.e. 127.0.0.11
  2. Run a DNS UDP proxy (coded from scratch in Go) listening on port 53 so that it can hook into the queries and:
    • resolve local hostnames (no dot .) using 127.0.0.11 (and also check the returned address is private)
    • otherwise proxy the query to unbound listening on port 1053 for example

I’m still playing around with /etc/resolv.conf and options, as well as searching through Unbound’s configuration options, for now though. But otherwise the solution above solves the problems, and could be a first step towards moving away from Unbound (#137)

@networkprogrammer Thanks for the suggestions! Let me change that interface Unbound is listening on to the default interface, having a DNS over TLS server through the tunnel is definitely interesting 😄

@qdm12 I think this issue could be resolved with an additional forward-zone in unbound.conf.

It would use the name of the Docker network as the domain and set forward-addr to Docker’s internal DNS at 127.0.0.11, like this example for a compose file mystack/docker-compose.yml:

forward-zone:
  name: "mystack_default"
  forward-addr: 127.0.0.11

This only requires that all communication between containers resolves using the docker-internal FQDN (in our example compose file above, if an application has to connect to a redis container, instead of setting CACHE_HOST: redis it would be CACHE_HOST: redis.mystack_default).

@networkprogrammer if you want to ask questions and/or propose changes, I have https://github.com/qdm12/dns/pull/58 which implements DoT and DoH DNS servers for a bunch of providers to replace Unbound completely. It’s working and can already be imported from one Go project into another, but I want to finish a few things first, like caching, DNS blocking and more unit tests.

For a gentler introduction, I suggest my < 300-line DoH DNS server gist with its Reddit post.

Anyway, I’ll ping you once I do another PR to address this issue. I’m thinking of an option to map 0-dot queries (e.g. qbittorrent instead of github.com) to the default DNS server, possibly with a check on the result to verify it’s a private IP address, otherwise falling back on the DoT/DoH server route.

Hi @denizdogan, your shady container does not really have a network of its own: it shares the network of the vpn container. vpn and shady can talk to each other on localhost, and clean should use the name vpn to reach shady.

Here is what I mean. vpn and shady have the same IP (I changed the network since I have 172.18/16 in use):

$ docker exec -it vpn ip a show dev eth0
119: eth0@if120: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:14:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.20.0.3/16 brd 172.20.255.255 scope global eth0
       valid_lft forever preferred_lft forever

$ docker exec -it shady ip a show dev eth0
119: eth0@if120: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:14:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.3/16 brd 172.20.255.255 scope global eth0
       valid_lft forever preferred_lft forever

clean has a different IP:

$ docker exec -it clean ip a show dev eth0
117: eth0@if118: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:14:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.2/16 brd 172.20.255.255 scope global eth0
       valid_lft forever preferred_lft forever

$ docker exec -it clean ping vpn
PING vpn (172.20.0.3): 56 data bytes
64 bytes from 172.20.0.3: seq=0 ttl=63 time=0.123 ms
64 bytes from 172.20.0.3: seq=1 ttl=63 time=0.078 ms

To further validate this, I installed curl on clean and ran a shady nginx webserver:

$ docker exec -it clean sh -c "apk update && apk add curl"

  shady-nginx:
    image: nginx:alpine
    container_name: shady-nginx
    network_mode: "service:vpn"

Then from clean I can get to the webserver using the name vpn:

$ docker exec -it clean curl -Ik vpn
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Sat, 11 Jun 2022 04:33:54 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:26:06 GMT
Connection: keep-alive
ETag: “61f0168e-267”
Accept-Ranges: bytes

Hope that helps

Just a tiny update, I fiddled with miekg/dns this weekend and am now writing a DNS over HTTPS upstream server for another repository, but it should be imported into Gluetun and replace Unbound soon. When that’s done, I should be able to ‘hack into it’ and capture container hostnames somehow and send them to another DNS as we wanted to do (so a bit of a DNS proxy). Might still be a few weeks away but it’s in progress.

That ~makes~ might make sense. If you set DNS_KEEP_NAMESERVER=on, as it states, the nameserver in /etc/resolv.conf is kept (code).

Because by default (since a recent commit indeed), gluetun is allowed to communicate to its local Docker network subnet, it is allowed to reach the Docker network DNS.

Maybe your host DNS is set to 1.1.1.1? You can also exec in the container and check with cat /etc/resolv.conf what’s in there 👀 ?

Back to the topic, someone on the Reddit post replied, it’s apparently possible to do what I wanted to do with bind so I’ll dig into that… Not in the coming 2-3 days though, as my day job is getting intense this week and I have some Ikea drawers to assemble, which is far more complex than networking and routing 😅

OK, so the problem is with Chain OUTPUT (policy DROP). I understand that we want this to block traffic if there is no VPN, and we should keep it that way. Since 172.18.0.0/16 is my local Docker network, I added the following line, and that fixed my issue:

iptables -A OUTPUT -d 172.18.0.0/16 -j ACCEPT

Now Gluetun/qbt can talk to other containers on the network. So we need to allow traffic to the local network.
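For reference, instead of adding the iptables rule by hand inside the container, the same allowance can likely be expressed in the compose file by widening FIREWALL_OUTBOUND_SUBNETS to include the Docker network. A sketch, assuming the 172.18.0.0/16 subnet above and that the Gluetun image in use accepts a comma-separated list for this variable:

```yaml
    environment:
      # Allow outbound traffic to the LAN and to the local Docker network,
      # so Gluetun can reach other containers and Docker's embedded DNS.
      - FIREWALL_OUTBOUND_SUBNETS=192.168.1.0/24,172.18.0.0/16
```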

I’m ok with closing this issue.

Let me finish (and start haha) that DNS proxy to solve the issue properly. It’s good we have workarounds for now, but I would definitely like to fix it properly.

btw where is the code you did for the DNS server

Nowhere yet! I’ll get to it in the coming days, I’ll tag you and comment here once I have a start of a branch going if you want to pull request review/ask questions 😉 Although that will likely just be a UDP proxy inspecting DNS queries and routing them accordingly (I did a UDP proxy but never fiddled with DNS either).

I’m still testing things out, I would ideally like it to work without having to specify the DNS at the Docker configuration level.

Plus, since Unbound blocks e.g. malicious hostnames, I cannot just add the local DNS below Unbound, as this would resolve blocked hostnames.

Maybe I’m asking for too much 😅 I’ll let you know what I find.

No problem, thanks a ton for stretching this out in all directions! I can definitely test it myself too, so it should be easy to integrate nicely. Allow me 1 to 2 days to get to it; I’m a bit over-busy currently, unfortunately, but I can’t wait to fix this up! Plus, this should be how it behaves natively, imo.

So I did some more digging. On the alpine container that is not sharing its stack with Gluetun, I checked /etc/resolv.conf. It points to Docker’s embedded DNS server, 127.0.0.11:

$ docker exec -it alpine /bin/sh
/ # cat /etc/resolv.conf 
search local
nameserver 127.0.0.11
options ndots:0
/ # host jackett
jackett has address 172.18.0.3
/ # exit

I then ran the same test on the container that shares the network stack with gluetun: querying Docker’s embedded DNS at 127.0.0.11 directly works, while the default resolver returns NXDOMAIN. So the DNS server change is what causes this change in behavior.

$ docker exec -it alpine_vpn /bin/sh
/ # host jackett  127.0.0.11
Using domain server:
Name: 127.0.0.11
Address: 127.0.0.11#53
Aliases: 

jackett has address 172.18.0.3

# same query via the default resolver (DoT)
/ # host jackett  
Host jackett not found: 3(NXDOMAIN)

I am not familiar enough with Go or the way the gluetun code works to help with code changes. Can you do any configuration in Unbound to send non-FQDN queries to the built-in Docker DNS and everything else to DoT?

Thanks.
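One partial answer to that last question, building on the forward-zone idea earlier in the thread: Unbound cannot branch on “bare name vs FQDN” directly, but it can forward a specific zone to Docker’s embedded DNS. A sketch of the relevant unbound.conf fragment, with the zone name taken from the custom_net network above (option placement may need adjusting per Unbound version, and this only helps if applications query the FQDN form, e.g. jackett.custom_net):

```
server:
  # Docker's embedded DNS is unsigned; don't require DNSSEC for this zone.
  domain-insecure: "custom_net"
  # Allow private addresses in answers for this zone.
  private-domain: "custom_net"

forward-zone:
  name: "custom_net"
  forward-addr: 127.0.0.11
```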