aardvark-dns: Cannot map port 53 from container to host; conflicts with dnsmasq

Is this a BUG REPORT or FEATURE REQUEST?

/kind bug

Description

When using docker-compose with podman, podman fails to bring up containers whose port mappings use a container port below 60. Additionally, when mapping port 53 on the host, the mapping conflicts with a dnsmasq process that podman spawns.

Steps to reproduce the issue:

Parsing Error

  1. Install podman 3.0 as root to utilize docker-compose features

  2. Make sure to disable any DNS (port 53) service running on the OS

  3. Using the docker-compose.yml file below, issue: docker-compose up

Port 53 Conflict

  1. Install podman 3.0 as root to utilize docker-compose features

  2. Make sure to disable any DNS (port 53) service running on the OS

  3. Edit the docker-compose.yml file and change - 53:53 to - 53:XXXX, where XXXX is anything above 59, for example - 53:60 (a shell sketch of this edit follows these steps)

  4. Then issue the following: docker-compose up
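
For convenience, both variants can be driven from the shell. This is only a minimal sketch: it assumes Ubuntu 20.04, where systemd-resolved's stub listener is usually what occupies port 53, and it assumes the docker-compose.yml shown at the bottom of this report.

# Step 2: free port 53 on the host (adjust for whatever is actually bound on your system)
sudo systemctl stop systemd-resolved

# Parsing-error variant: bring the project up with the docker-compose.yml below unmodified
sudo docker-compose up

# Port 53 conflict variant: change the container port to anything above 59, e.g. 53:60
sed -i 's/- 53:53/- 53:60/' docker-compose.yml
sudo docker-compose up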

Describe the results you received:

Using the unmodified docker-compose.yml file below will generate the parsing error:

root@vm-307:/home/crowley# docker-compose up
Creating network "crowley_default" with the default driver
Creating crowley_admin_1 ...
Creating crowley_pdns_1  ... error
Creating crowley_admin_1 ... done
ERROR: for crowley_pdns_1  Cannot create container for service pdns: make cli opts(): strconv.Atoi: parsing "": invalid syntax

From my testing, if I change the port mapping from - 53:53 so that the container port is anything above 59, it gets past the parsing error.

Changing the port mapping to - 53:60 allows docker-compose up to continue, but it fails with this error message:

root@vm-307:/home/crowley# docker-compose up
Creating network "crowley_default" with the default driver
Creating crowley_admin_1 ...
Creating crowley_pdns_1  ... error
Creating crowley_admin_1 ... done
ERROR: for crowley_pdns_1  error preparing container ac8f5caddef9e28d43fd2f8b41d0c96845765c623b1f7fe0fef3b6692efa5910 for attach: cannot listen on the TCP port: listen tcp4 :53: bind: address already in use

ERROR: for pdns  error preparing container ac8f5caddef9e28d43fd2f8b41d0c96845765c623b1f7fe0fef3b6692efa5910 for attach: cannot listen on the TCP port: listen tcp4 :53: bind: address already in use
ERROR: Encountered errors while bringing up the project.

Just to make sure I am not crazy, I bring down the containers with docker-compose down and then check my ports using sudo lsof -i -P -n, which results in:

root@vm-307:/home/crowley# sudo lsof -i -P -n
COMMAND PID    USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
sshd    630    root    3u  IPv4  32734      0t0  TCP *:22 (LISTEN)
sshd    630    root    4u  IPv6  32736      0t0  TCP *:22 (LISTEN)
sshd    668    root    4u  IPv4  32763      0t0  TCP X.X.X.X:22->X.X.X.X:55832 (ESTABLISHED)
sshd    695 crowley    4u  IPv4  32763      0t0  TCP X.X.X.X:22->X.X.X.X:55832 (ESTABLISHED)

Please note X.X.X.X is just me censoring my IPs. As you can see, I do not have any services listening on port 53.
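
As an optional cross-check (not part of the original report), ss can be used to double-check that nothing is bound to port 53:

# Confirm nothing on the host is listening on port 53 (TCP or UDP)
sudo ss -tulpn | grep ':53 ' || echo 'nothing listening on port 53'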

Next, I issue docker-compose up again and see the same port conflict. Then I issue sudo lsof -i -P -n to check my services before bringing down the containers.

root@vm-307:/home/crowley# sudo lsof -i -P -n
COMMAND   PID    USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
sshd      630    root    3u  IPv4  32734      0t0  TCP *:22 (LISTEN)
sshd      630    root    4u  IPv6  32736      0t0  TCP *:22 (LISTEN)
sshd      668    root    4u  IPv4  32763      0t0  TCP X.X.X.X->X.X.X.X:55832 (ESTABLISHED)
sshd      695 crowley    4u  IPv4  32763      0t0  TCP X.X.X.X:22->X.X.X.X:55832 (ESTABLISHED)
dnsmasq 16060    root    4u  IPv4 112910      0t0  UDP 10.89.0.1:53
dnsmasq 16060    root    5u  IPv4 112911      0t0  TCP 10.89.0.1:53 (LISTEN)
dnsmasq 16060    root   10u  IPv6 116160      0t0  UDP [fe80::9cc6:14ff:fe16:3953]:53
dnsmasq 16060    root   11u  IPv6 116161      0t0  TCP [fe80::9cc6:14ff:fe16:3953]:53 (LISTEN)
conmon  16062    root    5u  IPv4 111869      0t0  TCP *:9191 (LISTEN)

As you can see, podman has spawned a dnsmasq process. I think this is to provide DNS between the containers, but it conflicts with any attempt to run or port-map a service on port 53.
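
One way to confirm that this dnsmasq instance belongs to podman's container DNS setup rather than to a system service is to look at its command line and the network it serves. The PID and network name below are taken from the output above; the exact flags and config path dnsmasq is started with are installation-specific.

# Full command line of the dnsmasq process seen in lsof; the config file it was
# started with (path varies by setup) reveals which container network it serves
ps -o pid,user,args -p 16060

# Details of the compose project's network that the resolver serves
sudo podman network inspect crowley_default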

Describe the results you expected:

I expect not to hit that parsing error; I am not sure why podman/docker-compose hits it. When running the exact same docker-compose.yml via docker, I have no issues.

I also expect not to hit port 53 conflicts. I am not sure how podman handles DNS between the containers, but the implementation limits users' ability to host different services.

Additional information you deem important (e.g. issue happens only occasionally):

N/A

Output of podman version:

podman version 3.0.0

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.19.2
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.26, commit: '
  cpus: 8
  distribution:
    distribution: ubuntu
    version: "20.04"
  eventLogger: journald
  hostname: vm-307
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.4.0-28-generic
  linkmode: dynamic
  memFree: 15873085440
  memTotal: 16762957824
  ociRuntime:
    name: crun
    package: 'crun: /usr/bin/crun'
    path: /usr/bin/crun
    version: |-
      crun version 0.17.6-58ef-dirty
      commit: fd582c529489c0738e7039cbc036781d1d039014
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: true
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    selinuxEnabled: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 1023406080
  swapTotal: 1023406080
  uptime: 1h 11m 7.15s (Approximately 0.04 days)
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 4
    paused: 0
    running: 0
    stopped: 4
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 4
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.0.0
  Built: 0
  BuiltTime: Wed Dec 31 19:00:00 1969
  GitCommit: ""
  GoVersion: go1.15.2
  OsArch: linux/amd64
  Version: 3.0.0

Package info (e.g. output of rpm -q podman or apt list podman):

Listing... Done
podman/unknown,now 100:3.0.0-4 amd64 [installed]
podman/unknown 100:3.0.0-4 arm64
podman/unknown 100:3.0.0-4 armhf
podman/unknown 100:3.0.0-4 s390x

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes

Additional environment details (AWS, VirtualBox, physical, etc.): Running on amd64 hardware. The server is a VM inside of VMware. Also running on Ubuntu 20.04.

docker-compose.yml

version: "3"
services:
  pdns:
    image:  powerdns/pdns-auth-master:latest
    ports:
      - 53:53
      - 8081:8081
  admin:
    image: ngoduykhanh/powerdns-admin:latest
    ports:
      - 9191:80

Most upvoted comments

Can replicate with Pi-hole in docker-compose on Fedora 33 Server, podman v3.0.1: port 53 is already in use by dnsmasq. This is unfortunately a deal breaker; I can't use podman until this is fixed.

Yeah, there is a way, but not through compose, and I assume you want container-to-container name resolution. I have some ideas on this to discuss with the team and see if we can come up with a solution.

@martinetd In order to access my DNS server consistently both over my LAN and remotely via WireGuard, I gave up having the DNS port remapped and instead moved all of my DNS infra to a bridged VM. Now my DNS shows up with a dedicated IP address on my LAN, which can be accessed by all of my other containers, my host machine, and any machines connected to my VPN.

I hate doing this. It uses way more resources than reasonable, and as a result I ended up completely replacing the hardware my services are running on - having to use a processor that supports bridging directly to the network interfaces (I needed hardware that specifically used VT-d, otherwise I’d be relying on having to manually route packets - AGAIN).

Maybe there were legitimate reasons locking up port 53 was necessary, but under most circumstances this port should be treated as a system port, and shouldn’t ever be remapped by userland software unless it’s documented to do so?

@dada513 could you be a little more specific about your use case and what your workaround actually looks like?

  • Are you using the hostnames of pods within the same network for DNS resolution? With your solution, I found this still does not work, as port 53 gets overwritten. For example, if I have a pod named balancer but another pod named dns that maps to host port 53, I can no longer access any pods by name from a sibling container, which makes my balancer pod, and any other pods that talk over the same network, defunct.

  • What method are you using for binding? Iptables? Ifconfig? Dnsmasq? Systemd? Via podman?

Yes, I am using hostnames for DNS resolution. For example, in caddy, my reverse proxy, I point to my nextcloud container using its name. But one major difference is that I use containers connected by a podman network, not pods.

I am binding via podman. What I do is use the IP address of the machine directly in the port binding for DNS, like 192.168.10.250:53:53/udp. This binds only to the public IP, so podman containers can still access aardvark-dns on the local IP or the podman gateway, while other devices can access the actual DNS server just fine.

I too am running a WireGuard VPN, but I had to set the DNS for peers to my machine's public IP.
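
A minimal sketch of that workaround with plain podman (the network name, the 192.168.10.250 address, and running as root are assumptions; the image is the one from the compose file above):

# User-defined network so containers can resolve each other by name
sudo podman network create dnsnet

# Publish DNS only on the host's LAN/public address, leaving the network
# gateway address free for podman's own resolver
sudo podman run -d --name pdns --network dnsnet \
  -p 192.168.10.250:53:53/udp \
  -p 192.168.10.250:53:53/tcp \
  powerdns/pdns-auth-master:latest

Other containers on dnsnet can still reach pdns by name, while LAN and VPN clients query 192.168.10.250 directly.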

There is no workaround other than not using aardvark.

iptables is the only way IMO. There is no way to tell resolvers to connect to a different port. Of course, this should not redirect all traffic from port 53 to the aardvark port; instead it should only redirect traffic which comes from the container subnet.
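
A rough sketch of what that could look like, assuming the 10.89.0.x subnet and gateway from the lsof output above and assuming the container-network resolver were listening on an alternate port such as 5353 (both are placeholders, not current podman behaviour):

# Redirect DNS queries that come from the container subnet and target the
# network gateway to the hypothetical alternate resolver port, leaving host
# port 53 free for a published container
sudo iptables -t nat -A PREROUTING -s 10.89.0.0/24 -d 10.89.0.1 -p udp --dport 53 -j DNAT --to-destination 10.89.0.1:5353
sudo iptables -t nat -A PREROUTING -s 10.89.0.0/24 -d 10.89.0.1 -p tcp --dport 53 -j DNAT --to-destination 10.89.0.1:5353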

No (in reply to the questions above).

Probably need to have a discussion about how…

the socat logs from this operation show this:

{"ExposedPorts": {"3233/tcp": {}, "8081/tcp": {}}, "Tty": false, "OpenStdin": false, "StdinOnce": false, "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "Image": "powerdns/pdns-auth-master:latest", "Volumes": {}, "NetworkDisabled": false, "HostConfig": {"NetworkMode": "9444_default", "VolumesFrom": [], "Binds": [], "PortBindings": {"3233/tcp": [{"HostIp": "", "HostPort": ""}], "8081/tcp": [{"HostIp": "", "HostPort": "8081"}]}, "Links": [], "LogConfig": {"Type": "", "Config": {}}}, "NetworkingConfig": {"EndpointsConfig": {"9444_default": {"Aliases": ["pdns"], "IPAMConfig": {}}}}, "Labels": {"com.docker.compose.project": "9444", "com.docker.compose.service": "pdns", "com.docker.compose.oneoff": "False", "com.docker.compose.project.working_dir": "/home/baude/tmp/9444", "com.docker.compose.project.config_files": "docker-compose.yml", "com.docker.compose.container-number": "1", "com.docker.compose.version": "1.27.4", "com.docker.compose.config-hash": "cfced1b43ab39519b4d3b19b56a02dcc9db29dc0344c280d7857854452a2aaed"}}< 2021/02/19 15:00:53.366257  length=277 from=0 to=276

Which is odd … so it seems that podman is doing exactly what it's been told: map 3233 to a random port. So now we should focus on why.
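
A likely explanation for both the 3233 and the "below 60" threshold: in YAML 1.1 an unquoted 53:53 is a valid base-60 (sexagesimal) integer, and 53*60 + 53 = 3233, which is exactly the container port in the payload above, while 53:60 is not valid base-60 and therefore stays a string that parses as a port mapping. If that is what is going on here, quoting the mapping in docker-compose.yml should sidestep it:

# Quote the port mapping so the YAML parser keeps it as a string
sed -i 's/- 53:53/- "53:53"/' docker-compose.yml
sudo docker-compose up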

OK, I can replicate this; I will need to dive in deeper. Thanks for the issue, and I'll update here as soon as I can.