foundryvtt-docker: Container doesn't populate resolv.conf properly
🐛 Bug Report
I'd like to supply my own DNS server to the container using the `--dns` option, yet it is not picked up and inserted into `/etc/resolv.conf`. This makes it impossible to run the container in bridge networking mode.
To Reproduce
Steps to reproduce the behavior:
- Run the container using `--dns 8.8.8.8`
- The container will not run properly, as it cannot resolve the DNS name for foundryvtt.com
Expected behavior
I'm expecting the DNS servers to be written into `/etc/resolv.conf`. I don't know why they aren't; this works fine for all the other containers I'm running.
Any helpful log output
I use docker-compose
```yaml
version: "3.3"

secrets:
  config_json:
    file: /share/Container/foundryvtt-secrets.json

services:
  foundry:
    image: felddy/foundryvtt:0.7.8
    hostname: foundryvtt
    mac_address: 24:5E:BE:00:00:F6
    dns:
      - 192.168.1.2
      - 8.8.8.8
      - 8.8.4.4
    networks:
      qnet-static-eth0-79e6cc:
        ipv4_address: 192.168.1.246
    volumes:
      - type: bind
        source: /share/Container/foundryvtt
        target: /data
    environment:
      - FOUNDRY_LICENSE_KEY=*
      - CONTAINER_CACHE=/data/container_cache
      - CONTAINER_PATCHES=/data/container_patches
    secrets:
      - source: config_json
        target: config.json

networks:
  qnet-static-eth0-79e6cc:
    external: true
```
Paste the results here:
```
Entrypoint | 2020-12-25 09:30:07 | [info] Starting felddy/foundryvtt container v0.7.8
Entrypoint | 2020-12-25 09:30:07 | [info] Reading configured secrets from: /run/secrets/config.json
Entrypoint | 2020-12-25 09:30:09 | [info] No Foundry Virtual Tabletop installation detected.
Entrypoint | 2020-12-25 09:30:09 | [info] Using FOUNDRY_USERNAME and FOUNDRY_PASSWORD to authenticate.
Authenticate | 2020-12-25 09:30:14 | [info] Requesting CSRF tokens from https://foundryvtt.com
Authenticate | 2020-12-25 09:30:19 | [error] Unable to authenticate: request to https://foundryvtt.com/ failed, reason: getaddrinfo EAI_AGAIN foundryvtt.com
```
The `/etc/resolv.conf`:

```
nameserver 127.0.0.11
options ndots:0
```
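A note that may apply here (my assumption, based on how Docker's embedded DNS works): on user-defined networks, `127.0.0.11` in `/etc/resolv.conf` is expected. Docker installs its embedded DNS server at that loopback address and forwards queries to the servers supplied via `--dns` or the compose `dns:` list, so the question may be why forwarding fails rather than why the file looks unpopulated. A minimal sketch, working on a scratch copy of the file reported above:

```shell
# Copy of the resolv.conf reported above, written to a scratch file so it can
# be inspected the same way you would inspect the real one in-container
# (e.g. with `docker exec <name> cat /etc/resolv.conf`).
cat > /tmp/resolv-copy.conf <<'EOF'
nameserver 127.0.0.11
options ndots:0
EOF

# 127.0.0.11 is Docker's embedded DNS, not a missing --dns value; the
# configured upstream servers live in Docker's internal state, not this file.
grep '^nameserver' /tmp/resolv-copy.conf
```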
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 24 (8 by maintainers)
Commits related to this issue
- Merge pull request #135 from cisagov/improvement/dockerfile Improve Dockerfile — committed to felddy/foundryvtt-docker by mcdonnnj 2 years ago
- Merge pull request #135 from cisagov/improvement/dockerfile Improve Dockerfile — committed to Sallenmoore/foundryvtt-docker by mcdonnnj 2 years ago
I've resolved the DNS issue I've been having while running this and other Alpine based images in Kubernetes clusters on my network.

Short answer: I turned off DNSSEC for my domain name managed by Cloudflare and everything started working.

Read on for details.
Some information about my setup:
Some general information about what causes the problem for me (and possibly for you):

- Kubernetes adds `options ndots:5` to `/etc/resolv.conf` inside the container.
- The search list keeps my local domain (`mylocaldomain.tld` in my case) and adds a bunch of Kubernetes-specific ones like `cluster.local` and `svc.cluster.local`.
- This `resolv.conf` configuration has to do with looking up local services inside the cluster.
- When a lookup for `foundryvtt.com` is performed inside of a container, all of those search domains are checked first. For example, `foundryvtt.com.svc.cluster.local`, then `foundryvtt.com.cluster.local`, and `foundryvtt.com.mylocaldomain.tld`. Finally, if none of those other domains "resolve", then `foundryvtt.com` is checked.
- The `...cluster.local` domains are rejected by CoreDNS inside of the cluster, I guess. No beef with those.
- `foundryvtt.com.mylocaldomain.tld` escapes the cluster and gets to my internal Unbound DNS server. Unbound sees the `mylocaldomain.tld` part and asks Cloudflare how to resolve it because Cloudflare is the authority on that particular domain.

Here are some links that helped me figure this out:
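The search-list ordering described above can be sketched in plain shell. The `ndots` value and search domains below are the examples from this comment, not anything read from a real cluster:

```shell
# Sketch: how a stub resolver with "options ndots:5" orders its queries.
name="foundryvtt.com"
ndots=5
search="svc.cluster.local cluster.local mylocaldomain.tld"

dots=$(printf '%s' "$name" | awk -F. '{print NF-1}')   # dots in the name: 1

: > /tmp/lookup-order.txt
if [ "$dots" -lt "$ndots" ]; then
  # Fewer dots than ndots: every search suffix is tried before the bare name.
  for d in $search; do
    echo "$name.$d" >> /tmp/lookup-order.txt
  done
fi
echo "$name" >> /tmp/lookup-order.txt   # the bare name is only tried last
cat /tmp/lookup-order.txt
```

One practical consequence: a trailing dot (`foundryvtt.com.`) makes the name fully qualified, so the search list is skipped entirely.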
I could verify that this was a problem and that my fix worked using alpine/git and dig.

Before fix:

(note that `github.com` did not resolve inside an Alpine based container inside of the cluster)

(note the `NOERROR` response)

After fix:

(note that `github.com` resolved inside an Alpine based container inside of the cluster)

(note the `NXDOMAIN` response)

In my case, it was an easy decision to disable DNSSEC because the domain is only used internally and I'm not using Cloudflare for normal records. If you want to keep DNSSEC on, you may have to get creative or switch away from Cloudflare.
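My reading of those two rcodes (an interpretation on my part, not something the comment states outright): `NXDOMAIN` for a search-expanded probe is the healthy answer, because it tells the stub resolver to move on to the next candidate, while the pre-fix `NOERROR` with no usable records can end the search before the bare name is ever tried. A sketch of that distinction, with a hypothetical helper:

```shell
# Hypothetical helper mapping a dig rcode to what a stub resolver does next
# when probing a search-expanded name like github.com.mylocaldomain.tld.
explain_rcode() {
  case "$1" in
    NXDOMAIN) echo "NXDOMAIN: name does not exist; resolver falls through to the next candidate" ;;
    NOERROR)  echo "NOERROR: name exists (possibly without records); resolver may stop here" ;;
    *)        echo "$1: unhandled" ;;
  esac
}

explain_rcode NOERROR    # the broken, pre-fix observation
explain_rcode NXDOMAIN   # the healthy, post-fix observation
```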
I ported to node:12-slim to successfully work around the problem. I'm running into a lot of DNS issues on Alpine based images. Not sure if it's my k8s cluster's configuration, or what.
I'm seeing the same issue running in Kubernetes. Might be related to this bug in Alpine. Edit: scratch that. I rebuilt using `node:12-alpine3.10` and still had the problem.

I'll test this again on my 3 k8s clusters with the Alpine image (my default), and update here and in the other thread too. I still have 8.8.8.8 on my CoreDNS, so I'll try both and edit this post.
My 3 clusters today run this K8S version, with CoreDNS k8s.gcr.io/coredns/coredns:v1.8.4.

Confirmed with the same.
I have found something interesting that may solve the issue.
> Though the call to `dns.lookup()` will be asynchronous from JavaScript's perspective, it is implemented as a synchronous call to getaddrinfo(3) that runs on libuv's threadpool. This can have surprising negative performance implications for some applications, see the `UV_THREADPOOL_SIZE` documentation for more information.

from: https://nodejs.org/api/cli.html#cli_uv_threadpool_size_size

more here: https://medium.com/@amirilovic/how-to-fix-node-dns-issues-5d4ec2e12e95
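For anyone trying that workaround: `UV_THREADPOOL_SIZE` is read from the process environment at Node startup. A minimal sketch (the `docker run` line in the comment below is illustrative, not this project's documented interface):

```shell
# dns.lookup() runs getaddrinfo(3) on libuv's threadpool, which defaults to
# 4 threads; many concurrent lookups queue behind each other and can surface
# as DNS timeouts under load. Raising the pool is the fix the links describe.
export UV_THREADPOOL_SIZE=16
printenv UV_THREADPOOL_SIZE   # prints 16

# In a container you would pass it as an environment variable instead, e.g.:
#   docker run -e UV_THREADPOOL_SIZE=16 felddy/foundryvtt:0.7.8
```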
This solved my issue running 200 deployments.
I have not been able to fix this yet, but I suspect this may be an issue with CoreDNS.
Lookups for foundryvtt.com appear to be failing because passthrough does not seem to be working.

From the CoreDNS logs:

No lookups for foundryvtt.com, though.
In case this is helpful: I noticed that `felddy/foundryvtt:improvement-debian` worked fine; however, the following errors occur in `felddy/foundryvtt:latest`.
Results locally:

Inside k3s (yaml included; this also worked when setting the DNS server to 8.8.8.8):
Sure thing. I'll test it this evening (or possibly tomorrow if I run out of time) and I'll post back here.
I also have this networking issue in my k3s cluster. @jdmarble's repo worked!