nginx-proxy: Error: no servers are inside upstream in
I updated my proxy image today and tried to restart all my other containers behind the proxy, but all of them failed. Am I doing something wrong? (I did follow the explanation in issue #64, but that didn't help.)
The proxy:
docker run -d --name nginx-proxy \
-p 80:80 -p 443:443 \
--restart=always \
-v /opt/my-certs:/etc/nginx/certs \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
jwilder/nginx-proxy
My dev container (Node.js), built locally; it exposes port 8181:
docker run -d --name www.dev1 \
--restart=always \
--link db --link redis \
-e VIRTUAL_PORT=8181 \
-e VIRTUAL_PROTO=https \
-e VIRTUAL_HOST=dev1.mysite.com \
-v /opt/my-volume/web/dev1/:/opt/my-volume/web/ \
-v /opt/my-certs:/opt/my-certs:ro \
-w /opt/my-volume/web/ localhost:5000/www \
bash -c 'npm start server.js'
Right before I run the dev container, this is the output of nginx -t:
root@fba41f832f35:/app# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
After I start the dev container, I see the following:
root@fba41f832f35:/app# nginx -t
2016/05/02 07:15:49 [emerg] 69#69: no servers are inside upstream in /etc/nginx/conf.d/default.conf:34
nginx: [emerg] no servers are inside upstream in /etc/nginx/conf.d/default.conf:34
nginx: configuration file /etc/nginx/nginx.conf test failed
When I check /etc/nginx/conf.d/default.conf, I see an empty upstream:
upstream dev1.mysite.com {
}
Is there anything I'm doing wrong? I've been using the same startup script for a good six months, and it worked right up until I pulled the new image. Did anything change? Please help.
About this issue
- State: closed
- Created 8 years ago
- Comments: 91 (2 by maintainers)
Commits related to this issue
- Updates Readme for using Separate Containers - Makes the first command more readable - Gives the docker-gen container a name - Adds the /etc/nginx/certs volume needed by the latest nginx.tmpl - Re... — committed to Giymo11/nginx-proxy by Giymo11 8 years ago
- Updates Readme on using Separate Containers - makes first command more readable - adds name to docker-gen container - adds volume for /etc/nginx/certs, which is needed by the latest .tmpl - remove... — committed to Giymo11/nginx-proxy by Giymo11 8 years ago
- always add server to upstream (hot)fix #438 #565 — committed to schmunk42/nginx-proxy by schmunk42 8 years ago
- always add server to upstream (hot)fix #438 #565 — committed to unleashedtech/nginx-proxy by schmunk42 8 years ago
- always add server to upstream (hot)fix #438 #565 — committed to Laski/nginx-proxy by schmunk42 8 years ago
@wader Yep, exposing a dummy port did the trick.
For the sake of completeness: I patched nginx.tmpl by changing
into
which yields the following output:
I hope that debugging approach makes sense…
@DeKugelschieber use https://raw.githubusercontent.com/jwilder/nginx-proxy/a72c7e6e20df3738ca365bf6c14598f6a8017500/nginx.tmpl instead of the one on master and you'll be fine. Below is my nginx-gen.service, if it helps.
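As a hedged aside, fetching that pinned template revision could look like the following; the destination path /opt/nginx.tmpl is only an example, not a path used elsewhere in this thread.
# download nginx.tmpl from the pinned commit instead of master
curl -o /opt/nginx.tmpl \
  https://raw.githubusercontent.com/jwilder/nginx-proxy/a72c7e6e20df3738ca365bf6c14598f6a8017500/nginx.tmpl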
Bumping this: I'm working on quite a few changes (1, 2, 3) in docker-gen that should clear up a few issues here. There are breaking changes in it in order to support Docker APIs before and after the networking changes (removing deprecated NetworkSettings config items, replacing them with RuntimeContainer.Networks, and emulating it if pre-v1.21 to keep configs consistent across Docker versions). It also addresses CurrentContainerID pointing to docker-gen in a separate-container setup, which means that relying on -only-exposed and getting the correct RuntimeContainer struct for, say, nginx will be possible.
@AHelper I guess you've now run into the issue I describe in https://github.com/jwilder/nginx-proxy/issues/438#issuecomment-216954866. Try running docker-gen without -only-exposed, or expose a dummy port on the docker-gen container (see the sketch below). There should probably be a better fix for this.
I had to revert back to a72c7e6e20df3738ca365bf6c14598f6a8017500.
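A hedged sketch of those two workarounds, assuming the usual separate docker-gen container setup; the container names, the template path, and the dummy port 9999 are only examples, not the exact commands used in this thread.
# option 1: run docker-gen without -only-exposed
docker run -d --name nginx-gen \
  --volumes-from nginx-proxy \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  -v /opt/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro \
  jwilder/docker-gen \
  -notify-sighup nginx-proxy -watch \
  /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
# option 2: keep -only-exposed, but give the docker-gen container a dummy exposed
# port by adding "--expose 9999" to the docker run options above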
I fixed this by stopping the container and removing the volume that contained the config (docker volume rm nginx_conf in my case; use docker volume ls | grep nginx to find the name of the volume on your machine). After that, I was able to start the container normally and everything worked again.
In my case this happened because I had started a container with docker-compose up that had VIRTUAL_HOST=xxx defined in its env vars. However, since docker-compose creates a new network, this container wasn't reachable by the jwilder/nginx container (which was started separately). The proxy couldn't fetch the IP address for this container, created an empty value for this VIRTUAL_HOST domain, and then failed with the error message "no servers are inside upstream". Shutting down the new container with docker-compose down and restarting the proxy brought everything back to life.
I can confirm that upstream is empty. As a workaround, I mounted conf.d to a volume and edited default.conf manually.
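Building on the docker-compose network explanation above, a hedged alternative to shutting the project down is to attach the already-running proxy to the network that docker-compose created. The network and container names below are only examples (check docker network ls and docker ps for yours), and this assumes a Docker version with user-defined networks.
# see which networks each container is attached to
docker inspect -f '{{json .NetworkSettings.Networks}}' nginx-proxy
docker inspect -f '{{json .NetworkSettings.Networks}}' myproject_web_1
# attach the proxy to the compose project's network so it can reach the container
docker network connect myproject_default nginx-proxy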
I did some debugging: $CurrentContainer is undefined in nginx.tmpl.
Same problem here. Docker 1.9.1cs2 on Docker Cloud.
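For anyone hitting this, a small hedged check of what docker-gen actually rendered, assuming the proxy container is named nginx-proxy as in the original post:
# dump the generated upstream blocks; an empty block reproduces the error above
docker exec nginx-proxy grep -A 3 upstream /etc/nginx/conf.d/default.conf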