nginx-proxy: Error: no servers are inside upstream in

I updated my proxy image today and tried to restart all my other containers behind the proxy, but all of them failed. Am I doing something wrong? (I did follow the explanations in issue #64, but that didn’t help.)

proxy

docker run -d --name nginx-proxy \
    -p 80:80 -p 443:443 \
    --restart=always \
    -v /opt/my-certs:/etc/nginx/certs \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy

My dev container (Node.js) is built locally and exposes port 8181:

docker run -d --name www.dev1 \
    --restart=always \
    --link db --link redis \
    -e VIRTUAL_PORT=8181 \
    -e VIRTUAL_PROTO=https \
    -e VIRTUAL_HOST=dev1.mysite.com \
    -v /opt/my-volume/web/dev1/:/opt/my-volume/web/ \
    -v /opt/my-certs:/opt/my-certs:ro \
    -w /opt/my-volume/web/ \
    localhost:5000/www \
    bash -c 'npm start server.js'

Right before I run the dev container, I can see the output of nginx -t:

root@fba41f832f35:/app# nginx -t  
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

After I start the dev container, I see the following:

root@fba41f832f35:/app# nginx -t        
2016/05/02 07:15:49 [emerg] 69#69: no servers are inside upstream in /etc/nginx/conf.d/default.conf:34
nginx: [emerg] no servers are inside upstream in /etc/nginx/conf.d/default.conf:34
nginx: configuration file /etc/nginx/nginx.conf test failed

When I check /etc/nginx/conf.d/default.conf, I see an empty upstream:

upstream dev1.mysite.com {
}

Is there anything I am doing wrong? I’ve been using the same startup script for a good six months, and it worked right up until I pulled the new image. Did anything change? Please help.

About this issue

  • State: closed
  • Created 8 years ago
  • Comments: 91 (2 by maintainers)

Most upvoted comments

@wader Yep, exposing a dummy port did the trick.
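
For anyone wondering what that looks like in practice, here is a minimal sketch of giving the docker-gen container a dummy exposed port so that -only-exposed no longer filters it out. The port 9999 is arbitrary, and the container/volume names mirror the nginx-gen.service further down; adjust both to your setup.

# Dummy --expose keeps the docker-gen container visible to itself under -only-exposed.
docker create --name nginx-gen \
    --expose 9999 \
    --volumes-from nginx \
    -v /tmp/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/docker-gen -notify-sighup nginx -watch -only-exposed -wait 5s:30s \
    /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf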

For the sake of completeness: I patched nginx.tmpl by changing

{{ range $knownNetwork := $CurrentContainer.Networks }}

into

# XXX {{ $CurrentContainer }} XXX
{{ range $knownNetwork := $CurrentContainer.Networks }}

which yields the following output:

# XXX <no value> XXX

I hope that debugging approach makes sense…
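
If you want to try the same debugging patch with the combined jwilder/nginx-proxy image instead of a separate docker-gen, one way is to mount a patched copy of the template over the bundled one. This is a sketch assuming the image keeps its template at /app/nginx.tmpl; verify the path for your image version.

# Override the bundled template with a locally patched copy (path is an assumption).
docker run -d --name nginx-proxy \
    -p 80:80 -p 443:443 \
    -v $(pwd)/nginx.tmpl:/app/nginx.tmpl:ro \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy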

@DeKugelschieber use https://raw.githubusercontent.com/jwilder/nginx-proxy/a72c7e6e20df3738ca365bf6c14598f6a8017500/nginx.tmpl instead of the one on master and you’ll be fine. Below is my nginx-gen.service, in case it helps:

[Unit]
Description=Automatically generate nginx configuration for serving docker containers
Requires=docker.service nginx.service
After=docker.service nginx.service

[Service]
ExecStartPre=/bin/sh -c "rm -f /tmp/nginx.tmpl && curl -Lo /tmp/nginx.tmpl https://raw.githubusercontent.com/jwilder/nginx-proxy/a72c7e6e20df3738ca365bf6c14598f6a8017500/nginx.tmpl"
ExecStartPre=/bin/sh -c "docker inspect nginx-gen >/dev/null 2>&1 && docker rm -f nginx-gen || true"
ExecStartPre=/usr/bin/docker create --name nginx-gen --volumes-from nginx -v /tmp/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/docker-gen -notify-sighup nginx -watch -only-exposed -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
ExecStart=/usr/bin/docker start -a nginx-gen
ExecStop=-/usr/bin/docker stop nginx-gen
ExecStopPost=/usr/bin/docker rm -f nginx-gen
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
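
Assuming the unit is saved as /etc/systemd/system/nginx-gen.service, the usual systemd steps apply:

sudo systemctl daemon-reload
sudo systemctl enable nginx-gen.service
sudo systemctl start nginx-gen.service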

Bumping this. I’m working on quite a few changes (1, 2, 3) in docker-gen that should clear up several of the issues here. There are breaking changes in order to support Docker APIs both before and after the networking changes (removing deprecated NetworkSettings config items, replacing them with RuntimeContainer.Networks, and emulating the old behavior on pre-v1.21 APIs to keep configs consistent across Docker versions). I’m also addressing CurrentContainerID pointing at docker-gen in a separate-container setup, which means relying on -only-exposed and getting the correct RuntimeContainer struct for, say, nginx will be possible.

@AHelper I guess you are now running into the issue I describe in https://github.com/jwilder/nginx-proxy/issues/438#issuecomment-216954866. Try running docker-gen without -only-exposed, or expose a dummy port for the docker-gen container (as sketched in the top comment above). There should probably be a better fix for this.

I had to revert to a72c7e6e20df3738ca365bf6c14598f6a8017500.

I fixed this by stopping the container and removing the volume that contained the config (docker volume rm nginx_conf in my case; use docker volume ls | grep nginx to find the name of the volume on your machine). After that, I was able to start the container normally and everything worked again.
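
Roughly, the steps were as follows. Note that Docker refuses to remove a volume that is still referenced by a container, so you may need to remove the container rather than just stop it; the container and volume names here are from my setup.

docker volume ls | grep nginx    # find the name of the config volume
docker rm -f nginx-proxy         # remove the container that references it
docker volume rm nginx_conf      # remove the stale generated config
# then recreate the container with your usual docker run command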

In my case this happened because I started a container with docker-compose up that had VIRTUAL_HOST=xxx defined in its env vars. However, since docker-compose creates a new network, this container wasn’t reachable by the jwilder/nginx container (which was started separately).

The proxy couldn’t fetch the IP address for this container, generated an empty upstream for this VIRTUAL_HOST domain, and then failed with the error message no servers are inside upstream.

Shutting down the new container with docker-compose down and restarting the proxy brought everything back to life.
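
An alternative that avoids tearing the stack down is to attach the separately started proxy to the compose network, so the proxy can actually reach the container and pick up its IP. The network and container names below are examples; docker network ls shows the real ones (compose networks are usually named <project>_default).

docker network connect myproject_default nginx-proxy
docker restart myapp    # restart the app container so the config is regenerated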

I can confirm that the upstream is empty. As a workaround, I mounted conf.d to a volume and edited default.conf manually.
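
A sketch of that workaround, reusing the proxy command from the original post; the host path /opt/nginx-conf is arbitrary. Be aware that docker-gen will overwrite manual edits the next time it regenerates the config.

docker run -d --name nginx-proxy \
    -p 80:80 -p 443:443 \
    -v /opt/nginx-conf:/etc/nginx/conf.d \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy
# edit /opt/nginx-conf/default.conf on the host to fill in the upstream, then:
docker exec nginx-proxy nginx -s reload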

I did some debugging. $CurrentContainer is undefined in nginx.tmpl.

Same problem here. Docker 1.9.1cs2 on Docker Cloud.