interlock: Interlock does not generate/save proxy config files
I’m deploying Interlock onto a swarm in Azure created with the Azure Container Service template — essentially one master and a couple of worker nodes.
Here is my compose file:
```yaml
version: '2'

volumes:
  haproxy:

services:
  # nginx service to serve static client files
  nginx:
    image: boilerangularloop/boilerangularloop_nginx
    restart: always
    ports:
      - '8080'
    labels:
      - "interlock.hostname=boilagents"
      - "interlock.domain=eastus.cloudapp.azure.com"

  # interlock
  interlock:
    image: ehazlett/interlock:1.1.0
    environment:
      INTERLOCK_CONFIG: |
        ListenAddr = ":8080"
        DockerURL = "${SWARM_HOST}"
        [[Extensions]]
        Name = "haproxy"
        ConfigPath = "/usr/local/etc/haproxy/haproxy.cfg"
        PidPath = "/run/haproxy.pid"
        MaxConn = 1024
        Port = 80
        AdminUser = "admin"
        AdminPass = "interlock"
    command: -D run
    ports:
      - '8080:8080'
    volumes:
      - haproxy:/usr/local/etc/haproxy

  # load balancer
  haproxy:
    image: haproxy:latest
    ports:
      - '80:80'
      - '443:443'
    labels:
      - "interlock.ext.name=haproxy"
    depends_on:
      - interlock
    volumes:
      - haproxy:/usr/local/etc/haproxy
```
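One thing worth checking (this is an assumption about the root cause, not something the logs confirm): on a multi-node classic Swarm, a named volume using the default `local` driver is created independently on whichever node each container lands on. If `interlock` and `haproxy` are scheduled to different nodes, Interlock writes `haproxy.cfg` into its node-local copy of the volume while `haproxy` mounts a separate, empty copy. A sketch of co-scheduling both containers on the same node with a classic-Swarm affinity filter (the `affinity:container` environment variable is classic-Swarm scheduling syntax; the service and volume names match the compose file above):

```yaml
# Sketch only: pin haproxy next to the interlock container so both
# mount the same node-local copy of the "haproxy" named volume.
haproxy:
  image: haproxy:latest
  environment:
    # Classic Swarm affinity filter: schedule on the node that already
    # runs a container whose name matches *interlock*.
    - "affinity:container==*interlock*"
  volumes:
    - haproxy:/usr/local/etc/haproxy
```

If both containers do land on the same node and the volume is still empty, the affinity constraint above won’t help and the problem lies elsewhere.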
I can see the haproxy volume being created and mounted; however, Interlock doesn’t seem to generate the haproxy.cfg file, so the haproxy service dies. The same happens if I use the nginx extension instead.
Here are the logs:
```
interlock_1 | time="2016-04-06T13:10:35Z" level=info msg="interlock 1.1.0 (8a68c99)"
interlock_1 | time="2016-04-06T13:10:35Z" level=debug msg="loading config from environment"
interlock_1 | time="2016-04-06T13:10:35Z" level=debug msg="docker client: url=tcp://172.16.0.5:2375"
interlock_1 | time="2016-04-06T13:10:35Z" level=debug msg="loading extension: name=haproxy"
interlock_1 | time="2016-04-06T13:10:35Z" level=info msg="interlock node: id=6a62b5554c670e663ae42b1d4f46bd46c4701e049fe252c8d8bfd6853f100f1e" ext=lb
interlock_1 | time="2016-04-06T13:10:35Z" level=debug msg="starting event handling"
interlock_1 | time="2016-04-06T13:10:35Z" level=debug msg="event received: status=interlock-start id=0 type= action="
interlock_1 | time="2016-04-06T13:10:35Z" level=debug msg="notifying extension: lb"
interlock_1 | time="2016-04-06T13:10:35Z" level=debug msg="triggering reload" ext=lb
interlock_1 | time="2016-04-06T13:10:36Z" level=debug msg="reaping key: reload"
interlock_1 | time="2016-04-06T13:10:36Z" level=debug msg="triggering reload from cache" ext=lb
interlock_1 | time="2016-04-06T13:10:36Z" level=debug msg="checking to reload" ext=lb
interlock_1 | time="2016-04-06T13:10:36Z" level=debug msg="updating load balancers" ext=lb
interlock_1 | time="2016-04-06T13:10:36Z" level=debug msg="generating proxy config" ext=lb
interlock_1 | time="2016-04-06T13:10:36Z" level=info msg="boilagents.eastus.cloudapp.azure.com: upstream=10.0.0.5:32799 container=vagrant_nginx_1" ext=haproxy
interlock_1 | time="2016-04-06T13:10:36Z" level=debug msg="alias domains: []" ext=haproxy
interlock_1 | time="2016-04-06T13:10:36Z" level=debug msg="adding host name=boilagents_eastus_cloudapp_azure_com domain=boilagents.eastus.cloudapp.azure.com" ext=haproxy
interlock_1 | time="2016-04-06T13:10:36Z" level=debug msg="proxy config path: /usr/local/etc/haproxy/haproxy.cfg" ext=lb
interlock_1 | time="2016-04-06T13:10:36Z" level=debug msg="saving proxy config" ext=lb
interlock_1 | time="2016-04-06T13:10:36Z" level=debug msg="calculating restart across interlock nodes: num=1" ext=lb
interlock_1 | time="2016-04-06T13:10:36Z" level=debug msg="proxy containers to restart: num=0 containers=" ext=lb
interlock_1 | time="2016-04-06T13:10:36Z" level=debug msg="signaling reload" ext=lb
interlock_1 | time="2016-04-06T13:10:36Z" level=debug msg="dropping SYN packets to trigger client re-send" ext=haproxy
interlock_1 | time="2016-04-06T13:10:36Z" level=debug msg="&{/sbin/iptables [/sbin/iptables -I INPUT -p tcp --dport 80 --syn -j DROP] [] <nil> <nil> <nil> [] <nil> <nil> <nil> <nil> false [] [] [] [] <nil>}" ext=haproxy
interlock_1 | time="2016-04-06T13:10:36Z" level=warning msg="error signaling clients to resend; you will notice dropped packets: exit status 3" ext=haproxy
interlock_1 | time="2016-04-06T13:10:36Z" level=debug msg="resuming SYN packets" ext=haproxy
interlock_1 | time="2016-04-06T13:10:36Z" level=debug msg="&{/sbin/iptables [/sbin/iptables -I INPUT -p tcp --dport 80 --syn -j DROP] [] <nil> <nil> <nil> [] <nil> <nil> <nil> <nil> false [] [] [] [] <nil>}" ext=haproxy
interlock_1 | time="2016-04-06T13:10:36Z" level=warning msg="error signaling clients to resume; you will notice dropped packets: exit status 3" ext=haproxy
interlock_1 | time="2016-04-06T13:10:36Z" level=debug msg="triggering proxy network cleanup" ext=lb
interlock_1 | time="2016-04-06T13:10:36Z" level=info msg="reload duration: 32.68ms" ext=lb
interlock_1 | time="2016-04-06T13:10:36Z" level=debug msg="checking to remove proxy containers from networks" ext=lb
haproxy_1   | <7>haproxy-systemd-wrapper: executing /usr/local/sbin/haproxy -p /run/haproxy.pid -f /usr/local/etc/haproxy/haproxy.cfg -Ds
haproxy_1   | [ALERT] 096/131038 (7) : Could not open configuration file /usr/local/etc/haproxy/haproxy.cfg : No such file or directory
haproxy_1   | <5>haproxy-systemd-wrapper: exit, haproxy RC=256
vagrant_haproxy_1 exited with code 0
```
As you can see, it logs both generating and saving the proxy config. But if I exec into the interlock container and inspect the mounted volume, it remains empty. Any ideas?
Thanks!
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Comments: 27 (13 by maintainers)
Commits related to this issue
- Issue https://github.com/ehazlett/interlock/issues/114 demo setup — committed to cnadeau/swarmer by deleted user 8 years ago
- Merge pull request #114 from ehazlett/test-docker-updates minor fixup for tty in docker testing — committed to ehazlett/interlock by ehazlett 7 years ago
Here is a local setup to reproduce the original issue reported by @inf-rno:
https://github.com/cnadeau/swarmer/tree/interlock_issue_114_setup
I had to tweak the script to remove the master from the swarm worker nodes and expose non-TLS ports, to be closer to our own setup.
To create the full local swarm setup:

```
./deploy.sh
docker exec -it swarm-agent-n1/swarmer_interlock_1 sh
```
haproxy fails at first because the file doesn’t exist before Interlock generates it, but even after the log line saying it was saved, the file is never created inside the container.
PS: huge thanks to @everett-toews for the swarmer script
I have the same problem when using volumes and haproxy: Interlock does not generate haproxy.cfg.
@iiezhachenko FWIW, once https://github.com/ehazlett/interlock/pull/113 merges (and an image is published for it), you won’t need to build Interlock yourself just to customize the templates.