moby: Cannot reference syslog-address by service name
As suggested, I am transferring this issue from https://github.com/docker/compose/issues/2657
When I test with a compose file like this one:
version: '2'
services:
  #
  # Application services
  #
  app:
    image: hello-world
    depends_on:
      - logstash
    links:
      - logstash
    logging:
      driver: syslog
      options:
        syslog-address: "tcp://logstash:5514"
  #
  # ELK services
  #
  elasticsearch:
    image: 'XXX/elasticsearch:latest'
    ports:
      - '9200:9200'
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
  kibana:
    image: 'XXX/kibana:latest'
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
    ports:
      - '5601:5601'
  logstash:
    image: 'XXX/logstash:latest'
    command: 'logstash -f /etc/logstash/conf.d/logstash.conf'
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
    ports:
      - '5000:5000'
      - '5514:5514'
volumes:
  elasticsearch-data:
    driver: local
I get the error 'ERROR: Failed to initialize logging driver: dial tcp: lookup logstash on 8.8.8.8:53: no such host'.
It seems that in the logging options of the app service, the logstash service cannot be referenced in syslog-address: "tcp://logstash:5514". Any idea of a workaround?
About this issue
- Original URL
- State: open
- Created 8 years ago
- Reactions: 37
- Comments: 23 (10 by maintainers)
You cannot use the container-container network for logging; logging is handled by the docker daemon, not by the container. The container sends logs to stdout/stderr, the docker daemon captures that output and passes it to the logging driver. The daemon's networking (and the logging driver) are not part of the container-container network, thus cannot send the logs there.

Here's a simple example of the issue when trying to use overlay networks, ELK, and the GELF logging driver on a Swarm.
Running it produces the error
I’ve confirmed that the ELK portion is working using this GELF Client. It’s the logging driver running on the host network stack as @mavenugo explained.
There are some solutions floating around that say to change
gelf-address: "udp://logstash:12201"
to
gelf-address: "udp://localhost:12201"
but that doesn't work with an overlay network. The error message does go away, but no logs are actually written.

Relevant versions
@gprivitera you can still use log drivers; however, sending logs to a container won't work if the container is only accessible over a docker network (as in the example @everett-toews showed). Logs are collected by the daemon. The daemon itself is not part of the docker network, so it cannot resolve or directly connect to a container unless the container is exposing/publishing a port. See https://github.com/docker/docker/issues/20370#issuecomment-185229591
Though this is an interesting use case, it is not easy to achieve (unless we deploy special tricks).
Reason: name resolution between containers (via container name, links, aliases) is handled by the embedded DNS server (or the expanded server) in the context of the logical network formed between the containers. These networks are in an isolated space managed and handled by the network driver. In some cases (such as the overlay driver), the isolated network space is not even directly accessible from the host network namespace. Since the docker daemon operates in the host network stack, it may not even be able to access a service exposed within an application's network stack. We could deploy special tricks, such as operating the daemon in the container's network stack, but that is trickier and more complex.
The simpler alternative is to do port-mapping and access the mapped service from the docker daemon on the host-networking stack.
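For the compose file at the top of this issue, that port-mapping workaround could look roughly like the sketch below (unverified; binding to 127.0.0.1 follows the suggestion in a later comment, and the host address is an assumption):

  logstash:
    image: 'XXX/logstash:latest'
    ports:
      # publish the syslog input on the host so the docker daemon can reach it
      - '127.0.0.1:5514:5514'
  app:
    image: hello-world
    logging:
      driver: syslog
      options:
        # point at the host-published port instead of the service name
        syslog-address: "tcp://127.0.0.1:5514"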
The solution was to publish ports as "127.0.0.1:12201:12201/udp" instead of only 12201:12201. My problem is solved. Now the only remaining problem is that conntrack caches the connection even after the receiving container dies.
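In compose syntax, that change might look roughly like this (a sketch; the GELF port and service names come from the earlier comments):

  logstash:
    ports:
      # bind the GELF UDP input to loopback on the host instead of all interfaces
      - "127.0.0.1:12201:12201/udp"
  app:
    logging:
      driver: gelf
      options:
        gelf-address: "udp://127.0.0.1:12201"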
On Docker CE for Windows it worked like
I have been running into the same issue trying to run a similar compose file, this time on swarm (with a dedicated network for logging). It felt like a good idea to leverage an overlay network to spawn ELK in the same compose file as my app (not so sure from a security point of view, though …)
I have the same problem. I am using fluentd, but this probably applies to syslogd and journald as well. Containers with the fluentd logging driver cannot connect to a fluentd container within the same stack in swarm mode.
As the fluentd architecture suggests, you should have a fluentd proxy for the containers of your application, which then "forwards" to a "global collector" running in front of elasticsearch and kibana. The recommended way, mapping the proxy to port 24224 of the host and letting all containers log to this host port, works, but it does not scale well.
If I want to run a stack on my dev machine, it requires either a preinstalled fluentd daemon or a free port 24224, which means I cannot run two stacks in parallel with the default config. Neither option is desirable because both introduce an external dependency into my docker deployment. Workarounds with other ports require additional configuration and administration for every environment/swarm this stack is deployed to.
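As a rough sketch of the workaround being described (service names and image are illustrative assumptions), each stack currently has to publish its fluentd proxy on a host port and point the logging driver at it:

  fluentd:
    image: fluent/fluentd:latest
    ports:
      # a second stack on the same host would need a different host port
      - "24224:24224"
  app:
    image: hello-world
    logging:
      driver: fluentd
      options:
        fluentd-address: "localhost:24224"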
I’d love to see this working as a minimal configuration:
A second stack in the swarm might be the log analysis stack, which picks up logs from all stacks running in the swarm.