logspout: Broken Pipe when Sending TCP Data to Loggly
I am using logspout to stream Docker logs to Loggly over TCP, via the syslog+tcp://logs-01.loggly.com endpoint:
/usr/bin/docker run --name logspout \
-v=/var/run/docker.sock:/tmp/docker.sock \
-e SYSLOG_STRUCTURED_DATA=[LOGGLY_CUSTOMER_TOKEN]@41058 \
gliderlabs/logspout:master \
syslog+tcp://logs-01.loggly.com:514
Far too frequently, however, I see a broken pipe error, after which logspout stops streaming data. From the logspout logs:
2015/06/02 01:04:07 syslog: write tcp 54.236.68.122:514: broken pipe
I assume this occurs when the Loggly Route 53 ingestion endpoint changes IP addresses and the existing TCP connection is dropped.
Logspout should be updated to reestablish the connection whenever it is lost.
About this issue
- State: closed
- Created 9 years ago
- Reactions: 5
- Comments: 47 (9 by maintainers)
@schickling Yes, but the news is not good, unfortunately.
We decided to use Docker's syslog log driver to send the logs to the host, where we configured rsyslog to forward them on to Loggly.
We just deployed it today, will report back on how it works.
We use ECS, and our host rsyslog config file is at:
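(The config path itself was not included in the comment. For illustration only, a forwarding rule modeled on Loggly's documented rsyslog setup might look like the following; LOGGLY_CUSTOMER_TOKEN is a placeholder, not the commenter's actual file.)

# Illustrative rsyslog rules, following Loggly's documented syslog setup.
# "@@" means forward over TCP; a single "@" would be UDP.
$template LogglyFormat,"<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% [LOGGLY_CUSTOMER_TOKEN@41058] %msg%\n"
*.* @@logs-01.loggly.com:514;LogglyFormat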
After reading this thread, there appear to be multiple issues in play. The original issue, though, looks like it is caused by the CNAME or A record changing (the IP is in AWS and likely belongs to a front-end server or load balancer whose address rotates).
Some debugging that would help confirm this: resolve logs-01.loggly.com at the moment the error occurs and verify that the IP in the error message and the current DNS record are indeed different.
The fix would need to be a retry mechanism that resolves the name again and reconnects (perhaps with a max-retries option and a configurable interval between retries), along the lines of the sketch below.
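A minimal sketch of that retry idea in Go, assuming nothing about logspout's internals; writeWithReconnect, maxRetries, and retryInterval are illustrative names, not real logspout options:

package main

import (
	"fmt"
	"net"
	"time"
)

const (
	addr          = "logs-01.loggly.com:514" // name is re-resolved on every Dial
	maxRetries    = 5
	retryInterval = 2 * time.Second
)

// writeWithReconnect sends msg over conn, redialing on failure.
// net.Dial resolves the DNS name on each call, so a changed A record
// is picked up automatically.
func writeWithReconnect(conn net.Conn, msg []byte) (net.Conn, error) {
	if conn != nil {
		if _, err := conn.Write(msg); err == nil {
			return conn, nil
		}
		conn.Close() // stale connection, e.g. broken pipe
	}
	for i := 0; i < maxRetries; i++ {
		c, err := net.Dial("tcp", addr)
		if err != nil {
			time.Sleep(retryInterval)
			continue
		}
		if _, err := c.Write(msg); err != nil {
			c.Close()
			time.Sleep(retryInterval)
			continue
		}
		return c, nil
	}
	return nil, fmt.Errorf("gave up after %d retries", maxRetries)
}

func main() {
	conn, err := writeWithReconnect(nil, []byte("<14>1 - host app - - - hello\n"))
	if err != nil {
		fmt.Println(err)
		return
	}
	conn.Close()
}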
Suggestion / alternative to #150: instead of logging "broken pipe" and continuing execution, just exit (with whatever exit code makes sense). Docker and Kubernetes can restart the container automatically, and will keep a count of restarts.
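A minimal sketch of that fail-fast approach, again illustrative rather than logspout's actual write path:

package main

import (
	"log"
	"net"
	"os"
)

func main() {
	conn, err := net.Dial("tcp", "logs-01.loggly.com:514")
	if err != nil {
		log.Println("dial:", err)
		os.Exit(1) // non-zero exit lets the restart policy kick in
	}

	if _, err := conn.Write([]byte("<14>1 - host app - - - hello\n")); err != nil {
		log.Println("syslog write:", err) // e.g. "broken pipe"
		conn.Close()
		os.Exit(1)
	}
	conn.Close()
}

With docker run --restart=on-failure (or a Kubernetes restartPolicy), the container comes back up and the fresh dial re-resolves DNS, which also covers the IP-rotation case described above.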